TokenArch

The Governance Pattern — Before It Becomes a Standard

The controls that data engineers apply to pipelines — lineage tracking, semantic definitions, compliance gates, audit trails — are the same controls that agent frameworks like OpenClaw and NemoClaw now need for autonomous execution. This page demonstrates that pattern using finance as the example domain.

The Numerate Semantic Control Plane

A reference architecture for governing AI across enterprise platforms. One semantic layer defines business rules. Deterministic engines run the math. AI agents handle only what code can’t — and every inference call is tracked by cost. The scenarios below show how it works.

Disclaimer: All data, schemas, scenarios, and architectures shown here are entirely generic and synthetic. They reference common industry tools and patterns — not the proprietary systems, intellectual property, or internal processes of any current or former employer. Nothing on this site represents or is derived from any organization’s confidential information.
The Problem
AI tools produce answers that change between runs. For anything that requires precise numbers — revenue, compliance, risk — that's not a minor inconvenience. It's a structural mismatch between how AI works and what regulated environments require. The question is where to draw the line between what code handles and what AI adds.
The Insight
The controls data engineers already use — lineage, semantic definitions, compliance gates, audit trails — are the same controls that agent frameworks like OpenClaw and NemoClaw are now applying to autonomous execution. The pattern isn't new. The execution layer is moving up.
The Pattern
Source data flows through a governed pipeline. Business terms are defined once in a semantic layer. Deterministic engines run the math. AI handles only what code can't — narrative, interpretation, exception routing. Each layer carries context from the previous one. That's the same trust propagation that OWASP AOS and Galileo Agent Control are working to standardize for agent-native workflows. The tabs below walk through each layer.
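The division of labor described above can be sketched in a few lines of Python. Everything here is invented for illustration — the catalog, the `fact_gl_event` table, and the function names are not from any real system, and the agent call is stubbed:

```python
import sqlite3

# Hypothetical semantic layer: business concepts resolve to versioned SQL.
SEMANTIC_CATALOG = {
    "net_revenue": {
        "version": "2.1",
        "sql": "SELECT SUM(amount) FROM fact_gl_event WHERE account_class = 'REV'",
    }
}

def run_metric(conn, metric_id):
    """Deterministic layer: the engine runs catalog SQL; no model does arithmetic."""
    entry = SEMANTIC_CATALOG[metric_id]
    value = conn.execute(entry["sql"]).fetchone()[0]
    # The result carries its definition version forward -- that is the
    # trust propagation: each layer inherits context from the one below.
    return {"metric": metric_id, "version": entry["version"], "value": value}

def narrate(result):
    """Agent layer (stubbed): the LLM only phrases a result it was handed."""
    return f"{result['metric']} (def v{result['version']}) is {result['value']:,.2f}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_gl_event (amount REAL, account_class TEXT)")
conn.executemany("INSERT INTO fact_gl_event VALUES (?, ?)",
                 [(1200.0, "REV"), (800.0, "REV"), (-300.0, "EXP")])
print(narrate(run_metric(conn, "net_revenue")))
```

The point of the shape: the number comes out of SQL, the words come out of the model, and the lineage record ties the two together.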
NSCP Architecture Stack
5 layers
Layer 5 — Top
Agentic Layer
6 Specialized Agents+Inference Gateway+Token Economics+Budget Controller
LLMs handle only bounded tasks: intent classification, exception routing, narrative generation. Every call is designed to be tracked by the Inference Gateway.
Layer 4
Control Fabric
Preventive Controls+Detective Controls+SOX / HIPAA / PCI-DSS+Audit Trail
Designed so that no agent takes regulated action autonomously. Every material output is routed through compliance controls before reaching downstream systems.
Layer 3
Semantic Layer
Business Ontology (16 concepts)+Metric Engine (18+ metrics)+Calculation Rules (SQL/Python)
Vendor-neutral business definitions that abstract across source systems. “Revenue” means one thing here, regardless of which ERP, CRM, or billing system it originates from. Agents query concepts, not tables. All calculations are deterministic and versioned.
Layer 2
Data Fabric
Raw Zone · CDC · Kafka+Curated Zone · dbt · Quality+8 Governed Data Products
CDC-driven pipelines with automated quality checks. Data products are contract-defined, SLA-tracked, and lineage-mapped.
Layer 1 — Bottom
Source Systems
Generic ERP+Data Warehouse+LOB Systems+Excel/VBA/Python Scripts
Where raw transactional data lives — each system with its own schemas, naming conventions, and business logic. Multiple sources of truth, inconsistent definitions, manual handoffs.
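As a toy illustration of Layer 2's "contract-defined" data products, here is what a minimal contract gate might look like. The contract fields, thresholds, and column names below are invented, not taken from any real platform:

```python
from dataclasses import dataclass

# Illustrative data-product contract: required schema plus a quality threshold.
@dataclass
class Contract:
    name: str
    required_columns: set
    max_null_rate: float  # fraction of null keys tolerated

def validate(contract, rows):
    """Gate a batch against its contract before it becomes a governed product."""
    issues = []
    cols = set(rows[0]) if rows else set()
    missing = contract.required_columns - cols
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    nulls = sum(1 for r in rows if r.get("entity_id") is None)
    if rows and nulls / len(rows) > contract.max_null_rate:
        issues.append(f"null entity_id rate {nulls}/{len(rows)} exceeds threshold")
    return issues

gl_contract = Contract("fact_gl_event", {"entity_id", "amount", "posted_at"}, 0.01)
batch = [{"entity_id": 1, "amount": 50.0, "posted_at": "2025-01-31"},
         {"entity_id": None, "amount": 10.0, "posted_at": "2025-01-31"}]
print(validate(gl_contract, batch))  # one null key in two rows breaches the 1% cap
```

A failing batch never reaches the semantic layer, so agents upstream only ever see data that already passed its contract.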
What This Pattern Delivers
6 capabilities
Numeric Correctness
Agents never perform arithmetic. All calculations execute through deterministic SQL, Python, or rule engines with versioned, testable logic.
Full Auditability
Every KPI traces from presentation → metric definition → calculation rule → source data → control checks. PBC-ready evidence auto-assembled.
Regulatory Compliance
Preventive and detective controls mapped to data products and workflows. Adaptable to SOX, HIPAA, PCI-DSS, or any regulatory framework. No agent takes regulated action or approves its own output.
Token Cost Governance
Inference Gateway logs every LLM call. Budget Controller enforces limits. Model routing optimizes cost. Every AI dollar is allocated and auditable.
Semantic-Agentic Workflows
Agents interpret intent, plan steps, and compose narratives. Deterministic engines run the math. Humans approve material outputs.
Enterprise Integration
Connects ERP, warehouse, LOB systems, SIEMs, or any source system. The semantic layer abstracts sources — agents query concepts, not tables. Swap the domain, and the integration pattern holds.
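The Token Cost Governance capability above can be made concrete with a minimal sketch of an inference gateway plus budget controller. All class names, field names, and per-token prices here are hypothetical:

```python
import time

class BudgetExceeded(Exception):
    pass

class InferenceGateway:
    """Toy gateway: logs every LLM call and enforces a per-cycle budget."""
    def __init__(self, budget_usd, price_per_1k=(0.003, 0.015)):
        self.budget_usd = budget_usd
        self.price_in, self.price_out = price_per_1k  # assumed $/1K tokens in/out
        self.ledger = []  # stand-in for fact_token_usage

    def record(self, agent, tokens_in, tokens_out):
        cost = tokens_in / 1000 * self.price_in + tokens_out / 1000 * self.price_out
        # Preventive control: refuse the call before the budget is breached.
        if self.spent + cost > self.budget_usd:
            raise BudgetExceeded(f"{agent} would exceed ${self.budget_usd} budget")
        self.ledger.append({"ts": time.time(), "agent": agent,
                            "in": tokens_in, "out": tokens_out, "usd": cost})
        return cost

    @property
    def spent(self):
        return sum(e["usd"] for e in self.ledger)

gw = InferenceGateway(budget_usd=0.10)
gw.record("close_planner", 1200, 800)       # ~$0.0156 at the assumed prices
gw.record("variance_analysis", 3100, 1500)  # ~$0.0318
print(f"spent so far: ${gw.spent:.4f}, calls logged: {len(gw.ledger)}")
```

Because every call passes through `record`, the ledger doubles as the audit trail: each AI dollar is attributed to a named agent at a timestamp.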
Live Agent Feed
Simulated
These scenarios use finance workflows as concrete examples — period-end close, AP/AR, bank reconciliation. The underlying architecture pattern (semantic layer → deterministic computation → agent orchestration → compliance controls) applies equally to cybersecurity incident response, healthcare claims processing, supply chain compliance, or any domain where data integrity and auditability are non-negotiable.
Current State — Period-End Close · Pain Points
1
Manual GL Data Extract
Finance analyst logs into ERP, runs ad-hoc SQL query, exports to CSV. No automated scheduling. Data freshness: T+1 day.
Manual
2
Excel/VBA Processing
Analyst pastes data into shared workbook. 2,400-line VBA macro runs transformations. Fragile; breaks on data format changes.
Fragile
3
Email Distribution
Output emailed to 6 stakeholders as attachment. No version control. Recipient edits create conflicting versions.
No Audit Trail
4
Manual Variance Analysis
Senior analyst spends 2–3 hours comparing to prior period. Commentary written in email thread. Not linked to data.
~3 hrs
5
Manual Reconciliation
Cross-system reconciliation done manually in Excel. Breaks require escalation emails. No automated exception tracking.
Error-Prone
6
Journal Entry Posting
Preparer drafts journal in Word, emails to approver. Manual entry into ERP. SOX evidence gathered manually post-hoc.
SOX Risk
Total cycle time: ~4 hours per close · Error rate: ~3.2% requiring rework · Audit evidence: Manually assembled
Target State — NSCP Workflow · Period-End Close · Token Economics
1
Close Planner Agent — Interprets Schedule
Agent reads close calendar, sequences tasks by dependency, assigns agents and SLAs.
1.2K in / 0.8K out · ~$0.006 · Claude Sonnet 4.6
Agent
2
Semantic Layer — Metric Definition Lookup
Deterministic resolution of metric IDs to SQL definitions from the metric catalog. No inference required.
0 tokens (deterministic)
Deterministic
3
SQL Execution Against GL Event Store
Compiled SQL runs against fact_gl_event. Results verified against pre-defined control totals. Audit log written.
0 tokens (deterministic)
Deterministic
4
Variance Analysis Agent — Explains Movements
Receives structured variance data. Generates natural-language explanations grounded in semantic-layer definitions.
3.1K in / 1.5K out · ~$0.014 · Claude Sonnet 4.6
Agent
5
Control Validation — Automated Rules Engine
16 SOX controls evaluated deterministically. Pass/fail written to fact_control_execution.
0 tokens (deterministic)
Controls
6
Exception Triage Agent — Reviews Flags
Classifies 3 flagged exceptions by severity. Routes 2 to auto-resolve, 1 escalated to Controller.
2.8K in / 1.2K out · ~$0.012 · GPT-5.4
Agent
7
Compliance Agent — Drafts Journal Memo
Generates SOX-compliant evidence package: journal entries with rationale, control results, approver routing.
1.9K in / 2.1K out · ~$0.018 · Claude Sonnet 4.6
Agent
Cumulative — Period-End Close
~14.6K tokens · ~$0.050 per cycle
vs. Human Labor: ~4 hours @ $85/hr = $340.00
Marginal token cost: ~$0.050 per cycle (infrastructure, engineering, and operational costs not included)
Note: Token inference cost only. Total cost of ownership includes platform infrastructure (cloud compute, data pipelines, storage), engineering effort to build and maintain, and ongoing operational overhead. This comparison illustrates the marginal cost of AI inference vs. equivalent human labor — not full project ROI.
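Step 5's deterministic control evaluation can be sketched as follows. The two controls, journal fields, and identifiers are invented examples, not real SOX control definitions:

```python
# Illustrative deterministic control checks; results would land in
# fact_control_execution exactly as computed -- no model in the loop.
def ctrl_balanced_journal(journal):
    """Debits must equal credits to the cent."""
    return round(sum(l["debit"] - l["credit"] for l in journal["lines"]), 2) == 0.0

def ctrl_preparer_not_approver(journal):
    """Segregation of duties: no self-approval."""
    return journal["prepared_by"] != journal["approved_by"]

CONTROLS = {"SOX-01": ctrl_balanced_journal, "SOX-02": ctrl_preparer_not_approver}

def evaluate(journal):
    """Run every registered control and emit one pass/fail row per control."""
    return [{"control_id": cid, "entity": journal["id"],
             "result": "PASS" if fn(journal) else "FAIL"}
            for cid, fn in CONTROLS.items()]

je = {"id": "JE-1042", "prepared_by": "analyst_a", "approved_by": "controller_b",
      "lines": [{"debit": 500.0, "credit": 0.0}, {"debit": 0.0, "credit": 500.0}]}
for row in evaluate(je):
    print(row)
```

Because the checks are plain functions over structured data, they cost zero tokens and produce identical results on every run — which is the property the audit trail depends on.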
Current State — AP/AR Cash Cycle · Pain Points
1
Invoice Receipt & Manual Entry
AP staff manually keys invoice data into ERP. OCR tools used inconsistently. Duplicate entry risk.
Manual
2
3-Way Match — Manual
PO, receipt, and invoice matched manually in spreadsheet. 8% exception rate requires supervisor review.
8% Exceptions
3
Aging Report — Emailed Weekly
AR aging exported from ERP, formatted in Excel, emailed to collections team. Data is 24h stale.
Stale Data
4
Cash Application — Manual
Analyst matches bank deposits to AR invoices. Partial payments split manually. High error rate on short payments.
Error-Prone
Target State — NSCP · AP/AR Cash Cycle · Token Economics
1
Automated Invoice Ingestion (OCR + Rules)
Structured extraction from PDF invoices. Rules-based field validation against vendor master.
0 tokens (deterministic OCR pipeline)
Deterministic
2
Automated 3-Way Match
SQL join across fact_ap_invoice, PO table, GR/GI records. Automated matching with threshold tolerances.
0 tokens (deterministic)
Deterministic
3
Exception Triage Agent — AP Exceptions
Reviews unmatched invoices. Classifies root cause. Suggests resolution action.
2.1K in / 1.0K out · ~$0.009 · GPT-5 Mini
Agent
4
Automated Cash Application
Bank feed matched to AR invoices using multi-criteria scoring. Auto-applied for 94% of payments.
0 tokens (deterministic)
Deterministic
5
AR Analyst Agent — Aging Commentary
Generates structured aging commentary, identifies at-risk accounts, recommends collection actions.
1.8K in / 0.9K out · ~$0.007 · Claude Haiku 4.5
Agent
Cumulative — AP/AR Daily Cycle
~5.8K tokens · $0.016 per run
vs. Human Labor: ~2 hours @ $75/hr = $150.00
Marginal token cost: $0.016 per run (infrastructure, engineering, and operational costs not included)
Note: Token inference cost only — excludes platform infrastructure, engineering, and maintenance overhead.
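A minimal sketch of the automated 3-way match in step 2, assuming a flat 2% price tolerance. All field names and the tolerance value are illustrative:

```python
# Toy 3-way match: invoice vs purchase order vs goods receipt.
TOLERANCE = 0.02  # assumed 2% price tolerance

def three_way_match(invoice, po, receipt):
    """Deterministic match; anything failing here routes to the triage agent."""
    reasons = []
    if invoice["po_number"] != po["po_number"]:
        reasons.append("PO number mismatch")
    if receipt["qty_received"] < invoice["qty"]:
        reasons.append("billed more than received")
    if abs(invoice["unit_price"] - po["unit_price"]) > TOLERANCE * po["unit_price"]:
        reasons.append("price outside tolerance")
    return ("MATCHED", []) if not reasons else ("EXCEPTION", reasons)

inv = {"po_number": "PO-77", "qty": 10, "unit_price": 101.5}
po = {"po_number": "PO-77", "unit_price": 100.0}
gr = {"qty_received": 10}
print(three_way_match(inv, po, gr))  # 1.5% price gap is within the 2% tolerance
```

The agent never decides whether an invoice matches — it only explains and routes the exceptions the deterministic match has already flagged.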
Current State — Bank Reconciliation · Pain Points
1
Bank Statement Download
Analyst logs into online banking, downloads MT940 or CSV statement. Reformatted in Excel for import.
Manual
2
GL Extract & Comparison
GL cash balance extracted separately. VLOOKUP-based matching across two worksheets. ~15% unmatched on first pass.
15% Unmatched
3
Break Investigation
Analyst researches each break individually. Timing differences documented manually. Escalations via email.
~2hr Investigation
Target State — NSCP · Bank Reconciliation · Token Economics
1
Automated Bank Feed Ingestion
Direct bank API integration. MT940 parser normalizes transactions into fact_recon_match staging.
0 tokens (deterministic)
Deterministic
2
Automated Transaction Matching
Multi-pass SQL matching: exact → amount/date tolerance → fuzzy reference. Auto-clears 97.3%.
0 tokens (deterministic)
Deterministic
3
Reconciliation Agent — Break Analysis
Analyzes remaining 2.7% unmatched. Cross-references payment descriptions, vendor history. Proposes resolutions.
2.4K in / 1.1K out · ~$0.010 · Claude Sonnet 4.6
Agent
4
Compliance Agent — Recon Sign-Off Package
Compiles reconciliation evidence: matched summary, break notes, approver routing.
1.5K in / 1.8K out · ~$0.011 · Claude Sonnet 4.6
Agent
Cumulative — Daily Bank Recon
~6.8K tokens · $0.021 per day
vs. Human Labor: ~2.5 hours @ $80/hr = $200.00
Marginal token cost: $0.021 per day (infrastructure, engineering, and operational costs not included)
Note: Token inference cost only — excludes platform infrastructure, engineering, and maintenance overhead.
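The multi-pass matching in step 2 (exact → amount/date tolerance → fuzzy reference) can be sketched like this; the predicates, tolerances, and sample records are all invented:

```python
from datetime import date

def match_pass(bank, gl, predicate):
    """Pair remaining bank lines with remaining GL lines under one predicate."""
    pairs = []
    for b in bank[:]:               # iterate over copies so we can
        for g in gl[:]:             # remove matched lines in place
            if predicate(b, g):
                pairs.append((b["id"], g["id"]))
                bank.remove(b)
                gl.remove(g)
                break
    return pairs

# Three passes, strictest first -- only survivors reach the next pass.
exact = lambda b, g: b["amount"] == g["amount"] and b["date"] == g["date"]
tol = lambda b, g: (abs(b["amount"] - g["amount"]) <= 0.05
                    and abs((b["date"] - g["date"]).days) <= 2)
fuzzy = lambda b, g: g["ref"] and g["ref"] in b["memo"]

bank = [{"id": "B1", "amount": 500.0, "date": date(2025, 1, 31), "memo": "WIRE 500"},
        {"id": "B2", "amount": 249.98, "date": date(2025, 2, 1), "memo": "ACH INV-88"}]
gl = [{"id": "G1", "amount": 500.0, "date": date(2025, 1, 31), "ref": ""},
      {"id": "G2", "amount": 250.0, "date": date(2025, 1, 31), "ref": "INV-88"}]

matched = []
for rule in (exact, tol, fuzzy):
    matched += match_pass(bank, gl, rule)
print(matched, "unmatched:", [b["id"] for b in bank])
```

Whatever survives all three passes is the residual the Reconciliation Agent analyzes — the agent sees only the breaks, never the cleared population.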
8 Sample Data Products
GL Event Store
fact_gl_event
AP Invoice Mart
fact_ap_invoice · dim_vendor
AR Invoice Mart
fact_ar_invoice · dim_customer
Recon Match Log
fact_recon_match
Asset Register
fact_asset · dim_asset_class
Control Execution Log
fact_control_execution
Agent Action Log
fact_agent_action
Token Usage Telemetry
fact_token_usage
Business Ontology — 16 Unified Concepts Abstracted Across Source Systems
Metric Catalog
Metric ID · Name · Category · Formula / SQL · Data Products · Target
Regulatory Controls & Token Governance
ID · Control Name · Type · Frequency · Owner · Data Products · Status
Interactive Demonstration

SQL Verification Demo

Everything described above — the governed pipeline, deterministic SQL, control fabric, inference gateway, token economics — running live in your browser against real Federal Reserve data. Same question, two architectures, side by side. Then you verify the math yourself.

Live SQL against 1,492 FRED observations
Governed pipeline with inference gate
5 real LLM failure patterns exposed
Token economics at scale
Launch the Demo

Opens in a new tab. All data is from the Federal Reserve Economic Data (FRED) API. No real client data. No API keys required.

Questions, feedback, or ideas about these architectures? Reach out.

human@tokenarch.com