The platform where AI governance and AI engineering are the same workflow — from problem formulation to post-market monitoring.
Articles 5–111, Annexes I–XIII
Article 6 + Annex III
Technical documentation, auto-generated
Fundamental Rights Impact Assessment
Tamper-evident post-market monitoring
Zero-trust, tenant-isolated
The thesis
Today, AI governance sits outside the engineering workflow. Platforms inventory systems after they are built. Risk is assessed after models are trained. Compliance evidence is reconstructed after decisions have been made. Two teams, two timelines, two sets of tooling — connected by spreadsheets, meetings, and manual evidence collection. This is where AI failures originate.
Standard Intelligence collapses governance and engineering into a single workflow. The data scientist who profiles a dataset produces a Dataset Factsheet as a side effect. The engineer who runs an evaluation produces a Fairness Position Statement as a side effect. The team that promotes a model through the CI/CD gate produces an Article 11 model card as a side effect. No separate compliance workstream. No reconstructed evidence. The governed development process is the compliance evidence.
The shift, in three voices
For engineers
Governance happens inside your pipeline, not next to it. The CI/CD gate replaces multi-week compliance reviews with an automated check that runs in your deployment pipeline — artefact completeness, evaluation thresholds, fairness compliance, training verification, in seconds. Pass and ship. Fail and get a structured error report telling you exactly what's missing.
Built around
the governed CI/CD gate, the data profiling SDK, and the compliance-aware evaluation harness — all running locally on your infrastructure.
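The gate check described above can be pictured in a few lines. Everything here — `GateReport`, `check_release`, the artefact names, the threshold values — is an illustrative sketch, not the platform's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class GateReport:
    """Structured result: pass/ship, or a report of exactly what is missing."""
    passed: bool
    missing_artefacts: list = field(default_factory=list)
    failed_thresholds: dict = field(default_factory=dict)

# Hypothetical gate policy: required artefacts and evaluation thresholds.
REQUIRED_ARTEFACTS = {"dataset_factsheet", "fairness_statement", "model_card"}
THRESHOLDS = {"accuracy": 0.90, "demographic_parity_gap": 0.05}

def check_release(artefacts: set, metrics: dict) -> GateReport:
    report = GateReport(passed=True)
    # Artefact completeness: every governance artefact must be present.
    report.missing_artefacts = sorted(REQUIRED_ARTEFACTS - artefacts)
    # Evaluation thresholds: accuracy has a floor, the fairness gap a ceiling.
    if metrics.get("accuracy", 0.0) < THRESHOLDS["accuracy"]:
        report.failed_thresholds["accuracy"] = metrics.get("accuracy")
    if metrics.get("demographic_parity_gap", 1.0) > THRESHOLDS["demographic_parity_gap"]:
        report.failed_thresholds["demographic_parity_gap"] = metrics.get("demographic_parity_gap")
    report.passed = not report.missing_artefacts and not report.failed_thresholds
    return report
```

The point of the sketch: the check is deterministic and fast, so it can sit in a deployment pipeline instead of a review meeting.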
For compliance & legal
Documentation is contemporaneous and machine-verifiable, not reconstructed. Every governance artefact is generated by the engineering action that produced the decision it documents — linked to the article, the sub-provision, and the atomic question it satisfies. Article-level visibility replaces quarterly status meetings and email chases.
Built around
the regulatory knowledge base, the AI Regulatory Navigator, and continuous regulatory change intelligence across 81 instruments.
For executives
Speed and safety are the same thing on this platform. The rework cycle that adds 30–60% to every AI initiative timeline is eliminated, because compliance is engineered in, not bolted on. Portfolio-level visibility shows time to deployment, compliance coverage, open governance actions, and monitoring health across every AI system in your organisation — on one dashboard.
Built around
the portfolio dashboard, the CI/CD gate as a single point of control, and regulatory change impact reports.
The lifecycle
Standard Intelligence governs the AI lifecycle as five stages, each gated, each producing its own regulatory artefact as a side effect of the engineering work.
Frame the system. Inherit the regulations it's bound to.
Profile datasets locally. Raw data never leaves your infrastructure.
Compliance-aware harness runs on your data, in your environment.
CI/CD gate runs in seconds. Pass and ship, or fail with a report.
Attested interaction logs. Real-time drift and guardrail monitoring.
Depth as the moat
Standard Intelligence operates at sub-provision granularity. No other AI governance platform comes close.
Atomic compliance questions
Decomposed from the EU AI Act, GDPR, DORA, and adjacent instruments to the level of individual sub-provisions.
Regulatory instruments
Tracked at the article level. Every amendment, delegated act, and guidance update propagates automatically.
AI system archetypes
Each profiled against every applicable instrument — every system inherits its complete regulatory profile in seconds.
Applicability determinations
The cross product. A real answer instead of a research assignment.
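The cross product is easy to picture in code: every system archetype is checked against every instrument's applicability rule. A hypothetical sketch — the archetypes, attributes, and rules below are invented for illustration:

```python
# Invented example archetypes with the attributes applicability turns on.
ARCHETYPES = {
    "clinical-triage": {"high_risk": True, "processes_personal_data": True},
    "internal-chatbot": {"high_risk": False, "processes_personal_data": False},
}

# Invented example instruments, each with a predicate over system attributes.
INSTRUMENTS = {
    "EU AI Act Annex III": lambda a: a["high_risk"],
    "GDPR": lambda a: a["processes_personal_data"],
}

def applicability(archetypes: dict, instruments: dict) -> dict:
    """The cross product: {archetype: [instruments that apply to it]}."""
    return {
        name: [inst for inst, applies in instruments.items() if applies(attrs)]
        for name, attrs in archetypes.items()
    }
```

Evaluated over the real corpus rather than two toy rules, the same loop is what turns "which regulations apply to us?" from a research assignment into a lookup.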
Three customer archetypes
Standard Intelligence is in early-access deployment with three organisations covering the three most demanding regulatory profiles in AI today: autonomous decision-making at scale, regulated medical devices, and general-purpose AI provided as a product.
Autonomous commerce & payments
The regulatory situation
Autonomous AI systems making consequential financial decisions at scale. Sits at the frontier of EU AI Act enforcement — the regulatory profile that nobody else in the market can serve, because it requires governance designed for systems that act on their own behalf.
How they use it
Governing autonomous decision agents end to end — from problem formulation through tamper-evident post-market monitoring of every agent action.
Clinical AI / medical device
The regulatory situation
High-risk under EU AI Act Annex III and concurrently regulated under the Medical Device Regulation. Conformity assessment, technical documentation, post-market vigilance, and clinical safety oversight are all non-negotiable. Evidence must be defensible to a regulator and a notified body simultaneously.
How they use it
Generating Article 11 technical documentation as a byproduct of model development; running the FRIA module against every release; producing a single audit-ready evidence chain that satisfies AI Act and MDR obligations from one workflow.
General-purpose AI ISV
The regulatory situation
GPAI provider obligations under the EU AI Act (in force from 2 August 2025), plus downstream documentation duties to every customer who deploys the product into a high-risk use case. The hardest part: the obligations propagate down a chain the provider does not control.
How they use it
Producing transparent model documentation, downstream-deployer guidance, and continuously updated regulatory change tracking across the 81 instruments their customers operate under.
Three customers. Three regulatory profiles. One platform.
A worked example
One regulatory provision. Five steps. Every artefact generated by the engineering action that produced the decision it documents.
Step 1
Article 10(2)(f), EU AI Act
“Training, validation and testing data sets shall be examined in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights, or lead to discrimination prohibited under Union law.”
Standard Intelligence has decomposed this single sub-provision into multiple atomic compliance questions — each independently testable, each mapped to a column in our database schema, each answerable with engineering evidence.
Step 2
One of those atomic questions:
Has the training dataset been profiled for proxy variables that correlate with protected characteristics under EU non-discrimination law?
Response type: structured (yes / no / not-applicable + evidence reference)
Conditional logic: required if system risk classification = high-risk and training data includes personal data
Database column: dataset_profile.proxy_audit_status
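A record like the one above can be represented as a plain data structure: provision, question text, response type, destination column, and the conditional logic as a predicate. The class and field names below are an illustrative sketch, not the platform's schema:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AtomicQuestion:
    provision: str                        # e.g. "Article 10(2)(f)"
    text: str                             # the atomic compliance question
    response_type: str                    # structured response format
    db_column: str                        # where the answer lands in the schema
    required_if: Callable[[dict], bool]   # conditional logic over system attributes

proxy_audit = AtomicQuestion(
    provision="Article 10(2)(f)",
    text="Has the training dataset been profiled for proxy variables that "
         "correlate with protected characteristics?",
    response_type="yes / no / not-applicable + evidence reference",
    db_column="dataset_profile.proxy_audit_status",
    # Required iff the system is high-risk and trains on personal data.
    required_if=lambda s: s["risk"] == "high-risk" and s["personal_data"],
)
```

Because the question is a first-class record with a predicate, the platform can compute which questions apply to which systems rather than asking a human to decide.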
Step 3
The data scientist runs the profiling SDK on their dataset. Locally. On their infrastructure. Raw data never leaves the customer environment.
The SDK runs the proxy-discrimination audit, computes representativeness statistics, and writes its findings to the platform — no manual upload, no spreadsheet, no PII over the wire.
# illustrative
from standard_intelligence import profile
profile(
dataset="./data/training_v3.parquet",
system="clinical-triage-v2",
checks=["proxy-discrimination", "representativeness"],
)
Step 4
A Dataset Factsheet is generated as a side effect of the profiling run — the proxy-discrimination audit findings, the representativeness statistics, and the evidence reference the atomic question requires.
Nobody assembled it. The engineer ran one command.
Step 5
The Dataset Factsheet is automatically linked to the atomic question, to Article 10(2)(f), and to the system clinical-triage-v2 it documents.
The compliance team sees it in the dashboard the moment it lands — with the engineering evidence, the article it satisfies, and the audit trail. No status meeting. No email chase. No reconstruction.
One engineering action. One regulatory question answered. One audit-ready artefact. In real time. Multiply by 2,597. That is the platform.
The platform modules
Nothing is generic compliance theatre. Each screen, each question, each output traces back to an Article, Annex, or Recital.
Automated EU AI Act risk classification against Article 6, Annex III, and the prohibited practices catalogue.
Implements
Articles 5, 6; Annex III
Section-by-section questionnaires mapped to Annex IV and XI requirements, with branching logic, collaborative routing, and evidence attachment.
Implements
Annex IV, Annex XI
Real-time compliance scoring with heat-map visualisation and prioritised remediation guidance.
Implements
Article 9 (risk management)
Conversational regulatory Q&A with two-stage citation validation. Grounded against the full corpus, not the public web.
Implements
All 81 instruments
Dynamically composed evaluation specifications drawn from the knowledge graph for the specific system being tested. Runs locally, on your data.
Implements
Article 15 (accuracy, robustness, cybersecurity)
Replaces multi-week governance reviews with an automated check in your deployment pipeline. Pass and ship; fail and get a structured error report.
Implements
Articles 11, 16, 17
Local dataset profiling that produces governance-ready documentation without sending raw data off-premises. Powers the worked example above.
Implements
Article 10 (data governance)
Fundamental Rights Impact Assessment structured across the eight categories of Article 27. Audit-ready output.
Implements
Article 27
Multi-stage approval workflows with digital signatures and manifest-verified output for conformity assessment submission.
Implements
Articles 43, 47, 48
Article 72 monitoring plan management with external metrics API, drift detection, and cryptographically attested interaction logs (patent pending GB2604505.4).
Implements
Article 72
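One standard way to make interaction logs tamper-evident is a hash chain: each entry commits to the digest of the entry before it, so any retroactive edit invalidates every later entry. A minimal sketch of the general technique — not the patented mechanism referenced above:

```python
import hashlib
import json

GENESIS = "0" * 64  # digest the first entry chains from

def append_entry(chain: list, record: dict) -> list:
    """Append a record whose digest covers the previous entry's digest."""
    prev = chain[-1]["digest"] if chain else GENESIS
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"record": record, "digest": digest})
    return chain

def verify(chain: list) -> bool:
    """Recompute every digest; any edited record breaks the chain."""
    prev = GENESIS
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True
```

The property that matters for post-market monitoring: an auditor can verify the whole chain offline, and a deployer cannot quietly rewrite history after an incident.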
Continuous monitoring of all 81 instruments. When a change lands, the knowledge graph propagates the impact to the specific systems and artefacts affected — a structured report, not a research assignment.
Implements
All 81 instruments
Org-level view of time to deployment, compliance coverage, open governance actions, and monitoring health across every AI system. The single screen for board reporting.
Implements
All of the above, rolled up
Enforcement timeline
EU AI Act enforcement is phased and final. Each date below has a meaning for engineering, compliance, and the board. None of them can be negotiated.
Until high-risk obligations enforce — live countdown (days, hours, minutes, seconds)
Every high-risk AI system on the EU market must satisfy Article 9 risk management, Article 10 data governance, Article 11 technical documentation, Article 14 human oversight, Article 15 accuracy and robustness, and Article 43 conformity assessment. This is the date the countdown above tracks.
AI systems already deployed before August 2026 must meet the full requirements. There is no grandfathering. Every system, regardless of deployment date.
What this means for you
For engineers
Risk management and technical documentation must be in your pipeline. Not a separate workstream — in your pipeline. Standard Intelligence is the only platform that puts the gate in CI/CD.
See it on a real article
For compliance and legal
Reconstructed evidence will not survive a notified body inspection. Standard Intelligence generates the chain as a byproduct of engineering — there is nothing to reconstruct.
See the corpus depth
For executives
By 2 August 2026 every high-risk AI system needs a complete Article 11 dossier and a live compliance answer. You cannot produce that from a quarterly report. Standard Intelligence shows you the answer on one screen.
Get the portfolio brief
Three doors in
Standard Intelligence is in Early Access Preview. Three doors in. The same platform behind every one.
For engineers
The SDK design, the CI/CD gate, the harness, and the integration story for your existing MLOps pipeline. One PDF. One 20-minute call if you want it.
Get the engineering brief
For compliance and legal
Walk through a real article on a real system with our regulatory team. You pick the provision; we show you the question chain, the evidence model, and the dashboard view. Thirty minutes.
Walk through an article
For executives
Time-to-deployment, risk exposure, regulatory coverage, and the procurement case. One page for your CFO. Ten minutes for your audit committee.
Get the portfolio brief