EU AI Act compliance is an engineering problem. We built the engineering solution.
The EU AI Act requires runtime transparency and human oversight for high-risk AI systems — not in a policy document, but in the production architecture. ADJUDON provides the technical infrastructure to satisfy Article 13 and Article 14 requirements at the code level. No documentation theater. Runtime proof.
The Act demands infrastructure your stack doesn't have.
Most engineering teams discover the EU AI Act's requirements when legal sends a compliance questionnaire. The questions are specific: Can you show the confidence level for every individual decision? Can you prove no log entry was altered? Can you demonstrate that a human reviewed the flagged cases before execution? These are not policy questions. They are architecture questions — and the answer, for most AI deployments, is no.
The gap is not awareness. Teams know the Act is coming. The gap is infrastructure. Compliance requires a runtime evaluation layer that scores decisions independently, enforces policies deterministically, logs everything immutably, and routes uncertain decisions to humans automatically. Building this from scratch takes months. The enforcement deadline does not wait.
Article 13: Every decision must be traceable, explainable, and verifiable.
Article 13 requires providers of high-risk AI systems to ensure transparency — not in the abstract, but for every individual output. Recital 47 specifies that transparency includes the ability to explain individual outputs and the logic behind them. Here is what the Act demands and how ADJUDON satisfies it.
Requirement: Traceable decision records
Every AI system output must be logged with sufficient context to reconstruct the decision process.
Every trace is logged with full context — input, output, CPI score (3-pillar breakdown), policy matches, human overrides, timestamp, agent ID, IP address. Append-only storage. No reconstruction needed — the record is complete at write time.
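A record with that context might look like the following sketch. Field names and values here are illustrative only, not ADJUDON's actual schema:

```python
# Illustrative shape of an append-only decision trace.
# Every field name below is hypothetical; it mirrors the context listed above.
trace = {
    "trace_id": "tr_0001",
    "timestamp": "2026-08-02T09:15:00Z",
    "agent_id": "credit-scoring-agent",
    "ip_address": "10.0.0.12",
    "input": {"applicant_income": 52000, "requested_amount": 15000},
    "output": {"decision": "approve", "rate": 0.069},
    "cpi": {
        "score": 0.91,
        "pillars": {
            "base_probability": 0.94,
            "entropy": 0.88,
            "vector_similarity": 0.90,
        },
    },
    "policy_matches": [],
    "human_override": None,
}

# Because storage is append-only, the record is complete at write time:
# nothing needs to be reconstructed after the fact.
required = {"trace_id", "timestamp", "agent_id", "input", "output", "cpi"}
assert required <= trace.keys()
```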
Requirement: Independently assessable confidence
The system must provide a meaningful signal about its own uncertainty, not just the model's self-report.
The Confidence Engine produces a CPI score from three independent sources (base probability, entropy, vector similarity). The score is decomposable — an auditor can see exactly which signal contributed what. This is not the model's self-reported confidence.
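As a mental model, a decomposable composite score can be sketched as a weighted blend of independent signals. The weights and formula below are assumptions for illustration, not ADJUDON's actual CPI computation:

```python
def cpi(base_probability: float, entropy_signal: float, vector_similarity: float,
        weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> dict:
    """Blend three independent confidence signals into one score while
    keeping each pillar's contribution visible for auditors.

    The weights are hypothetical; the point is decomposability:
    score == sum of per-pillar contributions, so an auditor can see
    exactly which signal contributed what.
    """
    pillars = {
        "base_probability": base_probability,
        "entropy": entropy_signal,
        "vector_similarity": vector_similarity,
    }
    contributions = {
        name: weight * value
        for (name, value), weight in zip(pillars.items(), weights)
    }
    return {"score": sum(contributions.values()), "contributions": contributions}

result = cpi(0.90, 0.80, 0.70)
```

The score is independent of the model's self-report because the entropy and similarity pillars are computed outside the model's own output channel.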
Requirement: Tamper-evident logging
Decision records must be protected against alteration.
SHA-256 hash chain on every audit log entry. Each entry's hash incorporates the previous entry's hash. One-click integrity verification via GET /api/audit/verify. If any entry is altered, the chain breaks and the tampering is immediately detectable.
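The integrity property can be demonstrated with a minimal hash chain in Python. This is a sketch of the general technique, not ADJUDON's wire format:

```python
import hashlib
import json

def entry_hash(payload: dict, prev_hash: str) -> str:
    # Each entry's hash covers its own payload plus the previous entry's hash.
    material = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(material.encode()).hexdigest()

def verify_chain(entries: list[dict]) -> bool:
    prev = "0" * 64  # genesis value
    for entry in entries:
        if entry["hash"] != entry_hash(entry["payload"], prev):
            return False  # chain broken: tampering detected
        prev = entry["hash"]
    return True

# Build a two-entry log, then tamper with the first payload.
log = []
prev = "0" * 64
for payload in ({"decision": "approve"}, {"decision": "reject"}):
    h = entry_hash(payload, prev)
    log.append({"payload": payload, "hash": h})
    prev = h

assert verify_chain(log)
log[0]["payload"]["decision"] = "reject"   # alteration
assert not verify_chain(log)               # immediately detectable
```

Because each hash incorporates its predecessor, altering any historical entry invalidates every subsequent hash, which is what makes the one-click verification meaningful.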
Requirement: On-demand export for regulators
Records must be available for regulatory review.
CSV and PDF export via API. Filterable by date range, agent, status, and policy match. Designed for BaFin submissions, GDPR DPA documentation, and external audit requests. One click from the dashboard.
Article 14: Humans must be able to intervene — and the system must enforce it.
Article 14 does not just require that a human could intervene. It requires that the system be designed so that humans actually do intervene when the risk profile warrants it. The system must detect when intervention is appropriate, enforce the pause, and give the reviewer everything they need.
Requirement: Automatic detection of uncertain decisions
The system must identify when confidence is low or the situation is novel.
The Confidence Engine flags decisions where the CPI score falls below your configured threshold. The Policy Engine routes these automatically to the human review queue. No manual triage. No missed cases.
Requirement: Enforced pause before execution
When oversight is required, the system must hold the decision.
The API returns 202 (flagged) and the trace enters the review queue. Your application receives the 202 and holds the action. The decision does not execute until a human resolves it. This is architectural enforcement — not a notification the team might miss.
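On the client side, the enforcement pattern is simple: treat anything other than an approval status as a hold. A minimal sketch, assuming 200 signals a passed evaluation (the 202 behaviour is as described above; the function name and return values are hypothetical):

```python
def handle_evaluation(status_code: int, trace_id: str) -> str:
    """Decide whether to execute an action based on the evaluation response.

    200 -> decision passed evaluation; safe to execute (assumed).
    202 -> flagged: the trace is in the review queue; hold the action.
    """
    if status_code == 200:
        return f"execute:{trace_id}"
    if status_code == 202:
        # Architectural enforcement: nothing runs until a human resolves it.
        return f"hold:{trace_id}"
    raise ValueError(f"unexpected evaluation status {status_code}")
```

The key design point is that the hold is the default failure mode: an application that only executes on an explicit 200 cannot accidentally run a flagged decision.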
Requirement: Full context for the reviewer
The human must have enough information to make an informed decision.
The review queue surfaces the complete decision context: input, output, CPI score with pillar breakdown, the policy that triggered the flag, and the agent's history on similar decisions. Reviewers approve, reject, or escalate with one click. Every action is identity-logged with rationale and timestamp.
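The identity-logged review action might take a shape like the following sketch (function name and record fields are hypothetical, mirroring the description above):

```python
from datetime import datetime, timezone

def resolve(trace_id: str, reviewer: str, action: str, rationale: str) -> dict:
    """Record a reviewer's resolution of a flagged decision.

    Every resolution carries the reviewer's identity, the action taken,
    a written rationale, and a timestamp, so oversight itself is auditable.
    """
    if action not in {"approve", "reject", "escalate"}:
        raise ValueError(f"unknown review action: {action}")
    return {
        "trace_id": trace_id,
        "reviewer": reviewer,
        "action": action,
        "rationale": rationale,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = resolve("tr_0001", "alice@example.com", "approve",
                 "Output consistent with policy; applicant income verified.")
```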
The deadline is not abstract. It is August 2026.
August 2, 2025
Governance rules and obligations for general-purpose AI models take effect. (Prohibitions on unacceptable-risk AI practices have already applied since February 2, 2025.)
August 2, 2026
Obligations for high-risk AI systems become enforceable. This is the deadline for Article 13 (transparency) and Article 14 (human oversight).
Penalties
Up to €35 million or 7% of global annual turnover for prohibited practices. Up to €15 million or 3% of global annual turnover for other violations. Enforcement by national authorities (in Germany: BfDI / BNetzA).
Who is affected
Providers and deployers of high-risk AI systems operating in the EU. If your AI agents make decisions in credit scoring, insurance underwriting, recruitment, healthcare triage, or law enforcement — you are likely in scope.
Requirement → Feature → Evidence
A direct mapping from EU AI Act requirements to ADJUDON's technical capabilities and the evidence each produces.
| EU AI Act Requirement | ADJUDON Feature | Evidence Produced |
|---|---|---|
| Art. 13 — Traceable decision records | Full Audit Trail | Append-only log with input, output, CPI score, policy matches, overrides |
| Art. 13 — Independent confidence assessment | Confidence Engine (3-pillar CPI) | Decomposable CPI score per decision |
| Art. 13 — Tamper-evident logs | SHA-256 hash chain | One-click verification via API; broken chain = detected tampering |
| Art. 13 — On-demand regulatory export | CSV + PDF export | Filterable, formatted exports for BaFin, DPAs, auditors |
| Art. 14 — Detection of uncertain decisions | CPI threshold + Policy Engine | Automatic flagging when score < threshold |
| Art. 14 — Enforced pause before execution | API returns 202 + Review Queue | Application holds action until human resolves |
| Art. 14 — Full context for reviewer | Review Queue with decision context | Input, output, CPI breakdown, policy trigger, agent history |
| Art. 14 — Identity-logged oversight | Review audit trail | Reviewer identity, decision, rationale, timestamp |
August 2026 is closer than your next product cycle.
Start with a free trial. Connect one agent. See the audit trail, the confidence scores, and the review queue working on your real decisions.