Compliance · March 2026 · 10 min read

How the EU AI Act's Article 13 Changes How We Build Software

The liability sits at runtime. Here is what the Act demands in practical engineering terms.

By the ADJUDON Compliance Team

What Article 13 Actually Means at the Code Level

Article 13 of the EU AI Act mandates "transparency and provision of information to deployers." In non-legal terms, this means: if your AI system makes a decision, you must be able to explain how it arrived at that decision, in a way that is auditable and reproducible.

This is not a UX problem. This is not a documentation problem. This is a runtime instrumentation problem.

For every decision your AI makes, you need to log:

  • The input that triggered the decision
  • The model version and configuration used
  • The confidence score or uncertainty estimate
  • The reasoning path or chain-of-thought (if applicable)
  • Whether a human reviewed or overrode the decision
  • The timestamp and user context

And crucially: this log must be immutable and tamper-proof. You cannot edit it after the fact. You cannot delete entries. You cannot backfill missing data when an auditor shows up.
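One common way to make a log tamper-evident is hash chaining: each entry includes a hash of its own contents plus the previous entry's hash, so editing or deleting any record breaks the chain. The sketch below is illustrative, not a reference implementation — the `AuditEntry` and `AuditLog` names are our own, and a production system would also need durable, access-controlled storage.

```typescript
import { createHash } from "node:crypto";

interface AuditEntry {
  input: unknown;
  modelVersion: string;
  confidence: number;
  reasoning: string;
  humanReviewed: boolean;
  timestamp: number;
  prevHash: string; // hash of the previous entry, chaining the log
  hash: string;     // hash of this entry's contents + prevHash
}

class AuditLog {
  private entries: AuditEntry[] = [];

  record(data: Omit<AuditEntry, "prevHash" | "hash">): AuditEntry {
    const prevHash = this.entries.length
      ? this.entries[this.entries.length - 1].hash
      : "genesis";
    const hash = createHash("sha256")
      .update(JSON.stringify(data) + prevHash)
      .digest("hex");
    const entry = { ...data, prevHash, hash };
    this.entries.push(entry);
    return entry;
  }

  // Recompute every hash; any edited or removed entry breaks the chain.
  verify(): boolean {
    let prevHash = "genesis";
    for (const e of this.entries) {
      const { hash, prevHash: p, ...data } = e;
      const expected = createHash("sha256")
        .update(JSON.stringify(data) + p)
        .digest("hex");
      if (p !== prevHash || hash !== expected) return false;
      prevHash = hash;
    }
    return true;
  }
}
```

With this structure, "you cannot backfill missing data when an auditor shows up" stops being a policy statement and becomes a property you can mechanically check.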

What Article 14 Means for Your System Architecture

Article 14 builds on Article 13 by introducing the concept of "human oversight." This is where things get architecturally interesting.

The Act distinguishes between three types of oversight:

  1. Human-in-the-loop: A human approves every decision before it is executed.
  2. Human-on-the-loop: A human can intervene and override decisions in real-time.
  3. Human-in-command: A human can deactivate the system at any time.

For high-risk AI systems (which includes most enterprise use cases), you need all three.

This means your architecture must support:

  • A review queue for decisions that require human approval
  • A real-time dashboard for monitoring and intervention
  • A kill switch that can halt all AI decision-making instantly
  • A fallback mechanism to route decisions to human operators when confidence is low
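The requirements above can be sketched as a single gatekeeper in front of every AI decision. This is a minimal illustration — the names (`OversightGate`, `Decision`) and the 0.85 threshold are our own choices, not terms from the Act:

```typescript
type Decision = { id: string; confidence: number; action: string };

class OversightGate {
  private halted = false;                // "human-in-command": kill switch
  readonly reviewQueue: Decision[] = []; // "human-in-the-loop": approval queue

  killSwitch(): void {
    this.halted = true;
  }

  // Route a decision: execute it, queue it for human approval,
  // or block it entirely because the system has been halted.
  route(d: Decision, threshold = 0.85): "execute" | "review" | "halted" {
    if (this.halted) return "halted";
    if (d.confidence < threshold) {
      this.reviewQueue.push(d);
      return "review";
    }
    return "execute";
  }
}
```

The key design point is that the gate sits on the execution path, not beside it: a halted system cannot execute anything, and low-confidence decisions physically cannot bypass the queue.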

A Concrete Example: Automated Loan Decisioning

Let's say you're building an AI system that approves or denies loan applications. Here's what a compliant implementation looks like:

const decision = await ai.evaluate(application);

// Article 13: Log the decision
await auditLog.record({
  input: application,
  modelVersion: ai.version,
  confidence: decision.confidence,
  reasoning: decision.reasoning,
  timestamp: Date.now(),
  user: currentUser.id
});

// Article 14: Human oversight routing
if (decision.confidence < 0.85) {
  await reviewQueue.enqueue({
    applicationId: application.id,
    aiRecommendation: decision,
    requiresHumanApproval: true
  });
} else {
  await executeDecision(decision);
}

Notice what's happening here:

  • Every decision is logged before it is executed
  • Low-confidence decisions are routed to human review
  • The AI's recommendation is preserved even if a human overrides it
  • The entire chain is auditable and reproducible
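Preserving the AI's recommendation through a human override is worth making explicit, since it is the part naive implementations most often overwrite in place. One way to structure the review record (field names here are illustrative, not mandated by the Act):

```typescript
interface ReviewedDecision {
  applicationId: string;
  aiRecommendation: { outcome: "approve" | "deny"; confidence: number };
  humanOutcome: "approve" | "deny";
  overridden: boolean; // derived at write time, never edited afterwards
  reviewerId: string;
  reviewedAt: number;
}

function recordReview(
  applicationId: string,
  aiRecommendation: { outcome: "approve" | "deny"; confidence: number },
  humanOutcome: "approve" | "deny",
  reviewerId: string,
): ReviewedDecision {
  return {
    applicationId,
    aiRecommendation, // kept verbatim even when the human disagrees
    humanOutcome,
    overridden: humanOutcome !== aiRecommendation.outcome,
    reviewerId,
    reviewedAt: Date.now(),
  };
}
```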

What Changes for Engineering Teams

The EU AI Act fundamentally changes how you think about AI deployment. You can no longer treat an LLM as a black box API call. Instead, you need to build:

  • Confidence scoring: Every AI output must include a confidence metric
  • Policy-based routing: Decisions are routed based on confidence and risk level
  • Immutable audit logs: Every decision is logged with full context
  • Human review workflows: Low-confidence decisions go to human operators
  • Override mechanisms: Humans can intervene at any stage
  • Kill switches: The entire system can be halted instantly
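Policy-based routing from the list above can be as simple as per-risk-level confidence thresholds. The levels and numbers below are invented for illustration — in practice they would come from your own risk classification:

```typescript
type RiskLevel = "low" | "medium" | "high";

// Minimum confidence required to auto-execute at each risk level.
const thresholds: Record<RiskLevel, number> = {
  low: 0.7,
  medium: 0.85,
  high: 1.1, // > 1.0: high-risk decisions always go to a human
};

function routeDecision(risk: RiskLevel, confidence: number): "auto" | "human" {
  return confidence >= thresholds[risk] ? "auto" : "human";
}
```

Setting the high-risk threshold above 1.0 is a deliberate trick: it guarantees human review regardless of how confident the model claims to be.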

This is not optional. This is not a "nice to have." This is the legal baseline for deploying AI in the EU starting in 2026.

And if you're thinking "we'll just wait and see," consider this: penalties under the Act reach up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations.

From ADJUDON Engineering

ADJUDON provides the Confidence Engine, Policy Router, and immutable Audit Log as a single SDK integration — without latency overhead. Purpose-built for compliance with Articles 13 and 14 of the EU AI Act.