Quethos Sentinel | EU AI Act Compliance & Audit Engine

Sentinel scans your GitHub, GitLab, and Bitbucket assets, classifies components against EU AI Act obligations, and generates a comprehensive, machine-readable violation register.

Your AI stack could be carrying up to €35M in hidden liability. Aug 2026 Enforcement. Zero installation overhead. Secure GitHub auth. Real-time liability tracking.

Key Figures

  • LIABILITY CAP: Up to €35M or 7% of worldwide annual turnover, the maximum penalty per AI Act violation
  • COMPLIANCE LOCK: FEB 2025 (Prohibited Practices Banned), AUG 2025 (GPAI Obligations), AUG 2026 (Full High-Risk Compliance)
  • ACCURACY INDEX: 70-80% of findings validated, reported transparency-first
  • COVERAGE INDEX: ART 5-55 Full regulatory scope in every run

The call no CTO wants to receive.

It's 2026. Your AI-powered hiring tool has been quietly scoring candidates for 18 months. It works well — your recruiters love it.

"This is the Irish Data Protection Commission. We've received a complaint regarding your automated candidate assessment system. We'll need full documentation of your risk management process, conformity assessment, and human oversight mechanisms within 14 days."

You open the codebase. There's no audit log. No human override mechanism. No risk documentation. The emotion scorer you shipped in Q3 is technically prohibited under Article 5(1)(f).

€35M is the maximum fine under the AI Act (or 7% of worldwide turnover, whichever is higher). For most companies, though, the fine isn't what kills them; it's the press coverage the next morning.

From repo to report in minutes.

Three steps. Zero infra overhead. No human bias. No 40-page PDFs. Just actionable engineering truth.

Phase 01/03: Connect target

Authorize via GitHub OAuth. Select the repository you want audited. Sentinel performs a shallow clone into a zero-trust temporary workspace. LATENCY: <30S

Phase 02/03: Risk gating

Regex patterns flag candidate AI code fragments. LLMs analyze their intent. Results are filtered through deterministic compliance rules. RUNTIME: EXECUTING...
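The regex gating step can be sketched as a simple pre-filter. The pattern names and expressions below are illustrative assumptions, not Sentinel's actual rule set:

```python
import re

# Hypothetical pre-filter patterns; the production rule set is internal to Sentinel.
AI_SIGNALS = {
    "emotion_recognition": re.compile(r"emotion|sentiment_score|facial_expression", re.I),
    "biometric_capture": re.compile(r"getUserMedia|VideoCapture", re.I),
    "llm_call": re.compile(r"openai|gemini|anthropic|generate_content", re.I),
}

def gate(source: str) -> list[str]:
    """Return the signal categories a file matches.

    Files that match at least one category are forwarded to the
    LLM intent-analysis stage; the rest are skipped as noise.
    """
    return [name for name, pattern in AI_SIGNALS.items() if pattern.search(source)]
```

Only files that clear this gate incur LLM cost, which is what keeps whole-repo scans fast.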

Phase 03/03: Report output

Findings categorized by risk tier, article reference, and suggested fix (FIX_RX). Generate triage tickets with a single click. VALIDATION RATE: 70-80%
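Triage-ticket generation maps a finding onto GitHub's Issues REST API. A minimal sketch, assuming a hypothetical finding dict shape (`article`, `summary`, `file`, `tier`, `fix` are illustrative field names, not Sentinel's schema):

```python
import json
import urllib.request

def build_issue_payload(finding: dict) -> dict:
    """Shape a compliance finding into a GitHub Issues API payload."""
    return {
        "title": f"[AI Act {finding['article']}] {finding['summary']}",
        "body": (
            f"**File:** `{finding['file']}`\n"
            f"**Risk tier:** {finding['tier']}\n\n"
            f"{finding['fix']}"
        ),
        "labels": ["ai-act-compliance"],
    }

def create_triage_issue(owner: str, repo: str, token: str, finding: dict) -> int:
    """POST the finding to GitHub's REST API and return the new issue number."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/issues",
        data=json.dumps(build_issue_payload(finding)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["number"]
```

Keeping payload construction separate from the HTTP call makes the regulatory formatting easy to test without network access.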

Five tools that replace a compliance consultant.

Instead of hiring expensive consultants, get an AI-powered audit that works at the code level.

  • Codebase scanner: Walks your entire repository, identifies AI components, and batches suspicious files for deep analysis. Skips noise like node_modules. Tags: Python · JS · TS · R · Jupyter
  • Risk classification: Classifies every AI component through all five EU AI Act tiers with article references. Deterministic rules for Article 5 violations. Tags: Art. 5 · Annex III · Art. 50
  • Biometric detection: Detects hardware access (camera, mic) alongside AI logic. Automatically escalates risk tier when sensor code is found. Tags: getUserMedia · VideoCapture
  • GitHub integration: Turns findings into tracked GitHub issues with regulatory references and mandatory actions. Slots into your engineering workflow. Tags: GitHub Issues API
  • Real-time stats: Findings stream to your dashboard as they're discovered. Watch your compliance posture build in real time as files are analyzed. Tags: Server-Sent Events
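The codebase scanner above amounts to a filtered repository walk. A minimal sketch, assuming the file extensions listed in the FAQ and a simplified skip list:

```python
from pathlib import Path

# Extensions from the FAQ; skip list is an illustrative subset of "noise" directories.
SCAN_EXTENSIONS = {".py", ".js", ".ts", ".jsx", ".tsx", ".json"}
SKIP_DIRS = {"node_modules", ".git", "venv", "__pycache__"}

def collect_candidates(repo_root: str) -> list[Path]:
    """Walk the repository and return files worth deeper AI analysis."""
    candidates = []
    for path in Path(repo_root).rglob("*"):
        # Prune anything inside a noise directory.
        if any(part in SKIP_DIRS for part in path.parts):
            continue
        if path.is_file() and path.suffix in SCAN_EXTENSIONS:
            candidates.append(path)
    return candidates
```

The resulting candidate batch is what feeds the risk-gating phase; everything else never leaves disk.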

What tier are your AI systems in?

Sentinel classifies every component through the Act's sequential gate system. Compliance is not optional—your obligations depend entirely on which tier applies.

Prohibited (Article 5)

Applies to: Emotion recognition at work/school, social scoring, real-time biometrics in public spaces.

Obligations:
  • Immediate withdrawal of functionality
  • No path to compliance — redesign required
  • Full halt of data processing activities

Sentinel Action: Flags the offending file and suggests a logic pivot or alternative architecture.

High Risk (Annex III)

Applies to: Hiring tools, credit scoring, education assessment, law enforcement, critical infrastructure.

Obligations:
  • Establish a risk management system
  • Continuous audit logging & monitoring
  • Human oversight mechanism by design
  • Comprehensive technical documentation

Sentinel Action: Identifies missing human-in-the-loop logic and generates draft technical files.

GPAI (Art. 51–55)

Applies to: Foundation models, LLMs, diffusion models, and general-purpose generative AI systems.

Obligations:
  • Public transparency obligations
  • Copyright law compliance & reporting
  • Systemic risk assessment for large models

Sentinel Action: Maps foundation model calls to transparency requirements and identifies risk gaps.

Limited (Article 50)

Applies to: Chatbots, deepfake generators, and AI-generated content systems.

Obligations:
  • Disclose to users that they are interacting with AI
  • Watermark all AI-generated content

Sentinel Action: Scans UI for mandatory disclosure strings and verifies watermarking logic.
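The Article 50 disclosure check boils down to a string scan of UI source. A sketch with illustrative patterns (real rules would need to be locale-aware and far broader than these):

```python
import re

# Illustrative disclosure phrases only; not Sentinel's actual rule set.
DISCLOSURE_PATTERNS = [
    re.compile(r"you are (chatting|talking|interacting) with an AI", re.I),
    re.compile(r"AI[- ]generated", re.I),
    re.compile(r"powered by (an )?AI", re.I),
]

def has_ai_disclosure(ui_source: str) -> bool:
    """Return True if the UI text contains a recognisable AI disclosure string."""
    return any(p.search(ui_source) for p in DISCLOSURE_PATTERNS)
```

A chatbot UI file that returns False here would be flagged as a Limited-tier finding with an Art. 50 reference.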

Simple & Transparent Pricing

Traditional EU AI Act audits cost €30k+ per system. Sentinel delivers continuous compliance telemetry starting at €49/mo.

  • Starter (€49/mo): Single Developer. 20 Scans / Month, 3 Repositories, Findings Register, GitHub Integration.
  • Growth (€199/mo): Engineering Squad. 100 Scans / Month, Unlimited Repos, Taskboard Scan, Priority Support, PDF compliance report.
  • Enterprise (Custom): Large Scale. Unlimited Scans, Unlimited Repos, On-premise Gateway, Dedicated Account Manager, SSO Integration.

Compare technical capabilities

Capability              | Starter | Growth    | Enterprise
Scans/month             | 20      | 100       | Unlimited
Repositories            | 3       | Unlimited | Unlimited
Findings + article refs | Yes     | Yes       | Yes
GitHub issue creation   | Yes     | Yes       | Yes
PDF compliance report   | No      | Yes       | Yes (branded)
Scan history            | 30 days | 12 months | Unlimited
Priority support        | No      | Yes       | Dedicated manager
On-premise gateway      | No      | No        | Yes
SSO integration         | No      | No        | Yes

* Cost is equivalent to roughly 1 hour of human consultancy per month. Powered by Stripe.

Questions we actually get.

Is this a legally certified compliance assessment?
No — and we'll be direct about that. Sentinel is a developer audit tool, similar to a linter for EU AI Act obligations. It identifies compliance risks in your codebase at the code level. It is not a legal certification and does not replace legal advice. Think of it as the step before you engage a lawyer — you arrive knowing exactly what needs to be fixed.
Does my code leave my environment?
Your repository is cloned into a temporary secure buffer, analysed, and immediately deleted after the scan completes. Code snippets (up to 3,000 chars per file) are sent to the Gemini API for intent analysis. We do not store your source code. Full details are in our privacy policy.
What languages and frameworks does the scanner support?
Sentinel currently scans .py, .js, .ts, .jsx, .tsx, and .json files. This covers the vast majority of AI system implementations. Additional language support is on the roadmap.
What if my system gets classified as HIGH RISK?
A high-risk classification doesn't mean you need to stop shipping. It means specific Article obligations apply — audit logging, human oversight mechanisms, risk documentation. Sentinel tells you exactly which files are missing which implementations, and generates suggested code fixes for each one.
When does the August 2026 deadline actually apply?
The August 2026 deadline applies to high-risk AI systems listed in Annex III — including employment, credit, education, and law enforcement applications. If your AI system makes or significantly influences decisions in these areas, you need to be compliant before that date. Sentinel helps you find out if you're in scope.

© 2024 Quethos Sentinel. Built for compliance-first engineering teams.