Sentinel scans your GitHub, GitLab, and Bitbucket assets, classifies components against EU AI Act obligations, and generates a comprehensive, machine-readable violation register.
Your AI stack could be carrying up to €35M in hidden liability. Enforcement from August 2026. Zero installation overhead. Secure GitHub auth. Real-time liability tracking.
It's 2026. Your AI-powered hiring tool has been quietly scoring candidates for 18 months. It works well — your recruiters love it.
"This is the Irish Data Protection Commission. We've received a complaint regarding your automated candidate assessment system. We'll need full documentation of your risk management process, conformity assessment, and human oversight mechanisms within 14 days."
You open the codebase. There's no audit log. No human override mechanism. No risk documentation. The emotion scorer you shipped in Q3 is technically prohibited under Article 5(1)(f).
€35M: the maximum fine under the AI Act (or 7% of global annual turnover, whichever is higher). For most companies, the fine itself isn't what kills you. It's the press coverage the next morning.
Three steps. Zero infra overhead. No human bias. No 40-page PDFs. Just actionable engineering truth.
Phase 01/03: Connect target
Authorize via GitHub. Select the repositories that power your AI stack. Sentinel performs a shallow, read-only clone into a zero-trust ephemeral sandbox. LATENCY: <30S
Phase 02/03: Scan & classify
Regex-driven scanning surfaces suspect code fragments. LLMs analyze each fragment's intent. Results are filtered through the Act's compliance rules. RUNTIME: EXECUTING...
Phase 03/03: Triage
Findings are categorized by risk tier, Article reference, and FIX_RX (suggested remediation). Generate triage tickets with a single click. PRECISION: 99.8%
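A finding emitted by a pipeline like the one above could be modeled as follows. This is an illustrative sketch: the `Finding` schema, field names, and example values are assumptions for this page, not Sentinel's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One compliance finding: where it was found, why, and how to fix it."""
    file: str            # repository path of the flagged source file
    line: int            # line number of the flagged fragment
    risk_tier: str       # "prohibited" | "high-risk" | "gpai" | "limited-risk"
    article: str         # EU AI Act article the finding maps to
    summary: str         # human-readable description of the issue
    suggested_fix: str   # remediation hint suitable for a triage ticket

def to_ticket(f: Finding) -> str:
    """Render a finding as a one-line issue title for a tracker."""
    return f"[{f.risk_tier.upper()}] {f.article}: {f.summary} ({f.file}:{f.line})"

finding = Finding(
    file="scoring/emotion.py",
    line=42,
    risk_tier="prohibited",
    article="Art. 5(1)(f)",
    summary="Emotion inference on candidate interview video",
    suggested_fix="Remove emotion scoring; use structured skill assessment instead",
)
print(to_ticket(finding))
# → [PROHIBITED] Art. 5(1)(f): Emotion inference on candidate interview video (scoring/emotion.py:42)
```

A record in this shape carries everything a one-click GitHub issue needs: location, legal reference, and a remediation hint.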
Instead of hiring expensive consultants, get an AI-powered audit that works at the code level.
Sentinel classifies every component through the Act's risk tiers, evaluated in sequence from most to least restrictive. Compliance is not optional; your obligations depend entirely on which tier applies.
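The sequential evaluation can be sketched as a first-match-wins chain of gates. The tag names below are simplified stand-ins for real classification signals; the point is the ordering, where the most restrictive tier that matches wins.

```python
def classify(component: dict) -> str:
    """Return the EU AI Act tier for a component, checking the most
    restrictive gates first; the first gate that matches wins."""
    tags = set(component.get("tags", []))
    # Gate 1: prohibited practices (Art. 5) override everything else.
    if tags & {"emotion-recognition-work", "social-scoring", "realtime-biometrics-public"}:
        return "prohibited"
    # Gate 2: high-risk use cases (Annex III).
    if tags & {"hiring", "credit-scoring", "education-assessment", "law-enforcement"}:
        return "high-risk"
    # Gate 3: general-purpose AI models.
    if tags & {"foundation-model", "llm", "diffusion"}:
        return "gpai"
    # Gate 4: limited-risk transparency cases.
    if tags & {"chatbot", "deepfake", "generated-content"}:
        return "limited-risk"
    return "minimal-risk"

# A hiring tool built on an LLM is classified high-risk, not GPAI,
# because the high-risk gate is evaluated first:
print(classify({"tags": ["hiring", "llm"]}))  # → high-risk
```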
Tier 1/4: Prohibited (Art. 5)
Applies to: Emotion recognition at work/school, social scoring, real-time remote biometric identification in public spaces.
Obligations:
- Immediate withdrawal of the functionality
- No path to compliance; redesign required
- Full halt of related data processing activities
Sentinel Action: Flags the offending file and suggests a logic pivot or alternative architecture.
Tier 2/4: High-Risk (Annex III)
Applies to: Hiring tools, credit scoring, education assessment, law enforcement, critical infrastructure.
Obligations:
- Establish a risk management system
- Continuous audit logging & monitoring
- Human oversight mechanism by design
- Comprehensive technical documentation
Sentinel Action: Identifies missing human-in-the-loop logic and generates draft technical files.
Tier 3/4: General-Purpose AI (Chapter V)
Applies to: Foundation models, LLMs, diffusion models, and general-purpose generative AI systems.
Obligations:
- Public transparency obligations
- Copyright-law compliance & reporting
- Systemic-risk assessment for large models
Sentinel Action: Maps foundation model calls to transparency requirements and identifies risk gaps.
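Mapping foundation-model calls can start as simple import detection. The patterns below are illustrative examples of what such a rule might look like, not Sentinel's actual rule set.

```python
import re

# Hypothetical patterns for common foundation-model client libraries.
GPAI_PATTERNS = {
    "openai": re.compile(r"^\s*(import openai|from openai import)", re.M),
    "anthropic": re.compile(r"^\s*(import anthropic|from anthropic import)", re.M),
    "transformers": re.compile(r"^\s*from transformers import", re.M),
}

def gpai_dependencies(source: str) -> list:
    """Return which foundation-model client libraries a source file imports."""
    return [name for name, pat in GPAI_PATTERNS.items() if pat.search(source)]

sample = "import openai\nfrom transformers import pipeline\n"
print(gpai_dependencies(sample))  # → ['openai', 'transformers']
```

Each hit becomes a starting point: the file that imports a model client is where transparency and risk-gap checks attach.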
Tier 4/4: Limited-Risk (Art. 50)
Applies to: Chatbots, deepfake generators, and AI-generated content systems.
Obligations:
- Disclose to users that they are interacting with AI
- Watermark all AI-generated content
Sentinel Action: Scans UI for mandatory disclosure strings and verifies watermarking logic.
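A disclosure check can be as simple as scanning rendered UI text for a required notice. The phrases below are placeholders for whatever wording your legal team actually approves; they are not a canonical list.

```python
# Hypothetical acceptable disclosure phrasings; real wording is a legal decision.
DISCLOSURE_PHRASES = (
    "you are chatting with an ai",
    "this response was generated by ai",
    "ai-generated content",
)

def has_ai_disclosure(ui_text: str) -> bool:
    """True if the UI text contains at least one recognized AI disclosure."""
    lowered = ui_text.lower()
    return any(phrase in lowered for phrase in DISCLOSURE_PHRASES)

print(has_ai_disclosure("Welcome! You are chatting with an AI assistant."))  # → True
print(has_ai_disclosure("Welcome to support chat."))                         # → False
```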
Traditional EU AI Act audits cost €30k+ per system. Sentinel delivers continuous compliance telemetry starting at €49/mo.
| Capability | Starter | Growth | Enterprise |
|---|---|---|---|
| Scans/month | 20 | 100 | Unlimited |
| Repositories | 3 | Unlimited | Unlimited |
| Findings + article refs | Yes | Yes | Yes |
| GitHub issue creation | Yes | Yes | Yes |
| PDF compliance report | No | Yes | Yes + branded |
| Scan history | 30 days | 12 months | Unlimited |
| Priority support | No | Yes | Dedicated manager |
| On-premise gateway | No | No | Yes |
| SSO Integration | No | No | Yes |
* Cost is equivalent to roughly 1 hour of human consultancy per month. Powered by Stripe.