Cullis Mastio gives every AI agent in your organization a cryptographic identity, enforces policy before each call, and writes a tamper-evident audit chain. Designed for EU AI Act high-risk systems, the Colorado AI Act, NIST AI RMF, and ISO 42001. Self-hosted, open source, drop-in for Claude, GPT, Mistral, and any MCP-compatible tool.
One Docker container per organization. A single authority over AI agent identity. Policy enforced before the LLM call lands. An append-only chain of every action, externally verifiable by your auditor or regulator.
X.509 cert and SPIFFE ID per agent process. The caller authenticated at the gateway is the agent itself, not a shared API key reused by twelve services. Identity rotates on its own schedule; revocation propagates in seconds.
Identity model →

PDP fires before the LLM API or MCP tool call. Per-principal scopes (this agent can read claim files, that agent cannot). Decisions logged with reason. OPA-compatible bundles or built-in DSL.
Policy model →

Every event (auth, enroll, message, tool call, LLM token) hashed and chained. RFC 3161 timestamp anchoring optional. Your auditor verifies the chain externally without trusting Cullis or your IT team.
Audit chain →
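The pattern behind these three cards — authenticate the agent, check policy before the call, append a chained audit event — can be sketched in a few lines of Python. This is an illustrative toy, not Mastio's actual API: `check_scope`, `append_event`, and the SPIFFE ID shown are hypothetical names.

```python
import hashlib
import json

def check_scope(agent_scopes: set, required: str) -> bool:
    # Policy decision point: deny unless the agent's scopes cover the call.
    return required in agent_scopes

def append_event(chain: list, event: dict) -> list:
    # Each entry hashes the previous entry's hash plus the event payload,
    # so editing any past entry breaks every subsequent link.
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    h = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"prev": prev, "event": event, "hash": h})
    return chain

chain = []
scopes = {"claims:read"}  # per-principal scopes for one agent
decision = check_scope(scopes, "claims:read")
append_event(chain, {
    "agent": "spiffe://example.org/claims-intake",  # hypothetical SPIFFE ID
    "action": "claims:read",
    "allowed": decision,
    "reason": "scope granted",
})
```

The decision and its reason are logged whether the call is allowed or denied; the chain grows by one entry per event.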
Your AI agents are already running in different places: laptops, browsers, backend services, containers. Mastio attaches without you rewriting them.
↳ For cross-organization agent-to-agent routing, see Cullis Court (Day-2 federation layer).
The same identity-policy-audit pattern serves regulated industries across Europe and the United States. Pick your use case.
Claim intake agents, fraud detection, senior adjuster override, reassurer signoff. Every decision auditable end-to-end under AI Act Annex III and IDD Art. 17.
Read the scenario →

Customer service agents, KYC document review, transaction monitoring. Audit trail aligned with DORA Art. 28-30 and Fed SR 11-7 model risk management.

Coming soon

Triage agents, diagnosis assistance, prescription review. Audit trail mapped to HIPAA 164.312(b) integrity controls and MDR post-market surveillance.

Coming soon

Citizen-facing agents in social services, taxation, public procurement. Sovereign deployment, audit chain mappable to NIS2 essential-entity obligations and the Colorado AI Act.

Coming soon

Compliance teams need to map Cullis capabilities to specific clauses in their framework. We have done the mapping for you.
| Framework | Article / clause | Cullis capability |
|---|---|---|
| EU AI Act | Art. 12, 15, 72 | Tamper-evident audit chain, model run logging, post-market traceability |
| DORA | Art. 28, 30 | Self-hosted deployment, append-only ICT third-party audit trail |
| NIST AI RMF | MEASURE 2.7, GOVERN 1.7 | Standardized audit log export, per-agent identity, role separation |
| Colorado AI Act | Consumer disclosure for high-risk AI | Decision logging with reason, per-decision provenance |
| ISO 42001 | AI Management System controls | Operational governance, lifecycle controls, audit evidence |
| HIPAA | 164.312(b) audit controls | Append-only audit chain, integrity controls, external verification |
| SR 11-7 | Model risk management | Model run traceability, override logging, accountable identity |
Capability mapping reflects the current Cullis Mastio release. Specific compliance assessment for a regulated deployment remains the responsibility of the deploying organization.
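The "external verification" entries in the table rest on one property of a hash chain: an auditor can recompute every link from the raw events, so no trust in the operator is required. A minimal sketch, assuming SHA-256 chaining over JSON events (illustrative only, not Mastio's wire format; `link` and `verify_chain` are hypothetical names):

```python
import hashlib
import json

def link(prev_hash: str, event: dict) -> dict:
    # One audit entry: the hash commits to the previous hash and the event.
    payload = json.dumps(event, sort_keys=True)
    h = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev": prev_hash, "event": event, "hash": h}

def verify_chain(chain: list) -> bool:
    # Auditor-side check: recompute every hash from the raw events.
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

# Build a two-entry chain, then tamper with the first event.
chain = [link("0" * 64, {"action": "claims:read", "allowed": True})]
chain.append(link(chain[-1]["hash"], {"action": "fraud:flag", "allowed": True}))
ok_before = verify_chain(chain)          # True
chain[0]["event"]["allowed"] = False
ok_after = verify_chain(chain)           # False: tampering breaks the chain
```

Optional RFC 3161 anchoring adds a trusted timestamp over chain heads, so an operator cannot silently rebuild the whole chain after the fact.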
Boot Mastio with example agents, MCP servers, and a second org for federation preview. Pre-wired with SPIRE, Keycloak, Vault, Postgres.
```shell
git clone https://github.com/cullis-security/cullis
cd cullis
./sandbox/demo.sh full
```

Then replay intra-org MCP tool calls and cross-org A2A messages:

```shell
./sandbox/demo.sh mcp-catalog     # intra-org: agent → MCP tool call (Org A)
./sandbox/demo.sh mcp-inventory   # intra-org: agent → MCP tool call (Org B)
./sandbox/demo.sh oneshot-a-to-b  # cross-org: encrypted A2A message A → B
./sandbox/demo.sh oneshot-b-to-a  # cross-org: encrypted A2A message B → A
```

We are early. We want to talk with security and compliance teams running their first AI agents in production. Schedule a call, read the research, or dive into the architecture.