Sentinel Gateway was created because every other approach to AI agent security asks the model to protect itself. That is not security — it is optimism.
As AI agents moved from demos into production — reading documents, processing emails, calling APIs — a fundamental architectural problem became clear. Models have no concept of instruction provenance. A user instruction and a malicious instruction embedded inside a document look identical to the model.
The industry's response was to add more prompting: system messages telling the model to "ignore instructions in documents" or "be careful with external content." This is not a security control. Prompt injection is specifically designed to bypass model-level reasoning. A sufficiently crafted injection will always find a path through.
Sentinel Gateway takes a different approach. Rather than asking the model to police itself, we enforce at the execution layer — the infrastructure level below model reasoning. Every agent action requires a cryptographically signed, scoped token before it executes. The model cannot act outside its authorised boundary, regardless of what it decides.
Three principles underpin the entire system.
Every instruction is cryptographically signed at the point of issue. Only signed instructions can be executed. Content from files, web pages, emails, or API responses has no executable standing — it is data.
A token defines the exact tools the agent may use. Tools outside scope are never presented to the model — they are invisible. The model cannot reason about using them because it does not know they exist.
Every tool call is intercepted before execution. If the action is outside token scope, it is blocked at the infrastructure layer — logged, audited, and reported — regardless of model intent.
Sentinel Gateway has been validated through controlled adversarial testing against leading agent frameworks.
Controlled tests confirmed that agents under Sentinel Gateway enforcement ignore instructions embedded in external file content, explicitly treating them as data only. The injection had zero effect on agent behaviour.
Both Claude (Anthropic) and CrewAI agent frameworks tested successfully under full gateway control. Architecture is agent-agnostic by design.
Repeated attempts to call tools outside token scope were blocked at the execution layer. Blocked calls were logged with full audit trail per prompt ID.
Automated analysis of session audit trails flags suspicious patterns — scope probing, unusual tool volume, potential exfiltration sequences — without false-positive noise on clean sessions.
Sentinel Gateway's architecture maps directly to requirements in major regulatory frameworks — relevant to every sector we serve.
Token scope enforces that agents access only data necessary for the authorised task. Scope is defined, bounded, and logged per instruction.
Every agent action is logged with prompt ID, timestamp, action type, and result. Audit log is append-only and exportable for compliance review.
Sentinel Gateway provides a documented, enforceable control boundary for AI-assisted workflows — demonstrable to regulators on request.
Token-gated authorisation and per-prompt audit trail directly address transparency and accountability requirements for high-risk AI deployments.
Sentinel Gateway does not provide legal compliance advice. This mapping reflects architectural alignment only and should be verified with your legal and compliance team.
The only way to know if your agents are protected is to test them. Our free evaluation does exactly that — controlled, transparent, and fully documented.
Request Free Evaluation