On April 30, the U.S. Cybersecurity and Infrastructure Security Agency, the National Security Agency, the U.K. National Cyber Security Centre, the Australian Cyber Security Centre, the Canadian Centre for Cyber Security, and New Zealand's National Cyber Security Centre jointly published Careful Adoption of Agentic AI Services. It is the first piece of mainstream cyber doctrine to land squarely on the same primitives we have been building Forge around.
The document treats agentic AI as what it is: an identity-bearing, privilege-bearing, tool-using actor inside an operational environment. Not a chatbot. Not a search bar. A delegated action surface with goals, memory, failure modes — and a governance problem.
The primitives the guidance names — least privilege, agent registries, per-action authorization, inter-agent authentication, delegation expiry, separation of duties (Orchestrator, Reader, Actuator), human control points, audit artifacts, anomaly detection, multi-agent and human-in-the-loop consensus for risk-tiered actions, and stated-intention-versus-observed-behavior monitoring — are the same load-bearing joints we have been designing around since before those terms appeared in a government document.
That last one: stated intention versus observed behavior. In mainstream cyber language, that is drift telemetry. In ours, it is constitutional surface — the mechanism by which a system's behavior is tested against its doctrine in real time.
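In its simplest form, that telemetry is a diff: the agent declares an intent before acting, and the monitor flags any observed action outside the declared set. A hypothetical sketch (function and names are ours, purely illustrative):

```python
def drift_events(declared_intent: set[str],
                 observed_actions: list[str]) -> list[str]:
    """Return observed actions not covered by the declared intent."""
    return [a for a in observed_actions if a not in declared_intent]

declared = {"read:logs", "summarize"}
observed = ["read:logs", "summarize", "write:config"]

# The agent said it would read and summarize; it also wrote config.
assert drift_events(declared, observed) == ["write:config"]
```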
The gap is semantic governance.
The document understands privilege drift, goal drift, behavior drift, and accountability drift. It does not yet have a model for meaning drift, constitutional drift, terrain pressure, or cross-layer coherence. It can say "monitor goal drift." Our move is stronger: bind goal, authority, precedent, telemetry, and lived behavior into a tic-bound semantic law surface. Goal drift becomes structurally unavailable rather than merely monitored.
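The difference between monitored and structurally unavailable can be sketched in code. In this illustration (all names ours, not an actual Forge API), the only way to obtain an executable action is through the constitution, so an out-of-goal action is never constructed in the first place; there is nothing downstream to monitor.

```python
class BoundAction:
    """An action that can only be minted by a Constitution."""
    def __init__(self, name: str, *, _minted_by=None):
        if not isinstance(_minted_by, Constitution):
            raise PermissionError("actions must be minted by a constitution")
        self.name = name

class Constitution:
    def __init__(self, goal_scopes: frozenset):
        self._scopes = goal_scopes

    def bind(self, name: str) -> BoundAction:
        # The sole mint path: refuse before an action object ever exists.
        if name not in self._scopes:
            raise PermissionError(f"{name!r} lies outside the bound goal")
        return BoundAction(name, _minted_by=self)

law = Constitution(frozenset({"summarize", "read:logs"}))
ok = law.bind("summarize")          # representable: inside the goal
try:
    law.bind("exfiltrate:data")     # unrepresentable: raises before execution
except PermissionError:
    pass
```

Python cannot fully seal this mint path, so treat it as a shape, not an enforcement mechanism; the claim is that drift prevention belongs in the type of the action, not in a monitor watching its effects.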
Their governance is runtime policy: roles, risk ownership, operational control, continuous authentication against centralized policy decision points. That is necessary. It is not sufficient.
Our governance is a constitution: authority grammar, semantic law, temporal continuity, and coherence under contradiction. The policy layer describes what agents are permitted to do. The constitutional layer governs what the system is permitted to mean.
The document is useful for one thing: external legitimacy. The control vocabulary they have blessed maps directly onto the architecture we built:

- Tic → auditable action event
- Semantic law → policy-bound operational state snapshot
- Terrain → system-theoretic operating environment
- Principal → accountable human/legal authority
- Drift → discrepancy between declared objective and observed behavior
Use their vocabulary at the boundary. Do not let it rename the kernel.
Governing compounding AI is not a cost. When systems move faster than human cognition — and at scale, they will — the constitutional layer is the only thing that lets returns compound over time. The difference between governance as ethics theater and governance as substrate is exactly what the document can see and exactly what it cannot yet name.