Context
The traditional cybersecurity model rests on an implicit assumption: software executes deterministic instructions. Firewalls protect network perimeters, antivirus scans for known malicious code, IAM systems control access and privilege. This model does not cover the class of software that makes its own decisions — AI agents, automated pipelines that adapt behaviour to context, self-organising systems — which is now becoming common in production.
New Attack Surfaces
When software makes autonomous decisions, every decision becomes a potential attack vector. Prompt injection, context manipulation, training data poisoning and unsupervised action chains are threat classes that network firewalls do not intercept.
A typical scenario: an agent with database access receives a document to analyse. Inside the document an attacker has embedded instructions the agent interprets as legitimate commands. The agent extracts sensitive data and includes it in the response, bypassing access controls. This is prompt injection, documented publicly in multiple incident reports over the past two years.
Action Chains
The less visible risk is not a single error — it is the action chain. An agent can execute dozens of operations in sequence, each based on the result of the previous one. An error in the first link propagates and amplifies along the chain. In an interactive workflow a user would notice the anomaly and stop; an autonomous agent, without a governance layer, keeps going.
Rethinking Security
Security for autonomous software requires a new architectural approach:
- Real-time inspection of agent actions
- Firewalls specific to AI protocols (e.g., MCP)
- Threat models that include probabilistic behavior
- Forensic recording of every decision chain
- Granular sandboxing to limit the blast radius of agents
- Automatic kill switches based on behavioral anomalies
The centre of gravity shifts from “who can access” to “what can be done, within what limits, and with what evidence”. This requires different tools, skills and design patterns from those of the network perimeter.
Conclusion
Autonomous software is already in production in many sectors. Security models designed for deterministic software cover only a subset of the actual threat surface, and need to be integrated with layers specific to agent behaviour. OISG proposes a set of adequacy criteria (openness, intelligence, security, governance) that make explicit what an autonomous system must guarantee to be considered verifiable.