InFocus CXOs
“Cyber resilience in the agentic era is not about building higher walls, but about building smarter systems. As autonomous agents take on greater responsibility, organisations must embed governance, accountability, and adaptability into their AI foundations, because resilience is not a feature, it is a design principle.”
In the agentic era, the attack surface is no longer just data; it is judgment.
For decades, cybersecurity was fundamentally a human problem: human adversaries exploiting human error in human-built systems. We understood that paradigm. We built frameworks, trained teams, and deployed technologies against it. But the rise of autonomous AI agents - systems that plan, act, and adapt without direct human oversight - has rewritten the threat model entirely.
At ContrailRisks, our work sits at this inflection point. As organisations race to deploy agentic AI for competitive advantage, we are witnessing a new category of risk emerge: the exploitation of AI trust. Adversarial actors are no longer solely targeting data; they are targeting the decision-making processes of autonomous systems. Prompt injections, manipulated training pipelines, and weaponised model outputs are becoming the new vectors of choice, and the organisations most exposed are those that have deployed intelligence without deploying governance.
This is what I call the “trust inversion problem”. Traditional cybersecurity taught us to verify everything and assume nothing. Zero trust was a revolution in thinking. But AI agents operate differently: they act with delegated authority, make autonomous decisions, and interface with other agents in ways that create cascading chains of accountability. When that trust is compromised, the consequences propagate at machine speed.
The answer is not to slow the adoption of AI. The organisations that pause will be outcompeted. The answer is to build governance architectures that treat agentic systems with the same rigour we apply to human identity and access management. That means AI-native frameworks: real-time behavioural oversight of agent activity, explainability and auditability mechanisms, adversarial red-teaming of AI decision pipelines, and clear accountability chains extending across the supply chain of models, data, and integrations.
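To make the idea concrete, the pattern of treating agent actions like human identity and access management can be sketched as a policy gate with an audit trail. This is a minimal illustration, not a real framework: the policy table, action names, and agent identifiers below are all hypothetical assumptions.

```python
import time
import uuid

# Hypothetical policy table: which actions an agent may take autonomously,
# and which must be escalated to a human. Entries are illustrative only.
POLICY = {
    "read_report": {"allowed": True, "requires_human_approval": False},
    "transfer_funds": {"allowed": True, "requires_human_approval": True},
    "delete_records": {"allowed": False, "requires_human_approval": True},
}

AUDIT_LOG = []  # in production this would be append-only, tamper-evident storage


def authorise_agent_action(agent_id: str, action: str) -> dict:
    """Gate an agent's requested action through policy and record an auditable decision."""
    # Unknown actions default to denied-and-escalated: verify everything, assume nothing.
    rule = POLICY.get(action, {"allowed": False, "requires_human_approval": True})
    decision = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "allowed": rule["allowed"],
        "escalated_to_human": rule["requires_human_approval"],
    }
    AUDIT_LOG.append(decision)
    return decision


# Example: a procurement agent requests a sensitive action.
decision = authorise_agent_action("procurement-agent-01", "transfer_funds")
```

The key design choice mirrors zero trust: the default for any unrecognised action is deny and escalate, and every decision, allowed or not, lands in the audit log so accountability chains stay reconstructable.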
Cyber resilience in the agentic era also demands a fundamental rethink of incident response. When an AI agent is compromised, the blast radius can propagate faster than any human team can react. The containment protocols, communication frameworks, and recovery playbooks that have served us well for years must urgently evolve to operate at the speed and scale of machine intelligence.
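One way to operate at machine speed is automated containment: a circuit breaker that isolates an agent the moment its anomaly rate crosses a threshold, before a human team convenes. The sketch below is an assumption-laden illustration; the window size, threshold, and the notion of an "anomaly signal" are all placeholders for whatever behavioural monitoring an organisation actually runs.

```python
from collections import deque


class AgentCircuitBreaker:
    """Trip containment automatically when an agent's recent anomaly count spikes.

    Hypothetical sketch: window and threshold values are illustrative, and the
    containment step is a flag where a real system would revoke credentials
    and isolate the agent's network access.
    """

    def __init__(self, window: int = 20, max_anomalies: int = 3):
        self.events = deque(maxlen=window)  # sliding window of recent signals
        self.max_anomalies = max_anomalies
        self.contained = False

    def record(self, anomalous: bool) -> bool:
        """Record one behavioural signal; return whether the agent is now contained."""
        self.events.append(anomalous)
        if sum(self.events) >= self.max_anomalies:
            # Containment happens here, at machine speed, with no human in the loop.
            self.contained = True
        return self.contained


breaker = AgentCircuitBreaker()
for signal in [False, False, True, True, True]:
    tripped = breaker.record(signal)
```

Containment fires on the third anomalous signal within the window; the human team's role shifts from first responder to reviewer of an already-isolated agent, which is the rethink of incident response the paragraph above argues for.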
My call to fellow security leaders is clear: the agentic era is not a horizon, it is the present. The CISOs who will define the next decade are those building the governance, resilience, and intelligence infrastructures for a world where the perimeter has dissolved into a network of autonomous systems. Our role is no longer just to defend. It is to ensure that the intelligence we deploy can be trusted, by the business, by regulators, and by the people who depend on it.
The Journey Into Industry
Fabrizio Di Carlo is an accomplished leader and Managing Director of ContrailRisks, where he advises enterprises on digital risk, emerging technologies, and regulatory change. With extensive experience in enterprise security, he partners with boards and executive teams to design resilient, future-ready security architectures.
A recognised expert in AI governance and cyber resilience, he has guided organisations across industries through frameworks such as NIS2, DORA, and ISO 27001, helping them move beyond compliance to embed resilience as a strategic advantage. He specialises in bridging technical security with board-level priorities, with a strong focus on managing risks associated with agentic AI and autonomous systems.