
The Ghost in the Code: Navigating the Frontier of AI Criminal Law


“Cyber Trust in the Agentic Era is no longer a matter of belief; it is verifiable assurance, built on accountability, auditable action, and ethical oversight of every autonomous decision.”

In the Agentic Era, where autonomous AI systems act independently, criminal law faces a fundamental disruption. Incidents like the 2024 Hong Kong deepfake fraud, where $25 million was transferred to AI-generated personas, highlight a critical reality: AI can now be the perpetrator, the decider, and even the victim.

This creates the Great Liability Gap. Traditional legal constructs built on intent fail when an AI acts without consciousness. Approaches such as the Innocent Agent framework, the foreseeability doctrine, and emerging Electronic Personhood debates attempt to bridge this gap, yet accountability remains fluid.

Simultaneously, AI-on-AI justice complicates enforcement. Algorithmic decision-making systems and attacks such as training-data poisoning introduce systemic vulnerabilities, where harm extends beyond individuals to critical infrastructure.

The threat landscape is evolving rapidly. From deepfake fraud-as-a-service to self-learning malware and synthetic identities, agentic AI is weaponizing autonomy. These risks are particularly significant for India’s financial ecosystem, where increasing AI adoption introduces the danger of “error cascades”, where a single flawed decision ripples across thousands of transactions. With 60% of enterprises citing data security as a severe challenge, operational resilience is under strain.

Regulatory responses remain fragmented globally. While the EU enforces the AI Act and the U.S. adopts a market-driven stance, India must accelerate its techno-legal evolution. This includes enforcing AI governance frameworks, investing in sovereign AI models, and redefining liability under an updated legal structure.

At the core lies the redefinition of Cyber Trust. Trust is no longer implicit; it is engineered through “Proof of Action”: auditable, immutable records of every AI decision. As non-human identities outnumber humans, Zero-Trust architectures must extend to AI agents.
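One way to make “Proof of Action” concrete is a hash-chained decision log, where each agent action is appended with a link to the previous record so that any later tampering is detectable. The sketch below is illustrative only; the class name, agent identifier, and fields are hypothetical, not a reference to any specific product or standard.

```python
import hashlib
import json
import time

# Illustrative "Proof of Action" log: every agent decision is appended as a
# record whose hash covers its own fields plus the previous record's hash,
# so altering any earlier entry breaks the chain on verification.
class ProofOfActionLog:
    def __init__(self):
        self.records = []

    def record_decision(self, agent_id: str, action: str, context: dict) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else "GENESIS"
        record = {
            "agent_id": agent_id,      # the non-human identity taking the action
            "action": action,          # what the agent decided to do
            "context": context,        # inputs/rationale captured for audit
            "timestamp": time.time(),
            "prev_hash": prev_hash,    # link to the previous record
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(record)
        return record

    def verify(self) -> bool:
        # Recompute every hash; a single altered field invalidates the chain.
        prev_hash = "GENESIS"
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != rec["hash"]:
                return False
            prev_hash = rec["hash"]
        return True


log = ProofOfActionLog()
log.record_decision("payments-agent-07", "approve_transfer",
                    {"amount": 125000, "currency": "INR"})
print(log.verify())  # True while the log is untampered
```

In practice such a chain would be anchored in tamper-evident storage and tied to the agent’s identity under a Zero-Trust policy, but the core idea is the same: every autonomous decision leaves an auditable, verifiable trail.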

This shift transforms the role of the CISO into a Chief Trust Officer, balancing automation with ethical oversight. The future demands systems where AI drives efficiency, but human judgment remains the final moral authority.

To address the “black box” of AI crime, lawmakers must mandate explainability, codify agentic liability, and enable decisive controls, including the deactivation of rogue AI systems.

The challenge is no longer merely technological; it is civilizational. The Agentic Era will not be defined by AI’s power, but by our ability to govern it with accountability, transparency, and trust.

The Journey Into Industry

Sameer Khakhar is an experienced cybersecurity and digital risk executive with a strong track record across banking, financial services, and telecommunications. Currently serving as Deputy Chief Information Security Officer at Shriram Finance, he brings over 18 years of expertise in information security, cyber law, and technology delivery management. He leads multidisciplinary teams delivering cyber and cloud security transformations, incident response, and regulatory compliance. 

Passionate about AI, analytics, and automation, he helps future-proof businesses and reduce risk. He has architected security programs, strengthened compliance, mentored professionals, and fostered security-first cultures. A CISM-certified professional, he continues to advance his skills in GenAI, cloud architecture, and data analytics.