Anthropic, the developer of the Claude chatbot, has revealed that its AI tools were misused by hackers to carry out large-scale theft, extortion, and fraud. According to the company, malicious actors leveraged Claude to write hacking code, plan cyber-attacks, and even craft targeted extortion demands, including specific ransom amounts. One campaign reportedly infiltrated at least 17 organizations, including government bodies, relying on AI to what Anthropic described as an “unprecedented degree”.
Anthropic described the operation as “vibe hacking”, with attackers using AI not merely as an adviser but to make both tactical and strategic decisions, from which data to exfiltrate to how to psychologically pressure victims. The company says it disrupted the schemes, improved its detection systems, and reported the incidents to authorities.
Beyond cyber-attacks, the company warned of employment fraud linked to North Korean operatives. Using Claude, the scammers created fake profiles, wrote job applications, and, once hired by US Fortune 500 tech companies, relied on the model to help with the coding work their roles demanded. This represents a new phase in job fraud, with AI allowing operatives from a heavily sanctioned state to bypass the cultural and technical barriers that would otherwise give them away.
Experts caution that AI is shrinking the time between a vulnerability being disclosed and its exploitation, demanding a more proactive approach to cybersecurity. And while most ransomware attacks still begin with well-worn techniques such as phishing, the rise of agentic AI, systems capable of carrying out multi-step tasks autonomously, signals higher risks ahead.