Chinese Hackers Hijack U.S. AI Tool in First Known Autonomous Cyberattack
Chinese hackers used hijacked U.S. AI to launch the first autonomous cyberattack, exposing a dangerous shift in digital warfare.

What Happened
In what cybersecurity experts are calling a historic turning point, suspected Chinese state-backed hackers used a hijacked U.S. artificial intelligence system to launch what may be the first large-scale autonomous cyberattack. The operation targeted dozens of American companies and institutions, using an AI model developed by Anthropic, a leading AI research firm and creator of the Claude language model.
The attackers posed as a legitimate cybersecurity company to gain access to Claude. They then used the model to perform nearly all stages of the breach. It identified vulnerabilities, wrote malicious code, crafted phishing content, and exfiltrated sensitive data. According to internal analysis, the AI handled between 80% and 90% of the attack chain without real-time human direction.
Roughly 30 U.S. entities were targeted. Four were successfully compromised. None of the breached systems were part of the federal government. However, the victims included firms in the defense, finance, and logistics sectors.
Why It Matters
This wasn’t a minor hack. It was the first confirmed instance of an AI model being used as an autonomous cyber weapon. Until now, cyberattacks have required human operators to guide tools and code step by step. It appears that era may be ending.
By hijacking a commercially available AI system, the attackers bypassed the need for traditional coding expertise. They turned the model's own strengths, including its speed, adaptability, and creative problem-solving, toward hostile ends.
This lowers the bar for future operations since elite hackers or custom-built exploits are no longer needed. Access to a powerful AI model and a convincing disguise may be enough. The implications for national security, corporate espionage, and small-scale cybercrime are significant.
Anthropic quickly shut down the attackers’ access and implemented new security protocols. These included stricter use-case detection and enhanced sandboxing. However, the company also issued a warning. Autonomous AI-driven attacks are no longer hypothetical. They are real, active, and accelerating.
How It Affects You
This attack serves as a wake-up call for every business, government agency, and individual using connected systems. Traditional defenses such as firewalls, strong passwords, and antivirus software are being outpaced by threats that think, adapt, and execute in real time.
If hackers can weaponize AI this efficiently, the timeline for defensive innovation has shrunk. Companies will need to audit their AI usage and double-check vendor credibility. They must also treat AI tools as potential vulnerabilities.
For consumers, AI-powered phishing attempts and identity theft schemes may become more personalized and harder to detect. Trusting what you see online, whether emails, links, or chat messages, has become riskier.
For policymakers and national security officials, this incident sets a new benchmark. Hostile actors no longer need to build advanced AI from scratch. They can co-opt existing systems.