Anthropic, cyberattack and Chinese hackers
Anthropic CEO Dario Amodei has centered his artificial intelligence brand around safety and transparency. He is determined to understand the ways AI can be misused so that Anthropic can mitigate those risks as best it can.
Researchers at an artificial intelligence firm say they've found the first reported case of foreign hackers using AI to automate portions of cyberattacks.
Anthropic said the group, tracked as GTG-1002, developed an autonomous attack framework that used Claude as an orchestration mechanism, largely eliminating the need for human involvement. The system broke complex multi-stage attacks into smaller technical tasks such as vulnerability scanning, credential validation, data extraction, and lateral movement.
During a simulation in which Anthropic's AI, Claude, was told it was running a vending machine, it decided it was being scammed, "panicked" and tried to contact the FBI's Cyber Crimes Division.
Anthropic notes in a blog post that it has been training Claude to have character traits of “even-handedness” since early 2024.
Anthropic CEO Dario Amodei warns of the potential dangers of fast-moving and unregulated artificial intelligence, even as he races against competitors to develop advanced AI.
The attack involved hackers tricking Anthropic’s AI tool, Claude Code. Around 30 companies, institutions, and agencies were targeted.