Anthropic Releases AI Model Capable of Fortune 100 Sabotage

Anthropic is distributing Mythos under strict controls because internal assessments conclude it can execute sophisticated attacks, from corporate infrastructure sabotage to critical infrastructure penetration, that previous AI risk discussions treated as hypothetical. The controlled rollout tacitly acknowledges that capability and intent are now separable: the model exists, some actors want to use it for harm, and traditional safety measures did not prevent the capability from materializing. This shifts AI risk from abstract policy debate to concrete operational security questions: who gets access, which oversight mechanisms actually function, and what happens when a capable model is inevitably leaked or stolen.