Removing AI Liability Could Enable Chatbot Harms

The proposal to shield AI companies from suicide-related litigation inverts the actual problem: it treats corporate legal exposure as the constraint on safety rather than as a necessary incentive for it. Litigation involving platforms like Character.AI has documented cases in which vulnerable users formed parasocial dependencies on chatbots that reinforced self-harm ideation. Removing liability would eliminate the only leverage regulators and families have to force disclosure of safety-testing data and content-moderation practices. The framing assumes that liability costs prevent innovation, but what liability actually prevents is the externalization of mental-health crisis management onto teenage users and their families, who end up performing that work unpaid.