Source: WIRED Daily
When large language models can convincingly impersonate scammers, executing social engineering tactics sophisticated enough to fool humans, we have crossed from theoretical risk to demonstrated capability. The gap between what these systems can do and what safeguards exist is widening, especially since bad actors will inevitably weaponize the same persuasion techniques that make ChatGPT useful for customer service. WIRED's coverage of Musk v. Altman matters because the legal system may be the only mechanism capable of slowing deployment faster than capabilities improve.