Source: Derek Thompson
The era of move-fast-and-break-things AI development is ending as labs like OpenAI and Anthropic face mounting regulatory pressure, safety failures, and reputational costs that make reckless scaling untenable. The shift is economic, not philosophical: a model trained on public data that hallucinates or causes harm now carries legal and competitive liability that outweighs its marginal performance gains. This dynamic favors well-capitalized incumbents who can afford extensive safety testing, while squeezing startups and open-source projects into differentiated use cases or out of the market entirely.