Source: Slow Boring
AI vendors amplify existential risk narratives because apocalyptic framing justifies massive R&D budgets, regulatory capture, and venture-scale returns in a way that incremental progress stories cannot. Emphasizing AGI timelines and extinction scenarios over practical near-term applications is rational corporate strategy. The gap between AI capabilities and AI rhetoric will persist as long as fear-based narratives extract more resources and regulatory protection than honest uncertainty would.