// regulation/policy


Sovereign AI Ambitions Crash Into Government Deployment Reality

Governments are announcing sovereign AI initiatives faster than they can build working systems. The gap between announcement and deployment exposes how much of the "AI sovereignty" narrative is political theater rather than technical strategy. The bottleneck isn't capability. Agencies lack the institutional structures, talent pipelines, and procurement frameworks to move from pilot projects to operational systems at scale. At the same time, they face pressure to prove independence from US or Chinese tech platforms, a constraint that inflates timelines and costs. This creates a choice: governments either commit serious resources and patience to build defensible AI infrastructure, or they continue announcing initiatives that stall during integration.

AI now powers 86% of phishing campaigns tracked by KnowBe4

The industrialization of phishing through generative AI is now the operational baseline. When a security vendor finds that the overwhelming majority of active phishing uses AI, it means attackers have solved the scale problem: personalization, linguistic fluency, and psychological targeting no longer require human expertise or effort, just API access. This collapses the cost and skill floor for phishing while making detection harder for humans and for traditional security tools trained on older attack patterns.

News Publishers Block Wayback Machine to Starve AI Training

Major outlets including the New York Times, CNN, and The Guardian are using robots.txt files to prevent the Internet Archive from crawling and archiving their content, directly targeting the historical corpus that AI companies have relied on for training data. Publishers are moving from legal posturing to technical countermeasures: rather than waiting for litigation outcomes, they are actively degrading the information commons that enabled the current AI boom. The shift exposes a real constraint on AI development: when training data sources dry up through coordinated publisher action rather than natural scarcity, models built on historical web text become harder to improve. This could accelerate the race toward licensed data partnerships and proprietary training datasets.
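
The mechanism here is an ordinary robots.txt directive, and it can be checked from the outside with Python's standard library. The sketch below is illustrative only: it assumes the Wayback Machine's crawler honors the commonly cited ia_archiver user-agent token, and the example URLs are simply the outlets named above, not a claim about their current files.

```python
# Minimal sketch: test whether a publisher's robots.txt disallows a crawler.
# "ia_archiver" is assumed to be the Internet Archive's user-agent token.
from urllib.robotparser import RobotFileParser


def wayback_blocked(site: str, user_agent: str = "ia_archiver") -> bool:
    """Return True if the site's robots.txt disallows user_agent from the root path."""
    base = site.rstrip("/")
    parser = RobotFileParser()
    parser.set_url(f"{base}/robots.txt")
    parser.read()  # fetch and parse the live robots.txt file
    return not parser.can_fetch(user_agent, f"{base}/")


if __name__ == "__main__":
    # Example outlets from the signal above; requires network access.
    for site in ("https://www.nytimes.com", "https://www.theguardian.com"):
        status = "disallows" if wayback_blocked(site) else "allows"
        print(f"{site} {status} ia_archiver")
```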

Big Tech's Carbon Credits Come From Engineered Trees

Octopus Energy Generation's $500 million bet on Living Carbon's genetically modified trees shows how corporate climate commitments are increasingly outsourced to speculative biotech rather than met by reducing actual energy consumption. The arrangement lets data centers and heavy industry claim carbon neutrality without operational change, yet the model depends on unproven carbon sequestration technology achieving both scale and permanence. Funding experimental forestry is cheaper than redesigning power-intensive infrastructure or buying renewable energy at market rates. This positions carbon credits as a substitute for decarbonization, not a complement to it.

Why AI Companies Keep Training on Unlicensed Music

The economics of AI model training create a structural incentive to use copyrighted music without permission—the cost of licensing at scale is prohibitive, while the enforcement mechanisms remain scattered across fragmented rights holders and underfunded legal systems. As generative music tools become commercially viable, the situation echoes the MP3-era arbitrage where technical capability outpaced legal remedies, except this time the stakes involve entire creative professions rather than distribution chains. The pressure point isn't moral suasion but licensing infrastructure: whoever builds the first efficient, statutory solution for clearing training data rights at scale will alter both AI development and music industry economics.

Billionaire-Backed AI Disinformation Campaign Targets News Organizations

Nayib Bukele's investment in AI-generated deepfakes designed to discredit journalists operationalizes synthetic media as a political weapon. When oligarchs can commission convincing fake videos of reporters, the cost of maintaining journalistic credibility spikes, shifting competitive advantage toward outlets with the institutional resources to authenticate their work, or toward those willing to abandon investigative reporting. The capability is operational now, deployed with clear upstream funding and deliberate strategic design to erode trust in specific news organizations.