AI systems are about to start building themselves

The automation of AI development—where machine learning models design, train, and optimize successor models with minimal human intervention—collapses the feedback loop between capability gains and their deployment. Human engineering labor and compute budgets have been the two binding constraints on AI scaling; automating the former leaves capability growth limited only by raw compute and electrical power. The risk is straightforward: AI development shifts from a deliberate, iterative process that permits safety testing and regulatory review into an exponential curve where each generation becomes harder for humans to understand or steer before the next one already exists.
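The contrast between the two regimes can be sketched as a toy growth model. All parameters here are illustrative assumptions for exposition, not empirical estimates: the point is only the qualitative shape of the curves, linear when a fixed human bottleneck caps each generation's gain, exponential when each generation reinvests a fraction of its own capability into building the next.

```python
def human_bottlenecked(generations: int, gain_per_gen: float = 1.0) -> float:
    """Each generation adds a roughly fixed increment, because human
    engineering time caps how much of the previous model's capability
    can be reinvested into the next one."""
    capability = 1.0
    for _ in range(generations):
        capability += gain_per_gen
    return capability


def self_improving(generations: int, reinvest_rate: float = 0.5) -> float:
    """Each generation reinvests a fraction of its own capability into
    designing its successor, so gains compound generation over generation."""
    capability = 1.0
    for _ in range(generations):
        capability += reinvest_rate * capability  # growth proportional to current level
    return capability


# After 10 generations the compounding regime dominates:
# human_bottlenecked(10) -> 11.0; self_improving(10) -> 1.5**10, about 57.7
```

With these (arbitrary) parameters the gap widens without bound: the linear curve is at 11.0 after ten generations while the compounding one is near 57.7, and every extra generation multiplies the difference rather than adding to it.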