Pentagon races to automate lethal targeting decisions

The U.S. military is systematizing autonomous kill chains, in which AI selects targets and executes strikes with minimal human intervention, rather than treating them as edge cases. This is operational doctrine being built into weapons systems now, which defers the practical problems (misidentification, civilian casualties, breakdowns in command) until after deployment. At stake is whether humans retain meaningful control over decisions to kill, and who is accountable when that control fails.