The OpenAI Power Problem Nobody Can Solve

Sam Altman's near-total control of OpenAI's direction, reinforced by his return after the brief November 2023 ouster and the subsequent departure of the board members who challenged him, has created a governance vacuum that neither internal dissent (such as Sutskever's failed memo campaign) nor external scrutiny meaningfully constrains. The company's board structure, its dependence on Altman for fundraising and vision alignment, and the absence of meaningful stakeholder representation mean that its trustworthiness rests less on personal virtue than on institutional design. Whether concentrated power over AI systems gets checked is a structural question, not a character one.

This matters because OpenAI's actual product decisions, from training-data sourcing to the depth of safety testing to deployment speed, flow directly from one person's risk tolerance, and shareholders, employees, and regulators currently lack the levers to redirect them.