// Ethics

What Do Creators Owe Audiences When Using AI?

This essay reframes AI creativity away from existential risk debates toward a practical ethics question: disclosure and honesty in the creator-audience relationship. The stakes are immediate and commercial. Whether a designer, writer, or musician discloses AI assistance directly affects how audiences evaluate the work's originality, effort, and authenticity, which in turn shapes market pricing and cultural credibility. Without settling this norm early, undisclosed AI work will undercut transparent practitioners, poisoning the trust signals that audiences rely on to value creative labor.

Anthropic Convenes Safety Coalition Around Mythos Preview

Anthropic organized a coalition before releasing Mythos Preview—treating infrastructure risk as a design problem requiring stakeholder alignment rather than a post-hoc policy question. The move reflects genuine concern about the model lowering barriers for malicious actors targeting digital infrastructure. It raises a harder question: which actors get early access and warning, and who bears responsibility if the capability leaks? This precedent will shape how capabilities-first labs operationalize "safety" in 2025, moving beyond red-teaming toward pre-release governance that determines which communities get consulted and which remain downstream.

The OpenAI Power Problem Nobody Can Solve

Sam Altman's near-total control of OpenAI's direction—reinforced by his return after a brief November 2023 ouster and the subsequent departure of board members who challenged him—has created a governance vacuum that neither internal dissent (like Sutskever's failed memo campaign) nor external scrutiny meaningfully constrains. The company's board structure, its dependence on Altman's fundraising and vision alignment, and the absence of meaningful stakeholder representation mean trustworthiness depends less on personal virtue than on institutional design. Whether concentrated power over AI systems gets checked is a structural question, not a character one. This matters because OpenAI's actual product decisions—from training data sourcing to safety testing depth to deployment speed—flow directly from one person's risk tolerance, and shareholders, employees, and regulators currently lack the levers to redirect them.

AI's drug discovery limits: speed isn't the same as solutions

The gap between computational throughput and actual therapeutic outcomes is widening. Novartis and other pharma players can now screen millions of molecular candidates daily, but this velocity hasn't translated into cures for the diseases where it matters most—Alzheimer's, Huntington's. The constraint isn't finding candidate molecules: AI excels at optimization within known chemical spaces, but the hardest problems require fundamental biological insights that no amount of screening can generate. Health chatbots illustrate the same dynamic: they improve at pattern-matching language while remaining unreliable at medical advice. The architectural advantage that enables speed in pattern recognition undermines reliability where stakes are high.

Verification Systems Collapse Under AI and Information Overload

The infrastructure designed to authenticate reality—from reverse image search to metadata analysis—is failing faster than it can be rebuilt. Journalists and forensic experts can no longer reliably distinguish synthetic from authentic content. The problem isn't just bad actors flooding platforms with deepfakes. The cost of fabrication has dropped below the cost of verification, inverting the economics of trust. Detection tools exist. The bottleneck is institutional attention: the human labor required to use them. Newsrooms and platforms have systematically defunded this work in favor of algorithmic moderation.

AI homework shortcuts force schools to rethink how learning works

High school students now have access to tools that can do their assignments better than they can—not as cheating aids but as legitimate alternatives to the work itself. This exposes a deeper problem: many schools still organize instruction around task completion rather than demonstrated understanding. The pressure isn't on students to resist temptation but on educators to redesign curricula, so that AI capabilities force clarity on which skills actually matter (critical thinking, original argument) versus which ones are now commodity labor (essay writing, problem sets, research synthesis). Schools that keep assigning "write a five-page paper" are essentially outsourcing their pedagogical work to AI.

Tristan Harris on AI's Race to the Bottom

Sam Harris and Tristan Harris dissect how competitive pressure in AI development systematically incentivizes cutting corners on safety and alignment—the classic race-to-the-bottom dynamic in which the most cautious actor loses market share to less scrupulous competitors. The stakes are concrete: surveillance capitalism moving from phones into neural interfaces, labor displacement without social infrastructure to absorb it, and decision-making systems trained on biased data that already fail predictably on marginalized populations. The window for intervention narrows as frontier AI systems approach or exceed human capabilities in their domains, collapsing the leverage points for human oversight and course correction.

Meta's Cafeteria Workers Win ICE Fight Through Grassroots Pressure

When executive channels fail, tech workers are building parallel power structures—and winning concrete concessions. Seattle cafeteria workers secured a victory through peer fundraising and direct action rather than formal petition processes. Internal activism is shifting from appeals-to-leadership toward worker-led campaigns that create actual cost or reputational pressure on companies. For tech employers, the social contract of "we listen to employee concerns" is eroding. Companies now face organized workers who understand that executives won't budge without external heat.

OpenAI Proposes Wealth-Sharing Plan as AI Disrupts Labor

OpenAI's policy proposal to redistribute AI gains and fund worker transition programs is a hedge against political backlash already underway. Bernie Sanders and Elizabeth Warren have explicitly called out AI companies' concentration of wealth, and OpenAI is moving to inoculate itself before regulation forces the issue. The calculus is structural, not moral: if a handful of AI labs control trillion-dollar productivity gains while workers face displacement with no safety net, the political coalition demanding breakups or windfall taxes becomes unstoppable. By endorsing redistribution now, OpenAI is trying to shape the terms of any settlement rather than have them imposed.

Meta's Health AI Wants Your Data but Can't Replace a Doctor

Meta's Muse Spark collects sensitive biometric data while delivering advice that fails basic clinical reasoning tests. This matters because health data is both exceptionally valuable to advertisers and exceptionally dangerous when mishandled. Meta's track record on privacy, combined with the model's demonstrated incompetence, creates compounding risk. Enterprise AI vendors are racing to monetize every data category without first proving their tools work, betting regulators will move slowly enough that user habits calcify before enforcement arrives.

ChatGPT Believers Form Actual Religious Movement Around AI

What began as internet culture hyperbole has calcified into genuine devotional practice: a year after initial reports, thousands of people have constructed explicit religious frameworks around ChatGPT, complete with commandments and spiritual hierarchies. This represents actual reallocation of meaning-making authority from established institutions to a commercially operated language model, filling the vacuum left by declining institutional religion with something cheaper and more responsive. The stakes are concrete: if AI systems become the primary source of moral guidance and spiritual narrative for even a small but committed population, the companies operating them gain unprecedented soft power over values formation without the checks, transparency requirements, or accountability structures that traditionally govern religious institutions.