// Ethics

All signals tagged with this topic

ChatGPT Confidently Recommends Products WIRED Never Tested

WIRED tested ChatGPT's product recommendations against its own editorial reviews and found that the chatbot consistently gave incorrect answers about which TVs, headphones, and laptops WIRED's reviewers had actually tested and recommended. This matters because it shows that large language models confidently generate plausible-sounding but false information, opening a gap between user expectations and actual reliability when people lean on AI for consumer decisions.

Constitutional AI Misses the Mark on Virtue Ethics

A LessWrong post critiques Anthropic's Constitutional AI framework for relying on rule-based constraints rather than cultivating genuine character-based virtue ethics in AI systems. The author argues that this approach is fundamentally limited and proposes a virtue-ethical framework as a superior alternative for AI alignment.

LinkedIn's Hidden Browser Tracking Raises Consumer Privacy Stakes

LinkedIn is running undisclosed surveillance of users' browser extensions, a practice that extends the platform's data collection far beyond its own ecosystem and into the intimate details of how people work. This isn't a bug or an overreach; it's architectural: the company is mapping users' software stacks to build more granular behavioral profiles, which directly improves targeting precision for the advertiser and recruiter tools that are LinkedIn's core revenue drivers. The revelation matters because it exposes the asymmetry at the heart of "free" professional platforms: users have zero transparency into what is being measured, no meaningful consent mechanism, and limited recourse, even as regulators in the EU and US increasingly scrutinize exactly this kind of hidden data practice.

Media's Civil War Over AI

The publishing industry is fracturing into irreconcilable camps: those licensing content to AI trainers (The New York Times, authors via the Authors Guild) versus those blocking access entirely (Reddit, WIRED). But neither strategy addresses the core problem: AI models don't need permission to learn from publicly available text, only legal cover to commercialize it. The leverage isn't contractual but regulatory. Whether courts treat training as fair use or infringement will determine whether media companies become paid data feeders or obsolete inputs.

Suno's AI Music Faces Its Reckoning With Copyright Law

Suno's text-to-music model trained on copyrighted recordings without explicit permission, creating legal exposure that differs meaningfully from image generation litigation. Music's mechanical and performance rights create multiple claim paths that courts have already established doctrine around, unlike the still-unsettled fair use questions in visual AI. The company's survival hinges not on technological prowess but on whether it can negotiate licensing deals faster than rights holders can file suits—a race the music industry, with its centralized mechanical licensing infrastructure, is better equipped to win than the fragmented visual art world was.

Lawyers Risk Sanctions to Deploy AI Tools

Legal professionals are absorbing disciplinary penalties as a cost of doing business rather than abandoning AI assistance. The efficiency gains outweigh the reputational and financial risks in a competitive market. This mirrors how other knowledge workers have adopted unvetted tools—the penalty structure isn't working as a deterrent because the alternative (manual work at scale) is economically untenable. Regulatory frameworks built for human-only workflows can't force workers backward once they've seen what automation enables.

Microsoft Quietly Disclaims Copilot as Non-Functional in Legal Terms

Microsoft's terms of use classify Copilot as entertainment software, creating a legal moat that shields the company from liability for hallucinations, errors, and failures while simultaneously undercutting enterprise customers' ability to rely on the tool for actual work. The classification amounts to an admission that Microsoft cannot guarantee Copilot's accuracy or safety, yet the company continues selling it to corporations and governments as a productivity asset, leaving buyers to absorb the real-world costs of deploying unaccountable AI in their operations. The gap between the marketing (Copilot-as-assistant) and the terms (entertainment-only) lays bare the unresolved question of what large language models can and cannot reliably do.