// AI & ML

OpenAI Reframes AI Safety as User Responsibility

OpenAI's latest positioning shifts the burden of "responsible AI use" onto end users and away from the company's own product design and deployment choices. By casting safety as a social contract issue, essentially a terms-of-service matter, the company can maintain aggressive release schedules and broad API availability without substantively changing how its models work or who can access them. This mirrors Big Tech's playbook of treating regulatory and ethical concerns as communication problems rather than engineering constraints. Policymakers and enterprise customers will likely adopt similar framings when evaluating AI risk.

Real Estate Photographers Face Unexpected Copyright Liability

Real estate photographers operate in a legal gray zone: they can be held liable for reproducing copyrighted architectural elements in their images, a risk most neither insure against nor know exists. The open question is whether architectural features, interior design choices, or furniture constitute protectable creative works that photographers are reproducing without license. If so, liability shifts from the property owner to the image maker. This creates a structural problem in the gig economy, where individual contractors absorb legal risk that larger production companies would negotiate away through licensing agreements or indemnification clauses.

South Korea Deploys ChatGPT Robots to Address Elderly Care Shortage

With over 20% of South Korea's population now over 65, the country is treating AI-powered robotics as infrastructure rather than experimentation—a pragmatic response to a demographic crisis that most wealthy nations are still debating philosophically. This matters because it shows which countries will absorb the labor cost of aging populations through automation versus immigration or public spending, establishing de facto policy through procurement decisions rather than legislation. The question isn't whether the robots work, but whether this becomes a template other East Asian economies copy, potentially locking in a lower-cost care model that undercuts wage-dependent alternatives in Europe and North America.

OpenAI Pitches Tax Hikes and Public AI Funds to Pay for Superintelligence

OpenAI is attempting to preempt harsher regulation by proposing its own fiscal framework, including higher capital gains taxes, a sovereign wealth-style AI fund, and expanded social safety nets, before governments impose far stricter constraints on the industry. It's a classic defensive maneuver: by offering a palatable middle path that acknowledges the concentration of AI wealth while preserving private incentives, OpenAI hopes to shape the political settlement around AGI rather than cede the conversation to antitrust hawks or socialist regulators. The move signals real anxiety that unfettered AI deployment could trigger a backlash severe enough to reshape corporate tax policy and governance, making the pitch as much a bet on technocratic credibility as on the merits of the proposals themselves.

ChatGPT Confidently Recommends Products WIRED Never Tested

WIRED tested ChatGPT's product recommendations against its own editorial reviews and found ChatGPT consistently provided incorrect answers about which TVs, headphones, and laptops WIRED's reviewers actually tested and recommended. This matters because it demonstrates that large language models confidently generate plausible-sounding but false information, creating a gap between user expectations and actual reliability when relying on AI for consumer decisions.
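
The gap is straightforward to audit in miniature. The sketch below uses invented product names and a hypothetical `verify_recommendations` helper rather than any real WIRED data; it shows only the shape of such a check, diffing the products a model claims a publication recommended against the publication's actual testing records.

```python
# Minimal sketch of the audit WIRED's test implies: compare the products an
# LLM claims a publication recommended against its real testing records.
# All product names and data here are invented for illustration.

TESTED = {"Sony WH-1000XM5", "LG C4 OLED", "MacBook Air M3"}  # hypothetical ground truth

def verify_recommendations(claimed: list[str], tested: set[str]) -> dict[str, list[str]]:
    """Split an LLM's claimed picks into verified and unsupported items."""
    verified = [p for p in claimed if p in tested]
    unsupported = [p for p in claimed if p not in tested]  # candidate hallucinations
    return {"verified": verified, "unsupported": unsupported}

if __name__ == "__main__":
    llm_claims = ["LG C4 OLED", "Samsung QN90D"]  # the second was never tested
    print(verify_recommendations(llm_claims, TESTED))
    # -> {'verified': ['LG C4 OLED'], 'unsupported': ['Samsung QN90D']}
```

In practice the hard part is entity matching, since model output rarely names products verbatim, which is why checks like this still end in human review.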

Constitutional AI Misses the Mark on Virtue Ethics

A LessWrong article critiques Anthropic's Constitutional AI framework for relying on rule-based constraints rather than cultivating genuine character-based virtue ethics in AI systems. The author argues that the rule-based approach is fundamentally limited and proposes a virtue-ethical framework as a superior path to AI alignment.
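
For context, the "rule-based constraints" at issue are Constitutional AI's critique-and-revise loop, in which a model checks its own output against written principles. The sketch below is a loose illustration of that loop under my reading, not Anthropic's actual implementation; the principles and the `complete()` function are stand-ins.

```python
# Loose illustration of a Constitutional AI-style critique-and-revise loop.
# Not Anthropic's implementation; the principles and complete() are stand-ins.

CONSTITUTION = [
    "Identify any ways the response is harmful, unethical, or dishonest.",
    "Identify any ways the response fails to be helpful to the user.",
]

def complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real client. Stubbed so the sketch runs."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    """Generate a response, then critique and rewrite it against each written rule."""
    response = complete(user_prompt)
    for principle in CONSTITUTION:
        # Each principle acts as an external check applied to the output...
        critique = complete(f"Response:\n{response}\n\nCritique request: {principle}")
        # ...followed by a rewrite that addresses the critique.
        response = complete(
            f"Response:\n{response}\n\nCritique:\n{critique}\n\n"
            "Rewrite the response to address the critique."
        )
    return response
```

The article's objection maps directly onto this structure: each principle is an external filter applied to outputs after the fact, whereas a virtue-ethics approach would aim to shape the model's underlying dispositions so that no such filter is needed.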