Anthropic's Claude rebellion. GPT-6 goes dark on launch day. Novo Nordisk bets big on OpenAI. Stanford says China closed the gap. And someone tried to kill Sam Altman. Eight stories from the week AI got personal.
Anthropic's power users are in open revolt over what they call a deliberately degraded Claude. OpenAI's GPT-6 missed its rumored April 14 launch, and the realistic release window is narrowing fast. Novo Nordisk handed OpenAI the keys to its entire drug discovery pipeline. Stanford's AI Index confirmed what the industry feared: China has nearly erased America's model performance lead. And a 20-year-old traveled from Texas to San Francisco with kerosene and a plan to kill the CEO of OpenAI. Eight stories from the week agentic AI got deeply, uncomfortably personal.
Anthropic's Most Loyal Users Say Claude Got Worse.
The company that built its brand on transparency is facing a credibility crisis of its own making. Across GitHub, X, and Hacker News, Claude Code power users are documenting what they call a systematic performance collapse: more syntax errors, shallow reasoning on multi-file tasks, and a model that feels "lazier" on complex workflows. An AMD senior director wrote that "Claude has regressed to the point it cannot be trusted to perform complex engineering." A Microsoft researcher called recent sessions "extremely sloppy." The root cause, confirmed by Anthropic's own Boris Cherny: the company quietly reduced Claude's default "effort" level to medium in March to economize on tokens. The backlash isn't just about quality. It's about the gap between Anthropic's transparency brand and a change users discovered only after their workflows broke.
"Claude has regressed to the point it cannot be trusted to perform complex engineering."
Anthropic's moat was never just model quality — it was trust. Quietly degrading performance and letting users discover it through broken code is the fastest way to drain that moat. The fix is straightforward: give users explicit effort controls and be transparent about compute tradeoffs. The damage to brand credibility is harder to repair.
Novo Nordisk Just Gave OpenAI the Keys to Its Drug Pipeline.
The world's most valuable pharma company struck the most ambitious AI partnership in the industry's history. On April 14, Novo Nordisk announced a strategic deal with OpenAI to deploy AI across its entire operation — from drug discovery to manufacturing to commercial distribution. Pilot programs launch immediately across R&D, manufacturing, and commercial ops, with full integration by year-end. OpenAI will also upskill Novo's global workforce. The context: Novo is locked in a weight-loss drug war with Eli Lilly and needs to accelerate its pipeline to defend Wegovy's market share. This isn't an innovation experiment. It's a competitive weapon deployed under existential pressure, with strict data governance baked in from day one.
This is the deal that proves OpenAI's enterprise strategy is working. When a $570B pharma giant hands you its drug pipeline under competitive duress, you've graduated from vendor to strategic infrastructure. Watch for Eli Lilly's counter-move within 90 days — it'll likely be an Anthropic or Google deal.
Anthropic Rebuilt Claude Code and Launched Cloud Agents.
On the same day its users were publicly rebelling over quality, Anthropic shipped its most ambitious product update yet. The Claude Code desktop app got a ground-up redesign on April 14: multi-session support in a single window, integrated terminal, faster diff viewer for large changesets, and an in-app file editor with expanded preview. But the real story is "Routines" — saved Claude Code configurations that run on Anthropic's cloud infrastructure, triggered on schedule or by events, executing even when your laptop is off. Pro users get 5 per day, Max gets 15, Team and Enterprise get 25. Anthropic calls them "dynamic cron jobs powered by AI agents." The timing is either ironic or strategic: launching the tool that makes Claude indispensable on the same day trust in Claude hit its lowest point.
Routines transform Claude Code from a developer tool into developer infrastructure — agents that run in the background, on a schedule, on Anthropic's cloud. That's a fundamentally different product category. If the quality issues get resolved, this is the feature that locks enterprises in.
A Man Tried to Kill Sam Altman Over AI Fears.
At 4 a.m. on April 10, Daniel Moreno-Gama, 20, threw an incendiary device at Sam Altman's San Francisco home, setting the exterior gate on fire. An hour later, he showed up at OpenAI headquarters with kerosene and a lighter, smashing glass doors with a chair. He was arrested on site. He now faces federal and state charges including attempted murder, with domestic terrorism counts under consideration. Moreno-Gama had traveled from Spring, Texas, and had written extensively about AI's existential risk to humanity. No one was injured. Hours later, Altman posted a photo of his husband and toddler, writing that he hoped it might "dissuade the next person." The online response split sharply along generational lines — a signal that anti-AI sentiment has metastasized from policy debate into something far more volatile.
This is a watershed moment. Anti-AI anxiety has escalated from online discourse to physical violence targeting named executives. Every AI lab now faces a security calculus that didn't exist six months ago. The industry's response — more transparency, more public engagement, more visible safety work — just became urgent infrastructure, not optional PR.
Stanford's AI Index Confirms What Everyone Suspected.
The 2026 AI Index Report from Stanford HAI, published April 13, delivered the most comprehensive snapshot of where AI actually stands — and it's not where American exceptionalism assumed. The US-China model performance gap has narrowed to near parity, with DeepSeek and Alibaba models trailing Anthropic's lead by a margin that's essentially noise. AI adoption is outpacing both the personal computer and the internet, reaching broad population penetration in just three years. SWE-bench coding scores jumped from 60 to nearly 100 in a single year. But the trust data is sobering: only