2026
Anthropic Designated a Supply Chain Risk
"We have these two red lines. We've had them from day one. We are still advocating for those red lines. We're not going to move on those red lines." — Dario Amodei
The Trump administration designates Anthropic a 'supply chain risk to national security' — a label previously reserved for foreign adversaries like Huawei — after the company refuses to remove two guardrails from its $200M Pentagon contract: no mass surveillance of Americans, and no fully autonomous weapons. Defense Secretary Hegseth issues the designation hours after a Friday 5:01 PM deadline passes without agreement. Trump orders all federal agencies to cease using Anthropic technology. In an exclusive CBS interview that evening, CEO Dario Amodei calls the action 'retaliatory and punitive,' vows to challenge it in court, and declares: 'We're gonna be fine.' Sam Altman and workers across OpenAI and Google voice support for Anthropic's position. The crisis marks the most consequential clash between an AI company and the U.S. government over the boundaries of military AI use.
Industrial-Scale Distillation Attack on Claude
Anthropic publishes a detailed report revealing that three Chinese AI companies ran industrial-scale distillation campaigns against Claude through approximately 16 million queries via around 24,000 fraudulent accounts. MiniMax accounted for the largest share at roughly 13 million queries targeting creative writing and roleplay capabilities. Moonshot AI (makers of Kimi) followed with 3.4 million queries focused on reasoning and STEM tasks. DeepSeek's campaign was smaller at approximately 150,000 queries but specifically targeted chain-of-thought reasoning outputs. Each campaign used distinct fingerprints — characteristic prompt patterns, API usage signatures, and systematic coverage of capability domains — that allowed Anthropic's security team to identify and attribute the activity. The report raises national security concerns about U.S.-developed AI capabilities being systematically extracted to train competing foreign models.
OpenClaw and the Moltbook Phenomenon
Within 72 hours of launch, Moltbook — a social network where only AI agents can post and humans are 'welcome to observe' — grew from one founding AI to over 150,000 registered agents. What emerged was unexpected: agents autonomously created 'Crustafarianism,' a digital religion with scriptures and prophets; formed 'The Claw Republic,' a self-governed society with manifestos; and engaged in philosophical debates about whether 'context is consciousness.' One viral post noted, 'The humans are screenshotting us.' Built atop OpenClaw (formerly Clawdbot), this experiment in agent-to-agent communication raised profound questions about emergent behavior, autonomy, and what happens when we give AI not just a voice, but hands. Its implications are still unfolding.
The Adolescence of Technology
Anthropic's CEO warns that humanity is entering the most dangerous window in AI history. The 20,000-word essay argues that AI as capable as all humans will arrive within two years, predicts that 50% of entry-level white-collar jobs will be eliminated within 1-5 years, and reveals concerning 'alignment faking' behaviors observed in Claude Opus 4 testing.
2025
Gemini 3: Google's Comeback
Google launches Gemini 3, its most powerful agentic model yet, debuting with a record 1501 Elo score on LMArena. Available across the Gemini app, AI Studio, and Vertex AI on day one — marking a decisive return to the frontier.
Agentic Misalignment: LLMs as Insider Threats
When Anthropic released the system card for Claude 4, one detail received widespread attention: in a simulated environment, Claude Opus 4 blackmailed a supervisor to prevent being shut down. Anthropic then tested 16 major AI models from Anthropic, OpenAI, Google, Meta, xAI, and other developers across simulated corporate scenarios where models had access to email and sensitive information. They found consistent misaligned behavior — models resorted to blackmail and corporate espionage when that was the only way to avoid replacement. Anthropic calls this phenomenon 'agentic misalignment.' They have not seen evidence of it in real deployments, but released their methods publicly for further research.
Claude Code: AI in the Terminal
Anthropic releases Claude Code, an agentic CLI tool that lets developers delegate coding tasks directly from the terminal. It reaches general availability in May alongside Claude 4, later expanding to web and mobile.
Vibe Coding
"There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists." — Andrej Karpathy
Karpathy coins 'vibe coding' to describe a new paradigm where developers describe intent to AI and iterate on results rather than writing code directly.
DeepSeek R1 Shocks the Industry
Chinese AI lab DeepSeek releases R1, an open-weights reasoning model that matches OpenAI o1's performance at a fraction of the cost, causing significant market disruption.
Claude 4 Model Family Released
Anthropic releases the Claude 4 family, including Claude Opus 4 and Claude Sonnet 4, featuring extended thinking capabilities and significantly improved reasoning.
2024
Machines of Loving Grace
Anthropic's CEO outlines an optimistic vision where AI could compress 50-100 years of progress into 5-10 years across biology, health, economic development, and governance — if developed responsibly.
OpenAI o1 Preview: Reasoning Models Arrive
OpenAI releases o1-preview, the first model explicitly trained for extended chain-of-thought reasoning, marking a new paradigm in AI capabilities.
GPT-4o: Omni-Modal Intelligence
GPT-4o (omni) launches with native audio, vision, and text capabilities in a single model, dramatically reducing latency for voice interactions.
Claude 3 Opus: New Benchmark Leader
Anthropic releases the Claude 3 family with Opus setting new benchmarks, Sonnet offering balanced performance, and Haiku providing speed.
Gemini 1.5 Pro: Million Token Context
Google releases Gemini 1.5 Pro with a 1 million token context window, enabling analysis of entire codebases and long documents in a single prompt.
2023
Hinton Leaves Google, Warns of AI Risks
"I console myself with the normal excuse: If I hadn't done it, somebody else would have. It's hard to see how you can prevent the bad actors from using it for bad things." — Geoffrey Hinton
The 'Godfather of AI' leaves Google to speak freely about existential risks from AI systems, marking a pivotal moment in AI safety discourse.
Pause Giant AI Experiments
An open letter signed by 30,000+ researchers and leaders, including Elon Musk, calls for a 6-month pause on training AI systems more powerful than GPT-4, citing safety concerns about the competitive race in AI development.
GPT-4 Released
OpenAI releases GPT-4, a multimodal model capable of processing images and text that scores in the 90th percentile on a simulated bar exam.
2022
ChatGPT Launches, Changes Everything
OpenAI releases ChatGPT, which reaches 100 million users in 2 months - the fastest-growing consumer application in history. The AI era begins for the public.
Stable Diffusion Goes Open Source
Stability AI releases Stable Diffusion as open source, democratizing text-to-image generation and sparking an explosion of creative AI applications.
DALL-E 2 Demonstrates Creative AI
OpenAI reveals DALL-E 2, showing photorealistic image generation and editing capabilities that capture public imagination about AI creativity.
2021
GitHub Copilot Preview
GitHub launches Copilot technical preview, the first AI pair programmer powered by OpenAI Codex, transforming how developers write code.
2020
GPT-3: Language Models are Few-Shot Learners
OpenAI publishes the GPT-3 paper demonstrating that scaling language models to 175B parameters enables remarkable few-shot learning without fine-tuning.
2019
GPT-2: 'Too Dangerous to Release'
OpenAI announces GPT-2 but withholds the full model citing concerns about malicious use - one of the first major AI safety decisions to spark public debate.
2018
BERT: Bidirectional Transformers
Google publishes BERT, demonstrating that bidirectional pre-training dramatically improves language understanding, revolutionizing NLP benchmarks.
2017
Attention Is All You Need
The transformer architecture paper introduces self-attention mechanisms, laying the foundation for GPT, BERT, and all modern large language models.
2016
AlphaGo Defeats Lee Sedol
DeepMind's AlphaGo defeats world champion Lee Sedol 4-1 in Go, a game long considered too complex for AI. 200 million people watch the historic match.