Anthropic Launches Claude 2
Summary
Anthropic released Claude 2, a major upgrade to its AI assistant that demonstrated improved reasoning, coding, and mathematical abilities alongside a 100K-token context window. The launch positioned Anthropic as a credible competitor to OpenAI and established its safety-focused brand identity in the commercial AI market.
What Happened
On July 11, 2023, Anthropic launched Claude 2, its second-generation large language model, making it publicly available in the US and UK via claude.ai. The model showed significant improvements over its predecessor: it scored 76.5% on the multiple-choice section of the bar exam (up from 73.0% for Claude 1.3), achieved 71.2% on the Codex HumanEval Python coding test (up from 56.0%), and scored above the 90th percentile on the GRE reading and writing exams.
A defining feature was its 100,000-token context window — roughly 75,000 words — which allowed users to input entire books, codebases, or lengthy documents for analysis. This was substantially larger than GPT-4's initial 8K context window, and positioned Claude as particularly useful for document-heavy professional work.
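The token-to-word arithmetic above can be illustrated with the common heuristic that one token corresponds to roughly 0.75 English words; this is only an approximation (the actual ratio depends on the tokenizer and the text), and the conversion function below is a hypothetical sketch, not anything from Anthropic's tooling:

```python
# Rough illustration of context-window sizes in approximate word counts.
# The 0.75 words-per-token ratio is a common rule of thumb, not an exact
# property of any particular tokenizer.
def tokens_to_words(tokens: int, words_per_token: float = 0.75) -> int:
    """Convert a token count to an approximate English word count."""
    return int(tokens * words_per_token)

CLAUDE2_CONTEXT = 100_000      # Claude 2's context window, in tokens
GPT4_INITIAL_CONTEXT = 8_192   # GPT-4's initial context window, in tokens

print(tokens_to_words(CLAUDE2_CONTEXT))       # 75000
print(tokens_to_words(GPT4_INITIAL_CONTEXT))  # 6144
```

Under this heuristic, the gap is stark: roughly 75,000 words versus roughly 6,000, which is why the feature stood out for book-length inputs.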
Claude 2 was built using Anthropic's Constitutional AI (CAI) approach and Reinforcement Learning from Human Feedback (RLHF), which the company described as producing a model that was "helpful, harmless, and honest." The model also reduced hallucination rates compared to Claude 1.3.
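The article names Constitutional AI without describing its mechanism. At its core (per Anthropic's published CAI paper), the approach has a model critique and revise its own outputs against a set of written principles, and the revised outputs feed later training stages. The sketch below is a hypothetical illustration of that loop only; `model_generate` and the prompt strings are stand-ins, not Anthropic's actual implementation:

```python
# Minimal sketch of the Constitutional AI critique-and-revision loop.
# `model_generate` is a placeholder for a call to an actual language model;
# the prompt wording and principles here are illustrative, not Anthropic's.
from typing import Callable, List

def constitutional_revision(
    prompt: str,
    model_generate: Callable[[str], str],
    principles: List[str],
) -> str:
    """Generate a response, then critique and revise it once per principle."""
    response = model_generate(prompt)
    for principle in principles:
        critique = model_generate(
            f"Critique the response against this principle: {principle}\n"
            f"Response: {response}"
        )
        response = model_generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\n"
            f"Response: {response}"
        )
    # In CAI, revised responses like this become supervised training data.
    return response
```

With one principle, the loop makes three model calls: the initial generation, one critique, and one revision. A preference model trained on AI feedback then refines the result further, which is where RLHF-style optimization enters.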
Why It Matters
Claude 2's release established Anthropic as the most credible alternative to OpenAI in the frontier model market. While Google had released Bard and Meta had released Llama, neither had achieved the same combination of model capability and product polish that Anthropic demonstrated with Claude 2.
The 100K context window was genuinely novel at launch and previewed a dimension of model competition beyond raw benchmark performance: context length, the amount of information a model can process at once, was becoming a key differentiator in its own right.
More broadly, Claude 2 validated the commercial viability of Anthropic's safety-focused approach. The company proved that emphasizing safety research and responsible deployment was not incompatible with building competitive products, a claim that skeptics had challenged since Anthropic's founding.