Anthropic's Claude 4 Launches, Outperforming Rivals
Anthropic has launched its long-awaited Claude 4 AI model family, which reportedly surpasses rivals such as OpenAI's GPT-4.1 and Google's Gemini 2.5 Pro on coding benchmarks. Claude Opus 4 scored 72.5% on the SWE-bench coding assessment, well ahead of competing models, and in one reported customer test coded autonomously for nearly seven hours. Priced at $75 per million output tokens, Claude Opus 4 is notably more expensive than some open-source alternatives. The launch is accompanied by Claude Code, a developer tool that automates coding tasks. Anthropic's annualized revenue reportedly reached $2 billion in the first quarter of 2025, reflecting strong market demand and growth. The models are designed with a focus on safety and user privacy, and they are available across multiple platforms, including Amazon's and Google Cloud's AI services. Overall, Claude 4 is positioned as a powerful tool for coding and reasoning tasks in the AI landscape.