OpenAI announced GPT-4.1, a new family of models with context windows of up to one million tokens. The lineup comprises GPT-4.1, GPT-4.1 Mini, and GPT-4.1 Nano, all aimed at developers working through the API. GPT-4.1 scored 55% on the SWE-bench coding benchmark, a clear improvement over its predecessors, while also cutting costs by 26%. OpenAI describes Nano as its smallest and cheapest model, priced at 12 cents per million tokens, and says there are no additional charges for processing long contexts. In a demonstration, GPT-4.1 generated a complete web application from a lengthy NASA log file, showing off the long-context capabilities. The naming strategy has drawn attention: GPT-4.1 arrives after GPT-4.5, and OpenAI announced that the latter will be deprecated. Despite the confusing version numbers, which have invited their share of numerical puns, the underlying gains suggest older models will soon be phased out.
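
For developers, the new models are reached through the usual chat completions endpoint. As a minimal sketch only, assuming the official `openai` Python SDK, an `OPENAI_API_KEY` in the environment, and the announced model identifiers (`gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`), a long-context request might look like this:

```python
# Sketch: sending a long log file to GPT-4.1 for analysis.
# Assumes the official `openai` Python SDK (v1+) and OPENAI_API_KEY set in the environment.
# "server.log" is a hypothetical input file used only for illustration.
from openai import OpenAI

client = OpenAI()

with open("server.log", "r", encoding="utf-8") as f:
    log_text = f.read()  # a long document, well within the 1M-token window

response = client.chat.completions.create(
    model="gpt-4.1-nano",  # cheapest variant; swap in "gpt-4.1" for the full model
    messages=[
        {"role": "system", "content": "You summarize long log files."},
        {"role": "user", "content": f"Summarize the key events in this log:\n\n{log_text}"},
    ],
)

print(response.choices[0].message.content)
```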

Source 🔗