Details

  • OpenAI released GPT-5.4 mini today, available immediately in ChatGPT, Codex, and the API; it is optimized for coding, computer use, multimodal understanding, and subagents, and is 2x faster than its predecessor.
  • GPT-5.4 nano launched simultaneously in the API, targeting efficient, lightweight applications.
  • Builds on the GPT-5.4 foundation model with a 1 million token context window in the API, enabling analysis of entire codebases or long agent trajectories.
  • Introduces native computer use capabilities, a first for a mainline model, scoring 75% on the OSWorld-Verified benchmark and surpassing the human baseline of 72.4%.
  • Features Tool Search for automatic tool discovery, reducing token usage by up to 47% in agentic systems; supports high-resolution image inputs of over 10 million pixels.
  • Improves accuracy with 33% fewer errors in individual claims versus GPT-5.2; excels at coding, vision, long-running tasks, and business workflows such as analytics and finance.
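
OpenAI has not published how Tool Search works internally, but the token savings follow from a simple idea: rather than sending every tool definition with each request, search the tool catalog and attach only the definitions relevant to the current query. A minimal illustration of that idea (the catalog, stopword list, and scoring are hypothetical, not OpenAI's implementation):

```python
# Hypothetical sketch of the Tool Search idea: filter a tool catalog
# before a model request so only relevant tool definitions are sent.
# Names and scoring here are illustrative, not OpenAI's actual mechanism.

TOOL_CATALOG = {
    "get_weather": "Look up the current weather for a city.",
    "run_sql": "Execute a SQL query against the analytics warehouse.",
    "send_email": "Send an email to a recipient.",
    "create_invoice": "Create a new invoice in the finance system.",
}

# Common words that would otherwise cause spurious matches.
STOPWORDS = {"a", "an", "the", "in", "for", "to", "is", "what"}

def search_tools(query: str, catalog: dict[str, str], limit: int = 2) -> list[str]:
    """Rank tools by naive keyword overlap with the query; keep the top few."""
    query_words = set(query.lower().split()) - STOPWORDS
    scored = [
        (len(query_words & (set(desc.lower().split()) - STOPWORDS)), name)
        for name, desc in catalog.items()
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:limit] if score > 0]

# Only the selected tool definitions would accompany the model request,
# which is where the claimed token savings come from.
selected = search_tools("what is the weather in Berlin", TOOL_CATALOG)
print(selected)  # → ['get_weather']
```

With a large catalog, shipping two matched definitions instead of dozens is what cuts per-request token usage; a production version would presumably use embedding similarity rather than keyword overlap.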

Impact

OpenAI's release of GPT-5.4 mini and nano intensifies competition in efficient frontier models, directly challenging Anthropic's Claude and Google's Gemini with 2x speed gains and native computer use that outperforms the human baseline on the OSWorld-Verified benchmark. The 1M token context and Tool Search lower costs for developers building agentic systems, potentially accelerating adoption in coding and enterprise workflows by cutting token usage by up to 47% while maintaining accuracy. This positions OpenAI ahead in on-device and API efficiency, pressuring rivals to match its multimodal and subagent capabilities amid tightening GPU constraints. Over the next 12-24 months, it could redirect funding toward agent-focused R&D, widening the gap in practical AI deployment for professional tasks like software interaction and multi-step planning.