Details

  • Baidu released ERNIE 5.1, a foundation model that builds on ERNIE 5.0's pre-training base and required only about 6% of the pre-training expense of comparable frontier models.
  • ERNIE 5.1 ranks No. 4 globally on LMArena's Search Leaderboard with a score of 1,223 and No. 13 on the Text Leaderboard with a score of 1,476; it also places in the global top 10 on several category leaderboards, including Legal, Government, Math, and Business Management.
  • The model compresses total parameters to roughly one-third of ERNIE 5.0's 2.4 trillion while cutting active parameters to approximately half, an efficiency achieved through decoupled, fully asynchronous reinforcement learning and scaled agentic post-training (the total-vs-active distinction is sketched after this list).
  • Key upgrades include enhanced search retrieval and synthesis for multi-source content generation, along with improved reasoning, knowledge Q&A, creative writing, and agentic capabilities aimed at enterprise applications and AI assistants.
  • Multi-dimensional elastic pre-training lets a single training run produce models at multiple scales, enabling cost-efficient model variants (see the second sketch below); ERNIE 5.1 is currently available for public access at ernie.baidu.com.
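
The "total vs. active parameters" framing points to a mixture-of-experts design, where most weights sit in experts that only a few are routed to per token. The sketch below is illustrative arithmetic only: ERNIE 5.1's actual architecture, expert counts, and routing are not public, and every number here is hypothetical, chosen so the totals land near the figures reported above.

```python
# Illustrative arithmetic for the total-vs-active parameter distinction in a
# mixture-of-experts (MoE) model. All numbers are hypothetical, not ERNIE's.

from dataclasses import dataclass

@dataclass
class MoEConfig:
    dense_params: float       # weights every token uses (attention, embeddings, ...)
    num_experts: int          # experts, summed over all MoE layers for simplicity
    params_per_expert: float  # parameters in one expert
    experts_per_token: int    # experts routed to each token (top-k)

    @property
    def total_params(self) -> float:
        return self.dense_params + self.num_experts * self.params_per_expert

    @property
    def active_params(self) -> float:
        return self.dense_params + self.experts_per_token * self.params_per_expert

# Hypothetical configuration chosen so total lands near one-third of 2.4T.
cfg = MoEConfig(dense_params=50e9, num_experts=125,
                params_per_expert=6e9, experts_per_token=8)
print(f"total:  {cfg.total_params / 1e12:.2f}T")  # ~0.80T held in memory
print(f"active: {cfg.active_params / 1e9:.0f}B")  # ~98B used per token
```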
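
Baidu has not published how multi-dimensional elastic pre-training works. The sketch below shows only the general nested-subnetwork pattern behind elastic training schemes (one supernet trained once, smaller variants extracted as structured slices of its weights); the widths and the prefix-slicing rule are assumptions for illustration, not ERNIE's method.

```python
# A minimal sketch of nested-subnetwork extraction, the general idea behind
# elastic pre-training: smaller variants reuse slices of one trained supernet.
# Widths and the slicing rule are hypothetical, not Baidu's actual scheme.

import numpy as np

rng = np.random.default_rng(0)
d_full, d_small = 1024, 512  # hypothetical full and reduced hidden widths

# One "supernet" weight matrix, trained once at full width.
W_full = rng.normal(size=(d_full, d_full))

# An elastic variant is extracted by taking the leading sub-block, so the
# small model reuses a structured subset of the same trained parameters.
W_small = W_full[:d_small, :d_small]

def forward(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """One linear layer; a real model interleaves attention, norms, etc."""
    return np.tanh(W @ x)

x = rng.normal(size=d_full)
y_full = forward(W_full, x)
y_small = forward(W_small, x[:d_small])  # small variant runs at reduced width
print(y_full.shape, y_small.shape)       # (1024,) (512,)
```

In published elastic schemes of this kind, the sub-slices are trained jointly with the full network (losses are computed at sampled widths), so each extracted scale remains usable without a separate pre-training run.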

Impact

ERNIE 5.1 narrows the competitive gap with Western frontier models like GPT-5.1 and Claude Opus by achieving top-tier global rankings at substantially lower computational cost, signaling a shift toward efficiency-driven post-training optimization over pure parameter scaling. The model's strong performance in professional categories such as legal and government applications suggests enterprise adoption potential, particularly in regions where cost-per-inference remains a critical purchasing factor. This efficiency-first approach aligns with industry-wide trends toward smaller, more specialized models, pressuring rivals to justify higher pre-training investments when comparable performance is achievable at a fraction of the cost.