Details

  • NVIDIA broke new ground by sweeping all seven MLPerf Training v5.1 benchmarks, which span tasks like large language model pretraining, image generation, recommendation systems, vision, and graph neural networks. The results were announced on November 12, 2025.
  • NVIDIA was the only platform to submit results on every benchmark, demonstrating unmatched breadth; overall, 65 unique systems and a dozen different accelerators participated, underscoring the intensity of competition in AI infrastructure.
  • The debut of the Blackwell Ultra GPU architecture delivered standout gains, achieving 4x faster Llama 3.1 405B pretraining and nearly 5x faster Llama 2 70B fine-tuning compared to its Hopper predecessor, using the same number of GPUs. The groundbreaking Quantum-X800 InfiniBand network doubled bandwidth to 800 Gb/s for massive model scaling.
  • NVFP4 precision made its first appearance in MLPerf, with Blackwell Ultra executing FP4 calculations at triple the speed of FP8 while still meeting MLPerf's stringent accuracy standards, a feat no other submitter matched this cycle.
  • NVIDIA set a new record by training Llama 3.1 405B in just 10 minutes with over 5,000 Blackwell GPUs, and led freshly introduced tasks such as Llama 3.1 8B (5.2 minutes) and FLUX.1 image generation (12.5 minutes), which went uncontested by rivals.
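The NVFP4 result above hinges on 4-bit floating point (FP4) arithmetic with block scaling. The toy sketch below illustrates the general idea of quantizing values to the E2M1 format (2 exponent bits, 1 mantissa bit, whose representable magnitudes are 0, 0.5, 1, 1.5, 2, 3, 4, 6) using a shared per-block scale. This is a conceptual illustration only, not NVIDIA's implementation; the block size of 16 and the plain-float scale factor are assumptions for the example.

```python
# Conceptual sketch of block-scaled FP4 (E2M1) quantization.
# NOT NVIDIA's NVFP4 implementation; block size and scale encoding are
# simplifying assumptions for illustration.

# The 8 non-negative magnitudes representable in E2M1
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    """Quantize one block of floats to FP4 values sharing one scale factor."""
    amax = max(abs(x) for x in block)
    # Map the block's largest magnitude onto E2M1's largest value (6.0)
    scale = amax / 6.0 if amax > 0 else 1.0
    codes = []
    for x in block:
        target = abs(x) / scale
        # Round to the nearest representable E2M1 magnitude
        q = min(E2M1_VALUES, key=lambda v: abs(v - target))
        codes.append(q if x >= 0 else -q)
    return scale, codes

def dequantize_block(scale, codes):
    """Recover approximate floats from FP4 codes and the shared scale."""
    return [scale * c for c in codes]

if __name__ == "__main__":
    import random
    random.seed(0)
    block = [random.uniform(-2.0, 2.0) for _ in range(16)]  # assumed block size
    scale, codes = quantize_block(block)
    restored = dequantize_block(scale, codes)
    max_err = max(abs(a - b) for a, b in zip(block, restored))
    print(f"max abs error after FP4 round-trip: {max_err:.4f}")
```

Storing only a 4-bit code per value plus one scale per block is what lets FP4 halve memory traffic relative to FP8, which is where the throughput gains come from; the accuracy requirement in MLPerf is what makes sustaining that precision during training notable.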

Impact

NVIDIA’s clean sweep in MLPerf v5.1 cements its supremacy in the AI training market, with NVFP4 technology creating a notable edge competitors have yet to match. The active involvement of 15 enterprise partners, including Dell, HPE, and Lenovo, points to strong market demand for NVIDIA’s certified infrastructure. As rivals chase performance gains, NVIDIA’s blend of technical innovation and ecosystem reach positions it as the platform to beat in AI computing.