Details
- At the OCP Global Summit, NVIDIA introduced the Vera Rubin NVL144 MGX rack and shared its full specifications, revealing that more than 50 MGX partners are aligning with the new architecture and that more than 20 companies are backing the adoption of 800-volt direct current (VDC) infrastructure for large-scale AI data centers.
- The announcement highlighted an extensive partner ecosystem: AI infrastructure and cloud providers such as Foxconn, CoreWeave, Lambda, Nebius, Oracle Cloud Infrastructure, and Together AI; hardware vendors such as Vertiv (with an 800 VDC MGX reference architecture) and HPE (supporting Kyber and Spectrum-XGS); power-semiconductor suppliers including Analog Devices, Infineon, Renesas, and Texas Instruments; and an expanded NVLink Fusion ecosystem spanning Intel (adding NVLink to x86 CPUs) and Samsung Foundry (custom silicon production).
- Vera Rubin NVL144 features a 100% liquid-cooled, modular design with a cable-free midplane, 45°C liquid cooling, modular bays for ConnectX-9 800 GB/s networking modules and Rubin CPX, and a new busbar design with 20 times more energy storage. The upcoming NVIDIA Kyber rack, expected in 2027, will house 576 Rubin Ultra GPUs using vertical compute blades (up to 18 per chassis), integrated NVLink switch blades, and a cable-free architecture.
- The move from traditional 415 or 480 VAC systems to 800 VDC enables 150% greater power transmission through existing copper (see the back-of-envelope sketch after this list), eliminates the need for heavy copper busbars, and reduces infrastructure costs by millions per rack. NVIDIA is contributing its rack and compute-tray designs to the Open Compute Project as open standards, with the MGX footprint supporting current and future GPU platforms. Foxconn’s new 40-megawatt data center in Taiwan will showcase 800 VDC in action.
- The expanded NVLink Fusion ecosystem lets partners integrate custom silicon into NVIDIA’s architecture, with Intel adopting NVLink Fusion for its x86 CPUs and Samsung Foundry offering custom design services. The 800 VDC approach parallels the electric vehicle and solar industries, where similar moves to higher voltages have improved scalability and efficiency.
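As a sanity check on the copper claim, the following is a hedged back-of-envelope sketch rather than NVIDIA’s own derivation; the announcement does not spell out the comparison basis, so the calculation simply brackets the quoted gain between two idealized limits.

```python
# Back-of-envelope for the 800 VDC "more power through the same copper" claim.
# A sketch under simplifying assumptions, not NVIDIA's published math.
V_AC = 415.0   # traditional AC distribution voltage (volts)
V_DC = 800.0   # proposed DC distribution voltage (volts)

# Limit 1: the conductor is bound by its current rating (ampacity).
# Deliverable power P = V * I scales linearly with voltage.
ampacity_gain = V_DC / V_AC - 1            # ~0.93 -> about 93% more power

# Limit 2: the conductor is bound by a fixed resistive-loss budget on fixed
# copper. Loss fraction = P * R / V**2, so deliverable power scales with V**2.
loss_budget_gain = (V_DC / V_AC) ** 2 - 1  # ~2.7 -> about 272% more power

print(f"ampacity-limited gain: {ampacity_gain:.0%}")
print(f"loss-limited gain:     {loss_budget_gain:.0%}")
# The quoted "150% greater" figure sits between these two bounds, which also
# ignore AC-specific effects such as power factor and three-phase conductor
# utilization that DC distribution sidesteps.
```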
Impact
NVIDIA’s embrace of open standards and a vast partner ecosystem could redefine the landscape for AI data center infrastructure, setting its rack designs against the proprietary approaches of hyperscalers. The 800 VDC transition tackles major power-density and cost challenges, positioning NVIDIA to lead as demand for cutting-edge AI training and inference soars. With Vera Rubin and Kyber on the horizon, NVIDIA is locking in long-term strategic momentum for the next generation of AI computing.