Details

  • OpenAI has announced a partnership with Broadcom to deploy 10 gigawatts of custom-designed AI chips, with Broadcom handling manufacturing and supply chain responsibilities.
  • This hardware initiative complements OpenAI’s existing GPU supply deals with Microsoft, Nvidia, and others, but will operate independently of them.
  • The 10 GW figure represents the expected total power usage for these accelerators, roughly equivalent to running several million high-end GPUs in data centers.
  • By designing its own chips, OpenAI can optimize features like memory bandwidth, interconnects, and energy efficiency specifically for large language model tasks.
  • Broadcom, which already manufactures custom ASICs for Google and Meta, will oversee design finalization, packaging, and high-volume production using advanced TSMC manufacturing processes.
  • No official launch date has been announced, though typical timelines suggest the first chips could be sampled within 18 to 24 months.
  • OpenAI describes this move as part of its strategy to address the rising global demand for advanced AI capabilities, indicating ambitions for much larger operational scale.
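The "several million high-end GPUs" comparison above can be sanity-checked with back-of-envelope arithmetic. Assuming an all-in draw of roughly 1.2 kW per deployed high-end GPU, including cooling and other data-center overhead (an illustrative figure, not from the announcement):

```python
# Rough sanity check of the "several million GPUs" comparison.
total_power_w = 10e9       # 10 GW announced deployment
power_per_gpu_w = 1200     # assumed all-in draw per GPU slot (illustrative)

equivalent_gpus = total_power_w / power_per_gpu_w
print(f"~{equivalent_gpus / 1e6:.1f} million GPU-equivalents")
# → ~8.3 million GPU-equivalents
```

At more aggressive overhead assumptions (say, 1.5 kW per slot) the figure drops toward 6–7 million, so "several million" holds across plausible inputs.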

Impact

This vertical integration reduces OpenAI's reliance on Nvidia, echoing the custom hardware efforts of Google (TPUs) and Amazon (Trainium) as the AI chip race heats up. A 10 GW chip deployment would set a new scale for AI infrastructure, prompting industry rivals to rethink resource acquisition and hardware strategies. As custom accelerators become more common, the AI landscape may shift toward model developers controlling their own hardware, with significant implications for competition and regulatory oversight.