Microsoft Azure Unveils World’s First NVIDIA GB300 NVL72 Supercomputing Cluster for OpenAI

Microsoft Azure today introduced the new NDv6 GB300 VM series, delivering the industry's first supercomputing-scale production cluster of NVIDIA GB300 NVL72 systems, purpose-built for OpenAI's most demanding AI inference workloads.

This supercomputer-scale cluster features over 4,600 NVIDIA Blackwell Ultra GPUs connected via the NVIDIA Quantum-X800 InfiniBand networking platform. Microsoft's unique systems approach applied radical engineering to memory and networking to provide the massive scale of compute required to achieve high inference and training throughput for reasoning models and agentic AI systems.

Today's achievement is the result of years of deep partnership between NVIDIA and Microsoft, purpose-building AI infrastructure for the world's most demanding AI workloads and delivering infrastructure for the next frontier of AI. It marks another leadership moment, ensuring that cutting-edge AI drives innovation in the United States.

“Delivering the industry's first at-scale NVIDIA GB300 NVL72 production cluster for frontier AI is an achievement that goes beyond powerful silicon: it reflects Microsoft Azure and NVIDIA's shared commitment to optimize all parts of the modern AI data center,” said Nidhi Chappell, corporate vice president of Microsoft Azure AI Infrastructure.

“Our collaboration helps ensure customers like OpenAI can deploy next-generation infrastructure at unprecedented scale and speed.”

Inside the Engine: The NVIDIA GB300 NVL72

At the heart of Azure's new NDv6 GB300 VM series is the liquid-cooled, rack-scale NVIDIA GB300 NVL72 system. Each rack is a powerhouse, integrating 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs into a single, cohesive unit to accelerate training and inference for massive AI models.

The system provides a staggering 37 terabytes of fast memory and 1.44 exaflops of FP4 Tensor Core performance per VM, creating a massive, unified memory space essential for reasoning models, agentic AI systems and complex multimodal generative AI.
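As a rough back-of-envelope check of that 37 TB figure (assuming 288 GB of HBM3e per Blackwell Ultra GPU and up to 480 GB of LPDDR5X per Grace CPU, capacities from NVIDIA's public spec sheets rather than this article):

```python
# Back-of-envelope check of the GB300 NVL72 rack-level memory figure.
# Assumed per-device capacities (NVIDIA public specs, not stated in this article):
HBM3E_PER_GPU_GB = 288   # Blackwell Ultra GPU HBM3e
LPDDR_PER_CPU_GB = 480   # Grace CPU LPDDR5X ("up to" figure)

NUM_GPUS = 72
NUM_CPUS = 36

gpu_memory_tb = NUM_GPUS * HBM3E_PER_GPU_GB / 1000  # ~20.7 TB of HBM3e
cpu_memory_tb = NUM_CPUS * LPDDR_PER_CPU_GB / 1000  # ~17.3 TB of LPDDR5X

total_tb = gpu_memory_tb + cpu_memory_tb
print(f"Approximate fast memory per rack: {total_tb:.1f} TB")
# lands in the same ballpark as the ~37 TB quoted above
```

The sum comes out near 38 TB with the maximum LPDDR configuration, consistent with the roughly 37 TB the announcement cites.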

NVIDIA Blackwell Ultra is supported by the full-stack NVIDIA AI platform, including collective communication libraries that tap into new formats like NVFP4 for breakthrough training performance, as well as inference frameworks like NVIDIA Dynamo for the highest inference performance in reasoning AI.

The NVIDIA Blackwell Ultra platform excels at both training and inference. In the recent MLPerf Inference v5.1 benchmarks, NVIDIA GB300 NVL72 systems delivered record-setting performance using NVFP4. Results included up to 5x higher throughput per GPU on the 671-billion-parameter DeepSeek-R1 reasoning model compared with the NVIDIA Hopper architecture, along with leadership performance on all newly introduced benchmarks, including the Llama 3.1 405B model.

The Fabric of a Supercomputer: NVLink Switch and NVIDIA Quantum-X800 InfiniBand

To connect over 4,600 Blackwell Ultra GPUs into a single, cohesive supercomputer, Microsoft Azure's cluster relies on a two-tiered NVIDIA networking architecture designed for both scale-up performance within the rack and scale-out performance across the entire cluster.

Inside each GB300 NVL72 rack, the fifth-generation NVIDIA NVLink Switch fabric provides 130 TB/s of direct, all-to-all bandwidth between the 72 Blackwell Ultra GPUs. This transforms the entire rack into a single, unified accelerator with a shared memory pool, a critical design for massive, memory-intensive models.

To scale beyond the rack, the cluster uses the NVIDIA Quantum-X800 InfiniBand platform, purpose-built for trillion-parameter-scale AI. Featuring NVIDIA ConnectX-8 SuperNICs and Quantum-X800 switches, NVIDIA Quantum-X800 provides 800 Gb/s of bandwidth per GPU, ensuring seamless communication across all 4,608 GPUs.
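Simple arithmetic on the two figures quoted above (130 TB/s of NVLink per 72-GPU rack, 800 Gb/s of InfiniBand per GPU) shows why the design keeps tightly coupled work inside the rack and reserves InfiniBand for scale-out traffic; this is just unit conversion, with no assumptions beyond the article's own numbers:

```python
# Per-GPU bandwidth comparison of the two networking tiers.

# Scale-up: NVLink Switch inside one GB300 NVL72 rack
NVLINK_RACK_TBPS = 130                         # TB/s all-to-all, shared by 72 GPUs
per_gpu_nvlink_tbps = NVLINK_RACK_TBPS / 72    # ~1.8 TB/s per GPU

# Scale-out: Quantum-X800 InfiniBand across the cluster
IB_GBPS_PER_GPU = 800                          # gigabits per second per GPU
per_gpu_ib_tbps = IB_GBPS_PER_GPU / 8 / 1000   # bits -> bytes, Gb -> TB: 0.1 TB/s

print(f"NVLink per GPU:     ~{per_gpu_nvlink_tbps:.1f} TB/s")
print(f"InfiniBand per GPU:  {per_gpu_ib_tbps:.1f} TB/s")
print(f"Scale-up advantage: ~{per_gpu_nvlink_tbps / per_gpu_ib_tbps:.0f}x")
```

The roughly 18x gap between in-rack and cross-rack bandwidth is the basic reason the rack, not the individual GPU, is treated as the unit of acceleration.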

Microsoft Azure's cluster also uses NVIDIA Quantum-X800's advanced adaptive routing, telemetry-based congestion control and performance isolation capabilities, as well as NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) v4, which accelerates collective operations by performing reductions inside the network switches, significantly improving the efficiency of large-scale training and inference.
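To illustrate why hierarchical in-network reduction helps, here is a conceptual sketch only: SHARP itself runs inside the InfiniBand switches and is used transparently through collective libraries such as NCCL, not through application code like this. The sketch sums per-GPU partial results level by level, as a switch tree would, so each value crosses each link once instead of being exchanged among all peers:

```python
# Conceptual sketch of hierarchical aggregation, in the spirit of SHARP.
# Not an API: real in-network reduction is done by the switch hardware.

def tree_reduce(values, fanout=2):
    """Sum per-GPU partial results level by level, as a switch
    hierarchy would, and count the upward transfers performed."""
    transfers = 0
    level = list(values)
    while len(level) > 1:
        next_level = []
        for i in range(0, len(level), fanout):
            group = level[i:i + fanout]
            transfers += len(group)    # each member sends once upward
            next_level.append(sum(group))  # the "switch" aggregates in-network
        level = next_level
    return level[0], transfers

# One scalar gradient contribution per GPU in a 72-GPU rack:
total, sends = tree_reduce([1.0] * 72)
print(f"reduced value={total}, upward transfers={sends}")
```

Each partial sum travels up the tree exactly once, which is why offloading reductions into the fabric cuts the traffic and GPU cycles that large-scale training collectives would otherwise consume.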

Driving the Future of AI

Delivering the world's first production NVIDIA GB300 NVL72 cluster at this scale required a reimagining of every layer of Microsoft's data center, from custom liquid cooling and power distribution to a reengineered software stack for orchestration and storage.

This latest milestone marks a major step forward in building the infrastructure that will unlock the future of AI. As Azure scales toward its goal of deploying hundreds of thousands of NVIDIA Blackwell Ultra GPUs, even more innovations are poised to emerge from customers like OpenAI.

Learn more about this announcement on the Microsoft Azure blog.
