AndyD750
With the advent of artificial intelligence and machine learning (AI/ML), hyperscale datacenters are increasingly accommodating AI accelerators at scale, demanding higher power at higher density than is customary in traditionally air-cooled facilities.
As Microsoft continues to expand our datacenter fleet to enable the world’s AI transformation, we need methods for providing liquid cooling capabilities for new AI hardware within air-cooled datacenters. Additionally, increasing per-rack density for AI accelerators necessitates standalone liquid-to-air heat exchangers to support legacy datacenters, which are typically not equipped with the infrastructure for direct-to-chip (DTC) liquid cooling.
A solution: standalone liquid cooling heat exchanger units.
Microsoft’s Maia 100 platform marked the first introduction of a liquid cooling heat exchanger into existing air-cooled datacenters for direct-to-chip liquid cooling. Since then, we have continued to invest in novel cooling techniques to accommodate newer, more powerful AI/ML processors. Today at OCP 2024, we are sharing contributions for designing advanced liquid cooling heat exchanger units (HXUs). By open sourcing our design approach through the Open Compute Project, we aim to enable closed-loop liquid cooling in AI datacenters across the entire computing industry.
Heat Exchanger Unit Design Principles
Our HXU designs focus on delivering advanced cooling capacity for modern AI processors, improving operating efficiency to reduce power demand, and enabling AI accelerator racks to operate in traditionally air-cooled datacenters.
Microsoft’s vision for enhanced effectiveness centers on using the same chilled air that legacy datacenters already provide for air-cooled platforms. Our engineering specification for HXUs targets the relative liquid and air flow rates required to deliver cooling liquid to the IT equipment at the required temperature.
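To make the flow-rate relationship concrete, the sketch below works through the steady-state energy balance that sizes a liquid-to-air HXU: the heat absorbed by the coolant loop must equal the heat rejected to the facility air stream, so heat load = mass flow x specific heat x temperature rise on each side fixes the required flows. The heat load, fluid properties, and temperature rises used here are illustrative assumptions, not values from Microsoft's specification.

```python
# Illustrative steady-state sizing for a liquid-to-air HXU.
# All numbers are assumptions for demonstration; they are not taken
# from Microsoft's OCP contribution or any real HXU specification.

RACK_HEAT_LOAD_KW = 100.0       # assumed IT heat load handled by one HXU (kW)

# Coolant loop (assumed water/propylene-glycol mix properties)
COOLANT_CP = 3.9e3              # specific heat, J/(kg*K)
COOLANT_DENSITY = 1030.0        # kg/m^3
COOLANT_DELTA_T = 10.0          # supply-to-return temperature rise, K

# Facility air stream (assumed properties)
AIR_CP = 1.006e3                # J/(kg*K)
AIR_DENSITY = 1.16              # kg/m^3 at roughly 25 C
AIR_DELTA_T = 12.0              # cold-aisle to exhaust temperature rise, K


def required_mass_flow(heat_w: float, cp: float, delta_t: float) -> float:
    """Mass flow (kg/s) needed to move heat_w watts across a delta_t rise."""
    return heat_w / (cp * delta_t)


heat_w = RACK_HEAT_LOAD_KW * 1e3

coolant_kg_s = required_mass_flow(heat_w, COOLANT_CP, COOLANT_DELTA_T)
air_kg_s = required_mass_flow(heat_w, AIR_CP, AIR_DELTA_T)

# Convert to the units operators usually quote.
coolant_lpm = coolant_kg_s / COOLANT_DENSITY * 1000 * 60   # litres per minute
air_cfm = air_kg_s / AIR_DENSITY * 60 / 0.0283168          # cubic feet per minute

print(f"Coolant flow: {coolant_kg_s:.2f} kg/s ({coolant_lpm:.0f} L/min)")
print(f"Air flow:     {air_kg_s:.2f} kg/s ({air_cfm:.0f} CFM)")
```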
The design principles for HXUs are the result of a close partnership with Delta and Ingrasys. Working with these partners has helped us evolve our approach, including a double-wide rack form factor to increase heat dissipation capacity and specialized packaging to ensure leak-free transport. A modular design allows field servicing of key components, including pumps, fans, filters, printed circuit board assemblies, and sensors. Quick disconnects and strategically placed leak detection ropes, along with drip pans that guide liquid to the base of the HXU, help mitigate and contain leaks. Fans are placed at the rear to avoid pre-heating within the HXU and to eliminate entrainment issues in the cold aisle. Modular fluid connections between HXUs and server racks allow for various configurations.
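As an illustration of how the leak-mitigation and serviceability features above might tie together in an HXU controller, here is a hypothetical sketch. The sensor layout, thresholds, and response actions are assumptions made for this example and are not part of the published design.

```python
# Hypothetical HXU leak-response loop. The sensor layout, thresholds, and
# actions are illustrative assumptions, not the published HXU design.
from dataclasses import dataclass
from enum import Enum, auto


class LeakSeverity(Enum):
    NONE = auto()
    MINOR = auto()      # moisture on a detection rope, drip pan still dry
    MAJOR = auto()      # liquid accumulating in the drip pan or pressure loss


@dataclass
class HxuSensors:
    rope_wet: list[bool]        # leak-detection ropes near quick disconnects
    drip_pan_level_mm: float    # level sensor in the base drip pan
    loop_pressure_kpa: float    # coolant loop pressure


def classify(sensors: HxuSensors) -> LeakSeverity:
    """Map raw sensor readings to a leak severity (thresholds are assumed)."""
    if sensors.drip_pan_level_mm > 5.0 or sensors.loop_pressure_kpa < 150.0:
        return LeakSeverity.MAJOR
    if any(sensors.rope_wet):
        return LeakSeverity.MINOR
    return LeakSeverity.NONE


def respond(severity: LeakSeverity) -> list[str]:
    """Return the ordered actions a controller might take for a given severity."""
    if severity is LeakSeverity.MAJOR:
        # Isolate the loop; quick disconnects and modular pump/fan trays
        # make field replacement possible without draining the whole unit.
        return ["close_isolation_valves", "stop_pumps", "raise_critical_alert"]
    if severity is LeakSeverity.MINOR:
        return ["raise_warning_alert", "schedule_service_visit"]
    return []


if __name__ == "__main__":
    reading = HxuSensors(rope_wet=[False, True, False],
                         drip_pan_level_mm=0.0,
                         loop_pressure_kpa=310.0)
    print(respond(classify(reading)))
```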
We welcome further collaboration with the broader OCP community to advance the future of datacenter power and cooling innovation through state-of-the-art infrastructure engineering.