Comparison of HPC and Telecom Data Center Cooling Methods by Operating and Capital Expense
by Dr. Alexander Yatskov, Ph.D., PE
Current high-performance computing (HPC) and Telecom trends show that the number of transistors per chip continues to grow, and data center cabinet loads have already surpassed 30 kW per cabinet (roughly 40.4 kW/m²). It is reasonable to expect that, in accordance with Moore's Law, power could double within the next few years. While CPU capability has steadily increased, however, data center cooling technology has stagnated, and cooling limitations have kept the average power per square meter in data centers from keeping pace with CPU advances. With cooling systems representing up to ~50% of a data center's total electric power bill, the growing power requirements of HPC and Telecom systems translate into a growing operating expense (OpEx). Brick-and-mortar and (especially) mobile, container-based data centers cannot be physically expanded to compensate for the limitations of conventional air cooling.
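The figures above can be turned into a rough per-cabinet estimate. The sketch below uses only the article's numbers (30 kW per cabinet, 40.4 kW/m², cooling at ~50% of the total bill); the $0.10/kWh electricity rate and continuous 24/7 operation are hypothetical assumptions for illustration, not values from the article.

```python
# Back-of-envelope arithmetic from the figures cited above.
cabinet_power_kw = 30.0       # per-cabinet load cited in the article
power_density_kw_m2 = 40.4    # equivalent floor power density cited

# Implied floor footprint per cabinet.
footprint_m2 = cabinet_power_kw / power_density_kw_m2
print(f"Implied cabinet footprint: {footprint_m2:.2f} m^2")  # ~0.74 m^2

# If cooling is ~50% of the TOTAL bill, cooling energy roughly equals
# IT energy: cooling = total * f and IT = total * (1 - f), so
# cooling = IT * f / (1 - f), which is 1:1 at f = 0.50.
cooling_fraction = 0.50
rate_usd_per_kwh = 0.10       # hypothetical utility rate (assumption)
hours_per_year = 24 * 365     # assumes continuous operation

it_energy_kwh = cabinet_power_kw * hours_per_year
cooling_energy_kwh = it_energy_kwh * cooling_fraction / (1 - cooling_fraction)
cooling_cost_usd = cooling_energy_kwh * rate_usd_per_kwh
print(f"Annual cooling OpEx per cabinet: ~${cooling_cost_usd:,.0f}")  # ~$26,280
```

At these assumed rates, cooling a single 30 kW cabinet costs on the order of $26k per year, which is why a doubling of cabinet power makes cooling efficiency a first-order OpEx concern.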
In the near future, for data centers to continue increasing in power density, alternative cooling methods, namely liquid cooling, must be implemented at the data center level in place of standard air cooling. Although microprocessor-level liquid cooling has seen recent innovation, cooling at the blade, cabinet, and data center levels has emerged as a critical technical, economic, and environmental issue.
In this article, three cooling solutions are assessed for a hypothetical, near-future computing cluster.