Over the past couple of months, several data center infrastructure providers have announced that they are introducing liquid cooling equipment. The press releases tout the benefits of liquid cooling over air cooling and the cost reductions resulting from such a change.
Reading these announcements, one might think that something revolutionary must have happened to allow liquid cooling in data centers. Upon closer examination, however, you will realize that this is not liquid cooling as we know it. The rack and its components are still cooled with cold air. The only difference is that a small heat exchanger can now sit next to each rack, using the building's chilled water to cool the hot air leaving the rack. This is obviously an improvement over room-level air conditioning.
So air is still used to remove heat from the components, and this is where the biggest inefficiency of current data centers lies. As component power dissipation and heat generation density increase, we have to take the next step of bringing liquid cooling to the component itself. This, however, seems to be a tall order, because the entire supply chain must work in concert to make it happen.
The large thermal resistance of the heat spreader and TIM (thermal interface material) combination makes heat removal difficult even for the best liquid-cooled cold plate. To begin with, the manufacturers of the hot components must find a way to embed an efficient liquid-cooled (note: I did not say water-cooled) cold plate in the package by replacing the integrated heat spreader (IHS) with an efficient micro-channel cold plate. If the cold plate can be made of materials with a CTE (coefficient of thermal expansion) close to that of silicon, then the mechanical integrity of the package is also preserved. And if the coolant is an inert fluid, then the old fear of coolant leakage must go away.
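To see why the IHS-plus-TIM stack dominates, here is a back-of-the-envelope junction temperature estimate. All the resistance values below are illustrative assumptions for the sake of the comparison, not measurements of any particular package.

```python
# Back-of-the-envelope junction temperature estimate for a 300 W chip.
# All thermal resistance values (in K/W) are illustrative assumptions.

power_w = 300.0        # chip power dissipation
coolant_in_c = 25.0    # coolant inlet temperature

# Conventional stack: die -> TIM1 -> IHS -> TIM2 -> cold plate
r_tim1 = 0.05
r_ihs = 0.03
r_tim2 = 0.05
r_cold_plate = 0.04
r_conventional = r_tim1 + r_ihs + r_tim2 + r_cold_plate

# Embedded micro-channel cold plate bonded directly to the die
r_embedded = 0.02

t_junction_conventional = coolant_in_c + power_w * r_conventional
t_junction_embedded = coolant_in_c + power_w * r_embedded

print(f"conventional stack: {t_junction_conventional:.0f} C")  # 76 C
print(f"embedded plate:     {t_junction_embedded:.0f} C")      # 31 C
```

Even with generous numbers, the interface layers account for most of the temperature rise; removing them is what makes the embedded cold plate attractive.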
If the cold plate can be embedded in the package, we can hope to manage the heat at the most fundamental level. A high-performance cold plate can remove several hundred W/cm² with a small approach temperature difference, allowing the coolant inlet temperature to be set to a higher value to save money.
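As a rough illustration of why a small approach temperature difference saves money (the maximum junction temperature and resistance values below are assumptions, not figures from any specific product): the lower the die-to-coolant thermal resistance, the warmer the water you can supply while still meeting the junction limit.

```python
# How die-to-coolant thermal resistance sets the allowable coolant
# inlet temperature. All numbers are illustrative assumptions.

t_junction_max_c = 85.0  # assumed maximum allowed junction temperature
power_w = 300.0          # chip power dissipation

# Allowed inlet = Tj_max - power * R (ignoring coolant heat-up along the loop)
for r_kw in (0.02, 0.05, 0.17):
    t_inlet_max = t_junction_max_c - power_w * r_kw
    print(f"R = {r_kw:.2f} K/W -> max coolant inlet {t_inlet_max:.0f} C")
# A low-resistance embedded plate tolerates warm water that may need no
# chiller at all; a high-resistance stack forces expensive chilled water.
```

The warmer the allowable inlet, the more hours a facility can run on "free cooling" instead of mechanical chillers, which is where the operating-cost savings come from.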
Getting cooler embedded packages is not sufficient, however. OEMs, data center operators, and infrastructure providers must also come on board for this revolution to take place.
Given the ever-increasing hunger for more power packed into smaller areas, the question isn't whether we will eventually adopt this approach. The only question is when.