Today we continue our Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. In today’s discussion, our panel of experienced data center executives – Jack Pouchet of Vertiv, Intel’s Jeff Klaus, Erich Sanchack of Digital Realty and Dennis VanLith of Chatsworth Products – discuss trends in server rack power density and what they mean for cooling.
The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier.
Data Center Frontier: For some time we have seen predictions that rack power density would begin to increase, prompting wider adoption of liquid cooling and other advanced cooling techniques. What’s the state of rack density in 2018, and is density trending higher at all?
Jack Pouchet: Rack densities are top of mind for many within the industry, and for the most part we are seeing modest increases – from an average of about 6kW/rack to perhaps 8kW/rack. The colocation industry provides us with a fine barometer for current density and future trends, and large colocation facilities are being built out to accommodate somewhere in the range of 8 to perhaps 10kW/rack with provisions for up to 30kW/rack in select positions or zones. These facilities typically are air-cooled.
We are seeing some enterprise facilities that continue to consolidate hardware, virtualize applications, and migrate to much higher density racks of 20 to 35kW. These racks often are cooled with either pumped refrigerant or rear-door heat exchangers. The existing chilled water (CW) or direct expansion (DX) cooling system also is upgraded for efficiency and optimized controls, as the balance of the IT racks is still in the 8 to 10kW range. With all of these new compute capabilities and data analytics comes a requirement for ever more storage, and the good news for data center operators is that storage racks are relatively low-density – making them easier to cool and manage. It’s not uncommon to see facilities with 60 percent of the racks outfitted with storage.
Bottom line: Higher densities are coming – and with them alternative cooling technologies – but it will be a gradual evolution. Sudden, drastic change would require fundamental changes to a data center’s form factor, and in most cases that is not going to happen.
Dennis VanLith: Rack density is growing slightly, but on average not to the levels served by direct liquid cooling. Rack densities average 8 kW to 16 kW, and there is a practical limit to powering equipment beyond 16 kW per rack if you are using standard 208/230 VAC power and redundant power feeds. It is possible to cool a 30 kW rack with air in a system that uses containment with a good seal and low leakage, so traditional perimeter cooling and other air-based systems are still very good options. Direct liquid cooling, which is focused on 30 kW to 50 kW racks, is a good solution for high-density compute applications and some containerized solutions.
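The 16 kW ceiling VanLith describes falls out of ordinary branch-circuit math. The sketch below is a minimal, hedged illustration (the 60 A breaker size and 80% continuous-load derate are common assumptions, not figures from the interview): with redundant A/B feeds, each feed must be able to carry the full IT load alone, so the usable rack budget is one feed's derated capacity.

```python
import math

def rack_power_kw(volts: float, breaker_amps: float, phases: int = 3,
                  derate: float = 0.8) -> float:
    """Usable power from one rack feed, applying the common 80%
    continuous-load derate to the breaker rating."""
    amps = breaker_amps * derate
    if phases == 3:
        # Three-phase power: P = sqrt(3) * V(line-line) * I
        return math.sqrt(3) * volts * amps / 1000.0
    return volts * amps / 1000.0

# A common high-density feed: 208 V three-phase on a 60 A breaker.
# With a redundant A/B pair, the rack budget is one feed's capacity.
print(round(rack_power_kw(208, 60), 1))  # ≈ 17.3 kW per feed
```

Under these assumptions a single 208 V / 60 A three-phase feed tops out around 17 kW, which is consistent with the "practical limitation beyond 16 kW per rack" cited above; going denser means bigger circuits, higher voltage, or more feeds.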
It is also important to recognize that compute power is increasing. As chip manufacturers add cores to processors (CPUs), we continue to get the benefit of Moore’s law driving increases in compute per watt. Further, the size of the CPU package continues to increase, so the heat flux from the CPU is decreasing. In short, servers are more power efficient and support higher utilization.
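The heat-flux point is easy to see numerically. Heat flux is simply power dissipated over package area, so if area grows faster than power draw, the flux (and thus the per-square-centimeter cooling burden) falls. The numbers below are purely illustrative, not vendor-specific:

```python
def heat_flux_w_per_cm2(tdp_watts: float, area_mm2: float) -> float:
    """Heat flux = power / area; 100 mm^2 = 1 cm^2."""
    return tdp_watts / (area_mm2 / 100.0)

# Illustrative: power grows 150 W -> 200 W, but package area grows
# 300 mm^2 -> 700 mm^2, so the flux the cooler must handle drops.
print(heat_flux_w_per_cm2(150, 300))  # 50.0 W/cm^2
print(round(heat_flux_w_per_cm2(200, 700), 1))  # 28.6 W/cm^2
```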
So rack densities will probably not climb significantly, but the amount of compute (utilization) per rack will. The only wild card here is AI. The PCIe accelerator cards used for AI typically run at 100 percent power while models are being trained, so when AI takes off, expect extremely large, sustained loads that raise the average workload across the data center. However, I still think direct liquid cooling will be extremely niche, and only for very specific use cases. With power densities averaging below 16 kW, there is no need for liquid cooling.
Erich Sanchack: Recent advancements in storage technology are increasing rack density exponentially. We can fit so much more storage into so much less space than we could even a year ago, much less five years ago, and it’s great. That said, the amount and types of data that our customers are generating and using are growing at a similar or even greater pace, meaning that our physical footprint and power/cooling needs and capabilities are trending up, vs. down. We have a very advanced strategy and roadmap that allows us to lead the change, as opposed to falling victim to it, which is why we’ve achieved five 9s uptime for the last 10 years in a row.
There are some limiting factors on how much density can be achieved. Laws of physics do come into play. We may have a terabyte where we used to have a gigabyte, but the power usage is still fairly similar at this point. There will be greater efficiencies when we start to reduce the cooling needs, through the use of quantum computing, storing information deeper in the silicon vs. more broadly on the surface, and adoption of other advanced technologies. Quantum is going to be the part that really, really drives the exponential jump in the density of those racks.
Here’s a fun fact – you could store all the data in the world on a pencil if you stored it in volume versus on the surface of the pencil. All the molecules of the pencil itself could store all the data usage that we use globally. Quantum computers – we have three of them in the United States in three distinct locations – they’re able to store, they’re just not able to retrieve. That’s a little bit of a problem, currently. As we solve for that and other issues, though, we will see these efficiencies increase dramatically.
Jeff Klaus: Increased density is always desired, but it faces significant headwinds in the form of physical constraints. These constraints in existing power and cooling infrastructure are hard to overcome. Liquid cooling also appeals to only a subset of tenants, because it usually requires more prescribed rack configurations.
Finding ways to optimize the hardware in the racks – through real-time monitoring of power and thermals, and power control at the server – can drive density just as much as other, more costly techniques. This also ties into AI data streams and identifying actionable data on usage and capacity.