Balancing electricity demands and costs of high-performance computing
By Gina Mantica for the Hariri Institute
Some computational problems are so complex that they require many times more calculations than a typical laptop can handle. High-performance computing (HPC) combines many computer servers to carry out these large-scale computations, which often operate on massive data sets. But HPC data centers require immense amounts of power, and limiting that power consumption, though beneficial for the environment and for costs, can degrade performance.
Hariri Institute Research Fellows Ayse Coskun (ECE) and Ioannis Paschalidis (ECE, BME, SE, CISE) are working to help data centers create programs that prioritize performance while responding to power demands for HPC and keeping electricity costs low. Their findings were published recently in the IEEE Transactions on Sustainable Computing.
Q&A
The Hariri Institute asked Coskun and Daniel Wilson, a PhD student in Computer Engineering and co-author on the study, about HPC and sustainable computing programs.
Q: Why does HPC require so much electricity?
The processors in a data center can perform an enormous number of calculations in a short amount of time, and they are not the only thing drawing power! Those computations have effects beyond the computing servers themselves, such as demanding more power from data servers, networking equipment, and cooling infrastructure.
Q: What are the benefits to regulating data center power consumption locally, nationally, and globally?
By adjusting the power consumption of a data center, or of a collection of centers, we can address challenges in the power grid. For example, renewable generation resources may not be available in high capacities everywhere, which creates challenges in matching supply and demand. Data centers can regulate their power (or shift loads to other centers, when possible) to absorb this volatility, and as a result enable wider adoption of green energy. In the same way, we can increase the cost-efficiency of data centers and save on energy costs.
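The load-shifting idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the method from the study: the site names, capacity figures, and greedy routing rule are all invented for the example.

```python
# Hypothetical sketch: route incoming jobs to the data center with the most
# spare renewable capacity, falling back to grid power only when headroom
# runs out. All names and numbers are illustrative.

def route_jobs(jobs_kw, sites):
    """Greedily assign each job (power draw in kW) to the site with the
    largest remaining renewable headroom."""
    assignment = {}
    headroom = dict(sites)  # site -> renewable kW still available
    for job, draw in jobs_kw.items():
        best = max(headroom, key=headroom.get)
        assignment[job] = best
        headroom[best] -= draw  # may go negative: grid power covers the rest
    return assignment

sites = {"site_a": 120.0, "site_b": 40.0}  # renewable kW available right now
jobs = {"sim_1": 50.0, "sim_2": 60.0, "sim_3": 30.0}
print(route_jobs(jobs, sites))
```

In practice the decision would also weigh network transfer costs, data locality, and electricity prices at each site; the sketch only captures the core intuition of sending work where clean power is plentiful.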
Q: How are you working to reduce the electricity cost of data centers?
Our paper addresses the challenge of constraining the undesirable performance impacts that result from limiting power consumption in a data center. One of our goals is to help data centers participate in “demand response” programs, which allow an electricity consumer to respond to requests from the power operator (in exchange, consumers get better contracts for their electricity usage). Through demand response programs, a consumer can spend less money on energy, and power operators can more reliably supply enough energy for a stable electrical grid. Managing the impact such programs have on the performance of user workloads is essential for any practical adoption of data center demand response.
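One demand-response step can be pictured as a simple budgeting decision: when the operator requests a power reduction, cap the jobs whose performance suffers least until the target is met. This is a toy sketch of that intuition only, not the algorithm from the paper; the job names, savings figures, and sensitivity scores are invented.

```python
# Hypothetical sketch of one demand-response event: meet a requested power
# reduction by capping the least performance-sensitive jobs first, so the
# overall slowdown stays small. All values are illustrative.

def shed_power(jobs, reduction_kw):
    """jobs: list of (name, kW saved if capped, performance sensitivity 0..1).
    Returns (names of jobs to cap, unmet reduction in kW)."""
    capped, remaining = [], reduction_kw
    for name, savings, _sensitivity in sorted(jobs, key=lambda j: j[2]):
        if remaining <= 0:
            break
        capped.append(name)
        remaining -= savings
    return capped, max(remaining, 0.0)

jobs = [("mpi_solver", 25.0, 0.9),  # tightly coupled: slows badly when capped
        ("render", 30.0, 0.1),      # tolerates power limits well
        ("batch_etl", 20.0, 0.3)]
print(shed_power(jobs, 40.0))
```

A real controller would set graduated power caps per server rather than a binary capped/uncapped choice, but the ordering-by-sensitivity idea is the same.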
Q: Does using less energy come at a computational cost?
It depends, but that’s not always bad news! Servers offer a high degree of control over their power consumption, and some workloads running on a server are more sensitive to power management than others, so a careful application of power limits can reduce energy consumption with only minor impacts on computational performance.
Q: Are the data center energy management methods in development broadly applicable? Even to countries with less computing power?
While not all electricity markets offer demand response options, data centers may still have other reasons to care about performance-aware power budgeting. Our methods aim to make controlled tradeoffs between performance and power, which could also be helpful in related challenges such as responding to variable energy pricing and respecting commitments to a power contract.
***
In a new project funded by the Hariri Institute and the Institute for Sustainable Energy, Coskun and Paschalidis, in collaboration with Richard Stuebi at Questrom School of Business, will study the impact of data center demand response under a variety of real-world conditions from different markets across several countries.