Since the emergence of high-performance computing (HPC) after World War II, computing resources and simulation complexity have grown rapidly, enabling major scientific and technological advances. However, large numerical simulations carry substantial environmental costs, both to fabricate the hardware and to power the computations.
In this talk, we discuss the power consumption of HPC and efforts to improve computational efficiency at both the software and hardware levels. Analyzing data from the TOP500 supercomputer ranking, we show that despite large and steady efficiency gains since 1970, the total power footprint of HPC keeps rising. Much as in Jevons' paradox, the gained efficiency is harnessed to increase model complexity and to extend simulation to new research and industry domains.
In recent years, AI models, specifically neural networks, have grown rapidly in complexity, reaching billions of parameters. This growth enables flexibility, finer model accuracy, and impressive feats such as text generation by large language models that is difficult to distinguish from human writing. However, it also comes with a spike in power consumption, both during training and inference.
To curb energy usage, we can reduce model complexity to match the accuracy actually required by the problem at hand, either by computing with less precision or by using a simpler model (which may also improve explainability). This requires weighing the allocated resources against the expected requirements and outcomes of the computation, raising questions that are specific, contextual, and ultimately political. Faced with a shrinking energy budget, we should regulate our usage of computing resources and use numerical simulation judiciously.
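The "compute with less precision" lever can be made concrete with a minimal NumPy sketch (illustrative only, not code from the talk): storing data in float32 instead of float64 halves the memory traffic, a major driver of energy consumption, at the cost of a small, bounded rounding error.

```python
# Illustrative sketch (assumption: not the speaker's code) of reduced
# precision: casting float64 data to float32 halves the bytes moved
# while introducing only a tiny elementwise rounding error.
import numpy as np

rng = np.random.default_rng(0)
x64 = rng.standard_normal(1_000_000)      # float64 baseline data
x32 = x64.astype(np.float32)              # reduced-precision copy

# float32 uses half the bytes, hence roughly half the memory traffic
ratio = x64.nbytes // x32.nbytes          # -> 2

# Worst-case elementwise rounding error for this data
max_err = np.max(np.abs(x32.astype(np.float64) - x64))
print(ratio, max_err)
```

Whether such an accuracy loss is acceptable depends on the problem's required precision, which is exactly the kind of trade-off the abstract argues must be weighed case by case.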
Pablo de Oliveira Castro (https://sifflez.org) is a professor at the University of Versailles Saint Quentin and co-coordinates the first year of the Université Paris-Saclay master's degree in High-Performance Computing and Simulation. He received his Ph.D. in 2010 on Parallel Data Flow Languages at the CEA. His research interests include floating-point arithmetic, compilers, and high-performance computing. He also participates in an interdisciplinary group, Écopolien (https://ecopolien.org), that brings together researchers and teachers concerned about the climate crisis.