Update: AMD 30x25 Energy Efficiency Goal in High-Performance Computing and AI-Training

MARK PAPERMASTER, Executive Vice President and Chief Technology Officer

Published 05-06-22

Submitted by AMD

Figure: "Application-specific optimization provides better performance per Watt." Application-specific accelerated compute nodes enable greater efficiency.

As more and more devices become “smart devices” with embedded processors, internet connectivity, and often cameras, data creation continues to explode at an exponential pace. Artificial Intelligence (AI) and High-Performance Computing (HPC) are transforming the computing landscape, enabling this massive data trove to be analyzed for higher-quality analytics, automated services, enhanced security, and many other purposes. The challenge: the scale of these advanced computations demands more and more energy.

As a leader in creating high-performance processors to address the world’s most demanding analytics, AMD has prioritized energy efficiency in our product development. We do this by taking a holistic approach to power optimization across architecture, packaging, connectivity, and software. Our focus on energy efficiency aims to reduce costs, preserve natural resources, and mitigate climate impacts.

Prioritizing energy efficiency at AMD is not new. In fact, in 2014 we voluntarily set ourselves a goal to improve the typical-use energy efficiency of our mobile processors 25x by 2020. We met and exceeded this goal, achieving a 31.7x improvement.

Last year we announced a new vision, a 30x25 goal: to achieve a 30x energy efficiency improvement by 2025, from a 2020 baseline, for our accelerated data center compute nodes.i Built with AMD EPYC™ CPUs and AMD Instinct™ accelerators, these nodes are designed for some of the world’s fastest-growing computing needs in AI training and HPC applications. These applications are essential to scientific research in climate prediction, genomics, and drug discovery, as well as to training AI neural networks for speech recognition, language translation, and expert recommendation systems. The computing demands for these applications are growing exponentially. Fortunately, we believe it is possible to optimize energy use for these and other applications of accelerated compute nodes through architectural innovation.

AMD, along with our industry, understands the opportunity for data center efficiency gains to help reduce greenhouse gas emissions and increase environmental sustainability. For example, if all global AI and HPC server nodes were to make similar gains, we project up to 51 billion kilowatt hours (kWh) of electricity could be saved from 2021-2025 relative to baseline industry trends, amounting to $6.2B USD in electricity savings as well as carbon benefits from 600 million tree seedlings grown for 10 years.ii
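As a back-of-the-envelope check on the projected savings above (a sketch using the electricity price of $0.12/kWh stated in the article's footnotes):

```python
# Rough check of the projected electricity savings cited above.
# Inputs are from the article: 51.4 billion kWh saved, $0.12 per kWh.
kwh_saved = 51.4e9          # cumulative kWh saved, 2021-2025
price_per_kwh = 0.12        # USD per kWh

dollar_savings = kwh_saved * price_per_kwh
print(f"${dollar_savings / 1e9:.1f}B")  # ≈ $6.2B
```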

Practically speaking, achieving the 30x goal means that in 2025, the energy required for these AMD accelerated compute nodes to complete a single calculation will be approximately 97% lower than in 2020. Getting there will not be easy. To achieve this goal, we will need to increase the energy efficiency of an accelerated compute node at a rate more than 2.5x faster than the aggregate industry-wide improvement made during the period 2015-2020.iii
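The roughly 97% figure follows directly from the 30x target; a quick sketch:

```python
# A 30x improvement in energy efficiency means each calculation uses
# 1/30 of the baseline energy, i.e. roughly a 97% reduction.
improvement = 30
energy_fraction = 1 / improvement   # ≈ 0.033 of the 2020 energy per calculation
reduction = 1 - energy_fraction     # ≈ 0.967
print(f"{reduction:.1%}")  # 96.7%
```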

One-Year Progress Update

So how are we doing? Nearly midway through 2022, we are on track toward achieving 30x25, having reached a 6.79x improvement in energy efficiency from the 2020 baseline using an accelerated compute node powered by one 3rd Gen AMD EPYC CPU and four AMD Instinct MI250X GPUs. Our progress report uses a measurement methodologyiv validated by renowned compute energy efficiency researcher and author Dr. Jonathan Koomey.

Figure: 2022 update on the 30x25 energy efficiency goal. AMD's actual achievements are on track to reach the 30x goal and are well above the 2015-2020 industry improvement trend.

The business-as-usual “baseline industry trend” projects global energy use for 2020-2025 by extending the historical trend observed in 2015-2020 data. The AMD goal trendline shows global energy use given the efficiency gains represented by the AMD 30x25 goal, with the desirable result of lower energy consumption. The AMD actual trendline shows global energy use based on the AMD compute node energy efficiency gains reported to date.

Figure: Comparative global energy use projections for data center compute nodes running AI-training and HPC workloads. Source: AMD internal data.

While there is more work ahead to reach our 30x25 goal, I am pleased by the work of our engineers and encouraged by the results so far. I invite you to check in with us as we continue to report on our progress annually.


iIncludes AMD high-performance CPU and GPU accelerators used for AI training and High-Performance Computing in a 4-accelerator, CPU-hosted configuration. Goal calculations are based on performance scores as measured by standard performance metrics (HPC: Linpack DGEMM kernel FLOPS with 4k matrix size. AI training: lower-precision, training-focused floating-point math GEMM kernels such as FP16 or BF16 FLOPS operating on 4k matrices) divided by the rated power consumption of a representative accelerated compute node, including the CPU host, memory, and 4 GPU accelerators.

iiScenario based on all AI and HPC server nodes globally making similar gains to the AMD 30x goal, resulting in cumulative savings of up to 51.4 billion kilowatt-hours (kWh) of electricity from 2021-2025 relative to baseline 2020 trends. Assumes $0.12 per kWh x 51.4 billion kWh = $6.2 billion USD. Metric tonnes of CO2e emissions, and the equivalent estimate for tree plantings, are based on entering the electricity savings into the U.S. EPA Greenhouse Gas Equivalency Calculator on 12/1/2021.

iiiBased on 2015-2020 industry trends in energy efficiency gains and data center energy consumption in 2025.

ivCalculation includes 1) base-case kWh use projections in 2025 conducted with Koomey Analytics based on available research and data, including segment-specific projected 2025 deployment volumes and data center power utilization effectiveness (PUE) for GPU HPC and machine learning (ML) installations, and 2) AMD CPU socket and GPU node power consumption incorporating segment-specific utilization (active vs. idle) percentages, multiplied by PUE to determine actual total energy use for the calculation of performance per Watt.

6.79x = (base-case HPC node kWh use projection in 2025 x AMD 2022 perf/Watt improvement using DGEMM and typical energy consumption + base-case ML node kWh use projection in 2025 x AMD 2022 perf/Watt improvement using ML math and typical energy consumption) / (2020 perf/Watt x base-case projected kWh usage in 2025). For more information on the goal and methodology, visit
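The 6.79x figure blends the HPC and ML perf/Watt gains, weighting each segment by its projected 2025 energy use. A minimal sketch of that weighting (all input numbers below are hypothetical placeholders, not AMD data):

```python
# Sketch of the energy-use-weighted blending described in the footnotes:
# each segment's perf/Watt improvement is weighted by that segment's
# projected 2025 energy consumption.
def blended_improvement(hpc_kwh, ml_kwh, hpc_gain, ml_gain):
    """Weight per-segment perf/Watt gains by projected 2025 energy use."""
    total_kwh = hpc_kwh + ml_kwh
    return (hpc_kwh * hpc_gain + ml_kwh * ml_gain) / total_kwh

# Hypothetical example: 40% of projected energy use in HPC nodes with a
# 5x gain, 60% in ML nodes with an 8x gain.
print(blended_improvement(40.0, 60.0, 5.0, 8.0))  # 6.8
```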




About AMD

For more than 50 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. Billions of people, leading Fortune 500 businesses and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, LinkedIn and Twitter pages.
