News Archives

RSS Feed

[Colloquium] The Case for Efficiency in High-Performance Computing

October 21, 2011

Watch Colloquium: M4V file (684 MB)

  • Date: Friday, October 21, 2011 
  • Time: 12:00 pm – 12:50 pm
  • Place: Centennial Engineering Center 1041

David Lowenthal
Department of Computer Science, The University of Arizona

Traditionally, high-performance computing (HPC) has been performance-driven, with speedup serving as the dominant metric. That is no longer the case: other metrics, such as power, energy, and total cost of ownership, have become important. Underlying all of these is the notion of efficiency.

In this talk we focus on two areas in which efficiency in HPC is important: power efficiency and performance efficiency. First, we discuss the Adagio run-time system, which uses dynamic voltage and frequency scaling (DVFS) to improve power efficiency in HPC programs while sacrificing little performance. Adagio locates tasks off the critical path at run time and executes them at lower frequencies on subsequent timesteps.

Second, we discuss our work to improve performance efficiency. We describe a regression-based technique to accurately predict program scalability. We have applied our technique both to strong scaling, where the problem size is fixed as the number of processors increases, and to time-constrained scaling, where the problem size instead grows with the number of processors so that the total run time stays constant. With the former, we avoid processor counts that result in inefficiency; with the latter, we enable accurate time-constrained scaling, which application scientists commonly desire but which is nontrivial to achieve. We conclude with some ideas about where efficiency will matter in HPC in the future.
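The critical-path idea behind Adagio can be illustrated with a toy sketch (this is not the actual Adagio implementation): processes whose per-timestep task finishes before the slowest one have slack and can run the next timestep at a lower frequency without delaying the overall computation. The frequency list and the inverse-frequency run-time model below are simplifying assumptions for illustration.

```python
# Hypothetical set of available CPU frequencies (P-states), in GHz.
AVAILABLE_FREQS_GHZ = [1.2, 1.6, 2.0, 2.4]

def choose_frequencies(task_times, freqs=AVAILABLE_FREQS_GHZ):
    """For each process, pick the lowest frequency whose slowdown still
    finishes within the critical-path time. Assumes run time scales
    inversely with frequency, which is a simplification.
    task_times: per-process task durations measured at the highest frequency."""
    f_max = max(freqs)
    critical = max(task_times)  # the slowest task defines the critical path
    chosen = []
    for t in task_times:
        # Lowest frequency f such that the scaled time t * (f_max / f)
        # does not exceed the critical-path time; f_max always qualifies.
        ok = [f for f in freqs if t * (f_max / f) <= critical]
        chosen.append(min(ok))
    return chosen

# Example: four processes; only the slowest stays at full frequency.
print(choose_frequencies([1.0, 2.0, 1.5, 0.6]))
```

The process on the critical path keeps the maximum frequency, while processes with slack are slowed just enough that their tasks still complete in time, saving power at little or no performance cost.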
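The regression-based prediction for the strong-scaling case can be sketched as follows. This is a simple stand-in for the talk's technique, not its actual model: it assumes run time follows T(p) = serial + parallel/p (an Amdahl-style form) and fits that model by least squares to measurements at small processor counts, then extrapolates to larger counts.

```python
def fit_strong_scaling(procs, times):
    """Least-squares fit of T(p) = b + a * (1/p) via closed-form simple
    linear regression with x = 1/p. Returns (a, b): the parallel and
    serial components of the run time."""
    xs = [1.0 / p for p in procs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(times) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, times))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

def predict(a, b, p):
    """Predicted run time on p processors under the fitted model."""
    return b + a / p

# Synthetic measurements: a 2 s serial part and a 100 s parallel part.
procs = [2, 4, 8, 16]
times = [predict(100.0, 2.0, p) for p in procs]
a, b = fit_strong_scaling(procs, times)
print(predict(a, b, 64))  # extrapolated run time on 64 processors
```

Once fitted, the model makes the inefficiency visible: as p grows, the a/p term shrinks toward the serial floor b, so adding processors beyond some count yields little speedup and can be avoided.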


Bio: David Lowenthal is a Professor of Computer Science at the University of Arizona. He received his Ph.D. from the University of Arizona in 1996 and was on the faculty at the University of Georgia from 1996 to 2008 before returning to Arizona in 2009. His research centers on addressing fundamental problems in parallel and distributed computing, such as scalability prediction and power/energy reduction, from a system-software perspective. His current focus is on solving pressing power and energy problems that will allow exascale computing to become a reality within the decade.