To introduce you to exascale computing, as well as its challenges, we interviewed the distinguished Professor Jack Dongarra (University of Tennessee), an internationally renowned expert in high-performance computing and the leading scientist behind the TOP500 (http://www.top500.org/), a list that ranks supercomputers according to their performance.
Exascale computing, the ability of a computer system to perform a million trillion (10^18) operations per second, has long been anticipated as the next major step in computer engineering. The unprecedented levels of computing power offered by an exascale machine are expected to significantly enhance our understanding of many of the complex phenomena of our time, since it will allow us to simulate and study their behavior in greater detail and with higher fidelity.
However, the importance of exascale computing does not lie solely in the ability to perform this massive number of computations. In fact, to paraphrase a famous Greek poet, “it is not the destination that matters, but the journey itself.” Conquering exascale computing will require considerable advances in both the hardware and the software that we use today to operate our supercomputers. These advances will benefit virtually every consumer electronics product, from smartphones to cameras, and will open new paths of research. This is not a new story, though. Supercomputing has long paved the road for the faster and more reliable processors found in today’s desktops, laptops, and smartphones. To see this, consider that the smartphone you hold in your hands is orders of magnitude more powerful than the computing systems NASA used to launch its spacecraft back in 1969, the very year Neil Armstrong set foot on the moon.

Interview with Professor Jack Dongarra
XRDS: There has been a lot of excitement regarding exascale computing and the extreme computing power it will make available to the scientific community as well as industry. What should we expect in the exascale computing era?
Jack Dongarra (JD): Exascale computing will provide capability benefits to a broad range of industries, including energy, pharmaceutical, aircraft, automobile, entertainment, and others. More powerful computing capability will allow these diverse industries to more quickly engineer superior new products that could improve a nation’s competitiveness. In addition, there are considerable flow-down benefits that will result from meeting both the hardware and software high performance computing challenges. These would include enhancements to smaller computer systems and many types of consumer electronics, from smartphones to cameras.
XRDS: What are some of the application domains that would benefit from exascale computing, and how will exascale computing help their progress?
JD: Some of these domains are: weather and climate forecasting, oil exploration, bio-medical research, high-end equipment development, new energy research, animation design, new material research, engineering design, simulation and analysis, remote sensing data processing, financial risk analysis, etc.
Supercomputers enable simulation – that is, the numerical computations to understand and predict the behavior of scientifically or technologically important systems – and therefore accelerate the pace of innovation. Simulation enables better and more rapid product design. Simulation has already allowed Cummins to build better diesel engines faster and less expensively, Goodyear to design safer tires much more quickly, Boeing to build more fuel-efficient aircraft, and Procter & Gamble to create better materials for home products. Simulation also accelerates the progress of technologies from laboratory to application. Better computers allow better simulations and more confident predictions. The best machines today are 10,000 times faster than those of 15 years ago, and the techniques of simulation for science and national security have been improved.
Sustaining and more widely exploiting the U.S. competitive advantage in simulation requires concerted efforts toward two distinct goals. First, we must continue to push the limits of hardware and software. Second, to remain competitive globally, U.S. industry must better capture the innovation advantage that simulation offers. But bringing such innovation to large and small firms in diverse industries requires public-private partnerships to access simulation capabilities largely resident in the national laboratories and universities.
XRDS: What are the main challenges we face en route to exascale computing? When exascale computing becomes a reality, how is it going to affect the way we program high-performance machines today?
JD: Challenges at the hardware level: Power consumption today is in the 10 MW range for our largest systems. Without changes in the hardware, this could grow to 200 MW, which is clearly unacceptable.
Data movement challenges: Achieving adequate rates of data transfer, or bandwidth, and reducing time delays, or latency, between the levels of the memory hierarchy.
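To make the cost of data movement concrete, the short C sketch below (array size and stride are arbitrary illustrative choices, not figures from the interview) sums the same array twice: once sequentially and once with a large stride. Both versions perform the same number of additions, but the strided walk wastes most of every cache line it fetches, so its effective bandwidth, and hence its speed, is usually far worse.

```c
/* Illustrative sketch: effect of access pattern on effective memory bandwidth.
 * Array size and stride are arbitrary choices for demonstration. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 26)   /* 64M doubles, ~512 MB: larger than any cache */
#define STRIDE 16     /* 16 doubles = 128 bytes, skips whole cache lines */

static double sum_seq(const double *a) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++) s += a[i];
    return s;
}

static double sum_strided(const double *a) {
    double s = 0.0;
    for (size_t j = 0; j < STRIDE; j++)
        for (size_t i = j; i < N; i += STRIDE) s += a[i];
    return s;
}

int main(void) {
    double *a = malloc(N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++) a[i] = 1.0;

    clock_t t0 = clock();
    double s1 = sum_seq(a);
    double t_seq = (double)(clock() - t0) / CLOCKS_PER_SEC;

    t0 = clock();
    double s2 = sum_strided(a);
    double t_str = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Both sums read the same elements; the strided walk is typically much
     * slower because each fetched cache line contributes only one element. */
    printf("sequential: %.3f s (sum %.0f)\n", t_seq, s1);
    printf("strided:    %.3f s (sum %.0f)\n", t_str, s2);
    free(a);
    return 0;
}
```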
Reducing power requirements: Based on current technology, scaling today’s systems to an exaflop level would consume more than a gigawatt of power, roughly the output of Hoover Dam. Reducing the power requirement by a factor of at least 100 is a challenge for future hardware and software technologies.
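A quick back-of-envelope calculation shows what that factor of 100 means in energy terms. The tiny C program below takes the 1 GW figure and the factor-of-100 target from the paragraph above; everything else is plain unit conversion.

```c
/* Back-of-envelope energy-per-operation arithmetic for an exaflop machine.
 * The 1 GW figure and the factor-of-100 target come from the text above. */
#include <stdio.h>

int main(void) {
    const double exaflops = 1e18;    /* operations per second */
    const double power_now_w = 1e9;  /* ~1 GW with today's technology */
    const double reduction = 100.0;  /* improvement called for above */

    double j_per_op_now  = power_now_w / exaflops;   /* joules per operation */
    double j_per_op_goal = j_per_op_now / reduction;

    printf("today's technology: %.1f nJ per operation\n", j_per_op_now * 1e9);
    printf("target:             %.1f pJ per operation at %.0f MW\n",
           j_per_op_goal * 1e12, power_now_w / reduction / 1e6);
    return 0;
}
```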
Coping with run-time errors: Today’s systems have approximately 500,000 processing elements. By 2020, due to design and power constraints, the clock frequency is unlikely to change, which means that an exascale system will have approximately one billion processing elements. An immediate consequence is that the frequency of errors will increase (possibly by a factor of 1000) while timely identification and correction of errors becomes much more difficult.
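The arithmetic behind the growing error rate is simple: if component failures are independent, the system-level mean time between failures (MTBF) shrinks in proportion to the number of components. The C sketch below assumes a hypothetical per-component MTBF of 1,000 years purely for illustration; only the component counts echo the paragraph above.

```c
/* Rough MTBF scaling model: with independent component failures,
 * system MTBF = per-component MTBF / component count.
 * The 1,000-year per-component MTBF is an assumed, illustrative value. */
#include <stdio.h>

int main(void) {
    const double component_mtbf_h = 1000.0 * 365.0 * 24.0;  /* hours */
    const double counts[] = { 5e5, 1e9 };  /* today's systems vs. exascale */

    for (int i = 0; i < 2; i++) {
        double system_mtbf_h = component_mtbf_h / counts[i];
        printf("%.0e elements -> system MTBF ~ %.2f hours (%.0f seconds)\n",
               counts[i], system_mtbf_h, system_mtbf_h * 3600.0);
    }
    return 0;
}
```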
Exploiting massive parallelism: Mathematical models, numerical methods, and software implementations will all need new conceptual and programming paradigms to make effective use of unprecedented levels of concurrency.
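As a minimal illustration of what expressing parallelism explicitly looks like today, the C/OpenMP sketch below (an editorial example, not code from the interview) computes a dot product with a parallel reduction. Even this toy kernel must be written so that partial sums are combined without data races; at exascale, the same discipline has to hold across roughly a billion concurrent activities rather than a handful of threads.

```c
/* Minimal example of explicit parallelism: an OpenMP dot product.
 * Compile with, e.g., `gcc -fopenmp dot.c`. Problem size is arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void) {
    const size_t n = 1 << 24;   /* 16M elements, arbitrary */
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    if (!x || !y) return 1;

    for (size_t i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }

    double dot = 0.0;
    /* The reduction clause keeps a private partial sum per thread and
     * combines them at the end, avoiding races on `dot`. */
    #pragma omp parallel for reduction(+:dot)
    for (size_t i = 0; i < n; i++)
        dot += x[i] * y[i];

    printf("dot = %.1f using up to %d threads\n", dot, omp_get_max_threads());
    free(x); free(y);
    return 0;
}
```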
Biography of Prof. Jack Dongarra (Source: http://www.scientificcomputing.com)

Dr. Jack Dongarra is a Distinguished Professor in the Electrical Engineering and Computer Science Department at the University of Tennessee, Knoxville. Dr. Dongarra specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. He was awarded the IEEE Sid Fernbach Award in 2004; in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing; and in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing’s award for Career Achievement. He is a Fellow of the AAAS, ACM, IEEE, and SIAM and a member of the National Academy of Engineering.