
Gordon Bell Prize Winners Simulate Earth’s Mantle



ISM: What did you actually discover based on the simulation?
MG:
What we’ve discovered is a much more intimate coupling of the forces in the deep interior of the earth, hundreds of kilometers down, and how that coupling is key to driving the plates themselves. It’s highly likely that the coupling between the deeper interior and the plates is stronger than we thought.

The second aspect is that many people thought energy was dissipated in localized zones, for example near Japan, where the plates slide past one another. What we found is that the energy is released throughout the whole system. Those may seem like esoteric points, but we now have a framework that more realistically incorporates the forces inside the earth, and that we can link to observations on a global scale.

ISM: What were some of the issues with developing a simulation like this?
Cristiano Malossi (CM):
The starting point was the amazing work the group of Omar [Ghattas, director, Center for Computational Geosciences, University of Texas at Austin], Georg [Stadler], Mike [Gurnis] and Johann [Rudi, graduate student, University of Texas at Austin, co-advised by Ghattas and Stadler] did together over the previous five years. They studied the problem from many angles, including geophysics, math, models and parallelization.

This preliminary work was fundamental to arriving at such a realistic simulation in less than two years. When we started working together in early 2014, their models were running on systems with 10,000 cores, without multithreading. Our mission was to bring them to scale on Sequoia, a BG/Q system with 1.6 million cores, each of which can run four threads, so we're talking about 6 million parallel threads working simultaneously on the simulation. To reach excellent performance on such a supercomputer, every tiny region of the code must be optimized so that the work of all 6 million threads is perfectly balanced and synchronized.
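As a rough illustration of the execution model Malossi describes (a minimal hybrid MPI+OpenMP sketch, not the team's actual code), one MPI rank per core driving four OpenMP threads mirrors Sequoia's 1.6 million cores with four hardware threads each:

/* Minimal hybrid MPI+OpenMP sketch: one rank per core, four threads per
 * rank, mirroring Sequoia's 1.6M cores x 4 hardware threads (~6M workers).
 * Illustrative only; this is not the winning application's code. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nranks;
    /* Request threaded MPI so each rank can safely drive OpenMP threads. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel num_threads(4)
    {
        /* Each of the nranks * 4 workers owns an independent slice of the
         * domain; the slowest worker sets the pace, so balance is critical. */
        printf("rank %d of %d, thread %d of 4\n",
               rank, nranks, omp_get_thread_num());
    }

    MPI_Finalize();
    return 0;
}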

Therefore, the first part of the work focused on optimizing interprocess communication. This is important because when you have so many cores, you want to let each of them do as much work as possible independently. At the same time, to solve a global problem you must communicate information between those cores, and the more cores you use, the more information you exchange. The challenge is to optimize that communication without losing computation time, as the sketch below illustrates.
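A standard way to keep cores busy while messages travel, consistent with what Malossi describes but not taken from the winning code, is to overlap communication with independent computation using nonblocking MPI. In this hypothetical 1D halo exchange, the interior update proceeds while boundary values are in flight:

/* Hypothetical 1D halo exchange (function and array names are illustrative).
 * Post nonblocking sends/receives for boundary cells, compute the interior
 * while messages are in flight, and wait only when remote data is needed. */
#include <mpi.h>

void exchange_and_compute(double *in, double *out, int n,
                          int left, int right, MPI_Comm comm) {
    MPI_Request reqs[4];

    /* Halo traffic first, so the network works in the background. */
    MPI_Irecv(&in[0],     1, MPI_DOUBLE, left,  0, comm, &reqs[0]);
    MPI_Irecv(&in[n - 1], 1, MPI_DOUBLE, right, 1, comm, &reqs[1]);
    MPI_Isend(&in[1],     1, MPI_DOUBLE, left,  1, comm, &reqs[2]);
    MPI_Isend(&in[n - 2], 1, MPI_DOUBLE, right, 0, comm, &reqs[3]);

    /* Interior points need no remote data: compute during communication. */
    for (int i = 2; i < n - 2; i++)
        out[i] = 0.5 * (in[i - 1] + in[i + 1]); /* placeholder stencil */

    /* Only now block; then finish the two boundary-adjacent points. */
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
    out[1]     = 0.5 * (in[0] + in[2]);
    out[n - 2] = 0.5 * (in[n - 3] + in[n - 1]);
}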

Another problem we faced was managing the system's memory. Every little structure in the code that is unnecessarily duplicated on many cores wastes a tremendous amount of memory globally on a large system. On Sequoia, for instance, each duplicated structure is replicated across 1.6 million cores; thus a few megabytes on a single core become several terabytes wasted over the entire machine. As you can see, this number can easily explode.
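To make the arithmetic concrete: 4 MB duplicated on each of Sequoia's 1.6 million cores is roughly 6.4 TB machine-wide. One common remedy, sketched below under the assumption that MPI-3 shared-memory windows are available (the team's actual fix may have differed), is to keep a single copy of read-only data per node rather than per rank:

/* Keep one copy of a read-only table per node instead of per rank, using
 * an MPI-3 shared-memory window. Illustrative sketch, not necessarily the
 * approach used in the winning code. */
#include <mpi.h>
#include <stddef.h>

double *alloc_node_shared(size_t nbytes, MPI_Comm world, MPI_Win *win) {
    MPI_Comm node;
    int node_rank;

    /* Group ranks by shared-memory domain, i.e. by physical node. */
    MPI_Comm_split_type(world, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &node);
    MPI_Comm_rank(node, &node_rank);

    /* Only rank 0 on each node allocates; the others map the same memory. */
    double *base;
    MPI_Win_allocate_shared(node_rank == 0 ? (MPI_Aint)nbytes : 0,
                            sizeof(double), MPI_INFO_NULL, node, &base, win);
    if (node_rank != 0) {
        MPI_Aint size;
        int disp_unit;
        MPI_Win_shared_query(*win, 0, &size, &disp_unit, &base);
    }
    return base; /* every rank on the node now sees one shared copy */
}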

Another key aspect to consider when vying for the Gordon Bell Prize is that two different categories compete for the same prize: peak performance and scalability. Most submissions go for peak performance, a number that is easy to measure, quantify and compare. However, many real-world applications, such as ours, are not suited to this category because they're based on more complex partial differential equations, where computation, memory access (reads and writes) and communication are all equally important.

For these problems, the ability to cut the time-to-solution in half each time you double the size of the machine (that's what we call scalability) matters more than pure performance, even though both are evaluated. Therefore, to reach the result that won the prize, we not only had to balance the work of 6 million threads and show that we could scale up the problem size with practically no loss in parallel efficiency, but also show that our computational performance was the best possible for this type of problem.
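For reference, the two notions can be written as standard parallel-efficiency formulas (textbook definitions, not quoted from the interview), with T(P) the time-to-solution on P cores and P_0 the baseline core count:

% Strong scaling: fixed problem size; ideal is halving T(P) when P doubles.
\eta_{\mathrm{strong}}(P) = \frac{P_0 \, T(P_0)}{P \, T(P)}

% Weak scaling: problem size grows in proportion to P; ideal is constant T.
\eta_{\mathrm{weak}}(P) = \frac{T(P_0)}{T(P)}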

ISM: What’s the difference between explicit and implicit solvers?
Georg Stadler (GS):
The mantle flow problem we’re solving requires an implicit solver because the problem is globally coupled and instantaneous.
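To make the distinction concrete, here is a hypothetical 1D diffusion example (far simpler than the mantle's Stokes equations, and not from the team's code): an explicit step updates each point from its neighbors' already-known values, while an implicit step couples every unknown to every other and therefore requires a global solve, here approximated with Jacobi sweeps:

/* Hypothetical 1D diffusion, illustrating explicit vs. implicit updates.
 * Not the mantle-flow solver; just the structural difference. */
#include <string.h>

#define N 100

/* Explicit forward-Euler step: each new value depends only on the old
 * values of its immediate neighbors. No global system is solved. */
void explicit_step(double u[N], double r) {
    double old[N];
    memcpy(old, u, sizeof old);
    for (int i = 1; i < N - 1; i++)
        u[i] = old[i] + r * (old[i - 1] - 2.0 * old[i] + old[i + 1]);
}

/* Implicit backward-Euler step: solve (1 + 2r) u_i - r (u_{i-1} + u_{i+1})
 * = rhs_i for all i at once. Every unknown is coupled to the whole system,
 * so we iterate globally (Jacobi sweeps stand in for a real solver). */
void implicit_step(double u[N], double r, int sweeps) {
    double rhs[N], nxt[N];
    memcpy(rhs, u, sizeof rhs);
    for (int s = 0; s < sweeps; s++) {
        memcpy(nxt, u, sizeof nxt);
        for (int i = 1; i < N - 1; i++)
            nxt[i] = (rhs[i] + r * (u[i - 1] + u[i + 1])) / (1.0 + 2.0 * r);
        memcpy(u, nxt, sizeof nxt);
    }
}

In mantle convection, inertia is negligible, so the flow at any instant is set by the global balance of forces; a neighbor-by-neighbor explicit update cannot capture that, which is why a globally coupled implicit solve is required.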

Jim Utsler, IBM Systems Magazine senior writer, has been covering the technology field for more than a decade. Jim can be reached at jjutsler@provide.net.

