ACADEMIA
Science at Scale: SciDAC Astrophysics Code Scales to Over 200K Processors
Performing high-resolution, high-fidelity, three-dimensional simulations of Type Ia supernovae (SNe Ia), the largest thermonuclear explosions in the universe, requires not only algorithms that accurately represent the physics, but also codes that effectively harness the resources of the next generation of the most powerful supercomputers.
Through the Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, Lawrence Berkeley National Laboratory's Center for Computational Sciences and Engineering (CCSE) has developed two codes that can do just that.
MAESTRO, a low Mach number code for studying the pre-ignition phase of Type Ia supernovae, as well as other stellar convective phenomena, has just been demonstrated to scale to almost 100,000 processors on the Cray XT5 supercomputer "Jaguar" at the Oak Ridge Leadership Computing Facility. And CASTRO, a general compressible astrophysics radiation/hydrodynamics code that handles the explosion itself, now scales to over 200,000 processors on Jaguar, almost the entire machine. Both scaling studies simulated a pre-explosion white dwarf with a realistic stellar equation of state and self-gravity.
These and further results will be presented at the 2010 annual SciDAC conference to be held July 11-15 in Chattanooga, Tennessee.
Both CASTRO and MAESTRO are structured grid codes with adaptive mesh refinement (AMR), which focuses spatial resolution on particular regions of the domain. AMR can be used in CASTRO to follow the flame front as it evolves in time, for example, or in MAESTRO to zoom in on the center of the star where ignition is most likely to occur.
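As a rough illustration of how refinement regions might be chosen, the sketch below tags cells using a simple density-gradient criterion. The function, threshold, and 1-D setting are hypothetical; the actual CASTRO and MAESTRO criteria are problem-specific (a flame-front tracker, for example) and operate on the full hierarchy of rectangular grids.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical 1-D refinement tagging: mark cells whose local density
// gradient exceeds a threshold. In a real AMR cycle, tagged cells are
// then covered by finer rectangular grid patches.
std::vector<bool> tag_cells(const std::vector<double>& density,
                            double grad_threshold) {
    std::vector<bool> tags(density.size(), false);
    for (std::size_t i = 1; i + 1 < density.size(); ++i) {
        const double grad =
            std::fabs(density[i + 1] - density[i - 1]) / 2.0;
        if (grad > grad_threshold) tags[i] = true;
    }
    return tags;
}
```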
Like many other structured grid AMR codes, CASTRO and MAESTRO use a nested hierarchy of rectangular grids. This grid structure lends itself naturally to a hybrid OpenMP/MPI parallelization strategy. At each time step the grid patches are distributed to nodes, and MPI is used to communicate between the nodes. OpenMP is used to allow multiple cores on a node to work on the same patch of data. A dynamic load-balancing technique redistributes the patches so that the work stays evenly spread across the nodes.
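A minimal sketch of that hybrid strategy follows, with placeholder patch counts and a stand-in update in place of real hydrodynamics: each MPI rank (node) owns a subset of the patches, and OpenMP threads (cores) share the loop over cells within a patch.

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    // Request FUNNELED threading: only the main thread makes MPI calls.
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int npatches = 64;        // total grid patches (placeholder)
    const int ncells   = 128 * 128; // cells per patch, flattened (placeholder)

    // Each rank takes a subset of the patches; this simple round-robin
    // stands in for the codes' dynamic load balancing.
    for (int p = rank; p < npatches; p += nranks) {
        std::vector<double> patch(ncells, 1.0);

        // Multiple cores on the node cooperate on this one patch of data.
        #pragma omp parallel for
        for (int i = 0; i < ncells; ++i) {
            patch[i] *= 0.5; // stand-in for the real stencil update
        }
    }

    // Ghost-cell exchange between patches on different nodes would use
    // MPI point-to-point communication here.
    MPI_Finalize();
    return 0;
}
```

The round-robin distribution is only illustrative; in practice the work per patch varies as refinement comes and goes, which is why the codes rebalance dynamically.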
Because MAESTRO uses a low Mach number approach, its time step is controlled by the fluid velocity instead of the sound speed, allowing a much larger time step than a compressible code could take. This enables researchers to evolve the white dwarf for hours instead of seconds of physical time, thus allowing them to study the convection leading up to ignition. MAESTRO was developed in collaboration with astrophysicist Mike Zingale of Stony Brook University and, in addition to the SNe Ia research, is being used to study convection in massive stars, X-ray bursts, and classical novae.
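To see why this pays off, the sketch below compares the two CFL-style time-step estimates: a compressible step limited by the sound speed versus a low Mach number step limited only by the fluid velocity. The numbers are illustrative white dwarf values, not results from the simulations; the actual MAESTRO time-step logic involves additional terms.

```cpp
#include <cmath>
#include <iostream>

int main() {
    // Illustrative white dwarf numbers, not values from the simulations.
    const double dx  = 1.0e5; // cell width (cm)
    const double u   = 1.0e6; // convective fluid speed (cm/s)
    const double c   = 1.0e8; // sound speed (cm/s), so Mach ~ 0.01
    const double cfl = 0.5;   // CFL safety factor

    // Compressible step: limited by the speed of acoustic waves.
    const double dt_compressible = cfl * dx / (std::abs(u) + c);
    // Low Mach number step: limited only by the fluid velocity.
    const double dt_low_mach = cfl * dx / std::abs(u);

    std::cout << "compressible dt: " << dt_compressible << " s\n"
              << "low Mach dt:     " << dt_low_mach << " s\n"
              << "ratio:           " << dt_low_mach / dt_compressible << "\n";
    return 0;
}
```

With these numbers the low Mach number step is roughly 100 times larger, and in general the gain scales like the inverse of the Mach number.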
MAESTRO and CASTRO share a common software framework. Soon, scientists will be able to initialize a CASTRO simulation with data mapped from a MAESTRO simulation, thus enabling them to study SNe Ia from end to end, taking advantage of the accuracy and efficiency of each approach as appropriate.