ENGINEERING
Germans develop new programming model
- Written by: Tyler O'Neal, Staff Editor
- Category: ENGINEERING
The demand for faster, more accurate, and more energy-efficient supercomputer clusters is growing in every sector. The new asynchronous programming model GPI from Fraunhofer ITWM could become a key building block in realizing the next generation of supercomputers.
Supercomputing is the key information technology for a range of applications we have come to take for granted: everything from Google searches to weather forecasting, climate simulation, and bioinformatics requires an ever-increasing amount of computing power. Big-data analysis is further driving the demand for faster, more efficient, and more energy-saving clusters. The number of processors per system has now reached the millions and looks set to grow even faster in the future. Yet one thing has remained largely unchanged over the past twenty years: the programming model for these supercomputers. The Message Passing Interface (MPI) ensures that the microprocessors in these distributed systems can communicate. For some time now, however, it has been reaching the limits of its capability.
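The two-sided, MPI-style model the article refers to can be illustrated with a minimal sketch (plain Python threads standing in for ranks; this is a conceptual toy, not actual MPI): every transfer requires a matching send and receive, so both processes must actively participate before the data moves.

```python
import threading
import queue

# Toy model of two-sided (MPI-style) message passing:
# a transfer completes only when a send on one "rank"
# is matched by a receive posted on the other.
channel = queue.Queue()
result = {}

def rank0():
    channel.put([1, 2, 3])  # send side of the matched pair

def rank1():
    # the receiver must explicitly post a receive; until it does,
    # the communication cannot complete
    result["data"] = channel.get()

t0 = threading.Thread(target=rank0)
t1 = threading.Thread(target=rank1)
t0.start(); t1.start()
t0.join(); t1.join()
print(result["data"])  # → [1, 2, 3]
```

In bulk-synchronous codes, many such matched pairs are grouped between global synchronization points, which is exactly the pattern Lojewski cites as a scalability bottleneck.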
“I was trying to solve a calculation and simulation problem related to seismic data,” commented Dr. Carsten Lojewski from the Fraunhofer Institute for Industrial Mathematics ITWM. “But existing methods weren’t working. The problems were a lack of scalability, the restriction to bulk-synchronous, two-sided communication, and the lack of fault tolerance. So out of my own curiosity I began to develop a new programming model.” This development work ultimately resulted in the Global Address Space Programming Interface – or GPI – which uses the parallel architecture of supercomputers with maximum efficiency.
GPI is based on a completely new approach: an asynchronous communication model built on remote completion. With this approach, each processor can directly access all data, regardless of which memory it resides in and without affecting other parallel processes. Together with Rui Machado, also from Fraunhofer ITWM, and Dr. Christian Simmendinger from T-Systems Solutions for Research, Dr. Carsten Lojewski is receiving a Joseph von Fraunhofer Prize this year.
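The idea of one-sided communication with remote completion can be sketched as follows (again a conceptual toy in Python, not the GPI/GASPI API; the `Rank` class and `write_notify` helper are hypothetical names for illustration): each rank exposes a memory segment in a global address space, a remote writer deposits data directly into it and raises a notification, and the owner merely waits on that notification instead of posting a matching receive.

```python
import threading

# Toy model of one-sided, notification-based communication
# (a sketch of the GPI concept, NOT the real GPI/GASPI API):
# each "rank" exposes a memory segment; a remote writer deposits
# data directly and raises a notification; the segment's owner
# only checks the notification — remote completion — and never
# posts a matching receive.

class Rank:
    def __init__(self, size):
        self.segment = [0] * size          # exposed global-address-space segment
        self.notified = threading.Event()  # remote-completion notification

def write_notify(target, offset, data):
    """One-sided write into the target's segment, then notify."""
    target.segment[offset:offset + len(data)] = data
    target.notified.set()

r0, r1 = Rank(8), Rank(8)

# rank 0 writes into rank 1's memory while rank 1 is free to do other work
writer = threading.Thread(target=write_notify, args=(r1, 0, [7, 8, 9]))
writer.start()

r1.notified.wait()        # rank 1 only waits for remote completion
writer.join()
print(r1.segment[:3])     # → [7, 8, 9]
```

Because the writer never needs the target's cooperation, communication can overlap freely with computation, which is what makes the model asynchronous.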
Like MPI, GPI was developed not as a parallel programming language but as a parallel programming interface, which means it can be used universally. The demand for such a scalable, flexible, and fault-tolerant interface is large and growing, particularly given the exponential growth in the number of processors in supercomputers.
Initial sample implementations of GPI have worked very successfully: “High-performance computing has become a universal tool in science and business, a fixed part of the design process in fields such as automotive and aircraft manufacturing,” said Dr. Christian Simmendinger. “Take the example of aerodynamics: one of the simulation cornerstones in the European aerospace sector, the software TAU, was ported to the GPI platform in a project with the German Aerospace Center (DLR). GPI allowed us to significantly increase parallel efficiency.”
Even though GPI is a tool for specialists, it has the potential to revolutionize algorithmic development for high-performance software. It is considered a key component in enabling the next generation of supercomputers, which will be 1,000 times faster than the mainframes of today.