PHYSICS
New 64 Bit Supercomputer Goes Live at AIP
The arrival of computers powerful enough to perform billions of calculations per second, and to create realistic simulations that test theories of the universe's formation, has driven the rapid growth of Computational Astrophysics as a major field of research.
Matthias Steinmetz, director of the Astrophysical Institute Potsdam (AIP), says: “Computational Astrophysics as a field of research has become equal in size to traditional Observational and Theoretical Astrophysics, and the three fields have at last converged to bring clarity to those trying to explain the mysteries of the universe.”
One limitation of whole-universe supercomputer simulations is that they have failed to resolve the small-scale structure of the universe: individual galaxy clusters and galaxies. Frustratingly, the computational power simply hasn’t been there, and explanations of the universe have remained incomplete. With the newest generation of supercomputers now well within budget, this is about to change. “We are about to break through this barrier,” announces Matthias. “These are exciting times.”
Matthias’ determination to make the AIP one of the world’s premier research institutes for astrophysical supercomputing led the Federal Ministry of Education and Research to award the Institute a grant of £500,000.
High-performance computing and the analysis of large datasets are among the major fields of work at the AIP. The focal point of research is the use of particle and grid simulations to solve the gravitational N-body problem, as well as problems in the fields of hydrodynamics and magneto-hydrodynamics.
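To give a flavour of what such a simulation involves, the sketch below shows a minimal direct-summation step for the gravitational N-body problem. It is an illustration only, with arbitrary particle numbers and units; the Institute's production codes rely on far more sophisticated particle and grid methods.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt, G=1.0, softening=1e-3):
    """One kick-drift step of a direct-summation gravitational N-body model.

    pos, vel : (N, 3) arrays of positions and velocities
    mass     : (N,) array of particle masses
    The O(N^2) pairwise sum is illustrative only; production codes use
    tree or particle-mesh (grid) methods to reach cosmological N.
    """
    # Pairwise separation vectors r_ij = x_j - x_i
    diff = pos[None, :, :] - pos[:, None, :]
    dist2 = (diff ** 2).sum(axis=-1) + softening ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)  # no self-interaction
    acc = G * (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

    # Update velocities, then positions
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# Tiny random example: 1,000 particles evolved for a few steps
rng = np.random.default_rng(0)
N = 1000
pos = rng.normal(size=(N, 3))
vel = np.zeros((N, 3))
mass = np.full(N, 1.0 / N)
for _ in range(10):
    pos, vel = nbody_step(pos, vel, mass, dt=0.01)
print("centre of mass:", (mass[:, None] * pos).sum(axis=0))
```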
The grant was awarded to fund large-scale cosmological simulations intended to answer the big questions: How did the universe originate? How will it end? What is the structure and geometry of space? How did galaxies and galaxy clusters form from the ripples left over from the “big bang”? How do they develop with time? How were stars and planets, including our sun and solar system, born? And how do they age?
Hubba Hubba Hubble
“The Hubble Space Telescope has given us the deepest portrait of the visible universe ever achieved by humankind. Each day Hubble’s orbiting observatory generates 500 gigabytes of data, and its digital archive delivers more than 1,600 gigabytes of data per day,” explains Matthias. “To create simulations using the data provided by telescopes of this nature, we knew we would need to invest in computers that would give us significantly more processing power than the 50 Gflops we currently had available.”
Clusters at a tenth of the cost
Until recently, this would have meant investing millions of pounds in an expensive supercomputer and paying the manufacturer 10 to 20 percent of the purchase price each year to maintain it, but now there is a real alternative, offering true return on investment.
“The alternative is clusters of computers of standard design which are configured to perform many tasks in parallel, like a true parallel high-performance computer” explains Matthias. “The performance is the same as a traditional supercomputer, but at a tenth of the cost.”
Another big advantage of taking the cluster path rather than the shared-memory supercomputing path is the flexibility it affords. The cluster solution can be scaled up as and when the budget allows, and can be reconfigured to meet the needs of the day.
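As a rough illustration of how work is spread across such a cluster, the following sketch uses the widely available mpi4py message-passing bindings (an assumed tool here, not software named by the AIP) to divide a particle set across processes and combine a partial result on one node.

```python
# Illustrative only: splits an array of particles across cluster processes
# and reduces a partial sum back to rank 0. Assumes mpi4py and an MPI
# launcher, e.g. `mpirun -np 4 python parallel_sketch.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N_TOTAL = 1_000_000  # arbitrary example size

# Each rank works on its own share of the particle set
counts = [N_TOTAL // size + (1 if r < N_TOTAL % size else 0) for r in range(size)]
local_n = counts[rank]
local_masses = np.full(local_n, 1.0 / N_TOTAL)

# Every node computes a partial result independently...
local_mass_sum = local_masses.sum()

# ...and the partial results are combined over the network
total_mass = comm.reduce(local_mass_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes, total mass = {total_mass:.6f}")
```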
There was no choice to make. The next step was to define a specification.
No choice… until now
“We bought the solution in two stages,” explains Matthias. “In 2002 we bought a small 32-bit machine with 72 CPUs and spent almost one year testing the technology to find out what we needed to invest in to get the optimum performance. We struggled with the 4 GB per CPU memory limit that 32-bit technology imposed, but at the time the only alternative was Intel’s 64-bit Itanium processors, and they were way out of our budget,” he explains.
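The 4 GB ceiling follows directly from 32-bit addressing (2^32 bytes), and a back-of-the-envelope calculation shows how quickly simulation data outgrows it. The seven-doubles-per-particle layout below is a simplifying assumption for illustration, not the Institute's actual data structure.

```python
# Rough illustration of why the 32-bit 4 GB address limit bites: memory
# needed just for particle state (3 positions + 3 velocities + mass, all
# double precision) at typical simulation sizes. The layout is an assumed
# simplification, not the AIP's actual codes.
BYTES_PER_PARTICLE = 7 * 8   # 7 doubles of 8 bytes each
LIMIT_32BIT = 2 ** 32        # 4 GiB address space

for n_per_side in (128, 256, 512, 1024):
    n = n_per_side ** 3
    size_bytes = n * BYTES_PER_PARTICLE
    gib = size_bytes / 2 ** 30
    verdict = "fits" if size_bytes < LIMIT_32BIT else "exceeds 32-bit limit"
    print(f"{n_per_side}^3 = {n:>13,} particles -> {gib:7.1f} GiB ({verdict})")
```

Under these assumptions a 256^3-particle run still fits below 1 GiB, but a 512^3 run already needs around 7 GiB, well beyond what a single 32-bit process can address.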
By the time the Institute was ready to go out to tender for stage two, AMD had launched an affordable 64-bit rival to Intel’s Itanium processor, designed to exploit PC technology. “This new technology was of great interest to us” explains Matthias. “It was the first microprocessor on the market to natively support both 32-bit and 64-bit applications and although we hadn’t tested it out, there was data to suggest that this would give us what we needed.”
32-bit vs 64-bit
“The tender document we drafted included our benchmarks and programme requirements” explains Matthias. “It gave potential suppliers the opportunity to make a case for both 32-bit and 64-bit processors and as long as there was evidence to show that our programmes would run on the solution specified we were happy to consider either at this point.”
The AIP received 13 offers in response to the tender. “We were looking for companies with specialist experience and expertise in building clusters; solutions that offered us the largest amount of memory, the fastest CPU speed and a fast network connection; and clear evidence that our benchmarks and programme requirements had been met.”
Testing time
Four companies, two offering a 32-bit solution and two offering a 64-bit solution, were shortlisted and an intensive period of benchmarking tests took place with each of the companies on the shortlist.
“Each of the four companies gave us remote access to log in and run our programmes,” explains Matthias. “We compiled our codes and then fine-tuned them to test the performance. The two 64-bit computing solutions outperformed the 32-bit solutions and were superior for large-memory jobs,” he adds.
The two remaining solutions were scrutinised further, together with the credentials of the companies involved.
Compusys capable
“In the end it was an easy decision” says Matthias. “We chose Compusys because of their track record of building supercomputers from commodity equipment controlled by the Open Source Linux operating system, which has become the de facto standard supercomputer operating system. We are not novices and are therefore well aware of the issues with commodity 64-bit computing. We had to be sure that the company we chose was capable of making such a high profile solution perform for us. Compusys clearly was.”
The technical solution proposed was based on 270 64-bit AMD Opteron processors, organised in 133 dual-processor Opteron 244 nodes and one quad-processor Opteron 844 node, with half a terabyte of main memory, 10 terabytes of hard disk storage and a high-speed Gigabit Ethernet connection.
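The quoted figures can be cross-checked with simple arithmetic; the per-CPU shares below assume memory and storage are spread evenly across the machine, which is an assumption made here for illustration.

```python
# Sanity check of the quoted configuration (even spread of memory and
# storage across CPUs is assumed for illustration).
dual_nodes, quad_nodes = 133, 1
cpus = dual_nodes * 2 + quad_nodes * 4          # 266 + 4 = 270 Opteron CPUs
memory_gib = 512                                # "half a terabyte"
storage_gib = 10 * 1024                         # 10 terabytes

print(f"CPUs: {cpus}")                                       # 270
print(f"Memory per CPU: {memory_gib / cpus:.2f} GiB")        # ~1.9 GiB
print(f"Storage per CPU: {storage_gib / cpus:.1f} GiB")      # ~37.9 GiB
```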
Easy does it
“We liked the fact that Compusys has a very close working relationship with AMD, and that the solution proposed used Open Source software distributions integrated and controlled using Compusys’ separate cluster management software. It made it easy to configure and operate every type of node within the cluster.”
A race against time
Several AIP projects dictated that the solution had to be in place before the end of 2003, leaving Compusys just three months in which to agree the final specification, build the cluster, configure it, test it, dismantle it, deliver it to Germany and reconfigure it.
“It was a real feat” says Matthias. “We spent some time working together discussing the issues, making technical changes and fine tuning the solution before Compusys could even begin to build it.”
Finally, in early December, the supercomputer, named ‘SANSSOUCI’, was ready to be shipped to Potsdam for reassembly. It took Compusys just two days to rebuild it, in conditions that proved extremely challenging.
Cool Runnings
“When Compusys came to rebuild SANSSOUCI, the air conditioning unit had not yet been installed into the new machine room built to house it” explains Matthias. “The individual computer processing units can heat up to 70 degrees Celsius and without extra ventilation and air-conditioning to cool the computer down, it would malfunction or even be destroyed.”
The AIP solved the problem with large industrial fans that blew the cold winter air from the windows into the cluster. This interim measure kept SANSSOUCI at a safe temperature until the air conditioning units were installed in the Spring.
“Testing, fine tuning and reconfiguration work doesn’t stop” says Matthias. “Compusys have been helping us fine tune the cluster to give us the best possible chance of making it into the TOP500 Supercomputer Sites list. They log into the cluster regularly to test it and upgrade it and have even been back to reconfigure the memory to give us better performance. In the field of astrophysics we have the most capable supercomputer in the world thanks to Compusys.”
Performance rockets by over 1000%
The Astrophysical Institute Potsdam is waiting to hear whether a recorded floating-point rate of execution of 623.3 Gflops on the Linpack benchmark is enough to secure it a place in the list. The results will be released in June at the International Supercomputer Conference, in Heidelberg, Germany.
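The “over 1000%” headline follows from comparing the Linpack result with the 50 Gflops previously available. The efficiency estimate below additionally assumes 1.8 GHz Opteron CPUs sustaining two double-precision floating-point operations per clock cycle; these are assumptions of this sketch, not figures given by the Institute.

```python
# Where "over 1000%" comes from, plus a rough Linpack efficiency estimate.
old_gflops = 50.0            # capacity quoted before the upgrade
linpack_gflops = 623.3       # measured Linpack result

increase_pct = (linpack_gflops - old_gflops) / old_gflops * 100
print(f"Increase over the old 50 Gflops: {increase_pct:.0f}%")    # ~1147%

# Assumed peak: 270 CPUs x 1.8 GHz x 2 flops/cycle (assumption, not quoted)
peak_gflops = 270 * 1.8 * 2
print(f"Assumed theoretical peak: {peak_gflops:.0f} Gflops")      # 972
print(f"Implied Linpack efficiency: {linpack_gflops / peak_gflops:.0%}")  # ~64%
```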
The next steps are already planned. After further benchmarking tests, the Institute intends to connect the small 32-bit machine with 72 CPUs to SANSSOUCI and upgrade the cluster with a high-speed interconnect, which should bring performance to at least 750-800 Gflops.
The supercomputer will run a range of applications, from research on turbulence and convection at the surface of the sun and other stars to fundamental research on the genesis and development of galaxies and other large structures in the universe.
Simulated “mock data” will be shared with the community via the Virtual Observatory*, together with datasets drawn from the Institute’s collaborations on projects like RAVE, an all-sky survey by an international consortium of twenty-one astronomers from eleven countries that aims to unveil the mysteries of how the Milky Way Galaxy formed.
To infinity and beyond
However, Matthias has much bigger plans for the cluster than simply making mock universes and producing “mock data” in Potsdam.
“I am already in discussions with colleagues at other institutions in Germany, Switzerland and the UK about combining processing power and working together to simulate the development of the whole Universe using GRID technology” explains Matthias. “This is the future of computational astrophysics and we are moulding it.”
*The Institute is also one of Germany's main nodes of the “Virtual Observatory”, a network of astrophysical organisations around the world that interconnects large scale electronic data archives using GRID technology.