INDUSTRY
InfiniBand 10Gb/sec Technology Demonstrates Excellent High Performance Computing Cluster Scalability and LINPACK Performance
Santa Clara, Calif. and Yokneam, Israel -- Mellanox™ Technologies Ltd., the leader in InfiniBand℠ silicon, today announced new High Performance Computing (HPC) cluster performance data provided by Linux Networx demonstrating that InfiniBand delivers excellent scalability and superb LINPACK benchmark results. These results build upon previously demonstrated InfiniBand record performance of greater than 850 MB/sec of MPI bandwidth and excellent sub-6 microsecond user-level latencies. The InfiniBand architecture was developed by the top server and storage OEM companies to enable industry-standard clustering for the server, database, communications, data storage, embedded and HPC markets. InfiniBand has now demonstrated the ability to scale effectively into high-port-count HPC clusters.

“We have been anxious to evaluate the performance of InfiniBand HPC clusters,” said Joshua Harr, chief technology officer of Linux Networx. “As a leader in high performance computing solutions, we are always on the lookout for new technologies that can deliver better computing price-performance to our customers. The performance and scalability that we’ve achieved with these InfiniBand benchmarks indicate that this is an HPC technology with great potential.”
In a recent test at Linux Networx, an InfiniBand cluster delivered near-linear scaling efficiency as the cluster was scaled to 64 nodes, achieving a measured LINPACK result of 505 Gflops; these results were attained before any opportunity to optimize the cluster. A separate 48-node test cluster also demonstrated near-linear scaling efficiency over the InfiniBand interconnect.
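For readers who want to put a number on “near-linear efficiency,” the sketch below shows the usual arithmetic: divide the measured cluster LINPACK result by the node count times a single-node baseline. The 505 Gflops and 64-node figures come from the results above; the single-node baseline used here is a hypothetical placeholder, not a published Linux Networx measurement.

    /* Sketch: scaling efficiency of a LINPACK run.
     * The 505 Gflops and 64-node figures come from the release;
     * the single-node Rmax below is a HYPOTHETICAL placeholder. */
    #include <stdio.h>

    int main(void) {
        const double cluster_rmax_gflops = 505.0;  /* measured cluster result (from the release) */
        const int    nodes               = 64;     /* cluster size (from the release) */
        const double single_node_rmax    = 8.5;    /* HYPOTHETICAL per-node LINPACK Rmax, in Gflops */

        double per_node   = cluster_rmax_gflops / nodes;
        double efficiency = cluster_rmax_gflops / (nodes * single_node_rmax);

        printf("Per-node share of cluster Rmax: %.2f Gflops\n", per_node);
        printf("Scaling efficiency vs. one node: %.1f%%\n", efficiency * 100.0);
        return 0;
    }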
“With only a modest 64-node cluster of Intel standard servers, InfiniBand Technology was able to deliver over one-half teraflop of performance,” said Dana Krelle, vice president of marketing for Mellanox Technologies, Ltd. “We consider this a key step in demonstrating the superior level of performance that can be achieved with 10 Gb/s InfiniBand HPC clusters. Mellanox will continue to optimize all aspects of these clusters to further improve the results.”
The InfiniBand 10Gb/sec test cluster results are based on two systems: a 48-node Linux Networx Evolocity® II (E2) cluster with dual Intel® processors, and a separate 64-node Linux Networx Evolocity II (E2) cluster with dual Intel processors. Both used the GNU Compiler Collection (GCC) 3.2, OSU MVAPICH 0.9, Mellanox MTPB23108 HCAs and a Mellanox InfiniScale 96-port switch. The MVAPICH MPI software (MPI for InfiniBand on the VAPI layer) was provided by the Department of Computer and Information Science at Ohio State University.
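As a companion to the MPI bandwidth and latency figures quoted above, the following is a minimal MPI ping-pong sketch in the spirit of the OSU micro-benchmarks associated with MVAPICH: two ranks bounce a buffer back and forth, and rank 0 reports an average one-way time and an effective bandwidth. The message sizes and iteration count are illustrative choices, not the parameters behind the published 850 MB/sec and sub-6 microsecond results.

    /* Minimal MPI ping-pong sketch. The small message size approximates
     * latency and the large one approximates bandwidth. Sizes and
     * iteration counts are illustrative only. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        if (nprocs < 2) {
            if (rank == 0) fprintf(stderr, "Run with at least 2 MPI ranks.\n");
            MPI_Finalize();
            return 1;
        }

        const int sizes[] = { 1, 1 << 20 };   /* 1 byte for latency, 1 MB for bandwidth */
        const int iters   = 1000;
        char *buf = calloc(1 << 20, 1);

        for (int s = 0; s < 2; s++) {
            int bytes = sizes[s];

            MPI_Barrier(MPI_COMM_WORLD);
            double t0 = MPI_Wtime();
            for (int i = 0; i < iters; i++) {
                if (rank == 0) {
                    MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                } else if (rank == 1) {
                    MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            double elapsed = MPI_Wtime() - t0;

            if (rank == 0) {
                double one_way_us = elapsed / (2.0 * iters) * 1e6;
                double bw_mb_s    = (2.0 * iters * (double)bytes) / elapsed / 1e6;
                printf("%7d bytes: one-way %.2f us, bandwidth %.1f MB/s\n",
                       bytes, one_way_us, bw_mb_s);
            }
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }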