INDUSTRY
InfiniCon Attains Breakthrough Performance Levels for Clusters
144-Node "Niobe" Cluster at AMD Developer Center Shatters 80% Efficiency Barrier in High-Performance Linpack Testing

InfiniCon Systems, the premier provider of clustering and I/O virtualization solutions for next-generation server networks, announced today that its InfinIO family of InfiniBand-based solutions has attained more than 80% average efficiency and up to 88% peak efficiency for scaling CPUs in benchmarking performed on a 144-node, AMD Opteron processor-based computer cluster at the AMD Developer Center.

Codenamed Niobe, the cluster consists of InfiniCon's InfinIO 3000 Switch Series and InfiniServ(TM) software providing the interconnect for Appro's HyperBlade and Rackable Systems' C1000 Series servers running on SuSE Linux Enterprise Server. Niobe was installed at the AMD Developer Center in the first quarter of 2004 to give users the opportunity to prototype how their application sets would perform across large, production-class fabrics based on AMD's advanced 64-bit processing technology and InfiniBand, a low-latency, 10 Gbps networking architecture.

In separate tests, the InfiniCon solutions have also shown industry-leading performance in both latency (under 4.5 microseconds) and bandwidth (880 MB/second) using industry-standard measurement tools. This combination of low latency, high bandwidth, and ultra-efficient scaling enables commodity clusters to achieve unprecedented levels of price-performance for demanding applications such as computational fluid dynamics, EDA projects, simulations, and visualization problems.
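As a rough sanity check on the bandwidth figure, the 880 MB/second result can be compared against the payload capacity of the link. This sketch assumes a 4X InfiniBand SDR link, which signals at 10 Gbps but carries 8 Gbps of payload after 8b/10b encoding; neither the link width nor the encoding overhead is stated in the release, so treat the numbers as illustrative.

```python
# Assumption (not stated in the release): a 4X SDR InfiniBand link
# signals at 10 Gbps and, after 8b/10b encoding, carries 8 Gbps of
# payload, i.e. 1000 MB/s of usable bandwidth.
SIGNAL_RATE_GBPS = 10
DATA_RATE_GBPS = SIGNAL_RATE_GBPS * 8 / 10      # payload after 8b/10b encoding
DATA_RATE_MBS = DATA_RATE_GBPS * 1000 / 8       # convert Gbps -> MB/s

measured_mbs = 880                              # bandwidth reported in the release
utilization = measured_mbs / DATA_RATE_MBS
print(f"Link utilization: {utilization:.0%}")   # -> Link utilization: 88%
```

Under these assumptions, the reported 880 MB/second corresponds to about 88% of the usable link capacity.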
"Latency and efficiency are the key metrics to allow 'scale out' commodity clusters to effectively replace proprietary 'scale up' systems, enabling processors across nodes to act as if they shared the same motherboard," stated Vernon Turner, group vice president of Global Enterprise Server Solutions at IDC. "The numbers demonstrated by InfiniCon achieve the required levels of performance, and should have an impact on the adoption of these technologies in the HPC market."
The more than 80% efficiency level for scaling CPUs, derived through the industry-standard High Performance Linpack test, far exceeds the typical marks achieved by alternative interconnect technologies used for building high-performance computing (HPC) clusters. The Top500 Supercomputer list, which ranks the most powerful computer systems in the world and was last published in November 2003, shows, for example, that the average Gigabit Ethernet-based cluster similar in scope to Niobe attained only 63% of its theoretical peak performance; similarly, multiple Top500 clusters built with Myrinet(TM) technology and the AMD Opteron processor averaged 69% efficiency. Across the entire Top500 list, Myrinet clusters averaged 60.7% efficiency and Ethernet clusters 41.4%.
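The efficiency figures quoted above follow from a simple ratio: Linpack efficiency is achieved performance (Rmax) divided by the cluster's theoretical peak (Rpeak). A minimal sketch, using illustrative numbers rather than Niobe's actual Rmax/Rpeak (which the release does not break out):

```python
def linpack_efficiency(rmax_gflops: float, rpeak_gflops: float) -> float:
    """Return HPL efficiency: achieved (Rmax) over theoretical peak (Rpeak)."""
    return rmax_gflops / rpeak_gflops

# Hypothetical example: a cluster with a 1,000 GFLOPS theoretical peak
# that sustains 800 GFLOPS on High Performance Linpack runs at 80% efficiency.
print(f"{linpack_efficiency(800.0, 1000.0):.0%}")   # -> 80%
```

The same ratio underlies all the Top500 comparisons in the paragraph above: a 63%-efficient Gigabit Ethernet cluster simply delivers 63% of its theoretical peak on the benchmark.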
Highly Efficient Server Scale-Out
The efficiency of CPU scaling has bottom-line benefits for businesses and research organizations: completing more transactions or tasks per second, accelerating discovery and product development, and reducing the server hardware and management effort required to support data-intensive compute environments.
"The ability to increase performance from advanced interconnects like InfiniBand is a significant complement to the power and versatility of platforms based on the AMD Opteron processor," said Ben Williams, vice president, AMD's Enterprise and Server/Workstation Business. "Through collaborations with leading systems companies like InfiniCon and the processing efficiencies that are now possible for high-performance servers, AMD is helping to advance software development and meet end user application requirements on numerous fronts."