Mellanox Accelerates Purdue’s Supercomputer to Petascale-Class Performance
- Written by: Tyler O'Neal, Staff Editor
- Category: ENGINEERING
Mellanox Connect-IB adapters provide Purdue with unprecedented message-rate performance to accelerate fluid dynamics, nanoelectronic, and performance modeling
“The increasing complexity of science and engineering research at Purdue is driving a need for increasingly faster and scalable computational resources,” said Michael Shuey, HPC system manager at Purdue University. “Mellanox’s FDR InfiniBand solutions, and in particular their Connect-IB adapters, allow MPI codes to scale more readily than on our previous systems. This enables more detailed simulations and helps empower Purdue scientists to push the envelope on their research in weather, bioscience, materials engineering and more.”
“We are pleased to have Mellanox’s FDR InfiniBand solution as the interconnect of choice for Purdue’s Conte supercomputer, the nation's fastest university-owned supercomputer,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Utilizing Mellanox’s Connect-IB adapters, Purdue is able to take advantage of the adapter’s leading message rate and bandwidth performance to provide its scientists with unmatched performance and capabilities to enhance and accelerate their highly complex simulation modeling.”
Connect-IB is the world’s most scalable server and storage adapter solution for supercomputing environments. Connect-IB adapters deliver throughput of 100Gb/s utilizing PCI Express 3.0 x16, unmatched scaling with innovative transport services, sub-microsecond latency and 137 million messages per second – a 4X higher message rate than competing solutions.
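Message-rate figures like this are typically measured with an MPI microbenchmark that keeps many small non-blocking sends in flight between two ranks (in the spirit of the OSU osu_mbw_mr test). The sketch below illustrates the idea; the message size, window depth, and iteration count are illustrative assumptions, not Mellanox's published test configuration.

```c
/* Minimal sketch of a two-rank MPI message-rate measurement.
 * Parameters below are illustrative, not a vendor benchmark setup. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define MSG_SIZE   8      /* small messages stress message rate, not bandwidth */
#define WINDOW     64     /* non-blocking messages kept in flight per iteration */
#define ITERATIONS 10000

int main(int argc, char **argv)
{
    int rank, nprocs;
    char sbuf[WINDOW][MSG_SIZE], rbuf[WINDOW][MSG_SIZE], ack = 0;
    MPI_Request reqs[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    if (nprocs != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    memset(sbuf, 0, sizeof sbuf);

    double start = MPI_Wtime();
    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            /* sender: issue a window of small non-blocking sends, then wait */
            for (int w = 0; w < WINDOW; w++)
                MPI_Isend(sbuf[w], MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &reqs[w]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
            MPI_Recv(&ack, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            /* receiver: post matching receives, then acknowledge the window */
            for (int w = 0; w < WINDOW; w++)
                MPI_Irecv(rbuf[w], MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &reqs[w]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
            MPI_Send(&ack, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0)
        printf("message rate: %.2f million messages/sec\n",
               (double)ITERATIONS * WINDOW / elapsed / 1e6);

    MPI_Finalize();
    return 0;
}
```

Run with two ranks placed on different nodes (e.g. `mpirun -np 2 -host node1,node2 ./msgrate`) so the traffic actually crosses the InfiniBand fabric rather than shared memory.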
Available today, Mellanox’s FDR 56Gb/s InfiniBand solution includes Connect-IB adapter cards, SwitchX-2-based switches (from 12-port to 648-port), fiber and copper cables, and ScalableHPC accelerator and management software.