INDUSTRY
PathScale Introduces InfiniPath InfiniBand Adapters for PCI Express
PathScale, a developer of innovative software and hardware solutions that accelerate high performance computing, today announced the release of the PathScale InfiniPath InfiniBand Adapter for PCI Express at the Intel Developer Forum (IDF). The new adapter extends PathScale's unique HyperMessaging Architecture to servers built on Intel Xeon single-core and dual-core processors. The superior performance of the InfiniPath interconnect will enable scientists, researchers and engineers to tackle a wider range of applications faster and more efficiently. The PathScale InfiniPath InfiniBand interconnect is being demonstrated at IDF and will be available in May to customers who own, or plan to purchase, servers with PCI Express slots.
"Intel is creating an excellent foundation for building cluster computing systems. With the release of the InfiniPath InfiniBand interconnect for PCI Express, PathScale is now providing customers in the HPC community with a means to dramatically improve performance, application scaling and cost on Intel-based clusters," said Scott Metcalf, CEO of PathScale. "By adding PCI Express support to our InfiniPath products, PathScale can deliver its industry-leading performance across a broad range of systems and servers. This allows end-user organizations to improve productivity and maximize return on investment from their clusters."
The PathScale InfiniPath InfiniBand PCI Express x8 adapter was designed specifically for optimal performance on the next-generation Intel server platform, code-named "Bensley." The InfiniPath InfiniBand adapter promises to improve data throughput on PCI Express-based systems by eliminating bottlenecks that can slow down communications, improving the time-to-results for complex simulations.
The InfiniPath HyperMessaging Architecture is the key to InfiniPath's performance advantage, delivering the highest message rate and highest effective bandwidth of any cluster interconnect available. It allows InfiniPath to sustain more than 10 million messages per second, a capability PathScale calls 10X-MR because it is over 10 times the message rate of any other cluster interconnect. As a result, end-user organizations relying on InfiniBand-based clustered systems for high-performance computing tasks can significantly increase the scalability of their applications, reduce network fabric congestion and improve overall cluster efficiency.
"Both InfiniBand customer adoption and technology advancement continue to accelerate rapidly," said Jim Pappas, director of Intel Initiative Marketing. "InfiniBand adapters from companies such as PathScale provide important capabilities to Intel's platforms by taking advantage of PCI Express and Intel multi-core technology."
Message transmission rate is one of the most critical performance metrics for a cluster interconnect because it determines the effective bandwidth delivered to the application and, ultimately, application scalability. Every PathScale InfiniPath product is built on the InfiniPath HyperMessaging Architecture, which combines a highly pipelined, cut-through design with direct hardware support for multi-core processors in a connectionless software environment. This design delivers the 10X-MR capability that is the key to application scalability and cluster efficiency. By driving message rates well beyond those of any other interconnect, PathScale enables scientists, engineers and researchers to solve the most challenging classes of computational problems at a scale previously unreachable by commodity clusters.
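For readers who want a concrete sense of what "message rate" means, the sketch below is a minimal MPI microbenchmark in C that streams windows of small non-blocking sends from one rank to another and reports messages per second. It is an illustrative example only, not PathScale's benchmark; the message size, window depth and iteration count are arbitrary assumptions.

    /* Illustrative MPI message-rate microbenchmark (sketch only).
     * Rank 0 streams windows of small non-blocking sends to rank 1;
     * message rate = total messages / elapsed time. MSG_SIZE, WINDOW
     * and ITERS are arbitrary assumptions, not PathScale parameters. */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    #define MSG_SIZE 8      /* bytes per message (small-message regime) */
    #define WINDOW   64     /* messages in flight per window */
    #define ITERS    10000  /* number of windows */

    int main(int argc, char **argv)
    {
        int rank, size;
        static char sbuf[WINDOW][MSG_SIZE], rbuf[WINDOW][MSG_SIZE];
        MPI_Request reqs[WINDOW];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) MPI_Abort(MPI_COMM_WORLD, 1);

        memset(sbuf, 0, sizeof sbuf);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        for (int i = 0; i < ITERS; i++) {
            for (int w = 0; w < WINDOW; w++) {
                if (rank == 0)          /* sender posts a window of sends */
                    MPI_Isend(sbuf[w], MSG_SIZE, MPI_CHAR, 1, 0,
                              MPI_COMM_WORLD, &reqs[w]);
                else if (rank == 1)     /* receiver posts matching receives */
                    MPI_Irecv(rbuf[w], MSG_SIZE, MPI_CHAR, 0, 0,
                              MPI_COMM_WORLD, &reqs[w]);
            }
            if (rank < 2)
                MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
        }

        MPI_Barrier(MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("message rate: %.2f million messages/sec\n",
                   (double)ITERS * WINDOW / (t1 - t0) / 1e6);

        MPI_Finalize();
        return 0;
    }

Measured rates will of course depend on the MPI implementation, the adapter and the host platform, and multi-pair variants with one sending process per core are what most directly exercise a connectionless, multi-core-aware design.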