Cross-Continental InfiniBand Cluster Staged for Deployment at SC|05
The OpenIB Alliance today announced that nearly thirty organizations are confirmed to showcase the world's largest cross-continental InfiniBand data center in conjunction with SCinet at next week's Supercomputing 2005 conference (SC|05) at the Washington State Convention and Trade Center in Seattle, Washington. The SCinet InfiniBand cluster will host over 6 TFlops of supercomputing performance, which would place it as high as number 50 on the TOP500 list of the world's most powerful supercomputers. In addition, the SCinet InfiniBand fabric will have direct access to natively attached InfiniBand storage solutions hosted by the StorCloud initiative. The participants include a strong industry mix of InfiniBand component vendors, infrastructure equipment suppliers, software vendors, and research and university end users:
AMD, Ames Laboratory, Appro, ASUS, Cisco, Cornell Theory Center, Dell, Emcore, Hewlett Packard, IBM, Intel, IWILL USA Corp., Lawrence Berkeley National Laboratory, Mellanox, Microsoft, NCSA, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Obsidian Research, Pathscale, Pittsburgh Supercomputing Center, Rackable, RedHat, Sandia National Laboratories, Silicon Graphics, SilverStorm, Sun Microsystems, Tyan Computer, and Voltaire.
SC|05 participants working with the OpenIB Alliance will use the OpenIB software stack on both Linux and Windows host systems, and the OpenIB subnet manager (OpenSM) will manage the entire InfiniBand network. The availability of the OpenIB stack from all vendors is the industry's path to widespread interoperability and mass deployment of InfiniBand fabrics in data centers for enterprise and performance computing.
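For context (not part of the announcement), the sketch below shows roughly how an application on a participating host sits on top of the OpenIB stack: it uses the verbs library (libibverbs) to list the local InfiniBand adapters and report each port's state and the LID assigned by a subnet manager such as OpenSM. It is a minimal illustration assuming a host with the OpenIB user-space packages installed, not code from any SC|05 participant.

    /*
     * Minimal sketch: enumerate InfiniBand devices via the verbs API
     * shipped with the OpenIB stack and print each port's state and LID.
     * Assumes libibverbs is installed; compile with: gcc ib_ports.c -libverbs
     */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices) {
            fprintf(stderr, "failed to get InfiniBand device list\n");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devices[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr dev_attr;
            if (ibv_query_device(ctx, &dev_attr) == 0) {
                /* Ports are numbered from 1 in the verbs API. */
                for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                    struct ibv_port_attr port_attr;
                    if (ibv_query_port(ctx, port, &port_attr) == 0)
                        printf("%s port %u: state %d, LID %u\n",
                               ibv_get_device_name(devices[i]),
                               (unsigned)port, (int)port_attr.state,
                               (unsigned)port_attr.lid);
                }
            }
            ibv_close_device(ctx);
        }
        ibv_free_device_list(devices);
        return 0;
    }

A non-zero LID on an active port indicates that a subnet manager (OpenSM, in the SCinet fabric) has swept the subnet and configured the port.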
The SCinet InfiniBand fabric locally interconnects participating booths on the SC|05 exhibit floor via InfiniBand over MPO multi-mode fiber using Emcore SmartLink modules.
The three remote InfiniBand clusters are at Lawrence Livermore National Laboratory in California, the Intel DuPont facility in Washington, and the Naval Research Laboratory in Virginia. These locations are connected through the wide area network (WAN) point of presence for SCinet, the Pacific Northwest Gigapop at the University of Washington, and utilize optical long-haul equipment from Ciena, Cisco, and Juniper running over networks provided by Abilene, ESnet, Internet2, National LambdaRail, and Qwest. At each endpoint, InfiniBand over optical (either DWDM or SONET OC-192) is converted by Longbow XRs from Obsidian Research. The Longbow enables globally distributed InfiniBand fabrics to cross-connect seamlessly by encapsulating 4X InfiniBand over OC-192c SONET, ATM, or 10GbE WANs at full InfiniBand data rates. The conversion is fully transparent to the InfiniBand fabric and interoperates with OpenIB's software stack and subnet manager.
The SCinet InfiniBand network will also be directly connected to native InfiniBand storage solutions using standard block-level, file-level, and clustered-file-system technologies. Hosted by the StorCloud initiative at SC|05, this cluster of native InfiniBand storage highlights the wide availability of, and end-user demand for, higher-performance, lower-latency storage.
"The significant vendor participation in the SCinet InfiniBand network is a testament to the remarkable progress by the developers of the OpenIB open-source software stack which was substantially funded by the DOE NNSA ASC PathForward program," said Bill Boas, Vice-Chair of OpenIB. "InfiniBand is the lowest latency, highest bandwidth fabric for data center and performance computing applications today. The multi-vendor interoperability at SC/05 validates its success and verifies the widespread acceptance of OpenIB by customers."
Additional information about the OpenIB network at SC|05 is available on the OpenIB Alliance Web site.