ASU to Roll Out Obsidian Long-Haul InfiniBand Technology
The Longbow technology allows an InfiniBand network, normally a short-range interconnect used within supercomputers, to be extended over optical fiber across varying distances. ASU will use the technology to link supercomputing, storage, and visualization resources around the campus into a tightly integrated campus grid, providing seamless access to research computing resources. For instance, the grid will link the visualizations of ASU's Decision Theater, a state-of-the-art center for policy and decision making, with the rapid simulation capability of the High Performance Computing Institute (HPCI).

"This type of bandwidth alters our entire approach to integrating simulation models and visualization," said Dan Stanzione, director of the HPCI. "Near real-time interaction with models is now possible, allowing us to respond immediately with what-if scenarios in direct response to user queries."

The Longbow technology will also be used to harness the various high-end computing resources on the ASU campus, turning the grid into a large-scale virtual supercomputer. The Longbow devices are already operating on the ASU Tempe campus, and grid operation will begin immediately.

Cisco Systems, a leader in InfiniBand clustering technology, was instrumental in bringing the project participants together to help bridge the gap between leading-edge technology and commercial products. Cisco SFS 7000 series InfiniBand Server Switches were deployed at ASU for cluster connectivity, and Cisco also contributed the Cisco High Performance Subnet Manager and an SFS 7000P switch for use during the project.

Obsidian contributed two Longbow ER InfiniBand range extenders, which enable globally distributed InfiniBand fabrics to connect over OC-192c SONET, ATM, or dark fiber at full InfiniBand data rates; a rough sketch of what "full data rate" demands over distance appears below. The Naval Research Laboratory (NRL) initiated early development of this capability, and both NRL and Canada's National Research Council supported its development.

Cisco InfiniBand switches were used in the geographically distributed clusters to improve fault tolerance, reroute around local areas of congestion, and combine InfiniBand routing with a network-aware job scheduler to make the most efficient use of computing resources; a simplified placement sketch also appears below. ASU provided the test bed and personnel resources for creation of the campus grid.
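Sustaining full InfiniBand rate over long links is the hard part of range extension: InfiniBand flow control is credit-based, so a sender may only transmit while the receiver has advertised buffer space, and the buffering needed grows with the bandwidth-delay product of the path. The Python sketch below is a back-of-the-envelope illustration under assumed numbers (a 4x SDR link carrying 8 Gb/s of payload after 8b/10b encoding, and light traveling at roughly 2x10^8 m/s in fiber); it is not Obsidian's published design.

    # Back-of-the-envelope sizing, not Obsidian's specification: a credit-based
    # link sustains full rate only if receive buffering covers the bytes in
    # flight during one round trip (the bandwidth-delay product).

    LIGHT_SPEED_FIBER_M_S = 2.0e8   # approximate speed of light in optical fiber
    IB_4X_SDR_DATA_GBPS = 8.0       # 10 Gb/s signaling minus 8b/10b overhead

    def required_buffer_bytes(distance_km: float,
                              rate_gbps: float = IB_4X_SDR_DATA_GBPS) -> float:
        """Minimum receive buffering to keep a credit-based link at full rate."""
        one_way_s = (distance_km * 1000) / LIGHT_SPEED_FIBER_M_S
        rtt_s = 2 * one_way_s                  # credits return over the round trip
        return rate_gbps * 1e9 / 8 * rtt_s     # bytes in flight during one RTT

    for km in (1, 10, 100, 1000):
        print(f"{km:>5} km -> {required_buffer_bytes(km) / 1024:,.0f} KiB of buffering")

At campus distances the requirement is a few tens of kilobytes, within reach of ordinary switch buffers; at 1,000 km it approaches 10 MB, which is why long-haul operation calls for dedicated range extenders with deep buffers rather than standard switch ports.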
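The article does not describe how the network-aware job scheduler works. As one hypothetical illustration of the idea, the sketch below packs a job onto as few leaf switches as possible so its traffic crosses the fewest inter-switch links; the topology, names, and greedy policy are all assumptions made for illustration.

    # A minimal sketch, assuming a two-level fabric where compute nodes hang
    # off leaf switches: a "network-aware" placement keeps a job's nodes under
    # as few switches as possible, reducing cross-fabric hops. All names and
    # the topology here are hypothetical.

    from typing import Dict, List

    def place_job(free: Dict[str, List[str]], nodes_needed: int) -> List[str]:
        """Pick nodes for a job, preferring switches with the most free nodes."""
        chosen: List[str] = []
        # Visit switches with the largest free pools first so the job spans
        # the fewest switches (and therefore the fewest inter-switch hops).
        for switch in sorted(free, key=lambda s: len(free[s]), reverse=True):
            take = min(nodes_needed - len(chosen), len(free[switch]))
            chosen += free[switch][:take]
            if len(chosen) == nodes_needed:
                return chosen
        raise RuntimeError("not enough free nodes in the fabric")

    free_nodes = {"leaf-a": ["a1", "a2", "a3"], "leaf-b": ["b1"], "leaf-c": ["c1", "c2"]}
    print(place_job(free_nodes, 4))  # -> ['a1', 'a2', 'a3', 'c1']

A production scheduler would also weigh queue times, reported congestion, and fault domains, but the greedy packing above captures the core idea of aligning job placement with fabric topology.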