Indiana University Powers High Performance Distributed Cyberinfrastructure
- Category: INDUSTRY
Force10 Networks, the pioneer in building and securing high performance networks, today announced that Indiana University (IU) has deployed the TeraScale E-Series family of switch/routers to build a distributed, flexible high performance cyberinfrastructure. In addition to IU’s AVIDD and Big Red supercomputer clusters, the cyberinfrastructure includes the university’s Data Capacitor, which provides researchers across the nation with a unique facility for temporary storage of massive data sets.

“To support broad research initiatives across our campuses as well as within the TeraGrid project, we needed an infrastructure that could deliver the flexibility and scalability to interconnect our computing clusters and to build new ones,” said Matt Davy, chief network engineer at Indiana University. “Force10’s 16-port 10 Gigabit Ethernet cards give us the flexibility to reconfigure our network as research projects demand without complicating the architecture, while providing the scalability to add more computing capacity.”

The Force10 TeraScale E-Series and the state of Indiana’s I-Light network form the foundation of the high performance 10 Gigabit Ethernet network that connects IU’s Indianapolis and Bloomington campuses, which are 55 miles apart. With the TeraScale E1200, the TeraScale E600 and Force10’s 16-port 10 Gigabit Ethernet line cards, IU is building the high density, scalable network it needs to support advanced research across both campuses.

The Force10 TeraScale E-Series also serves as a “machine room backplane” for IU’s two Advanced Cyberinfrastructure Facilities. Over this infrastructure, tools such as Open MPI, an open source implementation of the Message Passing Interface (MPI), can be used to run MPI applications across multiple locations: Myricom protocols carry traffic within one cluster, for example, while Gigabit Ethernet links clusters and campuses, leveraging the long haul fiber optic capabilities of the TeraScale E-Series (see the sketch below). The machine room backplane also allows researchers to use systems such as Big Red, an IBM e1350 BladeCenter cluster and the 31st largest supercomputer in the world, to generate massive amounts of data that can be stored seamlessly on the Data Capacitor while awaiting analysis.

“Indiana University is a part of the research and education tradition that is committed to leveraging the most advanced technology to further research into complex problems,” said Mark Cooper, senior vice president of worldwide sales at Force10 Networks. “The TeraScale E-Series provides the density and reliability Indiana requires to be on the forefront of computer science research.”

The high 10 Gigabit Ethernet density of the Force10 TeraScale E1200, up to 224 ports in a single chassis, allows IU to simplify its high performance architecture and reduce the capital and operational costs of network ownership. The leading per-card density also translates into a longer product lifecycle, enabling the university to scale its network as computing demands increase without the expense of an upgrade.

Indiana University also serves as one of the nine partner sites in TeraGrid, an open scientific discovery infrastructure that integrates high performance computers, data resources, tools and leading experimental facilities around the country to create a persistent computational resource.
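For readers who want to see what “running MPI applications across multiple locations” looks like in practice, here is a minimal sketch using Open MPI. The program, the hostfile name and the launch command are illustrative assumptions for this article, not IU’s actual configuration; Open MPI simply selects an available transport for each pair of ranks, whether that is a Myrinet fabric inside a cluster or TCP over the Ethernet links between machine rooms.

```c
/*
 * Minimal Open MPI sketch (hypothetical, not IU's production code).
 * Each rank reports the host it landed on, making it easy to confirm
 * that a single MPI job is spanning nodes in different machine rooms.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size, name_len;
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    MPI_Get_processor_name(host, &name_len);

    printf("rank %d of %d running on %s\n", rank, size, host);

    /* A barrier forces at least one round of inter-node traffic,
       exercising whichever interconnect links the hosts. */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```

Compiled with `mpicc hello.c -o hello` and launched with `mpirun --hostfile campuses hello`, where `campuses` is a hypothetical hostfile listing nodes from both the Bloomington and Indianapolis facilities, the job runs as a single MPI world across the 10 Gigabit Ethernet backplane described above.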
With the Force10 TeraScale E-Series as the foundation of its TeraGrid connection, IU has the capacity it requires to interconnect with other universities and high performance research centers around the country, giving its students access to more than 100 discipline-specific databases. In addition to IU, nearly all of the other TeraGrid sites rely on the leading density and resiliency of the Force10 TeraScale E-Series to power their networks and build a countrywide grid network, among them the National Center for Supercomputing Applications, the San Diego Supercomputer Center, Oak Ridge National Laboratory and the Texas Advanced Computing Center.