Optical Network is Key to Next-Generation Research Cyberinfrastructure
At TeraGrid ’08 Conference, UC San Diego’s Smarr Urges University Campuses to Remove Network Bottlenecks to Supercomputer Users: The director of the California Institute for Telecommunications and Information Technology (Calit2), a partnership of UC San Diego and UC Irvine, said today that all the pieces are in place for a revolution in the usability of remote high-performance computers to advance science across many disciplines. He urged early-adopter application scientists to drive the creation of end-to-end dedicated lightpaths connecting remote supercomputers to their labs, greatly enhancing their local capability to visually analyze the massive datasets generated by TeraGrid’s terascale to petascale computers. In a featured keynote today at the TeraGrid ’08 Conference being held in Las Vegas this week, Calit2 Director Larry Smarr said “the last ten years have established the state, regional, national, and global optical networks needed for this revolution, but the bottleneck is on the user’s campus.” However, as a result of research funded by the National Science Foundation (NSF), there is now a clear path forward to removing this last bottleneck.
This opens the possibility for end users of the NSF’s TeraGrid to begin to adopt these optical network technologies. The TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities from the eleven partner sites around the country.
“The NSF-funded OptIPuter project has been exploring for six years how user-controlled, wide-area, high-bandwidth lightpaths – termed lambdas – on fiber optics can provide direct uncongested access to global data repositories, scientific instruments and high-performance computational resources from the researchers’ Linux clusters in their campus laboratories,” said Smarr. “This research is now being rapidly adopted because universities are beginning to acquire lambda access through state or regional optical networks interconnected with the National LambdaRail, the Internet2 Dynamic Circuit Network, and the Global Lambda Integrated Facility.”
The OptIPuter project, led by Smarr, is not designed to scale to millions of sites like the normal shared Internet, but to create private networks with much higher levels of data volume, accuracy, and timeliness for a few data-intensive research and education sites. Led by Calit2, the San Diego Supercomputer Center (SDSC), and the University of Illinois at Chicago’s Electronic Visualization Laboratory (EVL), OptIPuter ties together the efforts of researchers from over a dozen campuses.
The OptIPuter uses dedicated lightpaths to form end-to-end uncongested 1- or 10-Gbps Internet protocol (IP) networks. The OptIPuter’s dedicated network infrastructure – and supporting software – has a number of significant advantages over shared Internet connections, including high bandwidth, controlled performance (no jitter), lower cost per unit bandwidth, and security. “The OptIPuter essentially completes the Grid program,” said Smarr. “In addition to allowing the end user to discover, reserve, and integrate remote computers, storage, and instruments, the OptIPuter enables the user to do the same for dedicated lambdas, creating a high-performance LambdaGrid.”
In his talk, Smarr described how the user-configurable OptIPuter global platform is already being used for research in collaborative work environments, digital cinema, biomedical instrumentation, and marine microbial metagenomics. He issued a challenge to the TeraGrid users to begin to adopt this technology to support remote use of the TeraGrid resources.
“OptIPuter technologies can enhance the ability of scientists to use remote high-performance computing resources from their local labs, particularly applications with persistent large data flows, real-time visualization and collaboration, and remote steering,” Smarr said.
A key OptIPuter technology, the OptIPortal, was prototyped by EVL and developed by Calit2 under the NSF-funded OptIPuter partnership. The OptIPortal is a networked and scalable, high-resolution LCD tiled display system, driven by a PC graphics cluster. Designed for the user’s laboratory, each OptIPortal can be constructed with commodity commercial displays and processors. While most of the PC clusters run Linux, some run Mac OS (Calit2@UC Irvine and UCSD’s Scripps Institution of Oceanography) or Windows (UCSD’s National Center for Microscopy and Imaging Research).
“OptIPortals are the appropriate termination device for 10Gbps lambdas, allowing the end user to choose the right amount of local storage, compute, and graphics capacity needed for their application,” said Smarr. “In addition, the tiled walls provide the scalable pixel real estate necessary to analyze visually the complexity of supercomputing runs.”
The OptIPuter project prefers OptIPortal clusters to run on SDSC’s Rocks, an open-source Linux cluster distribution that enables end users to easily build computational clusters, grid endpoints and visualization tiled-display walls. Rocks is developed under an NSF-funded SDCI project led by SDSC’s Philip Papadopoulos, who is also a co-principal investigator on the OptIPuter project. There are currently over 1,300 registered clusters running Rocks, providing a global and vibrant open-source software community. The Rocks “Rolls” provide a convenient method of distributing software innovations coming from community members.
OptIPortals range in size from four to 60 tiles, with screen resolutions from 8 million pixels up to the nearly-¼-billion-pixel HIPerSpace wall – the highest-resolution display system in the world, located in the Calit2 building on the UCSD campus. OptIPortals need not be restricted to planar tiled walls, Smarr said. He showed pictures of Calit2’s StarCAVE immersive environment, driven by 34 high-definition projectors, and a 60-LCD semi-cylindrical tiled-wall autostereo Varrier display, both providing three-dimensional virtual reality, driven by the same type of Linux clusters that drive the HIPerSpace wall, all connected at multiples of 10Gbps to the OptIPuter.
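The pixel totals above follow from simple tile arithmetic. The sketch below assumes illustrative commodity panel resolutions (1600×1200 and 2560×1600; the article does not specify panel models) to show how the cited 8-Mpixel and quarter-billion-pixel figures arise.

```python
def wall_pixels(cols, rows, panel_w, panel_h):
    """Total resolution of a cols x rows tiled LCD wall."""
    return cols * rows * panel_w * panel_h

# Four ~2-Mpixel panels give the ~8-Mpixel entry-level OptIPortal
entry = wall_pixels(2, 2, 1600, 1200)    # 7,680,000 pixels

# Seventy 4-Mpixel panels approach the quarter-billion-pixel scale
# of the HIPerSpace wall (the 14x5 layout here is illustrative)
hiper = wall_pixels(14, 5, 2560, 1600)   # 286,720,000 pixels
```

The same arithmetic explains why tiled walls scale so cheaply: adding panels multiplies pixel real estate linearly, with no single display controller as a bottleneck.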
To handle multi-gigabit video streams, OptIPuter researchers at EVL developed the Scalable Adaptive Graphics Environment (SAGE), specialized graphics middleware that supports collaborative scientific visualization environments with potentially hundreds of megapixels of contiguous display resolution. In collaborative scientific visualization, it is crucial to share high-resolution imagery as well as high-definition video among groups of collaborators at local or remote sites.
SAGE enables the real-time streaming of extremely high-resolution content – such as ultra-high-resolution 2D and 3D computer graphics from remote rendering and compute clusters and storage devices, as well as high-definition video camera output – to scalable tiled display walls over high-speed networks. SAGE serves as a window manager, allowing users to move, resize, and overlap windows as easily as on standard desktop computers. SAGE also provides standard desktop collaboration tools, such as an image viewer, a video player, and desktop sharing, enabling participants to resize, pan, zoom, and move through the data.
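A core piece of bookkeeping in any SAGE-like tiled-display window manager is clipping a movable window rectangle against each display tile, so that the streaming source sends each rendering node only its sub-region of the pixel stream. The sketch below illustrates that idea only; all names are invented here, and the actual SAGE middleware and protocol are far more involved.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def intersect(a, b):
    """Overlap of two rectangles, or None if they are disjoint."""
    x1, y1 = max(a.x, b.x), max(a.y, b.y)
    x2, y2 = min(a.x + a.w, b.x + b.w), min(a.y + a.h, b.y + b.h)
    if x2 <= x1 or y2 <= y1:
        return None
    return Rect(x1, y1, x2 - x1, y2 - y1)

def route_window(window, tiles):
    """Map a window onto the tiles it overlaps: each tile's cluster
    node then receives only its clipped sub-region of the stream."""
    return {tid: r for tid, t in tiles.items()
            if (r := intersect(window, t)) is not None}

# A 2x1 wall of 1920x1080 tiles; a window straddling the seam is
# split into one sub-rectangle per node.
tiles = {"node0": Rect(0, 0, 1920, 1080),
         "node1": Rect(1920, 0, 1920, 1080)}
parts = route_window(Rect(1000, 100, 2000, 800), tiles)
```

When the user drags or resizes a window, only this routing table changes; the renderers keep producing pixels, which is what makes the wall feel like one seamless desktop.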
In addition to SAGE, other windowing software environments have been developed by research groups that were not part of the original NSF proposal, including the Calit2 lab of UCSD Professor Falko Kuester, developer of CGLX, which allows OpenGL applications to be displayed across a visualization cluster such as a tiled display wall.
Although scalable visualization displays have been under development for over a decade, first as arrays of projectors, the use of commodity hardware and open-source software in the OptIPortal makes this visualization power affordable to individual researchers. The typical cost of an N-tiled wall is about the same as N/2 deskside PCs. As a result, adoption of OptIPortals has been rapid over the past two years. Besides the United States, OptIPortals are installed in Australia, Taiwan, China, Japan, Korea, Canada, the UK, the Netherlands, Switzerland, the Czech Republic, and Russia, as well as at a number of corporations.
However, there has been a critical “missing link” blocking widespread adoption of the OptIPuter/OptIPortal metacomputer: few campuses have installed the optical fiber paths needed to connect from the regional optical network campus gateway to the end user. Smarr quoted NSF Director Arden Bement, who three years ago said prophetically: “Those massive conduits [e.g., NLR lambdas] are reduced to two-lane roads at most college and university campuses. Improving cyberinfrastructure will transform the capabilities of campus-based scientists.”
To make effective use of the 10Gbps lightpaths from the TeraGrid resources to the campus gateways, Smarr said, “the user’s campus must invest in the equivalent of city ‘data freeway’ systems of switched optical fibers connecting the campus gateway to specific buildings and inside the buildings to the user’s lab.”
A full-scale experiment of this vision is underway at UCSD with funds provided by the campus and an NSF-funded Major Research Instrumentation grant called Quartzite, which has SDSC’s Papadopoulos as PI and Calit2’s Smarr as one of the co-PIs. The Quartzite optical infrastructure includes a hybrid packet-circuit switched environment, interconnecting over 45 installed 10Gbps channels crisscrossing the UC San Diego campus, with 15 more planned by the end of this year. More than 400 endpoints are connected to Quartzite through access or direct connection to the core switch. Geographically, these are located in seven different buildings, including 17 laboratories within these buildings. Large projects (CAMERA, CineGrid) use Quartzite directly.
The Quartzite switching complex is able to switch packets, wavelengths or entire fiber paths, allowing fast configuration, under software control, of the different types of network layouts and capabilities required by the end user. This year the optical complex will provide an aggregate bandwidth of roughly ½ Terabit/sec from dedicated lightpaths coming into a central, reconfigurable switching complex and from there connecting to UCSD researchers. This testbed also enables a broad set of “Green Cyberinfrastructure” research projects to be conducted on a campus scale. As a result, UCSD can experiment with one model of the “campus of the future,” from which robust solutions can be provided to other interested campuses.
“Quartzite provides the ‘golden spike’ which allows completion of end-to-end 10Gbps lightpaths running from TeraGrid sites to the remote user’s lab,” said Smarr, adding: “Like the OptIPortal, Quartzite was designed using commercial technologies that can be easily installed on any campus.”
With this complete end-to-end OptIPuter now in hand, the stage is set for a wide variety of applications to be developed over this global high performance cyberinfrastructure. “When we were conceptualizing the OptIPuter seven years ago, I always thought that remote supercomputer users would provide the killer applications,” said Smarr, the founding director in 1985 of the National Center for Supercomputing Applications (NCSA). “TeraGrid users are located in research campuses across the nation, but they all share the characteristic that they need to carry out interactive visual analysis of massive datasets generated by a remote supercomputer.”
Smarr showed several DoE, NASA, and NSF supercomputer centers that have large tiled projector walls on site for visual analysis of these massive datasets. “The time has come to take that capability out to end users in their labs with local OptIPortals connected to the supercomputer center using the OptIPuter,” said Smarr. “I believe that we will see early adopters step forward in the next year to set up prototypes of this cyberarchitecture.”
Smarr described the work of one such early adopter, Michael Norman, UCSD Professor of Physics, recently named SDSC’s Chief Scientific Officer. Norman is designing an OptIPortal in the new SDSC building, to be dedicated in October 2008, for use by his Laboratory for Computational Astrophysics. It will be connected over the UCSD optical complex described above to the TeraGrid 10Gbps backbone and National LambdaRail and used to visualize results from his cosmology simulations on the NSF’s Petascale Track II machines at the Texas Advanced Computing Center and at the University of Tennessee/Oak Ridge National Laboratory’s National Institute for Computational Sciences. Norman plans to stage and analyze the terabytes of data generated at SDSC, using the campus optical fiber network to move the data into specialized OptIPortals at Calit2, such as the StarCAVE and HIPerSpace wall.
To make this OptIPuter distributed analysis more efficient, EVL has developed LambdaRAM, which can prefetch data from disk storage and temporarily store it in the cluster’s Random Access Memory (RAM), masking the substantial disk I/O latency, and then move the data from this “staging” computer to the computer running the simulation. Smarr showed how NASA Goddard Space Flight Center in Maryland uses the OptIPuter and LambdaRAM to optimize the use of NLR for severe storm and hurricane forecasts carried out at the Project Columbia supercomputer at NASA Ames in Mountain View, California, and to zoom and pan interactively through ultra-high-resolution images on local OptIPortals at Goddard. EVL modified LambdaRAM so that it would work seamlessly with legacy applications to locally access large data files generated by the remote supercomputer.
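The idea behind LambdaRAM-style prefetching can be illustrated with a simple read-ahead cache: while the application consumes block i, a background thread fetches the next blocks into RAM, so slow disk or network I/O overlaps with computation instead of stalling it. This is a minimal sketch of the general technique, not the actual LambdaRAM implementation.

```python
import threading
import queue

def prefetching_reader(read_block, num_blocks, depth=2):
    """Yield blocks 0..num_blocks-1 in order while a background
    thread stays up to `depth` blocks ahead, staged in RAM."""
    buf = queue.Queue(maxsize=depth)

    def producer():
        for i in range(num_blocks):
            buf.put(read_block(i))   # blocks when the cache is full
        buf.put(None)                # end-of-stream sentinel

    threading.Thread(target=producer, daemon=True).start()
    while (block := buf.get()) is not None:
        yield block
```

Because the consumer only ever waits when the producer has fallen behind, the fetch latency of each block is hidden whenever processing a block takes at least as long as fetching the next one.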
Finally, Smarr described how, with the integration of high definition and digital cinema video streams, which easily fit inside a 10Gbps lightpath, the OptIPuter architecture is rapidly creating an OptIPlanet Collaboratory in which multiple scientists can analyze a complex dataset while seeing and talking to each other as if they were physically in the same room. Smarr showed photos of “telepresence” sessions in January and May 2008 where this was demonstrated on a global basis between Calit2 at UC San Diego and the 100-Megapixel ‘OzIPortal,’ constructed earlier this year at the University of Melbourne in Australia, connected over a transpacific gigabit lightpath on Australia's Academic and Research Network (AARNet). “Petascale problems will require geographically distributed multidisciplinary teams analyzing enormous data sets—a perfect application of the OptIPlanet Collaboratory,” said Smarr.
In conclusion, Smarr said, “After a decade of research carried out at dozens of institutions, we are seeing the OptIPuter take off on a global basis. I look forward to working with many of the TeraGrid ‘08 participants as they become early adopters of this innovative, high performance cyberinfrastructure—rebalancing the local analysis and network connectivity with the awesome growth NSF has made possible in the emerging petascale computers.”