Why the world needs supercomputing
By Chris O'Neal -- I'm writing this article to help bridge the gap between the community that fully understands the value of supercomputing and the technology customers who may not be sure yet. I hope to dispel the myths about supercomputing while driving home a compelling fact: these unique problem-solving systems are becoming the new engine of business. Supercomputers make remarkable contributions to industry, and that story has been lost. I don't like to preach, give my opinion or endorse a particular vendor, but I do like to hold vendors accountable. I like to report the news and let you decide. This article provides the strategic outlook and expert insight IT leaders need to make effective purchasing, strategic and managerial decisions as they pursue their technology initiatives and grow their businesses.

Of course, I'll start at the beginning. So it all started with a big bang... Oh no, I will not go back that far. Since 2001, I've published almost 16,000 articles about how supercomputing technology has produced tangible results. For example, "Largest Computational Biology Simulation Mimics The Ribosome" is the most-read story in our publication's archive. It describes how researchers at Los Alamos National Laboratory set a world record by performing the first million-atom computer simulation in biology on the "Q" supercomputing system built by HP. The name is meant to evoke both the dimension-hopping Star Trek alien and the gadget-making wizard of the James Bond thrillers. Architecturally, Q is a cluster of SMP nodes built on HP Alpha processors.
Last week, IDC and the Council on Competitiveness released a report saying that large and small companies choosing to switch from desktop computers to high performance computing (HPC) servers will need to overcome a number of obstacles. The new research investigated why the many U.S. companies that use desktop computers (PCs, Macs, workstations) for product design and technical computing have not advanced to more powerful HPC servers. The studies found that 57 percent of firms have problems that cannot be solved by desktop computing alone. Although 34 percent saw the benefits of implementing HPC, they admitted they would not be able to do so because of one or more major barriers. The three largest barriers were uncertainty about the availability of software to run on HPC servers, a lack of staff skilled in HPC hardware and software systems, and cost constraints. Only ten percent of companies had plans to implement HPC solutions in the near future. (Please see article at: www.supercomputingonline.com/article.php?sid=15438.)
Today, U.S. Presidential candidate John McCain is in Ohio for his "Time for Action" tour, urging voters not to give up hope on the economy because he's confident it will bounce back. McCain says political candidates come through the area and miss the fact that there is new technology at work in Youngstown that can guide the "rust belt" and its economy forward. "It won't be easy, and, as you know better than I do, it won't happen overnight. But dramatic change can happen, in this great city and others like it. With pro-growth policies to create new jobs, and with honest and efficient government in Washington, we can turn things around in this city. And we can make the future of this region even better than the best days of the past," McCain said.
If you read SC Online regularly, then you know all about the amazing work at the Ohio Supercomputer Center (OSC). Last month, the National Science Foundation designated nearly $1 million to provide Ohio's workforce with crucial training in computational modeling and simulation. (Please see article at: www.supercomputingonline.com/article.php?sid=15356.) "Computational methods reduce the time and labor required to obtain and apply new information," said Stan Ahalt, executive director of OSC. "With the improved software development, training, outreach and partnerships provided through Blue Collar Computing, supercomputing can become a reality on a smaller scale for small- and mid-sized industrial clients." OSC's Blue Collar Computing is an approach to supplying computational modeling and simulation solutions to companies without requiring large up-front investments. The model is service-oriented, amortizing the cost of expertise, software and hardware across many firms to reduce each client's expenditure. It combines easy-to-use HPC programming languages, comprehensive training and a collaborative computing infrastructure. OSC's business clients can use some of the most advanced high performance computing resources for $1 per CPU hour. It is a viable model for making supercomputing commercially realizable.
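To put that $1-per-CPU-hour rate in perspective, here is a quick back-of-the-envelope cost sketch in Python. The job sizes and run times below are hypothetical illustrations, not OSC figures.

```python
# Back-of-the-envelope cost under a $1-per-CPU-hour service model.
# The job profiles below are hypothetical illustrations, not OSC figures.

RATE_PER_CPU_HOUR = 1.00  # dollars

jobs = [
    {"name": "small design study", "cpus": 4, "wall_hours": 12},
    {"name": "overnight simulation", "cpus": 16, "wall_hours": 8},
    {"name": "week-long research run", "cpus": 32, "wall_hours": 7 * 24},
]

for job in jobs:
    cpu_hours = job["cpus"] * job["wall_hours"]
    cost = cpu_hours * RATE_PER_CPU_HOUR
    print(f"{job['name']}: {cpu_hours} CPU-hours -> ${cost:,.2f}")
```

Even the largest of these illustrative runs comes to only a few thousand dollars, which is the whole point of amortizing hardware, software and expertise across many clients.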
Now I'll detail how a great Ohio company is overcoming the obstacles identified in the IDC and Council on Competitiveness studies. Each year, technology helps speed the design and manufacturing of more than one billion tires worldwide. Computer simulations and modeling make it possible to design and test new tires more rapidly, cutting tire design from years to months. Using computer-based models, engineers test thousands of different combinations of the variables that make up a tire design, resulting in designs that are safer and less costly to manufacture.
Tire specialists at Cooper Tire & Rubber Company rely on technology to ensure that they remain on the leading edge of innovation. Headquartered in Findlay, Ohio, the company’s manufacturing, sales, distribution, technical and design facilities are located around the globe. Cooper Tire provides a full line of tires to meet the needs of virtually all consumers—from everyday motorists to the most demanding high performance, off-road and motor sport enthusiasts.
I spoke with Keith H.F. Sansalone, P.E., an advanced research engineer in the Research and Technology Group at Cooper Tire & Rubber. It was a great pleasure to hear the passion in his voice when he spoke about the business. According to Sansalone, the company needed a more powerful system to streamline its product development process. "We wanted to speed turnaround times and run more complex simulations for the design of our latest development project, the CS4 tire, which was planned for delivery in the fall of 2007."
Cooper wanted the new CS4 to offer two innovative all-season tread options, a four-rib design and a five-rib design, with a size lineup that would fit many of the most popular sedans, minivans and crossover vehicles on the road today. This premium all-season tire would be based on a newly formulated performance compound derived from the company's ultra-high-performance tire technology. Its R-Tech (Response Technology) construction would use several design features to provide enhanced performance, superior ride comfort, responsive handling, and reduced noise.
To achieve its design-turnaround and simulation-size goals, Cooper enlisted the services of T2 Technologies to help with the selection, configuration, and implementation of a new computer system. T2 is a premier Midwest HP Business Partner with more than two decades of experience delivering CAE/CFD IT solutions, many tailored to the computing requirements of automotive, tire, and aerospace manufacturers.
Cooper, T2, and HP developed a solution built around the HP Integrity system, based on 64-bit Intel Itanium 2 processors. The Integrity system was chosen primarily for its price/performance, third-party application availability, and large physical memory. Cooper also chose the HP-UX 11i operating system for its portfolio of highly tuned CAE applications, its performance in an engineering environment, and its interoperability with the Microsoft Windows-based clients that Cooper uses for pre- and post-processing.
Innovation allows Cooper Tire to make its process more efficient as it strives to design and manufacture the best tires on the market. Complex analysis software lets engineers simulate the performance of tread designs and other design parameters. The software creates parameterized design and simulation models and calculates the effects of anticipated usage scenarios on a proposed tire design.
The engineering research group uses a proprietary suite of computer-aided engineering (CAE) applications that Cooper calls Vt²ech (short for Virtual Tire Technology) alongside a collection of off-the-shelf CAE applications, including Simulia Abaqus, MSC.Patran, MSC.Nastran, and MATLAB. The new, larger and more complex models allow engineers to simulate a complete tire with all of its components, mounted on a wheel, pressurized, and under load.
Designers develop several tire tread patterns; each of the designs is then evaluated using the modeling applications. The tire models are virtually tested for tread wear, strength, cornering characteristics, tread pattern stiffness, noise generation, groove wander, rolling resistance, and traction.
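The article does not describe the inside of Cooper's Vt²ech suite, but the idea of sweeping a parameterized design space is easy to illustrate. The Python sketch below is purely hypothetical: the tire parameters, their values, and the placeholder scoring function are my own illustration, not Cooper's actual design variables or models.

```python
# Illustrative parameterized design sweep. The parameters, values, and the
# placeholder scoring function are hypothetical; they do not represent
# Cooper Tire's Vt2ech models or its actual design variables.
from itertools import product

design_space = {
    "rib_count": [4, 5],
    "groove_depth_mm": [8.0, 9.0, 10.0],
    "tread_compound": ["A", "B", "C"],
    "sidewall_stiffness": ["soft", "medium", "firm"],
}

def run_virtual_test(design):
    """Stand-in for a CAE simulation run; returns dummy metric scores."""
    seed = hash(tuple(sorted(design.items())))
    return {
        "tread_wear": seed % 100,
        "rolling_resistance": (seed // 100) % 100,
    }

# Enumerate every combination of the design variables (2 * 3 * 3 * 3 = 54 here;
# real sweeps reach into the thousands).
candidates = [dict(zip(design_space, values))
              for values in product(*design_space.values())]
print(f"{len(candidates)} design combinations queued for simulation")

results = [(design, run_virtual_test(design)) for design in candidates]
best = min(results, key=lambda item: item[1]["tread_wear"])
print("lowest simulated tread wear:", best[0])
```

The value of the HPC system is in the middle of that loop: each "virtual test" is a full simulation, and the faster each one runs, the more of the design space the engineers can explore.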
"The virtual testing process lets us examine multiple designs simultaneously, reducing both the number of prototypes and the time it takes to develop them," Sansalone said.
The most promising tread designs are hand-cut from a smooth-molded tire and physically tested. The results of that testing are compared with the results from the finite element analysis (FEA) software, providing an effective feedback loop to continually improve the models.
The CAE computing facilities at Cooper Tire serve two user populations: the product development teams and the research group. Product development requires support for a complex software suite and a seamless interface to the Windows-based pre- and post-processing stations. On the research side, teams push the envelope with many large ad hoc simulations, so they need a powerful system that can execute complex applications quickly.
Prior to the upgrade, typical turnaround time for product development jobs was 12 to 72 hours. With the new system, turnaround for similar jobs has been cut in half. In another example, performance on one research project improved from 2.95 hours per iteration (using one CPU) to 0.07 hours per iteration (using four CPUs and additional memory).
Additionally, engineers can now run jobs that were not previously possible. R&D projects that study tire/surface conditions are highly non-linear because of tire material properties and geometric deformation. They are also quite large, on the order of two million degrees of freedom (DoF), and must be run for many iterations. According to Sansalone, "Prior to our upgrade, this type of simulation was impossible for us to solve. After the upgrade, not only will the system handle the job, but each iteration runs in just under ten minutes, completely acceptable for this class of simulation."
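The performance figures above are worth a quick sanity check. The short Python sketch below works through the reported numbers; the 100-iteration run length at the end is a hypothetical example, not a figure from Cooper.

```python
# Quick arithmetic on the performance gains reported above.

# Research project: 2.95 hours per iteration on 1 CPU before the upgrade,
# 0.07 hours per iteration on 4 CPUs (plus additional memory) afterward.
before_hours_per_iter = 2.95
after_hours_per_iter = 0.07
cpus_after = 4

speedup = before_hours_per_iter / after_hours_per_iter   # about 42x
gain_per_cpu = speedup / cpus_after                       # about 10.5x
print(f"overall speedup: {speedup:.1f}x ({gain_per_cpu:.1f}x per added CPU)")
# The gain is far more than the 4x increase in CPU count, which points to the
# extra memory and faster hardware, not parallelism alone, doing much of the work.

# Tire/surface study: just under 10 minutes per iteration after the upgrade.
# For a hypothetical 100-iteration run (the article gives no iteration count):
iterations = 100
minutes_per_iteration = 10
print(f"about {iterations * minutes_per_iteration / 60:.0f} hours wall-clock")
```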
"In the world of tire design and development, we live and die by how quickly we can run each job," explained Sansalone. "The HP Integrity system provides us with the high performance and throughput we need to run our jobs quickly. Stability is also a huge factor in our success. With the HP system, we have not lost a single day of work. Not only is the HP hardware exceptional, the support personnel behind the scenes provide excellent expertise and customer service."
Sansalone said the HP Integrity acquisition is now three years old and the company needs more capacity, so it is working through the justification for a new system. Cooper plans to add HPC capacity alongside the existing system, but the new capacity will not use the same technology: the company has decided to move to commodity-based clustering, the prevailing trend in the supercomputing market today. It is considering an x86 Linux cluster based on Intel Xeon multicore processors and hopes to have the new system deployed by the end of this quarter.
Sansalone said they are in the final stages of the justification process. It's not a huge expense, but he works for a company that minds its pennies and wants to get the biggest bang for its buck. Cooper is working with T2 again because of their successful history, and T2 is helping the company step through the acquisition process. Cooper Tire has chosen four specific models that are difficult to resolve and handed them to T2, which has forwarded them to HP in Texas for benchmark testing.
I spoke with Knute Christensen, Manager of the High Performance Computing (HPC) Partners, Solutions & Segment Marketing Group at HP. He described T2 as a high performance computing HP partner, a value-added reseller with the emphasis on value added. We then discussed the fact that desktops are not going to disappear from companies; in fact, he noted that today's workstations have become very powerful, with enough performance to solve many problems. That said, he emphasized that there are still many problems that cannot be solved on the desktop. We also discussed the metrics that businesses can use to accurately measure ROI, and more.
In conclusion, I believe that companies do not need to replace the desktop computer with the supercomputer. These are two complementary tools in the modern product development process, and the supercomputer needs to be included as a step in the workflow. Companies can continue to use desktop applications to design and develop their ideas, then access the supercomputer directly from the desktop for the heavy engineering analysis. This is much simpler today because supercomputing has become automated; users don't have to be supercomputer experts to use these solutions. Engineers can simply log in to submit jobs and retrieve the results. This kind of environment increases the productivity of engineers and lets the great American mind do what it does best: innovate. Ultimately, innovation is the engine that drives the U.S. economy.
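For readers wondering what "log in, submit jobs and retrieve the results" looks like in practice, here is a minimal sketch assuming a PBS-style batch scheduler (qsub) on the cluster's login node. The job name, resource requests, and solver command are hypothetical; the article does not say which scheduler Cooper or OSC actually runs.

```python
# Minimal sketch of the "log in, submit a job, retrieve the results" workflow.
# Assumes a PBS-style batch scheduler (qsub) on the cluster login node; the
# job name, resource requests, and solver command are hypothetical.
import subprocess
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/sh
    #PBS -N tire_sim
    #PBS -l nodes=1:ppn=4
    #PBS -l walltime=02:00:00
    cd $PBS_O_WORKDIR
    ./run_tire_model input.dat > results.out
    """)

# Write the batch script, then hand it to the scheduler.
with open("tire_sim.pbs", "w") as f:
    f.write(job_script)

submission = subprocess.run(["qsub", "tire_sim.pbs"],
                            capture_output=True, text=True, check=True)
job_id = submission.stdout.strip()
print("submitted job", job_id)
# The engineer can poll the queue (qstat) and copy results.out back to the
# desktop when the job finishes; no HPC expertise is required beyond this.
```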
There are quite a few design choices to make when building a supercomputing system, including the server, the client, the interconnect, the operating system, the message passing middleware, and the cluster management tools. Often, each application calls for a different choice. I believe the best way to serve customers is to build a broad knowledge base of these technologies on which optimal solutions can be assembled. Vendors need to continuously monitor new and relevant technologies, streamline the processes for assessing their performance, and communicate the findings to customers efficiently. Most importantly, I believe that supercomputing has now truly entered the commodity stage of its life cycle. HP is committed to further technology development in this space and aggressively commoditizes clusters, leveraging the strengths of its strong partnerships and its ability to rapidly integrate emerging technologies.
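To give a concrete taste of the message-passing middleware layer in that stack, here is a minimal example using mpi4py, one common Python binding for MPI. It is an illustration only; the article does not identify the middleware used on any of the systems discussed.

```python
# Minimal message-passing example using mpi4py (one common MPI binding).
# Launch with something like:  mpirun -np 4 python mpi_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the communicator
size = comm.Get_size()   # total number of processes in the job

# Each rank contributes a partial value; here, simply its own rank number.
partial = rank

# Combine the partial values into a single sum on rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} ranks, sum of ranks = {total}")
```

This decompose-compute-combine pattern is, broadly speaking, how cluster-parallel CAE solvers spread a large model across many processors, which is why the choice of interconnect and middleware matters so much.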