EARTH SCIENCES
Convey Computer Doubles Graph500 Performance, Develops New Graph Personality
- Written by: Tyler O'Neal, Staff Editor
- Category: EARTH SCIENCES
Convey Computer announced multiple entries on the Graph500 List that double the performance of its previously posted results. Convey cited two reasons for the significant performance improvement -- a new "Breadth-First Search" personality and a graph-optimized memory crossbar design.
On the most recent list, announced at SC11, multiple Convey single-node, hybrid-core systems clocked in at between 1.60 and 1.76 GTEPS (billion traversed edges per second) on problem sizes 27 and 28. Convey has a total of six entries on the Graph500 List, including submissions from Lawrence Berkeley National Laboratory/National Energy Research Scientific Computing Center (LBL/NERSC), Sandia National Laboratories (SNL), and Bielefeld University.
Compared pound-for-pound and watt-for-watt, Convey's family of reconfigurable (FPGA-based) systems provides superior processing power on the Graph500 ( www.graph500.org ) list.(1) The Graph500 organization establishes and maintains a set of large-scale benchmarks that measure the performance of "big data" applications.
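Graph500 ranks systems by traversed edges per second (TEPS): the number of graph edges the BFS touches divided by the search time. A short sketch shows how a GTEPS figure on the order of those reported above follows from the problem size; the 2.5-second timing here is a hypothetical illustration, not a measured Convey result.

```python
def gteps(num_edges, bfs_seconds):
    """Giga-TEPS: billions of edges traversed per second."""
    return num_edges / bfs_seconds / 1e9

# A Graph500 scale-28 problem has 2^28 vertices and, with the
# benchmark's default edgefactor of 16, about 16 * 2^28 edges.
edges = 16 * 2**28

# A hypothetical 2.5-second BFS over that graph would score:
print(round(gteps(edges, 2.5), 2))  # ~1.72 GTEPS
```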
The main component of the current Graph500 benchmark is a Breadth-First Search (BFS) of a constructed graph. To accelerate the BFS, and demonstrate the performance of the Convey hybrid-core architecture, Convey developed a "personality" specific to the BFS algorithm. The BFS personality leverages Convey's balanced architecture, which is based on a highly parallel memory subsystem and high-performance reconfigurable compute elements. The personality contains multiple function pipes (implemented in hardware on the system's FPGAs), and typically has thousands of loads in flight simultaneously. It also manages the synchronization of stores to memory as required by the Graph500 benchmark.
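For reference, the kernel the benchmark times can be written in a few lines. The serial sketch below shows the BFS that produces the parent array Graph500 validates; Convey's personality accelerates exactly this traversal by running many function pipes in parallel with thousands of memory loads in flight, rather than one queue operation at a time.

```python
from collections import deque

def bfs(adj, source):
    """Breadth-first search over an adjacency-list graph.

    Returns the parent array the Graph500 benchmark checks:
    parent[v] is the vertex from which v was first reached
    (parent[source] == source; unreached vertices stay -1).
    """
    parent = [-1] * len(adj)
    parent[source] = source
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if parent[v] == -1:   # first visit claims the vertex
                parent[v] = u
                frontier.append(v)
    return parent
```

The synchronization Convey's personality manages in hardware corresponds to the "first visit claims the vertex" store above: when many pipes explore the frontier concurrently, updates to the parent array must not race.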
"There is little doubt that memory systems are an 'Achilles' heel' of big data applications. Today's commodity systems are optimized for sequential memory accesses, not the random accesses typically found in graph problems. This really hurts performance when processing large-scale analytics applications," explained Bruce Toal, CEO and co-founder of Convey Computer. "Our hybrid-core solution combines a powerful memory subsystem, which is ideal for massive data analytics, and a graph-friendly architecture capable of managing multi-terabyte graphs with billions of nodes."
In a computing world where 30 billion pieces of content are shared on Facebook every month,(2) today's high-performance computing systems are expected to exploit the relationships between data -- and not simply process data. This deluge of "big data" means big business for HPC because new computing architectures are required to handle the "new HPC" applications such as bioinformatics, graph analytics, cyber security, and algorithmic research.