ACADEMIA
New Parallel Processing Textbook
TITLE: Parallel Programming in C with MPI and OpenMP
AUTHOR: Michael J. Quinn, Oregon State University, quinn@eecs.orst.edu
PUBLISHER: McGraw-Hill
PAGES: 543
PUBLICATION DATE: June 6, 2003
URL: http://www.mcgraw-hillengineeringcs.com
OVERVIEW:
The era of practical parallel programming has arrived, marked by the popularity of the MPI and OpenMP software standards and the emergence of commodity clusters as the hardware platform of choice for an increasing number of organizations. This text addresses the needs of students and professionals who want to learn how to design, analyze, implement, and benchmark parallel programs in C using MPI and/or OpenMP. It introduces a rock-solid design methodology with coverage of the most important MPI functions and OpenMP directives. It demonstrates, through a wide range of examples, how to develop parallel programs that will execute efficiently on today's parallel platforms.
Fortran programmers interested in parallel programming can also benefit from this text. While the examples in the book are in C, the underlying concepts of parallel programming with MPI and OpenMP are essentially the same for both C and Fortran programmers.
KEY FEATURES
FIVE-CHAPTER, TUTORIAL INTRODUCTION TO THE MPI LIBRARY. Quinn presents new functions only when needed to solve a programming problem. A carefully
crafted series of example programs gradually introduces 27 key MPI functions. Collective communication functions are presented before point-to-point message passing, making it easier for inexperienced parallel programmers to write correct parallel codes.
TUTORIAL INTRODUCTION TO OPENMP. A progressively more complicated series of code segments, functions, and programs allows each OpenMP directive or
function to be introduced "just in time" to meet a need.
INTRODUCTION TO HYBRID PARALLEL PROGRAMMING USING BOTH MPI AND OPENMP. This is often the most effective way to program clusters built from symmetric multiprocessors (SMPs).
EMPHASIS ON DESIGN, ANALYSIS, IMPLEMENTATION, and BENCHMARKING. An early chapter introduces a rigorous parallel algorithm design process. This
process is used throughout the rest of the book to develop parallel algorithms for a wide variety of applications. The book repeatedly demonstrates how benchmarking a sequential program and carefully analyzing a parallel design can lead to accurate predictions of the performance of a parallel program.
EXCEPTIONAL CHAPTER ON PERFORMANCE ANALYSIS. Quinn takes a single, generic speedup formula and derives from it Amdahl's Law, Gustafson-Barsis's Law,
the Karp-Flatt metric, and the isoefficiency relation. Students will learn the purpose of each formula and how the formulas relate to one another.
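As a sketch of this style of derivation (our notation, which may differ from the book's), write sigma(n) for the inherently sequential computation, phi(n) for the parallelizable computation, and kappa(n,p) for parallel overhead. Then Amdahl's Law falls out of a generic speedup bound:

```latex
% Generic speedup bound for p processors:
\psi(n,p) \;\le\; \frac{\sigma(n) + \varphi(n)}
                       {\sigma(n) + \varphi(n)/p + \kappa(n,p)}

% Ignore overhead (\kappa = 0) and let f = \sigma(n)/(\sigma(n)+\varphi(n))
% be the sequential fraction; dividing through by \sigma(n)+\varphi(n) gives
% Amdahl's Law:
\psi \;\le\; \frac{1}{f + (1-f)/p}
```

For example, with a sequential fraction f = 0.1, speedup can never exceed 1/0.1 = 10, no matter how many processors are used.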
PARALLEL ALGORITHMS FOR MANY APPLICATIONS. The book considers parallel implementations of Floyd's algorithm, matrix-vector multiplication, matrix
multiplication, Gaussian elimination, the conjugate gradient method, finite difference methods, sorting, the fast Fourier transform, backtrack search,
branch-and-bound, and more.
THOROUGH TREATMENT OF MONTE CARLO ALGORITHMS. A full chapter on this often-neglected topic introduces problems associated with parallel random
number generation and covers random walks, simulated annealing, the Metropolis algorithm, and much more.
BRIEF TABLE OF CONTENTS
1 Motivation and History
2 Parallel Architectures
3 Parallel Algorithm Design
4 Message-Passing Programming
5 The Sieve of Eratosthenes
6 Floyd's Algorithm
7 Performance Analysis
8 Matrix-Vector Multiplication
9 Document Classification
10 Monte Carlo Methods
11 Matrix Multiplication
12 Solving Linear Systems
13 Finite Difference Methods
14 Sorting
15 The Fast Fourier Transform
16 Combinatorial Search
17 Shared-Memory Programming
18 Combining MPI and OpenMP
A MPI Functions
B Utility Functions
C Debugging MPI Programs
D Review of Complex Numbers
E OpenMP Functions
Bibliography
Author Index
Subject Index