
Preface

Welcome to Designing and Building Parallel Programs ! My goal in this book is to provide a practitioner's guide for students, programmers, engineers, and scientists who wish to design and build efficient and cost-effective programs for parallel and distributed computer systems. I cover both the techniques used to design parallel programs and the tools used to implement these programs. I assume familiarity with sequential programming, but no prior exposure to parallel computing.

Designing and Building Parallel Programs promotes a view of parallel programming as an engineering discipline, in which programs are developed in a methodical fashion and both cost and performance are considered in a design. This view is reflected in the structure of the book, which is divided into three parts. The first part, Concepts, provides a thorough discussion of parallel algorithm design, performance analysis, and program construction, with numerous examples to illustrate fundamental principles. The second part, Tools, provides an in-depth treatment of four parallel programming tools: the parallel languages Compositional C++ (CC++), Fortran M (FM), and High Performance Fortran (HPF), and the Message Passing Interface (MPI) library. HPF and MPI are standard parallel programming systems, and CC++ and FM are modern languages particularly well suited for parallel software engineering. Part II also describes tools for collecting and analyzing performance data. The third part, Resources, surveys some fundamental parallel algorithms and provides many pointers to other sources of information.

How to Use This Book

In writing this book, I chose to decouple the presentation of fundamental parallel programming concepts from the discussion of the parallel tools used to realize these concepts in programs. This separation allowed me to present concepts in a tool-independent manner; hence, commonalities between different approaches are emphasized, and the book does not become a manual for a particular programming language.

However, this separation also has its dangers. In particular, it may encourage you to think that the concepts introduced in Part I can be studied independently of the practical discipline of writing parallel programs. This assumption would be a serious mistake. Parallel programming, like most engineering activities, is best learned by doing. Practical experience is essential! Hence, I recommend that chapters from Parts I and II be studied concurrently. This approach will enable you to acquire the hands-on experience needed to translate knowledge of the concepts introduced in the book into the intuition that makes a good programmer. For the same reason, I also recommend that you attempt as many of the end-of-chapter exercises as possible.

Designing and Building Parallel Programs can be used as both a textbook for students and a reference book for professionals. Because the hands-on aspects of parallel programming are so important, professionals may find it useful to approach the book with a programming problem in mind and to make the development of a solution to this problem part of the learning process. The basic materials have been classroom tested. For example, I have used them to teach a two-quarter graduate-level course in parallel computing to students from both computer science and non-computer-science backgrounds. In the first quarter, students covered much of the material in this book; in the second quarter, they tackled a substantial programming project. Colleagues have used the same material to teach a one-semester undergraduate introduction to parallel computing, augmenting this book's treatment of design and programming with additional readings in parallel architecture and algorithms.

Acknowledgments

It is a pleasure to thank the colleagues with whom and from whom I have gained the insights that I have tried to distill in this book: in particular Mani Chandy, Bill Gropp, Carl Kesselman, Ewing Lusk, John Michalakes, Ross Overbeek, Rick Stevens, Steven Taylor, Steven Tuecke, and Patrick Worley. In addition, I am grateful to the many people who reviewed the text. Enrique Castro-Leon, Alok Choudhary, Carl Kesselman, Rick Kendall, Ewing Lusk, Rob Schreiber, and Rick Stevens reviewed one or more chapters. Gail Pieper, Brian Toonen, and Steven Tuecke were kind enough to read the entire text. Addison-Wesley's anonymous reviewers also provided invaluable comments. Nikos Drakos provided the latex2html software used to construct the online version, and Cris Perez helped run it. Brian Toonen tested all the programs and helped in other ways too numerous to mention. Carl Kesselman made major contributions to Chapter 5. Finally, all the staff at Addison-Wesley, and in particular editor Tom Stone and editorial assistant Kathleen Billus, were always a pleasure to work with.

Many of the tools and techniques described in this book stem from the pioneering work of the National Science Foundation's Center for Research in Parallel Computation, without which this book would not have been possible. I am also grateful to the Office of Scientific Computing of the U.S. Department of Energy for their continued support.



© Copyright 1995 by Ian Foster