Stanford 50: State of the Art and Future Directions of Computational Mathematics and Numerical Computing


  • March 30, 2007
  • 4:25 pm - 4:50 pm

Parallel matrix computation: From the ILLIAC to quantum computing

Dianne O'Leary (University of Maryland)

The basic ideas behind parallel matrix computation were developed in the 1960s, 1970s, and 1980s. The single-instruction-multiple-data (SIMD) model was among the first ideas, implemented in machines such as the ILLIAC III and IV. Some later parallel machines implemented dataflow computing ideas.

Today, algorithms developed for these early machines are being revised and reused. For example, graphics processing units (GPUs) are cost-effective and widely available SIMD parallel processors. The talk will discuss an efficient GPU implementation of an interior point algorithm for solving linear programming problems, devised in collaboration with Jin Hyuk Jung.
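Interior point methods map well onto SIMD hardware because each iteration is dominated by dense matrix products plus one small positive-definite solve. As a minimal illustration of that structure (a generic primal affine-scaling sketch, not the Jung-O'Leary GPU implementation; the function name, the step parameter `alpha`, and the tolerances are assumptions):

```python
import numpy as np

def affine_scaling_lp(A, b, c, x0, alpha=0.5, iters=100):
    """Sketch of a primal affine-scaling interior point method for
    min c^T x subject to A x = b, x >= 0, starting from a strictly
    feasible x0 > 0.  Each iteration is dense matrix arithmetic plus
    one small symmetric solve -- the kind of regular, data-parallel
    work that SIMD hardware such as a GPU executes efficiently."""
    x = x0.astype(float)
    for _ in range(iters):
        X2 = np.diag(x**2)                            # scaling matrix X^2
        y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)  # normal equations
        z = c - A.T @ y                                # reduced costs
        d = -(x**2) * z                                # search direction
        if np.max(np.abs(d)) < 1e-12:                  # direction vanished
            break
        # Damped step keeping x strictly positive (interior).
        step = alpha / max(1e-12, np.max(-d / x))
        x = x + step * d
    return x
```

On a tiny LP such as minimizing -x1 - 2*x2 subject to x1 + x2 + x3 = 1, x >= 0, the iterates approach the vertex (0, 1, 0); the point of the sketch is only that every line above is vectorized linear algebra.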

In a second current application, algorithms for parallel matrix computation are not executed at all but are instead used to design efficient hardware. Specifically, dataflow algorithms for the QR decomposition yield efficient designs for quantum computers, and the talk will focus on this rather surprising application (joint work with Gavin Brennen and Stephen Bullock).
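For context on why QR has a dataflow structure: in a Givens-rotation QR factorization, each rotation mixes only two rows, so rotations acting on disjoint row pairs are independent and can fire as soon as their inputs are ready, which is what a dataflow schedule (and, in the talk, a hardware design) exploits. A minimal sketch of the standard sequential Givens algorithm, not the specific construction of the talk:

```python
import numpy as np

def givens_qr(A):
    """QR factorization of an m x n matrix by Givens rotations.
    Each 2x2 rotation touches only rows (i-1, i), so rotations on
    disjoint row pairs commute and could execute in parallel waves;
    this sketch simply applies them one at a time."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):       # zero entries below diagonal j
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue                     # entry already zero
            cth, sth = a / r, b / r
            G = np.array([[cth, sth], [-sth, cth]])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]   # rotate the two rows
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T  # accumulate Q
    return Q, R
```

The inner update touches exactly two rows of R (and two columns of Q), which is the locality property that makes a systolic or dataflow arrangement of the rotations possible.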
