CME 194: Introduction to MPI

Outline

This short course runs for the first four weeks of the quarter and is offered every spring quarter. It is recommended for students who would like to write parallel programs. You will be exposed to distributed memory programming via the Message Passing Interface (MPI). In distributed memory programming, unlike shared memory programming, individual processes share nothing and communicate by sending messages. The goal of this course is to teach you how to write efficient parallel programs and to actually get you to write them. Topics include: parallel decomposition, basic communication primitives, collective operations, and debugging. The lectures will be interactive, and the homework will require writing software. Students should be comfortable with and interested in writing software in C/C++, but no prior parallel programming experience is required.
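As a quick taste of this model, below is a minimal sketch (not part of the course materials; the file name hello.c and the launch commands are illustrative) in which two processes that share no memory communicate through an explicit send and receive:

    /* hello.c -- a minimal illustration of message passing with MPI.
       Compile with an MPI wrapper compiler, e.g.:  mpicc hello.c -o hello
       Run with at least two processes, e.g.:       mpirun -np 2 ./hello */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */

        if (rank == 0) {
            int token = 42;
            /* Processes share nothing, so rank 0 must send explicitly. */
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int token;
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0.\n", token);
        }

        MPI_Finalize();
        return 0;
    }

Every process runs the same program; behavior diverges only through the rank each process is assigned, which is the basic pattern the rest of the course builds on.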

Syllabus

Instructor: Ryan H. Lewis

Writing an MPI program is a lot like staging a theatrical performance. While on stage, the actors must work well together, but each must not forget to deliver the best performance they can. Here are the main acts of this performance:
  • Wednesday 4/1 Lecture 1: Disce Fundamenta -- Introduction to Parallel Computing & MPI Basics
  • Monday 4/6 Lecture 2: Inchoare Commercio & Rebus Iniuriam -- Getting Started, Point-to-Point Communication & Communicators
  • Wednesday 4/8 Lab 1: Preliminaries, Basics & Parallel Sorting
  • Monday 4/13 Lecture 3: Communi Opera -- Collective Operations
  • Wednesday 4/15 Lecture 4: Coetibus Opus -- Derived Datatypes and Serialization
  • Monday 4/20 Lab 2: Distributed Linear Algebra
  • Wednesday 4/22 Lecture 5: Essentia Callidus --
  • Monday 4/27 Lab 3: