mpi4py#

This page describes how to install, test, and use MPI for Python (mpi4py).

Do not install mpi4py through pip install mpi4py.

Installing mpi4py through pip install mpi4py will install a generic MPI that will not work on our clusters. Instead, install mpi4py separately for each cluster following the steps outlined on this page.

Running MPI parallel Python scripts is only supported on the compute nodes and not on frontend nodes.

Optional: Preparation#

You might want to install mpi4py into a conda/venv environment.
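
For example, a virtual environment could be created and activated as in the following sketch; the Python module name and the environment path are placeholders and depend on your cluster:

    module load python                       # placeholder: the cluster's Python module
    python -m venv ~/venvs/mpi4py-env        # create the virtual environment
    source ~/venvs/mpi4py-env/bin/activate   # activate it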

Installation#

Installation must be performed on the cluster frontend node:

  1. Load the Python module.
  2. Load the MPI module.
  3. Optional: create or activate the conda/virtual environment into which mpi4py should be installed.
  4. Install mpi4py, specifying the path to the MPI compiler wrapper (here mpicc):
    MPICC=$(which mpicc) pip install --no-cache-dir mpi4py
    

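Putting these installation steps together, a session on the frontend node might look like the following sketch; the module names and the environment path are placeholders for your cluster:

    module load python                       # placeholder: the cluster's Python module
    module load openmpi                      # placeholder: the recommended MPI module
    source ~/venvs/mpi4py-env/bin/activate   # optional: activate the target environment
    MPICC=$(which mpicc) pip install --no-cache-dir mpi4py
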
See our MPI documentation for which MPI module to load and which compiler wrapper to use. More details regarding the installation can be found in the official mpi4py documentation.

Test installation#

Testing the installation must be performed inside an interactive job:

  1. Load the Python and MPI module versions mpi4py was built with.
  2. Activate the conda/virtual environment mpi4py was installed into (if any).
  3. Run an MPI parallel Python script:
    srun python -m mpi4py.bench helloworld
    

This should print one line per process in the following form:

Hello, World! I am process  <rank> of <size> on <hostname>

The number of processes to start is configured through the -n <no. of processes> flag of srun.
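
As a sketch, a complete test inside an interactive job might look like this; the module names and the environment path are placeholders:

    module load python                       # placeholder: the Python module mpi4py was built with
    module load openmpi                      # placeholder: the MPI module mpi4py was built with
    source ~/venvs/mpi4py-env/bin/activate   # only if mpi4py was installed into an environment
    srun -n 4 python -m mpi4py.bench helloworld

With -n 4 this should print four of the "Hello, World!" lines shown above, one per process.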

Usage#

MPI parallel Python scripts using mpi4py only work inside a job on a compute node.

In an interactive job or inside a job script, perform the following steps:

  1. Load the Python and MPI module versions mpi4py was built with.
  2. Activate the conda/virtual environment mpi4py was installed into (if any).
  3. Run the MPI parallel Python script:
    srun python <script>
    

The number of processes to start is configured through the -n <no. of processes> flag of srun.

See batch processing for how to request an interactive job via salloc and how to write a job script.
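
For illustration, a minimal job script might look like the sketch below; the #SBATCH options, module names, environment path, and script name are placeholders and should be adapted as described in the batch processing documentation:

    #!/bin/bash
    #SBATCH --ntasks=4                       # placeholder: number of MPI processes
    #SBATCH --time=00:10:00                  # placeholder: requested walltime

    module load python                       # placeholder: the Python module mpi4py was built with
    module load openmpi                      # placeholder: the MPI module mpi4py was built with
    source ~/venvs/mpi4py-env/bin/activate   # only if mpi4py was installed into an environment

    srun python my_script.py                 # my_script.py is a placeholder for your MPI parallel script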