MPICH2 on Shark

----

MPICH is a freely available, portable implementation of MPI, a standard for message passing for distributed-memory applications used in parallel computing.

The MPICH2 libraries are installed on all execution nodes. MPICH2 can be used with the old startup method MPD or with the new startup method Hydra. In order to use the MPICH2 implementation you need a parallel environment.

To list all parallel environments, execute this command:

`qconf -spl`

You can load either one of these MPI libraries by using the module command.

**To use the mpich2 library with MPD**

The mpich2 library uses the parallel environment mpich2_mpd.
First create a **.mpd.conf** file in your home directory with the following command:
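The exact command used on the original page is not reproduced here; a typical way to create the file is the following sketch (the secret word is a placeholder you must replace with your own):

```shell
# Write the MPD configuration file. MPD requires it to contain a secret
# word and to be readable only by the owner (mode 600), or it refuses to start.
echo "MPD_SECRETWORD=change_me" > "$HOME/.mpd.conf"   # "change_me" is a placeholder
chmod 600 "$HOME/.mpd.conf"
```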

Now compile the code:

`mpicc -o mpihello mpihello.c`
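The source of mpihello.c is not shown on this page; a minimal MPI hello world along these lines (a sketch, not necessarily the original file) would serve:

```c
/* mpihello.c - minimal MPI hello world (a sketch; the original file
 * may differ). Each rank prints its number and the total rank count. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks  */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                          /* shut the runtime down  */
    return 0;
}
```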

Now you are ready to submit your first MPI code with 2 slots (cores/CPUs):

`qsub -pe mpich2_mpd 2 mpich2_mpd.sh`
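The job script mpich2_mpd.sh itself does not appear on this page; under MPD it would typically look something like the following sketch (the PE name mpich2_mpd comes from above and the slot count is given on the qsub command line; everything else is an assumption):

```
#!/bin/bash
#$ -V
#$ -N mpich2_mpd_tst
#$ -cwd

echo "Got $NSLOTS slots."

mpiexec -np $NSLOTS mpihello
```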

**To use the mpich2 library with Hydra**

----

With version 1.3 of MPICH2, Hydra became the default startup method for slave tasks, and the other startup methods will be removed over time. Hydra has tight integration with SGE compiled in by default, so no special setup in SGE is necessary any longer to support MPICH2 jobs.

Hydra works out of the box with a parallel environment in which start_proc_args and stop_proc_args are both set to NONE (in essence, the same PE can now be used for Open MPI and MPICH2). In the job script, a plain mpiexec automatically discovers the granted slots and nodes without any further options. Nevertheless, if more than one MPI installation is available on the cluster, the mpiexec corresponding to the compiled application must be used as usual.
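For reference, such a PE definition might look like the following (the NONE settings come from the text above; the other values are illustrative — on Shark you can inspect the real definition with `qconf -sp mpich2`):

```
pe_name            mpich2
slots              999
user_lists         NONE
xuser_lists        NONE
start_proc_args    NONE
stop_proc_args     NONE
allocation_rule    $round_robin
control_slaves     TRUE
job_is_first_task  FALSE
```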
The module for this version of MPICH2 is: `module load mpich/3.2`

The right parallel environment for MPICH2 with Hydra is: **mpich2**

The same mpihello.c can be used; you only need to recompile the source with the right mpicc from

`/usr/local/mpich/mpich-3.2/bin/mpicc`

Make sure that the mpich2 module is unloaded:

`module unload mpich2`

Now load the right module, mpich/3.2:

`module load mpich/3.2`

Now compile the code:

`mpicc -o mpi3hello mpihello.c`

For MPICH2 with Hydra we need to use the parallel environment mpich2. To submit an MPICH2 job with qsub and 10 slots you need a shell script. It is important to use the option -rmk sge and to set the environment variable QRSH_WRAPPER.

The submit script, mpich2-qsub.sh, would look like this:

```
#!/bin/bash
#$ -V
#$ -N mpich2_tst
#$ -cwd
#$ -pe mpich2 10
#$ -v QRSH_WRAPPER=/usr/local/OpenGridScheduler/gridengine/bin/linux-x64/qrshwrapper

echo "Got $NSLOTS slots."

mpiexec -rmk sge -np $NSLOTS mpi3hello
```

You can now submit this script with qsub:

`qsub mpich2-qsub.sh`