3.3 Parallelization levels

Data structures are distributed across processors. Processors are organized in a hierarchy of groups, each identified by a different MPI communicator. The group hierarchy is as follows:

                            / task groups
   world _ images _ pools
                            \ linear-algebra groups

world: the group of all processors (MPI_COMM_WORLD).

images: Processors can be divided into different "images", each corresponding to a different point in configuration space (i.e. a different set of atomic positions) in NEB calculations, or to one (or more than one) "irrep" or wave-vector in phonon calculations.

pools: When k-point sampling is used, each image group can be subpartitioned into "pools", and k-points can be distributed across pools. Within each pool, the reciprocal-space basis set (PWs) and the real-space grids are distributed across processors. This is usually referred to as "PW parallelization". All linear-algebra operations on arrays of PWs / real-space grids are automatically and effectively parallelized. A 3D FFT is used to transform electronic wave functions from reciprocal to real space and vice versa. The 3D FFT is parallelized by distributing planes of the 3D grid in real space to processors (in reciprocal space, it is columns of G-vectors that are distributed to processors).

task groups: In order to allow good parallelization of the 3D FFT when the number of processors exceeds the number of FFT planes, data can be redistributed to "task groups" so that each group can process several wavefunctions at the same time.

linear-algebra group: A further level of parallelization, independent of PW or k-point parallelization, is the parallelization of subspace diagonalization (pw.x) or iterative orthonormalization (cp.x). Both operations require the diagonalization of arrays whose dimension is the number of Kohn-Sham states (or a small multiple of it). All such arrays are distributed block-like across the "linear-algebra group", a subgroup of the pool of processors organized in a square 2D grid. As a consequence, the number of processors in the linear-algebra group is n^2, where n is an integer; n^2 must be smaller than the number of processors of a single pool. The diagonalization is then performed in parallel using standard linear-algebra operations. (This diagonalization is used by, but should not be confused with, the iterative Davidson algorithm.) ScaLAPACK is used if available at compile time; otherwise, internal built-in algorithms are used.

Communications: Images and pools are loosely coupled: processors communicate between different images and pools only once in a while, whereas processors within each pool are tightly coupled and communications are significant. This means that Gigabit ethernet (typical for cheap PC clusters) is OK up to 4-8 processors per pool, but fast communication hardware (e.g. Myrinet or comparable) is absolutely needed beyond 8 processors per pool.

Choosing parameters: The number of processors in each group is controlled by the command-line switches -nimage, -npool, -ntg, and -northo (for cp.x) or -ndiag (for pw.x). As an example, consider the following command line:

mpirun -np 4096 ./pw.x -nimage 8 -npool 2 -ntg 8 -ndiag 144 -input my.input
This executes PWscf on 4096 processors, to simulate a system with 8 images, each of which is distributed across 512 processors. k-points are distributed across 2 pools of 256 processors each, 3D FFT is performed using 8 task groups (64 processors each, so the 3D real-space grid is cut into 64 slices), and the diagonalization of the subspace Hamiltonian is distributed to a square grid of 144 processors (12x12).

Default values are: -nimage 1 -npool 1 -ntg 1; -ndiag is set to 1 if ScaLAPACK is not compiled in, otherwise it is set to the largest square integer smaller than or equal to half the number of processors of each pool.
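
As a worked illustration (the numbers are illustrative): with pools of 256 processors and ScaLAPACK compiled in, half the pool size is 128, and the largest square integer not exceeding 128 is 121 = 11 x 11, so the default corresponds to -ndiag 121. The explicit -ndiag 144 in the example above selects a 12 x 12 grid instead, which is allowed because 144 is smaller than the pool size of 256.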

3.3.0.1 Massively parallel calculations

For very large jobs (i.e. O(1000) atoms or so), or for very long jobs to be run on massively parallel machines (e.g. IBM BlueGene), it is crucial to use both the "task group" and the "linear-algebra" parallelization in an effective way. Without a judicious choice of parameters, large jobs will find a stumbling block in either memory or CPU requirements. In particular, the linear-algebra parallelization is used in the diagonalization of matrices in the subspace of Kohn-Sham states (whose dimension is, as a strict minimum, equal to the number of occupied states). These are stored as block-distributed matrices (distributed across processors) and diagonalized using custom-tailored algorithms that work on block-distributed matrices.

Since v.4.1, ScaLAPACK can be used to diagonalize block-distributed matrices, yielding better speed-up than the default algorithms for large matrices (> 1000) when using a large number of processors (> 512). If you want to test ScaLAPACK, use configure -with-scalapack. This will add -D__SCALAPACK to DFLAGS in make.sys and set LAPACK_LIBS to something like:

    LAPACK_LIBS = -lscalapack -lblacs -lblacsF77init -lblacs -llapack
The repeated -lblacs is not an error: it is needed! If configure does not recognize ScaLAPACK, ask your system manager for the correct way to link it.
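
A quick way to check the outcome (a minimal sketch, assuming you run it from the top-level espresso directory where make.sys resides):

    grep -E '^(DFLAGS|LAPACK_LIBS)' make.sys

Both lines should show the ScaLAPACK-related entries mentioned above.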

A further possibility to expand scalability, especially on machines like IBM BlueGene, is to use mixed MPI-OpenMP. The idea is to have one (or more) MPI process(es) per multicore node, with OpenMP parallelization within each node. This option is activated by configure -with-openmp, which adds the preprocessing flag -D__OPENMP and one of the following compiler options:

ifort: -openmp
xlf: -qsmp=omp
PGI: -mp
ftn: -mp=nonuma
OpenMP parallelization is currently implemented and tested for the following combinations of FFTs and libraries:
internal FFTW copy: -D__FFTW
ESSL: -D__ESSL or -D__LINUX_ESSL, link with -lesslsmp
ACML: -D__ACML, link with -lacml_mp.
Currently, ESSL (when available) is faster than the internal FFTW, which in turn is faster than ACML.
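
To give an idea of how a hybrid MPI-OpenMP job is launched (a minimal sketch; the process and thread counts, the -npool value, and the input file name are illustrative, and the options for process placement depend on your MPI implementation):

export OMP_NUM_THREADS=4      # OpenMP threads per MPI process, e.g. one per core of a 4-core node
mpirun -np 64 ./pw.x -npool 4 -input my.input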

3.3.1 Understanding parallel I/O

In parallel execution, each processor has its own slice of the wavefunctions, to be written to temporary files during the calculation. The way wavefunctions are written by pw.x is governed by the variable wf_collect, in namelist &CONTROL. If wf_collect=.true., the final wavefunctions are collected into a single directory, written by a single processor, whose format is independent of the number of processors. If wf_collect=.false. (default) each processor writes its own slice of the final wavefunctions to disk in the internal format used by PWscf.

The former case requires more disk I/O and disk space, but produces portable data files; the latter case requires less I/O and disk space, but the data so produced can be read only by a job running on the same number of processors and pools, and only if all files are on a file system visible to all processors (i.e. you cannot use local scratch directories: there is presently no way to ensure that the distribution of processes on processors will follow the same pattern for different jobs).
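
As an illustration (a minimal sketch; the prefix and outdir values are made up and the rest of the input is omitted), wf_collect is set in the &CONTROL namelist of the pw.x input file:

 &CONTROL
    calculation = 'scf'
    prefix      = 'mysystem'
    outdir      = '/scratch/mysystem/'
    wf_collect  = .true.   ! collect final wavefunctions into a single, portable dataset
 /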

cp.x, instead, always collects the final wavefunctions into a single directory. Files written by pw.x can be read by cp.x only if wf_collect=.true. (and only if produced for the k = 0 case). The directory for data is specified in the input variables outdir and prefix (the former can also be specified via the environment variable ESPRESSO_TMPDIR): outdir/prefix.save. A copy of the pseudopotential files is also written there. If some processor cannot access the data directory, the pseudopotential files are read instead from the pseudopotential directory specified in the input data. Unpredictable results may follow if those files are not the same as those in the data directory!
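
For example (the path is illustrative), the data location can be supplied through the environment instead of in each input file:

export ESPRESSO_TMPDIR=/scratch/myuser/espresso

so that, with prefix='mysystem', the data directory is /scratch/myuser/espresso/mysystem.save.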

IMPORTANT: Avoid I/O to network-mounted disks (via NFS) as much as you can! Ideally the scratch directory outdir should be a modern Parallel File System. If you do not have any, you can use local scratch disks (i.e. each node is physically connected to a disk and writes to it) but you may run into trouble anyway if you need to access your files that are scattered in an unpredictable way across disks residing on different nodes.

You can use the input variable disk_io='minimal', or even 'none', if you run into trouble (or into angry system managers) because of excessive I/O from pw.x. The code will then keep wavefunctions in RAM during the calculation. Note however that this will increase memory usage and may limit or prevent restarting from interrupted runs.
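
A minimal sketch of the corresponding input fragment (the rest of the &CONTROL namelist is omitted):

 &CONTROL
    disk_io = 'minimal'   ! or 'none' to reduce I/O even further
 /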

3.3.1.1 Cray XT3

On the Cray XT3 there is a special hack to keep files in memory instead of writing them, without any changes to the code. You have to do module load iobuf before compiling, and then add -liobuf at link time. When you run a job, set the environment variable IOBUF_PARAMS to proper values and you can gain a lot. Here is one example:
env IOBUF_PARAMS='*.wfc*:noflush:count=1:size=15M:verbose,\
*.dat:count=2:size=50M:lazyflush:lazyclose:verbose,\
*.UPF*.xml:count=8:size=8M:verbose' pbsyod \
~/espresso/bin/pw.x -npool 4 -in si64pw2x2x2.inp >& \
si64pw2x2x232moreiobuf.out &
This will ignore all flushes on the *.wfc* (scratch) files, using a single I/O buffer large enough to contain the whole file (~12 MB here); this way they are actually never(!) written to disk. The *.dat files are part of the restart, so they are needed, but you can be 'lazy' since they are write-only. The .xml files have a lot of accesses (due to iotk), but with a few rather small buffers this can be handled as well. You have to pay attention not to make the buffers too large if the code also needs a lot of memory; in this example there is a lot of room for improvement. After you have tuned those parameters, you can remove the 'verbose' options and enjoy the fast execution. Apart from the I/O issues, the Cray XT3 is a really nice and fast machine. (Info by Axel Kohlmeyer, maybe obsolete)

