MPI-SCATCI
2.0
An MPI version of SCATCI
Distribution of processes into a grid.
Data Types
    type processgrid
        MPI process grid layout.
Functions/Subroutines
    subroutine setup_process_grid (this, ngroup, sequential)
        Initialize the process grid.
    subroutine summarize (this)
        Write the current grid layout to stdout.
    logical function is_my_group_work (this, i)
        Check whether a work-item is to be processed by this process' group.
    integer function which_group_is_work (this, i)
        Find out which group a work-item will be processed by.
    integer function group_master_world_rank (this, igroup)
        Find out the world rank of the master process of a given MPI group.
    subroutine square (n, a, b)
        Given the integer area of a box, calculate its edges.
Variables
    type(processgrid) process_grid
Distribution of processes into a grid.
This module contains utility routines and types which aid with the "parallelization of the parallelization", i.e., with concurrent distributed diagonalizations of the Hamiltonians of multiple irreducible representations.
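A minimal usage sketch (not taken from the library) illustrating the intended workflow: one work-item per irreducible representation, distributed over the MPI groups. Whether the listed procedures are reachable as plain module procedures or only as type-bound methods of processgrid is an implementation detail; the sketch assumes the former, and the value of nirrep is hypothetical.

    use mpi
    use parallelization_module

    integer :: irrep, nirrep, ierror

    call MPI_Init(ierror)

    ! hypothetical number of irreducible representations to diagonalize
    nirrep = 8

    ! split MPI_COMM_WORLD into (at most) nirrep groups, concurrent mode
    call setup_process_grid(process_grid, nirrep, .false.)
    call summarize(process_grid)

    do irrep = 1, nirrep
        ! each group handles only the work-items that wrap onto it
        if (is_my_group_work(process_grid, irrep)) then
            ! ... build and diagonalize the Hamiltonian of this irrep ...
        end if
    end do

    call MPI_Finalize(ierror)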
integer function parallelization_module::group_master_world_rank (class(processgrid), intent(inout) this, integer, intent(in) igroup)
Find out the world rank of the master process of a given MPI group.
Definition at line 293 of file Parallelization_module.F90.
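For illustration only, a hedged sketch of how the returned rank might be used: broadcasting results from one group's master to every process in the world communicator. The buffer, its size, and the chosen group index are hypothetical.

    use mpi
    use parallelization_module

    integer :: root, igroup, ierror
    real(8) :: eigenvalues(100)   ! hypothetical result buffer

    ! world rank of the master of group 0
    igroup = 0
    root = group_master_world_rank(process_grid, igroup)

    ! every process receives the eigenvalues computed by that group
    call MPI_Bcast(eigenvalues, size(eigenvalues), MPI_DOUBLE_PRECISION, &
                   root, MPI_COMM_WORLD, ierror)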
logical function parallelization_module::is_my_group_work (class(processgrid), intent(inout) this, integer, intent(in) i)
Check whether this work-item is to be processed by this process' group.
The work-item index is expected to be greater than or equal to 1. Work-items (indices) larger than the number of MPI groups wrap around.
Definition at line 262 of file Parallelization_module.F90.
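The wrap-around rule can be pictured with a one-line sketch; "ngroup" and "igroup" are assumed member names here, not necessarily the actual components of processgrid.

    ! sketch: work-item i belongs to group mod(i - 1, ngroup), so it is
    ! "mine" exactly when that index matches this process' own group
    is_mine = ( mod(i - 1, this % ngroup) == this % igroup )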
subroutine parallelization_module::setup_process_grid (class(processgrid), intent(inout) this, integer, intent(in) ngroup, logical, intent(in) sequential)
Initialize the process grid.
Splits the world communicator into the given number of MPI groups. Sets up all global group communicators and the local group communicators (the subset of each group residing on one node). If there are more groups than processes, all groups are made equal to the MPI world. If the number of processes is not divisible by the number of groups, the leading mod(nprocs,ngroup) groups will contain one process more than the remaining groups.
    this        Process grid to set up.
    ngroup      Number of MPI groups to create.
    sequential  If true, all diagonalizations will be done in sequence (not concurrently, even if there are enough CPUs to create the requested groups). This is needed to have the eigenvectors written to disk, which does not happen with concurrent diagonalizations.
Definition at line 99 of file Parallelization_module.F90.
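As a rough sketch of the splitting rule described above (not the actual implementation), the group index of each world rank can be computed and passed as the colour to MPI_Comm_split; all variable names are hypothetical.

    use mpi

    integer :: nprocs, rank, ngroup, base, extra, cutoff, igroup
    integer :: group_comm, ierror

    ngroup = 4   ! hypothetical requested number of groups

    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierror)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)

    ! (the real routine instead makes every group equal to the world
    ! communicator when ngroup > nprocs; that case is omitted here)
    ngroup = min(ngroup, nprocs)

    base   = nprocs / ngroup       ! smallest group size
    extra  = mod(nprocs, ngroup)   ! leading groups holding base+1 processes
    cutoff = extra * (base + 1)    ! world ranks below this go to larger groups

    if (rank < cutoff) then
        igroup = rank / (base + 1)
    else
        igroup = extra + (rank - cutoff) / base
    end if

    ! processes with the same colour (igroup) end up in the same group
    call MPI_Comm_split(MPI_COMM_WORLD, igroup, rank, group_comm, ierror)

For example, with nprocs = 10 and ngroup = 4 this yields group sizes 3, 3, 2, 2: the leading mod(10,4) = 2 groups carry one extra process.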
subroutine parallelization_module::square (integer, intent(in) n, integer(blasint), intent(out) a, integer(blasint), intent(out) b)
Given the integer area of a box, calculate its edges.
Return positive a and b such that their product is exactly equal to n and the difference between a and b is as small as possible. On return, a is always less than or equal to b.
Definition at line 310 of file Parallelization_module.F90.
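A straightforward way to realize this contract (a sketch, not necessarily the library's code, and using plain integers instead of integer(blasint)) is to search downward from the integer square root for the largest divisor of n:

    subroutine square_sketch (n, a, b)
        implicit none
        integer, intent(in)  :: n
        integer, intent(out) :: a, b

        ! the divisor of n closest to sqrt(n) from below gives the most
        ! square-like box; e.g. n = 12 yields a = 3, b = 4
        a = int(sqrt(real(n, 8)))
        do while (mod(n, a) /= 0)
            a = a - 1
        end do
        b = n / a
    end subroutine square_sketch

The loop always terminates because a = 1 divides every n, and starting at the square root guarantees a <= b with b - a minimal.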
subroutine parallelization_module::summarize (class(processgrid), intent(in) this)
Write the current grid layout to stdout.
Definition at line 227 of file Parallelization_module.F90.
integer function parallelization_module::which_group_is_work (class(processgrid), intent(inout) this, integer, intent(in) i)
Find out which group a work-item will be processed by.
The work-item index is expected to be greater than or equal to 1. Work-items (indices) larger than the number of MPI groups wrap around. Groups are numbered from 0.
Definition at line 279 of file Parallelization_module.F90.
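Under the same assumptions as in is_my_group_work above (with "ngroup" an assumed member name), the mapping reduces to a single modulo:

    ! sketch: 1-based work-items wrap cyclically onto 0-based groups,
    ! e.g. with 3 groups, items 1..5 map to groups 0, 1, 2, 0, 1
    igroup = mod(i - 1, this % ngroup)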
type(processgrid) parallelization_module::process_grid
Definition at line 80 of file Parallelization_module.F90.