MPI-SCATCI 2.0
An MPI version of SCATCI
MPI process grid layout. More...
Public Member Functions | |
| procedure, pass, public | setup (this, ngroup, sequential) |
| Initialize the process grid. | |
| procedure, pass, public | is_my_group_work (this, i) |
| Check whether this work-item is to be processed by this process' group. | |
| procedure, pass, public | which_group_is_work (this, i) |
| Find out which group the work-item will be processed by. | |
| procedure, pass, public | group_master_world_rank (this, igroup) |
| Find out world rank of the master process of a given MPI group. | |
| procedure, pass, public | summarize (this) |
| Write current grid layout to stdout. | |
Public Attributes | |
| integer(blasint) | wcntxt |
| BLACS context containing all MPI processes in the world communicator. | |
| integer(blasint) | wprows |
| MPI world communicator grid row count. | |
| integer(blasint) | wpcols |
| MPI world communicator grid column count. | |
| integer(blasint) | mywrow |
| row position of this process within the MPI world communicator | |
| integer(blasint) | mywcol |
| column position of this process within the MPI world communicator | |
| integer | igroup |
| zero-based index of the MPI group this process belongs to | |
| integer | ngroup |
| total number of MPI groups partitioning the world communicator | |
| integer | gprocs |
| number of processes in the current MPI group | |
| integer | grank |
| rank of this process within the MPI group | |
| integer | lprocs |
| number of processes of the current MPI group localized on a single node | |
| integer | lrank |
| rank of this process within the local MPI group | |
| integer(blasint) | gcntxt |
| BLACS context containing all MPI processes in the MPI group communicator. | |
| integer(blasint) | gprows |
| MPI group communicator grid row count. | |
| integer(blasint) | gpcols |
| MPI group communicator grid column count. | |
| integer(blasint) | mygrow |
| row position of this process within the MPI group communicator | |
| integer(blasint) | mygcol |
| column position of this process within the MPI group communicator | |
| integer(mpiint) | gcomm |
| MPI group communicator. | |
| integer(mpiint) | lcomm |
| subset of the MPI group communicator localized on a single node | |
| integer, dimension(:), allocatable | groupnprocs |
| Number of processes per MPI group. | |
| logical | sequential |
| Whether the diagonalizations will be done sequentially (one after another) or not. | |
Static Private Member Functions | |
| procedure, nopass, private | square (n, a, b) |
| Given integer area of a box, calculate its edges. | |
procedure, pass, public Parallelization_module::ProcessGrid::group_master_world_rank (class(processgrid), intent(inout) this, integer, intent(in) igroup)
Find out world rank of the master process of a given MPI group.
Definition at line 74 of file Parallelization_module.F90.
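The lookup above reduces to a prefix sum over the group sizes, assuming (consistently with how setup() splits the world communicator) that groups occupy contiguous, ascending world ranks and the group master is group rank 0. A minimal Python sketch of that arithmetic (the routine itself is Fortran; the function name here mirrors the member but is illustrative):

```python
def group_master_world_rank(groupnprocs, igroup):
    # World rank of the master (group rank 0) of group igroup, assuming
    # groups cover contiguous, ascending world ranks: the master of group
    # g sits immediately after all processes of groups 0 .. g-1.
    return sum(groupnprocs[:igroup])
```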
procedure, pass, public Parallelization_module::ProcessGrid::is_my_group_work (class(processgrid), intent(inout) this, integer, intent(in) i)
Check whether this work-item is to be processed by this process' group.
The work-item index is expected to be greater than or equal to 1. Work-items (indices) larger than the number of MPI groups wrap around.
Definition at line 72 of file Parallelization_module.F90.
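The round-robin wrap-around described above amounts to a modulo test against this process's group index. A Python sketch of the logic (illustrative only; the member routine is Fortran and takes these values from the ProcessGrid object):

```python
def is_my_group_work(i, igroup, ngroup):
    # Work-item i (1-based) is handled by group mod(i-1, ngroup), so
    # indices past the group count wrap around round-robin.
    return (i - 1) % ngroup == igroup
```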
procedure, pass, public Parallelization_module::ProcessGrid::setup (class(processgrid), intent(inout) this, integer, intent(in) ngroup, logical, intent(in) sequential)
Initialize the process grid.
Splits the world communicator into the given number of MPI groups. Sets up all global group communicators and local group communicators (the subset on one node). If there are more groups than processes, all groups are equal to the MPI world. If the number of processes is not divisible by the number of groups, the leading mod(nprocs,ngroup) groups will contain one process more than the remaining groups.
| this | Process grid to set up. |
| ngroup | Number of MPI groups to create. |
| sequential | If true, all diagonalizations will be done in sequence (not concurrently, even if there are enough CPUs to create the requested groups). This is needed to have the eigenvectors written to disk, which does not happen with concurrent diagonalizations. |
Definition at line 71 of file Parallelization_module.F90.
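The group-size rule above (leading groups absorb the remainder) can be sketched in a few lines of Python; the function name is illustrative, not part of the module:

```python
def group_sizes(nprocs, ngroup):
    # Split nprocs processes into ngroup groups; the leading
    # mod(nprocs, ngroup) groups receive one extra process.
    base, extra = divmod(nprocs, ngroup)
    return [base + 1 if g < extra else base for g in range(ngroup)]
```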
procedure, nopass, private Parallelization_module::ProcessGrid::square (n, a, b) [static, private]
Given integer area of a box, calculate its edges.
Return positive a and b such that their product is exactly equal to n, and the difference between a and b is as small as possible. On return a is always less than or equal to b.
Definition at line 76 of file Parallelization_module.F90.
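The factorization described above can be found by scanning downward from the integer square root of n to the nearest divisor. A Python sketch under those semantics (a <= b, a * b == n, b - a minimal; the actual routine returns a and b through arguments):

```python
import math

def square(n):
    # Find a <= b with a * b == n and b - a minimal: step down from
    # floor(sqrt(n)) until a divides n, then pair it with n // a.
    a = math.isqrt(n)
    while n % a != 0:
        a -= 1
    return a, n // a
```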
procedure, pass, public Parallelization_module::ProcessGrid::summarize (class(processgrid), intent(in) this)
Write current grid layout to stdout.
Definition at line 75 of file Parallelization_module.F90.
procedure, pass, public Parallelization_module::ProcessGrid::which_group_is_work (class(processgrid), intent(inout) this, integer, intent(in) i)
Find out which group the work-item will be processed by.
The work-item index is expected to be greater than or equal to 1. Work-items (indices) larger than the number of MPI groups wrap around. Groups are numbered from 0.
Definition at line 73 of file Parallelization_module.F90.
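With 1-based work-items and zero-based groups, the mapping above is a single modulo operation; a Python sketch (illustrative, standing in for the Fortran member):

```python
def which_group_is_work(i, ngroup):
    # Zero-based index of the group that processes 1-based work-item i;
    # indices larger than ngroup wrap around to the first groups.
    return (i - 1) % ngroup
```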
| integer(blasint) Parallelization_module::ProcessGrid::gcntxt |
BLACS context containing all MPI processes in the MPI group communicator.
Definition at line 58 of file Parallelization_module.F90.
| integer(mpiint) Parallelization_module::ProcessGrid::gcomm |
MPI group communicator.
Definition at line 64 of file Parallelization_module.F90.
| integer(blasint) Parallelization_module::ProcessGrid::gpcols |
MPI group communicator grid column count.
Definition at line 60 of file Parallelization_module.F90.
| integer Parallelization_module::ProcessGrid::gprocs |
number of processes in the current MPI group
Definition at line 53 of file Parallelization_module.F90.
| integer(blasint) Parallelization_module::ProcessGrid::gprows |
MPI group communicator grid row count.
Definition at line 59 of file Parallelization_module.F90.
| integer Parallelization_module::ProcessGrid::grank |
rank of this process within the MPI group
Definition at line 54 of file Parallelization_module.F90.
| integer, dimension(:), allocatable Parallelization_module::ProcessGrid::groupnprocs |
Number of processes per MPI group.
Definition at line 67 of file Parallelization_module.F90.
| integer Parallelization_module::ProcessGrid::igroup |
zero-based index of the MPI group this process belongs to
Definition at line 50 of file Parallelization_module.F90.
| integer(mpiint) Parallelization_module::ProcessGrid::lcomm |
subset of the MPI group communicator localized on a single node
Definition at line 65 of file Parallelization_module.F90.
| integer Parallelization_module::ProcessGrid::lprocs |
number of processes of the current MPI group localized on a single node
Definition at line 55 of file Parallelization_module.F90.
| integer Parallelization_module::ProcessGrid::lrank |
rank of this process within the local MPI group
Definition at line 56 of file Parallelization_module.F90.
| integer(blasint) Parallelization_module::ProcessGrid::mygcol |
column position of this process within the MPI group communicator
Definition at line 62 of file Parallelization_module.F90.
| integer(blasint) Parallelization_module::ProcessGrid::mygrow |
row position of this process within the MPI group communicator
Definition at line 61 of file Parallelization_module.F90.
| integer(blasint) Parallelization_module::ProcessGrid::mywcol |
column position of this process within the MPI world communicator
Definition at line 48 of file Parallelization_module.F90.
| integer(blasint) Parallelization_module::ProcessGrid::mywrow |
row position of this process within the MPI world communicator
Definition at line 47 of file Parallelization_module.F90.
| integer Parallelization_module::ProcessGrid::ngroup |
total number of MPI groups partitioning the world communicator
Definition at line 51 of file Parallelization_module.F90.
| logical Parallelization_module::ProcessGrid::sequential |
Whether the diagonalizations will be done sequentially (one after another) or not.
Definition at line 69 of file Parallelization_module.F90.
| integer(blasint) Parallelization_module::ProcessGrid::wcntxt |
BLACS context containing all MPI processes in the world communicator.
Definition at line 44 of file Parallelization_module.F90.
| integer(blasint) Parallelization_module::ProcessGrid::wpcols |
MPI world communicator grid column count.
Definition at line 46 of file Parallelization_module.F90.
| integer(blasint) Parallelization_module::ProcessGrid::wprows |
MPI world communicator grid row count.
Definition at line 45 of file Parallelization_module.F90.