MPI-SCATCI 2.0
An MPI version of SCATCI
Parallelization_module::ProcessGrid Type Reference

MPI process grid layout. More...

Public Member Functions

procedure, pass, public setup (this, ngroup, sequential)
 Initialize the process grid.
procedure, pass, public is_my_group_work (this, i)
 Check whether this work-item is to be processed by this process' group.
procedure, pass, public which_group_is_work (this, i)
 Find out which group a given work-item will be processed by.
procedure, pass, public group_master_world_rank (this, igroup)
 Find out world rank of the master process of a given MPI group.
procedure, pass, public summarize (this)
 Write current grid layout to stdout.

Public Attributes

integer(blasint) wcntxt
 BLACS context containing all MPI processes in the world communicator.
integer(blasint) wprows
 MPI world communicator grid row count.
integer(blasint) wpcols
 MPI world communicator grid column count.
integer(blasint) mywrow
 row position of this process within the MPI world communicator
integer(blasint) mywcol
 column position of this process within the MPI world communicator
integer igroup
 zero-based index of the MPI group this process belongs to
integer ngroup
 total number of MPI groups partitioning the world communicator
integer gprocs
 number of processes in the current MPI group
integer grank
 rank of this process within the MPI group
integer lprocs
 number of processes of the current MPI group localized on a single node
integer lrank
 rank of this process within the local MPI group
integer(blasint) gcntxt
 BLACS context containing all MPI processes in the MPI group communicator.
integer(blasint) gprows
 MPI group communicator grid row count.
integer(blasint) gpcols
 MPI group communicator grid column count.
integer(blasint) mygrow
 row position of this process within the MPI group communicator
integer(blasint) mygcol
 column position of this process within the MPI group communicator
integer(mpiint) gcomm
 MPI group communicator.
integer(mpiint) lcomm
 subset of the MPI group communicator localized on a single node
integer, dimension(:), allocatable groupnprocs
 Number of processes per MPI group.
logical sequential
 Whether the diagonalizations will be done sequentially (one after another) or not.

Static Private Member Functions

procedure, nopass, private square (n, a, b)
 Given integer area of a box, calculate its edges.

Detailed Description

MPI process grid layout.

Authors
J Benda
Date
2019

Definition at line 43 of file Parallelization_module.F90.

Member Function/Subroutine Documentation

◆ group_master_world_rank()

procedure, pass, public Parallelization_module::ProcessGrid::group_master_world_rank ( class(processgrid), intent(inout) this,
integer, intent(in) igroup )

Find out world rank of the master process of a given MPI group.

Authors
J Benda
Date
2019

Definition at line 74 of file Parallelization_module.F90.
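The lookup rule is not spelled out in this page, but assuming processes are assigned to groups in contiguous world-rank order (consistent with how setup distributes them), the master of a group (group rank 0) immediately follows all processes of the preceding groups. A minimal Python sketch of that assumed logic, with groupnprocs modelled as a plain list:

```python
def group_master_world_rank(igroup: int, groupnprocs: list) -> int:
    # Under contiguous assignment, the master (group rank 0) of group
    # `igroup` sits right after all processes of groups 0 .. igroup-1.
    return sum(groupnprocs[:igroup])

# e.g. with group sizes [4, 3, 3], the masters have world ranks 0, 4, 7
```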

◆ is_my_group_work()

procedure, pass, public Parallelization_module::ProcessGrid::is_my_group_work ( class(processgrid), intent(inout) this,
integer, intent(in) i )

Check whether this work-item is to be processed by this process' group.

Authors
J Benda
Date
2019

The work-item index is expected to be greater than or equal to 1. Work-items (indices) larger than the number of MPI groups wrap around.

Definition at line 72 of file Parallelization_module.F90.
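The wrap-around rule above can be modelled in a few lines of Python (a sketch of the logic, not the actual Fortran code; igroup and ngroup stand for the type's attributes of the same names):

```python
def is_my_group_work(i: int, igroup: int, ngroup: int) -> bool:
    # 1-based work-item indices map round-robin onto 0-based group
    # indices, so indices larger than ngroup wrap around.
    return (i - 1) % ngroup == igroup
```

With ngroup = 4, group 0 processes work-items 1, 5, 9, ..., group 1 processes 2, 6, 10, ..., and so on.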

◆ setup()

procedure, pass, public Parallelization_module::ProcessGrid::setup ( class(processgrid), intent(inout) this,
integer, intent(in) ngroup,
logical, intent(in) sequential )

Initialize the process grid.

Authors
J Benda
Date
2019

Splits the world communicator into the given number of MPI groups. Sets up all global group communicators and local group communicators (subsets on one node). If there are more groups than processes, then all groups are equal to the MPI world. If the number of processes is not divisible by the number of groups, the leading mod(nprocs,ngroup) groups will contain one process more than the rest of the groups.

Parameters
    this        Process grid to set up.
    ngroup      Number of MPI groups to create.
    sequential  If "true", all diagonalizations will be done in sequence (not concurrently, even if there are enough CPUs to create the requested groups). This is needed to have the eigenvectors written to disk, which does not happen with concurrent diagonalizations.

Definition at line 71 of file Parallelization_module.F90.
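The group-size distribution described above can be sketched in Python (a model of the documented rule, not the actual Fortran implementation):

```python
def group_sizes(nprocs: int, ngroup: int) -> list:
    """Distribute nprocs processes among ngroup groups as described:
    the leading nprocs % ngroup groups receive one extra process."""
    if ngroup > nprocs:
        # More groups than processes: every group equals the MPI world.
        return [nprocs] * ngroup
    base, extra = divmod(nprocs, ngroup)
    return [base + 1 if g < extra else base for g in range(ngroup)]

# e.g. 10 processes in 3 groups -> sizes [4, 3, 3]
```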

◆ square()

procedure, nopass, private Parallelization_module::ProcessGrid::square ( integer, intent(in) n,
integer(blasint), intent(out) a,
integer(blasint), intent(out) b )
staticprivate

Given integer area of a box, calculate its edges.

Authors
A Al-Refaie, J Benda
Date
2017 - 2019

Return positive a and b such that their product is exactly equal to n, and the difference between a and b is as small as possible. On return a is always less than or equal to b.

Definition at line 76 of file Parallelization_module.F90.
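The factorization it performs (used to shape a near-square BLACS process grid) can be sketched in Python; this is an illustration of the stated contract, not the Fortran source:

```python
import math

def square(n: int):
    # Scan downward from floor(sqrt(n)) to the largest divisor a of n
    # with a <= sqrt(n); then b = n // a satisfies a * b == n, a <= b,
    # and |b - a| is minimal over all exact factorizations of n.
    a = math.isqrt(n)
    while n % a != 0:
        a -= 1
    return a, n // a

# e.g. square(12) -> (3, 4); a prime process count degenerates to (1, n)
```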

◆ summarize()

procedure, pass, public Parallelization_module::ProcessGrid::summarize ( class(processgrid), intent(in) this)

Write current grid layout to stdout.

Authors
J Benda
Date
2019

Definition at line 75 of file Parallelization_module.F90.

◆ which_group_is_work()

procedure, pass, public Parallelization_module::ProcessGrid::which_group_is_work ( class(processgrid), intent(inout) this,
integer, intent(in) i )

Find out which group a given work-item will be processed by.

Authors
J Benda
Date
2019

The work-item index is expected to be greater than or equal to 1. Work-items (indices) larger than the number of MPI groups wrap around. Groups are numbered from 0.

Definition at line 73 of file Parallelization_module.F90.
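The mapping described above (1-based work-item indices, 0-based group numbers, wrap-around beyond ngroup) can be modelled as:

```python
def which_group_is_work(i: int, ngroup: int) -> int:
    # Work-item indices start at 1; groups are numbered from 0.
    # Indices larger than the number of groups wrap around.
    return (i - 1) % ngroup
```

This is a sketch of the documented behaviour in Python, not the Fortran implementation itself.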

Member Data Documentation

◆ gcntxt

integer(blasint) Parallelization_module::ProcessGrid::gcntxt

BLACS context containing all MPI processes in the MPI group communicator.

Definition at line 58 of file Parallelization_module.F90.

◆ gcomm

integer(mpiint) Parallelization_module::ProcessGrid::gcomm

MPI group communicator.

Definition at line 64 of file Parallelization_module.F90.

◆ gpcols

integer(blasint) Parallelization_module::ProcessGrid::gpcols

MPI group communicator grid column count.

Definition at line 60 of file Parallelization_module.F90.

◆ gprocs

integer Parallelization_module::ProcessGrid::gprocs

number of processes in the current MPI group

Definition at line 53 of file Parallelization_module.F90.

◆ gprows

integer(blasint) Parallelization_module::ProcessGrid::gprows

MPI group communicator grid row count.

Definition at line 59 of file Parallelization_module.F90.

◆ grank

integer Parallelization_module::ProcessGrid::grank

rank of this process within the MPI group

Definition at line 54 of file Parallelization_module.F90.

◆ groupnprocs

integer, dimension(:), allocatable Parallelization_module::ProcessGrid::groupnprocs

Number of processes per MPI group.

Definition at line 67 of file Parallelization_module.F90.

◆ igroup

integer Parallelization_module::ProcessGrid::igroup

zero-based index of the MPI group this process belongs to

Definition at line 50 of file Parallelization_module.F90.

◆ lcomm

integer(mpiint) Parallelization_module::ProcessGrid::lcomm

subset of the MPI group communicator localized on a single node

Definition at line 65 of file Parallelization_module.F90.

◆ lprocs

integer Parallelization_module::ProcessGrid::lprocs

number of processes of the current MPI group localized on a single node

Definition at line 55 of file Parallelization_module.F90.

◆ lrank

integer Parallelization_module::ProcessGrid::lrank

rank of this process within the local MPI group

Definition at line 56 of file Parallelization_module.F90.

◆ mygcol

integer(blasint) Parallelization_module::ProcessGrid::mygcol

column position of this process within the MPI group communicator

Definition at line 62 of file Parallelization_module.F90.

◆ mygrow

integer(blasint) Parallelization_module::ProcessGrid::mygrow

row position of this process within the MPI group communicator

Definition at line 61 of file Parallelization_module.F90.

◆ mywcol

integer(blasint) Parallelization_module::ProcessGrid::mywcol

column position of this process within the MPI world communicator

Definition at line 48 of file Parallelization_module.F90.

◆ mywrow

integer(blasint) Parallelization_module::ProcessGrid::mywrow

row position of this process within the MPI world communicator

Definition at line 47 of file Parallelization_module.F90.

◆ ngroup

integer Parallelization_module::ProcessGrid::ngroup

total number of MPI groups partitioning the world communicator

Definition at line 51 of file Parallelization_module.F90.

◆ sequential

logical Parallelization_module::ProcessGrid::sequential

Whether the diagonalizations will be done sequentially (one after another) or not.

Definition at line 69 of file Parallelization_module.F90.

◆ wcntxt

integer(blasint) Parallelization_module::ProcessGrid::wcntxt

BLACS context containing all MPI processes in the world communicator.

Definition at line 44 of file Parallelization_module.F90.

◆ wpcols

integer(blasint) Parallelization_module::ProcessGrid::wpcols

MPI world communicator grid column count.

Definition at line 46 of file Parallelization_module.F90.

◆ wprows

integer(blasint) Parallelization_module::ProcessGrid::wprows

MPI world communicator grid row count.

Definition at line 45 of file Parallelization_module.F90.


The documentation for this type was generated from the following file:
Parallelization_module.F90