MPI-SCATCI  2.0
An MPI version of SCATCI
parallelization_module Module Reference

Distribution of processes into a grid. More...

Data Types

type  processgrid
 MPI process grid layout. More...
 

Functions/Subroutines

subroutine setup_process_grid (this, ngroup, sequential)
 Initialize the process grid. More...
 
subroutine summarize (this)
 Write current grid layout to stdout. More...
 
logical function is_my_group_work (this, i)
 Check whether this work-item is to be processed by this process' group. More...
 
integer function which_group_is_work (this, i)
 Find out which group a work-item will be processed by. More...
 
integer function group_master_world_rank (this, igroup)
 Find out world rank of the master process of a given MPI group. More...
 
subroutine square (n, a, b)
 Given the integer area of a box, calculate its edges. More...
 

Variables

type(processgrid) process_grid
 

Detailed Description

Distribution of processes into a grid.

Authors
J Benda
Date
2019

This module contains utility routines and types which aid with the "parallelization of parallelization", i.e., with concurrent distributed diagonalizations of Hamiltonians for multiple irreducible representations.

Function/Subroutine Documentation

◆ group_master_world_rank()

integer function parallelization_module::group_master_world_rank ( class(processgrid), intent(inout)  this,
integer, intent(in)  igroup 
)

Find out world rank of the master process of a given MPI group.

Authors
J Benda
Date
2019

Definition at line 293 of file Parallelization_module.F90.

◆ is_my_group_work()

logical function parallelization_module::is_my_group_work ( class(processgrid), intent(inout)  this,
integer, intent(in)  i 
)

Check whether this work-item is to be processed by this process' group.

Authors
J Benda
Date
2019

The work-item index is expected to be greater than or equal to 1. Work-items (indices) larger than the number of MPI groups wrap around.

Definition at line 262 of file Parallelization_module.F90.
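The wrap-around rule can be illustrated outside of Fortran. A minimal Python sketch, assuming the round-robin assignment mod(i-1, ngroup) implied by the description above (the exact expression in the Fortran source may differ):

```python
def is_my_group_work(my_group, ngroup, i):
    """Return True if the 1-based work-item i falls to group my_group.

    Work-items with indices larger than the number of groups wrap
    around, mirroring parallelization_module::is_my_group_work.
    """
    return (i - 1) % ngroup == my_group
```

With 3 groups, work-items 1 and 4 both land on group 0, item 2 on group 1, and so on.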

◆ setup_process_grid()

subroutine parallelization_module::setup_process_grid ( class(processgrid), intent(inout)  this,
integer, intent(in)  ngroup,
logical, intent(in)  sequential 
)

Initialize the process grid.

Authors
J Benda
Date
2019

Splits the world communicator into the given number of MPI groups. Sets up all global group communicators and local group communicators (the subset on one node). If there are more groups than processes, then all groups are equal to the MPI world. If the number of processes is not divisible by the number of groups, the leading mod(nprocs,ngroup) groups will contain one process more than the rest of the groups.

Parameters
this  Process grid to set up.
ngroup  Number of MPI groups to create.
sequential  If "true", all diagonalizations will be done in sequence (not concurrently, even if there are enough CPUs to create the requested groups). This is needed to have the eigenvectors written to disk, which does not happen with concurrent diagonalizations.

Definition at line 99 of file Parallelization_module.F90.
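The division of processes among groups described above is plain integer arithmetic and can be sketched in Python (group_sizes is a hypothetical helper for illustration, not part of the module):

```python
def group_sizes(nprocs, ngroup):
    """Distribute nprocs processes over ngroup MPI groups.

    If there are more groups than processes, every group spans the
    whole world; otherwise the leading mod(nprocs, ngroup) groups
    receive one extra process, as described above.
    """
    if ngroup > nprocs:
        return [nprocs] * ngroup
    base, extra = divmod(nprocs, ngroup)
    return [base + 1 if g < extra else base for g in range(ngroup)]
```

For example, 10 processes split into 4 groups gives group sizes 3, 3, 2, 2.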

◆ square()

subroutine parallelization_module::square ( integer, intent(in)  n,
integer(blasint), intent(out)  a,
integer(blasint), intent(out)  b 
)

Given the integer area of a box, calculate its edges.

Authors
A Al-Refaie, J Benda
Date
2017 - 2019

Returns positive a and b such that their product is exactly equal to n and the difference between a and b is as small as possible. On return, a is always less than or equal to b.

Definition at line 310 of file Parallelization_module.F90.
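The factorization described above amounts to finding the largest divisor of n that does not exceed its square root. A Python sketch of the same logic (the Fortran implementation may differ in detail):

```python
import math

def square(n):
    """Split a positive integer n into factors a <= b with a*b == n
    and b - a minimal, mirroring parallelization_module::square."""
    a = math.isqrt(n)          # start at floor(sqrt(n))
    while n % a != 0:          # walk down to the nearest divisor
        a -= 1
    return a, n // a
```

For instance, an area of 12 yields edges (3, 4); a perfect square such as 16 yields (4, 4); a prime such as 7 degenerates to (1, 7).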

◆ summarize()

subroutine parallelization_module::summarize ( class(processgrid), intent(in)  this)

Write current grid layout to stdout.

Authors
J Benda
Date
2019

Definition at line 227 of file Parallelization_module.F90.

◆ which_group_is_work()

integer function parallelization_module::which_group_is_work ( class(processgrid), intent(inout)  this,
integer, intent(in)  i 
)

Find out which group a work-item will be processed by.

Authors
J Benda
Date
2019

The work-item index is expected to be greater than or equal to 1. Work-items (indices) larger than the number of MPI groups wrap around. Groups are numbered from 0.

Definition at line 279 of file Parallelization_module.F90.
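Given 1-based work-items, wrap-around, and groups numbered from 0, the mapping described above reduces to a single modulo. A Python sketch, assuming this round-robin formula (the Fortran source may compute it differently):

```python
def which_group_is_work(ngroup, i):
    """Group (numbered from 0) that processes the 1-based work-item i;
    items beyond the number of groups wrap around."""
    return (i - 1) % ngroup
```

With 3 groups, work-items 1..7 map to groups 0, 1, 2, 0, 1, 2, 0.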

Variable Documentation

◆ process_grid

type(processgrid) parallelization_module::process_grid

Definition at line 80 of file Parallelization_module.F90.