ScalES-PPM
Partition How-to

Overview

This document intends to show how to:

  1. describe the topology of the data structure to be decomposed,
  2. compute a partition with one of the available algorithms, and
  3. repartition already decomposed data.

The partitioning modules provide the following functionality:

  * uniform partitioning of n-dimensional grids
  * balanced hierarchical partitioning
  * graph partitioning via MeTiS and ParMeTiS
  * weight-based partitioning of unconnected sets
  * repartitioning of graphs and sets

Multiple data structures are available to represent a partition, addressing the different needs of the available algorithms.

Setting up for Partitioning

To decompose a data structure, one first needs to describe the topology of the data structure in question. ScalES-PPM provides several basic data types to make this step as simple as possible:

  * extent (module ppm_extents) describes a contiguous index range along one axis; an array of extents describes a rectilinear grid,
  * graph_csr (module ppm_graph_csr) stores connectivity in compressed sparse row form,
  * set_i4 (module ppm_set_partition_base) holds the elements of one part of an unconnected set,
  * partition_assignment (module ppm_set_partition_base) represents the result of a graph partitioning.
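
As a minimal sketch, a two-dimensional m $\times$ n grid is described by one extent per axis (this anticipates the uniform-partition example below):

USE ppm_extents, ONLY: extent
...
TYPE(extent) :: grid(2)
INTEGER :: m, n
...
! axis 1 covers indices 1..m, axis 2 covers indices 1..n
grid = (/ extent(1, m), extent(1, n) /)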

Computing a partition

Uniform partition of n-dimensional grids

Extending the example from above, an m $\times$ n grid can be partitioned into k $\times$ l even-sized parts by calling the uniform_partition function of the module ppm_uniform_partition:

USE mpi
USE ppm_extents, ONLY: extent
USE ppm_rectilinear, ONLY: lidx2rlcoord
USE ppm_uniform_partition, ONLY: uniform_partition
...
TYPE(extent) :: grid(2), part(2), deco(2)
INTEGER :: m, n, k, l, comm_rank, ierror
...
CALL mpi_comm_rank(mpi_comm_world, comm_rank, ierror)
! split the first axis into k parts and the second into l parts
deco = (/ extent(1, k), extent(1, l) /)
! the grid covers the index ranges 1..m and 1..n
grid = (/ extent(1, m), extent(1, n) /)
! map the linear rank to a position in the k x l decomposition and
! compute the index ranges of the corresponding part
part = uniform_partition(grid, deco, lidx2rlcoord(deco, comm_rank + 1))

In the above example, the linear MPI rank is mapped to a Cartesian coordinate with the help of the utility module ppm_rectilinear.

Because the axes are divided separately, the partition thus obtained forms a block partition.
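
The resulting extents can be used directly to iterate over the locally owned sub-grid. A minimal sketch, assuming the extent components are named first and size (see ppm_extents for the exact definition):

INTEGER :: i, j
...
! loop over the index ranges owned by this rank
DO j = part(2)%first, part(2)%first + part(2)%size - 1
  DO i = part(1)%first, part(1)%first + part(1)%size - 1
    ! ... work on grid cell (i, j) ...
  END DO
END DO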

Balanced hierarchical partition

Graph partition

The library provides convenience wrappers for both MeTiS and ParMeTiS. The following example code shows how to:

  1. build a graph from a rectilinear grid
  2. call the MeTiS wrapper

USE ppm_extents, ONLY: extent
USE ppm_graph_csr, ONLY: build_graph, graph_csr
USE ppm_set_partition_base, ONLY: partition_assignment
USE ppm_graph_partition_serial, ONLY: graph_partition_metis
...
TYPE(extent) :: grid(2)
INTEGER :: m, n, num_parts
TYPE(graph_csr) :: grid_graph
TYPE(partition_assignment) :: partition
INTEGER, ALLOCATABLE :: grid_pt_weight(:)
...
! describe the m x n grid and build its connectivity graph
grid = (/ extent(1, m), extent(1, n) /)
CALL build_graph(grid_graph, grid)
ALLOCATE(grid_pt_weight(m * n))
! compute weights per grid point
CALL graph_partition_metis(partition, grid_graph, num_parts, &
     vertex_weights=grid_pt_weight)

Set partition

If your data has no inherent connectivity, i.e. the problem is embarrassingly parallel, graphs and grids are unsuitable models for partitioning. In this case, partitioning by weight alone is usually the most sensible approach. The library provides a greedy method to partition such data sets.

USE ppm_set_partition_base, ONLY: set_i4
USE ppm_set_partition, ONLY: greedy_partitioning
...
INTEGER, PARAMETER :: set_size = 1000
TYPE(set_i4), ALLOCATABLE :: partition(:)
INTEGER :: weights(set_size)
...
! fill weights with a cost estimate per set element
CALL greedy_partitioning(partition, weights)
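
The parts can then be mapped to processes. A minimal sketch, assuming a zero-based MPI rank comm_rank obtained beforehand (not part of the original example) and the elem component of set_i4, as used in the repartitioning example below:

INTEGER :: i, elem_idx
...
! process the elements assigned to this rank's part
DO i = 1, SIZE(partition(comm_rank + 1)%elem)
  elem_idx = partition(comm_rank + 1)%elem(i)
  ! ... compute on set element elem_idx ...
END DO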

Repartition

Repartitioning for graphs

Repartitioning sets

For an already partitioned set, the library offers a routine based on swapping elements, so that memory reallocation can be avoided. The example below uses the multi-process variant, which has MPI collective call semantics.

USE mpi
USE ppm_set_repartition, ONLY: repartition_swap_mp
USE ppm_set_partition_base, ONLY: set_i4
USE ppm_std_type_kinds, ONLY: i4
...
INTEGER(i4), ALLOCATABLE :: weight(:)
TYPE(set_i4) :: part, new_part
...
! keep the old assignment in part and rebalance the copy in place
new_part = part
CALL repartition_swap_mp(new_part%elem, weight, mpi_comm_world)

Based on the part and new_part sets of the above example, the application can then reorganize its data decomposition. If a certain amount of imbalance among the partitions is tolerable, it can be beneficial to add an efficiency_threshold argument to the call to repartition_swap_mp.
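
How the element data is moved is up to the application. A minimal sketch of detecting newly received elements by comparing the old and new assignments (the communication itself is omitted):

INTEGER :: i
...
DO i = 1, SIZE(new_part%elem)
  IF (.NOT. ANY(part%elem == new_part%elem(i))) THEN
    ! new_part%elem(i) is newly assigned to this rank;
    ! fetch its data from the previous owner
  END IF
END DO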

The project on which this report is based was funded by the Bundesministerium für Bildung und Forschung (German Federal Ministry of Education and Research) under grant number 01IH08004E. Responsibility for the content of this publication lies with the author.