YetAnotherCoupler 3.2.0_a
Initialising YAC

Overview

YAC provides three routines for its initialisation (only the C versions are listed here):

  • yac_cmpi_handshake
  • yac_cinit
  • yac_cinit_comm

And two dummy initialisation routines:

  • yac_cinit_dummy
  • yac_cinit_comm_dummy

Initialisation methods

MPI handshake

A detailed description of the MPI handshake algorithm can be found here.

This algorithm can be used to collectively generate the first set of communicators in a coupled run configuration, for example one communicator for each executable or for groups of executables.

The algorithm takes an array of group names as input and returns an array of communicators. Each of these communicators contains all processes that provided the same group name.
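As an illustration, the following sketch splits MPI_COMM_WORLD into two groups; the group names "ocean" and "yac" are placeholders chosen for this example:

char const * group_names[2] = {"ocean", "yac"};
MPI_Comm group_comms[2];
// collective over MPI_COMM_WORLD; every process passes its own list of group names
yac_cmpi_handshake(MPI_COMM_WORLD, 2, group_names, group_comms);
// group_comms[0] contains all processes that provided "ocean",
// group_comms[1] contains all processes that provided "yac"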

Default YAC initialisation

The default YAC initialisation routine yac_cinit first calls the MPI handshake on MPI_COMM_WORLD, providing "yac" as the group name. Afterwards, it calls yac_cinit_comm with the resulting communicator.

YAC initialisation with MPI communicator

The routine yac_cinit_comm takes an MPI communicator as input. All processes in this communicator have to call a YAC initialisation routine.

If any process in MPI_COMM_WORLD calls yac_cinit, the communicator passed to yac_cinit_comm has to be generated by the MPI handshake algorithm and the group name for this communicator has to be "yac". This is required in order to avoid a deadlock.
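As a sketch of this requirement, a process that uses yac_cinit_comm while other processes in MPI_COMM_WORLD call yac_cinit could generate the communicator as follows:

// take part in the same MPI handshake that yac_cinit performs internally,
// using the group name "yac" (otherwise the run would deadlock)
char const * group_name = "yac";
MPI_Comm yac_comm;
yac_cmpi_handshake(MPI_COMM_WORLD, 1, &group_name, &yac_comm);
// initialise YAC with the resulting communicator
yac_cinit_comm(yac_comm);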

Dummy initialisation

Like yac_cinit, the routine yac_cinit_dummy also first calls the MPI handshake algorithm. However, it does not provide the group name "yac". Therefore, these processes are not included in the YAC communicator and cannot call any other YAC routine, except for yac_cfinalize. The call to yac_cfinalize is only required if MPI was not initialised before the call to yac_cinit_dummy.

This routine is useful for processes that want to exclude themselves from the coupling by YAC.
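A minimal sketch for such a process, assuming that MPI has not been initialised beforehand (so the concluding call to yac_cfinalize is required):

// take part in the collective MPI handshake without joining the "yac" group
yac_cinit_dummy();
// work that does not involve YAC
...
// required here, because MPI was not initialised before yac_cinit_dummy
yac_cfinalize();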

Dummy initialisation with YAC communicator

If a process is part of a YAC communicator (generated by the MPI handshake algorithm using the group name "yac"), it can still exclude itself from using YAC by calling yac_cinit_comm_dummy and providing this communicator.
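A sketch for a process that has joined the "yac" group but excludes itself from the coupling:

// this process joined the "yac" group in the MPI handshake ...
char const * group_name = "yac";
MPI_Comm yac_comm;
yac_cmpi_handshake(MPI_COMM_WORLD, 1, &group_name, &yac_comm);
// ... but excludes itself from the coupling; the remaining processes in
// yac_comm call yac_cinit_comm(yac_comm) instead
yac_cinit_comm_dummy(yac_comm);
...
MPI_Comm_free(&yac_comm);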

Usage example

Coupled run configuration

The following is an example of a coupled run configuration:

(Diagram: the coupled run configuration, showing the executables and components described in the list below.)

This configuration contains six different executables:

  • model_a.x
    • each process is part of one component (A_a, A_b, or A_io)
    • A_a and A_b exchange data using YAC
    • A_a and A_io exchange data using the communication library yaxt
  • model_b.x
    • all processes belong to component B
    • B and A_a exchange data using YAC
    • B and A_b exchange data using YAC
  • model_c.x
    • all processes belong to component C
    • C and A_a exchange data using YAC
  • model_d.x
    • each process is part of one component (D or D_io)
    • D and C exchange data using a Coupler-library
    • D and D_io communicate using MPI point-to-point communication
  • model_e.x
    • all processes belong to component E
    • component E does not communicate with other components through MPI
  • io.x
    • all processes belong to component IO
    • IO and C exchange data using an IO-library

Implementation

The generation of the MPI communicator and the initialisation of all libraries can be implemented as follows.

model_a.x

The components of this executable require the following communicators:

  • a_comm
    • contains all processes of model_a.x
  • yac_comm
    • contains all processes using YAC
  • a_io_comm
    • contains all processes of components A_a and A_io
  • a_a_comm, a_b_comm, io_comm
    • contain all processes of component A_a, A_b, or A_io, respectively
enum A_role {
  A_a,
  A_b,
  A_io,
};
char const * group_names[2] = {"A", "yac"};
MPI_Comm group_comms[2];
yac_cmpi_handshake(MPI_COMM_WORLD, 2, group_names, group_comms);
MPI_Comm a_comm = group_comms[0];
MPI_Comm yac_comm = group_comms[1];
// determine role of each process in a_comm
enum A_role role = determine_role(a_comm);
switch (role) {
  case (A_a): {
    // generate a_io_comm and a_a_comm
    char const * group_names[2] = {"a_io", "A_a"};
    MPI_Comm group_comms[2];
    yac_cmpi_handshake(a_comm, 2, group_names, group_comms);
    MPI_Comm a_io_comm = group_comms[0];
    MPI_Comm a_a_comm = group_comms[1];
    // initialise YAC
    yac_cinit_comm(yac_comm);
    // run component "A_a"
    ...
    // cleanup
    MPI_Comm_free(&a_a_comm);
    MPI_Comm_free(&a_io_comm);
    break;
  }
  case (A_b): {
    // generate a_b_comm
    char const * group_name = "A_b";
    MPI_Comm a_b_comm;
    yac_cmpi_handshake(a_comm, 1, &group_name, &a_b_comm);
    // initialise YAC
    yac_cinit_comm(yac_comm);
    // run component "A_b"
    ...
    // cleanup
    MPI_Comm_free(&a_b_comm);
    break;
  }
  case (A_io): {
    // generate io_comm and a_io_comm
    char const * group_names[2] = {"io", "a_io"};
    MPI_Comm group_comms[2];
    yac_cmpi_handshake(a_comm, 2, group_names, group_comms);
    MPI_Comm io_comm = group_comms[0];
    MPI_Comm a_io_comm = group_comms[1];
    // take part in collective initialisation of YAC
    yac_cinit_comm_dummy(yac_comm);
    // run component "A_io"
    ...
    // cleanup
    MPI_Comm_free(&a_io_comm);
    MPI_Comm_free(&io_comm);
    break;
  }
}
// cleanup
MPI_Comm_free(&yac_comm);
MPI_Comm_free(&a_comm);

model_b.x

The component of this executable requires the following communicator:

  • b_comm
    • contains all processes of component B

(The communicator yac_comm is not required, because all communication with model_a.x is done through YAC.)

// initialise YAC
yac_cinit();
// define component B
int comp_id;
yac_cdef_comp("B", &comp_id);
// get b_comm
MPI_Comm b_comm;
yac_cget_comp_comm(comp_id, &b_comm);
// run component "B"
...
// cleanup
MPI_Comm_free(&b_comm);

model_c.x

The component of this executable requires the following communicators:

  • yac_comm
    • contains all processes using YAC
  • libio_comm
    • contains all processes using libio
  • libcouple_comm
    • contains all processes using libcouple
  • c_comm
    • contains all processes of component C
// generate all required communicators
char const * group_names[4] = {"yac", "libio", "libcouple", "C"};
MPI_Comm group_comms[4];
yac_cmpi_handshake(MPI_COMM_WORLD, 4, group_names, group_comms);
MPI_Comm yac_comm = group_comms[0];
MPI_Comm libio_comm = group_comms[1];
MPI_Comm libcouple_comm = group_comms[2];
MPI_Comm c_comm = group_comms[3];
// initialise YAC
yac_cinit_comm(yac_comm);
// initialise other libraries
...
// run component "C"
...
// cleanup
MPI_Comm_free(&c_comm);
MPI_Comm_free(&libcouple_comm);
MPI_Comm_free(&libio_comm);
MPI_Comm_free(&yac_comm);

model_d.x

The components of this executable require the following communicators:

  • d_io_comm
    • contains all processes of model_d.x
  • libcouple_comm
    • contains all processes using libcouple
  • d_comm and io_comm
    • contain all processes of component D or D_io, respectively
enum D_role {
  D,
  D_io,
};
char const * group_names[2] = {"libcouple", "d_io"};
MPI_Comm group_comms[2];
yac_cmpi_handshake(MPI_COMM_WORLD, 2, group_names, group_comms);
MPI_Comm libcouple_comm = group_comms[0];
MPI_Comm d_io_comm = group_comms[1];
// determine role of each process in d_io_comm
enum D_role role = determine_role(d_io_comm);
switch (role) {
  case (D): {
    // generate d_comm
    MPI_Comm d_comm;
    MPI_Comm_split(d_io_comm, 0, 0, &d_comm);
    // initialise libcouple
    ...
    // run component "D"
    ...
    // cleanup
    MPI_Comm_free(&d_comm);
    break;
  }
  case (D_io): {
    // generate io_comm
    MPI_Comm io_comm;
    MPI_Comm_split(d_io_comm, 1, 0, &io_comm);
    // initialise libcouple
    ...
    // run component "D_io"
    ...
    // cleanup
    MPI_Comm_free(&io_comm);
    break;
  }
}
// cleanup
MPI_Comm_free(&libcouple_comm);
MPI_Comm_free(&d_io_comm);

model_e.x

The component of this executable does not require any communicators.

// take part in collective initialisation of YAC
yac_cinit_dummy();

io.x

The component of this executable requires the following communicator:

  • libio_comm
    • contains all processes using libio
// generate required communicator
char const * group_name = "libio";
MPI_Comm libio_comm;
yac_cmpi_handshake(MPI_COMM_WORLD, 1, &group_name, &libio_comm);
// initialise libio using libio_comm
...
// run component "IO"
...
// cleanup
MPI_Comm_free(&libio_comm);

Alternatively, if libio supports the same handshake algorithm, no additional communicator has to be generated.

// initialise libio
...
// run component "IO"
...
// cleanup
...