YetAnotherCoupler  2.4.1
The C interface (yac_interface.h)

Initialisation

Definition

End of definition and search

Data exchange

Finalisation

Query routines

Auxiliary routines

The Initialisation Phase

The coupler is initialised by calling yac_cinit with the names of the XML coupling configuration file and of the corresponding XSD schema file:

const char * xml_filename = "coupling_aquaplanet.xml";
const char * xsd_filename = "coupling.xsd";
yac_cinit ( xml_filename, xsd_filename );
void yac_cinit(const char *xml_filename, const char *schema_filename)

Alternatively, yac_cinit_comm additionally takes the MPI communicator that YAC shall use:

const char * xml_filename = "coupling_aquaplanet.xml";
const char * xsd_filename = "coupling.xsd";
MPI_Comm world_comm = MPI_COMM_WORLD;
yac_cinit_comm ( xml_filename, xsd_filename, world_comm );
void yac_cinit_comm(const char *xml_filename, const char *schema_filename, MPI_Comm comm)

After the initialisation we have the possibility to overwrite (or reinitialise) the job start and end date by calling yac_cdef_datetime. This reinitialisation has to happen before any call to yac_cdef_field.

Otherwise it will not have any impact!

Both arguments are optional (can be NULL):

const char * start_datetime = "01-01-1850T00:00:00";
const char * end_datetime = "31-12-1850T00:00:00";
yac_cdef_datetime ( start_datetime, end_datetime );
void yac_cdef_datetime(const char *start_datetime, const char *end_datetime)
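
Since both arguments can be NULL, a single date can also be redefined on its own. A minimal sketch that keeps the start date from the XML configuration and only overwrites the end date:

yac_cdef_datetime ( NULL, end_datetime );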

Component Definition

int comp_id;
char * comp_name = "ocean";
yac_cdef_comp ( comp_name, &comp_id );
void yac_cdef_comp(char const *comp_name, int *comp_id)

Several components can also be defined in one call:

int const nbr_comps = ... ;
char const * comp_names[nbr_comps] = {
"ocean",
"carbon",
...};
int comp_ids[nbr_comps];
yac_cdef_comps ( comp_names, nbr_comps, comp_ids );
void yac_cdef_comps(char const **comp_names, int num_comps, int *comp_ids)

The components are identified by a unique component name. The returned comp_id is used in subsequent calls for further definitions for this particular component. It is possible to define more than one component per process.

Once all components have been defined, a local communicator can be retrieved for each component, identified by its local comp ID.

#include <mpi.h>
int comp_id;
MPI_Comm local_communicator;
yac_cget_localcomm ( &local_communicator, comp_id );
void yac_cget_localcomm(MPI_Comm *comm, int comp_id)
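
If more than one component has been defined on the calling process (as in the yac_cdef_comps example above), a communicator can be retrieved for each of them. A minimal sketch, assuming the components "ocean" and "carbon" from above:

MPI_Comm ocean_comm, carbon_comm;
/* comp_ids as returned by yac_cdef_comps */
yac_cget_localcomm ( &ocean_comm, comp_ids[0] );
yac_cget_localcomm ( &carbon_comm, comp_ids[1] );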

Now we have the possibility to redirect the standard output and standard error streams:

int rank, size;
MPI_Comm_rank ( local_communicator, &rank );
MPI_Comm_size ( local_communicator, &size );
int lenstr = (int) strlen(comp_name);
yac_redirstdout ( comp_name, lenstr, 1, rank, size );
void yac_redirstdout(const char *filestem, int lenstr, int parallel, int my_pe, int npes)

Grid Definition

For each grid, geographical information about the elements (or cells) needs to be provided. The geographical positions of the vertices have to be provided in simple lists: x_vertices[n] and y_vertices[n] contain the longitude and latitude of vertex n. For each cell, connectivity information has to be provided, i.e. which vertices have to be taken to form a particular cell.

int m, n, k;
int const nbr_vertices = 20;
int const nbr_cells = 5;
int grid_id;
int nbr_vertices_per_cell[nbr_cells];
int cell_to_vertex[nbr_cells*nbr_vertices];
double x_vertices[nbr_vertices];
double y_vertices[nbr_vertices];
char * grid_name = "ocean_grid";
/* ... fill x_vertices, y_vertices and nbr_vertices_per_cell ... */
k = 0;
for ( n = 0; n < nbr_cells; n++ ) {
  for ( m = 0; m < nbr_vertices_per_cell[n]; m++ ) {
    cell_to_vertex[k++] = ... ;
  }
}
yac_cdef_grid ( grid_name,
                nbr_vertices,
                nbr_cells,
                nbr_vertices_per_cell,
                x_vertices,
                y_vertices,
                cell_to_vertex,
                &grid_id );

Point Definition

If data do not represent values of the complete cell, it is possible to define sets of points. Here we specify points at some location inside a cell (the location can be YAC_LOCATION_CELL, YAC_LOCATION_CORNER, or YAC_LOCATION_EDGE):

int nbr_points[2];
int location = YAC_LOCATION_CELL;
int point_id;
double * x_points;
double * y_points;
nbr_points[0] = 10;
nbr_points[1] = 20;
x_points = malloc ( nbr_points[0] * sizeof(*x_points) );
y_points = malloc ( nbr_points[1] * sizeof(*y_points) );
yac_cdef_points_reg2d ( grid_id,
                        nbr_points,
                        location,
                        x_points,
                        y_points,
                        &point_id );
void yac_cdef_points_reg2d(int const grid_id, int const *nbr_points, int const located, double const *x_points, double const *y_points, int *point_id)

or for points on unstructured grids

int const nbr_points = ... ;
x_points = malloc ( nbr_points * sizeof(*x_points) );
y_points = malloc ( nbr_points * sizeof(*y_points) );
yac_cdef_points_unstruct ( grid_id,
                           nbr_points,
                           location,
                           x_points,
                           y_points,
                           &point_id );
void yac_cdef_points_unstruct(int const grid_id, int const nbr_points, int const located, double const *x_points, double const *y_points, int *point_id)

Next, the decomposition information is set. In this example, global_index contains the global index of each local cell. In addition, a core mask is provided: cells that belong to the calling process are flagged with 1, while halo cells are flagged with 0.

int i;
yac_int global_index[nbr_cells];
int is_core[nbr_cells];
for ( i = 0; i < nbr_cells; ++i ) {
  global_index[i] = ... ;
  is_core[i] = ... ;
}
yac_cset_global_index ( global_index,
                        YAC_LOCATION_CELL,
                        grid_id );
void yac_cset_global_index(yac_int const *global_index, int location, int grid_id)
yac_cset_core_mask ( is_core,
                     YAC_LOCATION_CELL,
                     grid_id );
void yac_cset_core_mask(int const *is_core, int location, int grid_id)

Definition of Masks

The mask must have the same size as the number of points or cells and flags each point or cell as either valid ( 1 ) or invalid ( 0 ), with the implication that only valid points or cells are considered.

int const nbr_cells = ... ;
int is_valid[nbr_cells];
/* ... fill is_valid with 1 for valid and 0 for invalid cells ... */
yac_cset_mask ( is_valid,
                point_id );
void yac_cset_mask(int const *is_valid, int points_id)

Definition of Coupling Fields

int n;
int const nbr_fields = 5;
int field_ids[nbr_fields];
int point_ids[1];
int num_point_sets = 1;
char const * field_names[nbr_fields];
point_ids[0] = point_id;
field_names[0] = "sea_surface_temperature";
field_names[1] = "wind_speed";
field_names[2] = "water_flux_into_sea_water";
field_names[3] = "grid_eastward_wind";
field_names[4] = "grid_northward_wind";

Assuming that all five fields have to be defined, this looks like

for ( n = 0; n < nbr_fields; n++ )
  yac_cdef_field ( field_names[n],
                   comp_id,
                   point_ids,
                   num_point_sets,
                   &field_ids[n] );
void yac_cdef_field(char const *name, int const comp_id, int const *point_ids, int const num_pointsets, int *field_id)

To later obtain the total number of defined fields and their IDs at a different place, two functions are provided:

int nbr_defined_fields;
int * defined_field_ids;
yac_cget_nbr_fields ( &nbr_defined_fields );
defined_field_ids = malloc ( nbr_defined_fields * sizeof(*defined_field_ids) );
yac_cget_field_ids ( nbr_defined_fields, defined_field_ids );
...
free ( defined_field_ids );
void yac_cget_nbr_fields(int *nbr_fields)

End of Definition - Start of Search

Once all components, grids and fields are defined, the search is invoked by

int ierror;
yac_csearch ( &ierror );
void yac_csearch(int *ierror)

comp_ids contains the list of local component IDs returned by yac_cdef_comp, while field_ids contains the list of local field IDs returned by yac_cdef_field.

Upon return from yac_csearch, YAC can provide a communicator which encompasses the root processes of a pair of components ( see yac_cget_pair_rootcomm ).

Data Exchange

The functions for sending and receiving data are overloaded with respect to the data type of the send and receive fields. Thus, send_field and recv_field can be either single or double precision.

The sending is implemented using non-blocking MPI function calls. Thus, the routine returns even if the data have not yet been transferred to the receiver. The data are buffered internally and the user is free to reuse the send_field buffer for the next message.

int i, j, k;
int const nbr_hor_points = 512;
int const num_pointsets = 1;
int const collection_size = 4;
int field_id;
int info;
int ierror;
double *** send_field;
send_field = malloc(collection_size * sizeof(*send_field));
for (i = 0; i < collection_size; i++) {
  send_field[i] = malloc(num_pointsets * sizeof(**send_field));
  for (j = 0; j < num_pointsets; j++) {
    send_field[i][j] = malloc(nbr_hor_points * sizeof(***send_field));
  }
}
for (i = 0; i < collection_size; i++)
  for (j = 0; j < num_pointsets; j++)
    for (k = 0; k < nbr_hor_points; k++)
      send_field[i][j][k] = ... ;
[...]
yac_cput ( field_id,
           collection_size,
           send_field,
           &info,
           &ierror );

The receiving is implemented using blocking MPI function calls. Thus, yac_cget will only return once the data has been received.

int const nbr_hor_points = 1024;
int const collection_size = 3;
int field_id;
int info;
int ierror;
double recv_field[collection_size][nbr_hor_points];
yac_cget_ ( field_id,
            collection_size,
            &recv_field[0][0],
            &info,
            &ierror );
void yac_cget_(int const field_id, int const collection_size, double *recv_field, int *info, int *ierr)

The Ending Phase

The coupling has to be terminated by calling yac_cfinalize.

void yac_cfinalize()

In case MPI_Init has been called by yac_cinit, MPI_Finalize will be called by yac_cfinalize. If the user has called MPI_Init themselves (before calling yac_cinit), they also have to call MPI_Finalize after the call to yac_cfinalize.
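
A minimal sketch of the latter case, with the application owning the MPI environment and reusing the configuration file names from the initialisation example:

#include <mpi.h>
#include "yac_interface.h"

int main ( int argc, char ** argv ) {
  /* MPI is initialised by the application, not by yac_cinit */
  MPI_Init ( &argc, &argv );
  yac_cinit ( "coupling_aquaplanet.xml", "coupling.xsd" );
  /* ... definition and data exchange phases ... */
  yac_cfinalize ();
  /* since the application called MPI_Init, it also has to call MPI_Finalize */
  MPI_Finalize ();
  return 0;
}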

Restarting YAC

It is possible to restart YAC. To do that the user has to call yac_ccleanup in the Ending Phase instead of yac_cfinalize.

void yac_ccleanup()

After yac_ccleanup has been called, the user can restart YAC by going through the Initialisation, Definition, and Data Exchange Phase as before. It is possible to restart YAC using a different XML configuration file.

In case the user initialised yaxt themselves, they have to finalise it after yac_ccleanup in order to be able to call yac_cinit again.

In the final Ending Phase yac_cfinalize has to be called instead of yac_ccleanup.
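
A minimal sketch of such a restart sequence (the two configuration file names are only illustrative):

/* first coupling phase */
yac_cinit ( "coupling_phase1.xml", "coupling.xsd" );
/* ... definition and data exchange phases ... */
yac_ccleanup ();

/* restart with a (possibly different) configuration */
yac_cinit ( "coupling_phase2.xml", "coupling.xsd" );
/* ... definition and data exchange phases ... */

/* final ending phase */
yac_cfinalize ();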