YetAnotherCoupler 3.5.2
There are multiple methods to initialise YAC (more details: Initialising YAC). The basic method, which is collective for all processes in MPI_COMM_WORLD, is:
Python:
import yac
yac = yac.YAC()
Alternatively, it is possible to initialise YAC with a previously generated MPI communicator (created, for example, with the MPI handshake algorithm).
C:
char const * yac_group_name = "yac";
MPI_Comm yac_comm;
yac_cmpi_handshake (
  MPI_COMM_WORLD, 1, &yac_group_name, &yac_comm );
yac_cinit_comm ( yac_comm );
Fortran:
CHARACTER(len=YAC_MAX_CHARLEN) :: yac_group_name(1)
INTEGER :: yac_comm(1)
yac_group_name(1) = "yac"
CALL yac_fmpi_handshake( &
  mpi_comm_world, yac_group_name, yac_comm)
CALL yac_finit_comm ( yac_comm(1) )
After the initialisation, a coupling configuration file can be read in.
Fortran:
CHARACTER(LEN=YAC_MAX_CHARLEN) :: yaml_filename
...
yaml_filename = "coupling.yaml"
CALL yac_fread_config_yaml(yaml_filename)
Python:
yaml_filename = "coupling.yaml"
yac.read_config_yaml(yaml_filename)
If not defined in a coupling configuration file, the calendar has to be set. YAC requires it in order to interpret the start and end dates and the timesteps provided by the field definitions.
Fortran:
CALL yac_fdef_calendar ( YAC_PROLEPTIC_GREGORIAN )
Python:
yac.def_calendar(Calendar.PROLEPTIC_GREGORIAN)
The start and end dates need to be defined by at least one process in the YAC instance. This information is synchronised across all processes in the YAC instance by the synchronisation of definitions. It can be defined within a coupling configuration file or set through the interface.
C:
const char * start_datetime = "01-01-1850T00:00:00";
const char * end_datetime = "31-12-1850T00:00:00";
// Both arguments are optional (can be NULL)
yac_cdef_datetime ( start_datetime, end_datetime );
Fortran:
CHARACTER(LEN=YAC_MAX_CHARLEN) :: start_datetime
CHARACTER(LEN=YAC_MAX_CHARLEN) :: end_datetime
start_datetime = '01-01-1850T00:00:00'
end_datetime = '31-12-1850T00:00:00'
! Both arguments are optional
CALL yac_fdef_datetime ( start_datetime = start_datetime, &
                         end_datetime = end_datetime )
Python:
start_datetime = "01-01-1850T00:00:00"
end_datetime = "31-12-1850T00:00:00"
yac.def_datetime ( start_datetime, end_datetime )
A coupled run configuration may consist of multiple executables or programs, e.g. model_a.x and model_b.x. If the processes of a single executable have to register multiple components, these processes may require their own individual communicator, containing only the processes of their respective executable, in order to determine the component associated with each process. Initialising YAC contains more information on how to handle more complex setups such as the one described above.
Each process can be part of zero, one, or more components. The components are identified by a unique component name. The definition of the components associated with each process is a collective call for all processes in the communicator passed to the initialisation routine (or MPI_COMM_WORLD if none was provided) and is called once. For each defined component an ID is returned, which is used in subsequent calls to identify the respective component.
Fortran:
INTEGER :: comp_id
CHARACTER(LEN=YAC_MAX_CHARLEN) :: comp_name
comp_name = 'ocean'
CALL yac_fdef_comp ( comp_name, & ! [IN]
                     comp_id )    ! [OUT]
Python:
component = yac.def_comp ( "ocean" )
C:
int const nbr_comps = 2;
char const * comp_names[2] = { "ocean", "carbon" };
int comp_ids[2];
yac_cdef_comps ( comp_names, nbr_comps, comp_ids );
Fortran:
INTEGER :: comp_ids(2)
CHARACTER(LEN=YAC_MAX_CHARLEN) :: comp_names(2)
comp_names(1) = 'ocean'
comp_names(2) = 'carbon'
CALL yac_fdef_comps ( comp_names, & ! [IN]
                      2,          & ! [IN]
                      comp_ids )    ! [OUT]
Once all components have been defined, a component communicator can be retrieved for each component, identified by its local component ID. In Python, an mpi4py communicator can be obtained.
Fortran:
INTEGER :: comp_comm
CALL yac_fget_comp_comm ( comp_id,  & ! [IN]
                          comp_comm ) ! [OUT]
Python:
comp_communicator = component.comp_comm
In addition, YAC can provide a communicator that encompasses all processes of a provided list of components. Not all components in this list have to be locally defined. The generation of this communicator is collective for all processes that defined at least one component in this list. The list of component names has to be consistent across all involved processes.
C:
MPI_Comm oce_atm_comm;
char const * comp_names[2] = { "ocean", "atmosphere" };
yac_cget_comps_comm ( comp_names, 2, &oce_atm_comm );
Fortran:
INTEGER :: oce_atm_comm
CHARACTER(LEN=YAC_MAX_CHARLEN) :: comp_names(2)
comp_names(1) = 'ocean'
comp_names(2) = 'atmosphere'
CALL yac_fget_comps_comm ( comp_names,   & ! [IN]
                           2,            & ! [IN]
                           oce_atm_comm )  ! [OUT]
Python:
oce_atm_comm = yac.get_comps_comm(["ocean", "atmosphere"])
In some use cases it might be difficult to provide all component names at once for the collective component definition. In this case, it is possible to "pre-define" components. The component IDs for these "pre-defined" components are valid; however, the component query routines can only be called after the collective component definition has finished.
Fortran:
INTEGER :: comp_id
CHARACTER(LEN=YAC_MAX_CHARLEN) :: comp_name
comp_name = 'I/O'
CALL yac_fpredef_comp ( comp_name, & ! [IN]
                        comp_id )    ! [OUT]
Python:
component = yac.predef_comp ( "I/O" )
Depending on the grid definition routine being used, the local ordering of cells, vertices, and edges is given implicitly or explicitly by the user.
If not given explicitly (for example, by using yac_cdef_grid_unstruct_edge), the ordering of the edges of the grid is defined as follows:
Generate for each edge a tuple containing the local indices of the adjacent vertices (lowest index first). Sort this list by the vertex indices; this gives the local order of edges.
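The implicit edge ordering described above can be sketched in a few lines of Python (this is an illustration only, not part of the YAC API):

```python
# Illustration only (not YAC code): derive the implicit local edge order
# from edges given as pairs of local vertex indices.
def implicit_edge_order(edges):
    # Put the lower vertex index first for each edge, ...
    normalised = [tuple(sorted(edge)) for edge in edges]
    # ... then sort the edges by their vertex-index tuples.
    return sorted(normalised)

# Three edges in arbitrary order and orientation:
print(implicit_edge_order([(3, 1), (0, 2), (1, 0)]))
# [(0, 1), (0, 2), (1, 3)]
```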
If data do not represent values for the complete cell, it is possible to define sets of points. Here we specify points at some location (in radians) inside a cell (the location can be YAC_LOCATION_CELL, YAC_LOCATION_CORNER, or YAC_LOCATION_EDGE). As for the grid definition, there are multiple routines for defining points. For an unstructured grid, points at the center of each cell may be defined as follows:
C:
int cell_point_id;
int nbr_cells = 200;
double * x_points, * y_points;
x_points = malloc ( nbr_cells * sizeof(*x_points) );
y_points = malloc ( nbr_cells * sizeof(*y_points) );
...
yac_cdef_points_unstruct ( grid_id,
                           nbr_cells,
                           YAC_LOCATION_CELL,
                           x_points,
                           y_points,
                           &cell_point_id );
Fortran:
INTEGER, PARAMETER :: nbr_points = 5
INTEGER :: point_id
DOUBLE PRECISION :: x_points(nbr_points)
DOUBLE PRECISION :: y_points(nbr_points)
...
CALL yac_fdef_points ( grid_id,           & ! [IN]
                       nbr_points,        & ! [IN]
                       yac_location_cell, & ! [IN]
                       x_points,          & ! [IN]
                       y_points,          & ! [IN]
                       point_id )           ! [OUT]
Python:
nbr_cells = 200
x_points = [...]
y_points = [...]
...
cell_points = grid.def_points ( Location.CELL,
                                x_points,
                                y_points)
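Note that the point coordinates are expected in radians; coordinates available in degrees have to be converted first. A minimal Python sketch (the coordinate values are made up for illustration):

```python
import math

# Hypothetical cell-center coordinates in degrees:
x_points_deg = [0.0, 90.0, 180.0]
y_points_deg = [-45.0, 0.0, 45.0]

# Convert to radians before passing them to the point definition:
x_points = [math.radians(v) for v in x_points_deg]
y_points = [math.radians(v) for v in y_points_deg]
```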
The user can provide global IDs for all cells, vertices, and/or edges, if they are available. These have to be consistent across all processes. Otherwise, YAC will generate them. However, the generation of global IDs may take some time; therefore, it is recommended to provide them. Additionally, this makes the interpretation of weight files (if they are generated) easier.
Fortran:
INTEGER :: global_cell_ids(nbr_cells)
...
CALL yac_fset_global_index ( global_cell_ids,   & ! [IN]
                             YAC_LOCATION_CELL, & ! [IN]
                             grid_id )            ! [IN]
Python:
global_cell_ids = [...]
...
grid.set_global_index ( global_cell_ids,
                        Location.CELL)
The core mask allows the user to define cells, vertices, and/or edges that YAC is supposed to ignore, for example because they are halo points that do not contain valid data. These points will not be used as a source or destination in a put/get/exchange operation.
Fortran:
INTEGER :: is_core_cell(nbr_cells)
...
CALL yac_fset_core_mask ( is_core_cell,      & ! [IN]
                          YAC_LOCATION_CELL, & ! [IN]
                          grid_id )            ! [IN]
Python:
is_core_cell = [...]
...
grid.set_core_mask ( is_core_cell,
                     Location.CELL)
While core masks allow YAC to differentiate between the compute domain and halo/dummy points, YAC also allows the definition of additional masks that can be set for each field individually (see here). These masks can be used to specify, for example, that YAC should only consider coast or ocean cells.
C:
int coast_mask_id;
int * is_coast_cell;
...
yac_cdef_mask ( grid_id,
                nbr_cells,
                YAC_LOCATION_CELL,
                is_coast_cell,
                &coast_mask_id );
Fortran:
INTEGER :: coast_mask_id
INTEGER :: is_coast_cell(nbr_cells)
...
CALL yac_fdef_mask ( grid_id,           & ! [IN]
                     nbr_cells,         & ! [IN]
                     yac_location_cell, & ! [IN]
                     is_coast_cell,     & ! [IN]
                     coast_mask_id )      ! [OUT]
Python:
is_coast_cell = np.array( ... )
coast_mask = grid.def_mask ( Location.CELL,
                             is_coast_cell)
It is also possible to assign a default mask to a set of previously defined points. The is_valid array should again have the appropriate size.
C:
int * is_ocean_cell;
...
yac_cset_mask ( is_ocean_cell,
                cell_point_id );
Fortran:
LOGICAL :: is_ocean_cell (nbr_cells)
...
CALL yac_fset_mask ( is_ocean_cell, & ! [IN]
                     cell_point_id )  ! [IN]
Python:
is_ocean_cell = [...]
...
cell_points.set_mask ( is_ocean_cell )
A field usually gets assigned a mask when it is defined. However, masks for the source and target fields can also be defined for each couple, either through the user interface or through the configuration file. In these cases, the masks are referenced by their names, which can be set when defining them.
C:
int coast_mask_id;
int * is_coast_cell;
char const * mask_name = "ocean mask";
...
yac_cdef_mask_named ( grid_id,
                      nbr_cells,
                      YAC_LOCATION_CELL,
                      is_coast_cell,
                      mask_name,
                      &coast_mask_id );
Fortran:
INTEGER :: coast_mask_id
INTEGER :: is_coast_cell(nbr_cells)
CHARACTER(LEN=YAC_MAX_CHARLEN) :: mask_name
...
mask_name = 'ocean mask'
CALL yac_fdef_mask_named ( grid_id,           & ! [IN]
                           nbr_cells,         & ! [IN]
                           yac_location_cell, & ! [IN]
                           is_coast_cell,     & ! [IN]
                           mask_name,         & ! [IN]
                           coast_mask_id )      ! [OUT]
Python:
is_coast_cell = np.array( ... )
coast_mask = grid.def_mask ( Location.CELL,
                             is_coast_cell,
                             "ocean mask")
A field consists of one or more point sets and is identified by its name. YAC only supports 2D fields on the sphere. However, if multiple 2D fields have the same configuration, or the associated field has more than one level, these can be processed in a single step using the collection size. In case a field is configured to be coupled, the point sets have to match the interpolation stack. Most interpolation methods only support a single point set, and some interpolations are limited to certain point locations (for example, conservative interpolation only supports points with location YAC_LOCATION_CELL).
C:
int sst_field_id;
char const * field_name = "sea_surface_temperature";
int const num_point_sets = 1;
int point_ids[1] = {cell_point_id};
int collection_size = 1;
char const * timestep = "PT15M";
yac_cdef_field ( field_name,
                 comp_id,
                 point_ids,
                 num_point_sets,
                 collection_size,
                 timestep,
                 YAC_TIME_UNIT_ISO_FORMAT,
                 &sst_field_id );
Fortran:
INTEGER :: field_id
CHARACTER(LEN=YAC_MAX_CHARLEN) :: field_name
INTEGER, PARAMETER :: num_point_sets = 1
INTEGER :: point_ids(num_point_sets)
INTEGER, PARAMETER :: collection_size = 1
CHARACTER(LEN=YAC_MAX_CHARLEN) :: timestep
field_name = 'sea_surface_temperature'
timestep = 'PT15M'
point_ids(1) = cell_point_id
CALL yac_fdef_field ( field_name,               & ! [IN]
                      comp_id,                  & ! [IN]
                      point_ids,                & ! [IN]
                      num_point_sets,           & ! [IN]
                      collection_size,          & ! [IN]
                      timestep,                 & ! [IN]
                      yac_time_unit_iso_format, & ! [IN]
                      field_id )                  ! [OUT]
Python:
field_name = "sea_surface_temperature"
num_point_sets = 1
points = [cell_points]
collection_size = 1
timestep = "PT15M"
sst_field = Field.create ( field_name,
                           component,
                           points,
                           collection_size,
                           timestep,
                           TimeUnit.ISO_FORMAT)
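To illustrate the collection-size concept (this sketch is not YAC code): a field with collection_size = 3 exchanges three same-shaped 2D fields in one operation, which can be pictured as packing them into a single buffer:

```python
# Hypothetical illustration: three 2D fields defined on the same point set,
# exchanged together as one field with collection_size = 3.
field_a = [1.0, 2.0]   # values on two points
field_b = [3.0, 4.0]
field_c = [5.0, 6.0]
collection = [field_a, field_b, field_c]  # collection_size = 3
# One exchange carries all values of the collection at once:
buffer = [v for field in collection for v in field]
print(buffer)  # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```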
If the user wants to specify a mask for a field (in addition to the core mask), they have to either set a default mask for the respective points or pass the mask ID to the field definition. The location associated with the provided masks has to match that of the points, and they have to be based on the same grid. Before a field can be defined, a calendar has to be set.
C:
int sst_field_id;
char const * field_name = "sea_surface_temperature";
int const num_point_sets = 1;
int point_ids[1] = {cell_point_id};
int mask_ids[1] = {ocean_mask_id};
int collection_size = 1;
char const * timestep = "PT15M";
yac_cdef_field_mask ( field_name,
                      comp_id,
                      point_ids,
                      mask_ids,
                      num_point_sets,
                      collection_size,
                      timestep,
                      YAC_TIME_UNIT_ISO_FORMAT,
                      &sst_field_id );
Fortran:
INTEGER :: field_id
CHARACTER(LEN=YAC_MAX_CHARLEN) :: field_name
INTEGER, PARAMETER :: num_point_sets = 1
INTEGER :: point_ids(num_point_sets)
INTEGER :: mask_ids(num_point_sets)
INTEGER, PARAMETER :: collection_size = 1
CHARACTER(LEN=YAC_MAX_CHARLEN) :: timestep
field_name = 'sea_surface_temperature'
timestep = 'PT15M'
point_ids(1) = cell_point_id
mask_ids(1) = ocean_mask_id
CALL yac_fdef_field_mask ( field_name,               & ! [IN]
                           comp_id,                  & ! [IN]
                           point_ids,                & ! [IN]
                           mask_ids,                 & ! [IN]
                           num_point_sets,           & ! [IN]
                           collection_size,          & ! [IN]
                           timestep,                 & ! [IN]
                           yac_time_unit_iso_format, & ! [IN]
                           field_id )                  ! [OUT]
Once a field is defined, it can be enabled for dynamic fractional masking (see Dynamic fractional masking). When enabling dynamic fractional masking, a fallback value has to be provided for the source field. If, in an exchange operation, the mask for all source points used to interpolate a target point is zero, then this value will be assigned.
C:
char const * comp_name = "ocean";
char const * grid_name = "ocean_grid";
char const * field_name = "sea_surface_temperature";
double frac_mask_fallback_value = 0.0;
yac_cenable_field_frac_mask ( comp_name,
                              grid_name,
                              field_name,
                              frac_mask_fallback_value );
Fortran:
CHARACTER(LEN=YAC_MAX_CHARLEN) :: comp_name
CHARACTER(LEN=YAC_MAX_CHARLEN) :: grid_name
CHARACTER(LEN=YAC_MAX_CHARLEN) :: field_name
DOUBLE PRECISION :: frac_mask_fallback_value
comp_name = 'ocean'
grid_name = 'ocean_grid'
field_name = 'sea_surface_temperature'
frac_mask_fallback_value = 0.0
CALL yac_fenable_field_frac_mask ( &
  comp_name, & ! [IN]
  grid_name, & ! [IN]
  field_name, & ! [IN]
  frac_mask_fallback_value) ! [IN]
Python:
frac_mask_fallback_value = 0.0
yac.enable_field_frac_mask(
  comp_name, grid_name, field_name,
  frac_mask_fallback_value)
Once a component, grid, or field is defined, it is possible to attach metadata to it. In YAC, metadata comes in the form of a string. It is sufficient if at least one process defines the metadata for a component, grid, or field. If multiple processes define metadata, it has to be consistent across all of these processes. The metadata will be synchronised and made available to all other processes by the synchronisation of definitions and/or the end of the definitions.
C:
char const * comp_metadata = "some metadata";
yac_cdef_component_metadata(comp_name, comp_metadata);
Fortran:
CHARACTER (LEN=256) :: comp_metadata
...
comp_metadata = "some metadata"
CALL yac_fdef_component_metadata(comp_name, comp_metadata)
Python:
comp_metadata = "some metadata"
yac.def_component_metadata(comp_name, comp_metadata.encode())
C:
char const * grid_metadata = "some metadata";
yac_cdef_grid_metadata(grid_name, grid_metadata);
Fortran:
CHARACTER (LEN=256) :: grid_metadata
...
grid_metadata = "some metadata"
CALL yac_fdef_grid_metadata(grid_name, grid_metadata)
Python:
grid_metadata = "some metadata"
yac.def_grid_metadata(grid_name, grid_metadata.encode())
C:
char const * field_metadata = "some metadata";
yac_cdef_field_metadata(
  comp_name, grid_name, field_name, field_metadata);
Fortran:
CHARACTER (LEN=256) :: field_metadata
...
field_metadata = "some metadata"
CALL yac_fdef_field_metadata( &
  comp_name, grid_name, field_name, field_metadata)
Python:
field_metadata = "some metadata"
yac.def_field_metadata(
  comp_name, grid_name, field_name, field_metadata.encode())
The coupling between two fields can either be defined through a configuration file or through the interface. As for the definition of fields, the definition of couples also requires a previously defined calendar. To define a couple using the interface, the user has to define an interpolation stack.
C:
int interp_stack_id;
yac_cget_interp_stack_config( &interp_stack_id );
yac_cadd_interp_stack_config_average(
  interp_stack_id, YAC_AVG_ARITHMETIC, 1);
yac_cadd_interp_stack_config_fixed(
  interp_stack_id, -1.0 );
Fortran:
INTEGER :: interp_stack_id
CALL yac_fget_interp_stack_config( &
  interp_stack_id )
CALL yac_fadd_interp_stack_config_average( &
  interp_stack_id, yac_avg_arithmetic, 1)
CALL yac_fadd_interp_stack_config_fixed( &
  interp_stack_id, -1.0)
Python:
interp_stack = yac.InterpolationStack()
interp_stack.add_average(AverageReductionType.YAC_AVG, 1)
interp_stack.add_fixed( -1.0 )
Alternatively, the interpolation stack can be generated from a string:
C:
int interp_stack_nnn;
yac_cget_interp_stack_config_from_string_yaml(
  "- nnn:\n"
  " n: 1", &interp_stack_nnn);
Fortran:
INTEGER :: interp_stack_nnn
CALL yac_fget_interp_stack_config_from_string_yaml( &
  "- nnn:" // achar(10) // &
  " n: 1", interp_stack_nnn)
Python:
interp_stack_nnn = \
  yac.InterpolationStack.from_string_yaml("- nnn:\n n: 1")
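As a further sketch of the string form, a stack combining n-nearest-neighbour interpolation with a fixed fallback value might look like the following YAML (the exact option names, e.g. user_value, are assumptions here and should be checked against the YAC configuration-file documentation):

```yaml
- nnn:
    n: 4
- fixed:
    user_value: -1.0
```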
If an interpolation stack is not needed anymore, it can be freed.
C:
yac_cfree_interp_stack_config(interp_stack_id);
Fortran:
CALL yac_ffree_interp_stack_config(interp_stack_id)
Python:
del interp_stack
Once the interpolation stack is defined, it can be used in any number of couple definitions. It is sufficient to define a couple on a single process. It will be synchronised to all other processes at the synchronisation of definitions and/or the end of the definitions. However, if a couple (identified by the same source and target field) is defined on multiple processes and/or in a configuration file, the definitions must be consistent.
C:
char const * src_comp_name = "atmo";
char const * src_grid_name = "icon_atmos_grid";
char const * tgt_comp_name = "ocean";
char const * tgt_grid_name = "icon_ocean_grid";
char const * field_name = "sea_surface_temperature";
char const * coupling_timestep = "PT15M";
int time_unit = YAC_TIME_UNIT_ISO_FORMAT;
int time_reduction = YAC_REDUCTION_TIME_NONE;
int src_lag = 0;
int tgt_lag = 0;
yac_cdef_couple(
  src_comp_name, src_grid_name, field_name,
  tgt_comp_name, tgt_grid_name, field_name,
  coupling_timestep, time_unit, time_reduction,
  interp_stack_id, src_lag, tgt_lag);
Fortran:
CHARACTER(LEN=YAC_MAX_CHARLEN) :: src_comp_name
CHARACTER(LEN=YAC_MAX_CHARLEN) :: src_grid_name
CHARACTER(LEN=YAC_MAX_CHARLEN) :: tgt_comp_name
CHARACTER(LEN=YAC_MAX_CHARLEN) :: tgt_grid_name
CHARACTER(LEN=YAC_MAX_CHARLEN) :: field_name
INTEGER :: time_unit
CHARACTER(LEN=YAC_MAX_CHARLEN) :: timestep
INTEGER :: time_reduction
src_comp_name = 'atmo'
src_grid_name = 'icon_atmos_grid'
tgt_comp_name = 'ocean'
tgt_grid_name = 'icon_ocean_grid'
field_name = 'sea_surface_temperature'
time_unit = yac_time_unit_iso_format
timestep = 'PT15M'
time_reduction = yac_reduction_time_none
CALL yac_fdef_couple ( &
  src_comp_name, src_grid_name, field_name, &
  tgt_comp_name, tgt_grid_name, field_name, &
  time_unit, timestep, time_reduction, interp_stack_id)
Python:
src_comp_name = "atmo"
src_grid_name = "icon_atmos_grid"
tgt_comp_name = "ocean"
tgt_grid_name = "icon_ocean_grid"
field_name = "sea_surface_temperature"
coupling_timestep = "PT15M"
time_unit = TimeUnit.ISO_FORMAT
time_reduction = Reduction.INSTANT
yac.def_couple(
  src_comp_name, src_grid_name, field_name,
  tgt_comp_name, tgt_grid_name, field_name,
  coupling_timestep, time_unit, time_reduction,
  interp_stack)
YAC provides a set of additional parameters, such as a weight file name and a scaling factor, which can be set for each couple. While for Fortran and Python these are implemented as optional arguments, for C the extended coupling configuration has to be used:
C:
char const * src_comp_name = "atmo";
char const * src_grid_name = "icon_atmos_grid";
char const * tgt_comp_name = "ocean";
char const * tgt_grid_name = "icon_ocean_grid";
char const * field_name = "sea_surface_temperature";
char const * coupling_timestep = "PT15M";
int time_unit = YAC_TIME_UNIT_ISO_FORMAT;
int time_reduction = YAC_REDUCTION_TIME_NONE;
int src_lag = 0;
int tgt_lag = 0;
// additional coupling parameters
int ext_couple_config;
char const * weight_file_name = "weights.nc";
double celcius2kelvin = 273.15;
int mapping_on_source = 1;
char const * const src_mask_name = "ocean mask";
yac_cget_ext_couple_config(&ext_couple_config);
// activate writing of weight files and
// set weight file name
yac_cset_ext_couple_config_weight_file(
  ext_couple_config, weight_file_name);
// activate conversion from degrees Celsius to Kelvin
yac_cset_ext_couple_config_scale_summand(
  ext_couple_config, celcius2kelvin);
yac_cset_ext_couple_config_mapping_side(
  ext_couple_config, mapping_on_source);
yac_cset_ext_couple_config_src_mask_names(
  ext_couple_config, 1, &src_mask_name);
yac_cdef_couple_custom(
  src_comp_name, src_grid_name, field_name,
  tgt_comp_name, tgt_grid_name, field_name,
  coupling_timestep, time_unit, time_reduction,
  interp_stack_id, src_lag, tgt_lag,
  ext_couple_config);
yac_cfree_ext_couple_config(ext_couple_config);
Fortran:
CHARACTER(LEN=YAC_MAX_CHARLEN) :: src_comp_name
CHARACTER(LEN=YAC_MAX_CHARLEN) :: src_grid_name
CHARACTER(LEN=YAC_MAX_CHARLEN) :: tgt_comp_name
CHARACTER(LEN=YAC_MAX_CHARLEN) :: tgt_grid_name
CHARACTER(LEN=YAC_MAX_CHARLEN) :: field_name
INTEGER :: time_unit
CHARACTER(LEN=YAC_MAX_CHARLEN) :: timestep
INTEGER :: time_reduction
CHARACTER(LEN=YAC_MAX_CHARLEN) :: weight_file_name
DOUBLE PRECISION :: celcius2kelvin
INTEGER :: mapping_on_source
TYPE(yac_string) :: src_mask_name(1)
src_comp_name = 'atmo'
src_grid_name = 'icon_atmos_grid'
tgt_comp_name = 'ocean'
tgt_grid_name = 'icon_ocean_grid'
field_name = 'sea_surface_temperature'
time_unit = yac_time_unit_iso_format
timestep = 'PT15M'
time_reduction = yac_reduction_time_none
! additional coupling parameters
weight_file_name = 'weights.nc'
celcius2kelvin = 273.15
mapping_on_source = 1
src_mask_name(1)%string = 'ocean mask'
CALL yac_fdef_couple ( &
  src_comp_name, src_grid_name, field_name, &
  tgt_comp_name, tgt_grid_name, field_name, &
  time_unit, timestep, time_reduction, interp_stack_id, &
  weight_file = weight_file_name, &
  mapping_side = mapping_on_source, &
  scale_summand = celcius2kelvin, &
  src_mask_names = src_mask_name)
Python:
src_comp_name = "atmo"
src_grid_name = "icon_atmos_grid"
tgt_comp_name = "ocean"
tgt_grid_name = "icon_ocean_grid"
field_name = "sea_surface_temperature"
coupling_timestep = "PT15M"
time_unit = TimeUnit.ISO_FORMAT
time_reduction = Reduction.INSTANT
src_mask_name = ["ocean mask"]
# additional coupling parameters
weight_file_name = "weights.nc"
celcius2kelvin = 273.15
mapping_on_source = 1
yac.def_couple(
  src_comp_name, src_grid_name, field_name,
  tgt_comp_name, tgt_grid_name, field_name,
  coupling_timestep, time_unit, time_reduction,
  interp_stack,
  weight_file = weight_file_name,
  mapping_side = mapping_on_source,
  scale_summand = celcius2kelvin,
  src_masks_names = src_mask_name)
Once all components, grids, and fields are defined, the configuration can be synchronised explicitly between all processes. This is optional, can be done by only a subset of all tasks, and is done implicitly at the end of the definitions. Afterwards, the local configuration contains all definitions from all other processes that were made before their synchronisation of definitions or end of definitions (in case they did not explicitly synchronise their definitions).
Fortran:
CALL yac_fsync_def()
Python:
yac.sync_def()
After the synchronisation of definitions, processes have the option to query the definitions made by all processes. YAC provides a number of routines for this task.
int nbr_comps = yac_cget_nbr_comps();
char const ** comp_names =
malloc((size_t)nbr_comps * sizeof(*comp_names));
yac_cget_comp_names(nbr_comps, comp_names);
for (int comp_idx = 0; comp_idx < nbr_comps; ++comp_idx) {
char const * comp_metadata =
yac_cget_component_metadata(comp_names[comp_idx]);
if (comp_metadata)
printf("%s (%s)\n", comp_names[comp_idx], comp_metadata);
int nbr_grids = yac_cget_comp_nbr_grids(comp_names[comp_idx]);
char const ** grid_names =
malloc((size_t)nbr_grids * sizeof(*grid_names));
yac_cget_comp_grid_names(comp_names[comp_idx], nbr_grids, grid_names);
for (int grid_idx = 0; grid_idx < nbr_grids; ++grid_idx) {
char const * grid_metadata =
yac_cget_grid_metadata(grid_names[grid_idx]);
if (grid_metadata)
printf(
"%s::%s (%s)\n", comp_names[comp_idx], grid_names[grid_idx],
grid_metadata);
int nbr_fields =
yac_cget_nbr_fields(comp_names[comp_idx], grid_names[grid_idx]);
char const ** field_names =
malloc((size_t)nbr_fields * sizeof(*field_names));
comp_names[comp_idx], grid_names[grid_idx],
nbr_fields, field_names);
for (int field_idx = 0; field_idx < nbr_fields; ++field_idx) {
char const * field_metadata =
comp_names[comp_idx], grid_names[grid_idx],
field_names[field_idx]);
if (field_metadata)
printf(
"%s::%s::%s (%s)\n", comp_names[comp_idx],
grid_names[grid_idx], field_names[field_idx],
field_metadata);
char const * field_timestep =
comp_names[comp_idx], grid_names[grid_idx],
field_names[field_idx]);
int field_collection_size =
comp_names[comp_idx], grid_names[grid_idx],
field_names[field_idx]);
int field_role =
comp_names[comp_idx], grid_names[grid_idx],
field_names[field_idx]);
printf(
"%s::%s::%s timestep: %s "
"collection_size: %d role: src(%c) tgt(%c) "
"none(%c)\n",
comp_names[comp_idx], grid_names[grid_idx],
field_names[field_idx], field_timestep,
field_collection_size,
}
free(field_names);
}
free(grid_names);
}
free(comp_names);
int yac_cget_field_role(const char *comp_name, const char *grid_name, const char *field_name) Definition yac.c:3382 int yac_cget_nbr_fields(const char *comp_name, const char *grid_name) Definition yac.c:3121 const char * yac_cget_grid_metadata(const char *grid_name) Definition yac.c:1332 int yac_cget_field_collection_size(const char *comp_name, const char *grid_name, const char *field_name) Definition yac.c:3364 void yac_cget_comp_names(int nbr_comps, const char **comp_names) Definition yac.c:3146 const char * yac_cget_field_metadata(const char *comp_name, const char *grid_name, const char *field_name) Definition yac.c:1346 void yac_cget_comp_grid_names(const char *comp_name, int nbr_grids, const char **grid_names) Definition yac.c:3219 const char * yac_cget_component_metadata(const char *comp_name) Definition yac.c:1319 const char * yac_cget_field_timestep(const char *comp_name, const char *grid_name, const char *field_name) Definition yac.c:3328 void yac_cget_field_names(const char *comp_name, const char *grid_name, int nbr_fields, const char **field_names) Definition yac.c:3264 | INTEGER :: comp_idx, grid_idx, field_idx
TYPE(yac_string), ALLOCATABLE :: comp_names(:)
TYPE(yac_string), ALLOCATABLE :: grid_names(:)
TYPE(yac_string), ALLOCATABLE :: field_names(:)
CHARACTER(len=:), ALLOCATABLE :: field_timestep
INTEGER :: field_collection_size
INTEGER :: field_role
comp_names = yac_fget_comp_names()
DO comp_idx = 1, SIZE(comp_names)
  IF (yac_fcomponent_has_metadata( &
        comp_names(comp_idx)%string)) &
    WRITE (*, "(A,' (',A,')')") &
      comp_names(comp_idx)%string, &
      yac_fget_component_metadata( &
        comp_names(comp_idx)%string)
  grid_names = &
    yac_fget_comp_grid_names( &
      comp_names(comp_idx)%string)
  DO grid_idx = 1, SIZE(grid_names)
    IF (yac_fgrid_has_metadata( &
          grid_names(grid_idx)%string)) &
      WRITE (*, "(A,'::',A,' (',A,')')") &
        comp_names(comp_idx)%string, &
        grid_names(grid_idx)%string, &
        yac_fget_grid_metadata( &
          grid_names(grid_idx)%string)
    field_names = &
      yac_fget_field_names( &
        comp_names(comp_idx)%string, &
        grid_names(grid_idx)%string)
    DO field_idx = 1, SIZE(field_names)
      IF (yac_ffield_has_metadata( &
            comp_names(comp_idx)%string, &
            grid_names(grid_idx)%string, &
            field_names(field_idx)%string)) &
        WRITE (*, "(A,'::',A,'::',A,' (',A,')')") &
          comp_names(comp_idx)%string, &
          grid_names(grid_idx)%string, &
          field_names(field_idx)%string, &
          yac_fget_field_metadata( &
            comp_names(comp_idx)%string, &
            grid_names(grid_idx)%string, &
            field_names(field_idx)%string)
      field_timestep = &
        yac_fget_field_timestep( &
          comp_names(comp_idx)%string, &
          grid_names(grid_idx)%string, &
          field_names(field_idx)%string)
      field_collection_size = &
        yac_fget_field_collection_size( &
          comp_names(comp_idx)%string, &
          grid_names(grid_idx)%string, &
          field_names(field_idx)%string)
      field_role = &
        yac_fget_field_role( &
          comp_names(comp_idx)%string, &
          grid_names(grid_idx)%string, &
          field_names(field_idx)%string)
      WRITE(*,"(A,'::',A,'::',A,' timestep: ',A, &
             ' collection_size: ',I0, &
             ' role: src(',A,') tgt(',A, &
             ') none(',A,')')") &
        comp_names(comp_idx)%string, &
        grid_names(grid_idx)%string, &
        field_names(field_idx)%string, &
        field_timestep, field_collection_size, &
        MERGE("x", " ", field_role == YAC_EXCHANGE_TYPE_SOURCE), &
        MERGE("x", " ", field_role == YAC_EXCHANGE_TYPE_TARGET), &
        MERGE("x", " ", field_role == YAC_EXCHANGE_TYPE_NONE)
      DEALLOCATE(field_timestep)
      DEALLOCATE(field_names(field_idx)%string)
    END DO
    DEALLOCATE(field_names)
    DEALLOCATE(grid_names(grid_idx)%string)
  ENDDO
  DEALLOCATE(grid_names)
  DEALLOCATE(comp_names(comp_idx)%string)
ENDDO
DEALLOCATE(comp_names)
| for comp_name in yac.component_names:
comp_metadata = yac.get_component_metadata(comp_name)
if comp_metadata is not None:
print(f"{comp_name} ({comp_metadata})")
for grid_name in yac.get_comp_grid_names(comp_name):
grid_metadata = yac.get_grid_metadata(grid_name)
if grid_metadata is not None:
print(f"{comp_name}::{grid_name} ({grid_metadata})")
for field_name in yac.get_field_names(comp_name, grid_name):
field_metadata = \
yac.get_field_metadata(comp_name, grid_name, field_name)
if field_metadata is not None:
print(f"{comp_name}::{grid_name}::{field_name} " +
f"({field_metadata})")
field_timestep = \
yac.get_field_timestep(comp_name, grid_name, field_name)
field_collection_size = \
yac.get_field_collection_size(comp_name, grid_name, field_name)
print(f"{comp_name}::{grid_name}::{field_name} " +
f"timestep: {field_timestep} " +
f"collection_size: {field_collection_size}")
|
Even after the synchronisation of definitions, the processes can define additional grids, fields, and couples. However, these definitions will not be synchronised with the other processes until the end of definitions. These additional definitions can be used, for example, by output components that first want to query what the model components have defined. They can then use this information to decide which data should be sent to the output and define the respective couples accordingly. |
Once all processes have finished their definitions, they have to collectively end the definition phase. First, the definitions are synchronised between the processes and checked for consistency. Afterwards, all internal data required for the exchanges is generated. This includes the generation of communication matrices, the computation of interpolation weights, and the reading/writing of weight files (if applicable). | ||
yac_cenddef ( );
| CALL yac_fenddef ( )
| yac.enddef ( )
|
Since the coupling configuration of a run can be configured through various means (multiple configuration files and through the user interface by all participating processes), it may not always be obvious what the final configuration of the run is. YAC can write out the coupling configuration that is known by all processes after the component definition, after the Synchronisation of definitions, and after the End of definition. This has to be configured either through the user interface or has to be included in one of the configuration files (see YAML configuration file). | ||
yac_cset_config_output_file(
  "final_coupling_config.yaml",
  YAC_CONFIG_OUTPUT_FILETYPE_YAML,
  YAC_CONFIG_OUTPUT_SYNC_LOC_ENDDEF,
  1 /* include_definitions */ );
yac_cenddef();
| CALL yac_fset_config_output_file( &
"final_coupling_config.yaml", &
yac_config_output_filetype_yaml, &
yac_config_output_sync_loc_enddef)
CALL yac_fenddef( )
| yac.set_config_output_file(
"final_couple_config.yaml",
ConfigOutputFormat.YAML,
ConfigOutputSyncLoc.ENDDEF)
yac.enddef()
|
For debugging purposes it is possible to write grid data passed to YAC into a netCDF file. This feature can either be enabled via a configuration file (see YAML configuration file) or via the user interface. If it is activated, YAC will write out the grid data of the specified grid at the End of definitions. | ||
yac_cset_grid_output_file(
  "ocean_grid", "ocean_grid.nc");
yac_cenddef();
| CALL yac_fset_grid_output_file( &
"ocean_grid", "ocean_grid.nc")
CALL yac_fenddef( )
|
For the sending of data YAC provides a put operation. Depending on the coupling configuration, the data provided through the put operation is interpolated and sent to the respective target processes. | ||
int const nbr_hor_points = 512;
int const nbr_pointsets = 1;
int const collection_size = 4;
int info;
int ierror;
double *** send_field =
malloc(collection_size * sizeof(*send_field));
for (int i = 0; i < collection_size; i++) {
send_field[i] =
malloc((size_t)nbr_pointsets * sizeof(**send_field));
for (int j = 0; j < nbr_pointsets; j++)
send_field[i][j] =
malloc((size_t)nbr_hor_points * sizeof(***send_field));
}
for (int i = 0; i < collection_size; i++)
for (int j = 0; j < nbr_pointsets; j++)
for (int k = 0; k < nbr_hor_points; k++)
send_field[i][j][k] = ... ;
...
yac_cput ( field_id,
collection_size,
send_field,
&info,
&ierror );
| INTEGER, PARAMETER :: nbr_hor_points = 512
INTEGER, PARAMETER :: collection_size = 4
INTEGER, PARAMETER :: nbr_pointsets = 1
INTEGER :: info
INTEGER :: ierror
DOUBLE PRECISION :: send_field(nbr_hor_points, &
nbr_pointsets, &
collection_size)
DO i = 1, collection_size
DO j = 1, nbr_pointsets
DO k = 1, nbr_hor_points
send_field(k,j,i) = ...
END DO
END DO
END DO
...
CALL yac_fput ( field_id, & ! [IN]
nbr_hor_points, & ! [IN]
nbr_pointsets, & ! [IN]
collection_size, & ! [IN]
send_field, & ! [IN]
info, & ! [OUT]
ierror ) ! [OUT]
| nbr_hor_points = 512
nbr_pointsets = 1
collection_size = 4
send_field = np.empty(shape=(collection_size, nbr_pointsets, nbr_hor_points))
# ... fill send_field somehow
info = field.put (send_field)
|
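Note that the Python example fills the array with the collection index as the slowest-varying dimension, while the Fortran example declares the reversed shape. The following small numpy sketch (independent of YAC itself; the variable names are taken from the examples above) illustrates that both shapes describe the same contiguous memory, just with reversed index order.

```python
import numpy as np

collection_size, nbr_pointsets, nbr_hor_points = 4, 1, 512

# Python/C order: slowest-varying dimension (collection index) first.
send_field_py = np.arange(
    collection_size * nbr_pointsets * nbr_hor_points,
    dtype=np.float64,
).reshape((collection_size, nbr_pointsets, nbr_hor_points))

# Transposing yields a Fortran-ordered view with the reversed shape used
# by the Fortran interface; no data is copied or moved.
send_field_f = send_field_py.T

print(send_field_f.shape)  # (512, 1, 4)
# Element (k, j, i) in Fortran order is element (i, j, k) in C order:
print(send_field_f[5, 0, 2] == send_field_py[2, 0, 5])  # True
```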
The receiving of data is implemented as a get operation. If no couple is defined for the field, the get operation returns without receiving any data. | ||
int const nbr_hor_points = 1024;
int const collection_size = 4;
int info;
int ierror;
double ** recv_field =
malloc((size_t)collection_size * sizeof(*recv_field));
for (int i = 0; i < collection_size; ++i)
recv_field[i] =
malloc((size_t)nbr_hor_points * sizeof(**recv_field));
yac_cget ( field_id,
collection_size,
recv_field,
&info,
&ierror );
| INTEGER, PARAMETER :: nbr_hor_points = 1024
INTEGER, PARAMETER :: collection_size = 4
INTEGER :: info
INTEGER :: ierror
DOUBLE PRECISION recv_field(nbr_hor_points, collection_size)
CALL yac_fget ( field_id, & ! [IN]
nbr_hor_points, & ! [IN]
collection_size, & ! [IN]
recv_field, & ! [OUT]
info, & ! [OUT]
ierror ) ! [OUT]
| nbr_hor_points = 1024
collection_size = 4
recv_field = np.empty(shape=(collection_size, nbr_hor_points))
recv_field, info = field.get ( recv_field )
Instead of providing recv_field, None (the default argument) can be passed. In that case a buffer of the correct size is allocated. See also Coroutines for an example of how to use the Python coroutines interface. |
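The allocation behaviour of the Python get can be illustrated with a small mock. MockField below is a hypothetical stand-in, not the real yac field class (which additionally performs the actual communication); it only mimics the buffer handling described above.

```python
import numpy as np

class MockField:
    """Hypothetical stand-in for a YAC field object (illustration only)."""
    collection_size = 4
    nbr_points = 1024

    def get(self, recv_field=None):
        # Mirror of the convenience described above: when no buffer is
        # supplied, one of the correct shape is allocated internally.
        if recv_field is None:
            recv_field = np.empty((self.collection_size, self.nbr_points))
        recv_field[...] = 0.0  # stands in for the actually received data
        info = 1               # stands in for the YAC info return value
        return recv_field, info

field = MockField()
buf, info = field.get()  # no buffer passed -> allocated internally
print(buf.shape)         # (4, 1024)
```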
In addition to the put and get operations, YAC also provides an exchange operation, which can be used for bi-directional exchanges. It executes a put and a get operation at the same time and returns after both have been completed. Combining both operations makes the internal buffer usage more efficient. | ||
int const send_nbr_hor_points = 512;
int const send_nbr_pointsets = 1;
int const recv_nbr_hor_points = 1024;
int const collection_size = 4;
int send_info;
int recv_info;
int ierror;
double *** send_field =
malloc((size_t)collection_size * sizeof(*send_field));
double ** recv_field =
malloc((size_t)collection_size * sizeof(*recv_field));
for (int i = 0; i < collection_size; i++) {
send_field[i] =
malloc(send_nbr_pointsets * sizeof(**send_field));
recv_field[i] =
malloc(
(size_t)recv_nbr_hor_points * sizeof(**recv_field));
for (int j = 0; j < send_nbr_pointsets; j++)
send_field[i][j] =
malloc(send_nbr_hor_points * sizeof(***send_field));
}
for (int i = 0; i < collection_size; i++)
for (int j = 0; j < send_nbr_pointsets; j++)
for (int k = 0; k < send_nbr_hor_points; k++)
send_field[i][j][k] = ... ;
...
yac_cexchange ( send_field_id,
recv_field_id,
collection_size,
send_field,
recv_field,
&send_info,
&recv_info,
&ierror );
| INTEGER, PARAMETER :: send_nbr_hor_points = 512
INTEGER, PARAMETER :: send_nbr_pointsets = 1
INTEGER, PARAMETER :: recv_nbr_hor_points = 1024
INTEGER, PARAMETER :: collection_size = 4
INTEGER :: send_info
INTEGER :: recv_info
INTEGER :: ierror
DOUBLE PRECISION :: send_field(send_nbr_hor_points, &
send_nbr_pointsets, &
collection_size)
DOUBLE PRECISION :: recv_field(recv_nbr_hor_points, &
collection_size)
DO i = 1, collection_size
DO j = 1, send_nbr_pointsets
DO k = 1, send_nbr_hor_points
send_field(k,j,i) = ...
END DO
END DO
END DO
...
CALL yac_fexchange ( send_field_id, & ! [IN]
recv_field_id, & ! [IN]
send_nbr_hor_points, & ! [IN]
send_nbr_pointsets, & ! [IN]
recv_nbr_hor_points, & ! [IN]
collection_size, & ! [IN]
send_field, & ! [IN]
recv_field, & ! [OUT]
send_info, & ! [OUT]
recv_info, & ! [OUT]
ierror ) ! [OUT]
|
There are two utility routines that help working with asynchronous data exchanges. | ||
The test operation checks, for a provided field, whether an asynchronous data exchange operation is still active. | ||
int flag;
yac_ctest(field_id, &flag);
if (!flag)
fputs("Last async operation is still active.", stdout);
| LOGICAL :: flag
...
CALL yac_ftest ( field_id, flag )
IF (.NOT. flag) print *, "Last async operation is still active."
| if field.test() == 0:
print("Last async operation is still active")
|
The second utility routine is the wait operation. It does not return until all active asynchronous data exchanges associated with the provided field have been completed. (This can easily result in a deadlock. The user has to make sure that this does not occur.) | ||
yac_cwait(field_id);
fputs("Last async operation has been completed.", stdout);
| CALL yac_fwait ( field_id )
print *, "Last async operation has been completed."
| field.wait()
print("Last async operation has been completed.")
|
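A typical use of test is to poll a field while overlapping other work, falling back to wait only when nothing else remains to be done. The sketch below uses a hypothetical MockField to show the pattern; a real YAC field's test/wait behave analogously, but are backed by actual asynchronous MPI communication.

```python
class MockField:
    """Hypothetical stand-in for a YAC field object (illustration only)."""

    def __init__(self, pending_polls):
        # number of test() calls until the mock exchange "completes"
        self._pending = pending_polls

    def test(self):
        # Returns a true value once no asynchronous operation is active.
        self._pending = max(0, self._pending - 1)
        return self._pending == 0

    def wait(self):
        # Blocks until all asynchronous operations have completed.
        self._pending = 0

field = MockField(pending_polls=3)
polls = 0
while not field.test():
    # In a real model, useful computation would be overlapped here
    # instead of busy-waiting.
    polls += 1
field.wait()  # now a no-op: nothing is pending anymore
print(polls)  # 2
```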
All exchange routines return an info argument, which can be used by the user to determine which action was performed by the respective exchange call. The possible values are as follows: | ||
YAC has for each field an internal event timer, which is advanced with each data exchange operation. However, sometimes it may be desirable to advance the internal clock without calling a data exchange operation, if it is safe to do so. For each field it is possible to check the action that will occur in the next call of a data exchange operation. If appropriate, the update operation can be called instead, which advances the internal clock without exchanging any data. | ||
int action;
yac_cget_action(field_id, &action);
if (action == YAC_ACTION_NONE) {
yac_cupdate(field_id);
} else {
// do data exchange
...
}
| INTEGER :: action
CALL yac_fget_action(field_id, action)
IF (action == YAC_ACTION_NONE) THEN
CALL yac_fupdate(field_id)
ELSE
! do data exchange
...
END IF
| if field.action == yac.Action.NONE:
field.update()
else:
# do data exchange
...
|
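The action/update pattern above can be sketched as a model time loop that skips exchanges whenever the event timer says nothing would happen. Action and MockField below are hypothetical mocks; the real enum and field objects come from the yac bindings, where the action depends on the field's timestep and coupling period.

```python
from enum import Enum

class Action(Enum):
    """Hypothetical mock of the YAC action enum (illustration only)."""
    NONE = 0
    COUPLING = 1

class MockField:
    """Hypothetical stand-in for a YAC field object (illustration only)."""

    def __init__(self, coupling_period, timestep):
        self._step = 0
        self._ratio = coupling_period // timestep

    @property
    def action(self):
        # The next exchange would couple only on every _ratio-th timestep.
        return (Action.COUPLING if self._step % self._ratio == 0
                else Action.NONE)

    def update(self):
        # Advance the internal event timer without exchanging any data.
        self._step += 1

    def put(self, data):
        # A data exchange also advances the event timer.
        self._step += 1

field = MockField(coupling_period=3600, timestep=900)  # couple every 4th step
exchanges = 0
for _ in range(8):
    if field.action == Action.NONE:
        field.update()   # safe: no exchange would happen anyway
    else:
        field.put(None)  # stands in for a real put/get/exchange
        exchanges += 1
print(exchanges)  # 2
```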
Once all exchanges have been completed, YAC has to be finalised, which frees all memory allocated by YAC. | ||
yac_cfinalize ( );
| CALL yac_ffinalize ( )
| del yac
|
In case MPI was initialised by YAC (i.e. YAC called MPI_Init), MPI_Finalize will be called in this finalisation phase. If the user called MPI_Init themselves (before the YAC initialisation), they also have to call MPI_Finalize once the finalisation phase has been completed. |
It is possible to restart YAC. To do that the user has to clean up YAC in the finalisation phase instead of finalising it. | ||
yac_ccleanup ( );
| CALL yac_fcleanup ( )
| |
After the clean-up, the user can restart YAC by going through the Initialisation, Definition, and Data Exchange phases as before. The restarted YAC can have a different configuration than the previous one. In case the user initialised yaxt themselves, they have to finalise it after the clean-up in order to be able to initialise YAC again. |