YetAnotherCoupler  2.4.1
The Fortran interface (yac_finterface.f90 and mo_yac_finterface.f90)

Initialisation

Definition

End of definition and search

Data exchange

Finalisation

Query routines

Auxiliary routines

Remarks
Functions marked with (*) are overloaded with respect to the data type. See mo_yac_finterface.f90 for details.

The Initialisation Phase

CHARACTER(LEN=YAC_MAX_CHARLEN) :: xml_filename
CHARACTER(LEN=YAC_MAX_CHARLEN) :: xsd_filename
...
xml_filename = "coupling.xml"
xsd_filename = "coupling.xsd"
CALL yac_finit ( xml_filename, xsd_filename )
CHARACTER(LEN=YAC_MAX_CHARLEN) :: xml_filename
CHARACTER(LEN=YAC_MAX_CHARLEN) :: xsd_filename
INTEGER :: world_comm
...
xml_filename = "coupling.xml"
xsd_filename = "coupling.xsd"
world_comm = MPI_COMM_WORLD
CALL yac_finit_comm ( xml_filename, xsd_filename, world_comm )

After the initialisation we have the possibility to overwrite (or reinitialise) the job start and end dates by calling yac_fdef_datetime. This reinitialisation has to happen before any call to yac_fdef_field; otherwise it will not have any impact!

Both arguments are optional:

character(len=YAC_MAX_CHARLEN) :: start_datetime
character(len=YAC_MAX_CHARLEN) :: end_datetime
start_datetime = '1850-01-01T00:00:00'
end_datetime = '1851-12-31T00:00:00'
call yac_fdef_datetime ( start_datetime = start_datetime, end_datetime = end_datetime )

The Definition Phase

Component Definition

INTEGER :: comp_id
CHARACTER(LEN=YAC_MAX_CHARLEN) :: comp_name
comp_name = 'Ocean'
CALL yac_fdef_comp ( comp_name, & ! [IN]
                     comp_id )    ! [OUT]
INTEGER :: comp_ids(2)
CHARACTER(LEN=YAC_MAX_CHARLEN) :: comp_names(2)
comp_names(1) = 'Ocean'
comp_names(2) = 'Atmosphere'
CALL yac_fdef_comp ( 2,          & ! [IN]
                     comp_names, & ! [IN]
                     comp_ids )    ! [OUT]

The components are identified by a unique component name. The returned comp_id is used in subsequent calls for further definitions for this particular component. It is possible to define more than one component per process.

Once all components have been defined, a local communicator can be retrieved for each component, identified by its local comp ID.

INTEGER :: comp_id
INTEGER :: local_communicator
CALL yac_fget_localcomm ( local_communicator, & ! [OUT]
                          comp_id )             ! [IN]

Now we have the possibility to redirect the standard output and standard error streams:

INTEGER :: i, local_comm, npes, rank, ierror
...
CALL mpi_comm_rank ( local_comm, rank, ierror )
CALL mpi_comm_size ( local_comm, npes, ierror )
i = LEN_TRIM(comp_name)
CALL yac_redirstdout ( TRIM(comp_name), i, 1, rank, npes )

Grid Definition

For each grid the geographical information about the elements (or cells) needs to be provided. The geographical positions of the vertices have to be provided as a simple list: x_vertices(n) and y_vertices(n) contain the longitude and latitude of vertex n. For each cell, connectivity information has to be provided, i.e. which vertices have to be taken to form a particular cell. The example here is given for an unstructured grid. For further options see the interface description in yac_fdef_grid (*).

The grid_id is returned.

CHARACTER(LEN=YAC_MAX_CHARLEN) :: grid_name
INTEGER, PARAMETER :: nbr_vertices = 20
INTEGER, PARAMETER :: nbr_cells = 5
INTEGER :: grid_id
INTEGER :: nbr_vertices_per_cell(nbr_cells)
INTEGER :: cell_to_vertex(nbr_cells,nbr_vertices)
DOUBLE PRECISION :: x_vertices(nbr_vertices)
DOUBLE PRECISION :: y_vertices(nbr_vertices)
grid_name = 'ocean_grid'
CALL yac_fdef_grid ( grid_name,             &
                     nbr_vertices,          &
                     nbr_cells,             &
                     nbr_vertices_per_cell, &
                     x_vertices,            &
                     y_vertices,            &
                     cell_to_vertex,        &
                     grid_id )
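The example above only declares the arrays. As an illustration, the following hypothetical snippet fills them for a single quadrilateral cell; the array shapes, the cell_to_vertex(corner, cell) ordering, and the coordinate values are assumptions made for this sketch and should be checked against the yac_fdef_grid (*) interface description.

! Hypothetical setup of the grid arrays for one quadrilateral cell
! (shapes and values chosen for illustration only).
INTEGER, PARAMETER :: nbr_vertices = 4
INTEGER, PARAMETER :: nbr_cells    = 1
INTEGER            :: nbr_vertices_per_cell(nbr_cells)
INTEGER            :: cell_to_vertex(4,nbr_cells)
DOUBLE PRECISION   :: x_vertices(nbr_vertices)
DOUBLE PRECISION   :: y_vertices(nbr_vertices)

nbr_vertices_per_cell(1) = 4                     ! the cell has four corners
x_vertices = (/ 0.0d0, 0.1d0, 0.1d0, 0.0d0 /)    ! longitudes of the corners
y_vertices = (/ 0.0d0, 0.0d0, 0.1d0, 0.1d0 /)    ! latitudes of the corners
cell_to_vertex(:,1) = (/ 1, 2, 3, 4 /)           ! corners of cell 1, counter-clockwise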

Point Definition

If the data do not represent values for the complete cell, it is possible to define sets of points. Here we specify points at some location inside a cell (location can be YAC_LOCATION_CELL, YAC_LOCATION_CORNER, or YAC_LOCATION_EDGE). The example provided here is for unstructured grids. For further options see the interface description in yac_fdef_points (*).

INTEGER, PARAMETER :: nbr_points = 5
INTEGER :: point_id
DOUBLE PRECISION :: x_points(nbr_points)
DOUBLE PRECISION :: y_points(nbr_points)
CALL yac_fdef_points ( grid_id,           & ! [IN]
                       nbr_points,        & ! [IN]
                       YAC_LOCATION_CELL, & ! [IN]
                       x_points,          & ! [IN]
                       y_points,          & ! [IN]
                       point_id )           ! [OUT]
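If cell points (YAC_LOCATION_CELL) are used, one simple way to obtain their coordinates is to average the corner coordinates of each cell. The following sketch assumes one point per cell and the cell_to_vertex(corner, cell) ordering of the toy grid sketch above; it is only an illustration and ignores cells near the poles and the date line.

! Hypothetical cell-centre coordinates as the arithmetic mean of the
! corner coordinates (illustration only; one point per cell).
INTEGER :: cell, corner, nv

DO cell = 1, nbr_cells
   nv = nbr_vertices_per_cell(cell)
   x_points(cell) = 0.0d0
   y_points(cell) = 0.0d0
   DO corner = 1, nv
      x_points(cell) = x_points(cell) + x_vertices(cell_to_vertex(corner,cell))
      y_points(cell) = y_points(cell) + y_vertices(cell_to_vertex(corner,cell))
   END DO
   x_points(cell) = x_points(cell) / DBLE(nv)
   y_points(cell) = y_points(cell) / DBLE(nv)
END DO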

Decomposition information

In this example, glb_index contains the global cell index.

DO i = 1, nbr_cells
   glb_index(i) = i
   owner_local(i) = -1
ENDDO

CALL yac_fset_global_index ( &
  & glb_index,         &
  & YAC_LOCATION_CELL, &
  & grid_id )

With yac_fset_core_mask we flag all inner cells as valid. These are all cells of the compute domain excluding the halo and other dummy cells.

The core mask can be specified as LOGICAL or INTEGER. See also yac_fset_core_mask (*).

CALL yac_fset_core_mask ( &
  & is_core,           &
  & YAC_LOCATION_CELL, &
  & grid_id )
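For completeness, a minimal sketch of how is_core used above could be filled; in this toy case the decomposition has no halo or dummy cells, so every local cell is a core cell (the LOGICAL variant is used, and the names are assumptions of this sketch).

! Hypothetical core mask: no halo cells in this example, so every local
! cell belongs to the core (compute) domain.
LOGICAL :: is_core(nbr_cells)

is_core(:) = .TRUE.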

Definition of Masks

In order to mask out a certain set of points or cells, two subroutine calls are available. The mask should have the same size as the number of points or cells and thus flags each point or cell as either .TRUE. or .FALSE. If a cell shall not be considered, is_valid has to be .FALSE. for that cell. is_valid can also be provided as an integer array; in this case cells to be masked out have to be set to 0.

CALL yac_fset_mask ( is_valid, & ! [IN]
                     point_id )  ! [IN]
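As an illustration, is_valid could be filled as follows. The criterion (masking out all points on the southern hemisphere) and the assumption that y_points holds the point latitudes are made up for this sketch.

! Hypothetical mask: exclude all points on the southern hemisphere.
LOGICAL :: is_valid(nbr_points)
INTEGER :: i

DO i = 1, nbr_points
   is_valid(i) = ( y_points(i) >= 0.0d0 )   ! .TRUE. = point takes part in the coupling
END DO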

Definition of Coupling Fields with default mask

CALL yac_fdef_field ( field_name,     & ! [IN]
                      comp_id,        & ! [IN]
                      point_ids,      & ! [IN]
                      num_point_sets, & ! [IN]
                      field_id )        ! [OUT]
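A hypothetical setup of the arguments used in the call above; the field name and the use of a single point set are assumptions of this sketch.

! Hypothetical argument setup for yac_fdef_field (names chosen for illustration).
INTEGER, PARAMETER :: num_point_sets = 1
INTEGER :: point_ids(num_point_sets)
INTEGER :: field_id
CHARACTER(LEN=YAC_MAX_CHARLEN) :: field_name

field_name = 'sea_surface_temperature'   ! example field name
point_ids(1) = point_id                  ! point set returned by yac_fdef_points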

Definition of Masks

In addition to the default mask attached to a set of points with yac_fset_mask, yac_fdef_mask defines a mask of its own and returns a mask_id. The mask array follows the same convention as above; the returned mask_id can then be passed to yac_fdef_field_mask (see below).

CALL yac_fdef_mask ( grid_id,    & ! [IN]
                     nbr_points, & ! [IN]
                     location,   & ! [IN]
                     is_valid,   & ! [IN]
                     mask_id )     ! [OUT]

Definition of Coupling Fields with different masks

CALL yac_fdef_field_mask ( field_name,     & ! [IN]
                           comp_id,        & ! [IN]
                           point_ids,      & ! [IN]
                           mask_ids,       & ! [IN]
                           num_point_sets, & ! [IN]
                           field_id )        ! [OUT]
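Analogously, a hypothetical setup for the masked variant, following the sketch above: mask_ids simply collects, per point set, the mask_id values returned by yac_fdef_mask.

! Hypothetical mask list for yac_fdef_field_mask: one mask per point set.
INTEGER :: mask_ids(num_point_sets)

mask_ids(1) = mask_id   ! mask returned by yac_fdef_mask for the first point set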

End of Definition - Start of Search

Once all components, grids and fields are defined the search is invoked by

INTEGER :: ierror
CALL yac_fsearch ( ierror ) ! [OUT]

All components defined with yac_fdef_comp and all coupling fields defined with yac_fdef_field or yac_fdef_field_mask up to this point take part in the search.

Upon return from yac_fsearch, YAC can provide a communicator which encompasses the root processes of a pair of components (see yac_fget_pair_rootcomm).

Data Exchange

The functions for sending and receiving data are overloaded w.r.t. the data type of the send and receive fields. Thus, send_field and recv_field can be either of type REAL or DOUBLE PRECISION.

See also yac_fput (*).

The sending is implemented using non-blocking MPI function calls. Thus, the routine returns even if the data have not been transferred to the receiver. The data are buffered internally and the user is free to reuse the send_field buffer for the next message.

INTEGER, PARAMETER :: nbr_hor_points = 512
INTEGER, PARAMETER :: collection_size = 4
INTEGER, PARAMETER :: nbr_pointsets = 1
INTEGER :: field_id
INTEGER :: info
INTEGER :: ierror
DOUBLE PRECISION :: send_field(nbr_hor_points, nbr_pointsets, collection_size)
CALL yac_fput ( field_id,        & ! [IN]
                nbr_hor_points,  & ! [IN]
                nbr_pointsets,   & ! [IN]
                collection_size, & ! [IN]
                send_field,      & ! [IN]
                info,            & ! [OUT]
                ierror )           ! [OUT]
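Since the data are buffered internally, the send buffer may be reused as soon as yac_fput has returned. A small sketch, assuming that a nonzero ierror indicates a problem:

! yac_fput has buffered the data internally, so send_field may be refilled
! immediately, e.g. with the values of the next model time step.
IF ( ierror /= 0 ) THEN
   WRITE (0,*) 'yac_fput returned ierror = ', ierror
END IF
send_field(:,:,:) = 0.0d0   ! safe to overwrite after the call has returned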

The receiving is implemented using so-called blocking MPI functions. Thus, yac_fget will only return once a message has been received.

See also yac_fget (*).

INTEGER, PARAMETER :: nbr_hor_points = 1024
INTEGER, PARAMETER :: collection_size = 3
INTEGER :: field_id
INTEGER :: info
INTEGER :: ierror
DOUBLE PRECISION :: recv_field(nbr_hor_points, collection_size)
CALL yac_fget ( field_id,        & ! [IN]
                nbr_hor_points,  & ! [IN]
                collection_size, & ! [IN]
                recv_field,      & ! [OUT]
                info,            & ! [OUT]
                ierror )           ! [OUT]

The Ending Phase

The coupling has to be terminated by

CALL yac_ffinalize ( )

In case MPI_Init has been called by yac_finit, MPI_Finalize will be called by yac_ffinalize. If the user has called MPI_Init themselves (before calling yac_finit), they also have to call MPI_Finalize after the call to yac_ffinalize.

Restarting YAC

It is possible to restart YAC. To do that the user has to call yac_fcleanup in the Ending Phase instead of yac_ffinalize.

CALL yac_fcleanup ( )

After yac_fcleanup has been called, the user can restart YAC by going through the Initialisation, Definition, and Data Exchange Phase as before. It is possible to restart YAC using a different XML configuration file.

In case the user initialised yaxt themselves, they have to finalise it after yac_fcleanup in order to be able to call yac_finit again.

In the final Ending Phase yac_ffinalize has to be called instead of yac_fcleanup.
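A sketch of the resulting call sequence for a run with one restart; the configuration file names are hypothetical.

CHARACTER(LEN=YAC_MAX_CHARLEN) :: xml_filename
CHARACTER(LEN=YAC_MAX_CHARLEN) :: xsd_filename

! First coupled phase (hypothetical file names)
xml_filename = "coupling_phase1.xml"
xsd_filename = "coupling.xsd"
CALL yac_finit ( xml_filename, xsd_filename )
! ... definition phase, yac_fsearch, data exchange ...
CALL yac_fcleanup ( )

! Second coupled phase, possibly with a different configuration file
xml_filename = "coupling_phase2.xml"
CALL yac_finit ( xml_filename, xsd_filename )
! ... definition phase, yac_fsearch, data exchange ...
CALL yac_ffinalize ( )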