MODULE model_mod (for NOGAPS)


version information for this file:
$Id: model_mod.html 6331 2013-07-30 17:12:57Z thoar $


Overview

NOGAPS requires the LAPACK library. The NOGAPS/DART code was developed with the Intel 10.1 compiler suite and the Intel Math Kernel Libraries on an Intel x86-64 (little-endian) chipset. The following compiler arguments were required in the mkmf.template:

INCS = -I$(NETCDF)/include
LIBS = -L$(NETCDF)/lib -lnetcdf -lmkl -lmkl_lapack -lguide -lpthread
FFLAGS  = -g -O2 -fpe0 -convert big_endian -assume byterecl -132 $(INCS)
LDFLAGS = $(FFLAGS) $(LIBS)
This is the BIG PICTURE:

NOGAPS, filter, wakeup_filter, dart_to_nogaps, and nogaps_to_dart are necessarily compiled with MPI support. perfect_model_obs and trans_time are single-threaded.

  1. NOGAPS/src/libnogaps.a must be created. Only Tim Whitcomb knows how to do this. libnogaps.a is used by both dart_to_nogaps and nogaps_to_dart.

  2. NOGAPS/shell_script/config.csh must be edited to reflect the locations of the executables, etc.

  3. A set of NOGAPS files is converted to DART initial conditions files by the script NOGAPS/shell_scripts/run_nogapsIC_to_dart.csh. run_nogapsIC_to_dart.csh is a batch script that uses Job Array syntax to launch N "identical" jobs - one for each ensemble member to be converted to a DART initial conditions file. This script also creates the "experiment directory": the run-time filesystem for all future assimilation runs of this experiment.

  4. NOGAPS/shell_scripts/run_perfect.csh will select ONE of these initial conditions files (input.nml:&perfect_model_obs_nml:restart_in_file_name) and use it as though it were the true model trajectory, harvesting synthetic observations from this trajectory at the times/locations specified in obs_seq.in. The synthetic observations will be in obs_seq.out.

  5. NOGAPS/shell_scripts/run_filter.csh will launch multiple MPI-aware programs that successively use the entire processor set. At this time, it is not possible to have some of the executables simultaneously use subsets of the processor pool.

Now for the detail:


Section 1: libnogaps.a

Just for convenience, we put the NOGAPS source code in DART/models/NOGAPS/src because we could. It can go anywhere (but the location must be specified in config.csh, mkmf_dart_to_nogaps, and mkmf_nogaps_to_dart). Not only did we build the NOGAPS executable in this directory, it was also useful to build a library out of all the routines in NOGAPS, because so many of them are needed to convert from the NOGAPS spectral representation to one more natively applicable to DART.

To build libnogaps.a, the original NOGAPS makefile must be modified. While this modification can be performed by hand, it is much easier to use the provided script fix_makefile.sed, found in the shell_scripts directory. The script reads the original makefile on standard input and writes a modified makefile with a new library target to standard output, so running it looks like:

    $ cd DART/models/NOGAPS/src
    $ ../shell_scripts/fix_makefile.sed < makefile > makefile.lib
    $ make -f makefile.lib all
The modification the script makes is to add the library target, so that 'make -f makefile.lib all' builds libnogaps.a from the existing NOGAPS routines.


Section 2: config.csh - define what's where.

The idea is that NOGAPS/shell_scripts/config.csh will be copied to the experiment directory and used by all subsequent processes. Fundamentally, the combination of $scratch_dir and $experiment_name defines the variable $experiment_dir (aka CENTRALDIR). This is where all the run-time action takes place. Here is an example of PART of the config.csh script:

set dart_base_dir     = "/fs/image/home/thoar/SVN/DART/models/NOGAPS"
set ocards_files      = "${dart_base_dir}/templates/ocards_files"
set scratch_dir       = "/ptmp/thoar/NOGAPS1"
set experiment_name   = "test6"
set experiment_dir    = ${scratch_dir}/${experiment_name}
set dtg               = 2008080100
set resolution        = 159
set n_levels          = 30
set NOGAPS_exec_dir   = "${dart_base_dir}/src"
set NOGAPS_exec_name  = "got${resolution}l${n_levels}"
set perturb_dir       = "/home/coral/hansenj/DART_experiments/cookbook/DART/models/NOGAPS/init_perts_T${resolution}L${n_levels}"
set climo             = "${dart_base_dir}/climo${resolution}"
dart_base_dir       fully-qualified path to DART/models/NOGAPS. The DART executables are expected to be in $dart_base_dir/work/*, and the shell scripts in $dart_base_dir/shell_scripts. You can change all of this if you want to muck about with the rest of config.csh.
ocards_files        directory containing ...
scratch_dir         large directory for the experiments ...
experiment_name     name of the run-time execution directory
experiment_dir      fully-qualified directory name for an experiment
dtg                 date-time-group
resolution          horizontal resolution for running NOGAPS
n_levels            number of vertical levels
NOGAPS_exec_dir     directory containing the NOGAPS executable
NOGAPS_exec_name    the NOGAPS executable file name - encodes the horizontal and vertical resolution
perturb_dir         directory containing ...
climo               directory containing READ-ONLY versions of the climatology files
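The subsequent scripts pick up these settings by sourcing the copy of config.csh; a minimal sketch, assuming the copy lives in the current (experiment) directory:

# hypothetical fragment from the top of a run-time script
source ./config.csh      # defines experiment_dir, dtg, resolution, etc.
cd ${experiment_dir}     # aka CENTRALDIR - where the run-time action happens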

Section 3: run_nogapsIC_to_dart.csh - Generating the DART initial conditions.

The run_nogapsIC_to_dart.csh script is written to exploit the Job Array syntax of LSF and PBS and can easily be modified to accommodate other systems. The idea is simple: the job is submitted ONCE and multiple copies of it are spawned. Each copy has a unique identifier - an Array ID, a Task Identifier, or the like. The script translates all the queueing-system-specific variables to generic ones and uses the generic ones throughout the rest of the script, so the one script works on multiple platforms. This is what the preamble would look like for a 40-member ensemble:

#BSUB -J mbr[1-40]
...
#PBS -t 1-40
...
if ($?LSB_QUEUE) then

   #-------------------------------------------------------------------
   # This is used by LSF
   #-------------------------------------------------------------------

   setenv ORIGINALDIR $LS_SUBCWD
   setenv JOBNAME     $LSB_OUTPUTFILE:ar
   setenv JOBID       $LSB_JOBID
   setenv MYQUEUE     $LSB_QUEUE
   setenv MYHOST      $LSB_SUB_HOST
   setenv mem_id      $LSB_JOBINDEX
   setenv f_mbr       $LSB_JOBINDEX_END

else if ($?PBS_QUEUE) then

   #-------------------------------------------------------------------
   # This is used by PBS - f_mbr cannot be set with PBS Job Array ...
   #-------------------------------------------------------------------

   setenv ORIGINALDIR $PBS_O_WORKDIR
   setenv JOBNAME     $PBS_JOBNAME
   setenv JOBID       $PBS_JOBID:ar
   setenv MYQUEUE     $PBS_QUEUE
   setenv MYHOST      $PBS_O_HOST
   setenv mem_id      $PBS_ARRAYID
   setenv f_mbr       xx

else

   #-------------------------------------------------------------------
   # You can run this interactively to check syntax, file motion, etc.
   # These are all just make-believe.
   #-------------------------------------------------------------------

   setenv ORIGINALDIR `pwd`
   setenv JOBNAME     mbr001
   setenv JOBID       $$
   setenv MYQUEUE     Interactive
   setenv MYHOST      $host
   setenv mem_id      3
   setenv f_mbr       20

endif

The number of job copies spawned is controlled through the job array syntax - one copy for each desired ensemble member. This must be hand-entered as part of the LSF/PBS directives; each queueing system has its own syntax.
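For reference, submission looks something like the following sketch (queue and resource options omitted; the exact array flag varies with the PBS flavor):

# LSF reads the directives - including the job array - from the script itself
bsub < run_nogapsIC_to_dart.csh

# PBS/Torque can take the array specification in the script or on the command line
qsub -t 1-40 run_nogapsIC_to_dart.csh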

The script makes some assumptions about filenames, reflected in the &nogaps_to_dart_nml namelist:

&nogaps_to_dart_nml
   nogaps_restart_filename    = 'specfiles/shist000000',
   nogaps_to_dart_output_file = 'dart_new_vector',
   nogaps_data_time_filename  = 'CRDATE.dat'
   /

Each ensemble member gets a unique directory, and the files and executables are copied to these directories. The weak spot here is that ALL of the ensemble members try to link to and populate the SAME climo directory. Since this is read-only anyway, I'd prefer to just point everything at the source of the link and be done with it (i.e. in config.csh just set climo = "${climo_dir}/climo${resolution}"). Dan created tarfiles with the specfiles for multiple ensemble members. The appropriate tarfile is unpacked in place and the contents are then moved to the expected locations, i.e. those specified in input.nml:&nogaps_to_dart_nml:nogaps_restart_filename (specfiles/shist000000). A sketch of this per-member setup follows.
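A minimal sketch of that per-member setup; the directory and tarfile names are illustrative assumptions, and $mem_id comes from the job array translation above:

# hypothetical per-member setup
set memdir = ${experiment_dir}/mbr`printf "%04d" $mem_id`
mkdir -p ${memdir}/specfiles
cd ${memdir}
ln -s ${climo} climo                          # everyone shares the read-only climatology
tar -xf ${perturb_dir}/member${mem_id}.tar    # unpack the specfiles in place
mv shist000000 specfiles/shist000000          # where &nogaps_to_dart_nml expects it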

The namelists for NOGAPS are created. All directories use relative pathnames (i.e. '.') for the shortest possible names. There were comments in the original scripts that the length of the strings was a concern. Since the namelists used to use 'temp' - which was a relative link - or '$HOME' - which was the same place as 'temp' and '.' - there is no reason to make things more complicated than necessary.

FIXME: may be possible to remove the trans_time ... or not.

All the bits and pieces necessary to run nogaps_to_dart are assembled in the unique run directory and nogaps_to_dart is run. The name of the output file is specified in the namelist as dart_new_vector; the file is then renamed to be consistent with what filter will expect from input.nml:&filter_nml:restart_in_file_name (usually filter_ic).
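The rename at the end amounts to something like this sketch (the zero-padded extension matches the filter_ic.0001 convention mentioned in Section 4):

# hypothetical: rename the converter output for filter
set ext = `printf "%04d" $mem_id`
mv dart_new_vector ${experiment_dir}/filter_ic.${ext}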


Section 4: (optional) run_perfect.csh - Running a perfect model experiment.

I am going to assume that the target observation sequence file is already created somewhere and is called obs_seq.in. Furthermore, perfect_model_obs is a single-threaded application while the model is MPI-aware. This means that only one MPI-aware application is running at one time - a pretty simple scenario.

There are several comment blocks for PBS or LSF directives that make it possible to use the same script for both batch queueing systems. The first executable portion of the script simply translates the queueing-system-specific variables to generic names that can be used throughout the remainder of the script. The experiment_dir (i.e. CENTRALDIR) is known from the original config.csh, so the first thing that happens is a 'cd' to CENTRALDIR.

All the executables and input control files are copied to CENTRALDIR. The ensemble member to be used as THE TRUTH is defined by input.nml:&perfect_model_obs_nml:restart_in_file_name (right now it is set to filter_ic.0001) which must be a pre-existing file in CENTRALDIR (created by run_nogapsIC_to_dart.csh).

Really, all that is left is to set the value of the MPI command needed by the model executable. If you are using a queueing system, the MPI command is already known (from config.csh, actually); if not, there is some work to be done. The block with the comment "# WARNING: This block is untested ..." is, well ..., untested and unlikely to work without modification.
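In outline, setting the MPI command looks something like this sketch; mpirun.lsf and the fallback process count are assumptions about the local installation, and the interactive branch is exactly the untested part:

if ($?LSB_QUEUE) then
   set MPICMD = "mpirun.lsf"                  # LSF-aware wrapper, if installed
else if ($?PBS_QUEUE) then
   set NPROCS = `cat $PBS_NODEFILE | wc -l`
   set MPICMD = "mpirun -np $NPROCS"
else
   # WARNING: This block is untested - adjust for your system
   set MPICMD = "mpirun -np 4"
endif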

This is a great way to test changes to the advance_model.csh script. The same advance_model.csh script can be (should be?) used by both perfect_model_obs and by filter.


Section 5: run_filter.csh - Running an assimilation experiment.

run_filter.csh follows the same strategy as run_perfect.csh and run_nogapsIC_to_dart.csh with regard to the submission directives and variable-name translation. All the input files/executables are copied to CENTRALDIR. There is some shell trickery to extract bits from input.nml - namely: the ensemble size, the filter 'async' variable, and the string containing the model advance command. All of these have bearing on the logic of the script.
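The 'shell trickery' is essentially grep and sed on input.nml; a minimal sketch, assuming each variable appears exactly once in the file:

# hypothetical extraction of control values from input.nml
set ens_size = `grep ens_size input.nml | sed -e "s/.*=//" -e "s/,//"`
set async    = `grep async    input.nml | sed -e "s/.*=//" -e "s/,//"`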

Essentially, if the model is an MPI-aware program and filter is an MPI-aware program ... getting the O/S to run both of these at the same time has been tricky. filter runs in the background and quite literally goes to sleep while the model executes. When the model advance is complete, wakeup_filter is executed to wake filter and continue. The communication for this is through named pipes - which are like files but without the delay of the filesystem. The one problem with this is that sometimes filter fails and doesn't exit cleanly, causing the job to hang. This is system-dependent, but we're working on a more reliable mechanism.
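Conceptually, the handshake looks like the following sketch - a simplification of the real run_filter.csh, with the loop-termination details omitted:

# sketch of the filter/model handshake through named pipes
mkfifo filter_to_model.lock model_to_filter.lock    # create the named pipes
( mpirun ./filter ) &                               # filter sleeps during model advances

while ( 1 )
   set todo = `cat < filter_to_model.lock`          # blocks until filter writes
   if ( "${todo}" == "finished" ) break
   # advance all the members named in the control file, then wake filter
   csh ./advance_model.csh 0 ${ensemble_size} filter_control00000
   mpirun ./wakeup_filter
end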

Since NOGAPS and the conversion routines ARE MPI-aware, the $parallel_model variable must be TRUE. The logic for the parallel_model = .false. case is included for completeness only.

In the current implementation, the filter_control00000 file created by filter contains all the information to advance all of the ensemble members - one after another - sequentially.


Explanation of advance_model.csh - how one member advances.

advance_model.csh should not need to be modified from one run to another unless different NOGAPS files are needed.

advance_model.csh gets spawned by either perfect_model_obs or filter. input.nml:&filter_nml:adv_ens_command (or &perfect_model_obs_nml:adv_ens_command) specifies which shell script gets invoked. For the scenario where all of the MPI tasks available to the job are used to advance a single ensemble member, advance_model.csh is the right choice. advance_model_batch.csh is under development for the scenario where the model advance is a self-contained job for the queueing system. The remainder of the discussion is for advance_model.csh, which is intended to be the 'cleanest' (i.e. simplest?) example upon which to build future scripts.

advance_model.csh is called with precisely three arguments: the process number of the caller, the number of ensemble members that must be advanced by the process, and the name of the control file for the process. These control the iterations (one for each ensemble member) inside advance_model.csh.

A fundamental tenet is that all the files needed by advance_model.csh are available in the run-time directory of filter (i.e. CENTRALDIR). filter forks the advance_model.csh script, which cd's to a local directory in which to advance the model. All the files are moved/copied into this local directory, the work is done, and the output file gets moved/copied back to CENTRALDIR. A condensed sketch of the whole script appears at the end of this section.

  1. Some error-checking is performed to ensure the directories required by NOGAPS exist. I have no idea what is supposed to be in those directories, so ...

    The (private, local) sub-directory is created and populated with generic bits from CENTRALDIR. The DART state vector file is queried (by trans_time) to extract the current/valid date of the state vector, the target or "advance_to" date, and the forecast length in hours. trans_time is based on a little DART utility and is customized for NOGAPS - so it lives in the assembla repository as opposed to the general DART repository. trans_time expects the input filename to be temp_ic and the output file containing the time information to be time_info. These are hardwired. time_info has three things - one per line:
    dtg
    dtgnext
    endtau - the forecast length
    The NOGAPS namelists are created. There had been a circular dependence on some environment variables to specify both absolute and relative pathname information for the same directory; all pathnames are now relative to the current private run-time directory via the "./" convention.

  2. The DART state is converted to a NOGAPS specfile. dart_to_nogaps is MPI-aware and requires some of the NOGAPS code - which requires a file named CRDATE.dat containing a valid "$dtg". dart_to_nogaps also creates a file that is not needed - the default name is dart_data.time. It has the same format and contents as the output of the trans_time routine, so it is redundant.

  3. The model is advanced using CRDATE.dat (i.e. $dtg) for a forecast length of $endtau (via namelist) to the time specified by $dtgnext. FIXME: CRDATE.dat is manually updated with the new time/date. If possible, this is the number one thing I would fix. If the model advance fails but does not cause an error exit, the whole machine can march blindly on ...

  4. The NOGAPS state is converted back to a DART state (by nogaps_to_dart) - which requires some of the NOGAPS code, which in turn requires a CRDATE.dat file. The updated DART state is moved back to CENTRALDIR with the required filename.

  5. The indices to extract information from the filter_control00000 file are updated and the current working directory is moved back to CENTRALDIR.

After all ensemble members have been advanced, the filter_control00000 file is removed. IMPORTANT - if this file still exists when the advance_model.csh script has finished - IT IS AN ERROR - and filter will die a very theatrical death.
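Putting the pieces together, here is the heavily condensed sketch promised above. The filenames follow the text (temp_ic, dart_new_vector, filter_control00000); temp_ud and the control-file bookkeeping are illustrative assumptions:

# condensed sketch of advance_model.csh
set process      = $1     # process number of the caller
set num_states   = $2     # how many ensemble members this process advances
set control_file = $3     # name of the control file for this process

set state = 1
while ( $state <= $num_states )
   # ... extract the member number and filenames from $control_file ...

   mkdir -p advance_temp${process}          # private, local sub-directory
   cd advance_temp${process}
   cp ../temp_ic .                          # the DART state for this member

   ./trans_time                             # temp_ic -> time_info (dtg, dtgnext, endtau)
   mpirun ./dart_to_nogaps                  # DART state -> specfiles/shist000000
   mpirun ./${NOGAPS_exec_name}             # advance the model to dtgnext
   mpirun ./nogaps_to_dart                  # NOGAPS state -> updated DART state
   mv dart_new_vector ../temp_ud            # back to CENTRALDIR with expected name

   cd ..
   @ state = $state + 1
end

\rm -f $control_file    # MUST be removed, or filter declares an error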


And then a miracle happens ...

All of this is predicated on the ability to assimilate as many cycles as you want in a single job - which is unrealistic and not very smart. When DART finishes, it writes out restart files that can be used as input for subsequent assimilations. Coming up with a naming scheme to archive these files is left as an exercise ... as are the scripts that manipulate the observation sequences and/or times that appear in the input.nml:&filter_nml namelist.


NAMELIST

We adhere to the F90 standard of starting a namelist with an ampersand '&' and terminating with a slash '/' for all our namelist input. The declarations have a different syntax, naturally.

namelist / model_nml / output_state_vector, time_step_days, &
             time_step_seconds, geometry_text_file, debug

This namelist is read from a file called input.nml

Contents            Type                 Description
output_state_vector logical              .true. results in netCDF files (i.e. True_State.nc, Prior_Diag.nc, and Posterior_Diag.nc) containing 'prognostic' variables; .false. results in the DART state vector being output 'as-is'. Default: .true.
time_step_days      integer              minimum number of days to advance the model. Default: 0
time_step_seconds   integer              minimum number of seconds to advance the model. Default: 900
geometry_text_file  character(len=128)   name of the file containing the geometry information. Default: noggeom.txt
debug               integer              turn up for more and more debug messages. Default: 0 (relatively silent)
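A &model_nml block exercising the defaults from the table would look like:

&model_nml
   output_state_vector = .true.,
   time_step_days      = 0,
   time_step_seconds   = 900,
   geometry_text_file  = 'noggeom.txt',
   debug               = 0
   /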


General discussion of interfaces

Several of the routines listed below are allowed to be a NULL INTERFACE. This means the subroutine or function name must exist in this file, but it is ok if it contains no executable code.

A few of the routines listed below are allowed to be a PASS-THROUGH INTERFACE. This means the subroutine or function name can be listed on the 'use' line from the location_mod, and no subroutine or function with that name is supplied in this file. Alternatively, this file can provide an implementation which calls the underlying routines from the location_mod and then alters or augments the results based on model-specific requirements.


OTHER MODULES USED

types_mod
time_manager_mod
threed_sphere/location_mod
utilities_mod
obs_kind_mod
typesizes
netcdf
nogaps_interp_mod

This seems like the right place to describe the following, even though they are not necessarily directly used by model_mod. NOGAPS has a few modules that are required:

basediab.mod
cubic.mod
cupsum.mod
diagnos.mod
dyncons.mod
fields.mod
gcvars.mod
mpinog.mod
navdas.mod
symem.mod
times.mod

and a couple include files:

imp.h
param.h

These, as well as libnogaps.a, are expected by the mkmf_ scripts to be in the DART/models/NOGAPS/src directory.


PUBLIC INTERFACES

use model_mod, only : get_model_size,          &
                      adv_1step,               &
                      get_state_meta_data,     &
                      model_interpolate,       &
                      get_model_time_step,     &
                      static_init_model,       &
                      end_model,               &
                      init_time,               &
                      init_conditions,         &
                      nc_write_model_atts,     &
                      nc_write_model_vars,     &
                      pert_model_state,        &
                      get_close_maxdist_init,  &
                      get_close_obs_init,      &
                      get_close_obs,           &
                      ens_mean_for_model

Optional namelist interface &model_nml may be read from file input.nml. The details of the namelist are always model-specific (there are no generic namelist values).

A note about documentation style. Optional arguments are enclosed in brackets [like this].


model_size = get_model_size( )
integer :: get_model_size

Returns the length of the model state vector. Required.

model_size The length of the model state vector.


call adv_1step(x, time)
real(r8), dimension(:), intent(inout) :: x
type(time_type),        intent(in)    :: time

Intended to perform a single timestep advance of the model IFF the model can be called as a subroutine. This is a NULL INTERFACE for the NOGAPS model.

x State vector of length model_size.
time    Specifies time of the initial model state.


call get_state_meta_data (index_in, location [, var_type] )
integer,             intent(in)  :: index_in
type(location_type), intent(out) :: location
integer, optional,   intent(out) ::  var_type 

Given an integer index into the state vector structure, returns the associated location. A second intent(out) optional argument var_type can be returned if the model has more than one type of field (for instance temperature and zonal wind component). This routine could also be called get_state_location_plus() since it returns not the data value, but the location of that value, plus an optional variable type (e.g. U_WIND or V_WIND). This interface is required for all applications as it is used to compute the distance between observations and state variables.

index_in    Index of state vector element about which information is requested.
location The location of state variable element.
var_type The type of the state variable element.


call model_interpolate(x, location, itype, obs_val, istatus)
real(r8), dimension(:), intent(in)  :: x
type(location_type),    intent(in)  :: location
integer,                intent(in)  :: itype
real(r8),               intent(out) :: obs_val
integer,                intent(out) :: istatus

Given a state vector, a location, and a model state variable type, interpolates the state variable field to that location and returns the value in obs_val. The istatus variable should be returned as 0 unless there is some problem in computing the interpolation in which case an alternate value should be returned. The itype variable is a model specific integer that specifies the type of field (for instance temperature, zonal wind component, etc.).

x A model state vector.
location    Location to which to interpolate.
itype Type of state field to be interpolated.
obs_val The interpolated value from the model.
istatus Integer value returning 0 for success. Nonzero values indicate a problem:
128 == latitude out-of-range
130 == below ground


var = get_model_time_step()
type(time_type) :: get_model_time_step

Returns the time step (forecast length) of the model; the smallest increment in time that the model is capable of advancing the state in a given implementation. The actual value may be set by the model_mod namelist (depends on the model). This interface is required for all applications.

var    Smallest time step of the model. The values of input.nml:&model_nml:time_step_days and time_step_seconds are used to define the smallest time step of the model.


call static_init_model()

Called to do one-time initialization of the model. This routine reads the &model_nml namelist, sets the calendar to 'GREGORIAN', reads the geometry_text_file (usually noggeom.txt), and sets up the rest of the geometry information.



call end_model()

Does any shutdown and clean-up needed for the model. For NOGAPS, this is just deallocating the module variables allocated by static_init_model().



call init_time(time)
type(time_type), intent(out) :: time

Returns a time that is somehow appropriate for starting up a long integration of the model. At present, this is only used if the namelist parameter start_from_restart is set to .false. in the program perfect_model_obs. If this option is not to be used in perfect_model_obs, or if no synthetic data experiments using perfect_model_obs are planned, this can be a NULL INTERFACE. NOGAPS actually sets the time to 0.

time    Initial model time.


call init_conditions(x)
real(r8), dimension(:), intent(out) :: x

Returns a model state vector, x, that is some sort of appropriate initial condition for starting up a long integration of the model. At present, this is only used if the namelist parameter start_from_restart is set to .false. in the program perfect_model_obs. If this option is not to be used in perfect_model_obs, or if no synthetic data experiments using perfect_model_obs are planned, this can be a NULL INTERFACE. NOGAPS returns a vector of MISSING values.

x    Initial conditions for state vector.
default is MISSING_R8 (-888888.0)


ierr = nc_write_model_atts(ncFileID)
integer             :: nc_write_model_atts
integer, intent(in) :: ncFileID

This routine writes the NOGAPS attributes to a netCDF file. This includes coordinate variables and any metadata, but NOT the model state vector. SPACE is allocated for the model state vector (all copies), but that variable gets filled as the model advances.

The namelist variable &model_nml:output_state_vector controls whether the DART state vector is output as-is, or converted to a NOGAPS set of 'prognostic' variables.

ncFileID    Integer file descriptor to previously-opened netCDF file.
ierr Returns a 0 for successful completion.


ierr = nc_write_model_vars(ncFileID, statevec, copyindex, timeindex)
integer                            :: nc_write_model_vars
integer,                intent(in) :: ncFileID
real(r8), dimension(:), intent(in) :: statevec
integer,                intent(in) :: copyindex
integer,                intent(in) :: timeindex

This routine writes a single copy of the model state vector to a netCDF file.

The namelist variable &model_nml:output_state_vector controls whether the DART state vector is output as-is, or converted to a NOGAPS set of 'prognostic' variables.

ncFileID file descriptor to previously-opened netCDF file.
statevec A single copy of the model state vector.
copyindex    Integer index of copy to be written.
timeindex The timestep counter for the given state.
ierr Returns 0 for normal completion.


call pert_model_state(state, pert_state, interf_provided)
real(r8), dimension(:), intent(in)  :: state
real(r8), dimension(:), intent(out) :: pert_state
logical,                intent(out) :: interf_provided

The NOGAPS model_mod relies on the default DART mechanism to simply add a small amount of noise to the state vector to generate a new ensemble from a given model state.

state State vector to be perturbed.
pert_state Perturbed state vector. NOGAPS returns a vector of MISSING values.
interf_provided    NOGAPS always returns 'false' - indicating the default DART algorithms are to be used.


call get_close_maxdist_init(gc, maxdist)
type(get_close_type), intent(inout) :: gc
real(r8),             intent(in)    :: maxdist

In distance computations any two locations closer than the given maxdist will be considered close by the get_close_obs() routine. This is a PASS-THROUGH ROUTINE that uses the default routine in the location_mod.

gc The get_close_type which stores precomputed information about the locations to speed up searching
maxdist    Anything closer than this will be considered close.


call get_close_obs_init(gc, num, obs)
type(get_close_type), intent(inout) :: gc
integer,              intent(in)    :: num
type(location_type),  intent(in)    :: obs(num)

This is a PASS-THROUGH ROUTINE that uses the default routine in the location_mod. This routine precomputes information to accelerate the distance computations done by get_close_obs().

gc The get_close_type which stores precomputed information about the locations to speed up searching
num The number of items in the third argument
obs    A list of locations which will be part of the subsequent distance computations


call get_close_obs(gc, base_obs_loc, base_obs_kind, obs, obs_kind, num_close, close_ind, dist)
type(get_close_type), intent(in)  :: gc
type(location_type),  intent(in)  :: base_obs_loc
integer,              intent(in)  :: base_obs_kind
type(location_type),  intent(in)  :: obs(:)
integer,              intent(in)  :: obs_kind(:)
integer,              intent(out) :: num_close
integer,              intent(out) :: close_ind(:)
real(r8),             intent(out) :: dist(:)

Given a location and kind, compute the distances to all other locations in the obs list. The return values are the number of items which are within maxdist of the base, the index numbers in the original obs list, and optionally the distances. The gc contains precomputed information to speed the computations.

This subroutine will be called after get_close_maxdist_init and get_close_obs_init.

gc The get_close_type which stores precomputed information about the locations to speed up searching
base_obs_loc Reference location. The distances will be computed between this location and every other location in the obs list
base_obs_kind    The kind of base_obs_loc
obs Compute the distance between the base_obs_loc and each of the locations in this list
obs_kind The corresponding kind of each item in the obs list
num_close The number of items from the obs list which are within maxdist of the base location
close_ind The list of index numbers from the obs list which are within maxdist of the base location
dist The distance between each entry in the close_ind list and the base location.


call ens_mean_for_model(ens_mean)
real(r8), dimension(:), intent(in) :: ens_mean

Supplies the model_mod with a model-size vector containing the ensemble mean for each state vector item. This mean may be used to compute distances and is used to compute vertical information from a 'common' sigma coordinate system. The mean is always 'current' - it is updated after each model advance.

ens_mean    State vector containing the ensemble mean.


FILES


REFERENCES

  1. none

ERROR CODES and CONDITIONS

Restarting a bombed run ... remove the *lock files!
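In other words, something like (a sketch, assuming you restart from the experiment directory):

cd ${experiment_dir}    # aka CENTRALDIR
\rm -f *lock            # stale named pipes left by the failed run
# then resubmit run_filter.csh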

KNOWN BUGS

filter or run_filter.csh hangs.

The DART scripts require that the master task run on the head node in order to read and write the named pipes effectively. If the master task is not on the head node, filter starts to run, does what it can, and then has to communicate. No communication takes place and the filter program hangs. The layout of the tasks is the domain of the batch system and MPI and is beyond the control of DART. If one configuration causes things to fail, DART can be configured to try the opposite task layout. At this time (Aug 2010), this requires editing DART/mpi_utilities/mpi_utilities.f90 to enable the reading of a namelist, which must then also be present in input.nml. The lines of interest in mpi_utilities.f90 are:

! if your batch system does the task layout backwards, set this to true
! so the last task will communicate with the script in async 4 mode.
! as of now, mpich and mvapich do it forward, openmpi does it backwards.
logical :: reverse_task_layout  = .false.   ! task 0 on head node; task N-1 if .true.

...

! NAMELIST: change the following from .false. to .true. to enable
! the reading of this namelist.  This is the only place you need
! to make this change.
logical :: use_namelist = .true.

And the following must be inserted into input.nml :

&mpi_utilities_nml
   reverse_task_layout = .true.,
   /

Editing the source code is required for backwards-compatibility reasons. With the next release, editing the source code will not be required, but it WILL be required to have an mpi_utilities_nml namelist in input.nml.


FUTURE PLANS

Search the code for instances of the string 'FIXME' ...


PRIVATE COMPONENTS

N/A


Terms of Use

DART software - Copyright 2004 - 2011 UCAR.
This open source software is provided by UCAR, "as is",
without charge, subject to all terms of use at
http://www.image.ucar.edu/DAReS/DART/DART_download

Contact: your_name_here
Revision: $Revision: 6331 $
Source: $URL: https://svn-dares-dart.cgd.ucar.edu/DART/releases/classic/models/NOGAPS/model_mod.html $
Change Date: $Date: 2013-07-30 11:12:57 -0600 (Tue, 30 Jul 2013) $
Change history:  try "svn log" or "svn diff"