DART Classic Documentation

An incomplete list of DART-supported models:


There are two broad classes of models supported by DART. Some are 'low-order' models: generally single-threaded, subroutine-callable, and idealized, with no real observations of these systems. The other class consists of 'high-order' models, for which there are real observations. Or at least, we like to think so ...





The 'low-order' models supported in DART.


ikeda


The Ikeda model is a 2D chaotic map useful for visualizing data assimilation updates directly in state space. There are three parameters: a, b, and mu. The state is 2D, x = [X Y]. The equations are:

    X(i+1) = 1 + mu * ( X(i) * cos( t ) - Y(i) * sin( t ) )
    Y(i+1) =     mu * ( X(i) * sin( t ) + Y(i) * cos( t ) ),

where

    t = a - b / ( X(i)**2 + Y(i)**2 + 1 )

Note the system is time-discrete already, meaning there is no delta_t. The system stems from nonlinear optics (Ikeda 1979, Optics Communications). Interface written by Greg Lawson. Thanks Greg!
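
To make the map concrete, here is a minimal Python sketch (not part of DART) that iterates the equations above; the parameter values and starting point are chosen purely for illustration and are not necessarily the model_mod defaults.

import math

def ikeda_step(x, y, a=0.4, b=6.0, mu=0.83):
    # one iteration of the map; t depends on the current state
    t = a - b / (x**2 + y**2 + 1.0)
    x_new = 1.0 + mu * (x * math.cos(t) - y * math.sin(t))
    y_new = mu * (x * math.sin(t) + y * math.cos(t))
    return x_new, y_new

x, y = 0.1, 0.1
for _ in range(1000):    # plot the visited (x, y) points to see the attractor
    x, y = ikeda_step(x, y)
print(x, y)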


lorenz_63


This is the 3-variable model described in: Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141.
The system of equations is:

   X' = -sigma*X + sigma*Y
   Y' = -XZ + rX - Y
   Z' =  XY -bZ
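
As a quick reference, here is a minimal Python sketch (not the DART model_mod) that integrates these equations with a simple forward-Euler step; the classical chaotic parameters sigma = 10, r = 28, b = 8/3 and the step size are illustrative only.

def l63_tendency(x, y, z, sigma=10.0, r=28.0, b=8.0 / 3.0):
    # right-hand sides of the three equations above
    dx = -sigma * x + sigma * y
    dy = -x * z + r * x - y
    dz = x * y - b * z
    return dx, dy, dz

x, y, z = 1.0, 1.0, 1.0
dt = 0.01    # illustrative step size; DART controls its own time stepping
for _ in range(5000):
    dx, dy, dz = l63_tendency(x, y, z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
print(x, y, z)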

lorenz_84


This model is based on: Lorenz, E. N., 1984: Irregularity: A fundamental property of the atmosphere. Tellus, 36A, 98-110.
The system of equations is:

   X' = -Y^2 - Z^2  - aX  + aF
   Y' =  XY  - bXZ  - Y   + G
   Z' = bXY  +  XZ  - Z

Where a, b, F, and G are the model parameters.


9var


This model provides interesting off-attractor transients that behave something like gravity waves.


lorenz_96


This is the model we use to become familiar with new architectures, i.e., it is the one we use 'first'. It can be called as a subroutine or run as a separate executable, so we can test this model both single-threaded and MPI-enabled.

Quoting from the Lorenz and Emanuel (1998) paper:

... the authors introduce a model consisting of 40 ordinary differential equations, with the dependent variables representing values of some atmospheric quantity at 40 sites spaced equally about a latitude circle. The equations contain quadratic, linear, and constant terms representing advection, dissipation, and external forcing. Numerical integration indicates that small errors (differences between solutions) tend to double in about 2 days. Localized errors tend to spread eastward as they grow, encircling the globe after about 14 days.
...
We have chosen a model with J variables, denoted by X1, ..., XJ; in most of our experiments we have let J = 40. The governing equations are:

    dXj/dt = (Xj+1 - Xj-2)Xj-1 - Xj + F         (1)

for j = 1, ..., J. To make Eq. (1) meaningful for all values of j we define X-1 = XJ-1, X0 = XJ, and XJ+1 = X1, so that the variables form a cyclic chain, and may be looked at as values of some unspecified scalar meteorological quantity, perhaps vorticity or temperature, at J equally spaced sites extending around a latitude circle. Nothing will simulate the atmosphere's latitudinal or vertical extent.
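
The cyclic indexing is the only subtle part of Eq. (1). Here is a minimal Python sketch (not the DART model_mod; the forward-Euler step and the initial perturbation are for illustration only):

J, F, dt = 40, 8.0, 0.001    # 40 variables, standard forcing, small illustrative step

def l96_tendency(x, forcing=F):
    # dXj/dt = (Xj+1 - Xj-2) * Xj-1 - Xj + F, with the cyclic chain handled by modulo
    n = len(x)
    return [(x[(j + 1) % n] - x[(j - 2) % n]) * x[(j - 1) % n] - x[j] + forcing
            for j in range(n)]

x = [F] * J
x[0] += 0.01    # nudge one site off the (unstable) fixed point
for _ in range(20000):
    dx = l96_tendency(x)
    x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
print(min(x), max(x))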

forced_lorenz_96


The forced_lorenz_96 model implements the standard L96 equations, except that the forcing term, F, is added to the state vector and is assigned an independent value at each gridpoint. The result is a model that is twice as big as the standard L96 model. The forcing can be allowed to vary in time or can be held fixed, so that the model looks like the standard L96 but with a state vector that includes the constant forcing term. An option is also included to add random noise to the forcing terms as part of the time-tendency computation, which can help assimilation performance. If the random noise option is turned off (see the namelist), the time tendency of the forcing terms is 0.


lorenz_96_2scale


This is the Lorenz 96 2-scale model, documented in Lorenz (1995). It also offers the variant of the model from Smith (2001), which is invoked by setting local_y = .true. in the namelist. The time step, coupling, forcing, number of X variables, and the number of Ys per X are all specified in the namelist. Defaults are chosen depending on whether the Lorenz or Smith option is specified in the namelist. Lorenz is the default model. Interface written by Josh Hacker. Thanks Josh!


lorenz_04


The reference for these models is Lorenz, E. N., 2005: Designing chaotic models. J. Atmos. Sci., 62, 1574-1587.
Model II is a single-scale model, similar to Lorenz 96, but with spatial continuity in the waves. Model III is a two-scale model. It is fundamentally different from the Lorenz 96 two-scale model because of the spatial continuity and the fact that both scales are projected onto a single variable of integration. The scale separation is achieved by a spatial filter and is therefore not perfect (i.e., there is leakage). The slow scale in Model III is Model II, and thus Model II is a deficient form of Model III. The basic equations are documented in Lorenz (2005) and also in the model_mod.f90 code. The user is free to choose Model II or III with a namelist variable.


simple_advection


This model is on a periodic one-dimensional domain. A wind field is modeled using Burgers' equation with upstream semi-Lagrangian differencing. This diffusive numerical scheme is stable, and forcing is provided by adding random Gaussian noise to each wind grid variable independently at each timestep. An Eulerian option with centered-in-space differencing is also provided. The Eulerian differencing is both numerically unstable and subject to shock formation. However, it can sometimes be made stable in assimilation mode (see recent work by Majda and collaborators).
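
A rough Python sketch of the advection scheme just described (this is not the DART model_mod, which also carries tracer fields; the grid size, time step, and forcing amplitude here are arbitrary). Each grid point is updated by interpolating the wind at its upstream departure point and then adding a small random Gaussian nudge.

import math, random

n, dx, dt = 40, 1.0, 0.1
u = [1.0 + 0.5 * math.sin(2.0 * math.pi * i / n) for i in range(n)]    # initial wind
rng = random.Random(1)

for _ in range(200):
    new_u = []
    for i in range(n):
        xd = i - u[i] * dt / dx    # departure point, traced upstream
        j = math.floor(xd)
        w = xd - j
        interp = (1.0 - w) * u[j % n] + w * u[(j + 1) % n]    # linear interpolation
        new_u.append(interp + rng.gauss(0.0, 1.0e-3))         # random Gaussian forcing
    u = new_u
print(min(u), max(u))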




The 'high-order' models supported in DART.

These are listed in roughly the order in which DART support was added.


bgrid_solo

This is a dynamical core for B-grid dynamics using the Held-Suarez forcing. The resolution is configurable, and the entire model can be run as a subroutine. Status: supported.


pe2lyr

This model is a 2-layer, isentropic, primitive equation model on a sphere. Status: orphaned.


wrf

The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. More people are using DART with WRF than any other model. Note: The actual WRF code is not distributed with DART. Status: supported.


cam (see also: DART/CAM datasets, guidelines)

There are many DART-supported versions of CAM. The frozen version of the Community Climate System Model (CCSM4.0) uses the Community Atmosphere Model (CAM4). The Community Earth System Model (CESM 1.0) uses the Community Atmosphere Model (CAM5), the latest in a series of global atmosphere models developed at NCAR for the weather and climate research communities. Status: both are supported (as are some earlier releases).


PBL_1d

The PBL model is a single column version of the WRF model. In this instance, the necessary portions of the WRF code are distributed with DART. Status: supported - but looking to be adopted.


MITgcm_annulus

The MITgcm annulus model, as configured for this application within DART, is a non-hydrostatic, rigid-lid, C-grid, primitive equation model utilizing a cylindrical coordinate system. For detailed information about the MITgcm, see http://mitgcm.org. Status: orphaned - and looking to be adopted.


rose

The rose model is for the stratosphere-mesosphere and was used by Tomoko Matsuo (now at CU-Boulder and NOAA) for research in the assimilation of observations of the Mesosphere Lower-Thermosphere (MLT). Note: the model code is not distributed with DART. Status: orphaned.


MITgcm_ocean

The MIT ocean GCM version 'checkpoint59a' is the foundation of this implementation. It was modified by Ibrahim Hoteit (then of Scripps) to accommodate the interfaces needed by DART. Status: supported - but looking to be adopted.


am2

The FMS AM2 model is GFDL's atmosphere-only code using observed sea surface temperatures, time-varying radiative forcings (including volcanoes), and time-varying land cover type. This version of AM2 (also called AM2.1) uses the finite-volume dynamical core (Lin 2004). Robert Pincus (CIRES/NOAA ESRL PSD1) and Patrick Hoffman (NOAA) wrote the DART interface and are currently using the system for research. Note: the model code is not distributed with DART. Status: supported.


coamps

The DART interface was originally written and supported by Tim Whitcomb. The following model description is taken from the COAMPS overview web page:

The Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) has been developed by the Marine Meteorology Division (MMD) of the Naval Research Laboratory (NRL). The atmospheric components of COAMPS, described below, are used operationally by the U.S. Navy for short-term numerical weather prediction for various regions around the world.

Note: the model code is not distributed with DART. Status: supported


POP

The Parallel Ocean Program (POP) comes in two variants. Los Alamos National Laboratory provides POP Version 2.0, which has been modified to run in the NCAR Community Climate System Model (CCSM) framework. As of November 2009, the CCSM-POP version is being run. The LANL-POP version is nearly supported, and some extensions useful for data assimilation in general have been proposed to LANL, who have agreed in principle to implement the changes. Fundamentally, the change is an additional restart option in which the first timestep after an assimilation is an Eulerian timestep (similar to a cold start). Note: the source code for POP is not distributed with DART. Status: actively being developed.





Downloadable datasets for DART.


The code distribution was getting 'cluttered' with datasets, boundary conditions, initial conditions, ... large files that were not necessarily interesting to all people who downloaded the DART code. Worse, subversion makes a local hidden copy of the original repository contents, so the penalty for being large is doubled. It just made sense to make all the large files available on an 'as-needed' basis.

To keep the size of the DART distribution down, we maintain a separate web site to provide some observation sequences, initial conditions, and general datasets. It is our intent to populate this site with some 'verification' results, i.e., assimilations that were known to be 'good' and that should be fairly reproducible - appropriate for testing the DART installation.

Please be patient as I make time to populate this directory. (yes, 'make', all my 'found' time is taken ...)
Observation sequences can be found at http://www.image.ucar.edu/pub/DART/Obs_sets.

Verification experiments will be posted to http://www.image.ucar.edu/pub/DART/VerificationData as soon as I can get to it. These experiments will consist of initial conditions files for testing different high-order models like CAM, WRF, POP ...
The low-order models are already distributed with verification data in their work directories.

Useful bits for CAM can be found at http://www.image.ucar.edu/pub/DART/CAM.

Useful bits for WRF can be found at http://www.image.ucar.edu/pub/DART/WRF.

Useful bits for MPAS_ocn can be found at http://www.image.ucar.edu/pub/DART/MPAS_OCN.

 




Creating initial conditions for DART


The idea is to generate an ensemble that has sufficient 'spread' to cover the range of possible solutions. Insufficient spread can (and usually will) lead to poor assimilations. Think 'filter divergence'.

Generating an ensemble of initial conditions can be done in lots of ways, only a couple of which will be discussed here. The first is to generate a single initial condition and let DART perturb it with noise of a nature you specify to generate as many ensemble members as you like. The second is to take some existing collection of model states and convert them to DART initial conditions files and then use the restart_file_tool to set the proper date in the files. The hard part is then coming up with the original collection of model state(s).


Adding noise to a single model state

This method works well for some models and fails miserably for others. As it stands, DART supplies a routine that can add Gaussian noise to every element of a state vector. This can cause some models to be numerically unstable. You can supply your own model_mod:pert_model_state() if you want a more sophisticated perturbation scheme.
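
The idea, sketched in plain Python (the real routine is Fortran inside DART and is controlled by namelist settings; the state and noise amplitude here are placeholders):

import random

def perturb_ensemble(base_state, ens_size, amplitude=0.2, seed=0):
    # each member = base state + independent Gaussian noise on every element
    rng = random.Random(seed)
    return [[v + rng.gauss(0.0, amplitude) for v in base_state]
            for _ in range(ens_size)]

members = perturb_ensemble(base_state=[1.0, 2.0, 3.0], ens_size=20)
print(len(members), members[0])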


Using a collection of model states.

The important thing to remember is that the high-order models all come with routines to convert a single model restart file (or the equivalent) to a DART initial conditions file. CAM has cam_to_dart, WRF has wrf_to_dart, POP has pop_to_dart, etc. DART has the ability to read a single file that contains initial conditions for all the ensemble members, or a series of restart files - one for each ensemble member. Simply collect your ensemble of restart files from your model and convert each of them to a DART initial conditions file of the form filter_ics.#### where #### represents a 4 digit ensemble member counter. That is, for a 50-member ensemble, they should be named: filter_ics.0001  ... filter_ics.0050
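
A tiny Python sketch of the naming convention only (the converter output filenames here are hypothetical; in practice each one is produced by the appropriate <model>_to_dart tool):

import shutil

ens_size = 50
for member in range(1, ens_size + 1):
    converted = "dart_ics_member_%d" % member              # hypothetical converter output
    shutil.copy(converted, "filter_ics.%04d" % member)     # filter_ics.0001 ... filter_ics.0050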

Frequently, the initial ensemble of restart files is some climatological collection. For CAM experiments, we usually start with N different 'January 1' states ... from N different years. The DART utility program restart_file_tool is then run on each of these initial conditions files to set a consistent date for all of the initial conditions. Experience has shown that it takes less than a week of assimilating 4x/day to achieve a steady ensemble spread. WRF has its own method of generating an initial ensemble. For that, it is best to contact someone familiar with WRF/DART.


Initial conditions for the low-order models.

In general, there are 'restart files' for the low-order models that already exist as work/filter_ics. If you need more ensemble members than are supplied by these files, you can generate your own by adding noise to a single perfect_ics file. Simply specify:

&filter_nml
   start_from_restart   = .FALSE.,
   restart_in_file_name = "perfect_ics",
   ens_size             = [whatever you want]
  /




'Perfect Model' observation experiments, also known as Observing System Simulation Experiments (OSSEs)



Once a model is compatible with the DART facility, all of the functionality of DART is available. This includes 'perfect model' experiments (also called Observing System Simulation Experiments - OSSEs). Essentially, the model is run forward from a known state and, at predefined times, an observation forward operator is applied to the model state to harvest synthetic observations. This model trajectory is known as the 'true state'. The synthetic observations are then used in an assimilation experiment. The assimilation performance can be evaluated precisely because the true state (of the model) is known. Since the same forward operator is used to harvest the synthetic observations and during the assimilation, the 'representativeness' error of the assimilation system is not an issue.

There are a set of Matlab® functions to help explore the assimilation performance in state-space as well as in observation-space. An OSSE is explored in depth in our Lorenz '96 example.


Perfect Model Experiment Overview


There are four fundamental steps to running an OSSE from within DART:

  1. Create a blueprint of what, where, and when you want observations. Essentially, define the metadata of the observations without actually specifying the observation values. The default filename for the blueprint is obs_seq.in. For simple cases, this is just a matter of running create_obs_sequence and create_fixed_network_seq; more in-depth solutions are presented below.
  2. Harvest the synthetic observations from the true model state by running perfect_model_obs to advance the model from a known initial condition and apply the forward observation operator based on the observation 'blueprint'. Each observation has noise added to it based on a draw from a random normal distribution with the variance specified in the observation blueprint (see the sketch after this list). The noise-free 'truth' and the noisy 'observation' are recorded in the output observation sequence file. The entire time history of the true state of the model is recorded in True_State.nc. The default filename for the 'observations' is obs_seq.out.
  3. Assimilate the synthetic observations with filter in the usual way. The prior/forecast states are preserved in Prior_Diag.nc and the posterior/analysis states are preserved in Posterior_Diag.nc. The default filename for the file with the observations and (optionally) the ensemble estimates of the observations is obs_seq.final.
  4. Check to make sure the assimilation was effective! Ensemble DA is not a black box! YOU must check to make sure you are making effective use of the information in the observations!
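
The heart of step 2 can be summarized in a few lines of schematic Python (the forward operator H, the state, and the error variance are placeholders; perfect_model_obs does the real work in Fortran):

import random

def synthetic_obs(true_state, forward_operator, error_variance, rng):
    truth = forward_operator(true_state)                   # noise-free 'truth' copy
    obs = truth + rng.gauss(0.0, error_variance ** 0.5)    # noisy 'observation' copy
    return truth, obs

rng = random.Random(42)
H = lambda state: state[3]    # placeholder: an identity-style observation of element 4
truth, obs = synthetic_obs([0.1, 0.4, 0.9, 1.6, 2.5], H, error_variance=1.0, rng=rng)
print(truth, obs)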


1. Defining the observation metadata - the 'blueprint'.


There are lots of ways to define an observation sequence that DART can use as input for a perfect model experiment. If you already have observations in DART format, you can simply use them. If you have observations in one of the formats already supported by the DART converters (check DART/observations/observations.html), convert them to a DART observation sequence. You may need to use the obs_sequence_tool to combine multiple observation sequence files into observation sequence files for the perfect model experiment. Any existing observation values and quality control information will be ignored by perfect_model_obs - only the time and location information are used. In fact, any and all existing observation and QC values will be removed.

GENERAL COMMENT ABOUT THE INTERPLAY BETWEEN THE MODEL STOP/START FREQUENCY AND THE IMPACT ON THE OBSERVATION FREQUENCY: There is usually a very real difference between the dynamical timestep of the model and when it is safe to stop and restart the model. The assimilation window is (usually) required to be a multiple of the safe stop/start frequency. For example, an atmospheric model may have a dynamical timestep of a few seconds, but may be constrained such that it is only possible to stop/restart every hour. In this case, the assimilation window is a multiple of 3600 seconds. Trying to get observations at a finer timescale is not possible; we only have access to the model state when the model stops.

If you do not have an input observation sequence, it is simple to create one.

  1. Run create_obs_sequence to generate the blueprint for the types of observations and observation error variances for whatever locations are desired.
  2. Run create_fixed_network_seq to define the temporal distribution of the desired observations.

Both create_obs_sequence and create_fixed_network_seq interactively prompt you for the information they require. This can be quite tedious if you want a spatially dense set of observations. People have been known to write programs to generate the input to create_obs_sequence and simply pipe or redirect the information into the program. There are several examples of these in the models/bgrid_solo directory: column_rand.f90, id_set_def_stdin.f90, ps_id_stdin.f90, and ps_rand_local.f90. Be advised that some observation types have different input requirements, so a 'one size fits all' program is a waste of time.
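
A skeletal Python example of that trick is shown below. The answers create_obs_sequence expects (and their order) depend on your DART version and on the observation types enabled in &obs_kind_nml, so run the program interactively once and mirror its prompts; every value below is a placeholder.

import sys

def emit(value):
    sys.stdout.write(str(value) + "\n")

# placeholder: a coarse regular grid of observation locations
lats = [-60 + 30 * i for i in range(5)]
lons = [45 * j for j in range(8)]

for lat in lats:
    for lon in lons:
        # one block of answers per observation would go here, e.g. the
        # observation type index, location, vertical level, and error variance
        emit(lon)
        emit(lat)

# usage (hypothetical):  python gen_obs_input.py | ./create_obs_sequence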


NOTE: only the observation kinds listed in the input.nml &obs_kind_nml:assimilate_these_obs_types and evaluate_these_obs_types variables will be available to the create_obs_sequence program.


DEVELOPERS TIP: You can specify 'identity' observations as input to perfect_model_obs. Identity observations are the model values AT the exact gridcell location; there is no interpolation at all, just a straight table lookup. This can be useful as you develop your model interfaces; you can test many of the routines and scripts without having a working model_interpolate().


More information about creating observation sequence files for OSSE's is available in the observation sequence discussion section.


2. Generating the true state and harvesting the observation values - perfect_model_obs


perfect_model_obs reads the blueprint and an initial state and applies the appropriate forward observation operator for each and every observation in the current 'assimilation window'. If necessary, the model is advanced until the next set of observations is desired. When it has run out of observations or reached the stop time defined by the namelist control, the program stops and writes out a restart file, a diagnostic file, the observation sequence file, and a log file. This is fundamentally a single deterministic forecast for 'as long as it takes' to harvest all the observations.


The default filenames, formats, and contents:

perfect_restart  (ASCII or binary)
    The DART model state at the end of the forecast. If the forecast needs to be lengthened, use this as the input. The format of the file is controlled by input.nml &assim_model_nml:write_binary_restart_files. The first record is the valid time of the model; the rest is the model state at that time.

True_State.nc  (netCDF)
    The DART model state at every assimilation timestep. This file has but one 'copy' - the truth. Dump the copy metadata and the time:
    ncdump -v time,CopyMetaData True_State.nc

obs_seq.out  (ASCII or binary, DART-specific linked list)
    This file has the observations - the result of the forward observation operator. This observation sequence file has two 'copies' of the observation: the noisy 'copy' and the noise-free 'copy'. The noisy copy is designated as the 'observation'; the noise-free copy is the truth. The observation-space diagnostic program obs_diag has special options for using the true copy instead of the observation copy. See obs_diag.html for details.

dart_log.out  (ASCII)
    The run-time output of perfect_model_obs.

Each model may define the assimilation window differently, but conceptually all the observations within plus or minus half the assimilation window are considered to be simultaneous, and a single model state provides the basis for all of those observations. For example, if the blueprint requires temperature observations every 30 seconds, the initial model time is noon (12:00), and the assimilation window is 1 hour, then all the observations from 11:30 to 12:30 will use the same state as input for the forward observation operator. The fact that you have a blueprint for observations every 30 seconds means a lot of those observations may have the same value (if they are in the same location).
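
A few lines of Python (not DART code) express the windowing rule; how an observation exactly on a window edge is handled is implementation-dependent, so do not read the boundary behavior of this sketch as DART's.

window = 3600                  # assimilation window, in seconds
t0 = 12 * 3600                 # an analysis time at 12:00, in seconds

def analysis_time(obs_time, t0=t0, win=window):
    # snap an observation time to the nearest analysis time
    n = round((obs_time - t0) / win)
    return t0 + n * win

for t in (11 * 3600 + 45 * 60, 12 * 3600 + 20 * 60, 12 * 3600 + 40 * 60):
    print(t / 3600.0, "->", analysis_time(t) / 3600.0)    # 11.75, 12.33 -> 12.0; 12.67 -> 13.0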


perfect_model_obs uses the input.nml for its control. A subset of the namelists and variables of particular interest for perfect_model_obs are summarized here. Each namelist is fully described by the corresponding module document.


&perfect_model_obs_nml  <--- link to the full namelist description!
   ...
   start_from_restart    = .true.            usually, but not always
   output_restart        = .true.            sure, why not
   init_time_days        = -1                negative means use the time in ...
   init_time_seconds     = -1                the 'restart_in_file_name' file
   first_obs_days        = -1                negative means start at the first time in ...
   first_obs_seconds     = -1                the 'obs_seq_in_file_name' file.
   last_obs_days         = -1                negative means to stop with the last ...
   last_obs_seconds      = -1                observation in the file.
   restart_in_file_name  = "perfect_ics"
   restart_out_file_name = "perfect_restart"
   obs_seq_in_file_name  = "obs_seq.in"
   obs_seq_out_file_name = "obs_seq.out"
   output_interval       = 1
   async                 = 0                 totally depends on the model
   adv_ens_command       = "./advance_ens.csh"       depends on the model
  /

&obs_sequence_nml
   write_binary_obs_sequence = .false.       .false. will create ASCII - easy to check.
  /

&obs_kind_nml
   ...
   assimilate_these_obs_types = 'RADIOSONDE_TEMPERATURE',
   ...                                       list all the synthetic observation
   ...                                       types you want
  /

&assim_model_nml
   ...
   write_binary_restart_files = .true.       your choice
  /

&model_nml
   ...
   time_step_days = 0,                       some models call this 'assimilation_period_days'
   time_step_seconds = 3600                  some models call this 'assimilation_period_seconds'
                                             use whatever value you want
  /

&utilities_nml
   ...
   termlevel   = 1                           your choice
   logfilename = 'dart_log.out'              your choice
  /

Executing perfect_model_obs


Since perfect_model_obs generally requires advancing the model, and the model may use MPI or require special ancillary files or forcing files or ..., it is not possible to provide a single example that will cover all possibilities. The subroutine-callable models (i.e. the low-order models) can run perfect_model_obs very simply:


./perfect_model_obs


3. Performing the assimilation experiment - filter


This step is done with the program filter, which also uses input.nml for input and run-time control. A successful assimilation will depend on many things: an appropriate initial ensemble, monitoring and perhaps correcting the ensemble spread, localization, etc. It is simply not possible to design a one-size-fits-all system that will work for all cases. It is critically important to analyze the results of the assimilation and explore ways of making the assimilation more effective. The DART tutorial and the DART_LAB exercises are an invaluable resource for learning and understanding how to determine the effectiveness of, and improve upon, an assimilation experiment. The concepts learned with the low-order models are directly applicable to the most complicated models.

It is important to remember that if filter 'terminates normally', it does not necessarily mean the assimilation was effective!

filter produces two state-space output diagnostic files (Prior_Diag.nc and Posterior_Diag.nc) which contain values of the ensemble mean, ensemble spread, perhaps the inflation values, and (optionally) ensemble members for the duration of the experiment. filter also creates an observation sequence file that contains the input observation information as well as the prior and posterior ensemble mean estimates of that observation, the prior and posterior ensemble spread for that observation, and (optionally) the actual prior and posterior ensemble estimates of that observation. Rather than replicate the observation metadata for each of these, a single copy of the metadata is shared by all these 'copies' of the observation. See An overview of the observation sequence for more detail. filter also produces a run-time log file that can greatly aid in determining what went wrong if the program terminates abnormally.

A very short description of some of the most important namelist variables is presented here. Basically, I am only discussing the settings necessary to get filter to run. I can guarantee these settings WILL NOT generate the BEST assimilation. Again, see the module documentation for a full description of each namelist.


&filter_nml  <--- link to the full namelist description!
   async                    = 0
   adv_ens_command          = "./advance_model.csh"
   ens_size                 = 40                 something ≥ 20, please
   start_from_restart       = .false.            .false. requires reading available input files
   output_restart           = .true.
   obs_sequence_in_name     = "obs_seq.out"      whatever you called the output from perfect_model_obs
   obs_sequence_out_name    = "obs_seq.final"
   restart_in_file_name     = "filter_ics"       the file (or base file name) of your ensemble
   restart_out_file_name    = "filter_restart"
   init_time_days           = -1                 the time in the restart file is correct
   init_time_seconds        = -1
   first_obs_days           = -1                 same interpretation as with perfect_model_obs
   first_obs_seconds        = -1
   last_obs_days            = -1                 same interpretation as with perfect_model_obs
   last_obs_seconds         = -1
   num_output_state_members = 10                 # of FULL DART model states to put in state-space output files
   num_output_obs_members   = 40                 # of ensemble member 'copies' of observation to save
   output_interval          = 1
   num_groups               = 1
   input_qc_threshold       =  4.0
   outlier_threshold        =  3.0               Observation rejection criterion!
   output_forward_op_errors = .false.
   output_timestamps        = .false.
   output_inflation         = .true.

   inf_flavor               = 0,                       0                  0 is 'do not inflate'
   inf_start_from_restart   = .false.,                 .false.
   inf_output_restart       = .false.,                 .false.
   inf_deterministic        = .true.,                  .true.
   inf_in_file_name         = 'not_initialized',       'not_initialized'
   inf_out_file_name        = 'not_initialized',       'not_initialized'
   inf_diag_file_name       = 'not_initialized',       'not_initialized'
   inf_initial              = 1.0,                     1.0
   inf_sd_initial           = 0.6,                     0.0
   inf_damping              = 0.9,                     0.0
   inf_lower_bound          = 1.0,                     1.0
   inf_upper_bound          = 1000000.0,               1000000.0
   inf_sd_lower_bound       = 0.6,                     0.0
  /

&ensemble_manager_nml
   single_restart_file_in  = .false.       .false. means each ensemble member is in a separate file
   single_restart_file_out = .false.
   perturbation_amplitude  = 0.2           not used if 'single_restart_file_in' is .false.
  /

&assim_tools_nml
   filter_kind             = 1             1 is EAKF, 2 is EnKF ...
   cutoff                  = 0.2           this is your localization - units depend on type of 'location_mod'
  /

&obs_kind_nml
   assimilate_these_obs_types = 'RAW_STATE_VARIABLE'    Again, use a list ...
  /

&model_nml
   assimilation_period_days    = 0                      the assimilation interval is up to you
   assimilation_period_seconds = 3600
  /


When num_output_state_members is nonzero, that many ensemble members are written to the state-space diagnostic files at every time for which there are observations, so Prior_Diag.nc and Posterior_Diag.nc contain those ensemble members (plus the ensemble mean and spread) at each assimilation time. Once the namelist is set, execute filter to advance the ensemble through the assimilation experiment; the final ensemble state is written to filter_restart. Copy the perfect_model_obs restart file perfect_restart (the 'true state') to perfect_ics, and the filter restart file filter_restart to filter_ics, so that future assimilation experiments can be initialized from these spun-up states.


mpirun ./filter        -OR-

mpirun.lsf ./filter    -OR-

./filter               -OR-

however YOU run filter on your system!



4. ASSESS THE PERFORMANCE!


All the concepts of spread, RMSE, and rank histograms that were taught in the DART tutorial and in DART_LAB should be applied now. Try the techniques described in the Did my experiment work? section. The 'big three' state-space diagnostics are repeated here because they are so important. The first two require True_State.nc.


plot_bins.m plots the rank histograms for a set of state variables. This requires you to have all or most of the ensemble members available in the Prior_Diag.nc or Posterior_Diag.nc files.
plot_total_err.m plots the evolution of the error (un-normalized) and ensemble spread of all state variables.
plot_ens_mean_time_series.m plots the evolution of a set of state variables - just the ensemble mean (and Truth, if available). plot_ens_time_series.m is actually a better choice if you can afford to write all/most of the ensemble members to the Prior_Diag.nc and Posterior_Diag.nc files.

DON'T FORGET ABOUT THE OBSERVATION-SPACE DIAGNOSTICS!