DART Manhattan Release Notes


Jump to the DART Documentation Main Index
version information for this file:
$Id: documentation/html/Manhattan_release.html 10979 2017-02-01 20:07:34Z thoar@ucar.edu $

DART Overview / Notes for Current Users / Non-backwards Compatible Changes / New Features / Supported Models / Changed Models / New Forward Operators / New Observations / New Diagnostics and Documentation / New Utilities / Known Problems / Terms of Use

DART Overview

The Data Assimilation Research Testbed (DART) is designed to facilitate the combination of assimilation algorithms, models, and real (or synthetic) observations to allow increased understanding of all three. The DART programs are highly portable, having been compiled with many Fortran 90 compilers and run on Linux compute servers, Linux clusters, OSX laptops/desktops, SGI Altix clusters, supercomputers running AIX, and more. Read the Customizations section for help in building on new platforms.

DART employs a modular programming approach to apply an Ensemble Kalman Filter which adjusts model values toward a state that is more consistent with information from a set of observations. Models may be swapped in and out, as can different algorithms in the Ensemble Kalman Filter. The method requires running multiple instances of a model to generate an ensemble of states. A forward operator appropriate for the type of observation being assimilated is applied to each of the states to generate the model's estimate of the observation. Comparing these estimates and their uncertainty to the observation and its uncertainty ultimately results in the adjustments to the model states. See the DART_LAB demos or read more in the DART tutorial.

DART diagnostic output can be written that contains the model state before and after the adjustment, along with the ensemble mean and standard deviation, and prior or posterior inflation values if inflation is enabled. There is also a text file, obs_seq.final, with the model estimates of the observations. There is a suite of MATLAB® functions that facilitate exploration of the results, but the netCDF files are inherently portable and contain all the necessary metadata to interpret the contents with other analysis programs such as NCL, R, etc.
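
For a quick look at what is in these files, the standard netCDF utilities are sufficient. In the sketch below the file name 'preassim_mean.nc' is only an illustration; the actual output names depend on your &filter_nml settings:

    # Show the dimensions, variables, and metadata of a DART diagnostic file
    # ('preassim_mean.nc' is a placeholder name for one of your own output files).
    ncdump -h preassim_mean.nc

    # obs_seq.final is a text file; page through it to see the model estimates
    # of each observation.
    less obs_seq.final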

To get started running DART with the Lorenz 63 model, refer to the Getting Started documentation.
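
As a rough sketch only (the exact script names may differ in your checkout; 'quickbuild.csh' and the work directory layout below are assumptions based on the usual DART conventions), a first Lorenz 63 experiment generally looks like:

    cd DART/models/lorenz_63/work
    ./quickbuild.csh        # build preprocess, perfect_model_obs, filter, and the other work programs
    ./perfect_model_obs     # generate synthetic observations from a 'truth' trajectory
    ./filter                # assimilate those observations with an ensemble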

[top]

Notes for Current Users

If you have been updating from the rma_trunk branch of the DART subversion repository you will notice that the code tree has been simplified to be more intuitive for users. The source is now organized under a small set of top-level directories such as assimilation_code, models, observations, and documentation; the individual moves are detailed in the list of non-backwards compatible changes below.

Note that if you try to do an 'svn update' on an existing directory, you will encounter many 'tree conflicts'.

We suggest that current users check out a fresh version of Manhattan in a new location. To see which files need to be moved, run 'svn status' on your original checked-out version. Anything with an M or ? in the first column needs to be moved to the new location in the new tree. Please contact DART if you have any issues migrating your existing code to the new tree structure.
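
A minimal sketch of that workflow, assuming subversion access to the DART repository (the repository URL and directory names below are placeholders, not DART defaults):

    # 1. Fresh checkout of the Manhattan release into a new location
    #    (replace REPOSITORY_URL with the DART subversion server address)
    svn checkout https://REPOSITORY_URL/DART/releases/Manhattan DART_Manhattan

    # 2. In the old working copy, list locally modified (M) and unversioned (?) files
    cd /path/to/old/DART
    svn status | grep '^[M?]'

    # 3. Copy those files by hand to the corresponding places in the new tree,
    #    then keep the new tree current with occasional updates:
    cd /path/to/DART_Manhattan
    svn update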

There is a list of non-backwards compatible changes (see below), and a list of new options and functions.

The Manhattan release will continue to be updated for the next few months as we continue to add features. Checking out the Manhattan release branch and running 'svn update' from time to time is the recommended way to update your DART tree.

[top]

Non-backwards Compatible Changes

Unlike previous releases of DART, this version contains more non-backwards compatible changes than usual. Please examine the following list carefully. We do suggest you check out the Manhattan release into a new location and migrate any local changes from previous versions as a second step.

Changes in the Manhattan release (15 May 2017) which are not backwards compatible with the Lanai release (13 Dec 2013):

  1. We no longer require model data to be converted to DART-format restart files; we now read and write netCDF files directly. To specify the input and output files for filter, there are new namelist items in the &filter_nml namelist: 'input_state_file_list' and 'output_state_file_list' (a short configuration sketch follows the end of this list).
  2. The information formerly in Prior_Diag.nc and Posterior_Diag.nc has been moved. If you are reading and writing ensemble members from different files, the state information, the ensemble mean and standard deviation, and the inflation mean and standard deviation will all be read from and written to separate files.
    If you are reading and writing ensemble members from a single file, all of this information will now be in a single netCDF file, stored in different variables inside that file.
    We also now have options for writing files at four stages of the assimilation cycle: 'input', 'preassim', 'postassim', 'output'. This is set in the &filter_nml namelist with the 'stages_to_write' item.
  3. Several new required routines have been added to the model_mod.f90 interface. Default versions of these are available to use if you have no special requirements.
  4. Several of the model_mod.f90 argument lists have changed.
  5. There are several namelist changes, mainly in the &filter_nml and &perfect_model_obs_nml namelists, which are outlined in detail in the Manhattan_diffs_from_Lanai document.
  6. All modules have been moved to the DART/assimilation_code/modules/ directory, and similarly all of the programs have moved to DART/assimilation_code/programs/.
  7. The location modules, which were stored in locations, have moved to the DART/assimilation_code/location directory.
  8. The observation converters, which were stored in observations, have moved to the DART/observations/obs_converters directory.
  9. The forward operators have moved from obs_def/obs_def_*_mod.f90 to observations/forward_operators.
  10. The tutorial files have moved to the DART/documentation/tutorial directory.
  11. The program fill_inflation_restart is OBSOLETE since DART inflation files are now in netCDF format. Inflation files can now be filled using ncap2. Here is an example using version 4.4.2 or later of the NCO tools:
      ncap2 -s "T=1.0;U=1.0;V=1.0" wrfinput_d01 prior_inf.nc
      ncap2 -s "T=0.6;U=0.6;V=0.6" wrfinput_d01 prior_sd.nc
    
  12. The default flags in the mkmf_template.XXX files have been updated to be more consistent with current compiler versions.
  13. If you enable the sampling error correction option, the required data is now read from a single netCDF file which supports multiple ensemble sizes. A program is provided to compute additional ensemble sizes if they are not in the default file.
  14. Our use of TYPES and KINDS has been very confusing in the past. In Manhattan we have tried to make it clearer which things in DART are generic quantities (QTY) - temperature, pressure, etc - and which things are specific types of observations - Radiosonde_temperature, Argo_salinity etc.

    Below is a mapping between old and new subroutine names, for reference. We have made these changes to all files distributed with DART. If you have a lot of code developed outside of the subversion repository, please contact DART for a sed script to help automate the changes (a partial example is sketched after the end of this list).

    Public subroutines, existing name on left, replacement on right:
        
        assimilate_this_obs_kind()     =>     assimilate_this_type_of_obs(type_index)
        evaluate_this_obs_kind()       =>       evaluate_this_type_of_obs(type_index)
        use_ext_prior_this_obs_kind()  =>  use_ext_prior_this_type_of_obs(type_index)
        
        get_num_obs_kinds()            =>  get_num_types_of_obs()
        get_num_raw_obs_kinds()        =>  get_num_quantities()
        
        get_obs_kind_index()           => get_index_for_type_of_obs(type_name)
        get_obs_kind_name()            => get_name_for_type_of_obs(type_index)
        
        get_raw_obs_kind_index()       =>  get_index_for_quantity(qty_name)
        get_raw_obs_kind_name()        =>  get_name_for_quantity(qty_index)
        
        get_obs_kind_var_type()        =>  get_quantity_for_type_of_obs(type_index)
        
        get_obs_kind()                 =>  get_obs_def_type_of_obs(obs_def)
        set_obs_def_kind()             =>  set_obs_def_type_of_obs(obs_def)
        
        get_kind_from_menu()           =>  get_type_of_obs_from_menu()
        
        read_obs_kind()                =>   read_type_of_obs_table(file_unit, file_format)
        write_obs_kind()               =>  write_type_of_obs_table(file_unit, file_format)
        
        maps obs_seq nums to specific type nums, only used in read_obs_seq:
        map_def_index()                => map_type_of_obs_table()
        
        removed this.  apparently unused, and simply calls get_obs_kind_name():
        get_obs_name()
        
        apparently unused anywhere, removed:
        add_wind_names()
        do_obs_form_pair()
    
    Public integer parameter constants and subroutine formal argument names, old on left, new on right:
    
       KIND_ => QTY_
       kind  => quantity
       
       TYPE_ => TYPE_
       type  => type_of_obs
       
       integer parameters:
       max_obs_generic  =>  max_defined_quantities  (not currently public, stays private)
       max_obs_kinds    =>  max_defined_types_of_obs 
    
  15. For smaller models we support single-file input and output. These files contain all of the member information, the mean, the standard deviation, and the inflation values for all of the state variables. This can be run with cycling, and all time steps will be appended to the file.

    For perfect_model_obs we provide a perfect_input.cdl file which contains a single ensemble member, which will be considered the 'truth', and observations will be generated based on those values. The output will contain all of the cycling timesteps for all of the state variables.

    For filter we provide a filter_input.cdl file which contains all of the state member variables and potentially inflation mean and standard deviation values. The output will contain all of the cycling timesteps for all of the state variables. Additionally, you have the option to write out different stages during the assimilation via the &filter_nml 'stages_to_write' item mentioned above.

    To generate a netCDF file from a .cdl file, run:

       ncgen -o perfect_input.nc perfect_input.cdl
       ncgen -o filter_input.nc filter_input.cdl
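
As a sketch of the new filter input/output configuration referred to in items 1, 2, and 15 above (every file name and value below is illustrative only, not a DART default):

    # Plain text lists naming the netCDF file for each ensemble member,
    # one name per line (the member file names are placeholders):
    printf "mem01.nc\nmem02.nc\nmem03.nc\n" > filter_input_list.txt
    printf "mem01.nc\nmem02.nc\nmem03.nc\n" > filter_output_list.txt

    # The matching &filter_nml entries in input.nml would then look
    # something like:
    #   input_state_file_list  = 'filter_input_list.txt'
    #   output_state_file_list = 'filter_output_list.txt'
    #   stages_to_write        = 'preassim', 'output'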
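
For code kept outside the repository, the rename table in item 14 can be applied mechanically. The fragment below is only a partial, hand-written illustration covering two of the renames (GNU sed syntax; 'my_obs_code.f90' is a placeholder file name), not the sed script mentioned above:

    # Apply two of the item-14 renames to a local source file (GNU sed in-place edit)
    sed -i -e 's/get_obs_kind_index/get_index_for_type_of_obs/g' \
           -e 's/get_raw_obs_kind_index/get_index_for_quantity/g' \
           my_obs_code.f90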
       

[top]

New Features

[top]

Supported Models

Currently we support the models listed below. Several new models have been added that were not in the Lanai release, including CM1, CICE, and ROMS. Any previously supported models not on this list are still supported in DART classic.

The DART/models/template directory contains sample files for adding a new model. See the Adding a Model section of the DART web pages for more help on adding a new model.
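
A common way to start is simply to copy the template into place and edit from there (a minimal sketch; 'my_model' is a placeholder name):

    # Use the template as the starting point for a new model interface
    cp -r DART/models/template DART/models/my_model
    # then edit DART/models/my_model/model_mod.f90 to implement the required routines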

[top]

Changed Models

[top]

New Observation Types/Forward Operators

[top]

New Observation Types/Sources

[top]

New Diagnostics and Documentation

Better Web Pages. We've put a lot of effort into expanding our documentation. For example, please check out the MATLAB diagnostics section or the pages outlining the observation sequence file contents.


But there's always more to add. Please let us know where we are lacking.

[top]

New Utilities

This section describes updates and changes to the tutorial materials, scripting, setup, and build information since the Lanai release.

[top]

Known Problems

[top]

Terms of Use

DART software - Copyright UCAR. This open source software is provided by UCAR, "as is", without charge, subject to all terms of use at http://www.image.ucar.edu/DAReS/DART/DART_download

Contact: DART core group
Revision: $Revision: 10979 $
Source: $URL: https://www.image.ucar.edu/DAReS/DART/Manhattan/documentation/html/Manhattan_release.html $
Change Date: $Date: 2017-02-01 13:07:34 -0700 (Wed, 01 Feb 2017) $
Change history:  try "svn log" or "svn diff"