DART Classic Documentation

### Useful Software

The following free open-source tools have proven to be very useful:

1. ncview: a great visual browser for netCDF files.
2. Panoply: another visual browser for netCDF, HDF, and GRIB files with many options for map projections and data slicing.
3. the netCDF Operators (NCO): tools to perform operations on netCDF files like concatenating, differencing, averaging, etc.
4. An MPI environment: to run larger jobs in parallel. DART can be used without MPI, especially for the low-order models where memory use is small. The larger models often require MPI so that filter can run as a parallel job, for both speed and memory reasons. Common options are OpenMPI or MPICH. See the DART MPI introduction in mpi_intro.html.
5. Observation Processing And Wind Synthesis (OPAWS): OPAWS can process NCAR Dorade (sweep) and NCAR EOL Foray (netcdf) radar data. It analyzes (grids) data in either two-dimensions (on the conical surface of each sweep) or three-dimensions (Cartesian). Analyses are output in netcdf, Vis5d, and/or DART (Data Assimilation Research Testbed) formats.

The following licensed (commercial) tools have proven to be very useful:

1. Matlab®: an interactive programming language for computation and visualization. We supply our diagnostic and plotting routines as Matlab® scripts.

Free alternatives to Matlab® (which we unfortunately do not have the resources to support, but for which we would happily accept user contributions) include:

1. Octave
2. SciPy plus matplotlib
3. The R programming language, which has similar functionality but a syntax different enough that the Matlab® diagnostic and plotting routines we supply are unlikely to be easy to port.
4. The NCAR Command Language (NCL), which also does computation and plotting, but again with a syntax different enough that the routines supplied with DART would need major rewriting to work.


## DART platforms/compilers/batch systems

We work to keep the DART code highly portable. We avoid compiler-specific constructs, require no system-specific functions, and try as much as possible to be easy to build on new platforms.

DART has been compiled and run on Apple laptops and workstations, on Linux clusters small and large, and on SGI Altix, IBM Power, Intel, and Cray systems.

DART has been compiled with compilers from Intel, PGI, Cray, GNU, IBM, and Pathscale.

MPI versions of DART have run under batch systems including LSF, PBS, Moab/Torque, and Sun Grid Engine.


## Requirements to install and run DART

DART is intended to be highly portable among Unix/Linux operating systems. At this point we have no plans to port DART to Windows machines.

Minimally, you will need:

1. a Fortran90 compiler,
2. the netCDF libraries built with the F90 interface,
3. perl (just about any version),
4. an environment that understands csh or tcsh, and
5. the old unix standby ... make
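A quick way to see whether the minimal requirements are met is to probe your PATH. This is just a sketch; gfortran stands in for whichever Fortran90 compiler you actually use:

```shell
#!/bin/sh
# Report which of DART's minimal prerequisites are on this machine's PATH.
# "gfortran" is just one possible Fortran90 compiler; substitute your own.
for tool in gfortran perl csh make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done
```

The netCDF library is a library rather than an executable, so it is checked separately (see the netCDF requirements section).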

History has shown that it is a very good idea to make sure your run-time environment has the following:

limit stacksize unlimited
limit datasize unlimited
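The commands above are csh/tcsh syntax. If your login shell is bash or ksh, the equivalent is sketched below; note that raising a limit to 'unlimited' can be refused if the hard limit on your system is finite:

```shell
#!/bin/sh
# bash/ksh/sh equivalents of the csh "limit" commands above.
ulimit -s unlimited 2>/dev/null || echo "could not raise stack size limit"
ulimit -d unlimited 2>/dev/null || echo "could not raise data size limit"
ulimit -s    # show the stack size limit now in effect
```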

Additionally, the following have proven to be nice (but not required):

1. ncview: a great visual browser for netCDF files.
2. the netCDF Operators (NCO): tools to perform operations on netCDF files like concatenating, slicing, and dicing
3. Some sort of MPI environment. Put another way, DART does not come with MPICH, LAM-MPI, or OpenMPI; but we use them all the time. You should become familiar with the DART MPI introduction in mpi_intro.html.
4. The DART diagnostic scripts are written for Matlab®. Starting with the R2008b release, Matlab® has native netCDF support, but you will still need the third-party netCDF toolboxes mexnc and snctools.

### Requirements: a Fortran90 compiler

The DART software is written in standard Fortran 90, with no compiler-specific extensions. It has been compiled and run with several versions of each of the following: the GNU Fortran Compiler ("gfortran") (free), the Intel Fortran Compiler for Linux and OS X, the IBM XL Fortran Compiler, the Portland Group Fortran Compiler, the Lahey Fortran Compiler, and the Pathscale Fortran Compiler. Since recompiling the code is a necessity for experimenting with different models, there are no binaries to distribute.

### Requirements: the netCDF library

DART uses the netCDF self-describing data format for the results of assimilation experiments. These files have the extension .nc and can be read by a number of standard data analysis tools. DART makes use of the F90 interface to the library, which is available through the netcdf.mod and typesizes.mod modules. IMPORTANT: different compilers create these modules with different "case" filenames, and sometimes they are not both installed into the expected directory. Both modules must be present. The normal place is the netcdf/include directory, as opposed to the netcdf/lib directory.

If the netCDF library does not exist on your system, you must build it (as well as the F90 interface modules). The library and instructions for building the library or installing from an RPM may be found at the netCDF home page: http://www.unidata.ucar.edu/software/netcdf/

NOTE: The location of the netCDF library, libnetcdf.a, and the locations of both netcdf.mod and typesizes.mod will be needed later.
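A sketch for locating those files; /usr and /opt are only common guesses, and your installation may live elsewhere (for example under a module system):

```shell
#!/bin/sh
# Look for the netCDF pieces the DART build will need.
for f in libnetcdf.a netcdf.mod typesizes.mod; do
  echo "== $f =="
  find /usr /opt -name "$f" 2>/dev/null || true
done
```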

### Requirements: if you have your own model

If you want to run your own model, all you need is an executable and some scripts to interface with DART - we have templates and examples. If your model can be called as a subroutine, life is good, and the hardest part is usually a routine to parse the model state vector into one whopping array - and back. Again - we have templates, examples, and a document describing the required interfaces. That document exists in the DART code - DART/models/model_mod.html - as does all the most current documentation. Almost every DART program/module has a matching piece of documentation.

Starting with the Jamaica release, there is an option to compile with the MPI (Message Passing Interface) libraries in order to run the assimilation step in parallel on hardware with multiple CPUs. Note that this is optional; MPI is not required. If you do want to run in parallel, then we also require a working MPI library and appropriate cluster or SMP hardware. See the MPI intro for more information on running with the MPI option.

One of the beauties of ensemble data assimilation is that even if (particularly if) your model is single-threaded, you can still run efficiently on parallel machines by dealing out each ensemble member (a unique instance of the model) to a separate processor. If your model cannot run single-threaded, fear not, DART can do that too, and simply runs each ensemble member one after another using all the processors for each instance of the model.


The DART source code is now distributed through an anonymous Subversion server. The big advantage is the ability to patch or update existing code trees at your discretion. Subversion (the client-side app is 'svn') allows you to compare your code tree with one on a remote server and selectively update individual files or groups of files. Furthermore, now everyone has access to any version of any file in the project, which is a huge help for developers. I have a brief summary of the svn commands I use most posted at: http://www.image.ucar.edu/~thoar/svn_primer.html

The resources to develop and support DART come from our ability to demonstrate our growing user base. We ask that you register at our download site http://www.image.ucar.edu/DAReS/DART/DART_download and promise that the information will only be used to notify you of new DART releases and shown to our sponsors in an aggregated form: "Look - we have three users from Tonawanda, NY". After filling in the form, you will be directed to a website that has instructions on how to download the code.

If you follow the instructions on the download site, you should wind up with a directory named my_path_to/DART, which we call $DARTHOME. Compiling the code in this tree (as is usually the case) will necessitate much more space. If you cannot use svn, just let me know and I will create a tar file for you. svn is so superior to a tar file that a tar file should be considered a last resort.

## Installing DART: document conventions

All filenames look like this -- (typewriter font, green). Program names look like this -- (italicized font, green). User input looks like this -- (bold, magenta). Commands to be typed at the command line are contained in an indented gray box. The contents of a file are enclosed in a box with a border:

&hypothetical_nml
   obs_seq_in_file_name  = "obs_seq.in",
   obs_seq_out_file_name = "obs_seq.out",
   init_time_days        = 0,
   init_time_seconds     = 0,
   output_interval       = 1
&end

## Installing DART

The installation process is summarized in the sections that follow. The code tree is very "bushy"; there are many directories of support routines, but only a few directories are involved in customizing and installing the DART software. If you can compile and run ONE of the low-order models, you should be able to compile and run ANY of the low-order models. For this reason, we focus on the Lorenz 63 model; the only directories with files to be modified to check the installation are DART/mkmf and DART/models/lorenz_63/work.

We have tried to make the code as portable as possible, but we do not have access to all compilers on all platforms, so there are no guarantees. We are interested in your experience building the system, so please send us a note at dart @ ucar .edu

## Customizing the build scripts -- Overview

DART executable programs are constructed using two tools: make and mkmf.
The make utility is a very common piece of software that requires a user-defined input file recording the dependencies between source files. make then performs a hierarchy of actions when one or more of the source files is modified.

mkmf is a perl script that generates a make input file (named Makefile) and an example namelist, input.nml.program_default, containing the default values. The Makefile is designed specifically to work with object-oriented Fortran90 (and other languages) for systems like DART.

mkmf requires two separate input files. The first is a 'template' file which specifies details of the commands required for a specific Fortran90 compiler, and may also contain pointers to directories containing pre-compiled utilities required by the DART system. This template file will need to be modified to reflect your system. The second input file is a 'path_names' file which includes a complete list of the locations (either relative or absolute) of all Fortran90 source files required to produce a particular DART program. Each 'path_names' file must contain a path for exactly one Fortran90 file containing a main program, but may contain any number of additional paths pointing to files containing Fortran90 modules. An mkmf command is executed which uses the 'path_names' file and the mkmf template file to produce a Makefile, which is subsequently used by the standard make utility.

Shell scripts that execute the mkmf command for all standard DART executables are provided as part of the standard DART software. For more information on mkmf see the FMS mkmf description.

One of the benefits of using mkmf is that it also creates an example namelist file for each program. The example namelist is called input.nml.program_default, so as not to clash with any existing input.nml that may exist in that directory.
### Building and Customizing the 'mkmf.template' file

A series of templates for different compilers/architectures exists in the DART/mkmf/ directory, with extensions that identify the compiler, the architecture, or both. This is how you inform the build process of the specifics of your system. Our intent is that you copy one that is similar to your system into mkmf.template and customize it. You can also create a soft link:

rm mkmf.template
ln -s mkmf.template.xxxx.yyyy mkmf.template

For the discussion that follows, knowledge of the contents of one of these templates (e.g. mkmf.template.intel.linux) is needed. Note that only the LAST lines are shown here; the head of the file is just a big comment (worth reading, by the way).

...
MPIFC = mpif90
MPILD = mpif90
FC = ifort
LD = ifort
NETCDF = /usr/local
INCS = -I${NETCDF}/include
LIBS = -L${NETCDF}/lib -lnetcdf
FFLAGS = -O2 $(INCS)
LDFLAGS = $(FFLAGS) $(LIBS)
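As an illustration (not part of the template itself), with the values above the generated Makefile expands these variables into compile and link commands roughly like the following:

```shell
# Illustration only: how the template variables above compose.
# With NETCDF=/usr/local, a compile line becomes:
#   ifort -O2 -I/usr/local/include -c utilities_mod.f90
# and a link line becomes:
#   ifort -O2 -I/usr/local/include -L/usr/local/lib -lnetcdf -o filter ...
```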

Essentially, each of the lines defines some part of the resulting Makefile. Since make is particularly good at sorting out dependencies, the order of these lines really doesn't make any difference. The FC = ifort line ultimately defines the Fortran90 compiler to use, etc. The lines which are most likely to need site-specific changes start with FC, LD, and NETCDF.

If you have MPI installed on your system, MPIFC and MPILD dictate which compiler will be used in that instance. If you do not have MPI, these variables are of no consequence.

| variable | value |
| --- | --- |
| FC | the Fortran compiler |
| LD | the name of the loader; typically the same as the Fortran compiler |
| NETCDF | the location of your netCDF installation containing netcdf.mod and typesizes.mod. The value of NETCDF is used by the FFLAGS, LIBS, and LDFLAGS variables. |

### Customizing the 'path_names_*' file

Several path_names_* files are provided in the work directory for each specific model, in this case: DART/models/lorenz_63/work. Since each model comes with its own set of files, the path_names_* files need no customization.

## Building the Lorenz_63 DART project.

Currently, DART executables are constructed in a work subdirectory under the directory containing code for the given model. In the top-level DART directory, change to the L63 work directory and list the contents:

cd DART/models/lorenz_63/work
ls

With the result:

dart:~/<1>models/lorenz_63/work > ls
Posterior_Diag.nc                       mkmf_restart_file_tool                  path_names_obs_diag
Prior_Diag.nc                           mkmf_wakeup_filter                      path_names_obs_sequence_tool
True_State.nc                           obs_seq.final                           path_names_perfect_model_obs
filter_ics                              obs_seq.in                              path_names_preprocess
filter_restart                          obs_seq.out                             path_names_restart_file_tool
input.nml                               obs_seq.out.average                     path_names_wakeup_filter
mkmf_create_fixed_network_seq           obs_seq.out.x                           perfect_ics
mkmf_create_obs_sequence                obs_seq.out.xy                          perfect_restart
mkmf_filter                             obs_seq.out.xyz                         quickbuild.csh
mkmf_obs_diag                           obs_seq.out.z                           set_def.out
mkmf_obs_sequence_tool                  path_names_create_fixed_network_seq     workshop_setup.csh
mkmf_perfect_model_obs                  path_names_create_obs_sequence
mkmf_preprocess                         path_names_filter
0[537] dart:~/<1>models/lorenz_63/work >


There are nine mkmf_xxxxxx files for the programs

1. preprocess,
2. create_obs_sequence,
3. create_fixed_network_seq,
4. obs_sequence_tool,
5. perfect_model_obs,
6. filter,
7. wakeup_filter,
8. obs_diag (the simple version for 1D models), and
9. restart_file_tool

along with the corresponding path_names_xxxxxx files. There are also files that contain initial conditions, netCDF output, and several observation sequence files, all of which will be discussed later. You can examine the contents of one of the path_names_xxxxxx files, for instance path_names_filter, to see a list of the relative paths of all files that contain Fortran90 modules required for the program filter for the L63 model. All of these paths are relative to your DART directory. The first path is the main program (filter.f90) and is followed by all the Fortran90 modules used by this program (after preprocessing).

The mkmf_xxxxxx scripts are cryptic but should not need to be modified -- as long as you do not restructure the code tree (by moving directories, for example). The only function of the mkmf_xxxxxx script is to generate a Makefile and an input.nml.program_default file. It is not supposed to compile anything -- make does that:

csh mkmf_preprocess
make

The first command generates an appropriate Makefile and the input.nml.preprocess_default file. The second command results in the compilation of a series of Fortran90 modules which ultimately produces an executable file: preprocess. Should you need to make any changes to the DART/mkmf/mkmf.template, you will need to regenerate the Makefile.

The preprocess program actually builds source code to be used by all the remaining modules. It is imperative to actually run preprocess before building the remaining executables. This is how the same code can assimilate state vector 'observations' for the Lorenz_63 model and real radar reflectivities for WRF without needing to specify a set of radar operators for the Lorenz_63 model!

preprocess reads the &preprocess_nml namelist to determine what observations and operators to incorporate. For this exercise, we will use the values in input.nml. preprocess is designed to abort if the files it is supposed to build already exist. For this reason, it is necessary to remove a couple of files (if they exist) before you run the preprocessor. It is just a good habit to develop.

\rm -f ../../../obs_def/obs_def_mod.f90
\rm -f ../../../obs_kind/obs_kind_mod.f90
./preprocess
ls -l ../../../obs_def/obs_def_mod.f90
ls -l ../../../obs_kind/obs_kind_mod.f90

This created ../../../obs_def/obs_def_mod.f90 from ../../../obs_def/DEFAULT_obs_def_mod.F90 and several other modules. ../../../obs_kind/obs_kind_mod.f90 was created similarly. Now we can build the rest of the project.

A series of object files for each module compiled will also be left in the work directory, as some of these are undoubtedly needed by the build of the other DART components. You can proceed to create the other programs needed to work with L63 in DART as follows:

csh mkmf_create_obs_sequence
make
csh mkmf_create_fixed_network_seq
make
csh mkmf_perfect_model_obs
make
csh mkmf_filter
make
csh mkmf_obs_diag
make
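The mkmf/make pairs above can be scripted; this sketch loops over every mkmf_* file in the current work directory (DART's own quickbuild.csh does essentially this, more carefully):

```shell
#!/bin/sh
# Build every DART executable in this work directory: for each mkmf_*
# script, generate the Makefile and then run make.
for m in mkmf_*; do
  [ -f "$m" ] || continue     # no mkmf_ scripts here; nothing to do
  csh "$m" || exit 1          # generate Makefile and input.nml.*_default
  make     || exit 1          # compile the corresponding executable
done
```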

The result (hopefully) is that six executables now reside in your work directory. The most common problem is that the netCDF libraries and include files (particularly typesizes.mod) are not found. Find them, edit DART/mkmf/mkmf.template to point to their location, recreate the Makefile, and try again. The next most common problem is the gfortran compiler complaining about "undefined reference to `system_'", which is covered in the Platform-specific notes section.

| program | purpose |
| --- | --- |
| preprocess | creates custom source code for just the observations of interest |
| create_obs_sequence | specify a (set of) observation characteristics taken by a particular (set of) instruments |
| create_fixed_network_seq | specify the temporal attributes of the observation sets |
| perfect_model_obs | spinup, generate "true state" for synthetic observation experiments, ... |
| filter | perform experiments |
| obs_diag | creates observation-space diagnostic files to be explored by the Matlab® scripts |
| obs_sequence_tool | manipulates observation sequence files, e.g. combining sequences or converting between ASCII and binary. Not generally needed, particularly for low-order models; this specialty routine is not covered in this document. |
| wakeup_filter | needed only for MPI applications. We're starting at the beginning here, so we're going to ignore this one, too. |

### Checking the build.

The DART/tutorial documents are an excellent way to kick the tires on DART and learn about ensemble data assimilation. If you've been able to build the Lorenz 63 model, you have correctly configured your mkmf.template and you can run anything in the tutorial.


## Platform-specific notes.

Most of the platform-specific notes are in the appropriate mkmf.template.xxxx.yyyy file. There are very few situations that require making additional changes.

### gfortran

For some reason, the gfortran compiler does not require an interface to the system() routine, while all the other compilers we have tested do. This makes it impossible to have a module that is compiler-independent. The interface is needed in null_mpi_utilities_mod.f90 and/or mpi_utilities_mod.f90. The problem surfaces at link time:


null_mpi_utilities_mod.o(.text+0x160): In function `__mpi_utilities_mod__shell_execute':
: undefined reference to `system_'
null_mpi_utilities_mod.o(.text+0x7c8): In function `__mpi_utilities_mod__destroy_pipe':
: undefined reference to `system_'
null_mpi_utilities_mod.o(.text+0xbb9): In function `__mpi_utilities_mod__make_pipe':
: undefined reference to `system_'
collect2: ld returned 1 exit status
make: *** [preprocess] Error 1



There is a script to facilitate making the appropriate change to null_mpi_utilities_mod.f90 and mpi_utilities_mod.f90. Run the shell script DART/mpi_utilities/fixsystem with no arguments to simply 'flip' the state of these files (i.e. if the system block is defined, it will undefine the block by commenting it out; if the block is commented out, it will define it by uncommenting the block). If you want to hand-edit null_mpi_utilities_mod.f90 and mpi_utilities_mod.f90 - look for the comment block that starts ! BUILD TIP and follow the directions in the comment block.

### module mismatch errors

Compilers create modules in their own particular manner; a module built by one compiler usually cannot be used by another. Sometimes the Fortran90 modules for the netCDF interface compiled by compiler A end up being used with compiler B. This generally results in an error message like:


Fatal Error: File 'netcdf.mod' opened at (1) is not a <pick_your_compiler> module file
make: *** [utilities_mod.o] Error 1



The only solution here is to make sure the mkmf.template file is referencing the appropriate netCDF installation.

### endian-ness errors

The endianness of binary files is specific to the chipset, not the compiler or the code (normally). Some models require binary files of a specific endianness. Most compilers can read and/or write binary files of a specific (non-native) endianness given the right compile flags. It is generally an all-or-nothing approach: trying to micromanage which files are opened with native endianness and which with non-native endianness is too time-consuming and error-prone to be of much use. If the compile flags exist and are known to us, we try to include them in the comment section of the individual mkmf.template.xxxx.yyyy file.
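As hedged examples only (verify against your compiler's manual), flags for two common compilers look like this; they would be added to FFLAGS in your mkmf.template:

```shell
# Examples only -- consult your compiler documentation:
#   gfortran : FFLAGS = -O2 -fconvert=big-endian $(INCS)
#   ifort    : FFLAGS = -O2 -convert big_endian $(INCS)
```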

These errors most often manifest themselves as 'time' errors in the DART execution. The restart/initial conditions files have the valid time of the ensuing model state as the first bit of information in the header, and if these files are 'wrong'-endian, the encoded times are nonsensical.

### MPI

If you want to use MPI and are interested in testing something simple before total immersion: try running the MPI test routines in the DART/doc/mpi directory. This directory contains some small test programs which use both MPI and the netCDF libraries. It may be simpler to debug any build problems here, and if you need to submit a problem report to your system admin people these single executables are much simpler than the entire DART build tree.


## Was the (simple, nonMPI) Install successful?

In keeping with the 'start simple' philosophy - we will test the installation with the lorenz_63 model. This section is not intended to provide any details of why we are doing what we are doing - this is sort of a 'black-box' test.

In the DART/models/lorenz_63/work directory, there is a shell script named workshop_setup.csh. It will build all the executables for this model and run a simple perfect model experiment. The initial conditions files and observation sequences are in ASCII, so there is no portability issue, but there may be some roundoff error in the conversion from ASCII to machine binary. With such a highly nonlinear model, small differences in the initial conditions will result in a different model trajectory. Your results should start out looking VERY SIMILAR and may diverge with time.

The canned experiment starts from a single known state and advances the model to the times defined in obs_seq.in and applies a forward operator to calculate the model's estimate of the observation. Noise with specified characteristics is added to this value. Both the noise-free and noisy version of the observation are recorded in obs_seq.out. There are 200 time steps in this experiment - the temporal evolution of the model state (the model 'truth') is recorded in a netCDF file.

The second part of this experiment is the actual assimilation. The ensemble of initial states is contained in filter_ics. There are initial states for 80 ensemble members in this file (our canned example will only use the first 20). Each model state is advanced until the time of one of the observations in obs_seq.out and the observation is assimilated. Each ensemble member has its own estimate of what it thinks the observation should be - these estimates are recorded in obs_seq.final. As the ensemble members advance, all of the state information is recorded in a pair of netCDF files.

cd DART/models/lorenz_63/work
./workshop_setup.csh

Here is a list of the files that are created by workshop_setup.csh:

from executable "perfect_model_obs":

1. obs_seq.out: the observations at some predefined times and locations
2. True_State.nc: a netCDF file containing the model trajectory
3. perfect_restart: the final state of the model, in ASCII

from executable "filter":

1. Prior_Diag.nc: the model states right before assimilation
2. Posterior_Diag.nc: the model states immediately after assimilation
3. obs_seq.final: the model estimates of the observations (an integral part of the data assimilation process)
4. filter_restart: the ensemble of final model states

from both:

1. dart_log.out: the 'important' run-time output (this grows with each execution)
2. dart_log.nml: the input parameters used for an experiment
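A small sketch to confirm the run produced everything in the list above; run it from the work directory:

```shell
#!/bin/sh
# Check for the files workshop_setup.csh is expected to create.
for f in obs_seq.out True_State.nc perfect_restart \
         Prior_Diag.nc Posterior_Diag.nc obs_seq.final filter_restart \
         dart_log.out dart_log.nml; do
  if [ -e "$f" ]; then echo "ok:      $f"; else echo "MISSING: $f"; fi
done
```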

### If you have Matlab® with netCDF support

The simplest way to determine if the installation is successful is to run some of the functions we have available in DART/matlab/. Usually, we launch Matlab® from the DART/models/lorenz_63/work directory and use the Matlab® addpath command to make the DART/matlab/ functions available. In this case, we know the true state of the model that is consistent with the observations. The following Matlab® scripts compare the ensemble members with the truth and can calculate an error.

<unix_prompt> cd DART/models/lorenz_63/work
<unix_prompt> matlab
... (lots of startup messages I'm skipping)...
>> plot_total_err % no input arguments needed
... (some output I'm skipping) ...
>> plot_ens_time_series % again - no input arguments needed

(figures: plot_total_err and plot_ens_time_series)

From the plot_ens_time_series graphic, you can see the individual green ensemble members getting more constrained as time evolves. If your figures look similar to these, that's pretty much what you're looking for and you should feel pretty confident that everything is working.

### If you have ncview

It is possible to get a glimpse of the evolution of the experiment by using ncview, but it is not really made for comparing the contents of two netCDF files against one another (as is done in the Matlab® scripts). All you can really do is to check that the system is actually evolving as one would expect.

ncview Prior_Diag.nc

brings up what I call the navigation window. Click on 'state'; this brings up the data image. With the mouse, drift over the lower-left portion of the image and notice that the 'Current:' portion of the navigation window tracks your cursor. For what it's worth, in this view the bottom row is the ensemble mean, the next row up is the ensemble spread, and the third row is ensemble member 1. From left to right there are three values, one for each model variable; you can think of them as X-Y-Z.

1. Position the cursor on the lower-left portion until you get 'Current(:i=0,j=0) #### (x=1,y=1)' and click. This should spawn a timeseries in another window.
2. Repeat until you get 'Current(:i=1,j=0) #### (x=2,y=1)' and click; another timeseries is superimposed on the figure. If you got x=2, this is the second of the 3 state variables of the Lorenz '63 system.
3. Repeat until you get 'Current(:i=2,j=0) #### (x=3,y=1)' and click.
4. If you like, you can 'Close' that window and navigate to 'Current(:i=0,j=1) #### (x=1,y=2)' and click to check the spread of the ensemble.

### If you have ... neither of those ...

You are going to have to plot the values from the netCDF files on your own. If you are not familiar with the netCDF format, now's the time; it is a wonderfully self-describing format. What you need to know is that the True_State.nc file contains exactly 1 'copy': the true state of the model as it was used to generate the observations. There are many 'copies' in the Prior_Diag.nc and Posterior_Diag.nc files: all of the ensemble members, the ensemble mean, the ensemble spread, and a couple more that pertain to parameters associated with the assimilation. The netCDF files contain all the information needed to decode each 'copy'.

models/lorenz_63/work > ncdump -v CopyMetaData True_State.nc
netcdf True_State {
dimensions:
locationrank = 1 ;
copy = 1 ;
time = UNLIMITED ; // (200 currently)
NMLlinelen = 129 ;
NMLnlines = 187 ;
StateVariable = 3 ;
variables:
int copy(copy) ;
copy:long_name = "ensemble member or copy" ;
copy:units = "nondimensional" ;
copy:valid_range = 1, 1 ;
char inputnml(NMLnlines, NMLlinelen) ;
inputnml:long_name = "input.nml contents" ;
double time(time) ;
time:long_name = "time" ;
time:axis = "T" ;
time:cartesian_axis = "T" ;
time:calendar = "no calendar" ;
time:units = "days since 0000-00-00 00:00:00" ;
double loc1d(StateVariable) ;
loc1d:long_name = "location on unit circle" ;
loc1d:dimension = 1 ;
loc1d:units = "nondimensional" ;
loc1d:valid_range = 0., 1. ;
int StateVariable(StateVariable) ;
StateVariable:long_name = "State Variable ID" ;
StateVariable:units = "indexical" ;
StateVariable:valid_range = 1, 3 ;
double state(time, copy, StateVariable) ;
state:long_name = "model state or fcopy" ;

// global attributes:
:title = "true state from control" ;
:assim_model_source = "URL: /DART/trunk/assim_model/assim_model_mod.f90 $" ;
:assim_model_revision = "$Revision$" ;
:assim_model_revdate = "$Date$" ;
:creation_date = "YYYY MM DD HH MM SS = 2009 02 18 13 27 58" ;
:model_source = "URL: DART/trunk/model/lorenz_63/model_mod.f90$" ;
:model_revision = "$Revision$" ;
:model_revdate = "$Date$" ;
:model = "Lorenz_63" ;
:model_r = 28. ;
:model_b = 2.6666666666667 ;
:model_sigma = 10. ;
:model_deltat = 0.01 ;
data:

"true state" ;
}

The only real difference between Prior_Diag.nc (or Posterior_Diag.nc) and True_State.nc is that the former have more 'copies'. Take a look at the shape of the 'state' variable in the preceding ncdump: state(time, copy, StateVariable), i.e. 200 timesteps, 1 copy, 3 state variables. The other two netCDF files have more copies, so you need to know how to index them to retrieve the copy of interest. Simply dump the 'CopyMetaData' variable; here are the copies, listed in the order they appear in the netCDF file:

models/lorenz_63/work > ncdump -v CopyMetaData Posterior_Diag.nc
netcdf Posterior_Diag {
...
"ensemble mean        ",
"ensemble spread      ",
"ensemble member      1",
"ensemble member      2",
"ensemble member      3",
"ensemble member      4",
"ensemble member      5",
"ensemble member      6",
"ensemble member      7",
"ensemble member      8",
"ensemble member      9",
"ensemble member     10",
"ensemble member     11",
"ensemble member     12",
"ensemble member     13",
"ensemble member     14",
"ensemble member     15",
"ensemble member     16",
"ensemble member     17",
"ensemble member     18",
"ensemble member     19",
"ensemble member     20",
"inflation mean        ",
"inflation sd          " ;
}

The 22nd copy is ensemble member 20, for example. So - using pseudo-syntax that assumes you start counting with '1' - state(:,22,:) is a 200-by-3 matrix for ensemble member 20. Each row is a timestep, each column is a state variable (this is a 3-variable model). Want to know the time indices? There is a 'time' variable - complete with units.
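In a script you may want to look up the copy index programmatically rather than counting by eye. Here is a minimal sketch in pure Python; the metadata list is typed in to match a 20-member Posterior_Diag.nc (which also carries an 'ensemble spread' copy), but in practice you would read the CopyMetaData variable from the file, e.g. with the netCDF4 Python module:

```python
# The CopyMetaData strings map to 1-based indices along the 'copy' dimension.
copy_metadata = (
    ["ensemble mean", "ensemble spread"]
    + ["ensemble member %6d" % n for n in range(1, 21)]
    + ["inflation mean", "inflation sd"]
)

def copy_index(name, metadata=copy_metadata):
    """Return the 1-based 'copy' index whose metadata matches 'name'."""
    for i, entry in enumerate(metadata):
        # ncdump pads the strings with blanks, so normalize whitespace
        if " ".join(entry.split()) == " ".join(name.split()):
            return i + 1
    raise KeyError(name)

print(copy_index("ensemble member 20"))  # -> 22, so state(:,22,:) is member 20
```

With that index in hand, state(:,22,:) selects the 200-by-3 slice for ensemble member 20 as described above.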

[top]

The code distribution was getting 'cluttered' with datasets, boundary conditions, initial conditions, ... large files that were not necessarily interesting to everyone who downloaded the DART code. Worse, subversion makes a local hidden copy of the original repository contents, so the penalty for being large is doubled. It just made sense to make all the large files available on an 'as-needed' basis.

To keep the size of the DART distribution down we have a separate www-site to provide some observation sequences, initial conditions, and general datasets. It is our intent to populate this site with some 'verification' results, i.e. assimilations that were known to be 'good' and that should be fairly reproducible - appropriate to test the DART installation.

Please be patient as I make time to populate this directory. (yes, 'make', all my 'found' time is taken ...)
Observation sequences can be found at http://www.image.ucar.edu/pub/DART/Obs_sets.

Verification experiments will be posted to http://www.image.ucar.edu/pub/DART/VerificationData as soon as I can get to it. These experiments will consist of initial conditions files for testing different high-order models like CAM, WRF, POP ...
The low-order models are already distributed with verification data in their work directories.

Useful bits for CAM can be found at http://www.image.ucar.edu/pub/DART/CAM.

Useful bits for WRF can be found at http://www.image.ucar.edu/pub/DART/WRF.

Useful bits for MPAS_ocn can be found at http://www.image.ucar.edu/pub/DART/MPAS_OCN.

[top]

## Frequently Asked Questions for DART

#### What kind of data assimilation does DART do?

There are two main techniques for doing data assimilation: variational and ensemble methods. DART uses a variety of ensemble Kalman filter techniques.

#### What parts of the DART source should I feel free to alter?

We distribute the full source code for the system so you're free to edit anything you please. However, the system was designed so that you should be able to add code in a few specific places to add a new model, work with new observation types, or change the assimilation algorithm.

To add a new model you should be able to add a new DART/models/XXX/model_mod.f90 file to interface between your model and DART. We expect that you should not have to alter any code in your model to make it work with DART.

To add new observation types you should be able to add a new DART/obs_def/obs_def_XXX_mod.f90 file. If there is not already a converter for this observation type you can add a converter in DART/observations/XXX.

If you are doing data assimilation algorithm research you may be altering some of the core DART routines in the DART/assim_tools/assim_tools_mod.f90 or DART/filter/filter.f90 files. Please feel free to email DART support (dart at ucar.edu) for help with how to do these modifications so they work with the parallel version of DART correctly.

If you add support for a new observation type, a new model, or filter kind, we'd love for you to send a copy of it back to us for inclusion in the DART distribution.

#### What systems and compilers do you support? What other tools do I need?

We run on almost any Unix-like system including laptops, clusters, and supercomputers. This includes IBMs, Crays, SGIs, and Macs. We discourage trying to use Windows, but it has been done using the Cygwin package.

We require a Fortran 90 compiler. Common ones in use are from GNU (gfortran), Intel, PGI, PathScale, IBM, and g95.

We need a compatible NetCDF library, which means compiled with the same compiler you build DART with, and built with the Fortran interfaces.

You can run DART as a single program without any additional software. To run in parallel on a cluster or other multicore platform you will need a working MPI library and runtime system. If one doesn't come with your system already, OpenMPI is a good open-source option.

Our diagnostic routines are Matlab® scripts; Matlab® is a commercial math/visualization package. Some users use IDL, NCL, or R, but they have to adapt our scripts themselves.

### Installation Questions

#### How do I get started?

Go to the extensive DART web pages where there are detailed instructions on checking the source out of our subversion server, compiling, running the tutorials, and examples of other users' applications of DART.

If you really hate reading instructions you can try looking at the README in the top level directory. But if you run into problems please read the full setup instructions before contacting us for help. We will start out suggesting you read those web pages first anyway.

#### I'm trying to build with MPI and getting errors.

The MPI compiler commands are usually scripts or programs which add additional arguments and then call the standard Fortran compiler. If there is more than one type of compiler on a system you must find the version of MPI which was compiled to wrap around the compiler you are using.

In the DART/mpi_utilities/tests directory are some small programs which can be used to test compiling and running with MPI.

If you are using version 1.10.0 of OpenMPI and getting compiler errors about being unable to find a matching routine for calls to MPI_Get() and/or MPI_Reduce(), please update to version 1.10.1 or later. There were missing interfaces in the 1.10.0 release which are fixed in the 1.10.1 release.

#### I'm getting errors from subversion when I'm trying to check out a copy of DART.

If you are behind some kind of firewall, it may not allow the ports needed to talk to the subversion server. If you can, try from a machine outside the firewall, or talk to your system support people about how to access a subversion server. Sometimes there are machines which allow subversion access and which share filesystems with machines that do not. It's always better if you can go back later and update your copy of DART with subversion to keep it in sync with the server; making a tar file of the checked-out source and moving it to another machine won't let you do this.

#### I'm getting errors related to NetCDF when I try to build the executables.

Any application that uses the NetCDF data libraries must be compiled with exactly the same compiler as the libraries were built with. On systems which have either multiple compilers, or multiple versions of the same compiler, there is the possibility that the libraries don't match the compiler you're using to compile DART. Options here are:

• If there are multiple versions of the NetCDF libraries installed, find a method to select the right version; e.g. specify the exact path to the include files and libraries in your mkmf.template file, or load the right module if your system uses the 'module' command to select software options.
• Change the version of the compiler you are using to build DART to match the one used to build NetCDF.
• Build your own version of the NetCDF libraries with the compiler you prefer to use. See this web page for help in building the libraries. DART requires only the basic library with the NetCDF 3 interfaces, but will work with NetCDF 4 versions. Building NetCDF 4 does require additional libraries such as HDF, libz, etc.

If you believe you are using the right version of the compiler, then check to see if the Fortran interfaces have been compiled into a single library with the C code, or if there are two libraries, libnetcdf.a and libnetcdff.a (note the 2 f's in the second library). The library lines in your mkmf.template must reference either one or both libraries, depending on what exists. This is a choice that is made by the person who built the NetCDF libraries and cannot be predicted beforehand.
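As an illustration, the library lines in an mkmf.template typically look something like the following; the install path shown is only a placeholder for wherever netCDF lives on your system:

```make
# netCDF built as a single combined library:
LIBS = -L/usr/local/netcdf/lib -lnetcdf

# netCDF built with a separate Fortran library (note the 2 f's):
LIBS = -L/usr/local/netcdf/lib -lnetcdff -lnetcdf
```

Use whichever form matches the libraries that actually exist in your netCDF installation.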

#### I'm getting errors about undefined symbol "_system_" when I try to compile.

If you're running the Lanai release, or classic code from later than 2013, the DART Makefiles should automatically call a script in the DART/mpi_utilities directory named fixsystem. This script tries to alter the MPI source code in that directory to work with your compiler. If you still get a compiler error, look at this script and see if you have to add a case for the name of your compiler.

If you're running the Kodiak release or earlier, you have to run fixsystem yourself before compiling. We distributed the code so it would work without change for the gfortran compiler, but all other compilers require that you run fixsystem before trying to compile.

#### I have a netCDF library but I'm getting errors about unrecognized module format.

The netCDF libraries need to be built by the same version of the same compiler you are using to build DART. If your system has more than one compiler (e.g. Intel ifort and gfortran) or multiple versions of the same compiler (e.g. gfortran 4.1 and 4.5), you must use a version of the netCDF libraries built with the same version of the same compiler as you're using to build DART.

There are several important options when the netCDF libraries are built that change what libraries you get and whether you have what you need. The problems we run into most frequently are:

• The netCDF installation may include only the C library routines and not the Fortran interfaces; DART requires both.
• The C routines are always in -lnetcdf, but the Fortran interfaces can either be included in that single library or placed in a separate -lnetcdff library (note 2 f's).
• If HDF support is included, additional libraries are required to link an executable. Most of our mkmf template files have comments about the usual list of required libraries that you need to include.

Bottom line: What you need to set for the library list in your DART/mkmf/mkmf.template file depends on how your netCDF was built.

#### My model runs in single precision and I want to compile DART the same way.

We recommend that you run an assimilation with Fortran double precision reals (e.g. all real values are real*8 or 64 bits). However if your model is compiled in single precision (real*4 or 32 bits) then there is an option to build DART the same way. Edit DART/common/types_mod.f90 and change the definition of R8 to equal R4 (comment out the existing line and comment in the following line). Rebuild all DART executables and it will run with single precision reals. We declare every real variable inside DART with an explicit size, so we do not recommend using compiler flags to try to change the default real variable precision because it will not affect the DART code.
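For reference, the swap in types_mod.f90 looks something like the sketch below; the exact lines and comments vary by DART version, so treat this as an illustration rather than the literal file contents:

```fortran
! default: r8 selects an 8-byte (double precision) real kind
integer, parameter :: r8 = SELECTED_REAL_KIND(12)

! to build DART in single precision, comment the line above out
! and comment this one in, making r8 an alias for r4:
!integer, parameter :: r8 = r4
```

After this change, rebuild all the DART executables so every module uses the new kind consistently.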

#### I'm trying to run an MPI filter and I'm getting N copies of every message.

Look in the log or in the standard output for the message 'initialize_mpi_utilities: Running with N MPI processes.' If instead you see 'initialize_mpi_utilities: Running single process', then you have NOT successfully compiled with MPI; you are running N duplicate copies of a single-task program. Rerun the quickbuild.csh script with the -mpi flag to force it to build filter with mpif90 (or whatever the MPI compiler wrapper is called on your system).

#### How does DART interact with running my model?

If you are running one of the "low-order" models (e.g. one of the Lorenz models, the null model, the pe2lyr model, etc), the easiest way to run is to let DART control advancing the model when necessary. You run the "filter" executable and it runs both the assimilation and model advances until all observations in the input observation sequence file have been assimilated. See the "async" setting in the filter namelist documentation for more information.

If you are running a large model with a complicated configuration and/or run script, you will probably want to run the assimilation separately from the model advances. To do this, you will need to script the execution, and break up the observations into single timestep chunks per file. The scripting will need to create filter input files from the model files, link the current observation file to the input filename in the namelist, copy or rename any inflation files from the previous assimilation step, run filter, convert the filter output to model input files, and then run the model. There are example scripts which do this in the WRF shell_scripts directory, also the MPAS shell_scripts directory. These scripts are both highly model-dependent as well as computing system dependent.
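The order of operations in such a cycling script can be sketched as follows. This is a toy illustration in Python: the step descriptions are placeholders, not actual DART commands, and real scripts (see the WRF and MPAS shell_scripts directories) are both model- and site-specific:

```python
# Toy sketch of one externally-scripted assimilation cycle.
# Each entry stands in for a real command in a model-specific script.
def one_cycle_steps(obs_file, n_members):
    """Build the ordered list of actions for a single assimilation cycle."""
    steps = []
    for m in range(1, n_members + 1):
        steps.append("convert model file %d to filter input" % m)
    steps.append("link %s to the obs filename in input.nml" % obs_file)
    steps.append("copy inflation files from the previous cycle")
    steps.append("run filter")
    for m in range(1, n_members + 1):
        steps.append("convert filter output %d to model input" % m)
        steps.append("advance model member %d" % m)
    return steps

plan = one_cycle_steps("obs_seq.2009021800", n_members=3)
print(len(plan))  # 12 steps for a 3-member toy cycle
```

The key point is the ordering: all members are converted and the observation file is staged before filter runs once, and only afterwards are the members converted back and advanced.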

If you are running any of the CESM models (e.g. CAM, POP, CLM) then the scripts to set up a CESM case with assimilation are provided in the DART distribution. At run time, the run script provided by CESM is used. After the model advance a DART script is called to do the assimilation. The "multi-instance" capability of CESM is used to manage the multiple copies of the components which are needed for assimilation, and to run them all as part of a single job.

#### After assimilating, my model variables are out of range.

One of the assumptions of the Kalman filter is that the model states and the observation values have Gaussian distributions. The assimilation can work successfully even if this is not actually true, but there are certain cases where this leads to problems.

If any of the model state values must remain bounded, for example values which must remain positive, or must remain between 0 and 1, you may have to add some additional code to ensure the posterior values obey these constraints. It is not an indication of an error if after the assimilation some values are outside the required range.

Most users deal with this, successfully, by letting the assimilation update the values as it will, and then during the step where the model data is converted from DART format to the model native format, any out-of-range values are changed at that time. For example, the WRF model has a namelist item in the &model_nml namelist which can be set at run-time to list which variables have minimum and/or maximum values and the conversion code will enforce the given limits.

Generally this works successfully, but if the observations or the model are biased and the assimilation is continuously trying to move the model state out of range, the distribution can become seriously unbalanced. In this case another solution, which requires more coding, is to convert the values to a log scale on import to DART, do the assimilation with the log of the observation values, and then convert back to the original scale at export time. This ensures the values stay positive, which is a common requirement for legal values.
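The log-scale idea can be sketched in a few lines. This is a toy illustration, not DART code: convert to log space on import, apply the adjustment there, and exponentiate on export, which guarantees the result stays positive no matter how aggressive the increment is:

```python
import math

def to_dart(value):
    """Import a positive model value into assimilation space (log transform)."""
    return math.log(value)

def from_dart(logvalue):
    """Export back to model space; exp() guarantees a positive result."""
    return math.exp(logvalue)

# Toy update: even a large negative increment in log space
# maps back to a small but still positive model value.
x = 0.02           # e.g. a mixing ratio that must remain positive
increment = -5.0   # an aggressive adjustment from the assimilation
updated = from_dart(to_dart(x) + increment)
assert updated > 0.0
```

Note that assimilating in log space changes the implied error distribution, so this choice should be made deliberately, not just as a bounds fix.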

#### After assimilating, my job finished successfully but no values changed.

This is a common problem, especially when adding a new observation type or trying to assimilate with a new model. But it can happen at any time, and it can be hard to see why nothing is changing. See this web page for a list of common causes of the assimilation output state being the same as the input state, and how to determine which one is responsible.

#### You have lots of namelists. How can I tell what to set?

Each module in DART has an html web page which describes the namelists in detail. Start with DART/index.html and follow the links to all the other modules and namelists in the system. If you want help with setting up an experiment the DART/filter/filter.html page has some introductory advice for some of the more important namelist settings.

#### I'm not getting an error but I am getting MPI timeouts

If your job is getting killed for no discernible reason, usually while computing prior or posterior forward operators or while writing the diagnostics file, the problem may be caused by the MPI timeout limit. This usually happens only when the number of MPI tasks is much larger than the number of ensemble members, and there are very slow forward operator computations or very large states to write into the diagnostics files. In the standard DART distribution only the first N tasks (where N is the number of ensemble members) do work during the forward operators, and only 1 task writes the diagnostic files. All the other tasks wait at an MPI barrier. If they wait there long enough they reach the timeout threshold, at which point they assume that one or more of the other tasks have failed and exit.

The solutions are either to set an environment variable that lengthens the timeout threshold, to run with fewer MPI tasks, or to ask the DART team about becoming a beta user of a newer version of DART which does not have such large time differentials between different MPI tasks.

#### filter is finishing but my job is hanging at exit

If filter finishes running, including the final timestamp message to the log file, but then the MPI job does not exit (the next line in the job script is not reached), and you have set the MPI timeout to be large to avoid the job being killed by MPI timeouts, then you have run into a bug we also have seen. We believe this to be an MPI library bug which only happens under a specific set of circumstances. We can reproduce it but cannot find a solution. The apparent bug happens more frequently with larger processor counts (usually larger than about 4000 MPI tasks), so if you run into this situation try running with a smaller MPI task count if possible, and not setting the MPI debug flags. We have seen this happen on the NCAR supercomputer Yellowstone with both the MPICH2 and PEMPI MPI libraries.

### The WRF Weather Model and DART

#### How does DART interact with running WRF?

Most users with large WRF domains run a single cycle of filter to do assimilation, and then advance each ensemble member of WRF from a script, possibly submitting them in a batch to the job queues.

For smaller WRF runs, if WRF can be compiled without MPI (the 'serial' configuration) then filter can cycle inside the same program, advancing multiple ensemble members in parallel. See the WRF documentation pages for more details.

#### I have completed running filter and I have the filter_restart.#### files. Can you refer me to the utility to convert them back to a set of wrfinput_d01 files?

If you are using the advance_model.csh script that is distributed with DART, it will take care of converting the filter output files back to the WRF input files for the next model advance.

If you are setting up a free run or doing something different than what the basic script supports, read on to see what must be done.

When you finish running DART it will have created a set of sssss.#### restart files, where the sssss part of the filename comes from the setting of &filter_nml :: restart_out_file_name (and is frequently filter_restart). The .#### is a 4 digit number appended by filter based on the ensemble number. These files contain the WRF state vector data that was used in the assimilation, which is usually a subset of all the fields in a wrfinput_d01 file.
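The naming scheme described above is simple enough to sketch; for example, in Python (assuming the common restart_out_file_name setting of 'filter_restart'):

```python
# Restart file names are the configured base name plus a 4-digit
# ensemble member number appended by filter.
def restart_name(base, member):
    """Return e.g. 'filter_restart.0007' for base='filter_restart', member=7."""
    return "%s.%04d" % (base, member)

print(restart_name("filter_restart", 7))  # filter_restart.0007
```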

dart_to_wrf is the standard utility to insert the DART state information into a WRF input file, e.g. wrfinput_d01. For multiple WRF domains, a single run of the converter program will update the _d02, _d03, ..., files at the same time as the _d01 file.

In the input.nml file, set the following:

&dart_to_wrf_nml
dart_restart_name  = 'filter_restart.####',
/


where '####' is the ensemble member number. There is no option to alter the input/output WRF filename. Run dart_to_wrf. Remember to preserve each wrfinput_d01 file or you will simply keep overwriting the information in the same output file. Repeat for each ensemble member and you will be ready to run WRF to make ensemble forecasts.

If filter is advancing the WRF model, and you want to spawn forecasts from intermediate assimilation steps:
Use the assim_model_state_ic.#### files instead of the filter_restart.#### files, and set the model_advance_file namelist item to .true.

### The CESM Climate Model and DART

#### How does DART interact with CESM?

The CESM climate model comes with its own configuration, build, run, and archive scripts already. The DART distribution supplies a 'setup' script that calls the CESM scripts to build a new case, and then add a section to the CESM run script so the DART code will be run after each CESM model advance. The DART setup scripts are needed only when building a new case. At run-time the CESM run scripts are used to start the job. The CESM "multi-instance" capability is used to run multiple ensemble members as part of a single job.

#### I want to assimilate with one of the CESM models. Where do I start?

We use the CESM framework to execute the CESM model components, and then call the DART assimilation via an addition to the standard CESM run script. We provide a set of setup scripts in our DART/models/XXX/shell_scripts directories, where XXX is currently one of: cam, POP, clm, or CESM. Start with the shell script, set the options you want there, and then run the script. It calls the standard CESM 'build_case' scripts, and stages the files that will be needed for assimilation. See comments in the appropriate setup script for more details of how to proceed.

#### I'm getting a mysterious run-time error from CESM about box rearranging.

Certain versions of CESM (including CESM1_5_alpha02d) won't run with 3 instances (ensemble members); running with 4 instances works fine. We are unsure what other instance counts fail. The error message is about box rearranging, from box_rearrange.F90. This is a problem in CESM and should be reported via their Bugzilla process.

#### I'm getting 'update_reg_list' errors trying to assimilate with POP.

If you are trying to assimilate with POP and you get this error:

ERROR FROM:
routine: update_reg_list
message: max_reg_list_num ( 80) is too small ... increase

The most likely cause is that the POP-DART model interface code is trying to read the POP grid information and the default file is in the wrong kind of binary for this system (big-endian and not little-endian). At this point the easiest solution is to rebuild the DART executables with a flag to swap the bytes as binary files are read. For the Intel compiler, see the comments at the top of the mkmf file about adding '-convert big_endian' to the FFLAGS line.

#### I'm getting asked to confirm removing files when CESM is built.

If you have the rm (remove) command aliased to require you to confirm removing files, the CESM build process will stop and wait for you to confirm removing the files. You should reply yes when prompted.

If you have questions about the DART setup scripts and how they interact with CESM it is a good idea to set up a standalone CESM case without any DART scripts or commands to be sure you have a good CESM environment before trying to add DART. The DART setup script uses CESM scripts and commands and cannot change how those scripts behave in your environment.

#### I'm getting module errors when CESM is built.

DART only uses the plain NetCDF libraries for I/O. CESM can be configured to use several versions of NetCDF including PIO, parallel netCDF, and plain netCDF. Be sure you have the correct modules loaded before you build CESM. If there are questions, try setting up a CESM case without DART and resolve any build errors or warnings there before using the DART scripts.