Estimation as minimization
§Solve for x with an approximate, iterative method rather than an exact matrix inversion
§Start with guess x0, compute the gradient ∇J efficiently with an adjoint model, search for the minimum along -∇J, compute a new ∇J, and repeat (see the sketch after this list)
§Good for non-linear problems; use conjugate-gradient or BFGS approaches
§Low-rank approximation to the posterior covariance matrix built up as the iterations progress
§As with the Kalman filter, transport errors can be handled as dynamic noise
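
For reference, the quadratic cost function this iteration minimizes is the standard variational form; the notation (background state x_b with error covariance B, observations y with observation operator H and error covariance R) is the usual one and is not defined on these slides:

    J(x) = \frac{1}{2}(x - x_b)^T B^{-1} (x - x_b) + \frac{1}{2}(y - Hx)^T R^{-1} (y - Hx)

    \nabla J(x) = B^{-1}(x - x_b) - H^T R^{-1}(y - Hx)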
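A minimal sketch of the minimization loop, assuming a hypothetical toy problem: the sizes, the linear observation operator H, and the covariances are all illustrative stand-ins, not taken from these slides. L-BFGS is used because, as noted above, it builds up a low-rank inverse-Hessian (posterior covariance) approximation as it iterates.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n, m = 50, 20                                   # toy state and observation dimensions
    x_true = rng.standard_normal(n)
    x_b = x_true + 0.5 * rng.standard_normal(n)     # background (prior) guess
    H = rng.standard_normal((m, n))                 # toy linear observation operator
    y = H @ x_true + 0.1 * rng.standard_normal(m)   # noisy observations
    B_inv = np.eye(n) / 0.25                        # inverse background covariance
    R_inv = np.eye(m) / 0.01                        # inverse observation covariance

    def cost_and_grad(x):
        """Variational cost J(x) and its gradient."""
        dx = x - x_b
        dy = y - H @ x
        J = 0.5 * dx @ B_inv @ dx + 0.5 * dy @ R_inv @ dy
        # With this explicit linear H the gradient is exact; in a real system
        # the H^T term would be supplied by one backward pass of the adjoint model.
        grad = B_inv @ dx - H.T @ (R_inv @ dy)
        return J, grad

    # Start from the guess x0 = x_b and iterate: each step evaluates J and
    # gradient, searches along a descent direction, and updates the quasi-Newton
    # (low-rank inverse-Hessian) approximation.
    res = minimize(cost_and_grad, x_b, jac=True, method="L-BFGS-B")
    print("iterations:", res.nit, "  final cost:", res.fun)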
Problem: computing the gradient in forward mode is expensive, since it requires roughly one model run per element of the state vector, and it takes a lot of space to save this vector at many time steps. The beauty of the variational approach is that the gradient can be calculated with a single backward pass of the adjoint model, rather than many forward passes of the regular forward model.
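
A sketch of that cost comparison, using a toy linear model matrix M as a stand-in for the tangent-linear model, so that its transpose plays the role of the adjoint; the sizes and weight vector w (think R^{-1}(y - Hx)) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 1000, 5
    M = rng.standard_normal((m, n))   # stand-in for the (tangent-linear) forward model
    x = rng.standard_normal(n)
    w = rng.standard_normal(m)        # weights on the model outputs

    # Forward mode via finite differences: one forward run per state component,
    # i.e. n runs of the model to assemble the full gradient of f(x) = w^T M x.
    eps = 1e-6
    base = w @ (M @ x)
    grad_fd = np.empty(n)
    for i in range(n):
        x_pert = x.copy()
        x_pert[i] += eps
        grad_fd[i] = (w @ (M @ x_pert) - base) / eps

    # Adjoint mode: a single backward pass M.T @ w yields the same gradient for
    # the cost of roughly one model run, with no perturbed trajectories to store.
    grad_adj = M.T @ w

    print("max difference:", np.max(np.abs(grad_fd - grad_adj)))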