University of Texas at Austin
A Stochastic Newton Method for Statistical Inverse Problems Governed by PDEs
We present a Markov-chain Monte Carlo (MCMC) method for sampling high-dimensional, expensive-to-evaluate probability density functions that arise in the Bayesian solution of statistical inverse problems governed by PDEs. The method builds on previous work in Langevin dynamics, which uses gradient information to guide the sampling in useful directions, improving acceptance probabilities and convergence rates. We extend the Langevin idea to exploit local Hessian information, leading to what is effectively a stochastic version of Newton's method. A large-scale inexact-Newton-CG variant is developed, analogous to methods used in PDE-constrained optimization. We apply
the method to the Bayesian solution of an inverse problem governed by seismic wave propagation, for which we observe substantial speedups over a black-box MCMC method.
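The core proposal mechanism described above (a Newton step from the current point, with the inverse Hessian of the negative log-posterior as the proposal covariance, corrected by a Metropolis-Hastings accept/reject) can be sketched as follows. This is an illustrative, hedged sketch only, not the authors' large-scale inexact-Newton-CG implementation: the function names `stochastic_newton_step`, `grad`, `hess`, and `logpost` are hypothetical, dense linear algebra is used in place of matrix-free CG solves, and the Hessian is assumed symmetric positive definite.

```python
import numpy as np

def stochastic_newton_step(x, grad, hess, logpost, rng):
    """One stochastic Newton MCMC step (illustrative sketch).

    Proposal: y ~ N(x - H^{-1} g, H^{-1}), where g and H are the
    gradient and Hessian of the NEGATIVE log-posterior at x, and H
    is assumed symmetric positive definite.
    """
    def logq(z, mean, L):
        # log N(z; mean, (L L^T)^{-1}) up to an additive constant,
        # where L is the Cholesky factor of the precision matrix H.
        r = L.T @ (z - mean)
        return np.log(np.diag(L)).sum() - 0.5 * r @ r

    H = hess(x)
    g = grad(x)
    L = np.linalg.cholesky(H)                 # H = L L^T
    mean_x = x - np.linalg.solve(H, g)        # Newton step from x
    # Sample y ~ N(mean_x, H^{-1}): if w ~ N(0, I), then L^{-T} w
    # has covariance H^{-1}.
    y = mean_x + np.linalg.solve(L.T, rng.standard_normal(x.size))

    # Metropolis-Hastings correction for the asymmetric proposal.
    Hy = hess(y)
    Ly = np.linalg.cholesky(Hy)
    mean_y = y - np.linalg.solve(Hy, grad(y))
    log_alpha = (logpost(y) - logpost(x)
                 + logq(x, mean_y, Ly) - logq(y, mean_x, L))
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False
```

For a Gaussian posterior the local quadratic model is exact, so the proposal coincides with the posterior itself and every step is accepted; for the non-Gaussian posteriors arising from nonlinear PDE-constrained inverse problems, the accept/reject step corrects the local Gaussian approximation.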
This work is joint with Carsten Burstedde, James Martin, and Lucas Wilcox at UT-Austin.