## Markov Chain Monte Carlo & Simulated Annealing

Markov chain Monte Carlo (MCMC) is a method for exploring the calibration of a model. Simplistically, MCMC performs a random walk on the likelihood surface specified by the payoff function. The equilibrium distribution of the random walk reflects the probability density of the payoff. This makes it possible to assess the posterior probability of the calibration directly, given the data and priors (if used), and to calculate confidence bounds without assuming a well-behaved (e.g., ellipsoidal) likelihood surface.
For this to make sense, the payoff should be a likelihood or log-likelihood, up to a constant factor. The likelihood surface need not integrate to 1; the algorithm takes care of that scaling. You can still use MCMC to explore other surfaces, but the interpretation may not be sensible.
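As a concrete illustration of a payoff that is a log-likelihood up to a constant, here is a minimal sketch (not part of Vensim) of a Gaussian log-likelihood in which the constant term has been dropped:

```python
import math

def gaussian_log_likelihood(observed, simulated, sd):
    """Log-likelihood of observations under independent Gaussian errors.

    The constant term (-n/2 * log(2*pi)) is dropped: MCMC works with
    likelihood ratios, so the payoff need only equal the log-likelihood
    up to an additive constant.
    """
    return sum(-math.log(sd) - (o - s) ** 2 / (2 * sd ** 2)
               for o, s in zip(observed, simulated))

# Example: three observations, simulated values, error SD of 0.5
print(gaussian_log_likelihood([1.0, 2.0, 3.0], [1.1, 1.9, 3.2], 0.5))  # ~1.959
```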
Simulated Annealing (SA) is a method for exploring payoff surfaces with multiple optima.
The MCMC algorithm is closely related to Simulated Annealing (SA), in that both explore a surface stochastically, using the Metropolis acceptance criterion for proposed points. We provide parameters that permit use of the MCMC routine for SA, though this is not the primary focus of the feature.
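The shared Metropolis acceptance criterion, and the temperature knob that distinguishes SA from plain MCMC sampling, can be sketched as follows (an illustrative sketch, not Vensim's implementation):

```python
import math
import random

def metropolis_accept(payoff_new, payoff_old, temperature=1.0):
    """Metropolis criterion: always accept an improvement; accept a
    worse point with probability exp((new - old) / T).

    T = 1 gives ordinary MCMC sampling of the log-likelihood surface;
    lowering T toward 0 turns the walk into simulated annealing,
    concentrating the chain on optima.
    """
    if payoff_new >= payoff_old:
        return True
    return random.random() < math.exp((payoff_new - payoff_old) / temperature)
```

At very low temperature almost no downhill moves are accepted, which is why the same routine can serve as an optimizer.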
The MCMC feature requires the following:
- A model
- A payoff file (.vpd)
- An optimization control file (.voc)
It produces:
- A tab-delimited file listing all accepted points and payoffs (after the optional burn-in period). After deleting the payoff column of this file, you may use it as a File method input to sensitivity runs, so that you can explore the response of variables in your model subject to the posterior probability from the calibration.
- A .dat file containing additional diagnostics, and a .tab file containing additional information on points explored.
- (Payoff sensitivity only) a tab-delimited file, named myrun_sensitive.tab, listing parameter confidence bounds.
The MCMC routine may be accessed in two ways: as a payoff sensitivity method, following a Powell optimization, and as a standalone estimation (MCMC) or optimization (SA) method. A number of options are available for controlling the algorithm, but in general the defaults are useful and you should be able to use it with little or no tuning. See Options.
Examples
See the examples in OptSensi.
Background
Technically, the algorithm is a differential evolution-Markov chain hybrid, which runs multiple chains simultaneously. Proposed points are generated by random recombination of the chains’ current states, with added noise. The algorithm is adapted from:
Vrugt, J.A., ter Braak, C.J.F., Diks, C.G.H., Robinson, B.A., Hyman, J.M., and Higdon, D. (2009). "Accelerating Markov Chain Monte Carlo Simulation by Differential Evolution with Self-Adaptive Randomized Subspace Sampling." International Journal of Nonlinear Sciences & Numerical Simulation, 10(3), 271-288.

ter Braak, C.J.F. (2006). "A Markov Chain Monte Carlo version of the genetic algorithm Differential Evolution: easy Bayesian computing for real parameter spaces." Statistics and Computing, 16:239-249. DOI 10.1007/s11222-006-8769-1
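The differential-evolution proposal step described in these references can be sketched as follows. This is a minimal illustration of the ter Braak (2006) proposal rule, not Vensim's actual implementation; the scale factor 2.38/sqrt(2d) is the standard DE-MC choice from that paper.

```python
import random

def de_mc_proposal(chains, i, gamma=None, noise_sd=1e-4):
    """DE-MC proposal for chain i: the current state plus a scaled
    difference of two other randomly chosen chains' states, plus
    small noise to keep the chain irreducible.

    gamma defaults to 2.38 / sqrt(2 * d), the standard DE-MC scale
    (ter Braak 2006); d is the parameter-space dimension.
    """
    d = len(chains[i])
    if gamma is None:
        gamma = 2.38 / (2 * d) ** 0.5
    a, b = random.sample([j for j in range(len(chains)) if j != i], 2)
    return [chains[i][k] + gamma * (chains[a][k] - chains[b][k])
            + random.gauss(0.0, noise_sd)
            for k in range(d)]
```

A proposed point generated this way would then be accepted or rejected with the Metropolis criterion, so recombining the chains' states adapts the proposal scale and orientation to the shape of the posterior.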