
Metropolis-Hastings with Averaged Acceptance Ratios
Markov chain Monte Carlo (MCMC) methods to sample from a probability distribution π defined on a space (Θ,𝒯) consist of the simulation of realisations of Markov chains {θ_n, n≥1} with invariant distribution π and such that the distribution of θ_i converges to π as i→∞. In practice one is typically interested in the computation of expectations of functions, say f, with respect to π, and it is also required that the averages M^{-1}∑_{n=1}^{M} f(θ_n) converge to the expectation of interest. The iterative nature of MCMC makes it difficult to develop generic methods to take advantage of parallel computing environments when interested in reducing time to convergence. While numerous approaches have been proposed to reduce the variance of ergodic averages, including averaging over independent realisations of {θ_n, n≥1} simulated on several computers, techniques to reduce the "burn-in" of MCMC are scarce. In this paper we explore a simple and generic approach to improve convergence to equilibrium of existing algorithms which rely on the Metropolis-Hastings (MH) update, the main building block of MCMC. The main idea is to use averages of the acceptance ratio with respect to multiple realisations of the random variables involved, while preserving π as the invariant distribution. The methodology requires limited change to existing code, is naturally suited to parallel computing, and is shown on our examples to provide substantial performance improvements both in terms of convergence to equilibrium and variance of ergodic averages. In some scenarios gains are observed even on a serial machine.
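For readers unfamiliar with the building block the paper generalises, the following is a minimal sketch of a standard random-walk Metropolis-Hastings update, the "main building block of MCMC" referred to in the abstract. It is not the paper's averaged-acceptance-ratio construction, which averages the ratio over multiple realisations of the auxiliary random variables; the function name, target, and tuning parameters here are illustrative assumptions.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step_size=1.0, rng=None):
    """Random-walk Metropolis-Hastings sampler (illustrative sketch).

    log_target: log-density of pi, known up to an additive constant.
    The paper's method replaces the single acceptance ratio below with
    an average over multiple realisations of the auxiliary randomness,
    while keeping pi invariant.
    """
    rng = rng or random.Random()
    x = x0
    samples = []
    for _ in range(n_steps):
        # Symmetric Gaussian proposal, so the proposal densities cancel
        # and the acceptance ratio reduces to pi(y) / pi(x).
        y = x + rng.gauss(0.0, step_size)
        log_ratio = log_target(y) - log_target(x)
        # Accept with probability min(1, ratio).
        if math.log(rng.random()) < log_ratio:
            x = y
        samples.append(x)
    return samples

# Usage: sample from a standard normal target, pi(x) ∝ exp(-x²/2).
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0,
                              n_steps=20000, step_size=2.0,
                              rng=random.Random(1))
```

The ergodic average `sum(samples) / len(samples)` then approximates the expectation of f(x) = x under π, illustrating the kind of estimator whose burn-in and variance the paper targets.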