
So, let's say I have the following 2-dimensional target distribution that I would like to sample from (a mixture of bivariate normal distributions) -

import numba
import numpy as np
import scipy.stats as stats
import seaborn as sns
import pandas as pd
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
%matplotlib inline

def targ_dist(x):
    # equal-weight mixture of three bivariate normal densities
    target = (stats.multivariate_normal.pdf(x, [0, 0], [[1, 0], [0, 1]])
              + stats.multivariate_normal.pdf(x, [-6, -6], [[1, 0.9], [0.9, 1]])
              + stats.multivariate_normal.pdf(x, [4, 4], [[1, -0.9], [-0.9, 1]])) / 3
    return target

and the following proposal distribution (a bivariate random walk) -

def T(x, y, sigma):
    # proposal density: isotropic Gaussian random walk centred at x
    return stats.multivariate_normal.pdf(y, x, [[sigma**2, 0], [0, sigma**2]])

The following is the Metropolis-Hastings code for updating the "entire" state in every iteration -

#Initialising

n_iter = 30000

# tuning parameter, i.e. standard deviation of the proposal distribution
sigma = 2

# initial state
X = stats.uniform.rvs(loc=-5, scale=10, size=2, random_state=None)

# count number of acceptances
accept = 0

# store the samples
MHsamples = np.zeros((n_iter,2))

# MH sampler
for t in range(n_iter):

    # proposals
    Y = X+stats.norm.rvs(0,sigma,2)

    # accept or reject
    u = stats.uniform.rvs(loc=0, scale=1, size=1)

    # acceptance probability (the T terms cancel here because the random-walk
    # proposal is symmetric, but they are kept for generality)
    r = (targ_dist(Y)*T(Y,X,sigma))/(targ_dist(X)*T(X,Y,sigma))
    if u < r:
        X = Y
        accept += 1
    MHsamples[t] = X

However, I would like to update "per component" (i.e. component-wise updating) in every iteration. Is there a simple way of doing this?

Thank you for your help!

1 Answer


[I did not read your code. I'm answering the statistical question.]

Sure, you can do a Metropolis update on each component, drawing from its full conditional distribution; this is just the extreme version of Metropolis-within-Gibbs. It's like Gibbs sampling because every conditional you sample from is conditioned on all the other components, and it's an extreme form of Metropolis-within-Gibbs in the sense that every one of those full-conditional draws is obtained via a Metropolis-Hastings step.
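For the setup in the question, the component-wise version only requires proposing a change to one coordinate at a time. The following is a minimal sketch rather than a definitive implementation: it reuses targ_dist from the question, perturbs one component per step with a symmetric Gaussian random walk (so the proposal densities cancel in the acceptance ratio), and the names CWsamples and accept are illustrative.

n_iter = 30000
sigma = 2  # proposal standard deviation for each component
X = stats.uniform.rvs(loc=-5, scale=10, size=2)
CWsamples = np.zeros((n_iter, 2))
accept = np.zeros(2)  # per-component acceptance counts

for t in range(n_iter):
    for i in range(2):  # sweep over the components in a fixed order
        Y = X.copy()
        Y[i] += stats.norm.rvs(0, sigma)  # perturb component i only
        # symmetric proposal, so the Hastings ratio reduces to the target ratio
        if stats.uniform.rvs() < targ_dist(Y) / targ_dist(X):
            X = Y
            accept[i] += 1
    CWsamples[t] = X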

[You can also do numerous other variations beyond sampling full conditionals; you just have to establish the usual properties of the Markov chain you need. This is usually reasonably straightforward to argue, but in some situations it's possible to screw things up (the properties you require won't necessarily hold), so you can't just assume something works.]

You should, however, consider the sampling scheme: it's better to have a reversible scheme than just to cycle through $\theta_1,\theta_2,...,\theta_k$ in a fixed order (for example, you can randomize the order, or you can go forward and then backward within each iteration; both are examples of reversible schemes, sketched below).
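To make those two suggestions concrete, here is a sketch assuming the single-component update from the code above has been factored into a helper; mh_update is an illustrative name, and n_iter, sigma, X and CWsamples are as before.

def mh_update(X, i, sigma):
    # one Metropolis step on component i with a symmetric Gaussian proposal
    Y = X.copy()
    Y[i] += stats.norm.rvs(0, sigma)
    if stats.uniform.rvs() < targ_dist(Y) / targ_dist(X):
        return Y
    return X

for t in range(n_iter):
    # random-scan: visit the components in a fresh random order each sweep
    for i in np.random.permutation(2):
        X = mh_update(X, i, sigma)
    CWsamples[t] = X

# alternative: a forward-then-backward (palindromic) sweep within each iteration
# for i in [0, 1, 1, 0]:
#     X = mh_update(X, i, sigma)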

It's not necessarily ideal to work this way, but there's nothing stopping you doing it in general.

Glen_b
  • Maybe this is well-known, but would you have a reference for the penultimate paragraph? – hejseb Oct 27 '17 at 20:08
  • Unless I misunderstand something in the question here, component-wise updates means you're using a different proposal for each component, in which case you don't necessarily satisfy detailed balance unless the scheme is explicitly constructed to be reversible. See Juho's answer [here](https://stats.stackexchange.com/a/118453/805) for explicit discussion of the issue with component-wise updating. This is usually thought of as a Gibbs-sampling issue (so mentions of the issue often occur in discussions of it), but it would be a potential problem with component-wise updating in MH more generally. ...ctd – Glen_b Oct 27 '17 at 23:05
  • ctd... Google on keywords like reversibility and detailed balance to find more information, but I think Juho's answer there is an excellent starting place (I hadn't seen it before just now; I came across it while looking for a good reference to offer you). Xi'an's comment underneath it is a good short summary (the individual moves are reversible, but in general the composition is not). For additional discussion, see Wikipedia [here](https://en.wikipedia.org/wiki/Markov_chain#Closest_reversible_Markov_chain), for example – Glen_b Oct 27 '17 at 23:05
  • Here's a journal article type reference if that's of any help: Richard A. Levine (2005) A note on Markov chain Monte Carlo sweep strategies, Journal of Statistical Computation and Simulation, 75:4, 253-262, DOI: 10.1080/0094965042000223671 ... – Glen_b Oct 28 '17 at 05:07