## Monday, March 1, 2010

### Example 7.24: Sampling from a pathological distribution

Evans and Rosenthal consider ways to sample from a distribution with density given by:

`f(y) = c e^(-y^4)(1+|y|)^3`

where c is a normalizing constant and y is defined on the whole real line.
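As a sanity check (not part of the original post), the normalizing constant c can be approximated by numerically integrating the unnormalized density over the real line; the `e^(-y^4)` factor makes the tails negligible beyond |y| = 6. A minimal sketch in Python:

```python
import math

def f_unnorm(y):
    """Unnormalized density: exp(-y^4) * (1 + |y|)^3."""
    return math.exp(-y**4) * (1 + abs(y))**3

# Midpoint-rule integral over [-6, 6]; the exp(-y^4) tail
# contributes essentially nothing beyond that range.
n, lo, hi = 200000, -6.0, 6.0
h = (hi - lo) / n
integral = sum(f_unnorm(lo + (k + 0.5) * h) for k in range(n)) * h
c = 1.0 / integral
print(c)   # roughly 0.147
```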

Use of the probability integral transform (section 1.10.8) is not feasible in this setting, given the difficulty of inverting the cumulative distribution function.

The Metropolis--Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining samples from a probability distribution. The intuition behind the algorithm is that it chooses acceptance probabilities so that, once the chain has converged, it generates draws from the desired distribution. Further discussion can be found on page 610 of Section 11.3 of the Evans and Rosenthal text, or on page 25 of Gelman et al.

We write the acceptance probability a(x, y) in terms of two densities: the target density f(y) and a proposal density q(x, y) (in our example, normal with mean x and unit variance), so that

`a(x,y) = min(1, [c*f(y)*q(y,x)] / [c*f(x)*q(x,y)]) = min(1, [e^(-y^4+x^4)(1+|y|)^3] / [(1+|x|)^3])`
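The omitted algebra can be checked numerically (this check is not part of the original post): because the Normal(x, 1) proposal is symmetric, q(y, x) = q(x, y) cancels, the normalizing constant c cancels, and the full ratio reduces to the simplified form. A Python sketch:

```python
import math
import random

def f_unnorm(y):
    """Unnormalized target density: exp(-y^4) * (1 + |y|)^3."""
    return math.exp(-y**4) * (1 + abs(y))**3

def q(x, y):
    """Normal(x, 1) proposal density evaluated at y."""
    return math.exp(-0.5 * (y - x)**2) / math.sqrt(2 * math.pi)

def a_full(x, y):
    # acceptance probability from the general formula
    return min(1, (f_unnorm(y) * q(y, x)) / (f_unnorm(x) * q(x, y)))

def a_simplified(x, y):
    # acceptance probability after the algebra in the text
    return min(1, math.exp(-y**4 + x**4) * (1 + abs(y))**3 / (1 + abs(x))**3)

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-2, 2), random.uniform(-2, 2)
    assert abs(a_full(x, y) - a_simplified(x, y)) < 1e-9
```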

where we omit some algebra.

Begin by picking an arbitrary value for X_1. The Metropolis--Hastings algorithm then proceeds by computing the value X_{n+1} as follows:

1. Generate y from a Normal(X_n, 1) distribution.
2. Compute a(x, y) as above.
3. With probability a(x, y), let X_{n+1} = y (i.e., use the proposal value); otherwise, with probability 1 - a(x, y), let X_{n+1} = X_n (i.e., keep the previous value).

The code allows for a burn-in period, which we set at 50,000 iterations, and a desired sample size from the target distribution, which we set at 5,000. To reduce autocorrelation, we keep only every twentieth variate. In a later entry, we'll compare the resulting variates with the true distribution.
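The steps above can be sketched in Python as well (the post itself uses SAS and R; this version, with smaller iteration counts to keep the run quick, is for illustration only):

```python
import math
import random

def alpha(x, y):
    """Acceptance probability a(x, y) after the algebra above."""
    return min(1, math.exp(-y**4 + x**4) * (1 + abs(y))**3 / (1 + abs(x))**3)

def mh_sample(numvals, burnin, thin, seed=42):
    random.seed(seed)
    res = []
    x = random.gauss(0, 1)            # arbitrary starting value
    for i in range(1, burnin + numvals * thin + 1):
        y = random.gauss(x, 1)        # Normal(x, 1) proposal
        if random.random() < alpha(x, y):
            x = y                     # accept the proposal
        # otherwise keep the previous value (no code needed)
        if i > burnin and (i - burnin) % thin == 0:
            res.append(x)             # past burn-in: save every thin-th value
    return res

# smaller counts than the post's 50,000 / 5,000 / 20, to run quickly
draws = mh_sample(numvals=500, burnin=2000, thin=5)
print(len(draws))
```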

SAS

The SAS code is fairly straightforward.

```
data mh;
  burnin = 50000;
  numvals = 5000;
  thin = 20;
  x = normal(0);
  do i = 1 to (burnin + (numvals * thin));
    y = normal(0) + x;
    switchprob = min(1, exp(-y**4 + x**4) *
      (1 + abs(y))**3 * (1 + abs(x))**(-3));
    if uniform(0) lt switchprob then x = y;
      * if we don't change x, the previous value is kept--no code needed;
    if (i gt burnin) and mod(i-burnin, thin) = 0 then output;
      * only start saving once we're past the burn-in period, then thin out;
  end;
run;
```

R

In R we first define a function to compute a(x,y):

```
alphafun = function(x, y) {
  return(exp(-y^4 + x^4) * (1 + abs(y))^3 *
    (1 + abs(x))^-3)
}
```

Then we proceed as in the SAS example.

```
numvals = 5000; burnin = 50000; thin = 20
res = numeric(numvals)
xn = rnorm(1)   # arbitrary value to start
for (i in 1:(burnin + numvals * thin)) {
  propy = rnorm(1, xn, 1)
  alpha = min(1, alphafun(xn, propy))
  xn = sample(c(propy, xn), 1, prob=c(alpha, 1-alpha))
  if ((i > burnin) & ((i-burnin) %% thin == 0))
    res[(i-burnin)/thin] = xn
}
```

The resulting draws from the distribution are available in the `res` vector.