
Friday, March 5, 2010

Example 7.25: compare draws with distribution

In example 7.24, we demonstrated a Metropolis-Hastings algorithm for generating observations from awkward distributions. In such settings it is desirable to assess the quality of draws by comparing them with the target distribution.

Recall that the density function is


f(y) = c e^(-y^4)(1+|y|)^3


The constant c was not needed to generate draws, but is required to calculate the probability density function. We can find the normalizing constant using symbolic mathematics software (try running it yourself using Wolfram Alpha). This yields .25 + .75*Pi^(.5) + Gamma(5/4) + Gamma(7/4) for the integral over the positive real line, which when doubled gives the value 6.809610784; the normalizing constant is thus c = 1/6.809610784.
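As a sketch of a numerical sanity check (base R only; f.un is a hypothetical name for the unnormalized density), the same value can be recovered with integrate():


```r
# Unnormalized density from the example
f.un = function(y) exp(-y^4) * (1 + abs(y))^3

# Numerical integral over the whole real line
integrate(f.un, -Inf, Inf)$value          # approximately 6.809611

# Closed form from the symbolic result, doubled to cover both half-lines
2 * (.25 + .75 * pi^.5 + gamma(5/4) + gamma(7/4))
```


Both expressions should agree to the precision reported by integrate().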



SAS

To compare the distribution of the variates with the truth, we first generate a density estimate using proc kde (sections 2.6.4, 5.1.16), saving the density estimates in a new data set.


ods select none;
proc kde data=mh;
univar x / out=mhepdf;
run;
ods exclude none;


Then we generate the true values, using the constant calculated above.


data mh2;
set mhepdf;
true = 6.809610784**(-1) *
exp(-value**4) * (1 + abs(value))**3;
run;


Finally, we can plot the estimated and true values together. We use the legend statement (section 5.2.14) to approximate the R legend created below.


legend1 position=(bottom center inside)
across=1 noframe label=none value=(h=1.5);
axis1 label=(angle=90 "Density");
symbol1 i=j l=1 w=3 c=black;
symbol2 i=j l=2 w=3 c=black;
proc gplot data = mh2;
plot (density true)*value / overlay vaxis = axis1 legend=legend1;
label value="x" density="Simulated" true="True";
run;


R

In R, we begin by defining a function to calculate f(y):


pdfeval = function(x) {
return(1/6.809610784*exp(-x^4)*(1+abs(x))^3)
}


The curve() function in R is useful for plotting values of functions over a range. We use it here to plot f(y). Then we use the density() function (section 5.1.16) to estimate the density function from the variates. Rather than save this information in an object, we use it as input to the lines function (section 5.2.1).


curve(pdfeval, from=-2, to=2, lwd=2, lty=2, type="l",
ylab="probability", xlab="Y")
lines(density(res), lwd=2, lty=1)
legend(-1, .1, legend=c("True", "Simulated"),
lty=2:1, lwd=2, cex=1.8, bty="n")


Results from both systems are plotted below.




We note that in each plot, the sampled variates appear to miss the true density near 0. This is an artifact of the default smoothing used by the density estimators, and could be tuned away.
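In R, for instance, the bandwidth of density() can be narrowed through its adjust argument (a sketch only: adjust=0.5 is an arbitrary illustrative value, and a placeholder normal sample stands in for the res vector generated in Example 7.24):


```r
pdfeval = function(x) 1/6.809610784 * exp(-x^4) * (1 + abs(x))^3

# placeholder draws; in the example these come from the Metropolis-Hastings sampler
res = rnorm(5000)

curve(pdfeval, from=-2, to=2, lwd=2, lty=2, ylab="probability", xlab="Y")
lines(density(res, adjust=0.5), lwd=2, lty=1)  # adjust < 1 shrinks the default bandwidth
```


Smaller values of adjust track sharp features more closely, at the cost of a noisier estimate.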

Monday, March 1, 2010

Example 7.24: Sampling from a pathological distribution

Evans and Rosenthal consider ways to sample from a distribution with density given by:


f(y) = c e^(-y^4)(1+|y|)^3


where c is a normalizing constant and y is defined on the whole real line.

Use of the probability integral transform (section 1.10.8) is not feasible in this setting, given the complexity of inverting the cumulative distribution function.

The Metropolis--Hastings algorithm is a Markov Chain Monte Carlo (MCMC) method for obtaining samples from a probability distribution. The intuition behind this algorithm is that it chooses proposal probabilities so that after the process has converged we are generating draws from the desired distribution. A further discussion can be found on page 610 of Section 11.3 of the Evans and Rosenthal text, or on page 25 of Gelman et al.

We find the acceptance probability a(x, y) in terms of two densities: f(y) and a proposal density q(x,y), in our example normal with mean x and unit variance, so that


a(x,y) = min(1, [f(y)*q(y,x)]/[f(x)*q(x,y)])
= min(1, [e^(-y^4+x^4)(1+|y|)^3]/[(1+|x|)^3])


where the algebra simplifies because the normalizing constant c cancels in the ratio, as does the proposal density: the normal proposal is symmetric, so q(y,x) = q(x,y).
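The cancellation can be sketched numerically in R (f.un and ratio are hypothetical helper names, and the evaluation points are arbitrary):


```r
# Unnormalized target density and the simplified acceptance ratio
f.un = function(y) exp(-y^4) * (1 + abs(y))^3
ratio = function(x, y) exp(-y^4 + x^4) * (1 + abs(y))^3 / (1 + abs(x))^3

x = 0.3; y = -1.2
all.equal(ratio(x, y), f.un(y) / f.un(x))              # c cancels
all.equal(dnorm(y, mean=x), dnorm(x, mean=y))          # symmetric proposal: q(y,x) = q(x,y)
```


Both comparisons return TRUE, confirming that only the ratio of unnormalized densities remains.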

Begin by picking an arbitrary value for X_1. The Metropolis--Hastings algorithm then proceeds by computing the value X_{n+1} as follows:


1. Generate y from a Normal(X_n, 1).
2. Compute a(x, y) as above.
3. With probability a(x, y), let X_{n+1} = y
(i.e., use proposal value).
Otherwise, with probability 1-a(x, y), let X_{n+1} = X_n
(i.e., keep previous value).


The code allows for a burn-in period, which we set at 50,000 iterations, and a desired sample from the target distribution, which we make 5,000. To reduce auto-correlation, we take only every twentieth variate. In a later entry, we'll compare the resulting variates with the true distribution.

SAS

The SAS code is fairly straightforward.


data mh;
burnin = 50000; numvals = 5000; thin = 20;
x = normal(0);
do i = 1 to (burnin + (numvals * thin));
y = normal(0) + x;
switchprob = min(1, exp(-y**4 + x**4) *
(1 + abs(y))**3 * (1 + abs(x))**(-3));
if uniform(0) lt switchprob then x = y;
* if we don't change x, the previous value is kept--
no code needed;
if (i gt burnin) and mod(i-burnin, thin) = 0 then output;
* only start saving if we're past the burn-in period,
then thin out;
end;
run;



R

In R we first define a function to compute a(x,y):


alphafun = function(x, y) {
return(exp(-y^4+x^4)*(1+abs(y))^3*
(1+abs(x))^-3)
}


Then we proceed as in the SAS example.


numvals = 5000; burnin = 50000; thin = 20
res = numeric(numvals)
i = 1
xn = rnorm(1) # arbitrary value to start
for (i in 1:(burnin + numvals * thin)) {
propy = rnorm(1, xn, 1)
alpha = min(1, alphafun(xn, propy))
xn = sample(c(propy, xn), 1, prob=c(alpha,1-alpha))
if ((i > burnin) & ((i-burnin) %% thin == 0))
res[(i-burnin)/thin] = xn
}


The resulting draws from the distribution are available in the res vector.
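As a sketch of a quick diagnostic, the thinned chain should show a mean near 0 (the target density is symmetric) and little residual autocorrelation. The self-contained version below re-runs a shortened sampler (smaller burn-in and sample size than in the text, for speed) and replaces the sample() accept step with an equivalent uniform draw:


```r
alphafun = function(x, y) exp(-y^4 + x^4) * (1 + abs(y))^3 * (1 + abs(x))^-3

numvals = 1000; burnin = 2000; thin = 20   # reduced from the text for speed
res = numeric(numvals)
xn = rnorm(1)                              # arbitrary starting value
for (i in 1:(burnin + numvals * thin)) {
  propy = rnorm(1, xn, 1)
  alpha = min(1, alphafun(xn, propy))
  if (runif(1) < alpha) xn = propy         # accept with probability alpha
  if ((i > burnin) && ((i - burnin) %% thin == 0))
    res[(i - burnin) / thin] = xn
}

mean(res)                     # near 0 by symmetry of f
acf(res, plot=FALSE)$acf[2]   # lag-1 autocorrelation, small after thinning
```


Large departures from 0 in either quantity would suggest more burn-in or heavier thinning is needed.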