
Tuesday, October 4, 2011

Example 9.8: New stuff in SAS 9.3-- Bayesian random effects models in Proc MCMC




Rounding off our reports on major new developments in SAS 9.3, today we'll talk about proc mcmc and the random statement.

Stand-alone packages for fitting very general Bayesian models using Markov chain Monte Carlo (MCMC) methods have been available for quite some time now. The best known of these are BUGS and its derivatives WinBUGS (last updated in 2007) and OpenBUGS. There are also some packages available that call these tools from R.

Today we'll consider a relatively simple model: Clustered Poisson data where cluster means are a constant plus a cluster-specific exponentially-distributed random effect. To be clear:
y_ij ~ Poisson(mu_i)
log(mu_i) = B_0 + r_i
r_i ~ Exponential(lambda)
Of course in Bayesian thinking all effects are random-- here we use the term in the sense of cluster-specific effects.

SAS
Several SAS procedures have a bayes statement that allows some specific models to be fit. For example, in section 6.6 and example 8.17, we show Bayesian Poisson and logistic regression, respectively, using proc genmod. But our example today is a little unusual, and we could not find a canned procedure for it. For these more general problems, SAS has proc mcmc, which in SAS 9.3 allows random effects to be modeled easily.

We begin by generating the data, and fitting the naive (unclustered) model. We set B_0 = 1 and lambda = 0.4. There are 200 clusters of 10 observations each, which we might imagine represent 10 students from each of 200 classrooms.

data test2;
  truebeta0 = 1;      /* fixed intercept B_0 */
  randscale = .4;     /* scale of the exponential random effects */
  call streaminit(1944);
  do i = 1 to 200;    /* 200 clusters */
    randint = rand("EXPONENTIAL") * randscale;
    do ni = 1 to 10;  /* 10 observations per cluster */
      mu = exp(truebeta0 + randint);
      y = rand("POISSON", mu);
      output;
    end;
  end;
run;

proc genmod data = test2;
model y = / dist=poisson;
run;

Standard Wald 95%
Parameter Estimate Error Confidence Limits

Intercept 1.4983 0.0106 1.4776 1.5190

Note the inelegant SAS syntax for fitting an intercept-only model. The result is pretty awful-- 50% bias with respect to the global mean. Perhaps we'll do better by acknowledging the clustering. We might try that with normally distributed random effects in proc glimmix.

proc glimmix data = test2 method=laplace;
class i;
model y = / dist = poisson solution;
random int / subject = i type = un;
run;

Cov Standard
Parm Subject Estimate Error
UN(1,1) i 0.1682 0.01841

Standard
Effect Estimate Error t Value Pr > |t|
Intercept 1.3805 0.03124 44.20 <.0001

No joy-- still a 40% bias in the estimated mean. And the variance of the random effects is biased by more than 50%! Let's try fitting the model that generated the data.

proc mcmc data=test2 nmc=10000 thin=10 seed=2011;
  parms fixedint 1 gscale 0.4;

  prior fixedint ~ normal(0, var=10000);
  prior gscale ~ igamma(.01, scale=.01);

  random rint ~ gamma(shape=1, scale=gscale) subject=i initial=0.0001;
  mu = exp(fixedint + rint);
  model y ~ poisson(mu);
run;

The key points of the proc mcmc statement are nmc, the total number of Monte Carlo iterations to perform, and thin, which includes only every nth sample for inference. The prior and model statements are fairly obvious; we note that in more complex models, parameters that are listed within a single prior statement are sampled as a block. We're placing priors on the fixed (shared) intercept and the scale of the exponential. The mu line is actually just a programming statement-- it uses the same syntax as data step programming.
The newly available statement is random. The syntax here is similar to that for the other priors, with the addition of the subject option, which generates a unique parameter for each level of the subject variable. The random effects themselves can be used in later statements, as shown, to enter into the data distribution. A final note here is that the exponential distribution isn't explicitly available, but since a gamma distribution with shape fixed at 1 is an exponential distribution, this is not a problem. Here are the key results.

Posterior Summaries

Standard
Parameter N Mean Deviation
fixedint 1000 1.0346 0.0244
gscale 1000 0.3541 0.0314

Posterior Intervals

Parameter Alpha HPD Interval
fixedint 0.050 0.9834 1.0791
gscale 0.050 0.2937 0.4163

The 95% HPD regions include the true values of the parameters and the posterior means are much less biased than in the model assuming normal random effects.

As usual, MCMC models should be evaluated carefully for convergence and coverage. In this example, I have some concerns (see default diagnostic figure above) and if it were real data I would want to do more.

R
The CRAN task view on Bayesian Inference includes a summary of tools for general and model-specific MCMC. However, there is nothing quite like proc mcmc in terms of a general, easy-to-use tool that is native to R. The nearest options are to use R front ends to WinBUGS/OpenBUGS (R2WinBUGS) or JAGS (rjags). (A brief worked example of using rjags was posted last year by John Myles White.) Alternatively, with some math and a little sweat, the mcmc package would also work. We'll explore an approach through one or more of these packages in a later entry, and would welcome a collaboration with anyone who would like to take that on.
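In the meantime, here is a minimal, untested sketch of the same exponential random effects model via rjags. It assumes JAGS and the rjags package are installed; the simulated data mirror the SAS example, the node and variable names are ours, and note that JAGS parameterizes the exponential by its rate (the inverse of the scale used above), so the prior here is placed on the rate rather than the scale.

library(rjags)

# simulate clustered Poisson data to match the SAS example
set.seed(1944)
C = 200; n = 10
r = rexp(C, rate = 1/0.4)        # exponential random effects with scale 0.4
cluster = rep(1:C, each = n)
y = rpois(C * n, lambda = exp(1 + r[cluster]))

modelstring = "
model {
  for (j in 1:Nobs) {
    y[j] ~ dpois(mu[j])
    log(mu[j]) <- b0 + r[cluster[j]]
  }
  for (i in 1:C) {
    r[i] ~ dexp(invscale)        # JAGS uses the rate = 1/scale
  }
  b0 ~ dnorm(0, 1.0E-4)          # vague normal prior (precision 0.0001)
  invscale ~ dgamma(0.01, 0.01)  # vague prior on the exponential rate
}
"
jm = jags.model(textConnection(modelstring),
                data = list(y = y, cluster = cluster, Nobs = C * n, C = C))
update(jm, 1000)                                     # burn-in
post = coda.samples(jm, c("b0", "invscale"), n.iter = 10000, thin = 10)
summary(post)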

Monday, December 6, 2010

Example 8.17: Logistic regression via MCMC



In examples 8.15 and 8.16 we considered Firth logistic regression and exact logistic regression as ways around the problem of separation, often encountered in logistic regression. (Re-cap: separation happens when all the observations in a category share a result, or when a continuous covariate predicts the outcome too well. It results in a likelihood that is maximized when a parameter is extremely large, and causes trouble for ordinary maximum likelihood approaches.) Another option is to use Bayesian methods. Here we focus on Markov chain Monte Carlo (MCMC) approaches to Bayesian analysis.
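As a quick illustration (a sketch in R, using the same 0 events out of 100 and 5 out of 100 analyzed below), ordinary logistic regression struggles with these data:

x = c(rep(0,100), rep(1,100))
y = c(rep(0,100), rep(1,5), rep(0,95))
summary(glm(y ~ x, family=binomial))
# with no events when x=0, the intercept heads toward -infinity; glm() typically
# reports very large estimates and standard errors, and may warn that fitted
# probabilities numerically 0 or 1 occurred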

SAS

SAS access to MCMC for logistic regression is provided through the bayes statement in proc genmod. There are several default priors available. The normal prior is the most flexible (in the software), allowing different prior means and variances for the regression parameters. The prior is specified through a separate data set. We begin by setting up the data in the events/trials syntax. Then we define a fairly vague prior for the intercept and the effect of the covariate: uncorrelated, and each with a mean of zero and a variance of 1000 (or a precision of 0.001). Finally, we call proc genmod to implement the analysis.


data testmcmc;
x=0; count=0; n=100; output;
x=1; count=5; n=100; output;
run;


data prior;
input _type_ $ Intercept x;
datalines;
Var 1000 1000
Mean 0 0
;
run;

title "Bayes with normal prior";
proc genmod descending data=testmcmc;
model count/n = x / dist=binomial link=logit;
bayes seed=10231995 nbi=1000 nmc=21000
coeffprior=normal(input=prior) diagnostics=all
statistics=summary;
run;

In the foregoing, nbi is the length of the burn-in and nmc is the total number of Monte Carlo iterations. The remaining options define the prior and request certain output. The diagnostics=all option generates many results, including posterior autocorrelations and the Gelman-Rubin, Geweke, Raftery-Lewis, and Heidelberger-Welch diagnostics. The summary statistics are presented below; the diagnostics are not especially encouraging.

Posterior Summaries

Standard
Parameter N Mean Deviation

Intercept 21000 -20.3301 10.3277
x 21000 17.2857 10.3368

Posterior Summaries

Percentiles
Parameter 25% 50% 75%

Intercept -27.6173 -18.5558 -11.9025
x 8.8534 15.5267 24.6024

It seems that this prior is too vague; perhaps we can make it a little more precise. A log odds ratio of 10 implies an odds ratio greater than 22,000, so we might accept a prior variance of 25 (a standard deviation of 5), which places about 95% of the prior weight between -10 and 10.
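As a quick check of that arithmetic (a sketch in R):

exp(10)                      # odds ratio implied by a log odds ratio of 10: about 22,026
qnorm(c(.025, .975), 0, 5)   # central 95% of a normal prior with SD 5 (variance 25): about -9.8 to 9.8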

data prior;
input _type_ $ Intercept x;
datalines;
Var 25 25
Mean 0 0
;
run;

ods graphics on;
title "Bayes with normal prior";
proc genmod descending data=testmcmc;
model count/n = x / dist=binomial link=logit;
bayes seed=10231995 nbi=1000 nmc=21000
coeffprior=normal(input=prior) diagnostics=all
statistics=(summary interval) plot=all;
run;

Posterior Summaries

Standard
Parameter N Mean Deviation

Intercept 21000 -6.5924 1.7958
x 21000 3.5169 1.8431

Posterior Summaries

Percentiles
Parameter 25% 50% 75%

Intercept -7.6347 -6.3150 -5.2802
x 2.1790 3.2684 4.5929

Posterior Intervals

Parameter Alpha Equal-Tail Interval HPD Interval

Intercept 0.050 -10.8101 -3.8560 -10.2652 -3.5788
x 0.050 0.5981 7.7935 0.3997 7.4201

These are more plausible values, and the diagnostics are more favorable.
In the above, we added the keyword interval to generate posterior interval estimates, and used the ods graphics on statement to enable ODS graphics and the plot=all option to generate the graphical output shown above.

R
There are several packages in R that include MCMC approaches. Here we use the MCMCpack package, which includes the MCMClogit() function. It appears not to accept the weights option mentioned previously, so we generate data at the observation level to begin. Then we run the MCMC.

events.0 = 0   # number of events for X = 0
events.1 = 5   # number of events for X = 1
x = c(rep(0,100), rep(1,100))
y = c(rep(0, 100-events.0), rep(1, events.0),
      rep(0, 100-events.1), rep(1, events.1))

library(MCMCpack)
# prior precision B0 = .04 = 1/25, i.e., prior variance 25, as in the SAS example
logmcmc = MCMClogit(y~as.factor(x), burnin=1000, mcmc=21000, b0=0, B0=.04)

The MCMClogit() function accepts a formula object and allows the burn-in and number of Monte Carlo iterations to be specified. The prior mean b0 can be specified as a vector if it differs across parameters, or as a scalar, as shown. Similarly, the prior precision B0 can be a matrix or a scalar; if a scalar, the parameters are uncorrelated in the prior.
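For example, the call above could be written equivalently (a sketch) with an explicit prior mean vector and a diagonal prior precision matrix:

logmcmc2 = MCMClogit(y ~ as.factor(x), burnin=1000, mcmc=21000,
                     b0=c(0, 0), B0=diag(0.04, 2))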

> summary(logmcmc)

Iterations = 1001:22000
Thinning interval = 1
Number of chains = 1
Sample size per chain = 21000

1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:

Mean SD Naive SE Time-series SE
(Intercept) -6.570 1.816 0.01253 0.03139
as.factor(x)1 3.513 1.859 0.01283 0.03363

2. Quantiles for each variable:

2.5% 25% 50% 75% 97.5%
(Intercept) -10.6591 -7.634 -6.299 -5.229 -3.831
as.factor(x)1 0.6399 2.147 3.292 4.599 7.698

plot(logmcmc)

The result of the plot() is shown below. These results and those from SAS are reassuringly similar. Many diagnostics are available through the coda package. The codamenu() function allows simple menu-based access to its tools.
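For instance, since the MCMClogit() result is already an mcmc object, a few coda diagnostics can be applied to it directly (a sketch):

library(coda)
effectiveSize(logmcmc)   # effective sample size for each parameter
geweke.diag(logmcmc)     # Geweke convergence diagnostic
autocorr.plot(logmcmc)   # autocorrelation plots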

Monday, March 1, 2010

Example 7.24: Sampling from a pathological distribution

Evans and Rosenthal consider ways to sample from a distribution with density given by:


f(y) = c e^(-y^4)(1+|y|)^3


where c is a normalizing constant and y is defined on the whole real line.

Use of the probability integral transform (section 1.10.8) is not feasible in this setting, given the complexity of inverting the cumulative distribution function.

The Metropolis--Hastings algorithm is a Markov Chain Monte Carlo (MCMC) method for obtaining samples from a probability distribution. The intuition behind this algorithm is that it chooses proposal probabilities so that after the process has converged we are generating draws from the desired distribution. A further discussion can be found on page 610 of Section 11.3 of the Evans and Rosenthal text, or on page 25 of Gelman et al.

We find the acceptance probability a(x,y) in terms of the target density f and a proposal density q (in our example, normal with mean equal to the current value and unit variance), so that


a(x,y) = min(1, [f(y)*q(y,x)] / [f(x)*q(x,y)])
       = min(1, [e^(-y^4 + x^4) * (1+|y|)^3] / [(1+|x|)^3])


where the normalizing constant c cancels from the ratio, and the normal proposal density is symmetric, so q(y,x) = q(x,y) cancels as well.

Begin by picking an arbitrary value for X_1. The Metropolis--Hastings algorithm then proceeds by computing the value X_{n+1} as follows:


1. Generate y from a Normal(X_n, 1).
2. Compute a(x, y) as above.
3. With probability a(x, y), let X_{n+1} = y
(i.e., use proposal value).
Otherwise, with probability 1-a(x, y), let X_{n+1} = X_n
(i.e., keep previous value).


The code allows for a burn-in period, which we set at 50,000 iterations, and a desired sample from the target distribution, which we make 5,000. To reduce auto-correlation, we take only every twentieth variate. In a later entry, we'll compare the resulting variates with the true distribution.

SAS

The SAS code is fairly straightforward.


data mh;
  burnin = 50000; numvals = 5000; thin = 20;
  x = normal(0);
  do i = 1 to (burnin + (numvals * thin));
    y = normal(0) + x;
    switchprob = min(1, exp(-y**4 + x**4) *
      (1 + abs(y))**3 * (1 + abs(x))**(-3));
    if uniform(0) lt switchprob then x = y;
    * if we don't change x, the previous value is kept--
      no code needed;
    if (i gt burnin) and mod(i-burnin, thin) = 0 then output;
    * only start saving if we're past the burn-in period,
      then thin out;
  end;
run;



R

In R we first define a function to compute a(x,y):


alphafun = function(x, y) {
  return(exp(-y^4 + x^4) * (1 + abs(y))^3 *
         (1 + abs(x))^-3)
}


Then we proceed as in the SAS example.


numvals = 5000; burnin = 50000; thin = 20
res = numeric(numvals)
xn = rnorm(1)   # arbitrary value to start
for (i in 1:(burnin + numvals * thin)) {
  propy = rnorm(1, xn, 1)
  alpha = min(1, alphafun(xn, propy))
  xn = sample(c(propy, xn), 1, prob=c(alpha, 1-alpha))
  if ((i > burnin) & ((i-burnin) %% thin == 0))
    res[(i-burnin)/thin] = xn
}


The resulting draws from the distribution are available in the res vector.
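As a first look before that entry, here is a quick visual check (a sketch): normalize the target density numerically with integrate() and overlay it on a histogram of the draws.

cst = integrate(function(y) exp(-y^4) * (1 + abs(y))^3, -Inf, Inf)$value
hist(res, breaks=50, freq=FALSE)
curve(exp(-x^4) * (1 + abs(x))^3 / cst, add=TRUE)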