Monday, October 15, 2012

Example 10.6: Should Poisson regression ever be used? Negative binomial vs. Poisson regression

In practice, we often find that count data are not well modeled by Poisson regression, though Poisson models are often presented as the natural approach for such data. The negative binomial regression model, in contrast, is much more flexible and is therefore likely to fit better if the data are not Poisson. In example 8.30 we compared the probability mass functions of the two distributions and found that, for a given mean, the negative binomial closely approximates the Poisson as its scale parameter increases. But how does this affect the choice of regression model? And how might another alternative, the overdispersed (or quasi-Poisson) model, compete with these? Today we generate a rudimentary toolkit for assessing the performance of Poisson, negative binomial, and quasi-Poisson models when the data are truly generated by one process or the other.
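As a quick illustration of that claim, we can compare the two probability mass functions directly; a minimal sketch in R, with an arbitrarily chosen mean and scale values:

m = 2   # an arbitrary mean
for (k in c(1, 10, 100)) {
  cat("scale =", k, " max |NB - Poisson| =",
      max(abs(dnbinom(0:20, size=k, mu=m) - dpois(0:20, lambda=m))), "\n")
}

The maximum discrepancy shrinks as the scale (size) parameter grows.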

SAS
We'll begin by simulating Poisson and negative binomial data. Note that we also rely on the poismean_nb function that we created in example 8.30; this is needed because SAS accepts only the natural parameters of the distribution, while the mean is a (simple) function of the two parameters. As is typical in such settings, we'll begin by generating data under the null of no association between the covariate (here normal) and the count outcome. The proportion of rejections should then be close to alpha (5%, here). However, we'll include code to easily simulate data under the alternative as well; this will facilitate assessing the relative power of the models later.
data nbp;
do ds = 1 to 10000;
  do i = 1 to 250;
    x = normal(0);
    mean = exp(-.25 + (0 * x));
    pois = rand("POISSON",mean);
    nb1 = rand("NEGBINOMIAL", poismean_nb(mean, 1), 1);
    output;
    end;
  end;
run;
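The poismean_nb function itself is not repeated here. SAS's RAND("NEGBINOMIAL", p, k) is parameterized by the success probability p and size k, with mean k(1-p)/p, so the conversion should be p = k/(k + mean). Here's a minimal R sketch of the same algebra, offered as a hedged reconstruction rather than the original SAS code (R's rnbinom() uses the same parameterization, which makes a sanity check easy):

# assumed reconstruction of the algebra in poismean_nb (example 8.30):
# solve mean = k*(1-p)/p for the success probability p
poismean_nb = function(mean, k) k / (k + mean)

# sanity check: the simulated mean should be near exp(-.25), about 0.78
mean(rnbinom(100000, size = 1, prob = poismean_nb(exp(-.25), 1)))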
The models will be fit in proc genmod (see sections 4.1.3 and 4.1.5 and table 4.1). It would be good to write a little macro to change the distribution and the output data set names, but it's not necessary. To save space, the repeated proc genmod, by, and run statements are omitted below. The naming convention is that the true distribution (p or nb) is listed first, followed by the fitted model (p, nb, or pod, for overdispersed).
 
ods select none;
ods output parameterestimates = ppests;
proc genmod data = nbp;
by ds;
model pois = x /dist=poisson;
run;

ods output parameterestimates = ppodests;
model pois = x /dist=poisson scale = p;

ods output parameterestimates = pnbests;
model pois = x /dist=negbin;

ods output parameterestimates = nbnbests;
model nb1 = x /dist=negbin;

ods output parameterestimates = nbpests;
model nb1 = x /dist=poisson;

ods output parameterestimates = nbpodests;
model nb1 = x /dist=poisson scale=p;
ods select all;
For analysis, we'll bring all the results together using the merge statement (section 1.5.7). Note that the output data sets contain the Wald CI limits as well as the estimates themselves; all have to be renamed in the merge, or they will overwrite each other.
data results;
merge
ppests (rename = (estimate = pp_est lowerwaldcl = pp_l 
  upperwaldcl = pp_u))
ppodests (rename = (estimate = ppod_est lowerwaldcl = ppod_l 
  upperwaldcl = ppod_u))
pnbests (rename = (estimate = pnb_est lowerwaldcl = pnb_l 
  upperwaldcl = pnb_u))
nbnbests (rename = (estimate = nbnb_est lowerwaldcl = nbnb_l 
  upperwaldcl = nbnb_u))
nbpests (rename = (estimate = nbp_est lowerwaldcl = nbp_l 
  upperwaldcl = nbp_u))
nbpodests (rename = (estimate = nbpod_est lowerwaldcl = nbpod_l 
  upperwaldcl = nbpod_u));
where parameter eq "x";
target = 0;
pprej =   ((pp_l gt target) or (pp_u lt target));
ppodrej =   ((ppod_l gt target) or (ppod_u lt target));
pnbrej =  ((pnb_l gt target) or (pnb_u lt target));
nbnbrej = ((nbnb_l gt target) or (nbnb_u lt target));
nbprej =  ((nbp_l gt target) or (nbp_u lt target));
nbpodrej =  ((nbpod_l gt target) or (nbpod_u lt target));
run;
The indicators of CIs that exclude the null value are calculated, with appropriate names, using logical tests that evaluate to 1 if true (a rejection) and 0 if false (see, e.g., section 1.4.9). The final results can be obtained from proc means:
proc means data = results; 
var pp_est ppod_est pnb_est nbnb_est nbp_est nbpod_est 
    pprej ppodrej pnbrej nbnbrej nbprej nbpodrej; 
run;

                     Variable             Mean
                     -------------------------
                     pp_est       -0.000349738
                     ppod_est     -0.000349738
                     pnb_est      -0.000344668
                     nbnb_est        0.0013738
                     nbp_est         0.0013588
                     nbpod_est       0.0013588
                     pprej           0.0505000
                     ppodrej         0.0501000
                     pnbrej          0.0468000
                     nbnbrej         0.0535000
                     nbprej          0.1427000
                     nbpodrej        0.0555000
                     -------------------------


All of the estimates appear to be unbiased. However, Poisson regression, when applied to the truly negative binomial data, appears to be dramatically anticonservative, rejecting the null (i.e., producing a CI that excludes the null value) 14% of the time. The overdispersed model may be slightly anticonservative as well: the estimated proportion of rejections is 5.55%, or 555 of 10,000 experiments, and an exact CI for that proportion excludes 5%, although the excess appears to be slight. To test other effect sizes, we'd change the mean (set in the first data step) and the target (set in the results data step). It would also be valuable to vary the scale parameter of the negative binomial.
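For the record, that exact CI is easy to check; here's the calculation in R (the 95% interval comes out to roughly (0.051, 0.060), which excludes 0.05):

# exact (Clopper-Pearson) CI for 555 rejections out of 10,000 tests
binom.test(555, 10000, p = 0.05)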


R
We begin by defining two simple functions: one to extract the standard error of the slope from a fitted model, and a second to assess whether Wald-type CIs for parameter estimates exclude some value. It's a bit confusing that a standard error extracting function is not part of R. Or perhaps it is, and someone will point out the obvious function in the comments. It's useful to construct the Wald CIs from the standard errors in the current setting because the obvious alternative, the confint() function, uses profile likelihoods, which would be too time-consuming in a simulation setting. The second function accepts the parameter estimate, its standard error, and a fixed value that we want to check is inside or outside the CI. Both functions are actually single expressions, but having them in hand will reduce the typing in the main function.
# this will work for any model object that works with vcov()
# the test for positive variance should be unnecessary, but can't hurt
# note the [2]: we want the SE of the slope, to match coef(model)[2] below
stderrs = function(model) {
  ifelse(min(diag(vcov(model))) > 0, sqrt(diag(vcov(model)))[2], NA)
}

# short and sweet: 1 if target is out of the Wald CI, 0 if in
ciout = function(est, se, target){
  ifelse( (est - 1.96*se > target) | (est + 1.96*se < target), 1, 0)
}
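As an aside on the confint() point above, the two flavors of interval can be compared for a single fit; a minimal sketch on arbitrary simulated data:

set.seed(1)
x = rnorm(250)
y = rpois(250, exp(-.25))
fit = glm(y ~ x, family = "poisson")
confint(fit)          # profile-likelihood CI; slow inside a simulation loop
confint.default(fit)  # Wald CI, i.e., estimate +/- 1.96 * SE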
With these ingredients prepared, we're ready to write a function to fit the three models to the two sets of observed data. The function will accept a number of observations per data set and a true beta. The Poisson and overdispersed Poisson are fit with the glm() function (section 4.1.3, table 4.1) but the negative binomial uses the glm.nb() function found in the MASS package (section 4.1.5).
library(MASS)   # for glm.nb()

testpnb = function(n, beta) {
# make data
x = rnorm(n)
mean = exp(-.25 + (beta * x))
pois = rpois(n, mean)
nb1 = rnbinom(n, size=1, mu=mean)

# fit models to Poisson data
pp = glm(pois ~ x, family="poisson")
ppod = glm(pois ~ x, family="quasipoisson")
pnb = glm.nb(pois ~ x)

# fit models to nb data
nbnb = glm.nb(nb1 ~ x)
nbp = glm(nb1 ~ x, family="poisson")
nbpod = glm(nb1 ~ x, family="quasipoisson")

# extract the slope estimates using the coef() function
est = as.numeric(c(coef(pp)[2], coef(ppod)[2], coef(pnb)[2],
                   coef(nbnb)[2], coef(nbp)[2], coef(nbpod)[2]))
# use our two new functions to get the SEs and the CI indicators
se = c(stderrs(pp), stderrs(ppod), stderrs(pnb),
       stderrs(nbnb), stderrs(nbp), stderrs(nbpod))
ci = ciout(est, se, rep(beta, 6))
return(matrix(c(est, se, ci), ncol=3))
}
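A single call is a useful sanity check before launching the full simulation; it returns a 6 x 3 matrix with one row per fitted model and columns containing the estimate, its SE, and the CI indicator.

testpnb(250, 0)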
Now we can use the convenient replicate() function to call the experiment many times. Since the output of testpnb() is a matrix, the result of replicate() is a three-dimensional array, rows * columns * sheets, where each sheet corresponds to one experimental replicate. To summarize the results, we can use the rowMeans() function to get the proportion of rejections or the mean of the estimates.
mainout = replicate(10000,testpnb(250,0))

# the [,3,] below means "all rows, column 3, all sheets"
> rowMeans(mainout[,3,])
[1] 0.0490 0.0514 0.0463 0.0490 0.1403 0.0493

> rowMeans(mainout[,1,])
[1]  0.0003482834  0.0003482834  0.0003558526 -0.0004123949 -0.0003972441 -0.0003972441
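As a gauge of the Monte Carlo noise in those proportions:

# with 10,000 replicates, the simulation SE of a proportion near 0.05 is
sqrt(.05 * .95 / 10000)   # about 0.0022

so differences of a few tenths of a percentage point between the SAS and R runs are well within simulation error.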
The results agree closely with the SAS results discussed above. The naive Poisson regression would appear a bad idea: if the data are negative binomial, tests don't have the nominal size. It would be valuable to replicate the experiment with some other distribution for the real data as well. One approach to modeling count data would be to fit the Poisson and assess the quality of the fit, which can be done in several ways; however, this iterative fitting also jeopardizes the size of the test, in theory. Perhaps we'll explore the practical impact of this in a future entry. Fortunately, at least in this limited example, a nice alternative exists: we can just fit the negative binomial by default. The cost of this in terms of power could be assessed with a thorough simulation study, but it is likely to be small, since only one additional parameter is estimated, and the size of the test is hardly affected at all. The quasi-Poisson model could also be recommended, but it has the drawback of relying on what is not actually a viable distribution for the data. Some sources suggest that it may be even more flexible than the negative binomial, however.
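As one example of such a fit assessment (a rough sketch, not from the original code, and the deviance test is only approximate with small counts), the residual deviance can be compared to a chi-squared distribution:

set.seed(2)
x = rnorm(250)
nb1 = rnbinom(250, size=1, mu=exp(-.25))
fit = glm(nb1 ~ x, family="poisson")
# a small p-value suggests the Poisson fits poorly (here, overdispersion)
pchisq(deviance(fit), df.residual(fit), lower.tail=FALSE)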

An unrelated note about aggregators: We love aggregators! Aggregators collect blogs that have similar coverage for the convenience of readers, and for blog authors they offer a way to reach new audiences. SAS and R is aggregated by R-bloggers, PROC-X, and statsblogs with our permission, and by at least 2 other aggregating services which have never contacted us. If you read this on an aggregator that does not credit the blogs it incorporates, please come visit us at SAS and R. We answer comments there and offer direct subscriptions if you like our content. In addition, no one is allowed to profit by this work under our license; if you see advertisements on this page, the aggregator is violating the terms by which we publish our work.

2 comments:

Chris Andrews said...

Sorry I can't point out the obvious function.

I have always used coef(summary(model))[,2] to get standard errors.

sqrt(diag(vcov(model))) is probably more efficient if you don't need the rest of the stuff in summary(model).

Ken Kleinman said...

I think there are still other ways to skin this particular cat in R. But I can't fathom why there wouldn't be a built-in function to do it.

FTR, I think I used the summary() approach in an earlier entry, but I prefer this one, since it's based on a function that could easily be used for other purposes, and doesn't rely as heavily on R syntax.