Tuesday, September 28, 2010

Example 8.7: Hosmer and Lemeshow goodness-of-fit

The Hosmer and Lemeshow goodness-of-fit (GOF) test is a way to assess whether there is evidence of lack of fit in a logistic regression model. Simply put, the test compares the observed and expected numbers of events in bins defined by the predicted probability of the outcome. The test can be calculated in both R and SAS.
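In symbols, the version implemented in the R function below compares the observed count O and the expected count E for both outcome levels (k = 0, 1) in each of g probability bins:

$$X^2_{HL} = \sum_{j=1}^{g} \sum_{k=0}^{1} \frac{(O_{jk} - E_{jk})^2}{E_{jk}},$$

and refers the result to a chi-square distribution with g - 2 degrees of freedom.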

R

In R, we write a simple function to calculate the statistic and a p-value, based on vectors of observed and predicted probabilities. We use the cut() function (1.4.10) in concert with the quantile() function (2.1.5) to make the bins, then calculate the observed and expected counts, the chi-square statistic, and finally the associated p-value (Table 1.1). The function allows the user to define the number of bins but uses the common default of 10.

hosmerlem = function(y, yhat, g=10) {
  # bin the predicted probabilities into g groups at their quantiles
  cutyhat = cut(yhat,
    breaks = quantile(yhat, probs = seq(0, 1, 1/g)),
    include.lowest = TRUE)
  # observed and expected counts of non-events and events in each bin
  obs = xtabs(cbind(1 - y, y) ~ cutyhat)
  expect = xtabs(cbind(1 - yhat, yhat) ~ cutyhat)
  # Pearson-style chi-square statistic, compared to a chi-square
  # distribution with g - 2 degrees of freedom
  chisq = sum((obs - expect)^2 / expect)
  P = 1 - pchisq(chisq, g - 2)
  return(list(chisq = chisq, p.value = P))
}

We'll run it with some of the HELP data (available at the book web site). Note that fitted(object) returns the predicted probabilities if object is the result of a call to glm() with family=binomial.

ds = read.csv("http://www.math.smith.edu/r/data/help.csv")
attach(ds)
logreg = glm(homeless ~ female + i1 + cesd + age + substance,
  family = binomial)
hosmerlem(y = homeless, yhat = fitted(logreg))

This returns the following output:

$chisq
[1] 8.495386
$p.value
[1] 0.3866328

The Design package, by Frank Harrell, includes the le Cessie and van Houwelingen test (another goodness-of-fit test; Biometrics 1991, 47:1267) and is also easy to run, though it requires using the package's own function for logistic regression, lrm().

library(Design)
mod = lrm(homeless ~ female + i1 + cesd + age + substance,
  x = TRUE, y = TRUE, data = ds)
resid(mod, 'gof')


Sum of squared errors     Expected value|H0                    SD
          104.1091804           103.9602955             0.1655883
                    Z                     P
            0.8991269             0.3685851

The two tests are reassuringly similar.

SAS

In SAS, the Hosmer and Lemeshow goodness-of-fit test is generated with the lackfit option to the model statement in proc logistic (section 4.1.1). (We select out the relevant pieces of output using the ODS system.)

ods select lackfitpartition lackfitchisq;
proc logistic data="c:\book\help.sas7bdat";
  class substance female;
  model homeless = female i1 cesd age substance / lackfit;
run;

This generates the following output:

          Partition for the Hosmer and Lemeshow Test

                      HOMELESS = 1            HOMELESS = 0
 Group   Total   Observed   Expected    Observed   Expected

     1      45         10      12.16          35      32.84
     2      45         12      14.60          33      30.40
     3      45         15      15.99          30      29.01
     4      45         17      17.20          28      27.80
     5      45         27      18.77          18      26.23
     6      45         20      20.28          25      24.72
     7      45         23      22.35          22      22.65
     8      45         25      25.04          20      19.96
     9      45         28      27.67          17      17.33
    10      48         32      34.95          16      13.05

     Hosmer and Lemeshow Goodness-of-Fit Test

     Chi-Square      DF      Pr > ChiSq

         8.4786       8          0.3882

The partition table shows the observed and expected counts of events and non-events in each decile of the predicted probabilities.
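For readers who want the analogous table in R, here is a minimal sketch (our addition, not from the book) that reuses the same binning and xtabs() calls as the hosmerlem() function above; hl_table is a hypothetical name:

hl_table = function(y, yhat, g = 10) {
  cutyhat = cut(yhat,
    breaks = quantile(yhat, probs = seq(0, 1, 1/g)),
    include.lowest = TRUE)
  # column 1 of each table counts non-events, column 2 counts events
  obs = xtabs(cbind(1 - y, y) ~ cutyhat)
  expect = xtabs(cbind(1 - yhat, yhat) ~ cutyhat)
  data.frame(group = seq_len(nlevels(cutyhat)),
             total = as.vector(table(cutyhat)),
             obs1 = as.vector(obs[, 2]),
             exp1 = round(as.vector(expect[, 2]), 2),
             obs0 = as.vector(obs[, 1]),
             exp0 = round(as.vector(expect[, 1]), 2))
}
hl_table(y = homeless, yhat = fitted(logreg))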

The small discrepancy between the SAS and R results is likely due to the particular binning SAS uses; the test is unstable in the presence of ties, to the extent that some authorities suggest avoiding it entirely. With continuous predictors, though, ties are rare and the objections are less germane.
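To see the ties problem concretely, here is a toy illustration (hypothetical data, not from the book):

# heavily tied predicted probabilities yield duplicate quantile breaks
yhat = rep(c(0.2, 0.5, 0.8), each = 50)
quantile(yhat, probs = seq(0, 1, 0.1))   # several breaks coincide
# cut() then fails, and hosmerlem() fails with it:
# cut(yhat, breaks = quantile(yhat, probs = seq(0, 1, 0.1)),
#     include.lowest = TRUE)             # Error: 'breaks' are not unique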

23 comments:

jose said...

It's great. Your hosmerlem function in R is just what I need for my research. I am investigating how to fit logistic regression in R with various packages and am writing a manual in Spanish.

Thanks.

awishart said...

What library do I need to install to run this? Sorry, I am just learning.

Ken Kleinman said...

No library needed. If you have the usual installation, including the stats package, this should work. Good luck!

JC said...

I am trying to run your hosmerlem function but I get this message:

Error in cut.default(yhat, breaks = quantile(yhat, probs = seq(0, 1, 1/g)), :
'breaks' are not unique

Any suggestions to get around this issue? Thanks!

Ken Kleinman said...

The issue, I think, is that you have too few distinct covariate values in your data set, which leads to lots of ties in the predicted probabilities. This problem is one reason not to use the H-L test.

You could attempt to treat the problem by using fewer cut-points, e.g., hosmerlem(..., g=5).
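Another option (a sketch, not part of the function above) is to drop duplicated break points so cut() can proceed, at the cost of fewer bins:

cutyhat = cut(yhat,
  breaks = unique(quantile(yhat, probs = seq(0, 1, 1/g))),
  include.lowest = TRUE)
# note: fewer unique breaks means fewer bins, so the degrees of
# freedom in pchisq(chisq, g - 2) should shrink to match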

Anonymous said...

Do you know if there is a way to change the number of partitions SAS uses from ten to any other number?

Ken Kleinman said...

Not a clue. Good question to ask SAS directly, or poke around in "Using SAS in your operating environment" in the SAS help window.

Anonymous said...

I am having some trouble. I am working with Rcmdr and it says that it couldn't find the function "hosmerlem". I was also wondering if I need any special package.

Ken Kleinman said...

I don't use Rcmdr, so I can't say why it wouldn't find the hosmerlem() function if you paste it in from the above code. Good luck!

Anonymous said...

Hosmer-Lemeshow measures the conformity to the logit curve, right? So it gets kind of goofy sometimes. A random number, for instance, is a lousy predictor (c-statistic = .51) but does well on Hosmer-Lemeshow. So sometimes the predictive power of your model is at war with pure adherence to the logit curve. Since the logit is an arbitrary mathematical shape we have chosen to map our results to, I guess we should expect that. Nature is not a logit. Additional massaging of the data will help, but I'm thinking interpretation is important here too sometimes. If your model over-projects a little in the lower deciles and under-projects in the upper, is that a bad thing or just an added safety factor? Many times it depends on the application.

Anonymous said...

I don't think the H-L test has anything to do with the logit. It just measures how well the expected probability under the model conforms to the observed proportion. An uncorrelated predictor will have a beta near 0 for any link function you like (with sufficient N), and the expected probability will be about the marginal proportion for any link function, for all the bins, however many we use in constructing the H-L test. This will result in a relatively small H-L statistic. I don't see how this means the predictive power is at war with the adherence to any particular curve. It's just a fact that the model calibration and the predictive value of the model need not agree with one another.

In addition to the situation you point out, it's possible to have poorly calibrated models that have good predictive ability -- that's also not a suggestion of any underlying disagreement, I don't think.
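A quick simulation sketch of that point (hypothetical data; reuses the hosmerlem() function from the post):

set.seed(42)
n = 1000
y = rbinom(n, 1, 0.3)               # outcome unrelated to the predictor
x = rnorm(n)                        # pure-noise predictor
fit = glm(y ~ x, family = binomial)
hosmerlem(y, fitted(fit))           # typically a large p-value: no lack
                                    # of fit, despite an AUC near 0.5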

Anonymous said...

Ken,
Lately when I Google/RSeek something, about 1 time in 10 I end up with your blog, and find exactly the answer I need! Thanks again.

Rebecca S

Ken Kleinman said...

I'm so happy to hear that, Rebecca! Thanks for letting me know.

Anonymous said...

Thanks a lot!

The H-L test is described in the nice book by Hosmer, D.W., and Lemeshow, S., Applied Logistic Regression. I always wondered why that test is absent from the stats package.

Olga

jessie abbate said...
This comment has been removed by the author.
jessie abbate said...

Hi - I get really different values when I run the HL test in R and SAS, using the code you've supplied here: p=0.998 vs. p=0.15. Is this likely to happen?

Nick Horton said...

Can you try flipping the values of your outcome variable (model P(Y=0) rather than P(Y=1)) in both systems and see how different the results are? The HL test is sensitive to where you cut the deciles.
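In R, that flip might look like this (a sketch; assumes homeless is coded 0/1 as in the post's example):

# refit modeling P(homeless == 0) instead of P(homeless == 1)
logreg0 = glm(I(1 - homeless) ~ female + i1 + cesd + age + substance,
  family = binomial)
hosmerlem(y = 1 - homeless, yhat = fitted(logreg0))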

jessie abbate said...

Nick - thanks for your response. No, changing this does alter the results slightly, but it doesn't account for the big differences I see. I think the problem is that my glm (in R) is using the proportion (Y) weighted by the total number of trials at each value of X, instead of classical 1s and 0s. This gives the exact same fit as if I coded successes/failures, but maybe it causes problems with the HL test. I will re-code it as successes/failures and see if that helps. I'll be in touch... but any other suggestions in case this doesn't work would help! Thanks!!

Scott Bartell said...

There seems to be a discrepancy in the computation of chisq -- SAS (and the HL formula I was taught) uses n*yhat*(1-yhat) in the denominator instead of the expected value (see http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_logistic_sect039.htm). Interestingly, the classic GOF test uses the expectation in the denominator, as you did (but should have df=g-k-1).

Scott Bartell said...

Another difference from SAS is that your calculation of chisq sums over 20 terms, but SAS only sums over 10 terms (basically the right-hand columns of obs and expect). For the SAS formula I think you'd want to replace the relevant lines of your HL function with something like this:

obs = xtabs(y ~ cutyhat)
expect = xtabs(yhat ~ cutyhat)
V = xtabs(yhat*(1-yhat) ~ cutyhat)
chisq = sum((obs - expect)^2/V)
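Putting those pieces together, a complete variant might look like the following sketch (hosmerlem_sas is a hypothetical name; based on Scott's lines above, not code from the post):

hosmerlem_sas = function(y, yhat, g = 10) {
  cutyhat = cut(yhat,
    breaks = quantile(yhat, probs = seq(0, 1, 1/g)),
    include.lowest = TRUE)
  obs = xtabs(y ~ cutyhat)                  # observed events per bin
  expect = xtabs(yhat ~ cutyhat)            # expected events per bin
  V = xtabs(yhat * (1 - yhat) ~ cutyhat)    # binomial variance per bin
  chisq = sum((obs - expect)^2 / V)
  list(chisq = chisq, p.value = 1 - pchisq(chisq, g - 2))
}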

agnese said...

Hello, is it possible to visualize the contingency table for the Hosmer Lemeshow Test in the R output, in order to check the expected frequency values? Thank you.

Summit said...

Hello, thanks for your useful post! I'm wondering if it's possible to perform the H-L test in SAS with the same model but different parameter estimates? Thanks!

Anonymous said...

Thanks for the R function. Is there a way to get the same table of observed and expected counts as we get in SAS for the Hosmer-Lemeshow test?