
Friday, June 3, 2011

Example 8.39: calculating Cramer's V

Cramer's V is a measure of association for nominal variables. Effectively it is the Pearson chi-square statistic rescaled to have values between 0 and 1, as follows:

V = sqrt(X^2 / [nobs * (min(ncols, nrows) - 1)])

where X^2 is the Pearson chi-square, nobs is the number of observations included in the table, and ncols and nrows are the number of columns and rows in the table, respectively. For a 2 by 2 table, of course, this reduces to the square root of chi-square divided by the number of observations, which is also known as the phi coefficient.
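As a quick check of the formula (the table below is made up for illustration and is not from the HELP data), the general formula applied to a 2 by 2 table does match the phi coefficient:

```r
# Hypothetical 2 by 2 table of counts (rows = exposure, columns = outcome)
tab = matrix(c(20, 30, 40, 10), nrow = 2)
n = sum(tab)
X2 = as.numeric(chisq.test(tab, correct = FALSE)$statistic)
# General formula: V = sqrt(X^2 / (n * (min(nrows, ncols) - 1)))
V = sqrt(X2 / (n * (min(dim(tab)) - 1)))
# For a 2 by 2 table, min(nrows, ncols) - 1 = 1, so V is just sqrt(X^2 / n),
# the (unsigned) phi coefficient
phi = sqrt(X2 / n)
```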

As an example, we'll revisit the table of homelessness vs. gender we present in Section 2.6.3.

SAS
In SAS, Cramer's V is provided when the chisq option to the tables statement is used, in proc freq.

proc freq data = "c:\book\help.sas7bdat";
tables female*homeless / chisq;
run;

resulting in

Statistics for Table of FEMALE by HOMELESS
Statistic DF Value Prob
------------------------------------------------------
Chi-Square 1 4.3196 0.0377
Likelihood Ratio Chi-Square 1 4.3654 0.0367
Continuity Adj. Chi-Square 1 3.8708 0.0491
Mantel-Haenszel Chi-Square 1 4.3101 0.0379
Phi Coefficient -0.0977
Contingency Coefficient 0.0972
Cramer's V -0.0977

where (as usual) several additional values are also included. The negative value shown for Cramer's V is odd; it's unclear what rationale there is for taking the negative root. According to the documentation, this can occur only for 2 by 2 tables, where V is given the same sign as the phi coefficient.
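One way to see where a sign could come from: for two 0/1 variables, the Pearson correlation is exactly the signed phi coefficient, while sqrt(X^2/n) recovers only its magnitude. A small made-up illustration (the vectors below are not from the HELP data):

```r
# Hypothetical binary vectors with a negative association
x = rep(c(0, 1), times = c(60, 40))
y = c(rep(0, 20), rep(1, 40), rep(0, 30), rep(1, 10))
phi_signed = cor(x, y)  # signed phi: Pearson correlation of the 0/1 indicators
X2 = as.numeric(chisq.test(x, y, correct = FALSE)$statistic)
phi_mag = sqrt(X2 / length(x))  # unsigned phi = Cramer's V for a 2 by 2 table
```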

R
As far as we know, Cramer's V is not included in base R, though it is easy to compute directly. We found one version online; however, it requires a table as input, so we've rewritten it here to accept vector input instead.

Here's the function, which uses unique() (section 1.4.16) to extract the values of the rows and columns and length() (Section 1.4.15) to find their number and the number of observations. A more bullet-proof version of the function would check to ensure the two vectors are of equal length (or allow the input in a variety of formats).

cv.test = function(x, y) {
  CV = sqrt(chisq.test(x, y, correct = FALSE)$statistic /
    (length(x) * (min(length(unique(x)), length(unique(y))) - 1)))
  print.noquote("Cramér V / Phi:")
  return(as.numeric(CV))
}

So we can get Cramer's V as

helpdata = read.csv("http://www.math.smith.edu/r/data/help.csv")
with(helpdata, cv.test(female, homeless))
[1] Cramér V / Phi:
[1] 0.09765063

Monday, March 7, 2011

Example 8.29: Risk ratios and odds ratios



When can you safely think of an odds ratio as being similar to a risk ratio?

Many people find odds ratios hard to interpret, and thus would prefer to have risk ratios. In response to this, you can find several papers that purport to convert an odds ratio (from a logistic regression) into a risk ratio.

Conventional wisdom has it that "odds ratios can be interpreted as risk ratios, as long as the event is rare." Is this true? To what degree?

Let's write a function to examine what the risk ratio is for a given odds ratio. To make this concrete, suppose we're examining the odds ratio and risk ratio of a "case" given the presence or absence of an "exposure." For a given odds ratio, the risk ratio will vary depending on the baseline probability (the probability of a case in the absence of the exposure), so we'll plot the risk ratio with the baseline probability on the x-axis. To aid interpretation, we'll add vertical reference lines at baseline probabilities with default placement at 0.1 and 0.2, two values we might consider small enough to interpret the odds ratio as a risk ratio.
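The steps described above collapse into a closed form: substituting the odds through and dividing by the baseline probability gives RR = OR / (1 - p0 + p0 * OR). A minimal sketch (the function names here are ours, not from the post):

```r
# Risk ratio implied by odds ratio `or` at baseline probability `p0`:
# odds0 = p0/(1-p0); odds1 = or*odds0; p1 = odds1/(1+odds1); RR = p1/p0
rr_from_or = function(or, p0) or / (1 - p0 + p0 * or)

# Agrees with the step-by-step calculation used in the plotting code below
step_by_step = function(or, p0) {
  odds0 = p0 / (1 - p0)
  odds1 = or * odds0
  p1 = odds1 / (1 + odds1)
  p1 / p0
}
```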

SAS

In SAS, we'll write a macro to generate the plot. First, we need to calculate the odds when the exposure is absent, then the odds when it is present (using the designated odds ratio), the probability implied by this exposed odds (for when the exposure is present), and finally the risk ratio. The coding is nothing special, with the possible exception of the loop through the base probabilities (see section 1.11.1).


%macro orvsrr(or=2, ref1=.1, ref2=.2);
data matt;
  do p_base = 0.001, .01 to .99 by 0.01, .999;
    odds_base = p_base / (1 - p_base);
    odds_case = &or * odds_base;
    p_case = odds_case / (1 + odds_case);
    RR = p_case / p_base;
    output;
  end;
run;

title "RR if OR = &or by base probability";
symbol1 i = j l = 1 c = blue v = none;
proc gplot data = matt;
  plot RR * p_base / href = &ref1, &ref2;
  label p_base = "Probability in reference group" RR = "Risk ratio";
run; quit;
%mend orvsrr;

The result of %orvsrr(or=3, ref1=.05, ref2=.1); is shown above. (Since the macro uses keyword parameters, the arguments must be named in the call.)

R

In R, the function to replicate this is remarkably similar. In fact, the calculation of the odds, probabilities, and risk ratio is identical to the SAS version. The c() and seq() functions replace the do loop used in SAS.


orvsrr = function(or=2, ref1=.1, ref2=.2) {
  p_base = c(.001, seq(.01, .99, by = .01), .999)
  odds_base = p_base / (1 - p_base)
  odds_case = or * odds_base
  p_case = odds_case / (1 + odds_case)
  RR = p_case / p_base

  plot(p_base, RR, xlim=c(0, 1), ylim=c(1, or), xaxs="i", yaxs="i",
    xlab="Probability in reference group", ylab="Risk ratio", type = "n")
  title(main = paste("RR if OR =", or, "by base probability"))
  lines(p_base, RR, col="blue")
  abline(v=c(ref1, ref2))
}

The result of orvsrr(2.5,.1,.3) is shown below.

The conventional wisdom is correct, up to a point, as you can find by playing with the function or the macro. If the baseline probability is very low (less than 0.05) and the odds ratio is smallish (less than 3 or so) then the odds ratio overestimates the risk ratio by 10% or less. Larger odds ratios or baseline probabilities result in greater overestimation of the risk ratio.
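To put numbers on that claim (our own quick check): from the closed form RR = OR / (1 - p0 + p0*OR), the ratio OR/RR simplifies to 1 + p0*(OR - 1), so the overestimation factor is easy to tabulate. (The function name overstate is ours, for illustration.)

```r
# Factor by which the odds ratio overstates the risk ratio:
# OR / RR = 1 - p0 + p0*OR = 1 + p0*(OR - 1)
overstate = function(or, p0) 1 + p0 * (or - 1)
overstate(3, 0.05)   # 1.10: a 10% overestimate, at the edge of the rule of thumb
overstate(5, 0.20)   # 1.80: far too large to ignore
```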