To celebrate, here's a new example. Parenthetically, I was fortunate to be able to present my course, "R Boot Camp for SAS Users," at Boston University last week. One attendee cornered me after the course. She said: "Ken, R looks great, but you use SAS for all your real work, don't you?" Today's example might help a SAS diehard see why it can be helpful to know R.
OK, the example: a colleague contacted me with a typical "5-minute" question. She needed to write a convincing power calculation for the sensitivity (the probability that a test returns a positive result when the disease is present) for a fixed number of cases with the disease. I don't know how well this has been explored in the peer-reviewed literature, but I suggested the following process:
1. Guess at the true underlying sensitivity
2. Name a lower bound (less than the truth) which we would like the observed CI to exclude
3. Use basic probability results to report the probability of exclusion, marginally across the unknown number of observed positive tests (a formula making this concrete appears below).
This is not actually a power calculation, of course, but it provides some information about the kinds of statements we are likely to be able to make.
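To be concrete (the notation here is mine, not anything my colleague and I wrote down): let $p$ be the guessed sensitivity, $p_0 < p$ the lower bound we hope to exclude, $n$ the number of diseased cases, and $[L(x), U(x)]$ the confidence interval computed when $x$ of the $n$ tests are positive. Step 3 then amounts to

$$\Pr(\text{CI excludes } p_0) \;=\; \sum_{x=0}^{n} \binom{n}{x}\, p^x (1-p)^{\,n-x}\; \mathbf{1}\{\, p_0 \notin [L(x), U(x)] \,\}.$$

The code below actually computes the complementary inclusion probability, summing the binomial weights over the values of $x$ for which $p_0 \in [L(x), U(x)]$.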
R
In R, this is almost trivial. We can get the probability of observing x positive tests using the dbinom() function applied to a vector of numerators and the fixed denominator. Finding the confidence limits is a little trickier. Well, finding them is easy, using lapply() on binom.test(), but extracting them requires using sapply() on the results from lapply(). Then it's trivial to generate a logical vector indicating whether the value we want to exclude falls in each CI, and summing the probabilities of the counts whose CI includes this value gives the desired result.
> truesense = .9
> exclude = .6
> npos = 20
> probobs = dbinom(0:npos, npos, truesense)
> cis = t(sapply(lapply(0:npos, binom.test, n=npos), function(bt) return(bt$conf.int)))
> included = cis[,1] < exclude & cis[,2] > exclude
> myprob = sum(probobs*included)
> myprob
[1] 0.1329533

(Note that I calculated the inclusion probability, not the exclusion probability.)
Of course, the real beauty and power of R is how simple it is to turn this into a function:
> probinc = function(truesense, exclude, npos) {
    probobs = dbinom(0:npos, npos, truesense)
    cis = t(sapply(lapply(0:npos, binom.test, n=npos), function(bt) return(bt$conf.int)))
    included = cis[,1] < exclude & cis[,2] > exclude
    return(sum(probobs*included))
  }
> probinc(.9, .6, 20)
[1] 0.1329533
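Having a function also makes it easy to ask follow-up questions, for example how the inclusion probability shrinks as the number of diseased cases grows. A minimal sketch (these sample sizes are mine, purely for illustration; output omitted):

> # inclusion probability for several hypothetical numbers of cases
> sapply(c(20, 40, 60, 80), function(n) probinc(.9, .6, n))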
SAS
My SAS process took about 4 times as long to write.
I begin by making a data set with, for each possible number of positive tests, one row counting the events (positive tests) and one counting the non-events (false negatives). These counts serve as weights in the proc freq I use to generate the confidence limits.
%let truesense = .9;
%let exclude = .6;
%let npos = 20;

data rej;
do i = 1 to &npos;
  w = i;
  event = 1;
  output;
  w = &npos - i;
  event = 0;
  output;
end;
run;

ods output binomialprop = rej2;
proc freq data = rej;
by i;
tables event / binomial(level='1');
weight w;
run;

Note that I repeat the proc freq for each number of events using the by statement. After saving the results with the ODS system, I have to use proc transpose to make a table with one row for each number of positive tests-- before this, every statistic in the output has its own row.
proc transpose data = rej2 out = rej3;
where name1 eq "XL_BIN" or name1 eq "XU_BIN";
by i;
id name1;
var nvalue1;
run;

In my fourth data set, I can find the probability of observing each number of events and multiply it by a logical test of whether the CI included my target value or not. But here there is another twist. The proc freq approach won't generate a CI for both the situation where there are 0 positive tests and the setting where all are positive in the same run. My solution was to omit the case with 0 positives from my do loop above, but now I need to account for that possibility. I use the end= option on the set statement to detect when I've reached the case where all tests are positive (sensitivity = 1). Then I can use the symmetry of the exact binomial CI to find the confidence limits for the case with 0 events: they are the limits for the all-positive case, subtracted from 1 and reversed.
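As a quick sanity check on that symmetry (this R snippet is my addition, not part of the SAS solution), the exact CI for 0 positives out of 20 should match the CI for 20 out of 20 subtracted from 1 and reversed:

# exact (Clopper-Pearson) CI for 0 positives out of 20
binom.test(0, 20)$conf.int
# reflected CI for 20 out of 20 -- same limits
1 - rev(binom.test(20, 20)$conf.int)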
Then I'm finally ready to sum up the probabilities associated with the numbers of positive tests for which the CI includes the target value.

data rej4;
set rej3 end = eof;
prob = pdf('BINOMIAL', i, &truesense, &npos);
prob_include = prob * ((xl_bin < &exclude) and (xu_bin > &exclude));
output;
if eof then do;
  prob = pdf('BINOMIAL', 0, &truesense, &npos);
  prob_include = prob * (((1 - xu_bin) < &exclude) and ((1 - xl_bin) > &exclude));
  output;
end;
run;

proc means data = rej4 sum;
var prob_include;
run;

Elegance is a subjective thing, I suppose, but to my eye the R solution is simple and graceful, while the SAS solution is rather awkward. And I didn't even make a macro out of it yet!
Comments

Ken, welcome back to blogging. I understand your comment about elegance in R vs. SAS, which is why I encourage you to read my statistical programming blog, which features SAS/IML programs. As you know, the SAS/IML language is a matrix language with syntax similar to R. As you infer, having a high-level language enables you to write compact code.
Elegance depends not only on the tools one uses but also on the options the software provides. In the case of your SAS example, your code is unnecessarily complicated because you use ODS OUTPUT instead of the OUTPUT statement in PROC FREQ. If you use the OUTPUT statement, you do not need PROC TRANSPOSE, nor do you need the complications of testing for EOF in the DATA step. The code looks something like this (hope the comment box doesn't eat my code!):
proc freq data = rej noprint;
by i;
tables event /binomial(level='1');
weight w;
output out=rej3 binomial;
run;
data rej4;
set rej3;
prob = pdf('BINOMIAL',...);
prob_include = ...;
run;
Again, welcome back. I look forward to reading future columns.
Thanks, Rick-- Your approach is much cleaner.
As a rule, I prefer general tools, like ODS output data sets, over idiosyncratic ones, like the various output options available in some procedures. Knowing how to use ODS data sets means that you can always(?) gain access to displayed output. But it's often the case that idiosyncratic tools are quicker or less painful to use.