
Tuesday, January 14, 2014

Example 2014.1: "Power" for a binomial probability, plus: News!

Hello, folks! I'm pleased to report that Nick and I have turned in the manuscript for the second edition of SAS and R: Data Management, Statistical Analysis, and Graphics. It should be available this summer. New material includes some of our more popular blog posts, plus reproducible analysis, RStudio, and more.

To celebrate, here's a new example. Parenthetically, I was fortunate to be able to present my course, R Boot Camp for SAS Users, at Boston University last week. One attendee cornered me afterward and said: "Ken, R looks great, but you use SAS for all your real work, don't you?" Today's example might help a SAS diehard see why it's helpful to know R.

OK, the example: A colleague contacted me with a typical "5-minute" question. She needed to write a convincing power calculation for the sensitivity-- the probability that a test returns a positive result when the disease is present-- for a fixed number of cases with the disease. I don't know how well this has been explored in the peer-reviewed literature, but I suggested the following process:
1. Guess at the true underlying sensitivity
2. Name a lower bound (less than the truth) which we would like the observed CI to exclude
3. Use basic probability results to report the probability of exclusion, marginally across the unknown number of observed positive tests.

This is not actually a power calculation, of course, but it provides some information about the kinds of statements that it's likely to be possible to make.

R

In R, this is almost trivial. We can get the probability of observing x positive tests using the dbinom() function applied to a vector of numerators and the fixed denominator. Finding the confidence limits is a little trickier. Well, finding them is easy, using lapply() on binom.test(), but extracting them requires sapply() on the results from lapply(). Then it's trivial to generate a logical vector indicating whether the value we want to exclude falls within each CI, and summing the probabilities of the positive-test counts whose CI includes this value gives the desired result.
> truesense = .9
> exclude = .6
> npos = 20
> probobs = dbinom(0:npos,npos,truesense)
> cis = t(sapply(lapply(0:npos,binom.test, n=npos), 
               function(bt) return(bt$conf.int)))
> included = cis[,1] < exclude & cis[,2] > exclude
> myprob = sum(probobs*included)
> myprob
[1] 0.1329533
(Note that I calculated the inclusion probability, not the exclusion probability.)
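Since the note above points out that this is the inclusion probability, the probability the colleague actually cares about-- that the CI excludes the lower bound-- is just its complement. A self-contained recap (same variable names as above):

```r
# Recompute the inclusion probability from scratch, then take its
# complement to get the probability that the CI excludes the bound
truesense = 0.9; exclude = 0.6; npos = 20
probobs = dbinom(0:npos, npos, truesense)
cis = t(sapply(lapply(0:npos, binom.test, n = npos),
               function(bt) bt$conf.int))
included = cis[, 1] < exclude & cis[, 2] > exclude
excl = sum(probobs * !included)
excl   # exclusion probability: 1 - 0.1329533 = 0.8670467
```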

Of course, the real beauty and power of R is how simple it is to turn this into a function:
> probinc = function(truesense, exclude, npos) {
    probobs = dbinom(0:npos, npos, truesense)
    cis = t(sapply(lapply(0:npos, binom.test, n=npos),
                   function(bt) return(bt$conf.int)))
    included = cis[,1] < exclude & cis[,2] > exclude
    return(sum(probobs*included))
  }
> probinc(.9,.6,20)
[1] 0.1329533
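And once it's a function, it's natural to explore how the answer changes with the number of positive cases-- for instance, to see how many cases would be needed before the CI reliably excludes the bound. A quick sketch (the sample sizes below are arbitrary choices for illustration):

```r
probinc = function(truesense, exclude, npos) {
  probobs = dbinom(0:npos, npos, truesense)
  cis = t(sapply(lapply(0:npos, binom.test, n = npos),
                 function(bt) bt$conf.int))
  included = cis[, 1] < exclude & cis[, 2] > exclude
  sum(probobs * included)
}
# the inclusion probability shrinks as the number of cases grows,
# since the CI tightens around the true sensitivity of 0.9
res = sapply(c(20, 50, 100), function(n) probinc(0.9, 0.6, n))
res
```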


SAS

My SAS process took about 4 times as long to write.
I begin by making a data set with a variable recording both the number of events (positive tests) and non-events (false negatives) for each possible value. These serve as weights in the proc freq I use to generate the confidence limits.
%let truesense = .9;
%let exclude = .6;
%let npos = 20;

data rej;
do i = 1 to &npos;
  w = i; event = 1; output;
  w = &npos - i; event = 0; output;
  end;
run;

ods output binomialprop = rej2;
proc freq data = rej;
by i;
tables event /binomial(level='1');
weight w;
run;
Note that I repeat the proc freq for each number of events using the by statement. After saving the results with the ODS system, I have to use proc transpose to make a table with one row for each number of positive tests-- before this, every statistic in the output has its own row.
proc transpose data = rej2 out = rej3;
where name1 eq "XL_BIN" or name1 eq "XU_BIN";
by i;
id name1;
var nvalue1;
run;
In my fourth data set, I can find the probability of observing each number of events and multiply it by a logical test of whether the CI includes my target value. But here there is another twist. The proc freq approach won't generate a CI both for the setting with 0 positive tests and for the setting where all tests are positive in the same run. My solution was to omit the case with 0 positives from the do loop above, but now I need to account for that possibility. Here I use the end= option on the set statement to detect when I've reached the case with all tests positive (sensitivity = 1); by the symmetry of the binomial, the confidence limits for 0 events are one minus those limits, with the roles of upper and lower reversed. Then I'm finally ready to sum the probabilities for the numbers of positive tests whose CI includes the target value.
data rej4;
set rej3 end = eof;
prob = pdf('BINOMIAL',i,&truesense,&npos);
prob_include = prob * ((xl_bin < &exclude) and (xu_bin > &exclude));
output;
if eof then do;
   prob = pdf('BINOMIAL',0,&truesense,&npos);
   prob_include = prob * (((1 - xu_bin) < &exclude) and ((1 - xl_bin) > &exclude));
   output;
   end;
run;

proc means data = rej4 sum;
var prob_include;
run;
Elegance is a subjective thing, I suppose, but to my eye, the R solution is simple and graceful, while the SAS solution is rather awkward. And I didn't even make a macro out of it yet!

An unrelated note about aggregators: We love aggregators! Aggregators collect blogs that have similar coverage for the convenience of readers, and for blog authors they offer a way to reach new audiences. SAS and R is aggregated by R-bloggers, PROC-X, and statsblogs with our permission, and by at least 2 other aggregating services which have never contacted us. If you read this on an aggregator that does not credit the blogs it incorporates, please come visit us at SAS and R. We answer comments there and offer direct subscriptions if you like our content. In addition, no one is allowed to profit by this work under our license; if you see advertisements on this page, the aggregator is violating the terms by which we publish our work.

Tuesday, April 12, 2011

Example 8.34: lack of robustness of t test with small n

Tim Hesterberg has effectively argued for a larger role for resampling based inference in introductory statistics courses (and statistical practice more generally). While the Central Limit Theorem is a glorious result, and the Student t-test remarkably robust, there are subtleties that Hesterberg, Jones and others have pointed out that are not always well understood.

In this entry, we explore the robustness of the Student one-sample t-test in terms of coverage probabilities, examining whether the alpha level is split evenly between the two tails.

R
We begin by defining a function to carry out the simulation study.

runsim = function(FUNCTION=rnorm, n=20, numsim=100000,
                  label="normal")
{
  missupper = numeric(numsim)
  misslower = numeric(numsim)
  for (i in 1:numsim) {
    x = FUNCTION(n)
    testval = t.test(x)
    missupper[i] = testval$conf.int[2] < 0
    misslower[i] = testval$conf.int[1] > 0
  }
  cat("n=", n, ", number of simulations=", numsim, sep="")
  cat(", (true data are ", label, ")\n", sep="")
  cat(round(100*sum(missupper)/numsim, 1),
      "% of simulations had CI below the true value.\n", sep="")
  cat(round(100*sum(misslower)/numsim, 1),
      "% of simulations had CI above the true value.\n", sep="")
}

We can define three functions to explore: one where the data are actually normally distributed (so the central limit theorem doesn't need to be invoked), a symmetric distribution (where the central limit theorem can be used with smaller sample sizes) and a skewed distribution (where the central limit theorem requires very large sample sizes).

reallynormal = function(n) { return(rnorm(n)) }
symmetric = function(n) { return(runif(n, -1, 1)) }
skewed = function(n) { return(rexp(n, 1) - 1) }

The results (at least to us) are somewhat startling. To get the tail probabilities correct when the underlying distribution is actually skewed, we need a huge sample size:

> runsim(reallynormal, n=20, label="normal")
n=20, number of simulations=1e+05, (true data are normal)
2.5% of simulations had CI below the true value.
2.5% of simulations had CI above the true value.

> runsim(symmetric, n=20, label="symmetric")
n=20, number of simulations=1e+05, (true data are symmetric)
2.6% of simulations had CI below the true value.
2.5% of simulations had CI above the true value.

> runsim(skewed, n=20, label="skewed")
n=20, number of simulations=1e+05, (true data are skewed)
7.7% of simulations had CI below the true value.
0.6% of simulations had CI above the true value.

> runsim(skewed, n=100, label="skewed")
n=100, number of simulations=1e+05, (true data are skewed)
4.5% of simulations had CI below the true value.
1.2% of simulations had CI above the true value.

> runsim(skewed, n=500, label="skewed")
n=500, number of simulations=1e+05, (true data are skewed)
3.4% of simulations had CI below the true value.
1.7% of simulations had CI above the true value.
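As a side note, the explicit loop in runsim() can be written more compactly with replicate(), which collects the confidence limits into a matrix directly. A hedged sketch (runsim2 and its arguments are illustrative names, not part of the original post):

```r
# Compact variant of runsim(): replicate() returns a 2 x numsim matrix
# whose rows are the lower and upper confidence limits
runsim2 = function(rfun = rnorm, n = 20, numsim = 10000) {
  cis = replicate(numsim, t.test(rfun(n))$conf.int)
  c(below = mean(cis[2, ] < 0),   # CI entirely below the true mean 0
    above = mean(cis[1, ] > 0))   # CI entirely above the true mean 0
}
runsim2(function(n) rexp(n, 1) - 1, n = 20, numsim = 2000)
```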


SAS

In the SAS version, we'll write a macro that first generates all of the data, then uses by processing to perform the t tests, and finally tallies the results and prints them to the output.

%macro simt(n=20, numsim=100000, myrand=normal(0), label=normal);
data test;
  do sim = 1 to &numsim;
    do i = 1 to &n;
      x = &myrand;
      output;
      end;
    end;
run;

ods select none;
ods output conflimits=ttestcl;
proc ttest data=test;
  by sim;
  var x;
run;
ods select all;

data _null_;
  set ttestcl end=last;
  retain missupper 0 misslower 0;
  missupper = missupper + (upperclmean lt 0);
  misslower = misslower + (lowerclmean gt 0);
  if last then do;
    missupper_pct = compress(round(100 * missupper/&numsim, .1));
    misslower_pct = compress(round(100 * misslower/&numsim, .1));
    file print;
    put "n=&n, number of simulations = &numsim, true data are &label";
    put missupper_pct +(-1) "% of simulations had CI below the true value";
    put misslower_pct +(-1) "% of simulations had CI above the true value";
    end;
run;
%mend simt;

Three features of this code bear comment. First, the retain statement and the end= option allow us to count the number of misses without an additional data step or procedure. Next, the file print statement redirects the results of the put statements to the output window, rather than the log. While this is not strictly needed, the output is where we expect results to appear. Finally, the odd +(-1) in the final two put statements moves the column pointer back one space, so that the percent sign immediately follows the number.

The various distributions and sample sizes used in the R code can be called as follows:

%simt(numsim=100000, n = 20, myrand = normal(0));
%simt(numsim=100000, n = 20, myrand = (2 * uniform(0)) - 1, label=symmetric);
%simt(numsim=100000, n = 20, myrand = (ranexp(0) - 1), label=skewed);
%simt(numsim=100000, n = 100, myrand = (ranexp(0) - 1), label=skewed);
%simt(numsim=100000, n = 500, myrand = (ranexp(0) - 1), label=skewed);

where the code to generate the desired data is passed to the macro as a character string. The results of
%simt(numsim=100000, n = 100, myrand = (ranexp(0) - 1), label=skewed);
are

n=100, number of simulations = 100000, true data are skewed
4.5% of simulations had CI below the true value
1.2% of simulations had CI above the true value

which agrees nicely with the R results above.

We look forward to Tim's talk at the JSM.