## Saturday, October 24, 2009

### Example 7.16: assess robustness of permutation test to violations of exchangeability assumption

Permutation tests (section 2.4.3) are a form of resampling-based inference that can be used to compare two groups. A simple univariate two-group permutation test requires that the group labels for the observations are exchangeable under the null hypothesis of equal distributions, but allows relaxation of specific distributional assumptions required by parametric procedures such as the t-test (section 2.4.1).
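To make exchangeability concrete, the following sketch (not part of the original example) computes a two-group permutation test by hand: under the null hypothesis of equal distributions, every relabeling of the observations is equally likely, so the observed difference in means can be compared against the differences obtained by shuffling the labels.

```r
set.seed(42)
y <- c(rnorm(10, 0, 1), rnorm(10, 0, 1))  # two groups drawn from the same distribution
x <- rep(0:1, each = 10)                  # group labels
obs <- mean(y[x == 1]) - mean(y[x == 0])  # observed difference in means
# under exchangeability, every relabeling is equally likely under the null
perm <- replicate(2000, {
   xs <- sample(x)                        # shuffle the labels
   mean(y[xs == 1]) - mean(y[xs == 0])
})
pval <- mean(abs(perm) >= abs(obs))       # two-sided permutation p-value
```

Since the two groups here really are identically distributed, such a p-value should fall below 0.05 only about 5% of the time across repeated runs; the simulation study below asks what happens to that guarantee when exchangeability fails.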

Here we demonstrate how to carry out a small simulation study of the robustness of the Type I error rate of a permutation test under two scenarios that violate exchangeability by having different shapes under the null hypothesis. One might encounter such a situation in practice in a study of growth in which unequal follow-up time is allotted to the groups. Our study is analogous to assessing the t-test assuming equal variances, when the variances are known to be unequal and/or the distributions non-normal under the null.
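The t-test analogy can be checked the same way. As a minimal sketch (not part of the original example; the parameters are chosen to match scenario I below), one can estimate the empirical Type I error rate of the equal-variance t-test when the variances actually differ:

```r
set.seed(1)
numsim <- 2000
n <- 10
res <- replicate(numsim, {
   y1 <- rnorm(n, 0, 1)                       # group 1: sd = 1
   y2 <- rnorm(n, 0, 6)                       # group 2: sd = 6
   t.test(y1, y2, var.equal = TRUE)$p.value   # pooled-variance t-test
})
mean(res <= 0.05)                             # empirical Type I error rate at alpha = 0.05
```

With equal group sizes the pooled t-test is known to be relatively forgiving of unequal variances; the permutation-test simulations below use the same design, so the results can be compared directly.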

Scenario I: group 1 is distributed as a Normal random variable (section 1.10.5) with mean 0 and standard deviation 1, while group 2 is distributed as a Normal random variable with mean 0 and standard deviation 6. Here the distributions are both symmetric, but have markedly different variances.

Scenario II: group 1 is distributed as a Normal random variable with mean 6 and standard deviation 1, while group 2 is distributed as an exponential random variable (section 1.10.7) with mean 6 and standard deviation 6. Here the distributions have markedly different skewness, as well as variance.

We use an alpha level of 0.05, set the sample size in each group to 10, and generate 10,000 simulated data sets. Instead of the exact permutation test (which would enumerate all 184,756 possible labelings of the 20 observations), we use the asymptotically equivalent Monte Carlo hypothesis test (Dwass, 1957), generating 1,000 samples from the permutation distribution for each simulated data set (thank heavens for fast computers!).

**R**
In R, we set up some constants and a vector for results that we will use. The group vector can be generated once, with the outcomes generated inside the loop. The oneway_test() function from the coin package is used to run the permutation test and to return the p-value (the first element of the vector provided by pvalue()). Finally, the prop.test() function (section 2.1.9) is used to calculate the 95% confidence interval for the observed level of the test.

For scenario I, the code is:
```r
library(coin)
numsim <- 10000
numperm <- 1000
n <- 10
res <- numeric(numsim)
x <- c(rep(0, n), rep(1, n))
for (i in 1:numsim) {
   y1 <- rnorm(n, 0, 1)
   y2 <- rnorm(n, 0, 6)
   y <- c(y1, y2)
   res[i] <- pvalue(oneway_test(y ~ as.factor(x),
      distribution=approximate(numperm)))[1]
}
```

The desired results can be generated as in section 2.1.9 by summarizing the res vector:
`prop.test(sum(res<=0.05), numsim)`

The result follows:
```
	1-sample proportions test with continuity correction

data:  sum(res <= 0.05) out of numsim, null probability 0.5
X-squared = 7574.221, df = 1, p-value < 2.2e-16
alternative hypothesis: true p is not equal to 0.5
95 percent confidence interval:
 0.06009200 0.06984569
sample estimates:
     p 
0.0648 
```

We see that the permutation test is somewhat anti-conservative given such an extreme difference in variances.

For scenario II, the code is the same with the exception of:
```r
   y1 <- rnorm(n, 6, 1)
   y2 <- rexp(n, 1/6)
```

This yields the following results:
```
> prop.test(sum(res<=0.05), numsim)

	1-sample proportions test with continuity correction

data:  sum(res <= 0.05) out of numsim, null probability 0.5
X-squared = 5973.744, df = 1, p-value < 2.2e-16
alternative hypothesis: true p is not equal to 0.5
95 percent confidence interval:
 0.1073820 0.1199172
sample estimates:
     p 
0.1135 
```

The nominal Type I error rate of 0.05 is not preserved in this situation.

**SAS**
In SAS, we begin by making data for scenario I, using the looping tools described in section 1.11.1.

```sas
data test;
do numsim = 1 to 10000;
   do group = 0,1;
      do i = 1 to 10;
         if group = 0 then y = rand('NORMAL',0,1);
         else y = rand('NORMAL',0,6);
         output;
      end;
   end;
end;
run;
```

Next, we perform the permutation test for equal means as shown in section 2.4.3. The by statement repeats the test for each value of numsim, while the ods statements, as demonstrated throughout the examples in the book, suppress copious output and save the relevant results. The name of the output object was found using the ods trace on statement, as discussed in section A.7.1.

```sas
ods select none;
ods output datascoresmc = kkout;
proc npar1way data = test;
   by numsim;
   class group;
   var y;
   exact scores = data / mc n = 1000;
run;
ods select all;
```

Next, we make a new data set from the saved output, keeping just the lines we need and generating a new variable that records whether the test rejected the null hypothesis. The names and values of the variables were found using a proc print statement, which is not shown.

```sas
data props;
set kkout;
where (name1 = "MCP2_DATA");
reject = (nvalue1 lt .05);
run;
```

The estimated rejection probability in this setting is found as discussed in section 2.1.9:

```sas
proc freq data = props;
tables reject / binomial(level='1');
run;
```

The resulting output follows:
```
                                   Cumulative    Cumulative
reject    Frequency     Percent     Frequency      Percent
___________________________________________________________
     0        9339       93.39          9339        93.39
     1         661        6.61         10000       100.00

                  Proportion                0.0661
                  ASE                       0.0025
                  95% Lower Conf Limit      0.0612
                  95% Upper Conf Limit      0.0710

                  Exact Conf Limits
                  95% Lower Conf Limit      0.0613
                  95% Upper Conf Limit      0.0711

                  ASE under H0              0.0050
                  Z                       -86.7800
                  One-sided Pr <  Z         <.0001
                  Two-sided Pr > |Z|        <.0001

                               Sample Size = 10000
```

These results agree closely with the results from R.

To produce scenario II, we replace
```sas
         if group = 0 then y = rand('NORMAL',0,1);
         else y = rand('NORMAL',0,6);
```

with
```sas
         if group = 0 then y = rand('NORMAL',6,1);
         else y = rand('EXPONENTIAL') * 6;
```

with the following results:
```
                                   Cumulative    Cumulative
reject    Frequency     Percent     Frequency      Percent
___________________________________________________________
     0        8877       88.77          8877        88.77
     1        1123       11.23         10000       100.00

                  Proportion                0.1123
                  ASE                       0.0032
                  95% Lower Conf Limit      0.1061
                  95% Upper Conf Limit      0.1185

                  Exact Conf Limits
                  95% Lower Conf Limit      0.1062
                  95% Upper Conf Limit      0.1187

                  ASE under H0              0.0050
                  Z                       -77.5400
                  One-sided Pr <  Z         <.0001
                  Two-sided Pr > |Z|        <.0001

                               Sample Size = 10000
```

Again, these results are comfortably similar to those found in R and demonstrate that this extreme violation of exchangeability results in a noticeable anti-conservative bias.

However, note that with additional observations, the anti-conservative bias diminishes considerably. With 100 observations per group (replacing `do i = 1 to 10` with `do i = 1 to 100`), the following results are generated:
```
                                   Cumulative    Cumulative
reject    Frequency     Percent     Frequency      Percent
___________________________________________________________
     0        9452       94.52          9452        94.52
     1         548        5.48         10000       100.00

                  Proportion                0.0548
                  ASE                       0.0023
                  95% Lower Conf Limit      0.0503
                  95% Upper Conf Limit      0.0593

                  Exact Conf Limits
                  95% Lower Conf Limit      0.0504
                  95% Upper Conf Limit      0.0594

                  ASE under H0              0.0050
                  Z                       -89.0400
                  One-sided Pr <  Z         <.0001
                  Two-sided Pr > |Z|        <.0001

                               Sample Size = 10000
```

revealing a negligible bias.

Our conclusion: while the permutation test is somewhat robust to violations of exchangeability, when shapes and variances differ radically the results may be anti-conservatively biased.

It bears noting here that if there were no reason to expect different distributional shapes under the null, but we just happened to observe the large differences seen here, we would interpret these results as demonstrating the poor power of a permutation test based on the means to detect differences in shape alone.

Dwass M. Modified randomization tests for nonparametric hypotheses. Annals of Mathematical Statistics, 28:181-187, 1957.