
Monday, February 28, 2011

Example 8.28: should we buy snowstorm insurance?

It's been a long winter so far in New England, with many a snowstorm. In this entry, we consider a simulation to complement the analytic solution to a probability problem concerning snow.

Consider a company that buys a policy to insure its revenue in the event of major snowstorms that shut down business. The policy pays nothing for the first such snowstorm of the year and $10,000 for each one thereafter, until the end of the year. The number of major snowstorms per year that shut down business is assumed to have a Poisson distribution with mean 1.5. What is the expected amount paid to the company under this policy during a one-year period?

Let SNOW be the number of snowstorms and PAY the amount paid out by the insurance. The following chart may be useful in discerning the pattern:

SNOW    PAY     10000*(SNOW-1)
   0        0           -10000
   1        0                0
   2    10000            10000
   3    20000            20000

The analytic solution is straightforward, but involves truncating the payout at the first snowstorm. Since the random variable SNOW ~ Poisson(1.5), we know that E[SNOW] = 1.5, and E[10000*(SNOW-1)] = 10000*E[SNOW] - 10000 = 15000 - 10000 = 5000.

E[PAY] is equal to E[10000*(SNOW-1)] + 10000*P(SNOW=0), so the exact answer is

10000*P(SNOW=0) + 15000 - 10000 =
10000*exp(-1.5) + 5000 = $7231.30

Here the advantage of simulation is that it may provide a useful check on the results, as well as a ready measure of variability. In this situation, the code is quite simple, but the approach is powerful.

R

numsim = 1000000
snow = rpois(numsim, 1.5)
pay = snow - 1 # one payment unit per storm after the first
pay[snow==0] = 0 # no negative payout when there are no storms
sim = mean(pay*10000)
analytic = 10000*(dpois(0, 1.5) + 1.5 - 1)

Yielding the following:

> sim
[1] 7249.55
> analytic
[1] 7231.302
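
The entry notes that simulation provides "a ready measure of variability," though the code above does not compute one. A minimal sketch (our addition, with a seed for reproducibility and pmax() replacing the two-step truncation) estimates the standard deviation of the payout:

```r
# Reproducible re-run of the payout simulation
set.seed(42)
numsim = 1000000
snow = rpois(numsim, 1.5)          # number of storms per simulated year
pay = 10000 * pmax(snow - 1, 0)    # pmax() truncates at zero directly
mean(pay)                          # close to the analytic 7231.30
sd(pay)                            # simulated variability of the payout
```

In our runs the standard deviation comes out near $10,000, so the payout is quite variable relative to its mean.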


SAS
The simulation and analytic solutions are also straightforward in SAS. Here the analytic result is calculated only once.


data snow_insurance;
do i = 1 to 1000000;
nsnow = ranpoi(0, 1.5);
payout = max(nsnow -1, 0) * 10000;
output;
end;
analytic = 10000 * (cdf("POISSON", 0, 1.5) + 1.5 -1);
output;
run;

proc means data=snow_insurance mean;
var payout analytic;
run;

This results in the following output:

Variable Mean
------------------------
payout 7236.96
analytic 7231.30
------------------------

Thursday, November 12, 2009

Example 7.17: The Smith College diploma problem

Smith College is a residential women's liberal arts college in Northampton, MA that is steeped in tradition. One such tradition is to give each student at graduation a diploma at random (or more accurately, in a haphazard fashion). At the end of the ceremony, a diploma circle is formed, and students pass the diplomas that they receive to the person next to them, and step out once they've received their diploma.

This problem, also known as the "hat-check" problem, is featured in Fred Mosteller's "Fifty Challenging Problems in Probability (with solutions)". Variants provide great fodder for probability courses.

The analytic (closed-form) solution for the expected number of students who receive their own diplomas on the first pass is very straightforward. Let X_i be the indicator that the ith student receives her diploma; E[X_i] = 1/n for all i, since the diplomas are uniformly distributed. If T is defined as the sum of X_1 through X_n, then E[T] = n * 1/n = 1 by the linearity of expectation. It is sometimes surprising to students that this result does not depend on n. The variance is trickier, since the indicators are not independent (if the first student receives her diploma, the probability that the others receive theirs increases ever so slightly).
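
Though the variance is left as an aside above, it can be worked out with the same indicators: Var(X_i) = (1/n)(1 - 1/n), and for i != j, Cov(X_i, X_j) = 1/(n(n-1)) - 1/n^2, so Var(T) = n Var(X_i) + n(n-1) Cov(X_i, X_j) = 1 exactly (for any n >= 2). A short sketch (our addition) checks the arithmetic:

```r
# Analytic variance of the number of matches for n = 650
n <- 650
var_xi <- (1/n) * (1 - 1/n)            # variance of one indicator
cov_xij <- 1/(n * (n - 1)) - 1/n^2     # covariance of two distinct indicators
var_t <- n * var_xi + n * (n - 1) * cov_xij
var_t                                  # 1, so the standard deviation is 1 as well
```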

For students, the use of empirical (simulation-based) problem solving is increasingly common as a means to complement and enhance analytic (closed-form) solutions. Here we illustrate how to simulate the expected number of students who receive their diploma as well as the standard deviation of that quantity. We assume that n=650.

R
In R, we set up some constants and a results vector. The vector of students can be generated once, with the permuted vector of diplomas generated inside the loop. The == operator (section B.4.2) compares the elements of the two vectors.

numsim = 100000
n = 650
res = numeric(numsim)
students = 1:n
for (i in 1:numsim) {
diploma = sample(students, n)
res[i] = sum(students==diploma)
}


This generates the output:

> table(res)
res
    0     1     2     3     4     5     6     7     8
36568 36866 18545  6085  1590   295    40     9     2
> mean(res)
[1] 1.00365
> sd(res)
[1] 0.9995232

The simulated mean and standard deviation of the number of students who receive their own diplomas on the first try are both approximately 1, in line with the analytic results.
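
As an aside not in the original entry, the number of matches in a random permutation converges to a Poisson(1) distribution as n grows, which lines up well with the simulated table above:

```r
# Poisson(1) probabilities for 0 through 8 matches
round(dpois(0:8, lambda = 1), 4)
# 0.3679 0.3679 0.1839 0.0613 0.0153 0.0031 0.0005 0.0001 0.0000
```

These agree closely with the simulated proportions (e.g., 36568/100000 = 0.3657 for zero matches).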


SAS

In SAS we prepare by making a data set with all 650 * 100,000 diplomas by looping through the 100,000 simulations, creating 650 diplomas each time. We also assign a random number to each diploma. Then we sort on the random value within each simulation.

data dips;
do sim = 1 to 100000;
do diploma = 1 to 650;
randorder = uniform(0);
output;
end;
end;
run;

proc sort data=dips;
by sim randorder;
run;

Next, we create a student identifier based on position in the data set and mark whether the reordered diploma matches the student ID. Finally, we count how many times the diploma matches the student using proc summary (section 2.1). It would also be relatively easy to count this manually within the data step.

data match;
set dips;
student = mod(_n_ - 1, 650) + 1;
match = (student eq diploma);
run;

proc summary data=match;
by sim;
var match;
output out=summatch sum=totalmatches;
run;

We can generate a table of the results using proc freq (section 2.3), and the mean and standard deviation using proc means.


proc freq data=summatch;
tables totalmatches / nocum;
run;


totalmatches Frequency Percent
0 36726 36.73
1 36734 36.73
2 18359 18.36
3 6241 6.24
4 1578 1.58
5 301 0.30
6 57 0.06
7 4 0.00


proc means data=summatch mean std;
var totalmatches;
run;


Analysis Variable : totalmatches
Mean Std Dev
1.0036200 1.0031734

Saturday, July 25, 2009

Example 7.7: Tabulate binomial probabilities

Suppose we want to assess the probability P(X=x) for a binomial random variable with n = 10 and p = .81, .84, ..., .99. This could be helpful, for example, in various game settings.

In SAS, we find the probability that X=x using differences in the CDF calculated via the cdf function (1.10.1). We loop through the various binomial probabilities and values of x using the do ... end structure (section 1.11.1). Finally, we store the probabilities in implicitly named variables via an array statement (section 1.11.5).

SAS

data table (drop=j);
array probs [11] prob0 prob1 - prob10;
do p = .81 to .99 by .03;
do j = 0 to 10;
if j eq 0 then probs[j+1] = cdf("BINOMIAL", 0, p, 10);
else probs[j+1] = cdf("BINOMIAL", j, p, 10)
- cdf("BINOMIAL", j-1, p, 10);
end;
output;
end;
run;


The results are printed with two decimal places using the format statement (section 1.2.4). The options statement (section A.4) uses the ls option to narrow the output width to be compatible with Blogger.


options ls=64;
proc print data=table noobs;
var p prob0 prob1 - prob10;
format _numeric_ 3.2;
run;


And the results are:

p     prob0 prob1 prob2 prob3 prob4 prob5 prob6 prob7 prob8 prob9 prob10
.81     .00   .00   .00   .00   .00   .02   .08   .19   .30   .29    .12
.84     .00   .00   .00   .00   .00   .01   .05   .15   .29   .33    .17
.87     .00   .00   .00   .00   .00   .00   .03   .10   .25   .37    .25
.90     .00   .00   .00   .00   .00   .00   .01   .06   .19   .39    .35
.93     .00   .00   .00   .00   .00   .00   .00   .02   .12   .36    .48
.96     .00   .00   .00   .00   .00   .00   .00   .01   .05   .28    .66
.99     .00   .00   .00   .00   .00   .00   .00   .00   .00   .09    .90


R
In R we start by making a vector of the binomial probabilities, using the : operator (section 1.11.3) to generate a sequence of integers. After creating a matrix (section B.4.4) to hold the results, we loop (section 1.11.1) through the binomial probabilities, calling the dbinom() function (section 1.1) to find the probability that X=x. This calculation is nested within the round() function (section 1.2.5) to limit the digits displayed. Finally, we bind the vector of binomial probabilities to the results with cbind().


p <- .78 + (3 * 1:7)/100
allprobs <- matrix(nrow=length(p), ncol=11)
for (i in 1:length(p)) {
allprobs[i,] <- round(dbinom(0:10, 10, p[i]),2)
}
probtab <- cbind(p, allprobs) # avoid masking the built-in table() function
probtab


With the result:

p
[1,] 0.81 0 0 0 0 0 0.02 0.08 0.19 0.30 0.29 0.12
[2,] 0.84 0 0 0 0 0 0.01 0.05 0.15 0.29 0.33 0.17
[3,] 0.87 0 0 0 0 0 0.00 0.03 0.10 0.25 0.37 0.25
[4,] 0.90 0 0 0 0 0 0.00 0.01 0.06 0.19 0.39 0.35
[5,] 0.93 0 0 0 0 0 0.00 0.00 0.02 0.12 0.36 0.48
[6,] 0.96 0 0 0 0 0 0.00 0.00 0.01 0.05 0.28 0.66
[7,] 0.99 0 0 0 0 0 0.00 0.00 0.00 0.00 0.09 0.90
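
As a quick consistency check (our addition), the CDF-difference approach used in the SAS code and the dbinom() call used here should agree; a sketch for a single value of p:

```r
# P(X = j) via the density versus via CDF differences
p <- 0.9
direct <- dbinom(0:10, 10, p)
viacdf <- pbinom(0:10, 10, p) - pbinom(-1:9, 10, p)  # pbinom(-1, ...) is 0
all.equal(direct, viacdf)  # TRUE
```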


As with the example of jittered scatterplots for dichotomous y by continuous x (Examples 7.3, 7.4, and 7.5), we will revisit this example for improvement in later entries.

Saturday, July 18, 2009

Book excerpts now posted

We've posted excerpts from the book on the book website. The excerpts include Chapter 3 (regression and ANOVA) in its entirety. This demonstrates how the entries (the generic descriptions of software functions) and the worked examples reinforce each other.

We've also posted selected entries from each of the other chapters, describing probability distributions and random variables, two sample comparisons (T-tests, Wilcoxon rank-sum test, Kolmogorov-Smirnov test, median test), linear mixed models for correlated data (including general correlation, random intercepts and slopes, and more complex structures), graphics (scatterplots, histograms, saving plots as PDF), and power calculations (analytic and Monte Carlo).