
Friday, February 25, 2011

Example 8.27: using regular expressions to read data with variable number of words in a field

A more or less anonymous reader commented on our last post, where we were reading data from a file with a varying number of fields. The format of the file was:


1 Las Vegas, NV --- 53.3 --- --- 1
2 Sacramento, CA --- 42.3 --- --- 2

The complication in the number of fields related to spaces in the city field (which could vary from one to three words).

The reader's elegant solution took full advantage of R's regular expressions: a powerful and concise language for processing text.

file <- readLines("http://www.math.smith.edu/r/data/ella.txt")
file <- gsub("^([0-9]* )(.*),( .*)$", "\\1'\\2'\\3", file)
tc <- textConnection(file)
processed <- read.table(tc, sep=" ", na.strings="---")
close(tc)

The main work is done by the gsub() function, which processes each line of the input file and puts the city values in quotes (so that each is seen as a single field when read.table() is run).

While not straightforward to parse, the regular expression pattern can be broken into parts. The string ^([0-9]* ) matches any digits (characters 0-9) at the beginning of the line (indicated by the "^"), followed by a space; the "*" means that zero or more such 0-9 characters may appear. The string (.*), matches any number of characters followed by a comma, while the last pattern, ( .*)$, matches everything from the next space to the end of the line (indicated by the "$"). The second argument to gsub() gives the text to replace each matched string with. Within it, the "\\n" syntax reproduces the text matched between the nth pair of parentheses, so "\\1'\\2'\\3" reproduces the three parenthesized groups with single quotes around the second group (the city). Because the comma in the second clause "(.*)," falls outside the parentheses, it is matched but not reproduced, and so disappears from the result.
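To see the substitution in action, we can apply it to a single line copied from the data listing above (a minimal sketch of the one-line case):

```r
# One line from the file, with a two-word city name
line <- "1 Las Vegas, NV --- 53.3 --- --- 1"
# The city (group 2) is wrapped in quotes; the comma is dropped
gsub("^([0-9]* )(.*),( .*)$", "\\1'\\2'\\3", line)
# [1] "1 'Las Vegas' NV --- 53.3 --- --- 1"
```

The same call works unchanged for one-word and three-word cities, since (.*) matches any number of characters before the comma.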

It may be slightly easier to understand the code if we note that the third clause is unnecessary and split the remaining clauses into two separate gsub() commands, as follows.

file <- readLines("http://www.math.smith.edu/r/data/ella.txt")
file <- gsub("^([0-9]* )", "\\1'", file)
file <- gsub("(.*),", "\\1'", file)
tc <- textConnection(file)
processed <- read.table(tc, sep=" ", na.strings="---")
close(tc)

The first two elements of the file vector become:

"1 'Las Vegas' NV --- 53.3 --- --- 1"
"2 'Sacramento' CA --- 42.3 --- --- 2"

The use of the na.strings option to read.table() is a more appropriate approach to recoding the missing values than the one we used previously. Overall, we're impressed with the commenter's use of regular expressions in this example, and are thinking more about Nolan and Temple Lang's focus on them as part of a modern statistical computing curriculum.

Tuesday, July 6, 2010

Example 8.1: Digits of Pi

Do the digits of Pi appear in a random order? If so, the trillions of digits of Pi that have been calculated could serve as a useful random number generator. This post was inspired by this entry on Matt Asher's blog.

Generating pseudo-random numbers is a key piece of much of modern statistical practice, whether for Markov chain Monte Carlo applications or simpler simulations of what raw data would look like under given circumstances.

Generating sufficiently random pseudo-random numbers is not trivial, and many methods exist for doing so. Unfortunately, there's no way to prove that a series of pseudo-random numbers is indistinguishable from a truly random series. Instead, there are what amount to ad hoc tests, any of which an insufficiently random series might fail.
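As a concrete illustration of such an ad hoc test (applied here to R's own generator rather than to the digits of Pi; the seed and sample size are arbitrary choices), a one-way chi-square test of equal digit frequencies takes only a few lines:

```r
# Draw 100,000 pseudo-random digits and test whether each of the
# digits 0-9 appears with probability 0.1
set.seed(42)   # arbitrary seed, for reproducibility
digits <- sample(0:9, 100000, replace=TRUE)
chisq.test(table(digits), p=rep(0.1, 10))
```

A large p-value is consistent with equal frequencies, though, as noted above, passing any one such test does not prove the series is random.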

The first 10 million digits of Pi can be downloaded from here. Here, we explore a couple of simple ad hoc checks of randomness. We'll try something a little trickier in the next entry.


SAS

We start by reading in the data. The file contains one logical line with a length of 10,000,000 characters. We tell SAS to read a one-digit variable and hold the place in the line with the @@ specifier. For later use, we'll also create a variable with the order of each digit, using the _n_ implied variable (section 1.4.15).


data test;
infile "c:\ken\pi-10million.txt" lrecl=10000000;
input digit 1. @@;
order = _n_;
run;


We can do a simple one-way chi-square test for equal probability of each digit in proc freq.


proc freq data = test;
tables digit/ chisq;
run;

Chi-Square 2.7838
DF 9
Pr > ChiSq 0.9723
Sample Size = 10000000


We didn't display the counts for each digit, but none was more than 0.11% away from the expected 1,000,000 occurrences.

Another simple check is to assess autocorrelation. We can do this in proc autoreg. The dw=2 option calculates the Durbin-Watson statistic for adjacent and alternating residuals. We limit the observations to 1,000,000 digits for compatibility with R.


proc autoreg data=test (obs = 1000000);
model digit = / dw=2 dwprob;
run;
Durbin-Watson Statistics

Order DW Pr < DW Pr > DW
1 2.0028 0.9175 0.0825
2 1.9996 0.4130 0.5870

We might want to replicate this set of tests for series of 4 digits instead. To do this, we just tell the data step to read the line in 4-digit chunks.

data test4;
infile "c:\ken\pi-10million.txt" lrecl=10000000;
input digit4 4. @@;
order = _n_;
run;

proc freq data = test4;
tables digit4/ chisq;
run;
Chi-Square 9882.9520
DF 9999
Pr > ChiSq 0.7936

Sample Size = 2500000

proc autoreg data=test4 (obs = 1000000);
model digit4 = / dw=3 dwprob;
run;
Durbin-Watson Statistics

Order DW Pr < DW Pr > DW
1 2.0014 0.7527 0.2473
2 1.9976 0.1181 0.8819
3 2.0007 0.6397 0.3603

So far, we see no evidence of a lack of randomness.

R

In R, we use the readLines() function to create a 10,000,000-digit scalar object. In the following line we split the digits using the strsplit() function (as in section 6.4.1). This results in a list object, to which the as.numeric() function (which forces the digit characters to be read as numeric, section 1.4.2) cannot be applied. The unlist() function converts the list into a vector first, so that as.numeric() will work. Then the chisq.test() function performs the one-way chi-squared test.


mypi = readLines("c:/ken/pi-10million.txt", warn=FALSE)
piby1 = as.numeric(unlist(strsplit(mypi,"")))
chisq.test(table(piby1), p=rep(0.1, 10))

This generates the following output:

Chi-squared test for given probabilities

data: table(piby1)
X-squared = 2.7838, df = 9, p-value = 0.9723


Alternatively, it's trivial to write a function to automatically test for equal probabilities of all categories.


onewaychi = function(datavector) {
  datatable = table(datavector)
  expect = rep(length(datavector)/length(datatable), length(datatable))
  chi2 = sum(((datatable - expect)^2)/expect)
  p = 1 - pchisq(chi2, length(datatable) - 1)
  return(p)
}

> onewaychi(piby1)
[1] 0.972252



The Durbin-Watson test can be generated by the dwtest function, from the lmtest package. Using all 10,000,000 digits causes an error, so we use only the first 1,000,000.


> library(lmtest)
> dwtest(lm(piby1[1:1000000] ~ 1))

Durbin-Watson test

data: lm(piby1[1:1e+06] ~ 1)
DW = 2.0028, p-value = 0.9176
alternative hypothesis: true autocorrelation is greater than 0


To examine the digits in groups of 4, we read the digit vector into a matrix with 4 columns, then multiply each column by the appropriate power of 10 and add the columns together. Alternatively, we could use the paste() function (section 1.4.5) to glue the digits together as a character string, then use as.numeric() to convert back to numbers.


> pimat = matrix(piby1, ncol = 4,byrow=TRUE)
> head(pimat)
[,1] [,2] [,3] [,4]
[1,] 1 4 1 5
[2,] 9 2 6 5
[3,] 3 5 8 9
[4,] 7 9 3 2
[5,] 3 8 4 6
[6,] 2 6 4 3

> piby4 = pimat[,1] * 1000 + pimat[,2] * 100 +
+ pimat[,3] * 10 + pimat[,4]
> head(piby4)
[1] 1415 9265 3589 7932 3846 2643

# alternate approach
# piby4_v2 = as.numeric(paste(pimat[,1], pimat[,2],
# pimat[,3], pimat[,4], sep=""))

> onewaychi(piby4)
[1] 0.7936358

> dwtest(lm(piby4[1:1000000] ~ 1))
Durbin-Watson test

data: lm(piby4[1:1e+06] ~ 1)
DW = 2.0014, p-value = 0.753
alternative hypothesis: true autocorrelation is greater than 0

Thursday, September 24, 2009

Example 7.13: Read a file with two lines per observation

In example 7.6 we showed how to retrieve the Amazon sales rank of a book. A cron job on one of our machines grabs the sales rank hourly. We’d like to use this data to satisfy our curiosity about when and how often a book sells.

A complication is that the result of the cron job is a file with the time of the sales rank retrieval on one line and the rank retrieved on the following line. Here are some example lines from the file:

Sun Aug 9 14:38:13 EDT 2009
salesrank= 141913
Sun Aug 9 14:40:04 EDT 2009
salesrank= 141913
Sun Aug 9 15:00:05 EDT 2009
salesrank= 149556
Sun Aug 9 16:00:05 EDT 2009
salesrank= 158132
Sun Aug 9 17:00:05 EDT 2009
salesrank= 166742

In the example below, we show how to read this data into SAS or R so that there is one observation (or row) per time. In a later entry, we'll build a graphic to display it.

SAS
In SAS, we use the infile statement (section 1.1.2) to read the data, as this affords the most control. As in section 6.4.1, we read a key character from the start of the line, holding that line with the trailing "@" character. Then, depending on the value of that character, we use different input statements to read in either the date and time values or the rank. If the line does not contain the rank, we read in character values containing the day and time in formats which SAS can interpret, then convert these to a SAS date-time value using an informat (section A.6.4) applied to a new character string made up of the date and time information. We keep these variable values using the retain statement (as in section 6.4.1). If the line has a rank, we read in the rank and write out the line using the output statement. Note that we do not use either the day of week or the time zone. Finally, we check the data by printing a few lines.


data sales;
infile "c:\data\sales.dat";
retain day month date time edt year;
input @1 type $1. @;
if type ne 's' then do;
input @1 day $ month $ date time $ edt $ year;
end;
else do;
input @12 rank;
datetime = compress(date||month||year||"/"||time);
salestime = input(datetime,datetime18.);
output;
end;
run;

proc print data=sales (obs=6);
var datetime salestime rank;
run;

Obs datetime salestime rank

1 9Aug2009/14:38:13 1565447893 141913
2 9Aug2009/14:40:04 1565448004 141913
3 9Aug2009/15:00:05 1565449205 149556
4 9Aug2009/16:00:05 1565452805 158132
5 9Aug2009/17:00:05 1565456405 166742
6 9Aug2009/18:00:05 1565460005 175812


R
In R, we begin by reading the file (this is done relative to the current working directory; see getwd(), section 1.7.3). We then calculate the number of entries by dividing the file's length by two.

Next, we create two empty vectors of the correct length and type to store the data in, once we extract it from the file.

Once this preparatory work is completed, we loop (section 1.1.11) through the file. The odd-numbered lines (lines (i-1)*2+1) contain date/time values in the Eastern US time zone, with daylight savings applied. The gsub() function (also used in section 6.4.1 and related to the grep() function in section 1.4.6) replaces matches found by regular expression matching; here it is used to remove the time zone from the line before this processing. These date/time values are read into the timeval vector. Even-numbered lines (lines i*2) are read into the rank vector, after removing the string "salesrank=" (again using gsub()).

Finally, we make a data frame (section B.4.5) from the two vectors and display the first few lines using the head() function (section 1.13.1).


file <- readLines("sales.dat")
n <- length(file)/2
rank <- numeric(n)
timeval <- as.POSIXlt(rank, origin="1960-01-01")
for (i in 1:n) {
timeval[i] <- as.POSIXlt(gsub('EST', '',
gsub('EDT', '', file[(i-1)*2+1])),
tz="EST5EDT", format="%a %b %d %H:%M:%S %Y")
rank[i] <- as.numeric(gsub('salesrank= ','',file[i*2]))
}
timerank <- data.frame(timeval, rank)

We can review the results:

> head(timerank)
timeval rank
1 2009-08-09 14:38:13 141913
2 2009-08-09 14:40:04 141913
3 2009-08-09 15:00:05 149556
4 2009-08-09 16:00:05 158132
5 2009-08-09 17:00:05 166742
6 2009-08-09 18:00:05 175812