
Tuesday, February 22, 2011

Example 8.26: reading data with variable number of words in a field

A student came with a question about how to snag data from a PDF report for analysis. Once she'd copied the text out of the PDF, her file looked like this:

1 Las Vegas, NV --- 53.3 --- --- 1
2 Sacramento, CA --- 42.3 --- --- 2
3 Miami, FL --- 41.8 --- --- 3
4 Tucson, AZ --- 41.7 --- --- 4
5 Cleveland, OH --- 38.3 --- --- 5
6 Cincinnati, OH 15 36.4 --- --- 6
7 Colorado Springs, CO --- 36.1 --- --- 7
8 Memphis, TN --- 35.3 --- --- 8
8 New Orleans, LA --- 35.3 --- --- 8
10 Mesa, AZ --- 34.7 --- --- 10
11 Baltimore, MD --- 33.2 --- --- 11
12 Philadelphia, PA --- 31.7 --- --- 12
13 Salt Lake City, UT --- 31.9 17 --- 13

Here the --- means a missing value, and there's some complexity induced because some city names are made up of multiple words (so the number of space-delimited fields varies from line to line). Unfortunately, she had hundreds of such datasets to process.

While section 1.1.3 (p. 3) of the book describes reading more complex files, neither it nor the entry on finding the Amazon sales rank is directly relevant here.

R

In R, we first craft a function that processes a line and converts each field other than the city name into a numeric variable. The function works backwards to snag the appropriate elements, then calculates what is left over to stash in the city variable.

readellaline <- function(thisline) {
  thislen <- length(thisline)
  id <- as.numeric(thisline[1])
  v1 <- as.numeric(thisline[thislen - 4])
  v2 <- as.numeric(thisline[thislen - 3])
  v3 <- as.numeric(thisline[thislen - 2])
  v4 <- as.numeric(thisline[thislen - 1])
  v5 <- as.numeric(thisline[thislen])
  city <- paste(thisline[2:(thislen - 5)], collapse=" ")
  return(list(id=id, city=city, v1=v1, v2=v2, v3=v3, v4=v4, v5=v5))
}

However, before this function can work, each line needs to be converted into a character vector containing each "word" (character strings divided by spaces) as a separate element. We'll do this by first reading the lines with readLines(), then splitting each one into words with strsplit(). This results in a list object, where the items in the list are the vectors of words. Then we can call the readellaline() function for each vector using sapply() (section 1.3.2, p. 12), which avoids the need for a for loop. The resulting object can be transposed, then coerced into a data frame.

# read the input file
file = readLines("http://www.math.smith.edu/r/data/ella.txt")
split = strsplit(file, " ") # split up fields for each line
processed = as.data.frame(t(sapply(split, readellaline)))
processed

This generates the following output:

id city v1 v2 v3 v4 v5
1 1 Las Vegas, NV NA 53.3 NA NA 1
2 2 Sacramento, CA NA 42.3 NA NA 2
3 3 Miami, FL NA 41.8 NA NA 3
4 4 Tucson, AZ NA 41.7 NA NA 4
5 5 Cleveland, OH NA 38.3 NA NA 5
6 6 Cincinnati, OH 15 36.4 NA NA 6
7 7 Colorado Springs, CO NA 36.1 NA NA 7
8 8 Memphis, TN NA 35.3 NA NA 8
9 8 New Orleans, LA NA 35.3 NA NA 8
10 10 Mesa, AZ NA 34.7 NA NA 10
11 11 Baltimore, MD NA 33.2 NA NA 11
12 12 Philadelphia, PA NA 31.7 NA NA 12
13 13 Salt Lake City, UT NA 31.9 17 NA 13
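
One wrinkle with this approach: sapply() returns a matrix whose cells are list components, so after as.data.frame(t(...)) each column of processed is a list rather than an atomic vector. Below is a minimal sketch of flattening the columns with unlist(), using two sample lines and a trimmed-down parser (the function name readaline and the reduced set of fields are illustrative, not from the book):

```r
# Trimmed-down parser: id, city, and one numeric field, for illustration
readaline <- function(thisline) {
  thislen <- length(thisline)
  list(id = as.numeric(thisline[1]),
       city = paste(thisline[2:(thislen - 5)], collapse = " "),
       v2 = as.numeric(thisline[thislen - 3]))
}
lines <- c("1 Las Vegas, NV --- 53.3 --- --- 1",
           "3 Miami, FL --- 41.8 --- --- 3")
processed <- as.data.frame(t(sapply(strsplit(lines, " "), readaline)))
sapply(processed, class)        # every column is of class "list"
# unlist() each column to get atomic (numeric/character) vectors
processed[] <- lapply(processed, unlist)
sapply(processed, class)
```

You'll generally want this flattening step (or an explicit as.numeric() on individual columns) before computing on the variables.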


SAS
The input tools in SAS can accommodate these data directly, without reading the data once and then processing it. However, two special tools are needed: 1) the dlm option to the infile statement, so that both commas and spaces are treated as field delimiters, and 2) the & format modifier, which allows spaces within a variable that's being read in.

data ella;
  infile "c:\book\ella.txt" dlm=", ";
  input id city & $20. state $2. v1 - v5;
run;

proc print data=ella; run;

In effect, the foregoing input statement instructs SAS that the field following id is to be named city, that it may have spaces in it, and that it is a character variable with a length of up to 20 characters; SAS will read into that variable for up to 20 characters, or until the next non-space delimiter. The dlm option makes both commas and spaces delimiters.

Examining the log resulting from running the preceding code will reveal many notes regarding "invalid data", corresponding to the dashes. However, these result (correctly) in missing data codes, so they can safely be ignored.

Obs id city state v1 v2 v3 v4 v5

1 1 Las Vegas NV . 53.3 . . 1
2 2 Sacramento CA . 42.3 . . 2
3 3 Miami FL . 41.8 . . 3
4 4 Tucson AZ . 41.7 . . 4
5 5 Cleveland OH . 38.3 . . 5
6 6 Cincinnati OH 15 36.4 . . 6
7 7 Colorado Springs CO . 36.1 . . 7
8 8 Memphis TN . 35.3 . . 8
9 8 New Orleans LA . 35.3 . . 8
10 10 Mesa AZ . 34.7 . . 10
11 11 Baltimore MD . 33.2 . . 11
12 12 Philadelphia PA . 31.7 . . 12
13 13 Salt Lake City UT . 31.9 17 . 13

Thursday, September 24, 2009

Example 7.13: Read a file with two lines per observation

In example 7.6 we showed how to retrieve the Amazon sales rank of a book. A cron job on one of our machines grabs the sales rank hourly. We’d like to use this data to satisfy our curiosity about when and how often a book sells.

A complication is that the result of the cron job is a file with the time of the sales rank retrieval on one line and the rank retrieved on the following line. Here are some example lines from the file:

Sun Aug 9 14:38:13 EDT 2009
salesrank= 141913
Sun Aug 9 14:40:04 EDT 2009
salesrank= 141913
Sun Aug 9 15:00:05 EDT 2009
salesrank= 149556
Sun Aug 9 16:00:05 EDT 2009
salesrank= 158132
Sun Aug 9 17:00:05 EDT 2009
salesrank= 166742

In the example below, we show how to read this data into SAS or R so that there is one observation (or row) per time. In a later entry, we'll build a graphic to display it.

SAS
In SAS, we use the infile statement (section 1.1.2) to read the data, as this affords the most control. As in section 6.4.1, we read a key character from the start of the line, holding that line with the trailing "@" character. Then, depending on the value of that character, we use different input statements to read in either the date and time values or the rank. If the line does not contain the rank, we read character values containing the date and time in forms which SAS can interpret, then convert these to a SAS date-time value by applying an informat (section A.6.4) to a new character string made up of the date and time information. We keep these variable values using the retain statement (as in section 6.4.1). If the line has a rank, we read in the rank and write out the observation using the output statement. Note that we use neither the day of week nor the time zone. Finally, we check the data by printing a few lines.


data sales;
  infile "c:\data\sales.dat";
  retain day month date time edt year;
  input @1 type $1. @;
  if type ne 's' then do;
    input @1 day $ month $ date time $ edt $ year;
  end;
  else do;
    input @12 rank;
    datetime = compress(date||month||year||"/"||time);
    salestime = input(datetime, datetime18.);
    output;
  end;
run;

proc print data=sales (obs=6);
var datetime salestime rank;
run;

Obs datetime salestime rank

1 9Aug2009/14:38:13 1565447893 141913
2 9Aug2009/14:40:04 1565448004 141913
3 9Aug2009/15:00:05 1565449205 149556
4 9Aug2009/16:00:05 1565452805 158132
5 9Aug2009/17:00:05 1565456405 166742
6 9Aug2009/18:00:05 1565460005 175812


R
In R, we begin by reading the file; this is done relative to the current working directory (see getwd(), section 1.7.3). We then calculate the number of entries by dividing the file's length by two.

Next, we create two empty vectors of the correct length and type to store the data in, once we extract it from the file.

Once this preparatory work is completed, we loop (section 1.1.11) through the file. The odd-numbered lines (lines (i-1)*2+1) contain date/time values in the Eastern US time zone, with daylight savings applied. The gsub() function (also used in section 6.4.1 and related to the grep() function in section 1.4.6) replaces matches found by regular expression matching; here it removes the time zone from the line before further processing. These date/time values are read into the timeval vector. The even-numbered lines (lines i*2) are read into the rank vector, after removing the string "salesrank=" (again using gsub()).

Finally, we make a data frame (section B.4.5) from the two vectors and display the first few lines using the head() function (section 1.13.1).


file <- readLines("sales.dat")
n <- length(file)/2
rank <- numeric(n)
timeval <- as.POSIXlt(rank, origin="1960-01-01")
for (i in 1:n) {
  timeval[i] <- as.POSIXlt(gsub('EST', '',
                                gsub('EDT', '', file[(i-1)*2+1])),
                           tz="EST5EDT", format="%a %b %d %H:%M:%S %Y")
  rank[i] <- as.numeric(gsub('salesrank= ', '', file[i*2]))
}
timerank <- data.frame(timeval, rank)

We can review the results:

> head(timerank)
timeval rank
1 2009-08-09 14:38:13 141913
2 2009-08-09 14:40:04 141913
3 2009-08-09 15:00:05 149556
4 2009-08-09 16:00:05 158132
5 2009-08-09 17:00:05 166742
6 2009-08-09 18:00:05 175812
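
As an aside, the loop can be avoided entirely: the odd-numbered lines hold the time stamps and the even-numbered lines hold the ranks, so each set can be extracted with vectorized indexing. A sketch of this alternative (the two inline sample lines stand in for reading sales.dat with readLines()):

```r
file <- c("Sun Aug 9 14:38:13 EDT 2009", "salesrank= 141913",
          "Sun Aug 9 15:00:05 EDT 2009", "salesrank= 149556")
odd  <- file[seq(1, length(file), by = 2)]   # time stamp lines
even <- file[seq(2, length(file), by = 2)]   # rank lines
# ' E[SD]T ' matches either ' EDT ' or ' EST ' in one pass
timeval <- as.POSIXct(gsub(' E[SD]T ', ' ', odd),
                      tz = "EST5EDT", format = "%a %b %d %H:%M:%S %Y")
rank <- as.numeric(gsub('salesrank= ', '', even))
timerank <- data.frame(timeval, rank)
```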

Saturday, August 1, 2009

Example 7.8: Plot two empirical cumulative density functions using available tools

The empirical cumulative distribution function (CDF) (section 5.1.16) is a useful way to compare distributions between populations. The Kolmogorov-Smirnov statistic D (section 2.4.2) is the maximum vertical distance between the two curves. As an example, we compare the male and female distributions of pcs from the HELP data set described in the book. Here, we use built-in tools to plot the graph; in later entries we will build it from scratch for greater control.

We begin by reading in the data (section 1.1.14) as a comma-separated file from the book web site (section 1.1.6).

SAS

filename myurl
url 'http://www.math.smith.edu/sasr/datasets/help.csv'
lrecl=704;

proc import
datafile=myurl out=ds dbms=dlm;
delimiter=',';
getnames=yes;
run;

SAS proc univariate can do this plot automatically (section 5.1.15). It is designed to compare two groups within the data set, using the class statement (section 3.1.3).


proc univariate data=ds;
var pcs;
class female;
cdfplot pcs / overlay;
run;



In R, the plot() function accepts ecdf() objects (section 5.1.15) as input. Applying this to pcs, including only the rows where female is 1 (section B.4.2), creates the first empirical CDF as well as the axes. The lines() function (section 5.2.1) also accepts ecdf() objects as input; applying it to pcs where female is 0 adds the second empirical CDF to the existing plot. A legend (section 5.2.14) is added to show which curve is which. (Note that the Blogger software prevents displaying this image large enough to see the difference here, but it will be visible when the code is run locally.)

R

> ds <- read.csv(
"http://www.math.smith.edu/sasr/datasets/helpmiss.csv")
> attach(ds)
> plot(ecdf(pcs[female==1]), verticals=TRUE, pch=46)
> lines(ecdf(pcs[female==0]), verticals=TRUE, pch=46)
> legend(20, 0.8, legend=c("Women", "Men"), lwd=1:3)
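
The statistic D itself can be computed with ks.test(). The sketch below uses simulated values in place of the real pcs data, so the means, standard deviation, and sample sizes are made up for illustration:

```r
set.seed(42)
women <- rnorm(200, mean = 48, sd = 10)   # stand-in for pcs where female==1
men   <- rnorm(200, mean = 52, sd = 10)   # stand-in for pcs where female==0
res <- ks.test(women, men)
res$statistic   # D, the largest vertical gap between the two empirical CDFs
res$p.value     # tests whether the two samples share a distribution
```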




Monday, July 20, 2009

Example 7.6: Find Amazon sales rank for a book

In honor of Amazon's official release date for the book, we offer this blog entry.

Both SAS and R can be used to find the Amazon Sales Rank for a book by downloading the desired web page and ferreting out the appropriate line. This code is likely to break if Amazon’s page format is changed (but it worked as of October, 2010). [Note: as of spring 2010 Amazon changed the format for their webpages, and the appropriate text to search for changed from "Amazon.com Sales Rank" to "Amazon Bestsellers Rank". We've updated the blog code with this string. As of October 9, 2010 they added a number of blank lines to the web page, which we also now address.]

In this example, we find the sales rank for our book. Some interesting information about interpreting the rank can be found online.

Both the SAS and R code below rely on section 1.1.3, "Reading more complex text files." Note that in the displayed SAS and R code, the long URL has been broken onto several lines, while it would have to be entered on a single line to run correctly.



In SAS, we assign the URL an internal name (section 1.1.6), then input the file using a data step. We exclude all the lines which don’t contain the sales rank, using the count function (section 1.4.6). We then extract the number using the substr function (section 1.4.3), with the find function (section 1.4.6) employed to locate the number within the line. The last step is to turn the extracted text (which contains a comma) into a numeric variable.

SAS

filename amazon url "http://www.amazon.com/
SAS-Management-Statistical-Analysis-Graphics/
dp/1420070576/ref=sr_1_1?ie=UTF8&s=books
&qid=1242233418&sr=8-1";

data test;
  infile amazon truncover;
  input @1 line $256.;
  if count(line, "Amazon Bestsellers Rank") ne 0;
  rankchar = substr(line, find(line, "#") + 1,
                    find(line, "in Books") - find(line, "#") - 2);
  rank = input(rankchar, comma9.);
run;

proc print data=test noobs;
var rank;
run;



R

# grab contents of web page
urlcontents <- readLines("http://www.amazon.com/
SAS-Management-Statistical-Analysis-Graphics/
dp/1420070576/ref=sr_1_1?ie=UTF8&s=books
&qid=1242233418&sr=8-1")
# find line with sales rank
linenum <- suppressWarnings(grep("Amazon Bestsellers Rank:",
urlcontents))

newline <- linenum + 1   # work around blank lines added in October 2010
while (urlcontents[newline] == "") {
  newline <- newline + 1
}

# split line into multiple elements
linevals <- strsplit(urlcontents[newline], ' ')[[1]]

# find element with sales rank number
entry <- grep("#", linevals)
# snag that entry
charrank <- linevals[entry]
# kill '#' at start
charrank <- substr(charrank, 2, nchar(charrank))
# remove commas
charrank <- gsub(',','', charrank)
# turn it into a numeric object
salesrank <- as.numeric(charrank)
cat("salesrank=",salesrank,"\n")
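
The element-by-element extraction above can also be collapsed into a single regular-expression substitution with sub(). A sketch on a hard-coded sample line (the line's exact format and the rank value are illustrative, patterned on the output shown in this entry):

```r
line <- "Amazon Bestsellers Rank: #23,476 in Books"   # illustrative sample line
# capture the digits and commas between '#' and ' in Books'
charrank <- sub('.*#([0-9,]+) in Books.*', '\\1', line)
salesrank <- as.numeric(gsub(',', '', charrank))      # strip commas, coerce
cat("salesrank=", salesrank, "\n")
```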


The resulting output (on July 16, 2009) is

SAS

rank

23476


R

salesrank= 23467