Category Archives: R

dplyr is great…but

I have been loving Hadley Wickham’s new dplyr package for R. It provides a relatively small set of verbs for quickly and easily manipulating data. It is generally as fast as data.table, but unless you’re already very familiar with data.table, the syntax is much easier. There are a number of great introductions and tutorials, which I won’t attempt to recreate here.

The only challenge in moving to dplyr has been dealing with the non-standard evaluation (see vignette("nse")) and the new tbl_df class that gets created. In theory, the data frames produced by dplyr should act both as data.frame objects and as the new tbl_df objects. There are some nice new features associated with tbl_df objects, but also some idiosyncrasies that can break existing code.

For example, I had a function:

stdCovs <- function(x, y, var.names) {
    for (i in 1:length(var.names)) {
        x[, var.names[i]] <- (x[, var.names[i]] - mean(y[, var.names[i]], na.rm = TRUE)) /
            sd(y[, var.names[i]], na.rm = TRUE)
    }
    return(x)
}

that took the means and standard deviations of covariates from one dataframe and used them to standardize the same covariates in another dataframe. This was particularly useful for making predictions for new data using coefficients from a different dataset that had been standardized (y.standard). The code above worked when I was using data.frame objects. However, I have now started using dplyr for a variety of reasons, and the code no longer works. I believe this is due to how tbl_df objects handle single-bracket subsetting (e.g. y[ , var]). There is an issue on GitHub related to this.
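The difference is easy to see in plain base R; the tbl_df half of the comparison needs dplyr loaded, so it is only described in the comments. This is just an illustrative sketch:

```r
# A plain data.frame drops a single selected column down to a vector:
df <- data.frame(a = 1:3, b = 4:6)
df[, "a"]       # a plain vector, so mean(df[, "a"]) works
df[["a"]]       # double brackets also give a plain vector

# A dplyr tbl_df does NOT drop: y[, "a"] stays a one-column table,
# so mean(y[, "a"]) no longer returns a single number, while
# y[["a"]] still extracts the underlying vector in both cases.
mean(df[["a"]])
```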

In the end, it turned out that double bracketing to extract the column works: rather than the y[ , var] formulation, it should be y[[var]]. Not too hard, but the type of thing that wastes an hour when moving to a new package. Despite this and similar hiccups, I highly recommend dplyr. Hopefully the simplicity of the verbs and the speed of computation make up for the few hours spent learning and adjusting code.

Here’s my final code:

stdCovs <- function(x, y, var.names){
  for(i in 1:length(var.names)){
    x[ , var.names[i]] <- (x[ , var.names[i]] - mean(y[[var.names[i]]], na.rm = TRUE)) /
        sd(y[[var.names[i]]], na.rm = TRUE)
  }
  return(x)
}
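To see the arithmetic in isolation, here is a self-contained base-R sketch of the same standardization; the data frames train and new are made up purely for illustration:

```r
# Reference data (plays the role of y) and new data to standardize (x)
train <- data.frame(a = c(1, 2, 3, 4), b = c(10, 20, 30, 40))
new   <- data.frame(a = c(2, 3),       b = c(20, 30))

# Standardize each covariate in `new` using the mean/sd from `train`
vars <- c("a", "b")
for (v in vars) {
  new[[v]] <- (new[[v]] - mean(train[[v]], na.rm = TRUE)) /
    sd(train[[v]], na.rm = TRUE)
}

new$a  # centered on train's mean of 2.5 and scaled by train's sd
```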

One of the best features of dplyr is its use of magrittr for piping functions together with the %>% operator. It’s not shown above, but it’s worth learning. Previously, you would either create a bunch of useless intermediate objects or a long string of nested functions that were hard (impossible at times) to decipher. The “then” operator (%>%) does away with this, as shown here by Revolution Analytics:

hourly_delay <- filter(
  summarise(
    group_by(
      filter(flights, !,
      date, hour
    ),
    delay = mean(dep_delay),
    n = n()
  ),
  n > 10
)

Here’s the same code, but rather than nesting one function call inside the next, data is passed from one function to the next using the %>% operator:

hourly_delay <- flights %>%
  filter(! %>%
  group_by(date, hour) %>%
  summarise(
    delay = mean(dep_delay),
    n = n()
  ) %>%
  filter(n > 10)

Beautiful, right? Pipes are fantastic, and combined with dplyr’s simple verbs they make code a lot easier to write and more readable once you get the hang of it.
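Under the hood, x %>% f(y) is just f(x, y). A toy base-R version of the operator (not magrittr’s real, far more capable implementation) makes the idea concrete:

```r
# Simplified pipe for illustration only: inserts the left-hand side
# as the first argument of the right-hand call, then evaluates it.
`%>%` <- function(lhs, rhs) {
  rhs_call <- substitute(rhs)
  new_call <- as.call(c(rhs_call[[1]], substitute(lhs),
                        as.list(rhs_call)[-1]))
  eval(new_call, parent.frame())
}

c(4, 1, 3) %>% sort() %>% head(2)  # same as head(sort(c(4, 1, 3)), 2)
```

The real magrittr operator also supports a dot placeholder and piping into arbitrary argument positions; this sketch only shows the first-argument rule that the dplyr examples above rely on.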

Blue Jay and Scrub Jay : Using rvertnet to check the distributions in R

Daniel Hocking:

This is a great post about a great package. Unlike some other broad vertebrate databases, VertNet has good QAQC. It’s great being able to access it directly in R where the data manipulation, analysis, and even plotting will occur. Thank you to the ROpenSci team for development of the rvertnet package and Vijay Barve for the clear, simple tutorial.

Originally posted on Vijay Barve:

As part of my Google Summer of Code, I am also working on another package for R called rvertnet. This package is an R wrapper for the VertNet websites. VertNet is a distributed database network for vertebrates, consisting of FishNet2, MaNIS, HerpNET, and ORNIS. Of those, FishNet2, HerpNET, and ORNIS currently have v2 portals serving data. rvertnet now has functions to access these data and import them into R data frames.

Some of my lab mates had difficulty downloading data for Scrub Jay (Aphelocoma spp.) due to the large number of records (220k+) on ORNIS, so they decided to try the rvertnet package, which was still in development at that time. The package really helped and got the results quickly. While exploring those data, I came up with this case study.

So here to get data for Blue Jay (Cyanocitta cristata) which is distributed in…


Teaching Scientific Computing: Peer Review

This post is going out on a bit of a limb because I am not familiar with the pedagogical literature on teaching scientific computing. As such, I can only speak from my very limited experience. I’ve taken a couple of short courses on scientific computing, but the only formal full-semester course I’ve taken was Introduction to C Programming for Engineers, 15 years ago. In that course, the instructor spent 50 minutes, 3 days a week, writing code on the chalkboard in front of us, and we were expected to learn. Homework was to write increasingly large programs throughout the semester. If they didn’t work we got a 0%, if they produced the wrong output we got a 50%, and if they worked properly we got a 100%. Obviously it was a terrible course (although a number of my statistics courses that involved programming were not very different, so this might be more common than I’d like to believe). Beyond some of the conspicuous instructional problems, it got me thinking that scientific programming courses could learn from pedagogy in the humanities. The University of New Hampshire requires undergraduates to take a number of writing-intensive courses. To qualify as writing intensive, a course must meet 3 criteria:

  1. Students in the course should do substantial writing that enhances learning and demonstrates knowledge of the subject or the discipline. Writing should be an integral part of the course and should account for a significant part (approximately 50% or more) of the final grade.
  2. Writing should be assigned in such a manner as to require students to write regularly throughout the course. Major assignments should integrate the process of writing (prewriting, drafting, revision, editing). Students should be able to receive constructive feedback of some kind (peer response, workshop, professor, TA, etc.) during the drafting/revising process to help improve their writing.
  3. The course should include both formal (graded) and informal (heuristic) writing.  There should be papers written outside of class which are handed in for formal evaluation as well as informal assignments designed to promote learning, such as invention activities, in-class essays, reaction papers, journals, reading summaries, or other appropriate exercises.

I think these criteria could be applied, or at least adapted, to scientific computing courses. The 1st one is easy. The 2nd and 3rd are what I think computing courses could really take advantage of. From what I’ve seen, there is often not a lot of time spent on informal feedback from instructors and peers to help with revision. In programming, especially with flexible languages like R, there are often many solutions to the same problem. Useful assignments could be to critique the programs of peers, find ways to improve code efficiency, and provide alternative solutions to sections of code. This could include critiques of the commenting and README files.

In introductory courses there is often an emphasis on covering content. Some people will balk at the idea of spending time learning alternatives to simple options when there is clearly 1 best solution and so much material to cover just to get students writing even simple scripts. However, in my opinion it’s better to learn a few things well than many things superficially. By evaluating, revising, and developing alternatives to code written by peers, students will learn to program better. There is a reason that informal assessment, peer review, and revision are a required part of writing-intensive courses. Those same reasons apply to scientific computing courses. Just as review and revision make us better writers, they will make us better programmers.

RStudio Presentations

After finally getting around to posting about knitr and Markdown just yesterday, RStudio comes out with an update that makes it even easier:

P.S. I Love RStudio

Knitting beautiful documents in RStudio


This document was written using Markdown in RStudio. RStudio is a wonderful IDE for writing R scripts and keeping track of variables, dataframes, history, packages, and much more. Markdown is a simple formatting syntax for authoring web pages. It allows for easy formatting, for example:

# Header 1
## Header 2
### Header 3
#### Header 4
##### Header 5

These render as progressively smaller headings, from Header 1 down to Header 5.

Code can be embedded in documents with nice formatting using backticks (the symbol below the tilde in the upper left of an American QWERTY keyboard). For example, if you were to type the following (without the >):

>install.packages("lme4", repos = '')

you could run that block in R and install the lme4 package. In the markdown document it would show up as:

install.packages("lme4", repos = "")

## The downloaded binary packages are in
##  /var/folders/r5/l8t2qxpj3m1_gnlf6ys7fwdm0000gq/T//RtmpiqoAcq/downloaded_packages

You should try this with the knitr package [UPDATE: You will likely have to restart RStudio after installing and loading the knitr library because it changes options in RStudio]. knitr is a fantastic package for converting an R Markdown file (.Rmd) into an HTML document. The design of knitr allows many input languages (e.g. R, Python, and awk) and many output markup languages (e.g. LaTeX, HTML, Markdown, AsciiDoc, and reStructuredText). I find Markdown the easiest language for creating documents for coursework/assignments. knitr is dynamic, so it will run your code and print the results below it. The best part is that if you get new data or find an error in your code, you just re-knit the file and it’s updated a few seconds later.

In RStudio, once you install knitr a button shows up that you can click to knit the markdown file (produce your html document). When you click the Knit HTML button a web page will be generated that includes both content as well as the output of any embedded R code chunks within the document. You can embed an R code chunk like this:


summary(cars)

##      speed           dist    
##  Min.   : 4.0   Min.   :  2  
##  1st Qu.:12.0   1st Qu.: 26  
##  Median :15.0   Median : 36  
##  Mean   :15.4   Mean   : 43  
##  3rd Qu.:19.0   3rd Qu.: 56  
##  Max.   :25.0   Max.   :120

These will be embedded and run in the document, but you can also run single lines or blocks (chunks) of code in RStudio to make sure they work before knitting the document. You can also embed plots, for example:



This is great, but after knitting you are left with an HTML document. What if you want a PDF or Word document? There may be other ways to do this, but I use pandoc to convert the HTML file into other formats such as PDF or MS Word documents. After installing pandoc, you use it via the command line. I use a Mac, so I just open the Terminal to get to my bash shell, cd to the folder containing my document, and run:

pandoc Knitting_Beautiful_Documents_in_RStudio.html -o Knitting_Beautiful_Documents_in_RStudio.pdf

and I have a PDF. Unfortunately, the pandoc default is to have huge margins that look ridiculous. To correct that, you can save a style file in the folder with the document you’re converting. To do that, you just create a text file with a line such as:

\usepackage[margin=1in]{geometry}

and save it as something like margins.sty. Then on the command line you run:

pandoc Knitting_Beautiful_Documents_in_RStudio.html -o Knitting_Beautiful_Documents_in_RStudio.pdf -H margins.sty

Now when you view the PDF, it should have 1-inch margins and look much nicer. That is about it for making attractive, dynamic documents using R, RStudio, knitr, and pandoc. It seems like a lot, but after doing it once or twice it becomes very quick and easy. I have yet to use it, but there is also a package, pander, which lets you skip the command-line step and do the file conversion right in R (RStudio).

Using Markdown in RStudio with knitr is wonderful for creating reports, class assignments, and homework solutions (as a student, you’d really impress your instructor), but other people take it even further. Karthik Ram of rOpenSci fame has a great blog post on how to extend this system to include citations and bibliographies, allowing you to ditch Word.

For a nice Markdown reference, check out this useful cheat sheet.

Finally, if you want to see my Markdown code and everything that went into making this, you can find it in my GitHub repository. Specifically, you can find the .Rmd, .md, margins.sty, and built PDF files.

[UPDATE: On a Mac, R/RStudio/pandoc may have trouble finding your LaTeX distribution to use when building the PDF (even though you are using Markdown, not LaTeX directly). A potential solution I found when building an R package was to specify the path to LaTeX.]
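A minimal sketch of that kind of fix, assuming the common MacTeX binary location /usr/texbin (the exact directory depends on your installation; newer MacTeX releases use /Library/TeX/texbin):

```r
# Hypothetical fix: append the TeX binary directory to the PATH that R
# sees, so pandoc/RStudio can find pdflatex. /usr/texbin is an assumed
# location -- adjust it to match your LaTeX install.
Sys.setenv(PATH = paste(Sys.getenv("PATH"), "/usr/texbin", sep = ":"))
```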


[UPDATE 2: I found a nice way to create a PDF without leaving R (calling pandoc from R):]

# Create .md, .html, and .pdf files
library(markdown)  # provides markdownToHTML()
markdownToHTML("", "File.html", options = c("use_xhtml"))
system("pandoc -s File.html -o File.pdf")
