Dan Ovando

40 minute read

These are materials from a workshop I taught for UC Santa Barbara’s eco-data-science group to get people familiar with using purrr for their data-wrangling and modeling needs. This post covers

  1. A general introduction to the workings of purrr

  2. Using purrr to wrangle lists

  3. Using purrr and modelr for data analysis and modeling

The .Rmd for this document can be found here

Full credit to Jenny Bryan’s excellent purrr tutorial for helping me learn purrr and providing the basis for the list-wrangling examples here , along with Hadley Wickham & Garret Grolemund’s R for Data Science. My goal is to walk you through some of the concepts outlined in these (much better) resources, and expand on some particular applications that have been useful to me.

What is purrr?

purrr is part of the tidyverse, taking on the tasks accomplished by the apply suite of functions in base R (and a whole lot more). Its main ability is applying operations across many dimensions of your data, improving your ability to keep even complex analyses “tidy”. At its simplest, it’s basically an alternative to the apply family of functions. At its most complex, it allows you to easily move around in and manipulate multi-dimensional (and multi-type) data, making, for example, running factorial combinations of models and data a tidy and easy task.

There are a whole suite of functions involved with purrr, but the goal of this tutorial is to get the fundamentals down so that you can start incorporating purrr into your own code and explore higher-level abilities on your own.

The Basics of purrr

To get started, map is the workhorse function of the purrr package. map and apply basically take on the tasks of a for loop. Suppose we wanted to accomplish the following task

shades <- colors()[1:10]

for (i in seq_along(shades)){
  
  print(shades[i])
  
}
## [1] "white"
## [1] "aliceblue"
## [1] "antiquewhite"
## [1] "antiquewhite1"
## [1] "antiquewhite2"
## [1] "antiquewhite3"
## [1] "antiquewhite4"
## [1] "aquamarine"
## [1] "aquamarine1"
## [1] "aquamarine2"

At its core, what we are doing here is applying the function print over each of the elements in shades

Rather than use a loop, we could accomplish the same task using lapply

a <-  lapply(shades, print)
## [1] "white"
## [1] "aliceblue"
## [1] "antiquewhite"
## [1] "antiquewhite1"
## [1] "antiquewhite2"
## [1] "antiquewhite3"
## [1] "antiquewhite4"
## [1] "aquamarine"
## [1] "aquamarine1"
## [1] "aquamarine2"

And lastly using map from purrr

a <-  map(shades, print)
## [1] "white"
## [1] "aliceblue"
## [1] "antiquewhite"
## [1] "antiquewhite1"
## [1] "antiquewhite2"
## [1] "antiquewhite3"
## [1] "antiquewhite4"
## [1] "aquamarine"
## [1] "aquamarine1"
## [1] "aquamarine2"

This is obviously a trivial example, but you get the idea: these are three ways of applying a function to a vector/list of things.

Key purrr verbs

Now that you have an idea of what map does, let’s dig into it a bit more.

map is the workhorse of the purrr family. It is basically apply

  • The basic syntax works in the manner

  • map("list to apply a function over", "function to apply", "additional arguments to pass to the function")

Since a dataframe is basically a special case of a list in which all entries (the columns) have the same length, we can map over each element (column, in this case) of, say, mtcars

map(mtcars, mean, na.rm = T)
## $mpg
## [1] 20.09062
## 
## $cyl
## [1] 6.1875
## 
## $disp
## [1] 230.7219
## 
## $hp
## [1] 146.6875
## 
## $drat
## [1] 3.596563
## 
## $wt
## [1] 3.21725
## 
## $qsec
## [1] 17.84875
## 
## $vs
## [1] 0.4375
## 
## $am
## [1] 0.40625
## 
## $gear
## [1] 3.6875
## 
## $carb
## [1] 2.8125

So you see here we’re taking the mean of each element in the dataframe (list) mtcars, and passing the additional option na.rm = T to the function.

If we save the output of the above run to an object, we see that it is now a list, instead of a dataframe.

mtcars_means <- map(mtcars, mean, na.rm = T)

class(mtcars_means)
## [1] "list"

map by default returns a list. One nice feature of map and purrr is that we can specify the kind of output we want.

  • map_TYPE returns an object of class TYPE, e.g.

    • map_lgl returns logical objects

    • map_df returns data frames, etc.
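
As a quick sketch of how the typed variants behave: they return atomic vectors instead of lists, and throw an error if the function's output doesn't match the requested type (rather than silently coercing).

```r
library(purrr)

# map_dbl returns a named numeric vector (one entry per column)
map_dbl(mtcars, mean)

# map_lgl returns a named logical vector; here, which columns hold whole numbers?
map_lgl(mtcars, ~ all(.x == round(.x)))

# map_lgl(mtcars, mean) would throw an error rather than coerce doubles to logicals
```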

Specifying the type makes it easier to wrangle different types of outputs. Suppose that we want a dataframe of the mean of each column in mtcars

map_df(mtcars, mean, na.rm = T)
## # A tibble: 1 x 11
##     mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
##   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1  20.1  6.19  231.  147.  3.60  3.22  17.8 0.438 0.406  3.69  2.81

map can also be extended to deal with multiple input lists

  • map applies a function over one list.

  • map2 applies a function over combinations of two lists in the form

    • map2(list1, list2, ~function(.x, .y), ...)

map2_chr(c('one','two','red','blue'), c('fish'), paste)
## [1] "one fish"  "two fish"  "red fish"  "blue fish"

In this case, we are mapping paste over each combination of these two lists.

Programming with Functions

Notice that purrr guessed what I was trying to do here (paste the two elements together). That works for very simple functions, but a lot of the time it won’t, and you have to be more specific about the function you are going to use and how the data you pass to it are used. In those cases, we’ll want to specify our own functions for use in purrr.

purrr is designed to help with “functional programming”, which you can take broadly as trying to use functions (preferably “pure” ones) to accomplish most of your complex and repetitive tasks (don’t copy and paste more than 3 times - H. Wickham)

As a very quick reintroduction to functions:

Functions in R take any number of named inputs, do operations on them, and return the last thing produced inside the function (or whatever you specify using return)

z <- 10

foo <- function(x,y) {

  z <- x + y

  return(z)
}

foo(x = 2,y = 3)
## [1] 5
z
## [1] 10

Notice that z wasn’t affected by the z inside the function. Operations inside functions happen in the local environment of that function. “What happens in the function, stays in the function (except what you share via return)”

Note though that functions can “see” objects in the global environment

a <- 10

foo <- function(x,y) {

z <- x + y + a

return(z)
}

foo(x = 2, y = 3)
## [1] 15

I strongly recommend you avoid using global variables inside functions: it can very easily cause unintended and sneaky behavior (I’ve been burned before).

Functions can be as complicated as you want them to be, but a good rule is to try and make sure each function is good at doing one thing. That doesn’t mean that that “one thing” can’t be a complex thing (e.g. a population model), but the objective of the function should be to do that one thing well (produce trajectories of biomass, not biomass, diagnostics, and a knitted report all in one function).

You can also use “anonymous” functions in purrr. This is basically a shortcut for when you don’t want to take up the space of writing and saving a whole function somewhere. You make anonymous functions with ~

Say we want to be more specific about our call to paste from the above example. We could use ~ to write

map2_chr(c('one','two','red','blue'), c('fish'), ~paste(.x,.y))
## [1] "one fish"  "two fish"  "red fish"  "blue fish"

I’ve used ~ to define a one-sided formula on the fly, which is handy for simple things like this. We’ll see later how to write and use longer functions. Note that by default the first argument passed to map is identified by .x, and the second by .y.

We can also write custom functions for use with map. Say you want the coefficient of variation (standard deviation divided by the mean) of each of the variables in mtcars

cvfoo <- function(x){

  sd(x) / mean(x)

}

map(mtcars, cvfoo)
## $mpg
## [1] 0.2999881
## 
## $cyl
## [1] 0.2886338
## 
## $disp
## [1] 0.5371779
## 
## $hp
## [1] 0.4674077
## 
## $drat
## [1] 0.1486638
## 
## $wt
## [1] 0.3041285
## 
## $qsec
## [1] 0.1001159
## 
## $vs
## [1] 1.152037
## 
## $am
## [1] 1.228285
## 
## $gear
## [1] 0.2000825
## 
## $carb
## [1] 0.5742933

Multiple lists using pmap

Anything above two lists is handled by pmap. pmap allows you to pass an arbitrary number of inputs, where each input is a named element in one list, and the function takes matching elements of those inputs as arguments

pmap(list(list1, list2, list3), function(list1, list2, list3) {...}, ...)

dmonds <- diamonds %>% 
  slice(1:4)

pmap_foo <- function(list1, list2 , list3){
  
  paste0("Diamond #", list1, " sold for $", list2," and was ", list3, " carats")
  
}

pmap(list(list1 = 1:nrow(dmonds), list2 = dmonds$price, list3 = dmonds$carat), pmap_foo)
## [[1]]
## [1] "Diamond #1 sold for $326 and was 0.23 carats"
## 
## [[2]]
## [1] "Diamond #2 sold for $326 and was 0.21 carats"
## 
## [[3]]
## [1] "Diamond #3 sold for $327 and was 0.23 carats"
## 
## [[4]]
## [1] "Diamond #4 sold for $334 and was 0.29 carats"

To bring all this together, here are three ways of doing the exact same thing: checking to see if both of the columns in a row are NA (note this is not the most efficient way to do this task; it's just to show how to use map2 and pmap, anonymous functions, and custom functions)

x <- c(1,1,NA,NA)

y <-  c(1, NA, 1, NA)

z <- tibble(x = x, y = y)
z %>%
mutate(both_na = map2_lgl(x,y, ~ is.na(.x) & is.na(.y)))
## # A tibble: 4 x 3
##       x     y both_na
##   <dbl> <dbl> <lgl>  
## 1     1     1 FALSE  
## 2     1    NA FALSE  
## 3    NA     1 FALSE  
## 4    NA    NA TRUE
nafoo <- function(x, y){
  
  is.na(x) & is.na(y)
  
}

z %>%
mutate(both_na = map2_lgl(x,y,nafoo))
## # A tibble: 4 x 3
##       x     y both_na
##   <dbl> <dbl> <lgl>  
## 1     1     1 FALSE  
## 2     1    NA FALSE  
## 3    NA     1 FALSE  
## 4    NA    NA TRUE
z %>%
mutate(both_na = pmap_lgl(list(x = x,y = y), nafoo))
## # A tibble: 4 x 3
##       x     y both_na
##   <dbl> <dbl> <lgl>  
## 1     1     1 FALSE  
## 2     1    NA FALSE  
## 3    NA     1 FALSE  
## 4    NA    NA TRUE

Wrangling lists

Now that we have an idea of how we can use purrr, let’s get back to actually using purrr in practice.

Lists are powerful objects that allow you to store all kinds of information in one place.

They can also be a pain to deal with, since we are no longer in the nice 2-D structure of a traditional dataframe, which is much closer to how most of us probably learned to deal with data.

purrr has lots of useful tools for helping you quickly and efficiently poke around inside lists. Let’s start with the Game of Thrones database in the repurrrsive package (thanks again to Jenny Bryan). got_chars is a list containing a bunch of information on GoT characters with a “point of view” chapter in the first few books.

The str function is a great way to get a first glimpse at a list’s structure

str(got_chars, list.len =  3)
## List of 30
##  $ :List of 18
##   ..$ url        : chr "https://www.anapioficeandfire.com/api/characters/1022"
##   ..$ id         : int 1022
##   ..$ name       : chr "Theon Greyjoy"
##   .. [list output truncated]
##  $ :List of 18
##   ..$ url        : chr "https://www.anapioficeandfire.com/api/characters/1052"
##   ..$ id         : int 1052
##   ..$ name       : chr "Tyrion Lannister"
##   .. [list output truncated]
##  $ :List of 18
##   ..$ url        : chr "https://www.anapioficeandfire.com/api/characters/1074"
##   ..$ id         : int 1074
##   ..$ name       : chr "Victarion Greyjoy"
##   .. [list output truncated]
##   [list output truncated]

For those of you who prefer a more interactive approach, you can also use the jsonedit function in the listviewer package if you’re working in a notebook or an html document

listviewer::jsonedit(got_chars)

So, how do we start poking around in this database?

Suppose we wanted only the first 5 characters in the list

got_chars[1:5] %>% 
  str(max.level = 1)
## List of 5
##  $ :List of 18
##  $ :List of 18
##  $ :List of 18
##  $ :List of 18
##  $ :List of 18

Now, suppose that we just want to look at the name of the first 5 characters. Who remembers how to do this in base R?

You might think that got_chars[[1:5]]$name would do the trick…

got_chars[[1:5]]$name
## Error in got_chars[[1:5]]: recursive indexing failed at level 3

Nope.

So we could do

names <- vector(mode = "character",5)

for (i in 1:5){
  
  names[i] <- got_chars[[i]]$name
}

names
## [1] "Theon Greyjoy"     "Tyrion Lannister"  "Victarion Greyjoy"
## [4] "Will"              "Areo Hotah"

That works, but certainly not ideal.

Enter purrr

got_chars[1:5] %>%
  map_chr('name')
## [1] "Theon Greyjoy"     "Tyrion Lannister"  "Victarion Greyjoy"
## [4] "Will"              "Areo Hotah"

Nice! map figures out that when you pass it a string like this, you’re looking for the list entry with that name. I actually find some of these “helpers” a bit confusing when you’re learning, since they play by slightly different rules than the purrr functions usually do. Case in point: given the above example, how might we get the ‘name’ and ‘allegiances’ entries?

got_chars[1:5] %>%
  map(c('name','allegiances')) 
## [[1]]
## NULL
## 
## [[2]]
## NULL
## 
## [[3]]
## NULL
## 
## [[4]]
## NULL
## 
## [[5]]
## NULL

Huh, why didn’t that work? Passing a vector of strings to map actually tells purrr to dive down into recursive layers of a list (we’ll see this next). In this case, if I want to extract the “name” and “allegiances” variables, I can use [

got_chars[1:5] %>%
  map(`[`, c('name', 'allegiances'))
## [[1]]
## [[1]]$name
## [1] "Theon Greyjoy"
## 
## [[1]]$allegiances
## [1] "House Greyjoy of Pyke"
## 
## 
## [[2]]
## [[2]]$name
## [1] "Tyrion Lannister"
## 
## [[2]]$allegiances
## [1] "House Lannister of Casterly Rock"
## 
## 
## [[3]]
## [[3]]$name
## [1] "Victarion Greyjoy"
## 
## [[3]]$allegiances
## [1] "House Greyjoy of Pyke"
## 
## 
## [[4]]
## [[4]]$name
## [1] "Will"
## 
## [[4]]$allegiances
## list()
## 
## 
## [[5]]
## [[5]]$name
## [1] "Areo Hotah"
## 
## [[5]]$allegiances
## [1] "House Nymeros Martell of Sunspear"

In this case, passing `[` tells map to use the subsetting operator [ as a function.
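
Since operators in R are themselves ordinary functions, you can call [ directly; a quick sketch:

```r
library(purrr)

`[`(letters, 1:3)            # identical to letters[1:3]

# so map(x, `[`, i) subsets each element of x
map(list(1:5, 6:10), `[`, 2) # second element of each vector
```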

Let’s say you’ve got a list that goes a little deeper. Suppose that we want to extract the element w from the list thing as characters. We can spell out the path that we want purrr to follow using c("z","w"), which tells purrr to first go to z, then to the element w (which is inside z), and return w.

thing <- list(list(y = 2, z = list(w = 'hello')),
              list(y = 2, z = list(w = 'world')))

map_chr(thing, c('z','w'))
## [1] "hello" "world"

If you’re like me, the numeric indexing of each of the entries is currently driving you nuts: I’d rather have each element in the list be named by the character it refers to. You can use set_names to accomplish this

got_chars[1:5] %>%
  rlang::set_names(map_chr(.,'name')) %>%
  listviewer::jsonedit()

Much better (remember that . refers to the object passed to the function through the pipe)!

Now, let’s say that I want to get all the Lannisters, so I can see which people to root against (or for, if that’s your jam).

This is where a lot of the power of purrr starts to come in, allowing you to easily apply functions across nested layers of a list

got_chars %>%
  set_names(map_chr(.,'name')) %>%
  map(`[`,c('name','allegiances')) %>%
  purrr::keep(~stringr::str_detect(.$name, 'Lannister')) %>%
  listviewer::jsonedit()

Now, suppose that we want anyone who’s allied with the Starks

got_chars[1:4] %>%
set_names(map_chr(.,'name')) %>%
map(`[`,c('name','allegiances')) %>% 
map(~str_detect(.$allegiances, 'Stark'))
## $`Theon Greyjoy`
## [1] FALSE
## 
## $`Tyrion Lannister`
## [1] FALSE
## 
## $`Victarion Greyjoy`
## [1] FALSE
## 
## $Will
## logical(0)

Hmmm, that doesn’t look good, what’s up with Will? What happens if I try and use keep (list filter) here?

got_chars %>%
set_names(map_chr(.,'name')) %>%
map(`[`,c('name','allegiances')) %>%
keep(~str_detect(.$allegiances, 'Stark'))

Nope, that still doesn’t work. What’s going on? The problem here is that our friend Will has no allegiances, and worse yet, the allegiances entry doesn’t say “none”; it’s just an empty array. Here’s one way to solve this

got_chars %>%
set_names(map_chr(.,'name')) %>%
map(`[`,c('name','allegiances')) %>%
keep(~ifelse(length(.$allegiances) > 0, str_detect(.$allegiances, 'Stark'),FALSE)) %>%
listviewer::jsonedit()

There’s almost certainly a better way, but this shows that things get a little more complicated when you’re trying to apply functions across list objects; things like dimensions, types, and NULLs can cause problems. If I’m trying something new, I’ll usually develop the method on a subset of the list that I know is “ideal”, make sure it works there, and then try the operation on progressively more complicated lists. That lets me separate errors in my functions from problems reading in “odd” data types.

As Cersei likes to remind us, anyone who’s not a Lannister is an enemy to the Lannisters. Let’s look at all the POV characters that aren’t allied to the Lannisters

got_chars %>%
  set_names(map_chr(.,'name'))  %>%
  map(`[`,c('name','allegiances')) %>%
  discard(~ifelse(length(.$allegiances) > 0, str_detect(.$allegiances, 'Lannister'),FALSE)) %>%
  listviewer::jsonedit()

You can also use map together with your own custom functions. Suppose we wanted to figure out how many aliases and alliances each character has, as well as where they were born. We can use pmap to apply a function over each of these attributes

got_list <-  got_chars %>% {
  list(
    name = map_chr(.,'name'),
    aliases = map(.,'aliases'),
    allegiances = map(.,'allegiances'),
    born = map_chr(.,'born')
  )
}

str(got_list, list.len = 3)
## List of 4
##  $ name       : chr [1:30] "Theon Greyjoy" "Tyrion Lannister" "Victarion Greyjoy" "Will" ...
##  $ aliases    :List of 30
##   ..$ : chr [1:4] "Prince of Fools" "Theon Turncloak" "Reek" "Theon Kinslayer"
##   ..$ : chr [1:11] "The Imp" "Halfman" "The boyman" "Giant of Lannister" ...
##   ..$ : chr "The Iron Captain"
##   .. [list output truncated]
##  $ allegiances:List of 30
##   ..$ : chr "House Greyjoy of Pyke"
##   ..$ : chr "House Lannister of Casterly Rock"
##   ..$ : chr "House Greyjoy of Pyke"
##   .. [list output truncated]
##   [list output truncated]
got_foo <- function(name, aliases, allegiances,born){

  paste(name, 'has', length(aliases), 'aliases and', length(allegiances),
        'allegiances, and was born in', born)

}

got_list %>%
  pmap_chr(got_foo) %>%
  head()
## [1] "Theon Greyjoy has 4 aliases and 1 allegiances, and was born in In 278 AC or 279 AC, at Pyke"    
## [2] "Tyrion Lannister has 11 aliases and 1 allegiances, and was born in In 273 AC, at Casterly Rock" 
## [3] "Victarion Greyjoy has 1 aliases and 1 allegiances, and was born in In 268 AC or before, at Pyke"
## [4] "Will has 1 aliases and 0 allegiances, and was born in "                                         
## [5] "Areo Hotah has 1 aliases and 1 allegiances, and was born in In 257 AC or before, at Norvos"     
## [6] "Chett has 1 aliases and 0 allegiances, and was born in At Hag's Mire"

Things obviously get a lot more complicated than this, but hopefully that gives you an idea of how to manipulate lists using purrr

Analysis with purrr and modelr

So far, purrr has basically helped us apply functions across, and poke around in, lists. That’s nice, but its real power comes in helping with analysis. Let’s look at the gapminder data set

head(gapminder)
## # A tibble: 6 x 6
##   country     continent  year lifeExp      pop gdpPercap
##   <fct>       <fct>     <int>   <dbl>    <int>     <dbl>
## 1 Afghanistan Asia       1952    28.8  8425333      779.
## 2 Afghanistan Asia       1957    30.3  9240934      821.
## 3 Afghanistan Asia       1962    32.0 10267083      853.
## 4 Afghanistan Asia       1967    34.0 11537966      836.
## 5 Afghanistan Asia       1972    36.1 13079460      740.
## 6 Afghanistan Asia       1977    38.4 14880372      786.

gapminder provides data on life expectancy, economics, and population for countries across the world.

Now, suppose we want to build up a model trying to predict life expectancy as a function of covariates, starting with a simple one: life expectancy as a function of population and per capita GDP

gapminder <- gapminder %>%
  set_names(colnames(.) %>% tolower())

life_mod <- lm(lifeexp ~ pop + gdppercap, data = gapminder)

                        Dependent variable: lifeexp
---------------------------------------------------
pop                     0.000*** (0.000)
gdppercap               0.001*** (0.00003)
Constant                53.648*** (0.322)
---------------------------------------------------
Observations            1,704
R2                      0.347
Adjusted R2             0.346
Residual Std. Error     10.443 (df = 1701)
F Statistic             452.151*** (df = 2; 1701)
---------------------------------------------------
Note: *p<0.1; **p<0.05; ***p<0.01

So now we have a very simple model, but how do we know if this is the model we want to use? Let’s use AIC to compare a few different model structures (note, this is not an endorsement for AIC mining!)

models <- list(
  simple = 'lifeexp ~ pop + gdppercap',

medium = 'lifeexp ~ pop + gdppercap + continent + year',

more = 'lifeexp ~ pop + gdppercap + country + year',

woah = 'lifeexp ~ pop + gdppercap + year*country'
)

Now, since this is a simple four-model example, we could just use a loop, or even copy and paste a few times. But let’s see how we can use purrr to help us run some diagnostics on these models.

Let’s start by getting our models and data into a data frame, using list-columns

model_frame <- tibble(model = models) %>%
  mutate(model_name = names(model)) 

Now, let’s use purrr to convert each of these character strings into a model

model_frame <- model_frame %>% 
    mutate(model = map(model, as.formula))

model_frame
## # A tibble: 4 x 2
##   model        model_name
##   <named list> <chr>     
## 1 <formula>    simple    
## 2 <formula>    medium    
## 3 <formula>    more      
## 4 <formula>    woah
model_frame <- model_frame %>%
  mutate(fit = lm(model, data = gapminder))

Hmmm, why didn’t that work? mutate by itself doesn’t know how to evaluate this, but map can help us out

model_frame <- model_frame %>%
  mutate(fit = map(model, ~lm(.x, data = gapminder)))

model_frame
## # A tibble: 4 x 3
##   model        model_name fit         
##   <named list> <chr>      <named list>
## 1 <formula>    simple     <lm>        
## 2 <formula>    medium     <lm>        
## 3 <formula>    more       <lm>        
## 4 <formula>    woah       <lm>

We’re now going to start integrating some methods from the modelr package to diagnose our regression

model_frame <- model_frame %>%
mutate(r2 = map_dbl(fit, ~modelr::rsquare(., data = gapminder)),
aic = map_dbl(fit, ~AIC(.))) %>% 
  arrange(aic)

model_frame
## # A tibble: 4 x 5
##   model        model_name fit             r2    aic
##   <named list> <chr>      <named list> <dbl>  <dbl>
## 1 <formula>    woah       <lm>         0.976  7752.
## 2 <formula>    more       <lm>         0.932  9268.
## 3 <formula>    medium     <lm>         0.717 11420.
## 4 <formula>    simple     <lm>         0.347 12836.

So, AIC tells us that our most complex model is still the most parsimonious (of the ones we’ve explored here). Let’s dig into this a bit further by explicitly testing the out-of-sample predictive ability of each of the models. “Overfit” models are commonly really good at describing the data they are fit to, but perform poorly out of sample.

We’ll start by using the modelr package to create a bunch of training-test combination data sets using 10-fold cross validation.

validate <- gapminder %>%
 rsample::vfold_cv(10)

test_data <- list(test_training = list(validate), model_name = model_frame$model_name)  
  
test_data <- cross_df(test_data) %>%
  unnest(.id = "model_number") %>% 
  left_join(model_frame %>% select(model_name, model, fit), by = "model_name")
## Warning: `cols` is now required when using unnest().
## Please use `cols = c(test_training)`
## Warning: The `.id` argument of `unnest()` is deprecated as of tidyr 1.0.0.
## Manually create column of names instead.
test_data
## # A tibble: 40 x 5
##    splits             id     model_name model        fit         
##    <list>             <chr>  <chr>      <named list> <named list>
##  1 <split [1533/171]> Fold01 woah       <formula>    <lm>        
##  2 <split [1533/171]> Fold02 woah       <formula>    <lm>        
##  3 <split [1533/171]> Fold03 woah       <formula>    <lm>        
##  4 <split [1533/171]> Fold04 woah       <formula>    <lm>        
##  5 <split [1534/170]> Fold05 woah       <formula>    <lm>        
##  6 <split [1534/170]> Fold06 woah       <formula>    <lm>        
##  7 <split [1534/170]> Fold07 woah       <formula>    <lm>        
##  8 <split [1534/170]> Fold08 woah       <formula>    <lm>        
##  9 <split [1534/170]> Fold09 woah       <formula>    <lm>        
## 10 <split [1534/170]> Fold10 woah       <formula>    <lm>        
## # … with 30 more rows

In a few lines of code, we now have “tidy” cross validation routine across multiple models, not bad.

test_data <- test_data %>%
mutate(fit = map2(model, splits, ~lm(.x, data = rsample::analysis(.y)))) %>%
mutate(root_mean_sq_error = map2_dbl(fit, splits, ~modelr::rmse(.x,rsample::assessment(.y))))
test_data %>%
  ggplot(aes(root_mean_sq_error, fill = model_name)) +
  geom_density(alpha = 0.75) +
  labs(x = "Root Mean Squared Error", title = "Cross-validated distribution of RMSE")

Judging by out of sample RMSE, the most complicated model (woah) is still our best choice. And just like that in a few lines of code we’ve used modelr and purrr to easily compare a number of different model structures.

Out-of-sample RMSE is a useful metric, but there are lots of other diagnostics we might want to run. Suppose that we want to examine the fitted vs. residuals plots for each model

test_data <- test_data %>% 
  mutate(aug_model = map(fit, broom::augment))

fit_plot <- test_data %>% 
  select(model_name,aug_model) %>% 
  unnest() %>% 
  ggplot(aes(.fitted, .resid)) + 
  geom_hline(aes(yintercept = 0), linetype = 2, color = "red") +
  geom_point(alpha = 0.5) + 
  facet_wrap(~model_name, scales = "free_y")
## Warning: `cols` is now required when using unnest().
## Please use `cols = c(aug_model)`
fit_plot

Hmmm, that doesn’t look good (we want the black points to scatter more or less evenly around the red dashed zero line); we clearly need to spend more time on model specification (AKA this very simple model is, surprise surprise, not a good way to model life expectancy). But now we see how we can use purrr and modelr to easily construct and compare numerous new model hypotheses in our hunt for the best one.

Parallel purrr

The nature of purrr really lends itself to parallel processing. At its core, purrr is doing a “split-apply-combine” routine, meaning that for most use cases you have a bunch of independent operations that you need your computer to run (i.e., the results of one step in a map call do not affect the next step). This means that if you want to speed things up, you can farm those processes out to different cores on your computer. For example, if you ran an operation in parallel on four cores, you could in theory run four tasks in about the time it takes to run one task normally (it’s not quite that linear due to startup and maintenance costs).

WARNING: running things in parallel can get complicated across different platforms (especially moving from Linux/OS X to Windows). Be prepared to do some work on this.

As of now, purrr does not have built in parallel functionality (though it may be in the works). There are a few options out there though.

One is to simply step outside of purrr for a moment: one of the great things about the open-source world is finding the right package for the right problem, and in this case there are other options that work great.

My preferred solution to date has been the foreach and doParallel packages. The nice thing is that once you’ve formatted your data and model to be run through purrr (i.e. made things tidy), you’re already set up to use these tools

  n_cores <- floor(parallel::detectCores()/2)

  doParallel::registerDoParallel(cores = n_cores)

library(foreach) # provides the %dopar% operator

fits <- foreach(i = 1:nrow(test_data)) %dopar% {
  
  lm(test_data$model[[i]], data = rsample::analysis(test_data$splits[[i]]))
  
}

test_data$fit <- fits

There is also a new package called furrr which looks really promising. This allows you to run things in parallel by simply setting up a cluster and appending future_ to your map call (leveraging the future package).

library(furrr)
## Loading required package: future
future::plan(multiprocess(workers = 1))
## Warning in supportsMulticoreAndRStudio(...): [ONE-TIME WARNING] Forked
## processing ('multicore') is not supported when running R from RStudio
## because it is considered unstable. For more details, how to control forked
## processing or not, and how to silence this warning in future R sessions, see ?
## parallelly::supportsMulticore
start <- Sys.time()

test_data <- test_data %>% 
  mutate(par_fits = future_map2(model, splits, ~lm(.x, data = rsample::analysis(.y)), .progress = TRUE))


Sys.time() - start
## Time difference of 1.431695 secs

Miscellaneous purrr

That’s a broad tour of the key features of purrr. Here are a few more examples of miscellaneous things you can do with purrr

Debugging using safely

One annoying thing about using map (or apply) in place of loops is that it can make debugging much harder to deal with. With a loop, it’s easy to see where exactly an error occurred and your loop failed (e.g. look at the index of the loop when the error occurred). With map, it can be much harder to figure out where the problem is, especially if you have a very large list that you’re mapping over.

The safely function lets us solve this problem.

Suppose that you’ve got a bunch of csv’s of fish lengths from a field site. The field techs are supposed to enter the length in one column and the units in a second column. As tends to happen though, some techs put the units next to the length (e.g. 26cm), instead of in separate length and unit columns. Suppose then that we want to pull in our lengths and log transform them, since we suspect that the lengths are log-normally distributed and we’d like to run an OLS regression on them.

To simulate our data…

fish_foo <- function() {
  bad_tech <- runif(1, 0, 10) <= 2 # ~20% chance of a bad tech

  if (!bad_tech) {
    lengths <- rnorm(10, 25, 5) %>% signif(3)

    units <- "cm"
  } else {
    lengths <- paste0(rnorm(10, 25, 5) %>% signif(3), "cm")

    units <- "what's this column for?"
  }

  out <- tibble(lengths = lengths, units = units)
  return(out)
}

length_data <- rerun(100, fish_foo()) %>%
  set_names(paste0("tech", 1:100))

listviewer::jsonedit(length_data)

Our goal is to put all of these observations together, log transform the lengths, and plot. Using map, we might try

length_data %>% 
  map(~log(.x$lengths))

Yep, that doesn’t work, since map hit an error somewhere in there (it doesn’t know how to take the log of “25cm”). Now, we could go through all 100 entries and see which ones are bad, or concatenate them earlier and look for NAs after conversion to numeric, but let’s see how we can use purrr to deal with this.

safe_log <-  safely(log)

diagnose_length <- length_data %>% 
  map(~safe_log(.$lengths))

head(diagnose_length,2)
## $tech1
## $tech1$result
## NULL
## 
## $tech1$error
## <simpleError in .Primitive("log")(x, base): non-numeric argument to mathematical function>
## 
## 
## $tech2
## $tech2$result
##  [1] 2.975530 3.299534 3.117950 3.198673 3.367296 3.230804 3.086487 3.063391
##  [9] 3.280911 3.284664
## 
## $tech2$error
## NULL

Great, now we at least have something to work with. safely gives us two objects per entry: the result if the call worked, and the captured error message if it didn’t.
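
A handy companion trick here is transpose(), which flips a list like this “inside out”, grouping all the results together and all the errors together; compact() then drops the NULL entries. A sketch, using a two-tech stand-in for diagnose_length:

```r
library(purrr)

safe_log <- safely(log)

# a small stand-in for diagnose_length: one bad tech, one good one
diagnosed <- list(
  tech1 = safe_log("25cm"),    # errors: non-numeric
  tech2 = safe_log(c(25, 30))
)

flipped <- transpose(diagnosed)
# flipped$result and flipped$error are each named lists over the techs

good_lengths <- compact(flipped$result) # drops tech1's NULL result
names(good_lengths)
#> [1] "tech2"
```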

How do we figure out which techs are the problem? We can use map_lgl to help us out here

bad_lengths <- map_lgl(diagnose_length, ~!is.null(.x$error))

bad_techs <- diagnose_length %>% 
  keep(bad_lengths)

names(bad_techs)
##  [1] "tech1"  "tech13" "tech31" "tech41" "tech50" "tech52" "tech59" "tech61"
##  [9] "tech77" "tech80" "tech90"

I leave it to your imagination to think of how to resolve this problem, but at least we now know where the problem is. One strategy is to use the handy possibly function from purrr. This basically says try a function and return its value if it works, otherwise return something else.

possibly_log <-  possibly(log, otherwise = NA)


diagnose_length <- length_data %>% 
  map(~possibly_log(.$lengths))

listviewer::jsonedit(diagnose_length)

Find and Convert Factors

Factors can creep into your data, which can sometimes cause problems. There are lots of ways to solve this, but you can use purrr to efficiently check for factors, and convert them to characters in your data frame.

Let’s take a look at the gapminder dataset, from the gapminder package.

gapminder
## # A tibble: 1,704 x 6
##    country     continent  year lifeexp      pop gdppercap
##    <fct>       <fct>     <int>   <dbl>    <int>     <dbl>
##  1 Afghanistan Asia       1952    28.8  8425333      779.
##  2 Afghanistan Asia       1957    30.3  9240934      821.
##  3 Afghanistan Asia       1962    32.0 10267083      853.
##  4 Afghanistan Asia       1967    34.0 11537966      836.
##  5 Afghanistan Asia       1972    36.1 13079460      740.
##  6 Afghanistan Asia       1977    38.4 14880372      786.
##  7 Afghanistan Asia       1982    39.9 12881816      978.
##  8 Afghanistan Asia       1987    40.8 13867957      852.
##  9 Afghanistan Asia       1992    41.7 16317921      649.
## 10 Afghanistan Asia       1997    41.8 22227415      635.
## # … with 1,694 more rows

Yep, look at that, country and continent are both factors. Useful for regression, but a little dangerous to have in your raw data.

We can use purrr to find all the factors in our data

gapminder %>%
  map_lgl(is.factor)
##   country continent      year   lifeexp       pop gdppercap 
##      TRUE      TRUE     FALSE     FALSE     FALSE     FALSE
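
Relatedly, since a dataframe is just a list of columns, keep() can filter down to the factor columns directly (this sketch assumes purrr and gapminder are loaded, as above):

```r
gapminder %>%
  keep(is.factor) %>%
  names()
#> [1] "country"   "continent"
```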

And to convert each column that is a factor into a character, we could try map_if, which applies a test to each element in the list, and applies the function only to the elements that pass

gapminder %>%
  map_if(is.factor, as.character) %>%
  str(2)
## List of 6
##  $ country  : chr [1:1704] "Afghanistan" "Afghanistan" "Afghanistan" "Afghanistan" ...
##  $ continent: chr [1:1704] "Asia" "Asia" "Asia" "Asia" ...
##  $ year     : int [1:1704] 1952 1957 1962 1967 1972 1977 1982 1987 1992 1997 ...
##  $ lifeexp  : num [1:1704] 28.8 30.3 32 34 36.1 ...
##  $ pop      : int [1:1704] 8425333 9240934 10267083 11537966 13079460 14880372 12881816 13867957 16317921 22227415 ...
##  $ gdppercap: num [1:1704] 779 821 853 836 740 ...

Huh, well that worked, but something is weird: our nice gapminder dataframe is now a list. How can we do this and keep things as a dataframe? We can use the purrrlyr package, which has handy parallels of the functions in purrr that are designed to take and give back dataframes.

gapminder %>%
  purrrlyr::dmap_if(is.factor, as.character) %>%
  head()
## # A tibble: 6 x 6
##   country     continent  year lifeexp      pop gdppercap
##   <chr>       <chr>     <int>   <dbl>    <int>     <dbl>
## 1 Afghanistan Asia       1952    28.8  8425333      779.
## 2 Afghanistan Asia       1957    30.3  9240934      821.
## 3 Afghanistan Asia       1962    32.0 10267083      853.
## 4 Afghanistan Asia       1967    34.0 11537966      836.
## 5 Afghanistan Asia       1972    36.1 13079460      740.
## 6 Afghanistan Asia       1977    38.4 14880372      786.

Much better, those pesky factors are now characters and we still have a dataframe.
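
As an aside, if you’d rather not pull in another package, dplyr’s mutate_if() does the same job and also keeps the tibble intact (a sketch, equivalent to the purrrlyr version above, assuming dplyr and gapminder are loaded):

```r
gapminder %>%
  mutate_if(is.factor, as.character) %>%
  head(2)
# country and continent come back as <chr>, still inside a tibble
```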

Partial

I just really like this one. Suppose you’ve got something that you’re copying and pasting a lot, like getting the interquartile range of a variable.

gapminder %>%
  summarise(
    mean_gdp = mean(gdppercap),
    lower_gdp = quantile(gdppercap, 0.25),
    upper_gdp = quantile(gdppercap, 0.75),
    mean_life = mean(lifeexp),
    lower_life = quantile(lifeexp, 0.25),
    upper_life = quantile(lifeexp, 0.75)
  )
## # A tibble: 1 x 6
##   mean_gdp lower_gdp upper_gdp mean_life lower_life upper_life
##      <dbl>     <dbl>     <dbl>     <dbl>      <dbl>      <dbl>
## 1    7215.     1202.     9325.      59.5       48.2       70.8

Works, and in this case not hard, but still annoying to retype! partial lets you pre-fill some of a function’s arguments, giving you a shorter helper to reuse.

lower <- partial(quantile, probs = 0.25)

upper <- partial(quantile, probs = 0.75)

gapminder %>%
  summarise(
    mean_gdp = mean(gdppercap),
    lower_gdp = lower(gdppercap),
    upper_gdp = upper(gdppercap),
    mean_life = mean(lifeexp),
    lower_life = lower(lifeexp),
    upper_life = upper(lifeexp)
  )
## # A tibble: 1 x 6
##   mean_gdp lower_gdp upper_gdp mean_life lower_life upper_life
##      <dbl>     <dbl>     <dbl>     <dbl>      <dbl>      <dbl>
## 1    7215.     1202.     9325.      59.5       48.2       70.8
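
partial() also composes nicely with the map family. For instance, reusing the lower helper from above (plus dplyr’s select_if), you can grab the 25th percentile of every numeric column at once:

```r
gapminder %>%
  select_if(is.numeric) %>%
  map_dbl(lower)
# a named vector: the 25th percentile of each numeric column
```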

And that’s about it; hopefully this helps you get started incorporating purrr into your programming.
