Why are the results of read_csv larger than those of read.csv?

Choosing the right functions is very important for writing efficient code. The degree of optimization in different functions and packages affects how objects are stored, how much memory they take, and how fast operations on them run. Consider the following.

library(data.table)
library(microbenchmark)  # needed for the timings below

a <- 1:1000000
b <- rnorm(1000000)
mat <- as.matrix(cbind(a, b))
df <- data.frame(a, b)
dt <- data.table::as.data.table(mat)
cat(paste0("Matrix size: ", object.size(mat),
           "\ndf size: ", object.size(df), " (", round(object.size(df) / object.size(mat), 2), ")",
           "\ndt size: ", object.size(dt), " (", round(object.size(dt) / object.size(mat), 2), ")"))
Matrix size: 16000568
df size: 12000848 (0.75)
dt size: 4001152 (0.25)

So already you can see that data.table stores the same data in a quarter of the space the matrix uses, and a third of what the data.frame uses. Now for the speed of operations:

> microbenchmark(df[df$a*df$b>500,], mat[mat[,1]*mat[,2]>500,], dt[a*b>500])
Unit: milliseconds
                             expr       min        lq     mean   median        uq      max neval
          df[df$a * df$b > 500, ] 23.766201 24.136201 26.49715 24.34380 30.243300  32.7245   100
 mat[mat[, 1] * mat[, 2] > 500, ] 13.010000 13.146301 17.18246 13.41555 20.105450 117.9497   100
                  dt[a * b > 500]  8.502102  8.644001 10.90873  8.72690  8.879352 112.7840   100

Comparing the medians, data.table does the filtering about 2.8 times faster than base R on the data.frame, and about 1.5 times faster than on the matrix.

And that’s not all: for almost any CSV import, data.table::fread will change your life. Give it a try instead of read.csv or read_csv.
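If you want to see the difference on your own machine, here is a minimal sketch: it reuses the df object created above, writes it to a temporary file, and times both readers (the numbers you get will depend on your data and disk).

library(data.table)
library(microbenchmark)

# Write the data.frame from above to a throwaway CSV, then time both readers.
tmp <- tempfile(fileext = ".csv")
write.csv(df, tmp, row.names = FALSE)

microbenchmark(
  base  = read.csv(tmp),
  fread = data.table::fread(tmp),
  times = 10
)

unlink(tmp)  # remove the temporary file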

IMHO data.table doesn’t get half the love it deserves; it’s the best all-round package for performance, with a very concise syntax. The package vignettes (see vignette(package = "data.table")) should put you on your way quickly, and they are worth the effort, trust me.
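As a small taste of that syntax, here is a sketch of the dt[i, j, by] form, reusing the dt object created above (the grouping expression is purely for illustration):

# dt[i, j, by]: filter rows (i), compute summaries (j), grouped (by) in one call
dt[a > 500000,                      # i: keep rows where a exceeds 500000
   .(mean_b = mean(b), n = .N),     # j: summaries to compute
   by = .(b_positive = b > 0)]      # by: grouped by the sign of b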

For further performance gains, Rfast provides Rcpp implementations of many popular functions, such as rowSort() for example.
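For instance, a rough sketch comparing Rfast::rowSort() with the base apply() equivalent (the matrix here is made up purely for illustration, so treat the timings as indicative only):

library(Rfast)
library(microbenchmark)

m <- matrix(rnorm(1000 * 100), nrow = 1000)

microbenchmark(
  base  = t(apply(m, 1, sort)),  # row-wise sort in base R
  rfast = Rfast::rowSort(m),     # C++-backed row-wise sort
  times = 20
)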


EDIT: fread's speed comes from optimizations at the C level, involving pointers for memory mapping and coerce-as-you-go techniques, which frankly are beyond my knowledge to explain. This post contains some explanations by its author, Matt Dowle, as well as an interesting, if short, exchange between him and the author of dplyr, Hadley Wickham.
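If you want to peek at what fread is doing, it accepts a verbose argument that prints its internal timing breakdown; a minimal sketch, where "data.csv" is just a placeholder path:

# verbose = TRUE makes fread print its internal timing breakdown
# ("data.csv" is only a placeholder path)
dt_in <- data.table::fread("data.csv", verbose = TRUE)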
