Slide 1

R Stories from the Trenches
Szilárd Pafka, PhD, Chief Scientist, Epoch
Budapest R Meetup, August 2015


Slide 8

T-25 ~ 1990

Slide 9

T-20 ~ 1996

Slide 10

T-15 ~ 2001

Slide 11

T-10 ~ 2006
- cost was not an issue!
- data.frame
- 800 packages

Slide 12

~ 2009 aka data mining, aka (today) data science

Slide 13

T ~ 2014 Data Science

Slide 14

1999: CRISP-DM (Cross-Industry Standard Process for Data Mining)


Slide 23

2006


Slide 35

5 yrs


Slide 39

(2009)



Slide 46

(382 RSVPs)


Slide 49

- high-level API
- fast
- environment
- reproducibility


Slide 54

Data frames:
- “in-memory table” with (fast) bulk operations (“vectorized”)
- thousands of packages (providing a high-level API)
- R, Python (pandas), Spark
- the best way to work with structured data
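The data-frame idea on the slide can be sketched in pandas, one of the implementations the slide names alongside R and Spark; the column names and data here are made up for illustration:

```python
# A minimal sketch of a data frame with vectorized bulk operations,
# using pandas (the slide also names R and Spark as implementations).
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "b", "a", "b", "a"],
    "value": [1, 2, 3, 4, 5],
})

# "Vectorized" bulk operation: acts on the whole column at once,
# with no explicit loop over rows.
df["doubled"] = df["value"] * 2

# High-level API: group-and-aggregate in a single expression.
totals = df.groupby("group")["value"].sum()
```

The same group-and-aggregate pattern is one expression in R (`aggregate`, data.table, dplyr) and in Spark as well, which is what the slide means by a high-level API over structured data.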



Slide 57

R data.table (on one server!)
- aggregation: 100M rows, 1M groups: 1.3 sec
- join: 100M rows x 1M rows: 1.7 sec
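The two operations benchmarked on the slide are a grouped aggregation and a key join. A toy-scale sketch of the same two operations, in pandas rather than the slide's R data.table, and at 1,000 rows rather than 100M (the slide's timings are not reproduced here):

```python
# Toy-scale illustration of the two benchmarked operations:
# a grouped aggregation and a join on a key column.
# (The slide's numbers are for R data.table on 100M rows.)
import pandas as pd

n = 1_000
left = pd.DataFrame({"id": range(n), "x": [i % 10 for i in range(n)]})
right = pd.DataFrame({"id": range(0, n, 2), "y": 1})

# Aggregation: number of rows in each group of column x.
agg = left.groupby("x")["x"].count()

# Join: match rows of `left` against `right` on the key column.
joined = left.merge(right, on="id", how="inner")
```

In R data.table the equivalents would be along the lines of `DT[, .N, by = x]` and a keyed join; the point of the slide is that these run in seconds on 100M rows on a single server.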

Slide 58

- aggregation: 100M rows, 1M groups
- join: 100M rows x 1M rows



Slide 62

A colleague from work asked me to investigate Spark and R, so the most obvious thing to do was to look into SparkR. I came across a piece of code that reads lines from a file and counts how many lines contain an "a" and how many contain a "b". I prepared a file with 5 columns and 1 million records.
Spark: 26.45734 seconds for a million records? Nice job :-)
R: 48.31641 seconds? Looks like Spark was almost twice as fast this time... and this is a pretty simple example... I'm sure that when complexity arises, the gap gets even bigger...
HOLY CRAP UPDATE! Markus gave me this code in the comments... [R: 0.1791632 seconds]. I just added a couple of things to make it compliant... but... damn... I wish I could code like that in R.
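The quoted post's task — count the lines containing an "a" and those containing a "b" — is trivial once written as a vectorized/bulk operation instead of a row-by-row loop, which is presumably what Markus's R one-liner (not shown on the slide) did with something like `sum(grepl("a", lines))`. The same idea sketched in Python, with a stand-in list instead of the post's million-record file:

```python
# The task from the quoted post, as a bulk operation over all lines.
# `lines` is a made-up stand-in for the file in the post.
lines = ["apple", "banana", "cherry", "brie"]

# Count lines containing "a" and lines containing "b" in one pass each,
# with no per-line bookkeeping code.
num_a = sum("a" in line for line in lines)
num_b = sum("b" in line for line in lines)
```

The punchline of the slide stands either way: the slow R timing came from unidiomatic code, and the idiomatic vectorized version beat Spark by two orders of magnitude on this size of data.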
