Slide 1

Slide 1 text

Beyond Shuffling: tips & tricks for scaling Apache Spark (Vancouver Spark 2015)

Slide 2

Slide 2 text

Who am I? ● My name is Holden Karau ● Preferred pronouns are she/her ● I’m a Software Engineer at IBM ● previously Alpine, Databricks, Google, Foursquare & Amazon ● co-author of Learning Spark & Fast Data Processing with Spark ○ co-author of a new book focused on Spark performance coming out next year* ● @holdenkarau ● SlideShare http://www.slideshare.net/hkarau ● LinkedIn https://www.linkedin.com/in/holdenkarau ● GitHub https://github.com/holdenk ● Spark Videos http://bit.ly/holdenSparkVideos

Slide 3

Slide 3 text

What is going to be covered: ● What I think I might know about you ● RDD re-use (caching, persistence levels, and checkpointing) ● Working with key/value data ○ Why groupByKey is evil and what we can do about it ● Best practices for Spark accumulators* ● When Spark SQL can be amazing and wonderful ● A quick detour into some future performance work in Spark MLlib

Slide 4

Slide 4 text

Who I think you wonderful humans are? ● Nice* people ● Know some Apache Spark ● Want to scale your Apache Spark jobs Lori Erickson

Slide 5

Slide 5 text

Cat photo from http://galato901.deviantart.com/art/Cat-on-Work-Break-173043455 Photo from Cocoa Dream

Slide 6

Slide 6 text

RDD re-use - sadly not magic ● If we know we are going to re-use the RDD, what should we do? ○ If it fits nicely in memory: cache it in memory ○ Persist at another level ■ MEMORY_ONLY, MEMORY_ONLY_SER, MEMORY_AND_DISK, MEMORY_AND_DISK_SER ○ Checkpointing ● Noisy clusters ○ The _2 storage levels & checkpointing can help (see the sketch below) Richard Gillin
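A minimal sketch of these three options, assuming an existing SparkContext sc; the input path, checkpoint directory, and RDD name are invented for illustration:

import org.apache.spark.storage.StorageLevel

// expensiveRDD stands in for anything we plan to re-use several times.
val expensiveRDD = sc.textFile("hdfs:///some/input").map(_.toUpperCase)

// If it fits nicely in memory: plain caching (MEMORY_ONLY under the hood).
expensiveRDD.cache()

// Otherwise pick an explicit level instead of cache(); a storage level can only be
// set once per RDD, so choose one of these:
//   expensiveRDD.persist(StorageLevel.MEMORY_AND_DISK_SER)
//   expensiveRDD.persist(StorageLevel.MEMORY_AND_DISK_2) // "_2" variants keep a replica, handy on noisy clusters

// Checkpointing writes the data out and truncates the lineage.
sc.setCheckpointDir("hdfs:///tmp/checkpoints")
expensiveRDD.checkpoint()
expensiveRDD.count() // an action is needed to actually materialize the checkpoint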

Slide 7

Slide 7 text

Considerations for Key/Value Data ● What does the distribution of keys look like? ● What type of aggregations do we need to do? ● Do we want our data in any particular order? ● Are we joining with another RDD? ● What’s our partitioner? ○ If we don’t have an explicit one: what is the partition structure? (see the partitioner check below) eleda 1
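A hedged illustration of the partitioner question; salesByZip and the partition count are made up for the example:

import org.apache.spark.HashPartitioner

val salesByZip = sc.parallelize(Seq(("94110", 1.0), ("10003", 2.0)))

// Freshly created pair RDDs usually have no explicit partitioner:
println(salesByZip.partitioner) // None

// Shuffle-producing operations like reduceByKey assign one:
val totals = salesByZip.reduceByKey(_ + _, 20)
println(totals.partitioner) // Some(org.apache.spark.HashPartitioner@...)

// Or choose one up front (and persist) so later joins/aggregations on the same key
// can reuse the partitioning instead of shuffling again:
val prePartitioned = salesByZip.partitionBy(new HashPartitioner(20)).persist()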

Slide 8

Slide 8 text

What is key skew and why do we care? ● Keys aren’t evenly distributed ○ Sales by zip code, or records by city, etc. ● groupByKey will explode (but it’s pretty easy to break) ● We can have really unbalanced partitions ○ If we have enough key skew, sortByKey could even fail ○ Stragglers (uneven sharding can make some tasks take much longer) (see the quick skew check below) Mitchell Joyce
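A quick, hedged way to eyeball skew before it bites; records is a hypothetical (key, value) pair RDD:

// Count records per key and look at the heaviest hitters.
val keyCounts = records.map { case (k, _) => (k, 1L) }.reduceByKey(_ + _)
keyCounts.sortBy(_._2, ascending = false).take(10).foreach(println)

// Partition sizes show whether the skew made it into the physical layout.
records.mapPartitions(it => Iterator(it.size)).collect().foreach(println)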

Slide 9

Slide 9 text

groupByKey - just how evil is it? ● Pretty evil ● Groups all of the records with the same key into a single record ○ Even if we immediately reduce it (e.g. sum it or similar) ○ This can be too big to fit in memory, and then our job fails ● Unless we are in SQL, then happy pandas geckoam

Slide 10

Slide 10 text

So what does that look like?
Input records:
(94110, A, B) (94110, A, C) (10003, D, E) (94110, E, F) (94110, A, R) (10003, A, R) (94110, D, R) (94110, E, R) (94110, E, R) (67843, T, R) (94110, T, R) (94110, T, R) (67843, T, R) (10003, A, R)
After groupByKey (for key 94110):
(94110, [(A, B), (A, C), (E, F), (A, R), (D, R), (E, R), (E, R), (T, R), (T, R)])

Slide 11

Slide 11 text

Let’s revisit wordcount with groupByKey

val words = rdd.flatMap(_.split(" "))
val wordPairs = words.map((_, 1))
val grouped = wordPairs.groupByKey()
grouped.mapValues(_.sum)

Slide 12

Slide 12 text

And now back to the “normal” version

val words = rdd.flatMap(_.split(" "))
val wordPairs = words.map((_, 1))
val wordCounts = wordPairs.reduceByKey(_ + _)
wordCounts

Slide 13

Slide 13 text

Let’s see what it looks like when we run the two. Quick pastebin of the code for the two: http://pastebin.com/CKn0bsqp

val rdd = sc.textFile("python/pyspark/*.py", 20) // Make sure we have many partitions
// Evil group by key version
val words = rdd.flatMap(_.split(" "))
val wordPairs = words.map((_, 1))
val grouped = wordPairs.groupByKey()
val evilWordCounts = grouped.mapValues(_.sum)
evilWordCounts.take(5)
// Less evil version
val wordCounts = wordPairs.reduceByKey(_ + _)
wordCounts.take(5)

Slide 14

Slide 14 text

groupByKey

Slide 15

Slide 15 text

reduceByKey

Slide 16

Slide 16 text

So what did we do instead? ● reduceByKey ○ Works when the types are the same (e.g. in our summing version) ● aggregateByKey ○ Doesn’t require the types to be the same (e.g. computing a stats model or similar) ● Allows Spark to pipeline the reduction & skip making the list ● We also got a map-side reduction (note the difference in shuffled read) (see the aggregateByKey sketch below)
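As a hedged example of that aggregateByKey point (not from the slides): computing a per-key (sum, count), where the result type differs from the value type and the first combine step happens map-side:

// wordPairs is the (word, 1) RDD from the earlier slides.
// zeroValue is a (sum, count) pair; seqOp folds one value into the partition-local
// accumulator; combOp merges accumulators from different partitions.
val sumAndCount = wordPairs.aggregateByKey((0L, 0L))(
  (acc, value) => (acc._1 + value, acc._2 + 1L),
  (left, right) => (left._1 + right._1, left._2 + right._2)
)
val averages = sumAndCount.mapValues { case (sum, count) => sum.toDouble / count }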

Slide 17

Slide 17 text

So why did we read in python/*.py? If we just read in the standard README.md file, there aren’t enough duplicated keys for the reduceByKey & groupByKey difference to be really apparent. Which is why groupByKey can be safe sometimes.

Slide 18

Slide 18 text

Can just the shuffle cause problems? ● Sorting by key can put all of the records in the same partition ● We can run into partition size limits (around 2GB) ● Or just get bad performance ● So that we can handle data like the above, we can add some “junk” to our key (see the salting sketch below) (94110, A, B) (94110, A, C) (10003, D, E) (94110, E, F) (94110, A, R) (10003, A, R) (94110, D, R) (94110, E, R) (94110, E, R) (67843, T, R) (94110, T, R) (94110, T, R) Todd Klassy
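One hedged approach to that “junk”: salt the key with a small random suffix, aggregate on the salted key, then strip the salt and aggregate the (now much smaller) partial results. The salt range of 10 and the records RDD are invented for illustration:

import scala.util.Random

// records is a hypothetical (zipCode, amount) pair RDD with a very hot key.
val salted = records.map { case (zip, amount) =>
  ((zip, Random.nextInt(10)), amount) // spread each hot key over up to 10 sub-keys
}
// First aggregation runs on the salted key, so no single task sees all of 94110.
val partial = salted.reduceByKey(_ + _)
// Drop the salt and aggregate again to recover the real per-key totals.
val totals = partial.map { case ((zip, _), amount) => (zip, amount) }.reduceByKey(_ + _)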

Slide 19

Slide 19 text

Shuffle explosions :(
Before the shuffle:
(94110, A, B) (94110, A, C) (10003, D, E) (94110, E, F) (94110, A, R) (10003, A, R) (94110, D, R) (94110, E, R) (94110, E, R) (67843, T, R) (94110, T, R) (94110, T, R)
After shuffling by key (one partition gets nearly everything):
(94110, A, B) (94110, A, C) (94110, E, F) (94110, A, R) (94110, D, R) (94110, E, R) (94110, E, R) (94110, T, R) (94110, T, R)
(67843, T, R)
(10003, A, R)

Slide 20

Slide 20 text

Spark accumulators ● Really “great” way for keeping track of failed records ● Double counting makes things really tricky ○ Jobs which worked “fine” don’t continue to work “fine” when minor changes happen ● Relative rules can save us* under certain conditions Found Animals Foundation

Slide 21

Slide 21 text

Using an accumulator for validation:

val (ok, bad) = (sc.accumulator(0), sc.accumulator(0))
val records = input.map { x =>
  if (isValid(x)) ok += 1 else bad += 1
  // Actual parse logic here
}
// An action (e.g. count, save, etc.)
if (bad.value > 0.1 * ok.value) {
  throw new Exception("bad data - do not use results")
  // Optional cleanup
}
// Mark as safe

P.S: If you are interested in this check out spark-validator (still early stages). Found Animals Foundation

Slide 22

Slide 22 text

Using a library: simple historic validation

val vc = new ValidationConf(jobHistoryPath, "1", true,
  List[ValidationRule](new AvgRule("acc", 0.001, Some(200))))
val v = Validation(sc, vc)
// Some job logic
// Register an accumulator (optional)
val acc = sc.accumulator(0)
v.registerAccumulator(acc, "acc")
// More job logic goes here
if (v.validate(jobId)) {
  // Success logic goes here
} else sadness()

Photo by Dvortygirl

Slide 23

Slide 23 text

With a Spark internal counter...

val vc = new ValidationConf(tempPath, "1", true,
  List[ValidationRule](
    new AbsoluteSparkCounterValidationRule("recordsRead", Some(30), Some(1000))))
val sqlCtx = new SQLContext(sc)
val v = Validation(sc, sqlCtx, vc)
// Do work here....
assert(v.validate(5) === true)

Photo by Dvortygirl

Slide 24

Slide 24 text

Where can Spark SQL benefit perf? ● Structured or semi-structured data ● OK with having less* complex operations available to us ● We may only need to operate on a subset of the data ○ The fastest data to process isn’t even read ● Remember that non-magic cat? It’s got some magic** now ○ In part from peeking inside of boxes ● non-JVM (aka Python & R) users: saved from double serialization cost! :) **Magic may cause stack overflow. Not valid in all states. Consult local magic bureau before attempting magic Matti Mattila

Slide 25

Slide 25 text

Why is Spark SQL good for those things? ● Space-efficient columnar cached representation ● Able to push down operations to the data store ● Optimizer is able to look inside of our operations ○ Regular Spark can’t see inside our operations to spot the difference between (min(_, _)) and (append(_, _)) (see the sketch below) Matti Mattila
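A hedged Spark SQL sketch (Spark 1.x style; the Parquet path and column names are invented) of the kind of code where these optimizations kick in:

import org.apache.spark.sql.SQLContext

val sqlCtx = new SQLContext(sc)
// Column pruning and filter push-down mean data we never need is never read.
val sales = sqlCtx.read.parquet("hdfs:///sales.parquet")
val bigSales = sales.filter(sales("amount") > 1000).select("zip", "amount")
// Caching a DataFrame uses the space-efficient columnar in-memory representation.
bigSales.cache()
bigSales.groupBy("zip").sum("amount").show()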

Slide 26

Slide 26 text

Preview: bringing codegen to Spark ML ● Based on Spark SQL’s code generation ○ First draft using quasiquotes ○ Switch to janino for Java compilation ● Initial draft for Gradient Boosted Trees ○ Based on DB’s work ○ First draft with QuasiQuotes ■ Moved to Java for speed ○ See SPARK-10387 for the details Jon

Slide 27

Slide 27 text

What the generated code looks like:

@Override
public double call(Vector input) throws Exception {
  if (input.apply(1) <= 1.0) {
    return 0.1;
  } else {
    if (input.apply(0) <= 0.5) {
      return 0.0;
    } else {
      return 2.0;
    }
  }
}

[Tree diagram: root node (feature 1, threshold 1.0) → leaf 0.1; otherwise node (feature 0, threshold 0.5) → leaves 0.0 and 2.0]

Glenn Simmons

Slide 28

Slide 28 text

Everyone* needs reduce, let’s make it faster! ● reduce & aggregate have “tree” versions ● we already had free map-side reduction ● but now we can get even better!** (see the sketch below) **And we might be able to make even cooler versions
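A hedged sketch of the “tree” variants, which combine partial results in extra executor-side rounds instead of sending every partition’s result straight to the driver (the data and depth here are illustrative):

val nums = sc.parallelize(1 to 1000000, 100)

// Plain reduce: each partition's result goes straight to the driver for the final combine.
val total = nums.reduce(_ + _)

// treeReduce / treeAggregate add intermediate combine stages, which helps when there
// are many partitions or the partial results are large.
val treeTotal = nums.treeReduce(_ + _, depth = 2)
val (sum, count) = nums.treeAggregate((0L, 0L))(
  (acc, v) => (acc._1 + v, acc._2 + 1L),
  (a, b) => (a._1 + b._1, a._2 + b._2),
  depth = 2
)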

Slide 29

Slide 29 text

Additional Resources ● Programming guide (along with JavaDoc, PyDoc, ScalaDoc, etc.) ○ http://spark.apache.org/docs/latest/ ● Books ● Videos ● Denny’s meetup on Wednesday :) ● Spark Office Hours ○ follow me on twitter for future ones - https://twitter.com/holdenkarau ○ fill out this survey to choose the next date - http://bit.ly/spOffice1 raider of gin

Slide 30

Slide 30 text

Learning Spark Fast Data Processing with Spark (Out of Date) Fast Data Processing with Spark (2nd edition) Advanced Analytics with Spark Coming soon: Spark in Action

Slide 31

Slide 31 text

And the next book….. Still being written - signup to be notified when it is available: ● http://www.highperformancespark.com ● https://twitter.com/highperfspark

Slide 32

Slide 32 text

Q&A OR A quick detour into spark testing? ● It's like a choose your own adventure novel, but with voting ● But more like the voting in High School since if we are running out of time we might just skip it

Slide 33

Slide 33 text

Spark Videos ● Apache Spark Youtube Channel ● My Spark videos on YouTube - ○ http://bit.ly/holdenSparkVideos ● Spark Summit 2014 training ● Paco’s Introduction to Apache Spark

Slide 34

Slide 34 text

Cat wave photo by Quinn Dombrowski k thnx bye! If you care about Spark testing and don’t hate surveys: http://bit.ly/holdenTestingSpark Will tweet results “eventually” @holdenkarau