Slide 1

Slide 1 text

Petabyte Scale Datalake Table Management with Ray, Arrow, Parquet, and S3
Applying Table Inserts, Updates, and Deletes with The Flash Compactor

Slide 2

Slide 2 text

Related Projects
• 50PB+ Oracle Data Warehouse Migration
  • https://aws.amazon.com/solutions/case-studies/amazon-migration-analytics/
  • Built a distributed compute platform to copy tables from Oracle to S3
  • S3 Datalake Quickly Grew to 200PB+ w/ 3,000+ Redshift and EMR Clusters Processing Data
• Serverless Datalake Table Administration
  • Built on Top of Our In-House Oracle Table Migration Service
  • Schema Modifications (Add, Drop, and Update Columns)
  • Table Creation, Repartitioning, Movement, and Copying
  • Table Repair and Rollback
• Serverless Datalake Table Update, Insert, and Delete Application (Compaction)
  • Built on Apache Spark on EMR
  • Result Is Written Back to the Datalake
  • Support and Optimize Table Consumption
  • Required for GDPR Compliance

Slide 3

Slide 3 text

Current Pain Points
• Cost/Byte Compacted
  • Prohibitively High to Support Compaction on All Datalake Tables
• Scalability
  • Insufficient Horizontal Scalability to Compact All Datalake Tables
• SLA Guarantees
  • Wide and Unpredictable Variance in Compaction Job Run Latency
  • Unanticipated Failures

Slide 4

Slide 4 text

Why Ray?
• General Purpose
  • Unify Compaction, Table Administration, ML, and ETL Jobs Under One Compute Platform
• Idiomatic Distributed Python
  • Ideal Interface for Our Growing Scientific Customer Base
• Actors (see the sketch after this list)
  • Reduce Complexity and Cost of Maintaining Distributed State
• Scheduler
  • Low Latency
  • Horizontally Scalable
  • Bottom-Up Distributed Scheduling
• Plasma Object Store
  • Zero-Copy Data Exchange
• AWS Cluster Launcher and Autoscaler
  • Efficient, Automatic, Heterogeneous Cluster Scaling
  • Reuse Existing EC2 Instance Pools
  • Easily Run Jobs on Isolated Clusters
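As an illustration of the "Idiomatic Distributed Python" and "Actors" points, here is a minimal, hypothetical Ray sketch (the class, function, and variable names are invented for this summary and are not the compactor's actual code); it fans work out across remote tasks while a single actor tracks shared progress state:

```python
import ray

ray.init()  # connect to (or locally start) a Ray cluster

@ray.remote
class SessionProgress:
    """Toy actor holding shared, mutable job state in one place."""
    def __init__(self):
        self.completed = 0

    def record_completion(self):
        self.completed += 1
        return self.completed

@ray.remote
def compact_partition(partition_id, progress):
    # Real work (read deltas, dedupe, write Parquet) would go here.
    return ray.get(progress.record_completion.remote())

progress = SessionProgress.remote()
results = ray.get([compact_partition.remote(i, progress) for i in range(8)])
print(f"completed {max(results)} of 8 partitions")
```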

Slides 5–96: no extractable text.

Slide 97

Slide 97 text

Results
• Performance - Maximum Throughput Rate: 1,300 TiB/hr (1.42 PB/hr, 23.8 TB/min) on a 250-node r5n.8xlarge cluster (8,000 vCPUs) compacting 117 TiB of decompressed Parquet input data.
  • ~13X our maximum Spark throughput rate
• Scalability - Largest Partition Stream Compacted in One Session: 1.05 PiB at 353 TiB/hr (18,115 rows/s-core) on a 110-node r5n.8xlarge cluster (3,520 vCPUs, 27.5 TiB memory).
  • ~12X the largest partition stream compacted on an equivalent Spark EMR cluster
• Efficiency - Best Cluster Utilization Rate: $0.24/TB, or 61,477 rows/s-core (91 MiB/s-core), on an 11-node r5n.8xlarge EC2 On-Demand cluster (352 vCPUs) compacting 20 TiB of decompressed Parquet input data.
  • Average efficiency of $0.46/TB provides a 91% cost reduction vs. Spark on EMR
• Resilience - Success Rate: 99.3% across 402 compaction job runs, with each compaction session processing ~100 TiB of input delta files.
  • ~3 percentage points above our trailing 1-year success rate of 96.36% with Spark
• SLA Adherence: 99.3% - in other words, compaction job run adherence to its initial expected SLA is limited only by the job run success rate (job runs typically miss their expected SLA only due to an unexpected crash or deadlock).
  • Our Spark compactor does not offer SLA guarantees.

Slide 98

Slide 98 text

Input Size Scaling: Hash Bucket Counts

Input Rows to Compact per Session    Approximate Input Bytes     Recommended     Recommended
(input deltas + prior compacted)     to Compact per Session      Hash Buckets    Memory per CPU
<= 46.875 Billion                    16 - 80 TiB                 313             8 GiB/CPU
<= 93.75 Billion                     32 - 160 TiB                625             8 GiB/CPU
<= 187.5 Billion                     64 - 320 TiB                1250            8 GiB/CPU
<= 375 Billion                       128 - 640 TiB               2500            8 GiB/CPU
<= 750 Billion                       0.25 - 1.25 PiB             5000            8 GiB/CPU
<= 1.5 Trillion                      0.5 - 2.5 PiB               10000           8 GiB/CPU
<= 3 Trillion                        1 - 5 PiB                   20000           8 GiB/CPU
<= 6 Trillion                        2 - 10 PiB                  40000           8 GiB/CPU
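The table above encodes a rough doubling rule: the recommended hash bucket count roughly doubles each time the input row count doubles, while memory per CPU stays fixed at ~8 GiB. A minimal Python sketch of that lookup, purely illustrative (the constant and function names are hypothetical and not part of the compactor's API):

```python
# Hypothetical lookup mirroring the hash bucket guidance above:
# (max input rows per session, recommended hash buckets).
# Memory per CPU stays at ~8 GiB/CPU across all rows of the table.
HASH_BUCKET_GUIDANCE = [
    (46_875_000_000, 313),
    (93_750_000_000, 625),
    (187_500_000_000, 1250),
    (375_000_000_000, 2500),
    (750_000_000_000, 5000),
    (1_500_000_000_000, 10000),
    (3_000_000_000_000, 20000),
    (6_000_000_000_000, 40000),
]

def recommended_hash_buckets(input_rows: int) -> int:
    """Return the recommended hash bucket count for one compaction session."""
    for max_rows, buckets in HASH_BUCKET_GUIDANCE:
        if input_rows <= max_rows:
            return buckets
    raise ValueError("input exceeds the published guidance (> 6 trillion rows)")

# Example: ~200 billion input rows (deltas + prior compacted) -> 2500 buckets.
print(recommended_hash_buckets(200_000_000_000))
```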

Slide 99

Slide 99 text

Input Size Scaling: Efficiency
[Chart: throughput rate (records/s-core) vs. hash bucket count (352 to 50,000); per-core throughput values shown range from 11,834 to 61,477 records/s-core.]

Slide 100

Slide 100 text

Horizontal Scaling: Performance
[Chart: throughput rate (GiB/minute) vs. cluster vCPUs (352 to 8,000), with hash bucket counts annotated per run; throughput values shown range from 1,434 to 22,187 GiB/minute.]

Slide 101

Slide 101 text

Horizontal Scaling: Latency
[Chart: job run latency (seconds) vs. input size (20 to 1,075 TiB), annotated with cluster worker vCPUs (352 to 8,000); latencies shown range from 324 to 10,954 seconds.]

Slide 102

Slide 102 text

Input Size and Horizontal Scaling: Cost
[Chart: cost per 10 PiB compacted (EC2 On-Demand pricing) vs. cluster vCPUs (352 to 8,000), annotated with hash bucket counts; costs shown range from $2,487.67 to $6,993.86 per 10 PiB.]

Slide 103

Slide 103 text

Pitfalls
• Too Many Object References Exchanged Between Steps
  • Workaround: Embed multiple individual object references inside an object reference to a larger data structure, and let each task iterate over the embedded object references.
• Ray Reference Counting and Garbage Collection
  • Very High Latency
  • GCS or Driver Heap Memory Exhaustion
  • Workaround: Hide embedded object refs from the reference counter via ray.cloudpickle.dumps(obj_ref) and ray.cloudpickle.loads(pickled_obj_ref) (see the sketch after this list).
• These Results Took Time
  • ~13 Major Compactor Implementation Revisions
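A minimal sketch of the two workarounds above on a toy workload (the names and data are invented; only ray.put, ray.get, and the ray.cloudpickle calls reflect the real Ray API). Many individual object references are embedded inside one containing object, and each embedded reference is hidden from Ray's reference counter by pickling it with ray.cloudpickle before it is embedded:

```python
import ray

ray.init()

# Simulate many small intermediate results, each behind its own ObjectRef.
chunk_refs = [ray.put(f"chunk-{i}") for i in range(1000)]

# Hide each ref from Ray's reference counter by serializing it out-of-band...
hidden_refs = [ray.cloudpickle.dumps(ref) for ref in chunk_refs]
# ...then exchange a single ObjectRef to the whole batch instead of 1000 refs.
batch_ref = ray.put(hidden_refs)

@ray.remote
def process_batch(pickled_refs):
    """Iterate over the embedded refs, restoring and resolving each one."""
    total_chars = 0
    for pickled in pickled_refs:
        ref = ray.cloudpickle.loads(pickled)  # restore the hidden ObjectRef
        total_chars += len(ray.get(ref))      # fetch the underlying object
    return total_chars

# Only one top-level ObjectRef crosses the task boundary here.
print(ray.get(process_batch.remote(batch_ref)))
```

Because the hidden references are invisible to the reference counter, the driver (or another owner) must keep the original references alive for as long as downstream tasks need the underlying objects; that lifetime management is the price of the workaround.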

Slide 104

Slide 104 text

Next Steps
• Publish The Flash Compactor Implementation Back to Open Source
  • https://github.com/amzn/amazon-ray
• Follow-up Blog Posts
  • Progress Toward Compacting All Datalake Tables with Ray
  • Serverless Ray Jobs for Datalake Table Producers, Consumers, and Admins

Thanks! Feel free to ping me on the Ray Community Slack!