
Petabyte Scale Datalake Table Management with Ray, Arrow, Parquet, and S3 (Patrick Ames, Amazon)

Managing a data lake that your business depends on to continuously deliver critical insights can be a daunting task. From applying table upserts/deletes during log compaction to managing structural changes through schema evolution or repartitioning, there's a lot that can go wrong and countless trade-offs to weigh. Moreover, as the volume of data in individual tables grows to petabytes and beyond, the jobs that fulfill these tasks grow increasingly expensive, fail to complete on time, and entrench teams in operational burden. Scalability limits are reached and yesterday's corner cases become everyday realities. In this talk, we will discuss Amazon's progress toward resolving these issues in its S3-based data lake by leveraging Ray, Arrow, and Parquet. We will also review past approaches, subsequent lessons learned, goals met/missed, and anticipated future work.


Anyscale

July 14, 2021

Transcript

  1. Petabyte Scale Datalake Table Management with Ray, Arrow, Parquet, and S3: Applying Table Inserts, Updates, and Deletes with The Flash Compactor
  2. Related Projects
     • 50PB+ Oracle Data Warehouse Migration
       • https://aws.amazon.com/solutions/case-studies/amazon-migration-analytics/
       • Built a distributed compute platform to copy tables from Oracle to S3
       • S3 Datalake Quickly Grew to 200PB+ w/ 3000+ Redshift and EMR Clusters Processing Data
     • Serverless Datalake Table Administration
       • Built on Top of Our In-House Oracle Table Migration Service
       • Schema Modifications (Add, Drop, and Update Columns)
       • Table Creation, Repartitioning, Movement, and Copying
       • Table Repair and Rollback
     • Serverless Datalake Table Update, Insert, and Delete Application (Compaction)
       • Built on Apache Spark on EMR
       • Result is Written Back to the Datalake
       • Support and Optimize Table Consumption
       • Required for GDPR Compliance
  3. Current Pain Points
     • Cost/Byte Compacted
       • Prohibitively High to Support Compaction on All Datalake Tables
     • Scalability
       • Insufficient Horizontal Scalability to Compact All Datalake Tables
     • SLA Guarantees
       • Wide and Unpredictable Variance in Compaction Job Run Latency
       • Unanticipated Failures
  4. Why Ray?
     • General Purpose
       • Unify Compaction, Table Administration, ML, and ETL Jobs Under One Compute Platform
     • Idiomatic Distributed Python
       • Ideal Interface for Our Growing Scientific Customer Base
     • Actors
       • Reduce Complexity and Cost of Maintaining Distributed State
     • Scheduler
       • Low-Latency
       • Horizontally Scalable
       • Bottom-Up Distributed Scheduling
     • Plasma Object Store
       • Zero-Copy Data Exchange
     • AWS Cluster Launcher and Autoscaler
       • Efficient, Automatic, Heterogeneous Cluster Scaling
       • Reuse Existing EC2 Instance Pools
       • Easily Run Jobs on Isolated Clusters
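As a rough illustration of how these pieces fit together (this is not the Flash Compactor itself), the sketch below uses Ray tasks to read Parquet deltas from S3 into Arrow tables, exchanges them through the shared-memory object store, and keeps a small piece of distributed state in an actor. The bucket paths, region, and table layout are hypothetical.

```python
# Illustrative sketch only -- not the Flash Compactor implementation.
# Assumes `pip install ray pyarrow` and AWS credentials for S3 access.
import ray
import pyarrow as pa
import pyarrow.parquet as pq
from pyarrow import fs

ray.init()  # or ray.init(address="auto") on an existing cluster


@ray.remote
class CompactionStats:
    """Actor holding a small piece of distributed state shared across tasks."""

    def __init__(self):
        self.rows_read = 0

    def add(self, n: int) -> None:
        self.rows_read += n

    def total(self) -> int:
        return self.rows_read


@ray.remote
def read_delta(path: str) -> pa.Table:
    """Read one Parquet delta file from S3 into an Arrow table.

    The returned table lands in Ray's shared-memory object store, so
    downstream tasks on the same node can consume it zero-copy.
    """
    s3 = fs.S3FileSystem(region="us-east-1")  # hypothetical region
    return pq.read_table(path, filesystem=s3)


@ray.remote
def merge_deltas(*tables: pa.Table) -> pa.Table:
    """Concatenate delta tables; a real compactor would also dedupe by primary key."""
    return pa.concat_tables(tables)


# Hypothetical input paths.
delta_paths = [
    "my-bucket/table/partition=2021-07-14/delta-0.parquet",
    "my-bucket/table/partition=2021-07-14/delta-1.parquet",
]

stats = CompactionStats.remote()
delta_refs = [read_delta.remote(p) for p in delta_paths]
merged_ref = merge_deltas.remote(*delta_refs)

merged = ray.get(merged_ref)
ray.get(stats.add.remote(merged.num_rows))
print("compacted rows:", ray.get(stats.total.remote()))
```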
  5.-96. (Image-only slides; no transcript text.)
  97. Results
     • Performance - Maximum Throughput Rate: 1300 TiB/hr (1.42 PB/hr, 23.8 TB/min) on a 250-node r5n.8xlarge cluster (8000 vCPUs) compacting 117 TiB of decompressed Parquet input data.
       • ~13X our max Spark throughput rate
     • Scalability - Largest Partition Stream Compacted in 1 Session: 1.05 PiB at 353 TiB/hour (18,115 rows/s-core) on a 110-node r5n.8xlarge cluster (3520 vCPUs, 27.5 TiB memory).
       • ~12X the largest partition stream compacted on an equivalent Spark EMR cluster
     • Efficiency - Best Cluster Utilization Rate: $0.24/TB or 61,477 rows/s-core (91 MiB/s-core) on an 11-node r5n.8xlarge (352 vCPUs) EC2 On-Demand cluster compacting 20 TiB of decompressed Parquet input data.
       • Average efficiency of $0.46/TB provides a 91% cost reduction vs. Spark on EMR
     • Resilience - Success Rate: 99.3% across 402 compaction job runs, with each compaction session processing ~100 TiB of input delta files.
       • 3% improvement over our trailing 1-year success rate of 96.36% with Spark
     • SLA Adherence: 99.3%; in other words, compaction job run adherence to its initial expected SLA is limited by the job run success rate (job runs typically miss their expected SLA only due to an unexpected crash or deadlock).
       • Our Spark compactor does not offer SLA guarantees.
  98. Input Size Scaling: Hash Bucket Counts

     Input Rows to Compact per Session (input deltas + prior compacted) | Approximate Input Bytes to Compact per Session | Recommended Hash Buckets | Recommended Memory per CPU
     <= 46.875 Billion | 16 - 80 TiB     | 313   | 8 GiB/CPU
     <= 93.75 Billion  | 32 - 160 TiB    | 625   | 8 GiB/CPU
     <= 187.5 Billion  | 64 - 320 TiB    | 1250  | 8 GiB/CPU
     <= 375 Billion    | 128 - 640 TiB   | 2500  | 8 GiB/CPU
     <= 750 Billion    | 0.25 - 1.25 PiB | 5000  | 8 GiB/CPU
     <= 1.5 Trillion   | 0.5 - 2.5 PiB   | 10000 | 8 GiB/CPU
     <= 3 Trillion     | 1 - 5 PiB       | 20000 | 8 GiB/CPU
     <= 6 Trillion     | 2 - 10 PiB      | 40000 | 8 GiB/CPU
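The recommendations above follow a simple pattern: roughly 150 million input rows per hash bucket, with bucket counts doubling as input size doubles. A small helper that reproduces the table's guidance might look like the following; the 150-million-rows-per-bucket constant is inferred from the table, not stated explicitly in the talk.

```python
import math

# Inferred from the table above: roughly 150 million input rows per hash bucket.
ROWS_PER_HASH_BUCKET = 150_000_000


def recommended_hash_buckets(input_rows: int) -> int:
    """Return a hash bucket count for the given number of input rows
    (input deltas + previously compacted rows), following the table's
    ~150M-rows-per-bucket scaling."""
    return max(1, math.ceil(input_rows / ROWS_PER_HASH_BUCKET))


# Example: 93.75 billion input rows -> 625 hash buckets, matching the table.
print(recommended_hash_buckets(93_750_000_000))  # 625
```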
  99. Input Size Scaling: Efficiency
     (Chart: throughput rate in records/s-core vs. hash bucket count: 61,477 at 352 buckets; 51,238 at 672; 44,970 at 5,000; 37,730 at 10,000; 11,834 at 50,000.)
  100. Horizontal Scaling: Performance
     (Chart: compaction throughput rate in GiB/minute vs. cluster vCPUs, from 352 to 8,000 vCPUs, with the hash bucket count labeled for each run; throughput grows from roughly 1,844 GiB/minute at 352 vCPUs to roughly 22,187 GiB/minute at 8,000 vCPUs.)
  101. Horizontal Scaling: Latency
     (Chart: job run latency in seconds vs. input size in TiB, labeled with cluster worker vCPUs: 20 TiB in 667 s on 352 vCPUs; 36 TiB in 721 s on 672; 59 TiB in 637 s on 1,376; 97 TiB in 668 s on 2,560; 117 TiB in 475 s on 4,000; 117 TiB in 324 s on 8,000; 1,075 TiB in 10,954 s on 3,520.)
  102. Input Size and Horizontal Scaling: Cost
     (Chart: cost per 10 PiB compacted at EC2 On-Demand pricing vs. cluster vCPUs, from 352 to 8,000 vCPUs, with the hash bucket count labeled for each run; costs range from $2,487.67 at 352 vCPUs to $4,868.52 at 8,000 vCPUs, peaking at $6,993.86.)
  103. Pitfalls
     • Too Many Object References Exchanged Between Steps
       • Workaround: Embed multiple individual object references inside an object reference to a larger data structure, and let each task iterate over the embedded object references.
     • Ray Reference Counting and Garbage Collection
       • Very High Latency
       • GCS or Driver Heap Memory Exhaustion
       • Workaround: Hide embedded object refs from the reference counter via ray.cloudpickle.dumps(obj_ref) and ray.cloudpickle.loads(pickled_obj_ref) (see the sketch below).
     • These Results Took Time
       • ~13 Major Compactor Implementation Revisions
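A minimal sketch of the reference-counting workaround described above (not the compactor's actual code): object refs are pickled with ray.cloudpickle so that only one top-level object crosses the task boundary, and the embedded refs stay invisible to Ray's reference counter. Because hidden refs are not counted, the caller must keep the original references alive until every task using the pickled copies has finished. The payloads here are hypothetical.

```python
# Minimal illustration of the workaround -- not the Flash Compactor's code.
import ray
from ray import cloudpickle

ray.init()

# Put many small objects into the object store (hypothetical payloads).
obj_refs = [ray.put(i) for i in range(1000)]

# Pickle the refs so the reference counter does not track each one, then
# ship a single list (one task argument) instead of 1000 raw object refs.
pickled_refs = [cloudpickle.dumps(ref) for ref in obj_refs]


@ray.remote
def process(pickled_batch):
    total = 0
    for pickled_ref in pickled_batch:
        # Unpickle each embedded ref and fetch its value on demand.
        ref = cloudpickle.loads(pickled_ref)
        total += ray.get(ref)
    return total


result = ray.get(process.remote(pickled_refs))
print(result)  # 499500

# Because pickled refs are invisible to the reference counter, keep
# `obj_refs` alive until all tasks using the pickled copies have finished.
```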
  104. Next Steps
     • Publish The Flash Compactor Implementation Back to Open Source
       • https://github.com/amzn/amazon-ray
     • Follow-up Blog Posts
       • Progress Toward Compacting All Datalake Tables with Ray
       • Serverless Ray Jobs for Datalake Table Producers, Consumers, and Admins

     Thanks! Feel free to ping me on the Ray Community Slack!