Pliops - IT Press Tour #49 March 2023

The IT Press Tour

April 01, 2023

Transcript

  1. IT Press Tour – March 29, 2023. Uri Beitler, CEO

     & Founder; Tony Afshary, Global VP Products & Marketing. Infrastructure Acceleration & Scaling for All: Multiply Infrastructure Scalability and Efficiency with Pliops XDP.
  2. Topics: Pliops Overview; Customer Challenges; Pliops XDP Explained; Pliops XDP

     Data Services; Pliops XDP Use Cases; Lab Tour.
  3. MISSION: To massively accelerate performance and dramatically lower infrastructure costs

     for RDBMS databases, NoSQL databases, analytics, AI/ML, distributed file systems, software-defined storage, and more. CUSTOMERS: SaaS (Top 5 SaaS provider: core database reliability and user scaling); IaaS (Top 5 hyperscaler: deployed QLC for density and improved CSAT for high-performance elastic block storage); HPC (Top 500 HPC: file systems and native key-value application acceleration); Social Media (top content community: media insights and analytics scaling with key-value storage acceleration). INVESTORS: [investor logos].
  4. Expanding Demands and Constraints

     Business demands on cloud & enterprise data centers: user growth, new services, data growth. Data center constraints: power & cooling, DC/rack space, budget, environmental responsibility, maintaining current infrastructure. Current solutions don't adequately address the need to prolong current data center infrastructure; data center architecture needs a rethink.
  5. Challenges with Broad SSD Adoption

     Amplified data: software that uses SSDs amplifies reads and writes up to 100x and stored data up to 6x, crushing CPU, storage, and network efficiency. Server architectures not balanced: SSDs' 1000x performance increase over HDDs has not been matched by server advances. System reliability compromised: traditional RAID is rarely used with NVMe SSDs due to the huge performance penalty, requiring costly workarounds.
  6. Application IO Amplification Challenge

     A 100-byte IO against a 4K TLC indirection unit causes 40x IO amplification; a 1KB IO against a 64K QLC indirection unit causes 64x IO amplification. • Impacts network, storage, SSD, and CPU: you must overprovision for this extra data transfer and processing. • Improving IO amplification consumes CPU, cache, or storage space.
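The amplification figures above are just the ratio of the flash indirection unit to the application IO size. A minimal sketch of that arithmetic (illustrative Python, not Pliops software):

```python
def io_amplification(io_bytes: int, indirection_unit_bytes: int) -> float:
    """The SSD must read or write a whole indirection unit even when the
    application touches far fewer bytes, so the amplification is simply
    the ratio of the two sizes."""
    return indirection_unit_bytes / io_bytes

# 100-byte IO against a 4K TLC indirection unit: ~40x
print(int(io_amplification(100, 4096)))        # 40
# 1KB IO against a 64K QLC indirection unit: 64x
print(int(io_amplification(1024, 64 * 1024)))  # 64
```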
  7. XDP Delivers Ultra-Consistent Performance Even with Full Drives

     [Chart: random write performance vs. used SSD capacity (0-100%), comparing datacenter SSDs with Pliops XDP]
  8. XDP Functional Flow: encryption (line rate, per volume, AES-256); high-performance drive-fail

     protection with fast rebuild; compression (line-rate ZSTD); key-value engine (merge, pack, sort, index, garbage collection); KV library API (RocksDB and NVMe-KV* compatible); NVMe block interface with advanced thin provisioning; virtual volumes/functions with QoS per volume; ARM core complex and RTL for storage & analytics acceleration. (*Future release: XDP 2.0 with SDXI, TP-4091.)
  9. Pliops: Data Services Acceleration Platform. XDP Data Services: XDP-RAIDplus,

     best-in-class data integrity and RAID+ solution for SAS, SATA, NVMe, and NVMe-oF; XDP-AccelDB, best-in-class universal database & SDS accelerator; XDP-AccelKV, world's first HW key-value accelerator for real-time analytics and ML training.
  10. XDP-RAIDplus: Best-in-Class Data Integrity and RAID+ Solution for NVMe and

     NVMe-oF. Survives multiple single-drive failures; full NVMe performance; fully PFAIL protected; ultra-fast SSD rebuild; enhanced SSD endurance.
  11. 12x Faster: Full RAID Performance, Fastest Performing Data Protection

     [Chart: read, write, and total throughput (MB/s), MegaRAID vs. XDP-RAIDplus] Sustained performance: 12x higher read, write, and total throughput. Performance during rebuild: 23x higher. Significant performance gains over HW RAID 5.
  12. 5x Quicker: Shorten SSD Rebuild Times, Fastest Rebuild Times

     [Chart: rebuild speed per terabyte in minutes. MegaRAID: 35 min/TB; Pliops XDP-RAIDplus: 7 min/TB (5x faster)] Minimal impact on QoS with Pliops during rebuild. Enables high-density storage due to faster rebuilds.
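At the per-terabyte rates quoted above, rebuild time scales roughly linearly with drive capacity. A small sketch under that assumption (the 15.36TB drive size is hypothetical, chosen to match drive sizes mentioned later in the deck):

```python
def rebuild_minutes(capacity_tb: float, min_per_tb: float) -> float:
    """Rebuild time, assuming it scales linearly with drive capacity."""
    return capacity_tb * min_per_tb

# Slide figures: MegaRAID ~35 min/TB vs. XDP-RAIDplus ~7 min/TB.
# For a hypothetical 15.36TB NVMe SSD:
megaraid = rebuild_minutes(15.36, 35)  # ~538 minutes (about 9 hours)
xdp = rebuild_minutes(15.36, 7)        # ~108 minutes (under 2 hours)
print(f"{megaraid:.0f} vs {xdp:.0f} minutes, {megaraid / xdp:.0f}x faster")
```

Faster rebuilds shrink the window in which a second failure would cause data loss, which is why the slide links rebuild speed to high-density storage.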
  13. 18x Better: Enhanced SSD Endurance, Longest SSD Drive Life

     QLC DWPD with Pliops exceeds that of TLC with RAID 5. [Chart: SSD DWPD for TLC and QLC, MegaRAID vs. Pliops setup; up to 18x improvement for QLC and 4.7x for TLC] XDP-RAIDplus extends SSD drive life beyond the HW refresh cycle.
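DWPD (full drive writes per day) converts to lifetime written data by multiplying by capacity and service days. A quick sketch of that arithmetic (the drive size, rating, and lifetime here are assumed examples, not figures from the slide):

```python
def lifetime_writes_tb(dwpd: float, capacity_tb: float, years: float) -> float:
    """DWPD (full Drive Writes Per Day) times capacity times service
    days gives the total terabytes the drive can absorb."""
    return dwpd * capacity_tb * 365 * years

# Assumed example: a 15.36TB QLC drive rated 0.5 DWPD over 5 years
print(f"{lifetime_writes_tb(0.5, 15.36, 5):,.0f} TB written")  # 14,016 TB
```

Raising effective DWPD (as the slide claims Pliops does) scales this lifetime budget proportionally.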
  14. 6x More Capacity Expansion: Most Optimized Capacity Usage & Savings

     6x increase in usable storage capacity (30TB physical). MegaRAID RAID 10: drive-failure overhead plus 25% drive-fill overhead leaves 11TB usable space. Pliops XDP-RAIDplus: drive-failure overhead plus only 5% drive-fill overhead, with 3x compression gains, yields 67TB usable space. XDP-RAIDplus enables a substantial reduction in cost per terabyte.
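The usable-capacity comparison above can be modeled as physical capacity minus drive-failure overhead, minus fill-level headroom, times the compression ratio. A sketch with back-computed overheads (the 15TB and 6.5TB failure-overhead figures are assumptions chosen to reproduce the slide's 11TB and 67TB numbers, not published specs):

```python
def usable_tb(physical_tb: float, failure_overhead_tb: float,
              fill_overhead: float, compression: float) -> float:
    """Usable capacity = (physical - space reserved for drive-failure
    protection) * (1 - fill-level headroom) * compression ratio."""
    return (physical_tb - failure_overhead_tb) * (1 - fill_overhead) * compression

# 30TB physical array. RAID 10 mirroring reserves half (15TB assumed),
# with 25% fill overhead and no compression:
print(f"{usable_tb(30, 15, 0.25, 1.0):.2f} TB")   # 11.25 TB
# XDP-RAIDplus: ~6.5TB assumed failure overhead, 5% fill overhead,
# 3x compression:
print(f"{usable_tb(30, 6.5, 0.05, 3.0):.1f} TB")  # 67.0 TB
```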
  15. Why Migrate to XDP-RAIDplus

     [Diagram: migration paths to XDP-RAIDplus. From RAID 10: addresses capacity issues; from HW RAID 5 (MegaRAID): addresses performance & capacity issues; from RAID 0: addresses data integrity issues.] XDP-RAIDplus adds capacity expansion, TCO savings, $/TB savings, and SSD endurance.
  16. XDP-AccelDB: Best-in-Class Universal Database & SDS Accelerator. The Pliops XDP-AccelDB data

     service accelerates SQL applications such as MySQL, MariaDB, and PostgreSQL, as well as NoSQL applications including MongoDB and software-defined storage solutions such as Ceph. 2.5x higher throughput; 8x latency reduction; 6x better capacity expansion; 30% better CPU utilization.
  17. Oracle MySQL Accelerated by XDP-AccelDB 3.4X More Transactions Per Second

    Pliops XDP-AccelDB delivers exceptional MySQL performance and efficiency gains at significant cost savings. Massively Accelerate your MySQL Database A primary challenge for effective database management is optimizing database performance. When deployed with Oracle MySQL Enterprise Edition, Pliops XDP delivers 3.4x TPS performance boost vs. MySQL Community Edition – without any required changes. Do More with Your MySQL License With Pliops XDP-AccelDB, MySQL databases can experience the best of all worlds — accelerated performance, data protection, scalability, and ease of deployment – while also lowering your TCO. Oracle Solution Brief
  18. MongoDB Accelerated by XDP-AccelDB: 2.3x More Operations Per Second. Reduce

     infrastructure costs by 50% without impacting performance or quality of service (QoS). Performance scaling with Pliops: Pliops XDP provides significant performance and latency benefits for economically scaling MongoDB applications from a few terabytes to many petabytes. Pliops XDP-RAIDplus, together with built-in data compression, enables enterprises to efficiently manage data growth without impacting performance or reliability. The solution also delivers significant cost savings by lowering the cost per terabyte and freeing up CPU resources for user scalability. Mongo Solution Brief.
  19. XDP-AccelDB: Capacity Benefits with Pliops

     MongoDB RAID 10 setup: 1.92TB x 4 SSDs, HW RAID 10, 3.84TB physical capacity, Snappy compression (2.6x), 9TB total data storage capacity. MongoDB Pliops setup: 1.92TB x 4 SSDs, RAID 5+, 5.76TB physical capacity, XDP compression (2.85x), 12TB total data storage capacity.
  20. Performance During SSD Failure & Rebuild Operations

     [Chart: MongoDB YCSB-A operations/sec over time (0-13,500 s) through a faulty drive, rebuild, and return to normal; smoothed and raw QPS shown] *Rebuild tests carried out on a different system.
  21. MongoDB TCO Benefits

     Software RAID vs. Pliops XDP, full rack (based on 3 years). Current setup: MongoDB, 3.84TB x 4 SSDs per server, 9TB usable with RAID 10. Pliops setup: MongoDB, 3.84TB x 4 SSDs per server, 16TB usable with XDP DFP. Customer benefits: 1.5x higher average performance; SSD endurance savings; 30% reduction in TCO/TB of user data; 46% increase in multitenancy database instances.
  22. XDP-AccelKV: Best-in-Class Key-Value Accelerator, Application Key-Value APIs, Hosted Key-Value

     Reference Design. 20x higher throughput; 100x latency reduction; 6x better capacity expansion; 10x better CPU utilization. The Pliops XDP-AccelKV data service is the best-in-class key-value accelerator solution for storage engines such as RocksDB and WiredTiger, and for other databases. As a native hardware key-value accelerator, it provides an order of magnitude higher performance.
  23. Key-Value Storage Revolutionized by XDP-Rocks: 100x Latency Reduction. 100x

     tail-latency reduction, 20x throughput improvement, and 10x CPU reduction with XDP-Rocks. XDP-Rocks overcomes the software inefficiencies of RocksDB: RocksDB, one of the top key-value datastores, suffers from high read, write, and space amplification, leading to lower throughput, higher tail latency, storage overprovisioning, and higher SSD wear. It is also bottlenecked by CPU usage due to the sorting, merging, and compression needs of the storage engine. Pliops XDP-Rocks, a binary-compatible RocksDB library, offloads these software operations to hardware, bringing read, write, and space amplification to their theoretical minimum. Throughput, tail latency, SSD endurance, and scalability are significantly enhanced. Related: Redis, KVRocks, Rockset, Ceph.
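The write amplification the slide describes is inherent to leveled LSM compaction, the strategy RocksDB uses by default. A commonly cited back-of-envelope estimate of it (a generic model, not a Pliops measurement):

```python
def lsm_write_amplification(levels: int, fanout: int) -> int:
    """Rough leveled-compaction estimate: one write for the WAL, one for
    the memtable flush to L0, then each byte is rewritten about `fanout`
    times as it is merged down through each subsequent level."""
    return 2 + levels * fanout

# e.g. 4 leveled levels with RocksDB's default fanout of 10:
print(lsm_write_amplification(4, 10))  # 42
```

Offloading the merge/sort/compress work to hardware, as XDP-Rocks claims to, attacks both the CPU cost and the SSD wear implied by this multiplier.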
  24. Multi-RocksDB Write Workload Enhancements

     [Chart: throughput (kops) over time (min), 100% Put (overwrite). XDP-Rocks: average 149.6 kops, stdev 3.6 kops (2.3%); RocksDB: average 22.9 kops, stdev 7.3 kops (31.8%); a 6.5x gain] Setting: 1 DB (~7TB data), 1 thread, 16B key, 1KB value.
  25. Pliops Impact on Customer Infrastructure Efficiency

     Setting: 32 DBs (~5TB data), 16B key, 1KB value. [Charts: 99.99% latency (ms) vs. throughput (kops), XDP-Rocks vs. RocksDB. Read performance gains at a 10 ms SLA target: 7.3x throughput; mixed-workload performance gains at a 50 ms SLA target]
  26. DRAM-like Performance with SSD-like Economics with XDP-AccelKV

     86% lower TCO: 7x higher throughput and 4x lower latency, with a 6x increase in endurance. [Diagram comparing Redis on DRAM (300GB), Redis on Flash with RocksDB (300GB-3TB), and Redis on XDP with XDP-Rocks (300GB-3TB) across performance, latency, large-dataset support, and TCO savings] Redis Solution Brief.
  27. Extreme KV-Rocks Performance Gains with XDP-Rocks: 30x Reduction in Tail

     Latency. Experience increased scalability with substantial quality-of-service (QoS) improvements in KV-Rocks with XDP-AccelKV. Overcoming KV-Rocks scalability limits with XDP-Rocks: KV-Rocks is a popular open-source distributed key-value NoSQL database that uses RocksDB as its storage engine and is compatible with the Redis protocol. A major challenge in scaling KV-Rocks is the CPU bottleneck; with XDP-Rocks, scalability is almost linear up to 32 threads, with a 30x reduction in tail latency and a 10x improvement in throughput. KVRocks Solution Brief.
  28. KVRocks: Mixed Workloads

     [Chart: P999 latency (ms) vs. throughput, mixed 50R:50W 4KB tail latency. KVRocks with RocksDB: 227 ms; KVRocks with XDP-Rocks: 71 ms]
  29. Overcoming KV-Rocks Scalability with XDP-Rocks

     [Chart: mixed-workload throughput at 1, 2, 4, 8, 32, and 64 threads. KVRocks with XDP-Rocks scales with thread count; KVRocks with RocksDB shows no scalability]
  30. Deployment Models

     [Diagram: (1) server DAS in a server / storage server; (2) JBOF as an NVMe-oF target; (3) single NVMe-oF JBOF; (4) multiple NVMe-oF JBOFs]
  31. Top 5 SaaS Provider: XDP-RAIDplus Benefits

     Software RAID vs. Pliops XDP, full rack (based on 3 years). Current setup: 15.84TB x 2 SSDs per server, 21TB usable with RAID 0, 15 user instances, 600 server failures per year across 10K servers. Pliops setup: 15.84TB x 4 SSDs per server, 66TB usable with XDP-RAIDplus, 20 user instances, zero server failures per year across all servers. Customer benefits: 3.2x higher capacity; more than 600 SSD-related failover events eliminated; 58% reduction in TCO/TB of user data; 33% increase in multitenancy database instances; improved customer experience/satisfaction.
  32. Top 5 CSP Flash Storage Service: XDP Benefits

     Accelerated high-capacity EBS (based on 3 years). Current setup: software-defined storage file system, 7.6TB x 24 TLC SSDs, 10+2 servers with 2 erasure coding, 1.5PB usable, more than 1,500 rebuilds per year. Pliops setup: hardware-accelerated software-defined storage, 15.36TB x 22 QLC SSDs, 10+2 servers with 2 erasure coding, 4.3PB usable, XDP drive-fail protection, 0 SSD-related failures per year. Customer benefits: 3x increase in effective storage capacity; 1.5x improved endurance using QLC vs. SW TLC; 65% reduced TCO/TB; drive failures isolated from cluster performance impact (FTT n+1); considerably reduced carbon footprint.
  33. Redis on Flash: TCO Advantage

     Redis on Flash vs. Redis with Pliops XDP (based on 3 years). Current setup: 22 instances of Redis on Flash, 3.84TB x 4 SSDs, 145 KIOPS at 79 ms 99.99% latency. Pliops setup: 22 instances of Redis on Flash with the Pliops RocksDB API, 3.84TB x 4 SSDs, 959 KIOPS at 19.9 ms 99.99% latency. Customer benefits: 7x higher performance; 86% TCO/IOPS reduction; 4x lower four-nines latency; 5.8x improved endurance; improved customer experience/satisfaction.
  34. XDP-RAIDplus Case Study: Paperspace challenges resolved by Pliops XDP-RAIDplus:

     1) Supply chain. Paperspace is growing rapidly, with customers adding up to 40TB of data per day. To stay ahead of demand, Paperspace usually adds multiple storage nodes at a time; due to supply chain constraints, they were unable to get the Broadcom MegaRAID cards needed to protect their customer data in time. 2) Performance. Paperspace was limited by SATA SSD throughput due to bandwidth constraints. They wanted to move to NVMe drives for better performance but were concerned about storage density and reliability. 3) Storage availability. Paperspace was facing weekly drive failures resulting in extended recovery time, with severe performance degradation during the drive rebuild process.
  35. XDP-RAIDplus Case Study: RIKEN SPring-8 challenges addressed by XDP-RAIDplus:

     RIKEN SPring-8 had been unable to utilize higher detector resolution without dropping frames, due to storage constraints on data bandwidth and capacity. They were looking for a high-capacity storage solution with consistently high write performance to store all captured image data without dropping any frames. Other requirements included low power utilization, data protection with high Petabytes Written (PBW) class endurance, and compatibility with their current Linux version and libraries.