
Automating Big Data benchmarking and performance analysis with Aloja by Nicolas Poggi at Big Data Spain 2015


The Automating Big Data Benchmarking and Performance Analysis workshop gives hands-on experience with the different aspects of getting the most value out of Big Data infrastructures using ALOJA's open source tools. ALOJA (http://aloja.bsc.es) is a research initiative from the Barcelona Supercomputing Center (BSC) and Microsoft Research to explore new cost-effective hardware architectures and applications for Big Data. ALOJA's main goal is to better understand the performance, and therefore the costs, of running different Big Data applications, and to automate Knowledge Discovery (KD) from system behavior, producing insights that can optimize and guide the development of efficient Big Data applications and data centers.

Session presented at Big Data Spain 2015 Conference
15th Oct 2015
Kinépolis Madrid
http://www.bigdataspain.org
Event promoted by: http://www.paradigmatecnologico.com
Abstract: http://www.bigdataspain.org/program/thu/slot-20.html


Published by Big Data Spain, October 22, 2015

Transcript

  1. None
  2. www.bsc.es Automating Big Data Benchmarking and Performance Analysis with ALOJA

    October 2015 Nicolas Poggi, Senior Researcher
  3. Barcelona Supercomputing Center (BSC) Spanish national supercomputing center – 22-year

    history in Computer Architecture, networking and distributed systems research – Based at the Technical University of Catalonia (UPC) – Led by Mateo Valero – ACM fellow, Eckert-Mauchly award 2007, Goode award 2009 – Active research staff with 1000+ publications Large ongoing life science computational projects – Computational Genomics, Molecular modeling & Bioinformatics, Protein Interactions & Docking In place computational capabilities – Mare Nostrum Super Computer Prominent body of research activity around Hadoop since 2008 – Previous to ALOJA • SLA-driven scheduling (Adaptive Scheduler), in memory caching, etc. BSC-MSRS Centre – Long-term relationship between BSC, Microsoft Research product teams – ALOJA is the latest phase of the engagement to explore cost-efficient upcoming Big Data architectures – Open model: • No patents, public IP, publications and open source main focus
  4. The MareNostrum 3 Supercomputer 70% distributed through PRACE 24% distributed

    through RES 6% for BSC-CNS use Over 10^15 Floating Point Operations per second Nearly 50,000 cores 100.8 TB of main memory 2 PB of disk storage
  5. Agenda 1. Intro on Hadoop performance 1. Current scenario and

    problems 2. ALOJA project 1. Background 2. Open source tools 3. Benchmarking 1. Benchmarking workflow 2. DEMO 4. Results 1. HW and SW speedups 2. Cost/Performance 3. Online results DEMO 5. Predictive Analytics and learning 6. Future lines and conclusions
  6. Intro: Hadoop performance and ecosystem

  7. Hadoop design Hadoop was designed to process complex data –

    structured and non-structured – with [close to] linear scalability – and application reliability Simplifying the programming model – From MPI, OpenMP, CUDA, … Operating as a black box for data analysts, but… – Complex runtime for admins – YARN abstracts even more Image source: Hadoop, The Definitive Guide
  8. Hadoop is highly scalable but… Not a high-performance solution! Requires – Design,

    • Cluster topology – Setup, • OS, Hadoop config – Fine tuning required • Iterative approach • Time-consuming and extensive benchmarking
  9. Setting up your Big Data system Hadoop – > 100+

    tunable parameters – obscure and interrelated • mapred.map/reduce.tasks.speculative.execution • io.sort.mb 100 (300) • io.sort.record.percent 5% (15%) • io.sort.spill.percent 80% (95 – 100%) – Similar for Hive, Spark, HBase Dominated by rules-of-thumb – Number of containers in parallel: • 0.5 - 2 per CPU core Large stack for tuning Image source: Intel® Distribution for Apache Hadoop
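The containers-per-core rule of thumb quoted above can be turned into a quick sanity check. A minimal sketch in shell, not ALOJA code (the helper name is invented for illustration):

```shell
#!/bin/bash
# Rule-of-thumb range for concurrent YARN containers per node:
# 0.5 to 2 containers per CPU core (illustrative helper, not ALOJA code).

suggest_container_range() {
  local cores="$1"
  local min=$(( cores / 2 ))   # 0.5 containers per core
  local max=$(( cores * 2 ))   # 2 containers per core
  [ "$min" -lt 1 ] && min=1    # always allow at least one container
  echo "${min}-${max}"
}

echo "2-core VM:    $(suggest_container_range 2) containers in parallel"
echo "16-core node: $(suggest_container_range 16) containers in parallel"
```

Even this trivial range shows why iterative benchmarking is needed: a 16-core node leaves a 4x spread (8 to 32 containers) to be narrowed down experimentally.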
  10. Product claims on performance and TCO Eco-system is not transparent

    – Needs auditing!
  11. How do I set up my system? Too many options!!! Default

    values in the Apache source are not ideal Large and spread-out ecosystem – Different distributions – Product claims Each job is different – No one-fits-all solution Cloud vs. On-premise – IaaS • Tens of different VMs to choose from – PaaS • HDInsight, CloudBigData, EMR New economical HW – SSDs, InfiniBand Networking
  12. The ALOJA project, research lines, and challenges

  13. BSC’s project ALOJA: towards cost-effective Big Data Open research project

    for improving the cost-effectiveness of Big Data deployments Benchmarking and Analysis tools Online repository and largest Big Data repo – 50,000+ runs of HiBench, TPC-H, and [some] BigBench – Over 100 HW configurations tested • Of different Node/VM, disks, and networks • Cloud: Multi-cloud provider including both IaaS and PaaS • On-premise: High-end, HPC, commodity, low-power Community – Collaborations with industry and Academia – Presented in different conferences and workshops – Visibility: 47 different countries http://aloja.bsc.es Big Data Benchmarking Online Repository Web Analytics
  14. ALOJA research lines, broad coverage Techniques for obtaining Cost/Performance Insights

    Profiling • HPC, Low-level • High Accuracy • Manual Analysis Benchmarking • Iterate configs • HW and SW • Real executions • Log parsing and data sanitization Analysis tools • Summarize large number of results • By criteria • Filter noise • Fast processing Predictive Analytics • Automated modeling • Estimations • Virtual executions • Automated KD Big Data Apps Frameworks Systems / Clusters Cloud Providers/Datacenters Evaluation of:
  15. Test different clusters and architectures – On-premise • Commodity, high-end,

    appliance, low-power – Cloud IaaS • 32 different VMs in Azure, similar in other providers – Cloud PaaS • HDInsight, EMR, CloudBigData Different access levels – Full admin, user-only, request-to-install, everything ready, queuing systems (SGE) Different versions – Hadoop, JVM, Spark, Hive, etc… – Other benchmarks Problems – All systems designed for PROD • Not for comparison – No Azure support – Many different packages – No one-fits-all solution Dev environments and testing – Big Data usually requires a cluster to develop and test Solution – Custom implementation – Based on simple components – Wrapping commands Challenges (circa end 2013)
  16. Benchmarking with ALOJA’s open source tools

  17. ALOJA Platform main components 2 Online Repository •Explore results •Execution

    details •Cluster details •Costs •Data sharing 3 Web Analytics •Data views and evaluations •Aggregates •Abstracted Metrics •Job characterization •Machine Learning •Predictions and clustering 1 Big Data Benchmarking •Deploy & Provision •Conf Management •Parameter selection & Queuing •Perf counters •Low-level instrumentation •App logs NGINX, PHP, MySQL BASH, Unix tools, CLIs R, SQL, JS
  18. Extending and collaborating in ALOJA 1. Install prerequisites – git,

    vagrant, VirtualBox 2. git clone https://github.com/Aloja/aloja.git 3. cd aloja 4. vagrant up 5. Open your browser at: http://localhost:8080 6. Optional: start the benchmarking cluster with vagrant up /.*/ Setting up a DEV environment: Installs a Web Server with sample data Sets up a local cluster to test benchmarking
  19. Workflow in ALOJA Cluster(s) definition • VM sizes • #

    nodes • OS, disks • Capabilities Execution plan • Start cluster • Setup • Exec Benchmarks • Cleanup Import data • Convert perf metric • Parse logs • Import into DB Evaluate data • Data views in Vagrant VM • Or http://aloja.bsc.es PA and KD •Predictive Analytics •Knowledge Discovery Historic Repo
  20. Commands and providers Provisioning commands Providers Connect – Node and

    Cluster – Builds SSH cmd line • SSH proxies Deploy – Creates a cluster – Sets SSH credentials – If created, updates config as needed – If stopped, starts nodes Start, Stop Delete Queue jobs to clusters On-premise – Custom settings for clusters • Multiple disk types • Different architectures Cloud IaaS – Azure, OpenStack, Rackspace, AWS (testing) Cloud PaaS – HDInsight, CloudBigData, EMR soon Code at: https://github.com/Aloja/aloja/tree/master/aloja-deploy
  21. Cluster and nodes definitions: multi-provider abstraction Steps to define a

    cluster: Import defaults (if any) – Sets OS, version Select provider – Azure, RackSpace, AWS, On-premise, vagrant… Name the cluster and size Optional – Select VM type – Attached disks – Define metadata – And costs Nodes can also be defined – For Web, share folders, etc. You can logically split clusters Azure 8-datanode sample #load AZURE defaults source "$CONF_DIR/cluster_defaults.conf" clusterName="azure-large-8" numberOfNodes="8" vmSize="Large" attachedVolumes="3" diskSize="1024" #in GB #details vmCores="4" vmRAM="7" #in GB #costs clusterCostHour="1.584" #in USD clusterType="IaaS" Source sample: https://github.com/Aloja/aloja/blob/master/shell/conf/cluster_al-08.conf
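Because the cluster definition above records `clusterCostHour`, a per-run cost can be derived directly from it. A minimal sketch; the variable names follow the azure-large-8 sample, but the cost arithmetic and the `exec_seconds` value are illustrative assumptions, not ALOJA's actual accounting:

```shell
#!/bin/bash
# Estimate a single benchmark run's cost from an ALOJA-style cluster
# definition. Variable names follow the azure-large-8 sample above;
# the arithmetic itself is an illustrative sketch.

clusterName="azure-large-8"
numberOfNodes="8"
clusterCostHour="1.584"   # USD per hour for the whole cluster

exec_seconds=3600         # hypothetical benchmark duration

# bash lacks floating-point math, so delegate it to awk
run_cost=$(awk -v c="$clusterCostHour" -v s="$exec_seconds" \
  'BEGIN { printf "%.2f", c * s / 3600 }')

echo "$clusterName ($numberOfNodes nodes): \$$run_cost for ${exec_seconds}s"
```

Keeping costs next to HW metadata in the same config file is what lets the repository later rank clusters by cost-effectiveness rather than raw speed.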
  22. Running benchmarks in ALOJA Benchmarking with defaults: /repo_location/aloja-bench/run_benchs.sh To queue

    jobs: /repo_location/shell/exeq.sh Code at: https://github.com/Aloja/aloja/blob/master/aloja-bench/run_benchs.sh
  23. Testing different configurations Approaches 1. Config folders 2. Override variables

    1. In benchmark_defaults.conf 2. In cluster config 3. Cmd line 1. Via parameters run_benchs.sh -r 2 -m 10 1. Via shell globals HADOOP_VERSION=hadoop-2.7.1 BENCH_DATA_SIZE=1TB Things to look for HW / OS – Versions – Disk config and mounts SW – Replication – Block sizes – Compression – IO buffers Build your exec plan in a script and queue! Or follow ML recommendations!
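The "shell globals" override listed above typically works because a defaults file only assigns a variable when the caller has not already set it. A minimal sketch of that pattern; this is illustrative, not the actual benchmark_defaults.conf (the 100GB default is invented):

```shell
#!/bin/bash
# Pattern behind overriding via shell globals: defaults apply only when
# the caller has not already exported the variable.
# (Illustrative sketch, not ALOJA's actual benchmark_defaults.conf.)

# ":=" assigns only if the variable is unset or empty
: "${HADOOP_VERSION:=hadoop-2.7.1}"
: "${BENCH_DATA_SIZE:=100GB}"

echo "Benchmarking $HADOOP_VERSION with $BENCH_DATA_SIZE of input data"
```

Invoking e.g. `BENCH_DATA_SIZE=1TB ./run_benchs.sh` would then pick up the override while keeping the remaining defaults intact.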
  24. ALOJA-WEB Entry point to explore the results collected from the

    executions – Provides insights on the obtained results through continuously evolving data views. Online DEMO at: http://aloja.bsc.es
  25. Online benchmarking results

  26. 2.) ALOJA-WEB Online Repository Entry point to explore the

    results collected from the executions – Index of executions • Quick glance of executions • Searchable, Sortable – Execution details • Performance charts and histograms • Hadoop counters • Jobs and task details Data management of benchmark executions – Data importing from different clusters – Execution validation – Data management and backup Cluster definitions – Cluster capabilities (resources) – Cluster costs Sharing results – Download executions – Add external executions Documentation and References – Papers, links, and feature documentation Available at: http://hadoop.bsc.es
  27. Comparing 3 runs on same cluster, different configs: Mappers and

    reducers, 48-node cluster URL: http://aloja.bsc.es/perfcharts?execs%5B%5D=90086&execs%5B%5D=90088&execs%5B%5D=90104 400s, 2 containers, Local disk 800s, 3 containers, Local disk 600s, 2 containers, Remote disk
  28. Comparing 3 runs on same cluster, different configs: CPU utilization,

    48-node cluster URL: http://aloja.bsc.es/perfcharts?execs%5B%5D=90086&execs%5B%5D=90088&execs%5B%5D=90104 Moderate iowait% Higher iowait% Very high iowait%
  29. Comparing 3 runs on same cluster, different configs: CPU queues

    , 48-node cluster URL: http://aloja.bsc.es/perfcharts?execs%5B%5D=90086&execs%5B%5D=90088&execs%5B%5D=90104 1 blocked process 4 blocked processes 4 blocked processes (map phase)
  30. Comparing 3 runs on same cluster, different configs: CPU context

    switches, 48-node cluster URL: http://aloja.bsc.es/perfcharts?execs%5B%5D=90086&execs%5B%5D=90088&execs%5B%5D=90104 High context switches with 3 containers on a 2-core VM
  31. Impact of SW configurations on Speedup (4-node clusters) Number

    of mappers Compression algorithm No comp. ZLIB BZIP2 snappy 4m 6m 8m 10m Speedup (higher is better) Results using: http://hadoop.bsc.es/configimprovement Details: https://raw.githubusercontent.com/Aloja/aloja/master/publications/BSC-MSR_ALOJA.pdf
  32. Impact of HW configurations on Speedup Disks and Network Cloud

    remote volumes Local only 1 Remote 2 Remotes 3 Remotes 3 Remotes /tmp local 2 Remotes /tmp local 1 Remote /tmp local HDD-ETH HDD-IB SSD-ETH SSD-IB Speedup (higher is better) Results using: http://hadoop.bsc.es/configimprovement Details: https://raw.githubusercontent.com/Aloja/aloja/master/publications/BSC-MSR_ALOJA.pdf
  33. VM Size comparison (Azure) Lower is better

  34. Clusters by cost-effectiveness 1/2 URL http://aloja.bsc.es/clustercosteffectiveness • Cluster ID reference

    • RL-06 = 8 performance1-8 VMs • RL-16 = 8 general1-8 VMs • RL-19 = 8 io1-15 VMs • RL-33 = 8 performance2-30 VMs • RL-30 = 8 io1-30 VMs
  35. Clusters by cost-effectiveness 2/2 URL http://aloja.bsc.es/clustercosteffectiveness Fastest Exec Cheapest exec

    • Cluster ID reference • RL-06 = 8 performance1-8 VMs • RL-16 = 8 general1-8 VMs • RL-19 = 8 io1-15 VMs • RL-33 = 8 performance2-30 VMs • RL-30 = 8 io1-30 VMs
  36. Cost/Performance Scalability of cluster size This shows a sample of

    a new screen (with sample data) to find the most cost-effective cluster size – X axis: number of datanodes (cluster size) – Left Y: Execution time (lower is better) – Right Y: Execution cost (chart marks the recommended size)
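The trade-off behind that screen can be reproduced with toy numbers: adding datanodes shortens a run, but past a point raises the cost per execution, so the fastest and the cheapest configurations sit at opposite ends. All figures below are made up for illustration; nothing here comes from ALOJA's data:

```shell
#!/bin/bash
# Toy model of the cost/performance trade-off: runtime shrinks with
# cluster size while cost per run grows. All numbers are made up.

node_cost_hour="0.20"                  # hypothetical USD per node-hour
best_nodes=""; best_cost=""

for nodes in 4 8 16 32; do
  runtime=$(( 3200 / nodes + 200 ))    # seconds; toy scaling model
  cost=$(awk -v n="$nodes" -v t="$runtime" -v c="$node_cost_hour" \
    'BEGIN { printf "%.3f", n * t / 3600 * c }')
  echo "$nodes nodes: ${runtime}s, \$$cost"
  if [ -z "$best_cost" ] || \
     awk -v a="$cost" -v b="$best_cost" 'BEGIN { exit !(a+0 < b+0) }'; then
    best_cost="$cost"; best_nodes="$nodes"
  fi
done

echo "cheapest execution: $best_nodes nodes (\$$best_cost)"
```

With this toy model the 32-node run is fastest but the 4-node run is cheapest, which is exactly the tension the recommended-size marker resolves.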
  37. Predictive Analytics and automated learning

  38. Modeling Hadoop – Methodology – 3-step learning process: –

    Different split sizes tested: (10% ≤ training ≤ 50%) – Different learning algorithms: Regression trees; Nearest-neighbors methods; Linear/Multinomial regressions; Neural networks Learning results – Mean Absolute Errors ~250s (ranges in [100s, 6000s]) – Relative Absolute Errors between [0.10, 0.25] • Depend on benchmark and # of examples per benchmark • Some executions are/may be anomalies ALOJA Data-Set Training Validation Testing Model Select this model? Final Model Train Test the model Test the model Tune algorithm, re-train NO YES
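The Relative Absolute Error quoted above can be illustrated with a toy set of predictions. The formula below is the standard RAE definition (error relative to a predict-the-mean baseline), not taken from ALOJA's code, and the runtimes are made up:

```shell
#!/bin/bash
# Relative Absolute Error (RAE) for a tiny made-up set of predicted
# vs. observed runtimes in seconds.
# RAE = sum |pred - actual| / sum |actual - mean(actual)|

actual="100 200 300 400"
predicted="110 190 320 390"

rae=$(awk -v a="$actual" -v p="$predicted" 'BEGIN {
  n = split(a, A, " "); split(p, P, " ")
  for (i = 1; i <= n; i++) mean += A[i] / n
  for (i = 1; i <= n; i++) {
    d = P[i] - A[i]; num += (d < 0 ? -d : d)   # |prediction error|
    d = A[i] - mean; den += (d < 0 ? -d : d)   # |deviation from mean|
  }
  printf "%.3f", num / den
}')

echo "RAE = $rae"   # below 1 means the model beats the mean baseline
```

A RAE in the slide's [0.10, 0.25] range thus means the model's total absolute error is only 10-25% of what naively predicting the mean runtime would incur.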
  39. Use case 1: Anomaly Detection Anomaly Detection – Model-based detection

    procedure – Pass executions through the model – Executions not fitting the model are considered “out of the system” Anomaly detection procedure: Sample view from site:
  40. Use case 2: Guided Benchmarking – Method Guided Benchmarking: –

    Best subset of configurations for modeling a Hadoop deployment – Clustering to get the “representative execution” for each similar subset of executions ALOJA Data-Set Increase number of centers NO YES Clustering Data-set (centers) Model Is error OK? Configs. to execute Model Build Build Test Reference
  41. Use case 3: Knowledge Discovery Make analyzing results easier –

    Multi-variable visualization – Trees separating relevant attributes – Other interesting tools pred_time HDD SSD Tree Descriptor: │ ├───Disk=HDD │ ├───Net=ETH │ │ ├───IO.FBuf=128KB ⇒ 2935s │ │ └───IO.FBuf=64KB ⇒ 2942s │ └───Net=IB │ ├───IO.FBuf=128KB ⇒ 3118s │ └───IO.FBuf=64KB ⇒ 3125s └───Disk=SSD ├───Net=ETH │ ├───IO.FBuf=128KB ⇒ 1248s │ └───IO.FBuf=64KB ⇒ 1256s └───Net=IB ├───IO.FBuf=128KB ⇒ 1233s └───IO.FBuf=64KB ⇒ 1241s
  42. Concluding remarks and reference

  43. Concluding remarks Benchmarking is fun! Or at least… – It

    will save you €€€ and allow you to scale But it is also tough – The industry needs more transparency – We still have a lot to do… In ALOJA we provide the benchmarking scripts – And also the results, which should be your first entry point We are constantly adding new features – Benchmarks, systems, providers It is an open initiative, you're invited to participate – Beta testers  – Contributors With predictive analytics we can automate and find trends faster – Especially to save on benchmarking costs and time! Find us around the conference for more details on the tools… Fork our repo at: https://github.com/Aloja/aloja
  44. More info: ALOJA Benchmarking platform and online repository – http://aloja.bsc.es

    http://aloja.bsc.es/publications Benchmarking Big Data – http://www.slideshare.net/ni_po/benchmarking-hadoop BDOOP meetup group in Barcelona Big Data Benchmarking Community (BDBC) mailing list – (~200 members from ~80 organizations) – http://clds.sdsc.edu/bdbc/community Workshop on Big Data Benchmarking (WBDB) – Next: http://clds.sdsc.edu/wbdb2015.ca SPEC Research Big Data working group – http://research.spec.org/working-groups/big-data-working-group.html Slides and video: – Michael Frank on Big Data benchmarking • http://www.tele-task.de/archive/podcast/20430/ – Tilmann Rabl Big Data Benchmarking Tutorial • http://www.slideshare.net/tilmann_rabl/ieee2014-tutorialbarurabl
  45. www.bsc.es Q&A Thanks! Contact: nicolas.poggi@bsc.es