Big Data Collection, Storage and Analytics (SFLiveBerlin2012 2012-11-22)

David Zuelke
November 22, 2012

Presentation given at Symfony Live Berlin 2012 in Berlin, Germany.

Transcript

  1. SOME NUMBERS
     • Facebook, ingest per day:
       • I/08: 200 GB
       • II/09: 2 TB compressed
       • I/10: 12 TB compressed
       • III/12: 500 TB
     • Google:
       • Data processed per month: 400 PB (in 2007!)
       • Average job size: 180 GB
  2. Is data lost? Will other nodes in the grid have to restart? How do you coordinate this?
  3. BASIC MAPREDUCE FLOW
     1. A Mapper reads records and emits <key, value> pairs
        • The input could be a web server log, with each line as a record
     2. A Reducer is given a key and all the values for that specific key
        • Even if there were many Mappers on many computers, the results are aggregated before they are handed to the Reducers
     * In practice, it's a lot smarter than that
  4. EXAMPLE OF MAPPED INPUT
     IP               Bytes
     212.122.174.13   18271
     212.122.174.13   191726
     212.122.174.13   198
     74.119.8.111     91272
     74.119.8.111     8371
     212.122.174.13   43
  5. REDUCER WILL RECEIVE THIS
     IP               Bytes
     212.122.174.13   18271
     212.122.174.13   191726
     212.122.174.13   198
     212.122.174.13   43
     74.119.8.111     91272
     74.119.8.111     8371
  6. PSEUDOCODE

     function map($line_number, $line_text) {
         $parts = parse_apache_log($line_text);
         emit($parts['ip'], $parts['bytes']);
     }

     function reduce($key, $values) {
         $bytes = array_sum($values);
         emit($key, $bytes);
     }

     Input (Apache access log):
     212.122.174.13 - - [30/Oct/2009:18:14:32 +0100] "GET /foo HTTP/1.1" 200 18271
     212.122.174.13 - - [30/Oct/2009:18:14:32 +0100] "GET /bar HTTP/1.1" 200 191726
     212.122.174.13 - - [30/Oct/2009:18:14:32 +0100] "GET /baz HTTP/1.1" 200 198
     212.122.174.13 - - [30/Oct/2009:18:14:32 +0100] "GET /egg HTTP/1.1" 200 43
     74.119.8.111   - - [30/Oct/2009:18:14:32 +0100] "GET /moo HTTP/1.1" 200 91272
     74.119.8.111   - - [30/Oct/2009:18:14:32 +0100] "GET /yay HTTP/1.1" 200 8371

     Output:
     212.122.174.13  210238
     74.119.8.111    99643
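
     In practice, the pseudocode above maps directly onto Hadoop Streaming, which runs any executable as mapper or reducer and passes records over STDIN/STDOUT as tab-separated key/value lines. The following is a minimal, runnable sketch of that job in PHP; the script names, the log-parsing regular expression, and the reliance on the reducer input arriving sorted by key are my own assumptions, not part of the original deck.

     <?php
     // mapper.php: sketch of the map step for Hadoop Streaming.
     // Reads Apache access log lines from STDIN and emits "ip<TAB>bytes";
     // Streaming treats everything before the first tab as the key.
     while (($line = fgets(STDIN)) !== false) {
         // Rough parse: the client IP is the first field, the response size the last.
         if (preg_match('/^(\S+).*\s(\d+)\s*$/', trim($line), $m)) {
             echo $m[1], "\t", $m[2], "\n";
         }
     }

     <?php
     // reducer.php: sketch of the reduce step for Hadoop Streaming.
     // Streaming delivers the mapper output sorted by key, so we can sum
     // bytes until the IP changes and then emit the total for that IP.
     $currentIp = null;
     $bytes     = 0;
     while (($line = fgets(STDIN)) !== false) {
         $parts = explode("\t", trim($line));
         if (count($parts) < 2) {
             continue; // skip malformed lines
         }
         list($ip, $value) = $parts;
         if ($ip !== $currentIp) {
             if ($currentIp !== null) {
                 echo $currentIp, "\t", $bytes, "\n";
             }
             $currentIp = $ip;
             $bytes     = 0;
         }
         $bytes += (int) $value;
     }
     if ($currentIp !== null) {
         echo $currentIp, "\t", $bytes, "\n";
     }

     You can test this locally without a cluster by piping a log file through the same stages Hadoop would run: cat access.log | php mapper.php | sort | php reducer.php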
  7. HADOOP AT FACEBOOK
     • Predominantly used in combination with Hive (~95%)
     • Largest cluster holds over 100 PB of data
     • Typically 8 cores, 12 TB storage and 32 GB RAM per node
     • 1x Gigabit Ethernet for each server in a rack
     • 4x Gigabit Ethernet from the rack switch to the core
     Hadoop is aware of racks and the locality of nodes
  8. HADOOP AT YAHOO!
     • Over 25,000 computers with over 100,000 CPUs
     • Biggest cluster:
       • 4,000 nodes
       • 2x4 CPU cores each
       • 16 GB RAM each
     • Over 40% of jobs run using Pig
     http://wiki.apache.org/hadoop/PoweredBy
  9. OTHER NOTABLE USERS
     • Twitter (storage, logging, analysis; heavy users of Pig)
     • Rackspace (log analysis; data pumped into Lucene/Solr)
     • LinkedIn (contact suggestions)
     • Last.fm (charts, log analysis, A/B testing)
     • The New York Times (converted 4 TB of scans using EC2)
  10. HADOOP FRAMEWORKS AND ECOSYSTEM
     • Apache Hive: SQL-like syntax
     • Apache Pig: data flow language
     • Cascading: Java abstraction layer
     • Scalding (Scala)
     • Apache Mahout: machine learning toolkit
     • Apache HBase: BigTable-like database
     • Apache Nutch: search engine
     • Cloudera Impala: real-time queries (no MapReduce)
  11. TWITTER STORM
     • Often called "the Hadoop for Real-Time"
     • Central Nimbus service coordinates execution w/ ZooKeeper
     • A Storm cluster runs Topologies, processing continuously
     • Spouts produce streams: unbounded sequences of tuples
     • Bolts consume input streams, process them, and output again
     • Topologies can consist of many steps for complex tasks
  12. TWITTER STORM
     • Bolts can be written in other languages
       • Uses STDIN/STDOUT like Hadoop Streaming, plus JSON (see the sketch after this list)
     • Storm can provide transactions for topologies and guarantee the processing of messages
     • The architecture allows for non-stream-processing applications
       • e.g. Distributed RPC
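
     To make the STDIN/STDOUT point concrete: a multilang bolt is just a long-running process that reads JSON messages from STDIN and writes JSON commands to STDOUT, each message terminated by a line containing only "end". The sketch below is a deliberately simplified PHP bolt; it omits the initial handshake (config, topology context, pid file) that a real ShellBolt performs, and the tuple layout it assumes (an IP plus a byte count) is just for illustration.

     <?php
     // Simplified sketch of a Storm multilang-style bolt in PHP.
     // NOTE: the real protocol starts with a setup handshake that this
     // sketch skips; only the tuple-processing loop is shown.

     function read_message()
     {
         // Each message is JSON, terminated by a line containing only "end".
         $json = '';
         while (($line = fgets(STDIN)) !== false) {
             if (trim($line) === 'end') {
                 return json_decode($json, true);
             }
             $json .= $line;
         }
         return null; // STDIN closed, shut down
     }

     function send_message(array $message)
     {
         echo json_encode($message), "\n", "end\n";
         flush();
     }

     while (($message = read_message()) !== null) {
         // Assumed tuple layout: [ip, bytes].
         list($ip, $bytes) = $message['tuple'];

         // Emit a new tuple downstream ...
         send_message(array('command' => 'emit', 'tuple' => array($ip, (int) $bytes)));

         // ... and ack the input so Storm can guarantee it was processed.
         send_message(array('command' => 'ack', 'id' => $message['id']));
     }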
  13. CLOUDERA IMPALA
     • Implementation of a Dremel/BigQuery-like system on Hadoop
     • Uses the Hadoop v2 YARN infrastructure for distributed work
     • No MapReduce, no job setup overhead
     • Query data in HDFS or HBase
     • Hive-compatible interface
     • Potential game changer for its performance characteristics
  14. HADOOPHP
     • A little framework to help with writing MapReduce jobs in PHP
     • Takes care of input splitting, can do basic decoding, et cetera
     • Automatically detects and handles Hadoop settings such as key length or field separators
     • Packages jobs as one .phar archive to ease deployment
     • Also creates a ready-to-rock shell script to invoke the job
  15. DATA ACQUISITION
     • Batch loading
       • Log files into HDFS
       • *SQL to Hive via Sqoop
     • Streaming
       • Facebook Scribe
       • Apache Flume
       • Apache Chukwa
       • Apache Kafka
  16. WANT TO KEEP IT SIMPLE?
     • Measure Anything, Measure Everything
       http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-everything/
     • StatsD receives counter or timer values via UDP (see the sketch below)
     • StatsD::increment("grue.dinners");
     • Periodically flushes information to Graphite
     • But you need to know what you want to know!
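
     Since StatsD speaks a tiny plain-text protocol over UDP ("bucket:value|type"), the client side really can stay simple. A minimal sketch of such a client in PHP follows; the host, port, and class shape are assumptions for illustration, not Etsy's actual implementation.

     <?php
     // Minimal sketch of a StatsD counter client: metrics are sent as
     // "bucket:value|c" over UDP, fire-and-forget, so a lost packet can
     // never slow down or break the application.
     class StatsD
     {
         public static function increment($bucket, $delta = 1)
         {
             // Host and port are assumptions; point this at your StatsD daemon.
             $socket = @fsockopen('udp://127.0.0.1', 8125, $errno, $errstr, 1);
             if ($socket === false) {
                 return;
             }
             fwrite($socket, sprintf('%s:%d|c', $bucket, $delta));
             fclose($socket);
         }
     }

     // Usage, as on the slide:
     StatsD::increment("grue.dinners");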
  17. OPS MONITORING
     • Flume and Chukwa have sources for everything: MySQL status, kernel I/O, FastCGI statistics, ...
     • Build a flow into an HDFS store for persistence
     • Impala queries for fast checks on service outages, etc.
     • Correlate with Storm flow results to find problems
     • Use cloud-based notifications to produce SMS/email alerts