
MongoDB and Hadoop

Why and how do MongoDB and Hadoop work together? This presentation explains.

This presentation was delivered at MongoDB Day Paris 2014.

Tugdual Grall

October 28, 2014

Transcript

  1. Hadoop
     • Terabyte and petabyte datasets
     • Data warehousing
     • Advanced analytics
     "The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models." (http://hadoop.apache.org)
  2. Operational: MongoDB. First-Level Analytics, Internet of Things, Social, Mobile Apps, Product/Asset Catalog, Security & Fraud, Single View, Customer Data Management, Churn Analysis, Risk Modeling, Sentiment Analysis, Trade Surveillance, Recommender, Warehouse & ETL, Ad Targeting, Predictive Analytics.
  3. Analytical: Hadoop. First-Level Analytics, Internet of Things, Social, Mobile Apps, Product/Asset Catalog, Security & Fraud, Single View, Customer Data Management, Churn Analysis, Risk Modeling, Sentiment Analysis, Trade Surveillance, Recommender, Warehouse & ETL, Ad Targeting, Predictive Analytics.
  4. Operational & Analytical: Lifecycle. First-Level Analytics, Internet of Things, Social, Mobile Apps, Product/Asset Catalog, Security & Fraud, Single View, Customer Data Management, Churn Analysis, Risk Modeling, Sentiment Analysis, Trade Surveillance, Recommender, Warehouse & ETL, Ad Targeting, Predictive Analytics.
  5. Commerce. Applications powered by MongoDB: products & inventory, recommended products, customer profile, session management. Analysis powered by Hadoop: elastic pricing, recommendation models, predictive analytics, clickstream history. Connected by the MongoDB Connector for Hadoop.
  6. Insurance. Applications powered by MongoDB: customer profiles, insurance policies, session data, call center data. Analysis powered by Hadoop: customer action analysis, churn analysis, churn prediction, policy rates. Connected by the MongoDB Connector for Hadoop.
  7. Fraud Detection. Payments data, together with 3rd-party data sources, feeds a nightly analysis job through the MongoDB Connector for Hadoop; results are written to a results cache, which the fraud-detection service accesses as query-only.
  8. Connector Overview
     • Data: read/write MongoDB, read/write BSON
     • Tools: MapReduce, Pig, Hive, Spark
     • Platforms: Apache Hadoop, Cloudera CDH, Hortonworks HDP, MapR, Amazon EMR

  9. Connector Features and Functionality
     • Computes splits to read data
     • Works with single nodes, replica sets, and sharded clusters
     • Mappings for Pig and Hive
     • MongoDB as a standard data source/destination
     • Support for filtering data with MongoDB queries, authentication, reading from replica set tags, and appending to existing collections
  10. MapReduce Configuration
      • MongoDB input/output:
        mongo.job.input.format = com.mongodb.hadoop.MongoInputFormat
        mongo.input.uri = mongodb://mydb:27017/db1.collection1
        mongo.job.output.format = com.mongodb.hadoop.MongoOutputFormat
        mongo.output.uri = mongodb://mydb:27017/db1.collection2
      • BSON input/output:
        mongo.job.input.format = com.mongodb.hadoop.BSONFileInputFormat
        mapred.input.dir = hdfs:///tmp/database.bson
        mongo.job.output.format = com.mongodb.hadoop.BSONFileOutputFormat
        mapred.output.dir = hdfs:///tmp/output.bson
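      The same settings can also be made programmatically. Below is a minimal Java sketch, not from the deck itself, of a driver class wiring up MongoDB input and output plus the query-filter feature from the previous slide; the class name, URIs, and query value are placeholder assumptions.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.mapreduce.Job;
        import com.mongodb.hadoop.MongoInputFormat;
        import com.mongodb.hadoop.MongoOutputFormat;

        public class MongoJobDriver {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // Read from one collection, write results to another
                conf.set("mongo.input.uri",  "mongodb://mydb:27017/db1.collection1");
                conf.set("mongo.output.uri", "mongodb://mydb:27017/db1.collection2");
                // Push a filter down to MongoDB so only matching documents
                // are read (placeholder query)
                conf.set("mongo.input.query", "{\"status\": \"active\"}");

                Job job = Job.getInstance(conf, "mongo-hadoop example");
                job.setJarByClass(MongoJobDriver.class);
                job.setInputFormatClass(MongoInputFormat.class);
                job.setOutputFormatClass(MongoOutputFormat.class);
                // ... set mapper, reducer, and key/value classes here ...
                System.exit(job.waitForCompletion(true) ? 0 : 1);
            }
        }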
  11. Pig Mappings
      • Input: BSONLoader and MongoLoader
        data = LOAD 'mongodb://mydb:27017/db.collection'
               USING com.mongodb.hadoop.pig.MongoLoader;
      • Output: BSONStorage and MongoInsertStorage
        STORE records INTO 'hdfs:///output.bson'
              USING com.mongodb.hadoop.pig.BSONStorage;
  12. Hive Support
      • Access collections as Hive tables
      • Use with MongoStorageHandler or BSONStorageHandler
        CREATE TABLE mongo_users (id int, name string, age int)
        STORED BY "com.mongodb.hadoop.hive.MongoStorageHandler"
        WITH SERDEPROPERTIES("mongo.columns.mapping" = "_id,name,age")
        TBLPROPERTIES("mongo.uri" = "mongodb://host:27017/test.users");
  13. Spark
      • Use with the MapReduce input/output formats
      • Create Configuration objects with the input/output formats and data URIs
      • Load/save data using the SparkContext Hadoop file API (see the sketch below)
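      A minimal Java sketch of the pattern this slide describes: one Configuration per side, the connector's MapReduce formats, and the SparkContext Hadoop file API for load/save. The class name, app name, and URIs are assumptions, not the deck's own code.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.spark.SparkConf;
        import org.apache.spark.api.java.JavaPairRDD;
        import org.apache.spark.api.java.JavaSparkContext;
        import org.bson.BSONObject;
        import com.mongodb.hadoop.MongoInputFormat;
        import com.mongodb.hadoop.MongoOutputFormat;

        public class MongoSparkSketch {
            public static void main(String[] args) {
                JavaSparkContext sc =
                    new JavaSparkContext(new SparkConf().setAppName("mongo-spark"));

                Configuration inputConf = new Configuration();
                inputConf.set("mongo.input.uri",
                    "mongodb://127.0.0.1:27017/movielens.ratings");

                // Each document arrives as an (id, BSONObject) pair
                JavaPairRDD<Object, BSONObject> docs = sc.newAPIHadoopRDD(
                    inputConf, MongoInputFormat.class, Object.class, BSONObject.class);

                Configuration outputConf = new Configuration();
                outputConf.set("mongo.output.uri",
                    "mongodb://127.0.0.1:27017/movielens.out");

                // MongoOutputFormat writes to the URI above; the path is unused
                docs.saveAsNewAPIHadoopFile("file:///unused", Object.class,
                    BSONObject.class, MongoOutputFormat.class, outputConf);

                sc.stop();
            }
        }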
  14. Data Movement: dynamic queries to MongoDB vs. BSON snapshots in HDFS
      • Dynamic queries see the most recent data but put load on the operational database
      • Snapshots move load to Hadoop; snapshotting adds a predictable load to MongoDB
  15. MovieWeb Web Application
      • Browse: top movies by ratings count, top genres by movie count
      • Log in to see My Ratings and rate movies
      • Recommendations: Movies You May Like
  16. MovieWeb Components
      • MovieLens dataset: 10M ratings, 10K movies, 70K users (http://grouplens.org/datasets/movielens/)
      • Python web app to browse movies and recommendations (Flask, PyMongo)
      • Spark app computes recommendations (MLlib collaborative filter)
      • Predicted ratings exposed in the web app through a new predictions collection
  17. Spark Recommender
      • Apache Hadoop (2.3): HDFS & YARN
      • Spark (1.0): executes within YARN, with assigned executor resources
      • Data: from HDFS and MongoDB, to MongoDB
  18. MovieWeb Workflow (a repeating process)
      • Snapshot the database as BSON and store the BSON in HDFS
      • Read the BSON into the Spark app
      • Train the model from existing ratings
      • Create user-movie pairings and predict ratings for all pairings
      • Write the predictions to a MongoDB collection
      • Web application exposes the recommendations
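      A minimal Java sketch of the train/predict steps in this cycle, using the MLlib collaborative filter mentioned on the components slide. Parsing the BSON snapshot into Rating objects is elided, and the class name, method, and parameter values are illustrative assumptions.

        import org.apache.spark.api.java.JavaRDD;
        import org.apache.spark.mllib.recommendation.ALS;
        import org.apache.spark.mllib.recommendation.MatrixFactorizationModel;
        import org.apache.spark.mllib.recommendation.Rating;
        import scala.Tuple2;

        public class RecommenderSketch {
            // existingRatings: one Rating(user, movie, stars) per stored rating
            // userMoviePairs: every (user, movie) pairing to score
            static JavaRDD<Rating> trainAndPredict(
                    JavaRDD<Rating> existingRatings,
                    JavaRDD<Tuple2<Object, Object>> userMoviePairs) {
                // Train a matrix-factorization model on the existing ratings
                MatrixFactorizationModel model = ALS.train(
                    existingRatings.rdd(), 10 /* rank */, 10 /* iterations */,
                    0.01 /* lambda */);
                // Predict a rating for every user-movie pairing; the caller then
                // writes the results to the predictions collection via the connector
                return model.predict(JavaRDD.toRDD(userMoviePairs)).toJavaRDD();
            }
        }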
  19. Execution
      $ spark-submit --master local \
          --driver-memory 2G --executor-memory 2G \
          --jars mongo-hadoop-core.jar,mongo-java-driver.jar \
          --class com.mongodb.workshop.SparkExercise \
          ./target/spark-1.0-SNAPSHOT.jar \
          hdfs://localhost:9000 \
          mongodb://127.0.0.1:27017/movielens \
          predictions
  20. Business First! First-Level Analytics, Internet of Things, Social, Mobile Apps, Product/Asset Catalog, Security & Fraud, Single View, Customer Data Management, Churn Analysis, Risk Modeling, Sentiment Analysis, Trade Surveillance, Recommender, Warehouse & ETL, Ad Targeting, Predictive Analytics. What/Why vs. How.
  21. The right tool for the task (v1.0)
      • Dataset size
      • Data processing complexity
      • Continuous improvement
  22. The right tool for the task (v2.0)
      • Dataset size
      • Data processing complexity
      • Continuous improvement
  23. Resources / Questions
      • MongoDB Connector for Hadoop: http://github.com/mongodb/mongo-hadoop
      • Getting Started with MongoDB and Hadoop: http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-hadoop/
      • MongoDB-Spark Demo: https://github.com/crcsmnky/mongodb-hadoop-workshop