
MongoNYC 2012: Life of MongoDB at the Energy Frontier

mongodb
May 25, 2012

MongoNYC 2012: Life of MongoDB at the Energy Frontier, Valentin Kuznetsov, Cornell / CERN. The Data Aggregation System (DAS) is a next-generation data discovery service for the CMS experiment at the CERN LHC. Its primary goal is to help physicists find the information they need across distributed CMS data-services. MongoDB was chosen as the primary back-end for this task. We give a system overview and describe our experience with MongoDB in a production environment.


Transcript

  1. Outline
     ✤ CMS :: LHC :: CERN
     ✤ Data Aggregation System and MongoDB
     ✤ Experience
     ✤ Summary
  2. CMS :: LHC :: CERN
     ✤ The Large Hadron Collider is located at CERN, Geneva, Switzerland
     ✤ CMS is one of the four experiments probing our knowledge of particle interactions and searching for new physics
  3. CMS :: LHC :: CERN
     ✤ 40 countries, 172 institutions, more than 3000 scientists
     ✤ The CMS experiment produces a few PB of real data each year, and we collect ~TB of meta-data
     ✤ CMS relies on GRID infrastructure for data processing and uses 100+ computing centers worldwide
     ✤ CMS software consists of 4M lines of C++ (framework) and 2M lines of Python (data management), plus Java, Perl, etc.
     ✤ ORACLE, MySQL, SQLite, NoSQL
  4. Dilemma
     DBS, SiteDB, Phedex, GenDB, LumiDB, RunDB, PSetDB, Data Quality, Overview ... How can I find my data?
  5. Motivations
     ✤ Users want to query different data services without knowing about their existence
     ✤ Users want to combine information from different data services
     ✤ Some users may have domain knowledge, but they still need to query X services, using Y interfaces and dealing with Z data formats, to get their data
     [Diagram: data-services and their keys feeding the Data Aggregation System: DBS (run, file, block, site, config, tier, dataset, lumi, parameters, ...), LumiDB (lumi, luminosity, hltpath), SiteDB (site, admin, site.status, ...), Phedex (block, file, block.replica, file.replica, se, node, ...), GenDB (generator, xsection, process, decay, ...), RunSummary (run, trigger, detector, ...), DataQuality (trigger, ecal, hcal, ...), Parameter Set DB (CMSSW parameters, pset), Overview (country, node, region, ...), Services A-E (param1, param2, ...)]
  6. Implementation idea
     ✤ When we talk we may use different languages (English, French, etc.) or different conventions (pounds vs kg)
     ✤ In order to establish communication we use translations, dictionaries, thesauri
  7. Pros
     ✤ Separates data management from the discovery service
     ✤ Data are safe and secure
     ✤ Pluggable architecture (new translations)
     ✤ Users never bother with interface, naming and schema conflicts, data formats, or security policies
     ✤ Information is aggregated in real time over distributed services
     ✤ Data consistency checks for free
     ✤ DB and API changes are transparent to end-users
  8. Cons
     ✤ DAS does not own the data
     ✤ Lots of writes/reads/translations
     ✤ Data-services are the real bottleneck
     ✤ Nothing is guaranteed: a service can go down, there is no control over its performance, the requested data can be really large, etc.
     ✤ Cache often and preemptively
     MongoDB to the rescue!!!
  9. Data Aggregation System
     [Architecture diagram: a DAS web server (UI, RESTful interface, plugins, aggregator) and a DAS cache server (DAS core per CPU core, parser, DAS mapping, DAS cache, DAS merge, DAS analytics, DAS robot) in front of the data-services (dbs, sitedb, phedex, lumidb, runsum). Queries and API calls are recorded in Analytics, popular queries/APIs are fetched, the same API(params) is invoked, the cache is updated periodically, and the mapping layer maps data-service output to DAS records.]
  10. Mapping DB
     ✤ Holds translations between user keywords and data-service APIs, resolves naming conflicts, etc.
     ✤ A city=Ithaca query translates into a Google API call:
         {'das2api': [{'api_param': 'q', 'das_key': 'city.name', 'pattern': ''}],
          'daskeys': [{'key': 'city', 'map': 'city.name', 'pattern': ''}],
          'expire': 3600,
          'format': 'JSON',
          'params': {'output': 'json', 'q': 'required'},
          'system': 'google_maps',
          'url': 'http://maps.google.com/maps/geo',
          'urn': 'google_geo_maps'}
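
     A minimal sketch of how such a mapping record could drive the translation from a DAS keyword into a data-service API call; the DB/collection names ('das', 'mapping') and the helper function are illustrative assumptions, not the actual DAS code:

        # Illustrative sketch (assumed DB/collection names), not the DAS implementation.
        from urllib.parse import urlencode
        from pymongo import MongoClient

        mapping = MongoClient()['das']['mapping']

        def api_call_for(das_key, value):
            """Look up the mapping record for a DAS key and build the service URL."""
            rec = mapping.find_one({'daskeys.key': das_key})
            if rec is None:
                return None
            params = dict(rec['params'])                 # default API parameters
            for entry in rec['das2api']:                 # DAS key -> API parameter
                if entry['das_key'].startswith(das_key):
                    params[entry['api_param']] = value
            return rec['url'] + '?' + urlencode(params)

        # city=Ithaca -> http://maps.google.com/maps/geo?output=json&q=Ithaca
        print(api_call_for('city', 'Ithaca'))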
  11. Analytics DB
     ✤ Keeps track of user queries and data-service API calls:
         {'api': {'params': {'q': 'Ithaca', 'output': 'json'}, 'name': 'google_geo_maps'},
          'qhash': '7272bdeac45174823d3a4ea240c124ec',
          'system': 'google_maps',
          'counter': 5}
     ✤ Used by DAS analytics daemons to pre-fetch "hot" queries
     ✤ ValueHotSpot looks up data by popular values
     ✤ KeyHotSpot looks up data by popular keys
     ✤ QueryMaintainer keeps a given query always in cache
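
     A sketch of how such a record could be maintained with a single upsert per API call; the qhash construction, DB/collection names and helper are assumptions for illustration:

        # Illustrative sketch: one analytics record per (system, api, params), with a counter.
        import hashlib
        from pymongo import MongoClient

        analytics = MongoClient()['das']['analytics']

        def record_api_call(system, api_name, params):
            """Upsert the analytics record and bump its counter."""
            qhash = hashlib.md5(repr(sorted(params.items())).encode()).hexdigest()
            analytics.update_one(
                {'qhash': qhash},
                {'$set': {'system': system, 'api': {'name': api_name, 'params': params}},
                 '$inc': {'counter': 1}},        # counters feed the hot-spot daemons
                upsert=True)

        record_api_call('google_maps', 'google_geo_maps', {'q': 'Ithaca', 'output': 'json'})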
  12. Caching DB
     ✤ Data coming from data-service providers are translated into JSON and stored in the cache collection
     ✤ Naming translations are performed at this level
     ✤ Data records from the cache collection are processed on a common key, e.g. city.name, and merged into the merge collection
     cache collection:
         {'city': {'name': 'Ithaca', 'lat': 42, 'lng': -76}}
         {'city': {'name': 'Ithaca', 'zip': 14850}}
     merge collection:
         {'city': {'name': 'Ithaca', 'lat': 42, 'lng': -76, 'zip': 14850}}
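
     A sketch of the cache-to-merge step under the example above; the collection names and the helper are illustrative, and the real DAS merge is more general:

        # Illustrative sketch: merge cache records that share a common key (city.name).
        from pymongo import MongoClient

        db = MongoClient()['das']
        cache, merge = db['cache'], db['merge']

        def merge_records(common_key='city.name'):
            top, sub = common_key.split('.')
            merged = {}
            for rec in cache.find({common_key: {'$exists': True}}):
                rec.pop('_id', None)
                value = rec[top][sub]                         # e.g. 'Ithaca'
                target = merged.setdefault(value, {top: {}})
                target[top].update(rec[top])                  # combine fields from all providers
            if merged:
                merge.insert_many(list(merged.values()))

        merge_records()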
  13. DAS workflow
     ✤ Parse the query
     ✤ Query the DAS merge collection
     ✤ Query the DAS cache collection
     ✤ Invoke calls to the data-services
     ✤ Write to analytics
     ✤ Aggregate results
     ✤ Present results in the web UI or via the command-line interface
     [Flow diagram: query parser → query DAS merge → (no) query DAS cache → (no) query data-services → DAS cache → DAS merge → aggregator → results, with DAS mapping, DAS analytics, DAS logging and the web UI alongside the DAS core]
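
     The look-up order above can be sketched as follows; parse_query, call_services, analytics_log, aggregate and merge_records are placeholders for the real DAS components, and merge/cache are the collections from the previous slide:

        # Rough sketch of the DAS look-up order: merge collection -> cache collection -> data-services.
        def process(das_query):
            spec = parse_query(das_query)              # DAS QL -> MongoDB spec (placeholder)
            records = list(merge.find(spec))
            if records:                                # already merged: answer directly
                return records
            if cache.count_documents(spec) == 0:       # cache miss: call the data-services
                analytics_log(das_query, spec)         # record the query/API call (placeholder)
                for rec in call_services(spec):        # fetch and translate to DAS records
                    cache.insert_one(rec)
                merge_records()                        # cache -> merge step from the previous slide
            return aggregate(merge.find(spec))         # filters and aggregator functions (placeholder)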
  14. DAS QL & MongoDB QL
     ✤ The DAS Query Language is built on top of the MongoDB QL; it represents MongoDB QL in a human-readable form
     ✤ UI level: block dataset=/a/b/c | grep block.size | count(block.size)
     ✤ DB level: col.find(spec={'dataset.name': '/a/b/c'}, fields=['block.size']).count()
     ✤ We enrich the QL with additional filters (grep, sort, unique) and implement a set of coroutines for aggregator functions
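
     For reference, the same DB-level query written against a present-day pymongo driver (the slide's find(spec=..., fields=...).count() form reflects the driver API of that era); the collection name is an assumption:

        # Illustrative: 'block dataset=/a/b/c | grep block.size | count(block.size)' in today's pymongo.
        from pymongo import MongoClient

        col = MongoClient()['das']['merge']

        spec = {'dataset.name': '/a/b/c'}              # condition from the DAS query
        projection = {'block.size': 1, '_id': 0}       # '| grep block.size' filter

        for rec in col.find(spec, projection):         # matching records, projected fields only
            print(rec)

        print(col.count_documents(spec))               # 'count(...)' aggregator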
  15. DAS & MongoDB
     ✤ DAS works with 15 distributed data-services
     ✤ Their sizes vary, on average O(100GB)
     ✤ DAS uses 40 MongoDB collections
     ✤ Caching, mapping, analytics, logging (normal, capped and GridFS collections)
     ✤ DAS inserts/deletes O(1M) records on a daily basis
     ✤ We operate on a single 64-bit Linux node with 8 CPUs, 24 GB of RAM and 1 TB of disk space; sharding was tested but is not enabled
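
     As an illustration of the collection flavours listed above, a capped logging collection and a GridFS store can be set up as follows; the names and the 100 MB cap are placeholder values, not the DAS configuration:

        # Illustrative: normal, capped and GridFS collections in one database.
        import gridfs
        from pymongo import MongoClient

        db = MongoClient()['das']

        cache = db['cache']                                      # normal collection, created lazily
        if 'logging' not in db.list_collection_names():
            db.create_collection('logging', capped=True,         # fixed-size, FIFO collection for logs
                                 size=100 * 1024 * 1024)         # 100 MB cap (placeholder)
        fs = gridfs.GridFS(db)                                   # GridFS for records above the BSON size limit
        fs.put(b'raw data-service payload', filename='example')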
  16. MongoDB benefits
     ✤ Fast I/O and a schema-less database are ideal for a cache implementation
     ✤ You are not limited to a key:value approach
     ✤ The flexible query language allows us to build a domain-specific QL
     ✤ Stays on par with SQL
     ✤ No administrative costs for the DB
     ✤ Easy to install and maintain
  17. MongoDB issues (ver 2.0.X)
     ✤ We were unable to store DAS queries directly in the analytics collection due to the dot constraint on keys, e.g. {'a.b': 1}
     ✤ Queries <=> storage format {'key': 'a.b', 'value': 1}
     ✤ SCons is not suitable for a fully controlled build environment
     ✤ It removes $PATH/$LD_LIBRARY_PATH for compiler commands and forces the use of -L/lib64; as a result we used wrappers
     ✤ Uncompressed field names and limitations with pagination/aggregation
     ✤ Should be addressed in the new MongoDB aggregation framework
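
     A sketch of the "queries <=> storage format" workaround named above: dotted keys are flattened into key/value pairs before storage and restored on read-back. The function names are illustrative, and nested operator values may need additional serialisation in practice:

        # Illustrative work-around for MongoDB 2.0's dot constraint on document keys:
        # {'a.b': 1} cannot be stored as-is, so query specs are (de)serialised.
        def encode_query(spec):
            """{'a.b': 1} -> [{'key': 'a.b', 'value': 1}] for safe storage."""
            return [{'key': k, 'value': v} for k, v in spec.items()]

        def decode_query(stored):
            """Restore the original query spec from its storage format."""
            return {item['key']: item['value'] for item in stored}

        spec = {'dataset.name': '/a/b/c'}
        assert decode_query(encode_query(spec)) == spec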
  18. Tradeoffs
     ✤ Query collisions: DAS does not own the data and there are no transactions; we rely on a query status and update it accordingly
     ✤ Index choice: initially one per select key, later one per query hash
     ✤ Storage size: we trade storage against data flexibility and naming conventions
     ✤ Speed: we trade simple data access against a conglomerate of restrictions (naming, security policies, interfaces, etc.), but we tune our data-service APIs based on query patterns
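
     A sketch of the query-hash index and the status bookkeeping mentioned above; the status values and collection layout are assumptions for illustration:

        # Illustrative: one index per query hash, plus a status field instead of transactions.
        from pymongo import ASCENDING, MongoClient

        cache = MongoClient()['das']['cache']
        cache.create_index([('qhash', ASCENDING)])       # later index choice: one per query hash

        def set_query_status(qhash, status):
            """DAS relies on a query status field rather than transactions."""
            cache.update_one({'qhash': qhash}, {'$set': {'status': status}}, upsert=True)

        set_query_status('7272bdeac45174823d3a4ea240c124ec', 'fetching')   # while calling services
        set_query_status('7272bdeac45174823d3a4ea240c124ec', 'ok')         # once records are merged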
  19. Results
     ✤ The service has been in production for over one year
     ✤ Users are authenticated via GRID certificates, and DAS uses a proxy server to pass credentials to back-end services
     ✤ A single query request yields a few thousand records and is resolved within a few seconds
     ✤ The pluggable architecture allows you to query your own service(s)
     ✤ Unit tests are run against public data-services, e.g. Google, IP look-up, etc.
  20. NoSQL @ CERN
     ✤ MongoDB is used by other experiments at CERN
     ✤ Logging, monitoring, data analytics
     ✤ MongoDB is not the only NoSQL solution used at CERN
     ✤ One size does not fit all
     ✤ CouchDB, Cassandra, HBase, etc.
     ✤ There is an on-going discussion between the experiments and CERN IT about the adoption of NoSQL
  21. Summary
     ✤ The CMS experiment built the Data Aggregation System as an intelligent cache to query distributed data-services
     ✤ MongoDB is used as the DAS back-end
     ✤ During the first year of operation we did not experience any significant problems
     ✤ I'd like to thank the MongoDB team and its community for their constant support
     ✤ Questions? Contact: [email protected]
     ✤ https://github.com/vkuznet/DAS/
  22. From query to results
     [Animation build, repeated over several slides: Query → API lookup → data-service generators → Merge results → Aggregators. The Mapping DB holds the relationships (e.g. block dataset=/a/b/c translates into a MongoDB spec), the Caching DB holds the service records, and the Merge DB holds the merged records.]