This is the talk about our metrics solution at gutefrage.net. We use openTSDB. I gave this talk at the Open Source Monitoring Conference 2013 in Nuremberg.
Who am I?
- Senior Engineer - Data and Infrastructure at gutefrage.net GmbH
- Did software development before
- DevOps advocate
Thursday, 24 October 2013
Who is Gutefrage.net?
- Germany's biggest Q&A platform
- #1 German site (mobile): about 5M unique users
- #3 German site (desktop): about 17M unique users
- > 4 million page impressions per day
- Part of the Holtzbrinck group
- Running several platforms (Gutefrage.net, Helpster.de, Cosmiq, Comprano, ...)
We were looking at some options

                   Munin   Graphite   openTSDB   Ganglia
  Scales well      no      sort of    yes        yes
  Keeps all data   no      no         yes        no
  Creating metrics easy    easy       easy       easy

We have a winner: openTSDB is the only option that both scales well and keeps all data. Bingo!
The ecosystem
- The app feeds metrics in via RabbitMQ
- We base Icinga checks on the metrics
- We are evaluating Etsy's Skyline for anomaly detection
- We deploy sensors via Chef
openTSDB
- Written at StumbleUpon, but open source
- Uses HBase (which is based on HDFS) as storage
- Distributed system (multiple TSDs)
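The simplest way to get a data point into a TSD is its telnet-style line protocol. A minimal sketch, assuming a TSD on the default port; the metric name and tags are made up for illustration:

```python
import socket
import time

def format_put(metric, value, tags, ts=None):
    """Build one line of openTSDB's telnet-style 'put' protocol:
    put <metric> <unix-timestamp> <value> <tag1=val1> [tag2=val2 ...]"""
    ts = int(time.time()) if ts is None else ts
    tag_str = " ".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"put {metric} {ts} {value} {tag_str}\n"

def send_metric(metric, value, tags, host="localhost", port=4242):
    # A TSD listens on port 4242 by default; with multiple TSDs,
    # any of them can accept the write.
    with socket.create_connection((host, port)) as s:
        s.sendall(format_put(metric, value, tags).encode("ascii"))
```

In a setup like the one above, the app would publish to RabbitMQ instead of talking to a TSD directly; a consumer then forwards the lines.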
It gets even better
- tcollector is a Python script that runs your collectors
- Handles the network connection and starts your collectors at set intervals
- Does basic process management
- Adds the host tag and deduplicates repeated values
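A collector under tcollector is just a program that prints "metric timestamp value tags" lines to stdout; tcollector takes care of shipping them to a TSD. A minimal sketch (the metric name, interval, and Linux /proc path are illustrative):

```python
import sys
import time

def metric_line(metric, value, ts=None, **tags):
    """One data point in the line format tcollector reads from stdout."""
    ts = int(time.time()) if ts is None else ts
    tag_str = "".join(f" {k}={v}" for k, v in sorted(tags.items()))
    return f"{metric} {ts} {value}{tag_str}"

if __name__ == "__main__":
    # Emit the 1-minute load average every 15 seconds; tcollector
    # adds the host= tag on its own.
    while True:
        with open("/proc/loadavg") as f:
            load1 = f.read().split()[0]
        print(metric_line("sys.loadavg.1min", load1))
        sys.stdout.flush()  # tcollector consumes our stdout as a stream
        time.sleep(15)
```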
What was that HDFS again?
- HDFS is a distributed filesystem suitable for petabytes of data on thousands of machines
- Runs on commodity hardware
- Takes care of redundancy
- Used by e.g. Facebook, Spotify, eBay, ...
Okay... and HBase?
- HBase is a NoSQL database / data store on top of HDFS
- Modeled after Google's BigTable
- Built for big tables (billions of rows, millions of columns)
- Automatic sharding by row key
Keys are key!
- Data is sharded across regions based on the row key
- You query data based on the row key
- You can query row key ranges (e.g. A...D)
- So: think about key design
Take 1
Row key format: timestamp, then metric ID

  1382536472, 5    17
  1382536472, 6    24
  1382536472, 8    12
  1382536473, 5    134
  1382536473, 6    10
  1382536473, 8    99

With the timestamp first, rows are split across region servers (Server A, Server B) by time, so the data points of a single metric are scattered across the cluster.
Take 2
- Metric ID first, then timestamp
- Searching through many rows is slower than searching through fewer rows (obviously)
- So: put multiple data points into one row
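The effect of swapping the key order can be seen with plain sorting, since HBase orders rows lexicographically by key (the metric IDs and timestamps reuse the numbers from the previous slide):

```python
# HBase sorts rows lexicographically by key, so the key order decides
# what a range scan has to touch.
metrics = (5, 6, 8)
timestamps = (1382536472, 1382536473)

take1 = sorted((ts, m) for ts in timestamps for m in metrics)  # timestamp first
take2 = sorted((m, ts) for ts in timestamps for m in metrics)  # metric ID first

# Take 1: metric 5's points sit at positions 0 and 3, interleaved with
# other metrics, so scanning one metric means skipping unrelated rows.
# Take 2: metric 5's points are adjacent -- one narrow range scan.
```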
Where are the tags stored?
- They are put at the end of the row key
- Both metric names and tag names/values are represented by IDs
The Row Key
- 3 bytes: metric ID
- 4 bytes: timestamp (rounded down to the hour)
- 3 bytes: tag name ID
- 3 bytes: tag value ID
- Total: 7 bytes + 6 bytes × number of tags
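Packed into bytes, the layout above looks roughly like this; the concrete IDs are invented for illustration (real ones live in openTSDB's UID table):

```python
import struct

def row_key(metric_id, timestamp, tags):
    """Sketch of the row key layout from the slide: 3-byte metric ID,
    4-byte hour-aligned base timestamp, then 3 + 3 bytes per
    (tag name ID, tag value ID) pair."""
    base_ts = timestamp - (timestamp % 3600)  # round down to the hour
    key = metric_id.to_bytes(3, "big") + struct.pack(">I", base_ts)
    for name_id, value_id in sorted(tags):
        key += name_id.to_bytes(3, "big") + value_id.to_bytes(3, "big")
    return key

key = row_key(5, 1382536472, [(1, 42)])  # one tag
# len(key) == 7 + 6 * 1 == 13 bytes
```

Rounding the timestamp down to the hour is what lets one row hold an hour's worth of data points, as described on the previous slide.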
What works well
- We store about 200M data points in several thousand time series with no issues
- tcollector decouples measurement from storage
- Creating new metrics is really easy
Challenges
- The UI is seriously lacking
- No annotation support out of the box
- Only 1 s time resolution (and only one value per second per time series)
Friendly advice
- Pick a naming scheme and stick to it
- Use tags wisely (no more than 6 or 7 tags per data point)
- Use tcollector
- Wait for openTSDB 2 ;-)