InfluxDB's new storage engine: The Time Structured Merge Tree

Paul Dix
October 14, 2015


Transcript

  1. InfluxDB’s new storage engine: The Time Structured Merge Tree Paul

    Dix CEO at InfluxDB @pauldix paul@influxdb.com
  2. preliminary intro materials…

  3. Everything is indexed by time and series

  4. Shards: data is organized into shards of time (10/10/2015, 10/11/2015,
    10/12/2015, 10/13/2015); each shard is an underlying database, which
    makes it efficient to drop old data.
  5. InfluxDB data temperature,device=dev1,building=b1 internal=80,external=18 1443782126

  6. InfluxDB data temperature,device=dev1,building=b1 internal=80,external=18 1443782126 Measurement

  7. InfluxDB data temperature,device=dev1,building=b1 internal=80,external=18 1443782126 Measurement Tags

  8. InfluxDB data temperature,device=dev1,building=b1 internal=80,external=18 1443782126 Measurement Tags Fields

  9. InfluxDB data temperature,device=dev1,building=b1 internal=80,external=18 1443782126 Measurement Tags Fields Timestamp

  10. InfluxDB data temperature,device=dev1,building=b1 internal=80,external=18 1443782126 Measurement Tags Fields Timestamp We

    actually store up to ns scale timestamps but I couldn’t fit on the slide
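The line-protocol breakdown above (measurement, tags, fields, timestamp) can be sketched in Go. This is a simplified illustration, not InfluxDB's actual parser: the `parsePoint` helper is hypothetical, and a real parser must also handle escaping, quoting, and field types:

```go
package main

import (
	"fmt"
	"strings"
)

// parsePoint splits one line-protocol point into its four parts:
//   measurement,tag=v,... field=v,... timestamp
// Simplified sketch: no escape handling, no type parsing.
func parsePoint(line string) (measurement string, tags, fields map[string]string, timestamp string) {
	parts := strings.SplitN(line, " ", 3)
	keyParts := strings.Split(parts[0], ",")
	measurement = keyParts[0]
	tags = map[string]string{}
	for _, t := range keyParts[1:] {
		kv := strings.SplitN(t, "=", 2)
		tags[kv[0]] = kv[1]
	}
	fields = map[string]string{}
	for _, f := range strings.Split(parts[1], ",") {
		kv := strings.SplitN(f, "=", 2)
		fields[kv[0]] = kv[1]
	}
	timestamp = parts[2]
	return
}

func main() {
	m, tags, fields, ts := parsePoint("temperature,device=dev1,building=b1 internal=80,external=18 1443782126")
	fmt.Println(m, tags["device"], fields["internal"], ts) // temperature dev1 80 1443782126
}
```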
  11. Each series and field maps to a unique ID:
    temperature,device=dev1,building=b1#internal → 1
    temperature,device=dev1,building=b1#external → 2
  12. Data per ID is tuples ordered by time:
    temperature,device=dev1,building=b1#internal (ID 1) → (1443782126,80)
    temperature,device=dev1,building=b1#external (ID 2) → (1443782126,18)
  13. Arranging in Key/Value Stores: the key is ID,Time; the value is the field value.
    Key 1,1443782126 → Value 80
  14. Arranging in Key/Value Stores:
    Key 1,1443782126 → Value 80
    Key 2,1443782126 → Value 18
  15. Arranging in Key/Value Stores: new data arrives
    Key 1,1443782126 → Value 80
    Key 2,1443782126 → Value 18
    Key 1,1443782127 → Value 81
  16. Arranging in Key/Value Stores: the key space is ordered
    Key 1,1443782126 → Value 80
    Key 1,1443782127 → Value 81
    Key 2,1443782126 → Value 18
  17. Arranging in Key/Value Stores:
    Key 1,1443782126 → Value 80
    Key 1,1443782127 → Value 81
    Key 2,1443782126 → Value 18
    Key 2,1443782130 → Value 17
    Key 2,1443782256 → Value 15
    Key 3,1443700126 → Value 18
  18. Many existing storage engines have this model

  19. New Storage Engine?!

  20. First we used LSM Trees

  21. deletes expensive

  22. too many open file handles

  23. Then mmap COW B+Trees

  24. write throughput

  25. compression

  26. met our requirements

  27. High write throughput

  28. Awesome read performance

  29. Better Compression

  30. Writes can’t block reads

  31. Reads can’t block writes

  32. Write multiple ranges simultaneously

  33. Hot backups

  34. Many databases open in a single process

  35. Enter InfluxDB’s Time Structured Merge Tree (TSM Tree)

  36. Enter InfluxDB’s Time Structured Merge Tree (TSM Tree) like LSM,

    but different
  37. Components WAL In memory cache Index Files

  38. Components WAL In memory cache Index Files Similar to LSM

    Trees
  39. Components WAL In memory cache Index Files Similar to LSM

    Trees Same
  40. Components WAL In memory cache Index Files Similar to LSM

    Trees Same like MemTables
  41. Components WAL In memory cache Index Files Similar to LSM

    Trees Same like MemTables like SSTables
  42. awesome time series data WAL (an append only file)

  43. awesome time series data WAL (an append only file) in

    memory index
  44. In Memory Cache
    // cache and flush variables
    cacheLock sync.RWMutex
    cache map[string]Values
    flushCache map[string]Values
  45. In Memory Cache: writes can come in while the WAL flushes (hence the
    separate flushCache)
  46. In Memory Cache: values can come in out of order; mark if so, and
    sort at query time
    dirtySort map[string]bool
  47. Values in Memory
    type Value interface {
        Time() time.Time
        UnixNano() int64
        Value() interface{}
        Size() int
    }
  48. awesome time series data WAL (an append only file) in

    memory index on disk index (periodic flushes)
  49. The Index: contiguous blocks of time
    Data File Min Time: 10000 Max Time: 29999
    Data File Min Time: 30000 Max Time: 39999
    Data File Min Time: 70000 Max Time: 99999
  50. The Index: files can overlap
    Data File Min Time: 10000 Max Time: 29999
    Data File Min Time: 15000 Max Time: 39999
    Data File Min Time: 70000 Max Time: 99999
  51. The Index: but a specific series must not overlap
    cpu,host=A Min Time: 10000 Max Time: 20000
    cpu,host=A Min Time: 21000 Max Time: 39999
    Data File Min Time: 70000 Max Time: 99999
  52. The Index: files are time ascending, and a file will never overlap
    with more than two others
    Data File | Data File | Data File | Data File | Data File
  53. Data files are read only, like LSM SSTables

  54. The Index: data files periodically get compacted (like LSM)
    Data File Min Time: 10000 Max Time: 29999
    Data File Min Time: 30000 Max Time: 39999
    Data File Min Time: 70000 Max Time: 99999
    → compacted into: Data File Min Time: 10000 Max Time: 99999
  55. Compacting while appending new data

  56. Compacting while appending new data
    func (w *WriteLock) LockRange(min, max int64) {
        // sweet code here
    }
    func (w *WriteLock) UnlockRange(min, max int64) {
        // sweet code here
    }
  57. Compacting while appending new data: LockRange should block until we
    get the range
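One way the blocking semantics could be realized is a mutex plus condition variable: wait until no held range overlaps the requested one. This is a sketch under that assumption, not InfluxDB's implementation; only the WriteLock, LockRange, and UnlockRange names come from the slide:

```go
package main

import (
	"fmt"
	"sync"
)

type timeRange struct{ min, max int64 }

// WriteLock lets callers lock disjoint time ranges concurrently;
// LockRange blocks while any held range overlaps [min, max].
type WriteLock struct {
	mu   sync.Mutex
	cond *sync.Cond
	held []timeRange
}

func NewWriteLock() *WriteLock {
	w := &WriteLock{}
	w.cond = sync.NewCond(&w.mu)
	return w
}

func (w *WriteLock) overlaps(min, max int64) bool {
	for _, r := range w.held {
		if min <= r.max && max >= r.min {
			return true
		}
	}
	return false
}

func (w *WriteLock) LockRange(min, max int64) {
	w.mu.Lock()
	defer w.mu.Unlock()
	for w.overlaps(min, max) {
		w.cond.Wait() // block until an overlapping range is released
	}
	w.held = append(w.held, timeRange{min, max})
}

func (w *WriteLock) UnlockRange(min, max int64) {
	w.mu.Lock()
	defer w.mu.Unlock()
	for i, r := range w.held {
		if r.min == min && r.max == max {
			w.held = append(w.held[:i], w.held[i+1:]...)
			break
		}
	}
	w.cond.Broadcast() // wake waiters so they re-check for overlap
}

func main() {
	w := NewWriteLock()
	w.LockRange(0, 100) // e.g. a compaction holds 0..100
	done := make(chan bool)
	go func() {
		w.LockRange(50, 150) // overlaps: blocks until 0..100 is released
		done <- true
	}()
	w.UnlockRange(0, 100)
	<-done
	fmt.Println("non-overlapping writers proceed; overlapping ones wait")
}
```

This is why new writes can keep landing in ranges a compaction isn't touching.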
  58. Locking happens inside each Shard

  59. Back to the data files…
    Data File Min Time: 10000 Max Time: 29999
    Data File Min Time: 30000 Max Time: 39999
    Data File Min Time: 70000 Max Time: 99999
  60. Data File Layout

  61. Data File Layout Similar to SSTables

  62. Data File Layout

  63. Data File Layout blocks have up to 1,000 points by

    default
  64. Data File Layout

  65. Data File Layout 4 byte position means data files can

    be at most 4GB
  66. Data Files
    type dataFile struct {
        f *os.File
        size uint32
        mmap []byte
    }
  67. Memory mapping lets the OS handle caching for you

  68. Access the file like a byte slice
    func (d *dataFile) MinTime() int64 {
        minTimePosition := d.size - minTimeOffset
        timeBytes := d.mmap[minTimePosition : minTimePosition+timeSize]
        return int64(btou64(timeBytes))
    }
  69. Binary Search for ID (the Index: IDs are sorted)
    func (d *dataFile) StartingPositionForID(id uint64) uint32 {
        seriesCount := d.SeriesCount()
        indexStart := d.indexPosition()
        min := uint32(0)
        max := uint32(seriesCount)
        for min < max {
            mid := (max-min)/2 + min
            offset := mid*seriesHeaderSize + indexStart
            checkID := btou64(d.mmap[offset : offset+timeSize])
            if checkID == id {
                return btou32(d.mmap[offset+timeSize : offset+timeSize+posSize])
            } else if checkID < id {
                min = mid + 1
            } else {
                max = mid
            }
        }
        return uint32(0)
    }
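The snippets above call btou64 and btou32 to decode integers out of the mmap'd byte slice. Presumably they are thin byte-decoding wrappers like the ones below; the big-endian byte order here is an assumption, since the slides don't show their definitions:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Hypothetical definitions of the helpers used on the slides:
// decode fixed-width unsigned integers from a byte slice.
func btou64(b []byte) uint64 { return binary.BigEndian.Uint64(b) }
func btou32(b []byte) uint32 { return binary.BigEndian.Uint32(b) }

func main() {
	b := []byte{0, 0, 0, 0, 0, 0, 0, 42}
	fmt.Println(btou64(b))     // 42
	fmt.Println(btou32(b[4:])) // 42
}
```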
  70. Compressed Data Blocks

  71. Timestamps: encoding based on precision and deltas

  72. Timestamps (best case): Run length encoding Deltas are all the

    same for a block
  73. Timestamps (good case): Simple8b (Anh and Moffat, "Index compression
    using 64-bit words")
  74. Timestamps (worst case): raw values (nanosecond timestamps with large deltas)
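The three timestamp cases above amount to inspecting the deltas within a block. A minimal sketch of that selection, assuming only the logic described on the slides (the function name and return labels are illustrative, and the real engine's selection is more involved):

```go
package main

import "fmt"

// chooseEncoding picks a timestamp encoding for one block:
// identical deltas -> run-length encode (store first, delta, count);
// otherwise fall back to packed (Simple8b) or raw values.
func chooseEncoding(ts []int64) string {
	if len(ts) < 2 {
		return "raw"
	}
	delta := ts[1] - ts[0]
	for i := 2; i < len(ts); i++ {
		if ts[i]-ts[i-1] != delta {
			return "simple8b-or-raw" // deltas vary
		}
	}
	return "rle" // best case: regular interval, e.g. one point every 10s
}

func main() {
	fmt.Println(chooseEncoding([]int64{10, 20, 30, 40})) // rle
	fmt.Println(chooseEncoding([]int64{10, 20, 35, 40})) // simple8b-or-raw
}
```

Regularly sampled series (the common case in monitoring) hit the RLE path, which is why timestamp compression can be so effective.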

  75. float64: double delta, from Facebook's Gorilla paper (google: gorilla
    time series facebook) https://github.com/dgryski/go-tsz
  76. booleans are bits!

  77. int64 uses zig-zag encoding, same as Protocol Buffers (also looking
    at adding double delta and RLE)
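Zig-zag encoding, as used by Protocol Buffers, maps signed integers to unsigned so that values near zero (including small negatives) become small unsigned values, which subsequent variable-length or packed encodings compress well:

```go
package main

import "fmt"

// zigzag maps int64 -> uint64: 0->0, -1->1, 1->2, -2->3, 2->4, ...
func zigzag(v int64) uint64 { return uint64((v << 1) ^ (v >> 63)) }

// unzigzag inverts the mapping.
func unzigzag(u uint64) int64 { return int64(u>>1) ^ -int64(u&1) }

func main() {
	for _, v := range []int64{0, -1, 1, -2, 2} {
		fmt.Println(v, "->", zigzag(v))
	}
}
```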
  78. string uses Snappy, the same compression LevelDB uses (might add
    dictionary compression)
  79. How does it perform?

  80. Compression depends greatly on the shape of your data

  81. Write throughput depends on batching, CPU, and memory

  82. Test last night: 100,000 series; 100,000 points per series;
    10,000,000,000 total points; 5,000 points per request; c3.8xlarge with
    writes coming from 4 other systems; ~390,000 points/sec; ~3 bytes/point
    (random floats, could be better)
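Back-of-the-envelope arithmetic on the slide's figures (the throughput and bytes/point numbers are the slide's; the derived run time and disk size are computed here, not measured):

```go
package main

import "fmt"

// totalPoints multiplies series count by points per series.
func totalPoints(series, perSeries int64) int64 { return series * perSeries }

// hoursAtRate converts a point count and a points/sec rate to hours.
func hoursAtRate(points int64, pointsPerSec float64) float64 {
	return float64(points) / pointsPerSec / 3600
}

func main() {
	total := totalPoints(100_000, 100_000)
	fmt.Println(total) // 10000000000, matching the slide's total
	// At the quoted ~390,000 points/sec the full load takes about 7.1 hours.
	fmt.Printf("%.1f hours at ~390,000 points/sec\n", hoursAtRate(total, 390_000))
	// ~3 bytes/point puts the whole data set around 30 GB on disk.
	fmt.Println(total*3/1_000_000_000, "GB at ~3 bytes/point")
}
```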
  83. ~400 IOPS, 30%-50% CPU. There's room for improvement!

  84. Detailed writeup https://influxdb.com/docs/v0.9/concepts/storage_engine.html

  85. Thank you! Paul Dix @pauldix paul@influxdb.com