Flux and InfluxDB 2.0

Paul Dix
November 07, 2018

Talk given at InfluxDays SF 2018 on Flux, the new language we're creating, and where we're going with InfluxDB 2.0.

Transcript

  1. Flux and InfluxDB 2.0 Paul Dix @pauldix paul@influxdata.com

  2. None
  3. • Data-scripting language
     • Functional
     • MIT Licensed
     • Language & Runtime/Engine
  4. Language + Query Engine

  5. None
  6. None
  7. 2.0

  8. Biggest Change Since 0.9

  9. Clean Migration Path

  10. Compatibility Layer

  11. • MIT Licensed
      • Multi-tenanted
      • Telegraf, InfluxDB, Chronograf, Kapacitor rolled into 1
      • OSS single server
      • Cloud usage based pricing
      • Dedicated Cloud
      • Enterprise on-premise

  12. (same slide repeated)
  13. TICK is dead

  14. Long Live InfluxDB 2.0 (and Telegraf)

  15. Consistent, Documented API: Collection, Write/Query, Streaming & Batch Processing, Dashboards

  16. None
  17. Officially Supported Client Libraries: Go, Node.js, Ruby, Python, PHP, Java, C#, C, Kotlin
  18. Visualization Libraries

  19. None
  20. Ways to run Flux: interpreter, InfluxDB 1.7 & 2.0

  21. None
  22. None
  23. Flux Language Elements

  24. // get all data from the telegraf db
      from(bucket: "telegraf/autogen")
        // filter that by the last hour
        |> range(start: -1h)
        // filter further by series with a specific measurement and field
        |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")

  25. Same query, highlighting: Comments
  26. Same query, highlighting: Named Arguments
  27. Same query, highlighting: String Literals
  28. Same query, highlighting: Buckets, not DBs
  29. Same query, highlighting: Duration Literal
  30. Same query with a time literal in place of the duration:
      |> range(start: 2018-11-07T00:00:00Z)
  31. Same query, highlighting: Pipe forward operator
  32. Same query, highlighting: Anonymous Function
  33. Same query with a compound Predicate Function:
      |> filter(fn: (r) => (r._measurement == "cpu" or r._measurement == "cpu") and r.host == "serverA")

  34. // variables
      some_int = 23

  35. Adds: some_float = 23.2
  36. Adds: some_string = "cpu"
  37. Adds: some_duration = 1h
  38. Adds: some_time = 2018-10-10T19:00:00Z
  39. Adds: some_array = [1, 6, 20, 22]
  40. The full set:
      // variables
      some_int = 23
      some_float = 23.2
      some_string = "cpu"
      some_duration = 1h
      some_time = 2018-10-10T19:00:00Z
      some_array = [1, 6, 20, 22]
      some_object = {foo: "hello", bar: 22}
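
      Variables like these can parameterize a query. A minimal sketch (illustrative, not from the deck; the names are invented):

      // reusable pieces of a query bound to variables
      bucket = "telegraf/autogen"
      lookback = -1h
      cpu_system = (r) => r._measurement == "cpu" and r._field == "usage_system"

      from(bucket: bucket)
        |> range(start: lookback)
        |> filter(fn: cpu_system)
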
  41. Data Model & Working with Tables

  42. Example Series

      _measurement=mem,host=A,region=west,_field=free
      _measurement=mem,host=B,region=west,_field=free
      _measurement=cpu,host=A,region=west,_field=usage_system
      _measurement=cpu,host=A,region=west,_field=usage_user

  43. Same series, highlighting: Measurement
  44. Same series, highlighting: Field

  45. Table

      _measurement  host  region  _field  _time                _value
      mem           A     west    free    2018-06-14T09:15:00  10
      mem           A     west    free    2018-06-14T09:14:50  10

  46. Same table, highlighting: Column
  47. Same table, highlighting: Record
  48. Same table, highlighting the Group Key: _measurement=mem,host=A,region=west,_field=free
  49. Same table: every record has the same value for the group key! _measurement=mem,host=A,region=west,_field=free
  50. Table Per Series

      _measurement  host  region  _field        _time                _value
      mem           A     west    free          2018-06-14T09:15:00  10
      mem           A     west    free          2018-06-14T09:14:50  11

      _measurement  host  region  _field        _time                _value
      mem           B     west    free          2018-06-14T09:15:00  20
      mem           B     west    free          2018-06-14T09:14:50  22

      _measurement  host  region  _field        _time                _value
      cpu           A     west    usage_user    2018-06-14T09:15:00  45
      cpu           A     west    usage_user    2018-06-14T09:14:50  49

      _measurement  host  region  _field        _time                _value
      cpu           A     west    usage_system  2018-06-14T09:15:00  35
      cpu           A     west    usage_system  2018-06-14T09:14:50  38
  51. input tables -> function -> output tables

  52. input tables -> function -> output tables

      // example query
      from(db: "telegraf")
        |> range(start: 2018-06-14T09:14:50, stop: 2018-06-14T09:15:01)
        |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
        |> sum()

  53. Same query, highlighting: What to sum on?
  54. Same query with the default columns argument spelled out:
      |> sum(columns: ["_value"])
  55. Input in table form:

      _measurement  host  region  _field  _time                _value
      mem           A     west    free    2018-06-14T09:15:00  10
      mem           A     west    free    2018-06-14T09:14:50  11

      _measurement  host  region  _field  _time                _value
      mem           B     west    free    2018-06-14T09:15:00  20
      mem           B     west    free    2018-06-14T09:14:50  22

  56. sum() applied to each input table.
  57. Output tables:

      _measurement  host  region  _field  _time                _value
      mem           A     west    free    2018-06-14T09:15:00  21

      _measurement  host  region  _field  _time                _value
      mem           B     west    free    2018-06-14T09:15:00  42

  58. N to N table mapping (1 to 1 mapping)

  59. N to M table mapping

  60. window

      // example query
      from(db: "telegraf")
        |> range(start: 2018-06-14T09:14:30, stop: 2018-06-14T09:15:01)
        |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
        |> window(every: 20s)

      30s of data (4 samples)

  61. Same query, highlighting: split into 20s windows
  62. Input:

      _measurement  host  region  _field  _time   _value
      mem           A     west    free    …14:30  10
      mem           A     west    free    …14:40  11
      mem           A     west    free    …14:50  12
      mem           A     west    free    …15:00  13

      _measurement  host  region  _field  _time   _value
      mem           B     west    free    …14:30  20
      mem           B     west    free    …14:40  22
      mem           B     west    free    …14:50  23
      mem           B     west    free    …15:00  24

  63. window(every: 20s) applied to the input tables.
  64. Output tables:

      _measurement  host  region  _field  _time   _value
      mem           A     west    free    …14:30  10
      mem           A     west    free    …14:40  11

      _measurement  host  region  _field  _time   _value
      mem           A     west    free    …14:50  12
      mem           A     west    free    …15:00  13

      _measurement  host  region  _field  _time   _value
      mem           B     west    free    …14:30  20
      mem           B     west    free    …14:40  22

      _measurement  host  region  _field  _time   _value
      mem           B     west    free    …14:50  23
      mem           B     west    free    …15:00  24

  65. Same output, highlighting: N to M tables
  66. Windows are based on time: the _start and _stop columns
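
      Windowing usually pairs with an aggregate. A sketch of that pattern (illustrative; window(every: inf) merges the windows back into one table per series):

      from(bucket: "telegraf/autogen")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
        |> window(every: 5m)    // one table per 5m window, bounded by _start/_stop
        |> mean()               // collapse each window's table to a single row
        |> window(every: inf)   // merge windows back into one table per series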

  67. group

      // example query
      from(db: "telegraf")
        |> range(start: 2018-06-14T09:14:30, stop: 2018-06-14T09:15:01)
        |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
        |> group(keys: ["region"])

  68. Same query, highlighting: new group key
  69. Input: the same two per-series tables as in the window example.
  70. group(keys: ["region"]) applied. Output is one table per region:

      _measurement  host  region  _field  _time   _value
      mem           A     west    free    …14:30  10
      mem           B     west    free    …14:30  20
      mem           A     west    free    …14:40  11
      mem           B     west    free    …14:40  22
      mem           A     west    free    …14:50  12
      mem           B     west    free    …14:50  23
      mem           A     west    free    …15:00  13
      mem           B     west    free    …15:00  24

      N to M tables, where M == cardinality(group keys)
  71. Groups are based on columns
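
      Grouping also pairs naturally with an aggregate. A sketch (illustrative; the keys: parameter matches this deck's syntax and was later renamed):

      // average free memory per region: one averaged row per region table
      from(bucket: "telegraf/autogen")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
        |> group(keys: ["region"])
        |> mean()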

  72. Flux Design Principles

  73. Useable

  74. Make Everyone a Data Programmer!

  75. None
  76. None
  77. None
  78. Readable

  79. Flexible

  80. Composable

  81. Testable

  82. Contributable

  83. Shareable

  84. Functions Overview

  85. Inputs: from, fromKafka, fromFile, fromS3, fromPrometheus, fromMySQL, etc.

  86. Flux != InfluxDB

  87. None
  88. None
  89. None
  90. None
  91. Follow Telegraf Model

  92. import "mysql"

      customers = mysql.from(connect: loadSecret(name: "mysql_prod"),
                             query: "select id, name from customers")

      data = from(bucket: "my_data")
        |> range(start: -4h)
        |> filter(fn: (r) => r._measurement == "write_requests")
        |> rename(columns: {customer_id: "id"})

      join(tables: {customers, data}, on: ["id"])
        |> yield(name: "results")

  93. Same script, highlighting: Imports for sharing code!
  94. Same script, highlighting: Pulling data from a non-InfluxDB source
  95. Same script, highlighting: Raw query (for now)
  96. Same script, highlighting: Loading Secret
  97. Same script, highlighting: Renaming & Shaping Data
  98. Same script, highlighting: Join on any column
  99. Outputs: to, toKafka, toFile, toS3, toPrometheus, toMySQL, etc.
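
      A sketch of what an output function looks like at the end of a pipeline (toKafka is named on the slide; the parameters here are assumed for illustration, not documented):

      // fan query results out to a Kafka topic
      from(bucket: "telegraf/autogen")
        |> range(start: -5m)
        |> filter(fn: (r) => r._measurement == "cpu")
        |> toKafka(brokers: ["kafka:9092"], topic: "cpu-metrics")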

  100. Outputs are for Tasks

  101. option task = {
         name: "Alert on disk",
         every: 5m,
       }

       crit = 90 // alert at this percentage
       warn = 80 // warn at this percentage

       data = from(bucket: "telegraf/autogen")
         |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
         |> last()

       data
         |> filter(fn: (r) => r._value > crit)
         |> addColumn(key: "level", value: "critical")
         |> addColumn(key: "alert", value: task.name)
         |> to(bucket: "alerts")

       data
         |> filter(fn: (r) => r._value > warn and r._value < crit)
         |> addColumn(key: "level", value: "warn")
         |> to(bucket: "alerts")

  102. Same task, highlighting: Option syntax for tasks
  103. Same task, highlighting: last() gets the latest value without specifying a time range
  104. Same task, highlighting: Adding a column to decorate the data
  105. Same task, highlighting: to() writes to the local InfluxDB
  106. Separate Alerts From Notifications!

  107. option task = {name: "slack critical alerts", every: 1m}

       import "slack"

       lastNotificationTime = from(bucket: "notifications")
         |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
         |> group(none: true)
         |> last()
         |> recordValue(column: "_value")

       from(bucket: "alerts")
         |> range(start: lastNotificationTime)
         |> filter(fn: (r) => r.level == "critical")
         // shape the alert data to what we care about in notifications
         |> renameColumn(from: "_time", to: "alert_time")
         |> renameColumn(from: "_value", to: "used_percent")
         // set the time the notification is being sent
         |> addColumn(key: "_time", value: now())
         // get rid of unneeded columns
         |> drop(columns: ["_start", "_stop"])
         // write the message
         |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
         |> slack.to(config: loadSecret(name: "slack_alert_config"), message: "_value")
         |> to(bucket: "notifications")

  108. Same task, highlighting: We have state so we don't resend
  109. Same task, highlighting: Use last time as argument to range
  110. Same task, highlighting: now() function for the current time
  111. Same task, highlighting: Map function to iterate over values
  112. Same task, highlighting: String interpolation
  113. Same task, highlighting: Send to Slack and record in InfluxDB
  114. option task = {
         name: "email alert digest",
         cron: "0 5 * * 0",
       }

       import "smtp"

       body = ""

       from(bucket: "alerts")
         |> range(start: -24h)
         |> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
         |> group(by: ["alert"])
         |> count()
         |> group(none: true)
         |> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")

       smtp.to(
         config: loadSecret(name: "smtp_digest"),
         to: "alerts@influxdata.com",
         title: "Alert digest for {now()}",
         body: body)

  115. Same task, highlighting: Cron syntax
  116. Same task, highlighting: Closures
  117. Task run logs (just another time series)
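
      A hypothetical sketch of what that enables (the bucket and column names here are invented, not from the talk):

      // query a task's run history like any other series
      from(bucket: "system/task_logs")
        |> range(start: -1d)
        |> filter(fn: (r) => r.task == "Alert on disk" and r._field == "status")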

  118. UI will hide complexity

  119. Built on top of primitives

  120. API for Defining Dashboards

  121. Bulk Import & Export: specify bucket, range, predicate

  122. Same API in OSS, Cloud, and Enterprise

  123. CLI & UI

  124. 2.0

  125. Thank you. Paul Dix @pauldix paul@influxdata.com