
Flux and InfluxDB 2.0

Paul Dix
November 07, 2018

Talk given at InfluxDays SF 2018 on Flux, the new language we're creating, and where we're going with InfluxDB 2.0.

Transcript

  1. Flux and InfluxDB 2.0
    Paul Dix

    @pauldix

    paul@influxdata.com


  2. • Data-scripting language

    • Functional

    • MIT Licensed

    • Language & Runtime/Engine


  3. Language + Query Engine


  4. Biggest Change Since 0.9


  5. Clean Migration Path


  6. Compatibility Layer


  7. • MIT Licensed

    • Multi-tenanted

    • Telegraf, InfluxDB, Chronograf, Kapacitor rolled into 1

    • OSS single server

    • Cloud usage based pricing

    • Dedicated Cloud

    • Enterprise on-premise


  8. • MIT Licensed

    • Multi-tenanted

    • Telegraf, InfluxDB, Chronograf, Kapacitor rolled into 1

    • OSS single server

    • Cloud usage based pricing

    • Dedicated Cloud

    • Enterprise on-premise


  9. TICK is dead


  10. Long Live InfluxDB 2.0
    (and Telegraf)


  11. Consistent Documented API
    Collection, Write/Query, Streaming & Batch Processing, Dashboards


  12. Officially Supported Client
    Libraries
    Go, Node.js, Ruby, Python, PHP, Java, C#, C, Kotlin


  13. Visualization Libraries


  14. Ways to run Flux (interpreter,
    InfluxDB 1.7 & 2.0)


  15. Flux Language Elements


  16. // get all data from the telegraf db
    from(bucket:"telegraf/autogen")
    // filter that by the last hour
    |> range(start:-1h)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")


  17. // get all data from the telegraf db
    from(bucket:"telegraf/autogen")
    // filter that by the last hour
    |> range(start:-1h)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
    Comments


  18. // get all data from the telegraf db
    from(bucket:"telegraf/autogen")
    // filter that by the last hour
    |> range(start:-1h)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
    Named Arguments


  19. // get all data from the telegraf db
    from(bucket:"telegraf/autogen")
    // filter that by the last hour
    |> range(start:-1h)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
    String Literals


  20. // get all data from the telegraf db
    from(bucket:"telegraf/autogen")
    // filter that by the last hour
    |> range(start:-1h)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
    Buckets, not DBs


  21. // get all data from the telegraf db
    from(bucket:"telegraf/autogen")
    // filter that by the last hour
    |> range(start:-1h)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
    Duration Literal


  22. // get all data from the telegraf db
    from(bucket:"telegraf/autogen")
    // filter from an absolute start time
    |> range(start:2018-11-07T00:00:00Z)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
    Time Literal


  23. // get all data from the telegraf db
    from(bucket:"telegraf/autogen")
    // filter that by the last hour
    |> range(start:-1h)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
    Pipe forward operator


  24. // get all data from the telegraf db
    from(bucket:"telegraf/autogen")
    // filter that by the last hour
    |> range(start:-1h)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
    Anonymous Function


  25. // get all data from the telegraf db
    from(bucket:"telegraf/autogen")
    // filter that by the last hour
    |> range(start:-1h)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => (r._measurement == "cpu" or r._measurement == "mem")
    and r.host == "serverA")
    Predicate Function
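
    Since a predicate is just a Flux function, it can also be bound to a name and reused — a minimal sketch (the isServerA name is illustrative, not from the slides):

    // name the predicate, then hand it to filter
    isServerA = (r) => (r._measurement == "cpu" or r._measurement == "mem") and r.host == "serverA"
    from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: isServerA)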


  26. // variables
    some_int = 23


  27. // variables
    some_int = 23
    some_float = 23.2


  28. // variables
    some_int = 23
    some_float = 23.2
    some_string = "cpu"


  29. // variables
    some_int = 23
    some_float = 23.2
    some_string = "cpu"
    some_duration = 1h


  30. // variables
    some_int = 23
    some_float = 23.2
    some_string = "cpu"
    some_duration = 1h
    some_time = 2018-10-10T19:00:00Z


  31. // variables
    some_int = 23
    some_float = 23.2
    some_string = "cpu"
    some_duration = 1h
    some_time = 2018-10-10T19:00:00Z
    some_array = [1, 6, 20, 22]


  32. // variables
    some_int = 23
    some_float = 23.2
    some_string = "cpu"
    some_duration = 1h
    some_time = 2018-10-10T19:00:00Z
    some_array = [1, 6, 20, 22]
    some_object = {foo: "hello", bar: 22}
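
    As a rough sketch of how these fit together, variables can parameterize a query (lookback, threshold, and some_field are illustrative names):

    lookback = -1h
    threshold = 80.0
    some_field = "usage_system"
    from(bucket: "telegraf/autogen")
    |> range(start: lookback)
    |> filter(fn: (r) => r._field == some_field and r._value > threshold)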


  33. Data Model & Working with
    Tables


  34. Example Series
    _measurement=mem,host=A,region=west,_field=free
    _measurement=mem,host=B,region=west,_field=free
    _measurement=cpu,host=A,region=west,_field=usage_system
    _measurement=cpu,host=A,region=west,_field=usage_user


  35. Example Series
    _measurement=mem,host=A,region=west,_field=free
    _measurement=mem,host=B,region=west,_field=free
    _measurement=cpu,host=A,region=west,_field=usage_system
    _measurement=cpu,host=A,region=west,_field=usage_user
    Measurement


  36. Example Series
    _measurement=mem,host=A,region=west,_field=free
    _measurement=mem,host=B,region=west,_field=free
    _measurement=cpu,host=A,region=west,_field=usage_system
    _measurement=cpu,host=A,region=west,_field=usage_user
    Field


  37. Table
    _measurement host region _field _time _value
    mem A west free 2018-06-14T09:15:00 10
    mem A west free 2018-06-14T09:14:50 10


  38. _measurement host region _field _time _value
    mem A west free 2018-06-14T09:15:00 10
    mem A west free 2018-06-14T09:14:50 10
    Column


  39. _measurement host region _field _time _value
    mem A west free 2018-06-14T09:15:00 10
    mem A west free 2018-06-14T09:14:50 10
    Record


  40. _measurement host region _field _time _value
    mem A west free 2018-06-14T09:15:00 10
    mem A west free 2018-06-14T09:14:50 10
    Group Key
    _measurement=mem,host=A,region=west,_field=free


  41. _measurement host region _field _time _value
    mem A west free 2018-06-14T09:15:00 10
    mem A west free 2018-06-14T09:14:50 10
    Every record has
    the same value!
    _measurement=mem,host=A,region=west,_field=free


  42. Table Per Series
    _measurement host region _field _time _value
    mem A west free 2018-06-14T09:15:00 10
    mem A west free 2018-06-14T09:14:50 11
    _measurement host region _field _time _value
    mem B west free 2018-06-14T09:15:00 20
    mem B west free 2018-06-14T09:14:50 22
    _measurement host region _field _time _value
    cpu A west usage_user 2018-06-14T09:15:00 45
    cpu A west usage_user 2018-06-14T09:14:50 49
    _measurement host region _field _time _value
    cpu A west usage_system 2018-06-14T09:15:00 35
    cpu A west usage_system 2018-06-14T09:14:50 38


  43. input tables -> function -> output tables


  44. input tables -> function -> output tables
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:50Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> sum()


  45. input tables -> function -> output tables
    What to sum on?
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:50Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> sum()


  46. input tables -> function -> output tables
    Default columns argument
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:50Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> sum(columns: ["_value"])


  47. input tables -> function -> output tables
    _measurement host region _field _time               _value
    mem          A    west   free   2018-06-14T09:15:00 10
    mem          A    west   free   2018-06-14T09:14:50 11
    _measurement host region _field _time               _value
    mem          B    west   free   2018-06-14T09:15:00 20
    mem          B    west   free   2018-06-14T09:14:50 22
    Input in table form
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:50Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> sum()


  48. input tables -> function -> output tables
    _measurement host region _field _time               _value
    mem          A    west   free   2018-06-14T09:15:00 10
    mem          A    west   free   2018-06-14T09:14:50 11
    _measurement host region _field _time               _value
    mem          B    west   free   2018-06-14T09:15:00 20
    mem          B    west   free   2018-06-14T09:14:50 22
    sum()
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:50Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> sum()


  49. input tables -> function -> output tables
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:50Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> sum()
    Input tables:
    _measurement host region _field _time               _value
    mem          A    west   free   2018-06-14T09:15:00 10
    mem          A    west   free   2018-06-14T09:14:50 11
    _measurement host region _field _time               _value
    mem          B    west   free   2018-06-14T09:15:00 20
    mem          B    west   free   2018-06-14T09:14:50 22
    sum()
    Output tables:
    _measurement host region _field _time               _value
    mem          A    west   free   2018-06-14T09:15:00 21
    _measurement host region _field _time               _value
    mem          B    west   free   2018-06-14T09:15:00 42


  50. N to N table mapping
    (1 to 1 mapping)
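
    For example, a transformation like derivative consumes each input table and emits exactly one output table — a hedged sketch, assuming derivative's unit argument:

    // N series tables in, N derivative tables out (1 to 1)
    from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
    |> derivative(unit: 10s)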


  51. N to M table mapping


  52. window
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:30Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> window(every:20s)
    30s of data (4 samples)


  53. window
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:30Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> window(every:20s)
    split into 20s windows


  54. window
    _measurement host region _field _time  _value
    mem          A    west   free   …14:30 10
    mem          A    west   free   …14:40 11
    mem          A    west   free   …14:50 12
    mem          A    west   free   …15:00 13
    _measurement host region _field _time  _value
    mem          B    west   free   …14:30 20
    mem          B    west   free   …14:40 22
    mem          B    west   free   …14:50 23
    mem          B    west   free   …15:00 24
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:30Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> window(every:20s)
    Input


  55. window
    _measurement host region _field _time  _value
    mem          A    west   free   …14:30 10
    mem          A    west   free   …14:40 11
    mem          A    west   free   …14:50 12
    mem          A    west   free   …15:00 13
    _measurement host region _field _time  _value
    mem          B    west   free   …14:30 20
    mem          B    west   free   …14:40 22
    mem          B    west   free   …14:50 23
    mem          B    west   free   …15:00 24
    window(every:20s)
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:30Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> window(every:20s)


  56. window
    _measurement host region _field _time  _value
    mem          A    west   free   …14:30 10
    mem          A    west   free   …14:40 11
    mem          A    west   free   …14:50 12
    mem          A    west   free   …15:00 13
    _measurement host region _field _time  _value
    mem          B    west   free   …14:30 20
    mem          B    west   free   …14:40 22
    mem          B    west   free   …14:50 23
    mem          B    west   free   …15:00 24
    window(every:20s)
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:30Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> window(every:20s)
    Output:
    _measurement host region _field _time  _value
    mem          A    west   free   …14:30 10
    mem          A    west   free   …14:40 11
    _measurement host region _field _time  _value
    mem          A    west   free   …14:50 12
    mem          A    west   free   …15:00 13
    _measurement host region _field _time  _value
    mem          B    west   free   …14:30 20
    mem          B    west   free   …14:40 22
    _measurement host region _field _time  _value
    mem          B    west   free   …14:50 23
    mem          B    west   free   …15:00 24

  57. window
    _measurement host region _field _time  _value
    mem          A    west   free   …14:30 10
    mem          A    west   free   …14:40 11
    mem          A    west   free   …14:50 12
    mem          A    west   free   …15:00 13
    _measurement host region _field _time  _value
    mem          B    west   free   …14:30 20
    mem          B    west   free   …14:40 22
    mem          B    west   free   …14:50 23
    mem          B    west   free   …15:00 24
    window(every:20s)
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:30Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> window(every:20s)
    Output:
    _measurement host region _field _time  _value
    mem          A    west   free   …14:30 10
    mem          A    west   free   …14:40 11
    _measurement host region _field _time  _value
    mem          A    west   free   …14:50 12
    mem          A    west   free   …15:00 13
    _measurement host region _field _time  _value
    mem          B    west   free   …14:30 20
    mem          B    west   free   …14:40 22
    _measurement host region _field _time  _value
    mem          B    west   free   …14:50 23
    mem          B    west   free   …15:00 24
    N to M tables


  58. Window based on time
    _start and _stop columns
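
    Those _start and _stop columns are what let a windowed aggregate be stitched back into one table per series — a sketch of the common pattern, assuming duplicate() and window(every: inf) behave as documented:

    from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
    |> window(every: 1m)
    |> mean()
    // promote each window's _stop to _time, then collapse the windows again
    |> duplicate(column: "_stop", as: "_time")
    |> window(every: inf)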


  59. group
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:30Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> group(keys:["region"])


  60. group
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:30Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> group(keys:["region"])
    new group key


  61. group
    _measurement host region _field _time  _value
    mem          A    west   free   …14:30 10
    mem          A    west   free   …14:40 11
    mem          A    west   free   …14:50 12
    mem          A    west   free   …15:00 13
    _measurement host region _field _time  _value
    mem          B    west   free   …14:30 20
    mem          B    west   free   …14:40 22
    mem          B    west   free   …14:50 23
    mem          B    west   free   …15:00 24
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:30Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> group(keys:["region"])


  62. group
    _measurement host region _field _time  _value
    mem          A    west   free   …14:30 10
    mem          A    west   free   …14:40 11
    mem          A    west   free   …14:50 12
    mem          A    west   free   …15:00 13
    _measurement host region _field _time  _value
    mem          B    west   free   …14:30 20
    mem          B    west   free   …14:40 22
    mem          B    west   free   …14:50 23
    mem          B    west   free   …15:00 24
    group(keys:["region"])
    // example query
    from(db:"telegraf")
    |> range(start:2018-06-14T09:14:30Z, end:2018-06-14T09:15:01Z)
    |> filter(fn: (r) => r._measurement == "mem" and
    r._field == "free")
    |> group(keys:["region"])
    Output:
    _measurement host region _field _time  _value
    mem          A    west   free   …14:30 10
    mem          B    west   free   …14:30 20
    mem          A    west   free   …14:40 11
    mem          B    west   free   …14:40 22
    mem          A    west   free   …14:50 12
    mem          B    west   free   …14:50 23
    mem          A    west   free   …15:00 13
    mem          B    west   free   …15:00 24
    N to M tables
    M == cardinality(group keys)


  63. Group based on columns
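
    Grouping is usually followed by an aggregate, producing one result per group — a minimal sketch using the keys argument from the earlier slides:

    // average free memory per region
    from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
    |> group(keys: ["region"])
    |> mean()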


  64. Flux Design Principles


  65. Make Everyone a Data
    Programmer!


  66. Contributable


  67. Functions Overview


  68. Inputs
    from, fromKafka, fromFile, fromS3, fromPrometheus, fromMySQL, etc.


  69. Flux != InfluxDB


  70. Follow Telegraf Model


  71. import "mysql"
    customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
    query: "select id, name from customers")
    data = from(bucket: "my_data")
    |> range(start: -4h)
    |> filter(fn: (r) => r._measurement == "write_requests")
    |> rename(columns: {customer_id: "id"})
    join(tables: {customers, data}, on: ["id"])
    |> yield(name: "results")


  72. import "mysql"
    customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
    query: "select id, name from customers")
    data = from(bucket: "my_data")
    |> range(start: -4h)
    |> filter(fn: (r) => r._measurement == "write_requests")
    |> rename(columns: {customer_id: "id"})
    join(tables: {customers, data}, on: ["id"])
    |> yield(name: "results")
    Imports for sharing code!


  73. import "mysql"
    customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
    query: "select id, name from customers")
    data = from(bucket: "my_data")
    |> range(start: -4h)
    |> filter(fn: (r) => r._measurement == "write_requests")
    |> rename(columns: {customer_id: "id"})
    join(tables: {customers, data}, on: ["id"])
    |> yield(name: "results")
    Pulling data from a non-InfluxDB source


  74. import "mysql"
    customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
    query: "select id, name from customers")
    data = from(bucket: "my_data")
    |> range(start: -4h)
    |> filter(fn: (r) => r._measurement == "write_requests")
    |> rename(columns: {customer_id: "id"})
    join(tables: {customers, data}, on: ["id"])
    |> yield(name: "results")
    Raw query (for now)


  75. import "mysql"
    customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
    query: "select id, name from customers")
    data = from(bucket: "my_data")
    |> range(start: -4h)
    |> filter(fn: (r) => r._measurement == "write_requests")
    |> rename(columns: {customer_id: "id"})
    join(tables: {customers, data}, on: ["id"])
    |> yield(name: "results")
    Loading Secret


  76. import "mysql"
    customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
    query: "select id, name from customers")
    data = from(bucket: "my_data")
    |> range(start: -4h)
    |> filter(fn: (r) => r._measurement == "write_requests")
    |> rename(columns: {customer_id: "id"})
    join(tables: {customers, data}, on: ["id"])
    |> yield(name: "results")
    Renaming & Shaping Data


  77. import "mysql"
    customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
    query: "select id, name from customers")
    data = from(bucket: "my_data")
    |> range(start: -4h)
    |> filter(fn: (r) => r._measurement == "write_requests")
    |> rename(columns: {customer_id: "id"})
    join(tables: {customers, data}, on: ["id"])
    |> yield(name: "results")
    Join on any column


  78. Outputs
    to, toKafka, toFile, toS3, toPrometheus, toMySQL, etc.


  79. Outputs are for Tasks


  80. option task = {
    name: "Alert on disk",
    every: 5m,
    }
    crit = 90 // alert at this percentage
    warn = 80 // warn at this percentage
    data = from(bucket: "telegraf/autogen")
    |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
    |> last()
    data |> filter(fn: (r) => r._value > crit)
    |> addColumn(key: "level", value: "critical")
    |> addColumn(key: "alert", value: task.name)
    |> to(bucket: "alerts")
    data |> filter(fn: (r) => r._value > warn && r._value < crit)
    |> addColumn(key: "level", value: "warn")
    |> to(bucket: "alerts")


  81. option task = {
    name: "Alert on disk",
    every: 5m,
    }
    crit = 90 // alert at this percentage
    warn = 80 // warn at this percentage
    data = from(bucket: "telegraf/autogen")
    |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
    |> last()
    data |> filter(fn: (r) => r._value > crit)
    |> addColumn(key: "level", value: "critical")
    |> addColumn(key: "alert", value: task.name)
    |> to(bucket: "alerts")
    data |> filter(fn: (r) => r._value > warn && r._value < crit)
    |> addColumn(key: "level", value: "warn")
    |> to(bucket: "alerts")
    Option syntax for tasks


  82. option task = {
    name: "Alert on disk",
    every: 5m,
    }
    crit = 90 // alert at this percentage
    warn = 80 // warn at this percentage
    data = from(bucket: "telegraf/autogen")
    |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
    |> last()
    data |> filter(fn: (r) => r._value > crit)
    |> addColumn(key: "level", value: "critical")
    |> addColumn(key: "alert", value: task.name)
    |> to(bucket: "alerts")
    data |> filter(fn: (r) => r._value > warn && r._value < crit)
    |> addColumn(key: "level", value: "warn")
    |> to(bucket: "alerts")
    Get at the last value without specifying time range


  83. option task = {
    name: "Alert on disk",
    every: 5m,
    }
    crit = 90 // alert at this percentage
    warn = 80 // warn at this percentage
    data = from(bucket: "telegraf/autogen")
    |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
    |> last()
    data |> filter(fn: (r) => r._value > crit)
    |> addColumn(key: "level", value: "critical")
    |> addColumn(key: "alert", value: task.name)
    |> to(bucket: "alerts")
    data |> filter(fn: (r) => r._value > warn && r._value < crit)
    |> addColumn(key: "level", value: "warn")
    |> to(bucket: "alerts")
    Adding a column to decorate the data


  84. option task = {
    name: "Alert on disk",
    every: 5m,
    }
    crit = 90 // alert at this percentage
    warn = 80 // warn at this percentage
    data = from(bucket: "telegraf/autogen")
    |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
    |> last()
    data |> filter(fn: (r) => r._value > crit)
    |> addColumn(key: "level", value: "critical")
    |> addColumn(key: "alert", value: task.name)
    |> to(bucket: "alerts")
    data |> filter(fn: (r) => r._value > warn && r._value < crit)
    |> addColumn(key: "level", value: "warn")
    |> to(bucket: "alerts")
    to() writes to the local InfluxDB


  85. Separate Alerts From
    Notifications!


  86. option task = {name: "slack critical alerts", every: 1m}
    import "slack"
    lastNotificationTime = from(bucket: "notifications")
    |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
    |> group(none:true)
    |> last()
    |> recordValue(column:"_value")
    from(bucket: "alerts")
    |> range(start: lastNotificationTime)
    |> filter(fn: (r) => r.level == "critical")
    // shape the alert data to what we care about in notifications
    |> renameColumn(from: "_time", to: "alert_time")
    |> renameColumn(from: "_value", to: "used_percent")
    // set the time the notification is being sent
    |> addColumn(key: "_time", value: now())
    // get rid of unneeded columns
    |> drop(columns: ["_start", "_stop"])
    // write the message
    |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
    |> slack.to(config: loadSecret(name: "slack_alert_config"), message: "_value")
    |> to(bucket: "notifications")


  87. option task = {name: "slack critical alerts", every: 1m}
    import "slack"
    lastNotificationTime = from(bucket: "notifications")
    |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
    |> group(none:true)
    |> last()
    |> recordValue(column:"_value")
    from(bucket: "alerts")
    |> range(start: lastNotificationTime)
    |> filter(fn: (r) => r.level == "critical")
    // shape the alert data to what we care about in notifications
    |> renameColumn(from: "_time", to: "alert_time")
    |> renameColumn(from: "_value", to: "used_percent")
    // set the time the notification is being sent
    |> addColumn(key: "_time", value: now())
    // get rid of unneeded columns
    |> drop(columns: ["_start", "_stop"])
    // write the message
    |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
    |> slack.to(config: loadSecret(name: "slack_alert"))
    |> to(bucket: "notifications")
    We have state so we don’t resend


  88. option task = {name: "slack critical alerts", every: 1m}
    import "slack"
    lastNotificationTime = from(bucket: "notifications")
    |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
    |> group(none:true)
    |> last()
    |> recordValue(column:"_value")
    from(bucket: "alerts")
    |> range(start: lastNotificationTime)
    |> filter(fn: (r) => r.level == "critical")
    // shape the alert data to what we care about in notifications
    |> renameColumn(from: "_time", to: "alert_time")
    |> renameColumn(from: "_value", to: "used_percent")
    // set the time the notification is being sent
    |> addColumn(key: "_time", value: now())
    // get rid of unneeded columns
    |> drop(columns: ["_start", "_stop"])
    // write the message
    |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
    |> slack.to(config: loadSecret(name: "slack_alert"))
    |> to(bucket: "notifications")
    Use last time as argument to range


  89. option task = {name: "slack critical alerts", every: 1m}
    import "slack"
    lastNotificationTime = from(bucket: "notifications")
    |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
    |> group(none:true)
    |> last()
    |> recordValue(column:"_value")
    from(bucket: "alerts")
    |> range(start: lastNotificationTime)
    |> filter(fn: (r) => r.level == "critical")
    // shape the alert data to what we care about in notifications
    |> renameColumn(from: "_time", to: "alert_time")
    |> renameColumn(from: "_value", to: "used_percent")
    // set the time the notification is being sent
    |> addColumn(key: "_time", value: now())
    // get rid of unneeded columns
    |> drop(columns: ["_start", "_stop"])
    // write the message
    |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
    |> slack.to(config: loadSecret(name: "slack_alert"))
    |> to(bucket: "notifications")
    Now function for current time


  90. option task = {name: "slack critical alerts", every: 1m}
    import "slack"
    lastNotificationTime = from(bucket: "notifications")
    |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
    |> group(none:true)
    |> last()
    |> recordValue(column:"_value")
    from(bucket: "alerts")
    |> range(start: lastNotificationTime)
    |> filter(fn: (r) => r.level == "critical")
    // shape the alert data to what we care about in notifications
    |> renameColumn(from: "_time", to: "alert_time")
    |> renameColumn(from: "_value", to: "used_percent")
    // set the time the notification is being sent
    |> addColumn(key: "_time", value: now())
    // get rid of unneeded columns
    |> drop(columns: ["_start", "_stop"])
    // write the message
    |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
    |> slack.to(config: loadSecret(name: "slack_alert"))
    |> to(bucket: "notifications")
    Map function to iterate
    over values


  91. option task = {name: "slack critical alerts", every: 1m}
    import "slack"
    lastNotificationTime = from(bucket: "notifications")
    |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
    |> group(none:true)
    |> last()
    |> recordValue(column:"_value")
    from(bucket: "alerts")
    |> range(start: lastNotificationTime)
    |> filter(fn: (r) => r.level == "critical")
    // shape the alert data to what we care about in notifications
    |> renameColumn(from: "_time", to: "alert_time")
    |> renameColumn(from: "_value", to: "used_percent")
    // set the time the notification is being sent
    |> addColumn(key: "_time", value: now())
    // get rid of unneeded columns
    |> drop(columns: ["_start", "_stop"])
    // write the message
    |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
    |> slack.to(config: loadSecret(name: "slack_alert"))
    |> to(bucket: "notifications")
    String interpolation


  92. option task = {name: "slack critical alerts", every: 1m}
    import "slack"
    lastNotificationTime = from(bucket: "notifications")
    |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
    |> group(none:true)
    |> last()
    |> recordValue(column:"_value")
    from(bucket: "alerts")
    |> range(start: lastNotificationTime)
    |> filter(fn: (r) => r.level == "critical")
    // shape the alert data to what we care about in notifications
    |> renameColumn(from: "_time", to: "alert_time")
    |> renameColumn(from: "_value", to: "used_percent")
    // set the time the notification is being sent
    |> addColumn(key: "_time", value: now())
    // get rid of unneeded columns
    |> drop(columns: ["_start", "_stop"])
    // write the message
    |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
    |> slack.to(config: loadSecret(name: "slack_alert"))
    |> to(bucket: "notifications")
    Send to Slack and
    record in InfluxDB


  93. option task = {
    name: "email alert digest",
    cron: "0 5 * * 0"
    }
    import "smtp"
    body = ""
    from(bucket: "alerts")
    |> range(start: -24h)
    |> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
    |> group(by: ["alert"])
    |> count()
    |> group(none: true)
    |> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
    smtp.to(
    config: loadSecret(name: "smtp_digest"),
    to: "[email protected]",
    title: "Alert digest for {now()}",
    body: body)


  94. option task = {
    name: "email alert digest",
    cron: "0 5 * * 0"
    }
    import "smtp"
    body = ""
    from(bucket: "alerts")
    |> range(start: -24h)
    |> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
    |> group(by: ["alert"])
    |> count()
    |> group(none: true)
    |> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
    smtp.to(
    config: loadSecret(name: "smtp_digest"),
    to: "[email protected]",
    title: "Alert digest for {now()}",
    body: body)
    Cron syntax


  95. option task = {
    name: "email alert digest",
    cron: "0 5 * * 0"
    }
    import "smtp"
    body = ""
    from(bucket: "alerts")
    |> range(start: -24h)
    |> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
    |> group(by: ["alert"])
    |> count()
    |> group(none: true)
    |> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
    smtp.to(
    config: loadSecret(name: "smtp_digest"),
    to: "[email protected]",
    title: "Alert digest for {now()}",
    body: body)
    Closures


  96. Tasks run logs
    (just another time series)
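
    So a task's run history can be queried with the same primitives — a hypothetical sketch (the bucket and tag names are assumptions, not a documented API):

    // count recent runs of one task from a hypothetical system bucket
    from(bucket: "system/task_logs")
    |> range(start: -1d)
    |> filter(fn: (r) => r.task_name == "Alert on disk")
    |> count()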


  97. UI will hide complexity


  98. Built on top of primitives


  99. API for Defining Dashboards


  100. Bulk Import & Export
    Specify bucket, range, predicate


  101. Same API in OSS, Cloud, and
    Enterprise


  102. Thank you.
    Paul Dix

    @pauldix

    paul@influxdata.com
