
Prometheus as exposition format for eBPF programs on k8s


Nowadays every application exposes its metrics via an HTTP endpoint that Prometheus can scrape. Recently, this exposition format was included in the OpenMetrics standard of the CNCF. Nevertheless, this very common pattern, by definition, only exposes metrics about the specific application being observed.
This talk presents the idea, and a reference implementation, of a slightly different use case: eBPF programs, running on Kubernetes via a Custom Resource Definition (CRD), serve as a source of information, allowing kernel and application probes to be exposed and collected via a Prometheus endpoint.

Leonardo Di Donato

February 27, 2019


Transcript

  1.

  2. I am Leonardo Di Donato


  3. OBSERVABILITY vs MONITORING
    Observability: insights, deductions, ongoing, fine-grained data.
    Monitoring on just any blackbox data: garbage, a data lake.
    Another really cool buzzword is TRACING: the execution path along the code. It impacts runtime performance, so it is usually disabled or sampled.



  4.


  5. observability


  6.

  7. Towards OpenMetrics.


  8. Metrics? BEFORE PROMETHEUS → AFTER PROMETHEUS








  9.



  10.






  11.

  12. New metric types
    ● counter
    ● gauge
    ● histogram
    ● summary
    ● untyped → unknown
    ● state set
    ● info
    ● gauge histogram
    Format

    Exemplars
    A normal sample line, but without the metric name: a space after the value (or the timestamp, if present), a hash sign, a space, and then the exemplar. Histogram buckets can have them!


  13. Prometheus → OpenMetrics
    # TYPE foo histogram
    foo_bucket{le="0.01"} 0
    foo_bucket{le="0.1"} 8 # {} 0.054
    foo_bucket{le="1"} 10 # {id="9856e"} 0.67
    foo_bucket{le="10"} 17 # {id="12fa8"} 9.8 1520879607.789
    ...
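The exemplar lines above can be produced with straightforward string formatting. A minimal Python sketch (the function name and signature are illustrative, not part of any Prometheus client library):

```python
def bucket_line(name, le, count, exemplar_labels=None,
                exemplar_value=None, exemplar_ts=None):
    """Render one OpenMetrics histogram bucket sample.

    An exemplar is appended after the value: a space, a hash sign,
    a space, the exemplar labelset, its value, and an optional
    timestamp.
    """
    line = '%s_bucket{le="%s"} %d' % (name, le, count)
    if exemplar_value is not None:
        labels = ",".join('%s="%s"' % kv
                          for kv in (exemplar_labels or {}).items())
        line += " # {%s} %s" % (labels, exemplar_value)
        if exemplar_ts is not None:
            line += " %s" % exemplar_ts
    return line

print(bucket_line("foo", "0.01", 0))
print(bucket_line("foo", "0.1", 8, {}, 0.054))
print(bucket_line("foo", "1", 10, {"id": "9856e"}, 0.67))
print(bucket_line("foo", "10", 17, {"id": "12fa8"}, 9.8, 1520879607.789))
```

This reproduces the four sample lines on the slide, including the bucket with an empty exemplar labelset and the one carrying a timestamp.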




  14.

  15. Berkeley Packet Filters


  16.

  17.

  18. Why don’t we make BPF programs look more like YAML? ✌


  19. Did y’all say


  20.

  21. Custom resources: store and retrieve structured data.
    Controllers: control loops continuously trying to converge the actual state to the desired one.
    Shared informers: watches, shared state, a workqueue.
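The controller pattern on this slide can be sketched in a few lines. This is a minimal, illustrative Python model, not client-go or kube-bpf code: `desired`, `actual`, `on_event`, and `reconcile` are hypothetical names standing in for the informer event handlers and the reconcile loop.

```python
import queue

# Desired state, as it would be declared in custom resources.
desired = {"probe-a": {"section": "socket/count_packets"}}
# Actual state of the world (e.g. BPF programs currently loaded).
actual = {}

workqueue = queue.Queue()

def on_event(key):
    """An informer event handler only enqueues a key: work is
    deduplicated and retried from the queue, not done inline."""
    workqueue.put(key)

def reconcile(key):
    """Level-triggered: compare desired vs actual state for this key
    and converge, regardless of which event woke us up."""
    if key in desired:
        actual[key] = desired[key]   # create/update the BPF program
    else:
        actual.pop(key, None)        # delete it

# Simulate the informer observing an add event, then drain the queue.
on_event("probe-a")
while not workqueue.empty():
    reconcile(workqueue.get())

print(actual)
```

The key design point, mirrored here, is that handlers never mutate state directly: everything funnels through the workqueue so the reconcile logic can be retried safely.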


  22.

  23. Doing all the BPF things, with YAML
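A custom resource for this could look roughly like the following manifest. The API group, kind, and field names here are hypothetical, sketched for illustration rather than copied from bpftools/kube-bpf:

```yaml
apiVersion: bpf.example.com/v1alpha1   # hypothetical API group/version
kind: BPFProgram                       # hypothetical kind
metadata:
  name: count-packets
spec:
  # ELF object containing the BPF program and its maps,
  # e.g. compiled from C with clang -target bpf.
  program: <base64-encoded ELF>
  section: socket/count_packets
  # Maps to read and expose as Prometheus metrics.
  exports:
    - map: packets
      metric: test_packets
      type: counter
```

The idea is that the CRD carries both the program to load and a declaration of which BPF maps should be surfaced as metrics, so the controller can wire them to a Prometheus endpoint automatically.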


  24. # HELP test_packets Data coming from packets BPF map
    # TYPE test_packets counter
    test_packets{key="00002",node="127.0.0.1"} 1
    test_packets{key="00006",node="127.0.0.1"} 551
    test_packets{key="00008",node="127.0.0.1"} 1
    test_packets{key="00017",node="127.0.0.1"} 15930
    test_packets{key="00089",node="127.0.0.1"} 9
    test_packets{key="00233",node="127.0.0.1"} 1
    bpftools/kube-bpf
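Rendering a BPF map as a counter family boils down to iterating its key/value pairs and printing one labelled sample per key. A minimal Python sketch of that step (illustrative only, not the project's actual Go implementation):

```python
def render_counter(name, help_text, node, bpf_map):
    """Render a BPF hash map (key -> count) in Prometheus exposition
    format, labelling each sample with its map key and the node the
    program runs on."""
    lines = [
        "# HELP %s %s" % (name, help_text),
        "# TYPE %s counter" % name,
    ]
    for key, value in sorted(bpf_map.items()):
        lines.append('%s{key="%s",node="%s"} %d' % (name, key, node, value))
    return "\n".join(lines)

packets = {"00002": 1, "00006": 551, "00017": 15930}
print(render_counter("test_packets", "Data coming from packets BPF map",
                     "127.0.0.1", packets))
```

With the full map from the slide, this produces exactly the `test_packets` family shown above.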


  25. Any questions?





  26.