Developer tooling for Kubernetes configurations

Talk from KubeCon in Austin about learning from other programming environments and using tools to validate and test your configurations, whether you're writing them by hand or using higher-level tools like ksonnet and Helm.

Gareth Rushgrove

December 07, 2017

Transcript

  1. Gareth Rushgrove
    Developer tooling for
    Kubernetes configuration
    Tools for developer-friendly workflows

  3. @garethr

  4. - Infrastructure as code
    - Validation, linting and unit testing
    - Demos and examples

  5. Testing infrastructure
    as code
    A bit of useful history

  6. Infrastructure as code is the process
    of managing and provisioning
    computers through machine-readable
    definition files, rather than physical
    hardware configuration or interactive
    configuration tools
    https://en.wikipedia.org/wiki/Infrastructure_as_Code

  7. rspec-puppet - originally written in 2011

  8. Examples span communities
    ChefSpec, Puppet Lint, Puppet Syntax,
    Foodcritic, Cookstyle

  9. Why test a declarative configuration?

  10. Increasingly configuration:
    - Contains logic
    - Takes arguments
    - Interfaces with other configuration
    - Spans multiple versions of software

  11. Those points are from a Puppet talk given four years ago

  12. As the community adopts higher-level
    tools like Helm, ksonnet, Compose,
    Kedge, Kapitan, etc., this becomes
    more relevant to Kubernetes users

  13. Take bitnami/kube-manifests as
    an example. It contains 7000+ lines
    of JSON in 159 files generated from
    2000 lines of Jsonnet.

  14. Sidenote
    Brian Grant found 43 higher-level
    tools for managing K8S configurations

  15. Relevant discussion in the App Def Working Group

  16. I posit that the lessons learnt applying
    testing practices to infrastructure as
    code apply to Kubernetes configs

  17. Widely adopted tools break down into
    - Validation
    - Linting
    - Unit testing
    - Acceptance testing

  18. What would tools to address these
    problems look like for Kubernetes?

  19. Validation
    Using the type schemas when writing configs

  20. Is this a valid Kubernetes configuration file?
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      ports:
      - port: 6379
        targetPort: "6379"
      selector:
        app: redis
        role: master1
        tier: backend

  21. Is this a valid Kubernetes configuration file?
    apiVersion: v1
    kind: ReplicationController
    spec:
      replicas: none
      selector:
        app: nginx
        loadbalancer: lb-1
      templates:
        name: nginx
        labels: backend
        app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx

  22. Is this Helm template valid for Kubernetes?
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ template "fullname" . }}
      labels:
        app: {{ template "fullname" . }}
        chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
        release: "{{ .Release.Name }}"
        heritage: "{{ .Release.Service }}"
    spec:
      ports:
      - name: memcache
        port: 11211
        targetPort: memcache
      selector:
        app: {{ template "fullname" . }}

  23. Is this Puppet code valid for Kubernetes?
    kubernetes_pod { 'sample-pod':
      ensure   => present,
      metadata => {
        namespace => 'default',
      },
      spec     => {
        containers => [{
          name  => 'container-name',
          image => 'nginx',
        }]
      },
    }

  24. Kubernetes has a well-defined set of
    API primitives: Pods, Deployments,
    Services, ReplicationControllers, etc.

  25. Kubernetes uses OpenAPI to describe the API

  26. OpenAPI uses JSON Schema internally

  27. Kubernetes JSON Schema

  28. That’s a lot of JSON
    PS> (Get-Content -Path swagger.json | Measure-Object -line).Lines
    85340
    PS> (Get-ChildItem -Path v*/*.json -Recurse | Measure-Object).Count
    26181
    PS> (Get-ChildItem -Path v*/*.json -Recurse | Get-Content | Measure-Object -line).Lines
    7296392

  29. OpenShift JSON Schema

  30. Validate directly using the schemas
    $ jsonschema -F "{error.message}" -i hello-nginx.json 1.6.6-standalone/deployment.json
    u'template' is a required property
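
    The same check can be scripted. Here is a minimal Python sketch using the
    jsonschema library, reusing the file names from the command above and
    assuming the standalone schema has been downloaded locally:

    import json
    import jsonschema

    # Load the downloaded, standalone Kubernetes schema for Deployments
    with open("1.6.6-standalone/deployment.json") as f:
        schema = json.load(f)

    # Load the manifest we want to check
    with open("hello-nginx.json") as f:
        manifest = json.load(f)

    try:
        jsonschema.validate(manifest, schema)
        print("valid")
    except jsonschema.ValidationError as e:
        print(e.message)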

  31. Validation is useful for
    - Catching typos
    - Fast feedback
    - Missing properties
    - Checking against multiple Kubernetes versions

  32. Introducing Kubeval

  33. A nice CLI for validating K8S configurations
    $ kubeval --help
    Validate a Kubernetes YAML file against the relevant schema
    Usage:
    kubeval [file...] [flags]
    Flags:
    -h, --help help for kubeval
    -v, --kubernetes-version string Version of Kubernetes to validate against
    --openshift Use OpenShift schemas instead of upstream Kubernetes
    --schema-location string Base URL used to download schemas. Can also be
    specified with the environment variable
    KUBEVAL_SCHEMA_LOCATION
    --version Display the kubeval version information and exit

  34. Validate YAML files on the command line
    $ kubeval my-invalid-rc.yaml
    The document my-invalid-rc.yaml contains an invalid ReplicationController
    --> spec.replicas: Invalid type. Expected: integer, given: string
    $ echo $?
    1
    $ cat my-invalid-rc.yaml | kubeval
    The document my-invalid-rc.yaml contains an invalid ReplicationController
    --> spec.replicas: Invalid type. Expected: integer, given: string
    $ echo $?
    1

  35. Validate multiple resources in the same file
    $ kubeval multi.yaml
    The document fixtures/multi.yaml contains a valid Service
    The document fixtures/multi.yaml contains an invalid Deployment
    --> spec.template.spec.containers.0.env.0.value: Invalid type. Expected: string, given: integer
    The document fixtures/multi.yaml contains an invalid ReplicationController
    --> spec.replicas: Invalid type. Expected: integer, given: string
    The document fixtures/multi.yaml contains a valid Deployment
    The document fixtures/multi.yaml contains a valid ReplicationController

  36. Validate against multiple versions of Kubernetes
    $ kubeval -v 1.7.9 my-invalid-rc.yaml
    The document my-invalid-rc.yaml contains an invalid ReplicationController
    --> spec.replicas: Invalid type. Expected: integer, given: string
    $ kubeval -v 1.8.1 my-invalid-rc.yaml
    The document my-invalid-rc.yaml contains an invalid ReplicationController
    --> spec.replicas: Invalid type. Expected: integer, given: string

  37. Use as a library in other Go tools
    import (
    "github.com/garethr/kubeval/kubeval"
    )
    results, err := kubeval.Validate(fileContents, fileName)

  38. Unit testing
    Custom business rules for configuration

  39. Validation says something is valid,
    not that it’s what you intended

  40. Lots of teams have a script to
    check certain properties of their
    K8S configurations against
    internal policies

  41. Our internal tooling includes a
    linter that renders Helm charts and
    checks that the resources produced
    pass certain internal rules.

  42. - Podspec needs resource requests
    - Ensure certain labels are set
    - Prevent usage of latest images
    - Prohibit privileged containers
    - Enforce naming conventions
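
    As a sketch of what some of these rules could look like as kubetest tests,
    here are two written in Skylark against the spec dictionary and assert
    helpers shown in the examples later in this deck; the exact rule details
    are illustrative, not real internal policies:

    #// vim: set ft=python:
    def test_resource_requests():
        if spec["kind"] == "Deployment":
            for container in spec["spec"]["template"]["spec"]["containers"]:
                resources = container.get("resources", {})
                assert_contains(resources, "requests", "containers should declare resource requests")

    def test_no_privileged_containers():
        if spec["kind"] == "Deployment":
            for container in spec["spec"]["template"]["spec"]["containers"]:
                context = container.get("securityContext", {})
                assert_not_equal(context.get("privileged", False), True, "containers should not run privileged")

    test_resource_requests()
    test_no_privileged_containers()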

  43. Introducing kubetest

  44. Run tests against your configurations
    $ kubetest rc.yaml --verbose
    INFO rc.yaml should not use latest images
    WARN rc.yaml ReplicationController should have at least 4 replicas

  45. Tests are written in Skylark

  46. Skylark is a dialect of Python. It is
    an untyped dynamic language
    with high-level data types,
    first-class functions with lexical
    scope, and garbage collection

  47. A Skylark interpreter is typically
    embedded within an application
    which may define additional
    domain-specific functions and
    data types beyond those provided
    by the core language

  48. Example tests for kubetest
    #// vim: set ft=python:
    def test_for_latest_image():
        if spec["kind"] == "ReplicationController":
            for container in spec["spec"]["template"]["spec"]["containers"]:
                tag = container["image"].split(":")[-1]
                assert_not_equal(tag, "latest", "should not use latest images")

    def test_minimum_replicas():
        if spec["kind"] == "ReplicationController":
            test = spec["spec"]["replicas"] >= 4
            assert_true(test, "ReplicationController should have at least 4 replicas")

    test_for_latest_image()
    test_minimum_replicas()

  49. Tests enforcing a team label
    #// vim: set ft=python:
    def test_for_team_label():
        if spec["kind"] == "Deployment":
            labels = spec["spec"]["template"]["metadata"]["labels"]
            assert_contains(labels, "team", "should indicate which team owns the deployment")

    test_for_team_label()

  50. Linting
    Building common community assertions

  51. Organisation-specific assertions
    are useful, but require you to
    write the tests yourself

  52. Many assertions are common;
    they are the result of emerging
    community best practice

  53. I’d like to build an out-of-the-box
    experience for kubetest

  54. Integration with Kubernetes tools
    Demos and examples

  55. Using kubeval with Helm
    $ git clone https://github.com/kubernetes/helm.git
    $ cd helm/docs/examples
    $ ls
    alpine nginx README.md
    $ helm template nginx | kubeval
    The document stdin contains a valid Secret
    The document stdin contains a valid ConfigMap
    The document stdin contains a valid Service
    The document stdin contains a valid Pod
    The document stdin contains a valid Deployment
    The document stdin contains a valid Job

  56. Tests for our Helm Chart
    #// vim: set ft=python:
    def test_for_latest_image():
        if spec["kind"] in ["Job", "Deployment"]:
            for container in spec["spec"]["template"]["spec"]["containers"]:
                tag = container["image"].split(":")[-1]
                assert_not_equal(tag, "latest", spec["kind"] + " should not use latest images")

    test_for_latest_image()

  57. Using kubetest with Helm
    $ helm template nginx | kubetest --verbose
    INFO stdin Deployment should not use latest images
    INFO stdin Job should not use latest images

  58. Using kubeval with ksonnet
    $ cat deployment.jsonnet
    local k = import "ksonnet.beta.2/k.libsonnet";
    local deployment = k.apps.v1beta1.deployment;
    local container = deployment.mixin.spec.template.spec.containersType;
    local containerPort = container.portsType;
    // Create nginx container with container port 80 open.
    local nginxContainer =
      container.new("nginx", "nginx:1.13.0") +
      container.ports(containerPort.newNamed("http", 80));
    // Create default Deployment object from nginx container.
    deployment.new("nginx", 5, nginxContainer, {app: "nginx"})

  59. Using kubeval with ksonnet
    $ jsonnet deployment.jsonnet | kubeval
    The document stdin contains a valid Deployment

  60. Using kubetest with ksonnet
    $ jsonnet deployment.jsonnet | kubetest --verbose
    INFO stdin should not use latest images
    WARN stdin ReplicationController should have at least 7 replicas

  61. Using kubeval with Puppet
    $ cat guestbook.pp
    kubernetes_service { 'frontend':
      ensure   => 'present',
      metadata => {
        'labels'    => {'app' => 'guestbook', 'tier' => 'frontend'},
        'namespace' => 'default',
      },
      spec     => {
        'type'     => 'LoadBalancer',
        'ports'    => [
          {'port' => 80, 'protocol' => 'TCP'}
        ],
        'selector' => {
          'app'  => 'guestbook',
          'tier' => 'frontend'
        },
      },
    }

  62. Using kubeval with Puppet
    $ puppet kubernetes convert --manifest guestbook.pp | kubeval
    The document stdin contains a valid ReplicationController
    The document stdin contains a valid Service
    The document stdin contains a valid ReplicationController
    The document stdin contains a valid Service
    The document stdin contains a valid ReplicationController
    The document stdin contains a valid Service

  63. Using kubetest with Puppet
    $ puppet kubernetes convert --manifest guestbook.pp | kubetest --verbose
    INFO stdin should not use latest images
    INFO stdin should not use latest images
    INFO stdin should not use latest images

  64. kubeval is a great tool to validate your
    Kubernetes configuration files as part
    of your CI pipeline

  65. Using kubeval and kubetest in CI
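
    A minimal sketch of such a CI step is simply the commands from the Helm
    examples above run in sequence, for instance:

    $ helm template nginx | kubeval
    $ helm template nginx | kubetest --verbose

    kubeval exits non-zero when a document is invalid (see the exit codes
    earlier), so invalid configuration fails the build.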

  66. Conclusions
    If all you remember is...

  67. The community is still exploring
    ways to describe Kubernetes
    configuration in code

  68. The Configuration Complexity Clock is real

  69. Some of the current tools lend
    themselves to native testing;
    others are just data and templates

  70. Having testing tools that work
    with different configuration
    approaches can make moving
    between tools easier

  71. Lots more interesting opportunities
    around testing Kubernetes configs

  72. And lots of inspiration we can take
    from other communities and tools

  73. Any questions?
    And thanks for listening
