Serverless containers … with source-to-image

There will be a future where container workloads and serverless platforms are BFF. An essential building block on the way there is source-to-image: source code is automatically transformed into an executable container image containing all required runtime components. Think of it as a black-box continuous integration server for containerized applications. The talk introduces, showcases, and compares the leading source-to-image open source projects.

Josef Adersberger

November 06, 2019
Transcript

  1. Dr. Josef Adersberger, CTO & Co-Founder QAware
    Serverless containers … with source-to-image
    https://github.com/adersberger/source2image
  2. The evolution of software delivery
    The dark ages: export JAR, upload to deployment server, write a ticket, wait until the application is deployed to a multi-project application server by a far-shore ops team.
    The container era: build application, package with runtime into a container image, push to image registry, deploy to container manager.
    PaaS & serverless heaven: git push + magic happens here.
    Industrialization process: 1. lower change lead time 2. higher quality confidence 3. lower vertical integration
  3. Serverless flavors
    FULL SERVERLESS: git push functions (serverlessy: black-box application runtime and infrastructure resources)
    MILD SERVERLESS: git push something in a container (serverlessy: scale-to-zero, elastic)
    generic container CI/CD pipeline
    Why you might need mild serverless: regulatory compliance, shift-left quality checks & automated tests, complex staging and deployment patterns, decoupling from cloud vendors or immature open source projects.
  4. The anatomy of a mild serverless toolchain
    Source-to-Image workflow: watch for code changes → choose compilation method and base image → compile code, prepare image, inject binaries → deploy image to target container manager.
    It runs in the developer's workspace (aka inventor's workshop) and in the CI/CD pipeline (aka assembly line): static analysis, test automation, staging and promotion, image scanning, ...
    Image builders cover the compile-and-prepare steps in the middle.
  5. With the volkswagen CI plugin you can completely focus on source-to-image
    https://github.com/auchenberg/volkswagen
  6. The source-to-image challengers
    WORKFLOW TOOLS (inner loop & outer loop):
    • Skaffold (https://skaffold.dev)
    • Tilt (https://tilt.dev)
    • Garden (https://garden.io)
    BUILDER TOOLS:
    • OpenShift Source2Image (https://github.com/openshift/source-to-image)
    • buildpacks.io (https://buildpacks.io)
    • Draft (https://draft.sh)
    • Jib (https://github.com/GoogleContainerTools/jib)
  7. # install pack tool (buildpack reference implementation)
    brew tap buildpack/tap
    brew install pack
    # get suggested builders for sample application
    # (command filled in from the pack CLI; it was missing on the slide)
    pack suggest-builders
    # build image for sample application
    # (image name and builder taken from the later Skaffold slide)
    pack build skaffold-example-god --builder cloudfoundry/cnb:bionic
  8. Buildpack internals
    Builder Image (e.g. heroku/buildpacks or cloudfoundry/cnb:bionic):
    • Stack: Build Base Image + Run Base Image
    • Lifecycle: Detection → Analysis → Build → Export
    • Buildpack 1 … Buildpack n, each contributing bin/detect and bin/build (sketched below)
    App Image: Runtime Layer + Dependency Layer + App Layer
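
To make the bin/detect / bin/build contract concrete, here is a minimal buildpack sketch (assuming the 2019-era Cloud Native Buildpacks API; the Maven detection and layer layout are illustrative, not taken from the talk):

#!/bin/bash
# bin/detect — exit 0 if this buildpack applies to the source tree
[[ -f pom.xml ]] || exit 100   # only handle Maven projects
exit 0

#!/bin/bash
# bin/build — build the app and contribute a launch layer; $1 is the layers directory
set -e
layers_dir="$1"
mkdir -p "$layers_dir/app"
echo "launch = true" > "$layers_dir/app.toml"   # include this layer in the app image
mvn -q package
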
  9. # install s2i
    brew install source-to-image
    # get and build source2image builder for Spring Boot & Java
    git clone https://github.com/ganrad/openshift-s2i-springboot-java.git
    cd openshift-s2i-springboot-java   # build the builder image from inside the clone
    docker build --build-arg MAVEN_VER=3.6.2 --build-arg GRADLE_VER=5.6.3 -t springboot-java .
    cd ..   # back to the application directory
    # build image for sample application
    s2i build --incremental=true . springboot-java skaffold-example-god
  10. S2I internals
    BUILDER IMAGE with pre-defined scripts (both sketched below):
    • assemble — builds the application artifacts from source and places them into the appropriate directories inside the app image
    • run — executes the application (entrypoint of the app image)
    APP IMAGE: Build Base Image → Runtime Layer + Build Layer + Artifact Layer
    CLI tool: s2i; entrypoint: run
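
For illustration, a minimal sketch of those two scripts for a Java builder image (a sketch only: /tmp/src follows the common s2i source-injection convention, and /deployments is an assumed target directory, not taken from the referenced repository):

#!/bin/bash
# assemble — compile the injected source and stage the artifact in the app image
set -e
cd /tmp/src            # s2i injects the application source here by convention
mvn -q package
cp target/*.jar /deployments/app.jar

#!/bin/bash
# run — the app image entrypoint that starts the application
exec java -jar /deployments/app.jar
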
  11. # install draft along with helm
    brew install kubernetes-helm
    helm init
    brew install azure/draft/draft
    # create draft files for application (Helm chart, draft.toml, Dockerfile)
    draft create
    --> Draft detected Shell (46.149372%)
    --> Could not find a pack for Shell. Trying to find the next likely language match...
    --> Draft detected Batchfile (28.163621%)
    --> Could not find a pack for Batchfile. Trying to find the next likely language match...
    --> Draft detected Java (12.213444%)
    --> Ready to sail
    # build image for sample application and deploy application to k8s
    draft up
    # connect to the application endpoint
    draft connect
  12. Draft internals
    draft.toml (generated by draft create):
    [environments]
      [environments.development]
      name = "god"
      namespace = "default"
      wait = true
      watch = false
      watch-delay = 2
      auto-connect = false
      dockerfile = "Dockerfile"
      chart = ""
    BUILDER HELM CHART: the primary language (java) is detected by GitHub Linguist and mapped to a chart directory by language name.
    APPLICATION HELM CHART: generated by draft create.
  13. # install skaffold
    brew install skaffold
    # build & deploy image (once)
    skaffold run
    # build & deploy image (every time the code changes)
    skaffold dev
    driven by skaffold.yaml — jib variant:
    apiVersion: skaffold/v1beta16
    kind: Config
    build:
      artifacts:
      - image: skaffold-example-god
        context: .
        jib: {}
    deploy:
      kubectl:
        manifests:
        - src/k8s/*.yaml
    custom-builder variant (buildpacks via script):
    apiVersion: skaffold/v1beta16
    kind: Config
    build:
      artifacts:
      - image: skaffold-example-god
        custom:
          buildCommand: ./build-buildpacks.sh
          dependencies:
            paths:
            - .
    deploy:
      kubectl:
        manifests:
        - src/k8s/*.yaml
    build-buildpacks.sh:
    #!/bin/bash
    set -e
    images=$(echo $IMAGES | tr " " "\n")
    for image in $images
    do
      pack build $image --builder cloudfoundry/cnb:bionic
      if $PUSH_IMAGE
      then
        docker push $image
      fi
    done
  14. Builder performance comparison with Skaffold
    Builder                    Time
    s2i (--incremental=true)   1:23m
    Draft                      1:14m
    Buildpacks                 0:42m
    jib                        0:21m
    Median of 3 runs, timed by the "time" command after an initial warming run, with a code change between runs; build and caching behaviour not optimized.
    time skaffold run -f=skaffold-s2i.yml
    time skaffold run -f=skaffold-buildpacks.yml
    time skaffold run -f=skaffold-jib.yml
    time draft up
  15. Builder shootout (lower is better; scores are rank positions)
    Criteria                                               Buildpacks.io   s2i   Draft      Jib
    Speed (lead time to change, image size                 2               4     3          1
      per docker image ls, rebasing)
    Supported application technologies                     2               3     1          4 (k.O. if non-Java)
      (Java, Node.JS, Python, GoLang, ...)
    Auto-detection of application technologies (yes/no)    1               3     1          3
    Maturity / future proof                                3               2     4 (k.O.)   1
    Total                                                  8               12    9 (k.O.)   9
  16. # install Tilt
    brew tap windmilleng/tap
    brew install windmilleng/tap/tilt
    # build & deploy image (with every change)
    tilt up
    driven by Tiltfile (Starlark, a Python dialect):
    # Deploy: tell Tilt what YAMLs to deploy
    k8s_yaml('src/k8s/pod-god.yaml')
    # Build: tell Tilt what images (name) to build from which directories
    docker_build('skaffold-example-god', '.')
    # Watch: tell Tilt how to connect locally (optional)
    k8s_resource('web', port_forwards=8080)
  17. # install Garden
    brew tap garden-io/garden
    brew install garden-cli
    # build & deploy image (once)
    garden build
    # build & deploy image (with every change)
    garden dev
    driven by garden.yml containing garden-defined resource types as abstractions for k8s primitives:
    kind: Project
    name: god-project
    environments:
    - name: local
      providers:
      - name: local-kubernetes
        context: docker-desktop
    ---
    kind: Module
    name: god
    description: God service
    type: container
    services:
    - name: god
      ports:
      - name: http
        containerPort: 8080
      healthCheck:
        httpGet:
          path: /
          port: http
      ingresses:
      - path: /
        port: http
  18. Workflow shootout (lower is better; scores are rank positions)
    Criteria                                                      Skaffold   Tilt   Garden
    Pipeline integratability (as pipeline tasks in Jenkins        1          3      2
      Pipelines, Tekton, build tools; support for container
      testing; deployment options: Helm, Kustomize, kubectl)
    Supported image builders (plain Docker, daemon-less           2          3      1
      builds; builders: Buildpacks, Draft, s2i, Jib)
    Multi-environments (support for multiple environments        1          3      1
      like local, dev, prod)
    Multi-image projects (support for code repositories          1          1      1
      containing multi-image projects)
    Local dev support (local build, local run, build-on-change)   1          1      1
    Maturity / future proof                                       1          2      2
    Total                                                         7          13     8
  19. 5 things:
    1. The way from source to image can be done in a generic way.
    2. If you're doing Java, go for the Google guys: Skaffold and Jib.
    3. If you're polyglot, go for Skaffold and buildpacks.io.
    4. Use the same workflow & builder tool for local builds and CI/CD builds (see the sketch below).
    5. Optimize the change lead time for features and the local round-trip time for developers.
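
To make takeaway 4 concrete, a minimal sketch with Skaffold as the shared tool (both commands appear earlier in the deck; the single-profile, one-file setup is an illustrative assumption):

# developer's inner loop: rebuild & redeploy on every code change
skaffold dev

# the very same skaffold.yaml in the CI/CD pipeline: one-shot build & deploy
skaffold run -f=skaffold.yaml
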
  20. Bonus slide: Change lead time optimization
    1. Use well-architected, security-hardened and minimal base images like:
      a. Google Distroless Images (https://github.com/GoogleContainerTools/distroless)
      b. RedHat Universal Base Images (https://developers.redhat.com/products/rhel/ubi)
    2. Use a Docker daemon-less image builder with aggressive caching (see the sketch below):
      a. Google Kaniko (https://github.com/GoogleContainerTools/kaniko)
      b. Uber Makisu (https://github.com/uber/makisu)
      c. Docker BuildKit (https://github.com/moby/buildkit)
      d. Google Bazel (https://bazel.build)
    3. Use an efficient pipeline orchestrator with task parallelization capabilities:
      a. Tekton (https://tekton.dev)
      b. Argo CD (https://argoproj.github.io/argo-cd)
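
For illustration, a daemon-less Kaniko build run as a container (a sketch using Kaniko's standard executor image and flags; the registry destination is a placeholder):

docker run --rm \
  -v "$(pwd)":/workspace \
  -v "$HOME/.docker/config.json":/kaniko/.docker/config.json \
  gcr.io/kaniko-project/executor:latest \
  --context dir:///workspace \
  --dockerfile /workspace/Dockerfile \
  --destination registry.example.com/skaffold-example-god \
  --cache=true   # reuse cached layers to cut change lead time
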