

Continuous deployment in the Cloud - an overview of current solutions

Laurent Grangeau

November 12, 2019

Transcript

  1. Continuous deployment in the Cloud - an overview of current solutions

     LAURENT GRANGEAU - @laurentgrangeau ADRIEN BLIND - @adrienblind LUDOVIC PIOT - @lpiot
  2. Agenda

     Today, we are going to see how to do CI/CD, from the historical platform (Travis CI) to a more modern pipeline (GitLab CI), and finish with Cloud-native solutions (Azure DevOps). Lots of demos, lots of tools to use! #devoxxma @laurentgrangeau
  3. Nowadays, everything is code, and stored in a single source of truth:

     - pipeline-as-code
     - configuration-as-code
     - infrastructure-as-code

     #devoxxma @laurentgrangeau
  4. Travis-CI is an ‘old-timer’. It’s the historical CI/CD SaaS platform to automate your GitHub repositories:

     - pipeline-as-code
     - on-demand runners
     - Docker capabilities
     - smooth GitHub integration

     #devoxxma @laurentgrangeau
  5. Travis-CI integrates with your GitHub repository

     It reads events happening on your GitHub repo, then applies the actions described in your .travis.yml file. #devoxxma @laurentgrangeau
  6. Every event in the repository is now tracked by Travis-CI.

     And Travis-CI activity is visible directly in GitHub. #devoxxma @laurentgrangeau
  7. Pipeline executions can easily be monitored in the Travis-CI web UI or CLI, or integrated into GitHub. #devoxxma @laurentgrangeau
  8. Our demo application: DockerCoins

     A micro-services app relying on:
     - orchestration
     - auto-discovery

     #devoxxma @laurentgrangeau
  9. Travis-CI pipeline-as-code

     dist: bionic
     sudo: false
     language: generic
     git:
       depth: 5
     jobs:
       include:
         - stage: deploy
           if: branch = master
           script:
             - bash -c 'docker run --volume "$(pwd)/secrets":/secrets --volume "$(pwd)/scripts":/scripts --volume "$(pwd)/k8s-resources":/k8s-resources --env GOOGLE_PROJECT=travis-ci-example-258722 --env GOOGLE_APPLICATION_CREDENTIALS="/secrets/gcp-keyfile.json" --workdir /k8s-resources thegaragebandofit/infra-as-code-tools:gcl270-ter0.12.13-pac1.4.5 /scripts/deploy.sh'
     stages:
       - deploy
     before_install:
       - openssl aes-256-cbc -K $encrypted_f735c1e13263_key -iv $encrypted_f735c1e13263_iv -in ./secrets/gcp-keyfile.json.enc -out ./secrets/gcp-keyfile.json -d

     Managing secrets is an issue! Let’s explain how to do that. #devoxxma @laurentgrangeau
  10. Deployment script

     #!/bin/sh
     echo "Retrieve credentials so that gcloud is able to request the right GCP project…"
     gcloud auth activate-service-account --key-file=/secrets/gcp-keyfile.json --project=travis-ci-example-258722
     echo "Check that a Kubernetes cluster is there…"
     gcloud container clusters list
     echo "Retrieve credentials so that kubectl is able to request the Kubernetes cluster"
     gcloud container clusters get-credentials travis-ci-cluster --zone=us-central1-a
     kubectl get nodes
     echo "Deploy DockerCoins app"
     kubectl apply --recursive=true -f /k8s-resources

     #devoxxma @laurentgrangeau
  11. How to encrypt secrets with Travis-CI

     $ travis login --com
     We need your GitHub login to identify you.
     This information will not be sent to Travis CI, only to api.github.com.
     The password will not be displayed.
     Try running with --github-token or --auto if you don't want to enter your password anyway.
     Username: lpiot
     Password for lpiot: *******
     Two-factor authentication code for lpiot: 787939

     $ travis encrypt-file ./secrets/gcp-keyfile.json --com
     encrypting ./secrets/gcp-keyfile.json for theGarageBandOfIT/Travis-CI-example
     storing result as gcp-keyfile.json.enc
     storing secure env variables for decryption

     Please add the following to your build script (before_install stage in your .travis.yml, for instance):

         openssl aes-256-cbc -K $encrypted_f735c1e13263_key -iv $encrypted_f735c1e13263_iv -in gcp-keyfile.json.enc -out ./secrets/gcp-keyfile.json -d

     Pro Tip: You can add it automatically by running with --add.
     Make sure to add gcp-keyfile.json.enc to the git repository.
     Make sure not to add ./secrets/gcp-keyfile.json to the git repository.
     Commit all changes to your .travis.yml.

     #devoxxma @laurentgrangeau
  12. GitLab CI is a huge challenger to GitHub and its ecosystem.

     It relies on very similar concepts but comes with a more all-inclusive philosophy. For example, it knows how to manage your own Kubernetes cluster. #devoxxma @laurentgrangeau
  13. You can add an already existing cluster…

     • …in the Cloud or on-premise…
     • …or part of a cluster (by playing with RBAC).

     #devoxxma @laurentgrangeau
  14. You can also let Gitlab create and manage its own

    GKE cluster… #devoxxma @laurentgrangeau
  15. … and even a subset of Kubernetes-based utilities such as:

     • service mesh
     • serverless framework
     • PKI
     • data-science notebook environment
     • monitoring
     • any middleware deployable by Helm

     #devoxxma @laurentgrangeau
  16. The pipeline is described in a very similar way to Travis-CI, and so is monitoring. #devoxxma @laurentgrangeau
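As a sketch, a `.gitlab-ci.yml` for an app like DockerCoins could look like this (the image tags, the `k8s-resources/` path, and the reliance on GitLab's built-in registry variables and Kubernetes integration are assumptions, not taken from the deck):

```yaml
# .gitlab-ci.yml — hypothetical sketch
stages:
  - build
  - deploy

build:
  stage: build
  image: docker:19.03
  services:
    - docker:19.03-dind
  script:
    # CI_REGISTRY* variables are injected by GitLab for its built-in registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # Assumes cluster credentials come from GitLab's Kubernetes integration
    - kubectl apply --recursive=true -f k8s-resources/
  only:
    - master
```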
  18. Tekton is composed of many projects:

     - Tekton Operator
     - Tekton Pipelines
     - Tekton Triggers
     - Tekton Dashboard
     - Tekton Catalog

     #devoxxma @laurentgrangeau
  19. Install Tekton Pipelines with the operator and CRDs. This will create five different kinds:

     - Task
     - TaskRun
     - Pipeline
     - PipelineRun
     - PipelineResource

     #devoxxma @laurentgrangeau
  20. Tekton Pipelines are Cloud Native:

     - they run on Kubernetes
     - they have Kubernetes clusters as a first-class type
     - they use containers as their building blocks

     #devoxxma @laurentgrangeau
  21. Tekton Pipelines are Decoupled:

     - one Pipeline can be used to deploy to any k8s cluster
     - the Tasks which make up a Pipeline can easily be run in isolation
     - resources such as git repos can easily be swapped between runs

     #devoxxma @laurentgrangeau
  22. Tekton Pipelines are Typed: the concept of typed resources means that for a resource such as an Image, implementations can easily be swapped out (e.g. building with Kaniko vs. BuildKit). #devoxxma @laurentgrangeau
  23. A Task (or a ClusterTask) is a collection of sequential steps you would want to run as part of your continuous integration flow. A Task will run inside a pod on your cluster.

     A Task declares:
     - Inputs
     - Outputs
     - Steps

     #devoxxma @laurentgrangeau
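As a sketch of such a declaration (using Tekton's v1alpha1 API, current at the time; the task name, parameter, and Kaniko image destination are hypothetical):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-image            # hypothetical name
spec:
  inputs:
    resources:
      - name: source           # a git input resource
        type: git
    params:
      - name: imageTag
        default: latest
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=/workspace/source
        # hypothetical registry/repository
        - --destination=registry.example.com/dockercoins/worker:$(inputs.params.imageTag)
```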
  24. Use the TaskRun resource object to create and run on-cluster

    processes to completion. To create a TaskRun, you must first create a Task which specifies one or more container images that you have implemented to perform and complete a task. A TaskRun runs until all steps have completed or until a failure occurs. #devoxxma @laurentgrangeau
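A matching TaskRun might look like this (a sketch: `build-image` and `dockercoins-git` are hypothetical names for a Task and a PipelineResource):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: build-image-run-1      # hypothetical name
spec:
  taskRef:
    name: build-image          # the Task to execute
  inputs:
    resources:
      - name: source
        resourceRef:
          name: dockercoins-git   # PipelineResource pointing at the repo
    params:
      - name: imageTag
        value: v1
```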
  25. A Pipeline will execute a graph of Tasks. A valid

    Pipeline declaration must include a reference to at least one Task. Pipelines can declare input parameters that must be supplied to the Pipeline during a PipelineRun. #devoxxma @laurentgrangeau
  26. On its own, a Pipeline declares what Tasks to run,

    and the order they run in. To execute the Tasks in the Pipeline, you must create a PipelineRun. Creation of a PipelineRun will trigger the creation of TaskRuns for each Task in your pipeline. #devoxxma @laurentgrangeau
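A sketch of a Pipeline and the PipelineRun that executes it (task and resource names are hypothetical; `runAfter` expresses the ordering in the graph):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  resources:
    - name: source
      type: git
  tasks:
    - name: build
      taskRef:
        name: build-image       # hypothetical Task
      resources:
        inputs:
          - name: source
            resource: source
    - name: deploy
      taskRef:
        name: deploy-app        # hypothetical second Task
      runAfter:
        - build                 # deploy only runs once build has completed
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: build-and-deploy-run-1
spec:
  pipelineRef:
    name: build-and-deploy
  resources:
    - name: source
      resourceRef:
        name: dockercoins-git   # hypothetical PipelineResource
```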
  27. PipelineResources in a pipeline are the set of objects that

    are going to be used as inputs to a Task and can be output by a Task. A Task can have multiple inputs and outputs. #devoxxma @laurentgrangeau
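For example, a git PipelineResource might be declared like this (the name and URL are placeholders):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: dockercoins-git         # hypothetical name
spec:
  type: git
  params:
    - name: url
      value: https://github.com/example/dockercoins.git   # placeholder URL
    - name: revision
      value: master
```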
  28. The build system supports two types of authentication, using Kubernetes'

    first-class Secret types: - kubernetes.io/basic-auth - kubernetes.io/ssh-auth #devoxxma @laurentgrangeau
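A basic-auth Secret might look like this (placeholder credentials; the `tekton.dev/git-0` annotation tells the build system which host the credential applies to):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-basic-auth
  annotations:
    tekton.dev/git-0: https://github.com   # host this credential is scoped to
type: kubernetes.io/basic-auth
stringData:
  username: my-user         # placeholder
  password: my-token        # placeholder (e.g. a personal access token)
```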
  29. Triggers enable users to map fields from an event payload into resource templates. Put another way, this allows events to both model and instantiate themselves as Kubernetes resources. In the case of Pipelines, this makes it easy to encapsulate configuration into PipelineRuns and PipelineResources. #devoxxma @laurentgrangeau
  30. A TriggerTemplate is a resource that can template resources. TriggerTemplates

    have parameters that can be substituted anywhere within the resource template. #devoxxma @laurentgrangeau
  31. As per the name, TriggerBindings bind against events/triggers. TriggerBindings enable

    you to capture fields from an event and store them as parameters. The separation of TriggerBindings from TriggerTemplates was deliberate to encourage reuse between them. #devoxxma @laurentgrangeau
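A sketch of the pair (field names as in the early Tekton Triggers releases; the API group and all names here are assumptions):

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    # Capture fields from the GitHub push event payload
    - name: gitrevision
      value: $(body.head_commit.id)
    - name: gitrepositoryurl
      value: $(body.repository.url)
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: build-trigger-template
spec:
  params:
    - name: gitrevision
    - name: gitrepositoryurl
  resourcetemplates:
    # Each matching event instantiates a fresh PipelineRun
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineRun
      metadata:
        generateName: build-and-deploy-run-
      spec:
        pipelineRef:
          name: build-and-deploy   # hypothetical Pipeline
```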
  32. Jenkins X

     It uses GitOps as a first-class paradigm. It can be deployed on any Kubernetes cluster (AKS/EKS/GKE/IKS/OKE), but also on clusters created with kops, and even on Minishift and Minikube. It uses different projects from different vendors:
     - Prow as the notification manager
     - Kaniko as the builder
     - Tekton as the pipeline-as-code engine
  33. Jenkins X architecture

     Many components:
     - deck (UI)
     - hook (Prow hook, listens for Git webhooks)
     - crier (reports the status of pipeline builds)
     - tide (manages PRs)
     - nexus (artifact registry)
     - chartmuseum (Helm charts)
     - tekton (pipelines)

     #devoxxma @laurentgrangeau
  34. Basically, it creates two repositories on GitHub to manage applications: staging and production. Each deployment is handled with a PR on GitHub. Jenkins X then receives a webhook and validates the PR. #devoxxma @laurentgrangeau
  35. The build system is based on two projects from Google:

    - Skaffold - Kaniko Skaffold is a command line tool that facilitates continuous development for Kubernetes applications. Kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster. #devoxxma @laurentgrangeau
  36. Why both? Spinnaker is only CD (it can integrate with Jenkins, though); you have to hook Spinnaker up to a CI tool. I chose GitHub Actions. #devoxxma @laurentgrangeau
  37. GitHub Actions enables you to create custom software development life

    cycle (SDLC) workflows directly in your GitHub repository. #devoxxma @laurentgrangeau
  38. It is based on YAML files. It has lots of community workflows. It works with any language. #devoxxma @laurentgrangeau
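A minimal workflow file might look like this (a sketch; the image name is a placeholder):

```yaml
# .github/workflows/ci.yml — hypothetical sketch
name: CI
on:
  pull_request:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Build image
        # placeholder image name, tagged with the commit SHA
        run: docker build -t dockercoins/worker:${{ github.sha }} .
```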
  39. It can be launched for every PR. You can edit the actions directly on GitHub. #devoxxma @laurentgrangeau
  40. It can then be manipulated with GitHub Actions. We can also send it in an environment variable. #devoxxma @laurentgrangeau
  41. With Halyard, you have to enable:

     - pipeline-as-code
     - Kubernetes manifests
     - Kayenta

     With these features enabled, you can start your deployments. #devoxxma @laurentgrangeau
  42. You have to create every service account you want to use. For example, you have to create a service account for:

     - Docker Hub
     - GitHub
     - the Kubernetes cluster

     #devoxxma @laurentgrangeau
  43. Azure DevOps Server is a Microsoft product that provides version control, reporting, requirements management, project management, automated builds, lab management, testing and release management capabilities. It covers the entire application lifecycle and enables DevOps capabilities. It’s based on Team Foundation Server, and Azure DevOps is Microsoft’s online DevOps tooling. #devoxxma @laurentgrangeau
  44. In order to use the Kubernetes integration, you first have to enable the Multi-stage pipelines feature. From there, you can start doing CD with a YAML file. #devoxxma @laurentgrangeau
  45. You can then create an “azure-pipelines.yml” file which contains all your steps. For the build part, you can use the Docker task to build and push your image. For the deploy part, you can use the Kubernetes task and then manipulate your cluster. #devoxxma @laurentgrangeau
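A sketch of such a file (the service connection names are assumptions; `Docker@2` and `KubernetesManifest@0` are the build and deploy tasks referred to above):

```yaml
# azure-pipelines.yml — hypothetical sketch
trigger:
  - master
stages:
  - stage: Build
    jobs:
      - job: Build
        pool:
          vmImage: ubuntu-16.04
        steps:
          - task: Docker@2
            inputs:
              containerRegistry: my-acr-connection   # hypothetical service connection
              repository: dockercoins/worker         # placeholder image repository
              command: buildAndPush
              tags: $(Build.BuildId)
  - stage: Deploy
    jobs:
      - job: Deploy
        pool:
          vmImage: ubuntu-16.04
        steps:
          - task: KubernetesManifest@0
            inputs:
              action: deploy
              kubernetesServiceConnection: my-aks-connection   # hypothetical
              manifests: k8s-resources/*.yml
```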
  46. The Kubernetes manifest task is open source on GitHub: https://github.com/microsoft/azure-pipelines-tasks/tree/master/Tasks/KubernetesManifestV0

     Deploy to Kubernetes task: https://docs.microsoft.com/en-us/azure/devops/pipelines/ecosystems/kubernetes/deploy?view=azure-devops #devoxxma @laurentgrangeau
  47. The Kubernetes manifest task has multiple verbs:

     - Action: deploy/promote/reject/bake/scale/patch/delete
     - Strategy: the deployment strategy to be used while applying manifest files on the cluster. Currently, ‘canary’ is the only acceptable deployment strategy.

     #devoxxma @laurentgrangeau
  48. Azure DevOps has access to AKS (and Azure) through a ServicePrincipal. It can then create service accounts inside the AKS cluster. This is based on the cluster’s RBAC roles and the ServicePrincipal created for Azure DevOps. #devoxxma @laurentgrangeau
  49. To access ACR, a new command has been integrated into the CLI. It uses the ServicePrincipal to authenticate everything in Azure. #devoxxma @laurentgrangeau