CI/CD: from the first one (Travis CI) to a more modern pipeline (GitLab CI), finishing with a cloud-native one (Azure DevOps). Lots of demos, lots of tools to use! #devoxxma @laurentgrangeau
…able to request the right GCP project…"
gcloud auth activate-service-account --key-file=/secrets/gcp-keyfile.json --project=travis-ci-example-258722
echo "Check that a Kubernetes cluster is there…"
gcloud container clusters list
echo "Retrieve credentials so that kubectl is able to request the Kubernetes cluster"
gcloud container clusters get-credentials travis-ci-cluster --zone=us-central1-a
kubectl get nodes
echo "Deploy DockerCoins app"
kubectl apply --recursive=true -f /k8s-resources
We need your GitHub login to identify you. This information will not be sent to Travis CI, only to api.github.com.
The password will not be displayed.
Try running with --github-token or --auto if you don't want to enter your password anyway.
Username: lpiot
Password for lpiot: *******
Two-factor authentication code for lpiot: 787939
$ travis encrypt-file ./secrets/gcp-keyfile.json --com
encrypting ./secrets/gcp-keyfile.json for theGarageBandOfIT/Travis-CI-example
storing result as gcp-keyfile.json.enc
storing secure env variables for decryption
Please add the following to your build script (before_install stage in your .travis.yml, for instance):
openssl aes-256-cbc -K $encrypted_f735c1e13263_key -iv $encrypted_f735c1e13263_iv -in gcp-keyfile.json.enc -out ./secrets/gcp-keyfile.json -d
Pro Tip: You can add it automatically by running with --add.
Make sure to add gcp-keyfile.json.enc to the git repository.
Make sure not to add ./secrets/gcp-keyfile.json to the git repository.
Commit all changes to your .travis.yml.
It relies on very similar concepts, but comes with a more all-inclusive philosophy: for example, it can manage your own Kubernetes cluster.
to deploy to any k8s cluster
- The Tasks which make up a Pipeline can easily be run in isolation
- Resources such as git repos can easily be swapped between runs
steps you would want to run as part of your continuous integration flow. A Task will run inside a pod on your cluster. A Task declares:
- Inputs
- Outputs
- Steps
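A minimal Task could be sketched as follows. This is an illustrative example, not the demo's actual manifest: the Task name, parameter, and image are placeholders, and the exact apiVersion depends on your Tekton release.

```yaml
# Hypothetical Task sketch: one step that builds an image with Kaniko.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-dockercoins        # placeholder name
spec:
  params:
    - name: imageTag             # input parameter supplied by a TaskRun
      type: string
  steps:
    - name: build-and-push       # each step is a container in the Task's pod
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --destination=registry.example.com/dockercoins:$(params.imageTag)
```

Each step runs as a container inside the single pod Tekton creates for the Task, in the order the steps are declared.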
processes to completion. To create a TaskRun, you must first create a Task which specifies one or more container images that you have implemented to perform and complete a task. A TaskRun runs until all steps have completed or until a failure occurs.
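A TaskRun binds concrete values to a Task and executes it. The sketch below assumes a Task named build-dockercoins with an imageTag parameter exists; both names are illustrative.

```yaml
# Hypothetical TaskRun: executes the referenced Task once with these values.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-dockercoins-run-1
spec:
  taskRef:
    name: build-dockercoins      # the Task to execute (assumed to exist)
  params:
    - name: imageTag
      value: "1.0"
```

Applying this resource with kubectl starts the run; it completes when every step has finished or stops at the first failure.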
Pipeline declaration must include a reference to at least one Task. Pipelines can declare input parameters that must be supplied to the Pipeline during a PipelineRun.
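A Pipeline wires Tasks together and declares the parameters a PipelineRun must supply. In this sketch the Task names (build-dockercoins, deploy-dockercoins) are assumptions, not resources from the demo.

```yaml
# Hypothetical Pipeline: a build Task followed by a deploy Task.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: imageTag             # must be supplied by the PipelineRun
      type: string
  tasks:
    - name: build
      taskRef:
        name: build-dockercoins  # assumed Task
      params:
        - name: imageTag
          value: $(params.imageTag)
    - name: deploy
      runAfter:                  # ordering: deploy waits for build
        - build
      taskRef:
        name: deploy-dockercoins # assumed Task
```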
and the order they run in. To execute the Tasks in the Pipeline, you must create a PipelineRun. Creation of a PipelineRun will trigger the creation of TaskRuns for each Task in your pipeline.
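The PipelineRun that kicks everything off could look like this (assuming a Pipeline named build-and-deploy with an imageTag parameter; both are placeholders):

```yaml
# Hypothetical PipelineRun: Tekton creates one TaskRun per Task in the Pipeline.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-and-deploy-run-1
spec:
  pipelineRef:
    name: build-and-deploy       # the Pipeline to execute (assumed to exist)
  params:
    - name: imageTag
      value: "1.0"
```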
into resource templates. Put another way, this allows events to both model and instantiate themselves as Kubernetes resources. In the case of pipeline, this makes it easy to encapsulate configuration into PipelineRuns and PipelineResources.
you to capture fields from an event and store them as parameters. The separation of TriggerBindings from TriggerTemplates was deliberate to encourage reuse between them.
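The binding/template pair can be sketched like this: the TriggerBinding extracts a field from a GitHub push payload, and the TriggerTemplate instantiates a PipelineRun with it. All names are illustrative, and the exact apiVersion and event-body paths depend on your Tekton Triggers release and webhook source.

```yaml
# Hypothetical TriggerBinding: capture the commit id from the event body.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    - name: gitRevision
      value: $(body.head_commit.id)   # field path assumed from a GitHub push event
---
# Hypothetical TriggerTemplate: stamp out a PipelineRun per event.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: build-and-deploy-template
spec:
  params:
    - name: gitRevision
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: build-and-deploy-run-   # unique name per event
      spec:
        pipelineRef:
          name: build-and-deploy              # assumed Pipeline
        params:
          - name: imageTag
            value: $(tt.params.gitRevision)
```

Because the binding only maps event fields to parameters, the same TriggerTemplate can be reused with different bindings for different event sources.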
be deployed on any Kubernetes cluster (AKS/EKS/GKE/IKS/OKE), but also on clusters created with kops, and even on minishift and minikube. It uses different projects from different vendors:
- Prow as the notification manager
- Kaniko as the builder
- Tekton as the pipeline as code
- staging
- production
Each deployment is handled with a PR from GitHub. Jenkins X then receives a webhook and validates the PR.
- Skaffold
- Kaniko
Skaffold is a command line tool that facilitates continuous development for Kubernetes applications. Kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster.
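A minimal skaffold.yaml illustrates the workflow Skaffold automates: build an image, then deploy manifests. Image name and paths are placeholders, and the apiVersion varies by Skaffold release.

```yaml
# Hypothetical skaffold.yaml: build the image, then apply the k8s manifests.
apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
    - image: dockercoins/worker   # placeholder image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                # placeholder manifest path
```

Running `skaffold dev` against such a file rebuilds and redeploys on every source change, which is what makes it a continuous-development tool rather than just a build tool.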
control, reporting, requirements management, project management, automated builds, lab management, testing and release management capabilities. It covers the entire application lifecycle and enables DevOps capabilities. It’s based on Team Foundation Server and is Microsoft’s online DevOps tooling.
your steps. For the build part, you can use the Docker task to build and push your image. For the deploy part, you can use the Kubernetes task, and then manipulate your cluster.
- Action: deploy/promote/reject/bake/scale/patch/delete
- Strategy: deployment strategy to be used while applying manifest files on the cluster. Currently, 'canary' is the only acceptable deployment strategy
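Put together, an azure-pipelines.yml using the Docker task and the Kubernetes manifest task could look like the sketch below. The service connection names, image name, and manifest path are placeholders you would replace with your own.

```yaml
# Hypothetical azure-pipelines.yml: build and push with Docker@2,
# then do a canary deploy with KubernetesManifest@0.
trigger:
  - master
pool:
  vmImage: ubuntu-latest
steps:
  - task: Docker@2
    inputs:
      containerRegistry: my-acr-connection      # assumed service connection
      repository: dockercoins/worker            # placeholder image name
      command: buildAndPush
      tags: $(Build.BuildId)
  - task: KubernetesManifest@0
    inputs:
      action: deploy
      kubernetesServiceConnection: my-aks-connection  # assumed service connection
      manifests: k8s/deployment.yaml            # placeholder manifest path
      strategy: canary
      percentage: "25"                          # share of traffic for the canary
```

A later stage would then use `action: promote` or `action: reject` on the same manifests to finish or roll back the canary.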
ServicePrincipal. It can then create service accounts inside the AKS cluster. It’s based on the RBAC roles of the cluster and the ServicePrincipal created for Azure DevOps.