and pain points
• Deployment of Helm releases through Terraform’s Helm provider
• 5 levels of nested code:
  1. Terraform “stack”
  2. DataDome Terraform module
  3. Upstream Terraform module
  4. DataDome Helm charts
  5. Upstream Helm charts
• Change propagation flow is complex and restrictive
  ◦ due to the nesting of the code base
• Terraform changes are driven by the Atlantis workflow (which does not fit every team)
of applications to specified target environments
• Support for multiple config management/templating tools
• Ability to manage and deploy to multiple clusters
• SSO Integration (OIDC, OAuth2, LDAP, SAML 2.0, GitHub, …)
• Multi-tenancy and RBAC policies for authorization
• Rollback/Roll-anywhere to any application configuration committed in the Git repository
• Health status analysis of application resources
• Automated configuration drift detection and visualization
• Automated or manual syncing of applications to their desired state (illustrated in the manifest sketch after this list)
• Web UI which provides a real-time view of application activity
• CLI for automation and CI integration
• Webhook integration (GitHub, BitBucket, GitLab)
• Access tokens for automation
• PreSync, Sync, PostSync hooks to support complex application rollouts (e.g. blue/green & canary upgrades)
• Audit trails for application events and API calls
• Prometheus metrics
• Parameter overrides for overriding Helm parameters in Git
• …
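To make a few of the listed features concrete, here is a minimal sketch of an ArgoCD Application manifest. The repository URL, chart path, application name, and namespace are hypothetical placeholders; the manifest illustrates automated syncing with drift self-healing, deployment to a target cluster/namespace, and Helm parameter overrides.

```yaml
# Minimal, hypothetical ArgoCD Application illustrating a few features from the list:
# automated sync, drift self-healing, and Helm parameter overrides.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-platform-component          # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/platform-charts.git   # hypothetical Git repository
    targetRevision: main
    path: charts/example-component                                 # hypothetical chart path
    helm:
      parameters:                            # parameter overrides for Helm values kept in Git
        - name: replicaCount
          value: "3"
  destination:
    server: https://kubernetes.default.svc   # target cluster (ArgoCD can manage several)
    namespace: example                       # target namespace (hypothetical)
  syncPolicy:
    automated:
      prune: true                            # delete resources removed from Git
      selfHeal: true                         # automatically correct detected configuration drift
    syncOptions:
      - CreateNamespace=true
```

Sync hooks (PreSync, Sync, PostSync) are not declared in the Application itself; they are set on individual resources through the argocd.argoproj.io/hook annotation.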
and control of Kubernetes-specific resources
  ◦ using the ArgoCD UI
  ◦ using ArgoCD GitOps functionalities for Kubernetes application deployment
• Allows a clear separation of concerns (sketched in the example after this list) between
  ◦ the infrastructure resources (AWS),
  ◦ the “platform components” (Kubernetes applications),
  ◦ and the business applications (Kubernetes applications)
• Simplifies Kubernetes cluster management (from an SRE point of view)
• Simplifies Kubernetes application deployment and management (from a non-SRE point of view)
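One way to express this separation of concerns in ArgoCD, sketched here with hypothetical project names and repository URLs, is to group applications into distinct AppProjects: an SRE-owned project for platform components with broad permissions, and a team-owned project for business applications restricted to namespace-scoped resources.

```yaml
# Hypothetical AppProjects sketching the separation between platform components
# (SRE-owned) and business applications (product-team-owned).
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: platform-components                 # hypothetical project name
  namespace: argocd
spec:
  description: Kubernetes platform components managed by SRE
  sourceRepos:
    - https://github.com/example-org/platform-charts.git    # hypothetical repository
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "*"
  clusterResourceWhitelist:                  # platform apps may manage cluster-scoped resources
    - group: "*"
      kind: "*"
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: business-applications               # hypothetical project name
  namespace: argocd
spec:
  description: Business applications managed by product teams
  sourceRepos:
    - https://github.com/example-org/business-apps.git      # hypothetical repository
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "team-*"                    # restrict teams to their own namespaces
```

RBAC policies can then grant SRE and product teams access to their respective projects, which is one way to realize the multi-tenancy mentioned in the feature list.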