the possible reasons for something failing, final testing of a feature on a portion of live traffic is not unheard of.
• Canary as a precursor to enabling full blue-green deployments.
  ◦ Why?
    ▪ There are no feature flags for the respective microservices.
    ▪ Hence canary testing becomes paramount for testing out features.
of deployments and, referring to figure 1, v1 and v2 would be separate deployment objects with separate label selectors. Both the v1 and v2 deployments would be exposed via the same svc object, which would point to their pods, as in the sketch below.
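A minimal sketch of that setup, with placeholder names (my-app, the image tags, and the ports are illustrative assumptions, not from this doc): both deployments carry a shared app: my-app label that the svc selects on, while the version label keeps the two replica sets apart:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
spec:
  replicas: 3                 # stable version holds most of the pods
  selector:
    matchLabels:
      app: my-app
      version: v1
  template:
    metadata:
      labels:
        app: my-app           # shared label, matched by the svc
        version: v1
    spec:
      containers:
      - name: my-app
        image: my-app:1.0     # placeholder image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
spec:
  replicas: 1                 # canary: 1 pod of 4 total => ~25% of traffic
  selector:
    matchLabels:
      app: my-app
      version: v2
  template:
    metadata:
      labels:
        app: my-app
        version: v2
    spec:
      containers:
      - name: my-app
        image: my-app:2.0     # placeholder image
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app               # selects pods of both versions
  ports:
  - port: 80
    targetPort: 8080

Because kube-proxy balances roughly evenly across the selected pods, the split simply tracks the replica ratio, which is exactly the limitation noted below.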
• Can be done without any plugins or extra components in the vanilla k8s we get in GKE.
Disadvantages
• Traffic split is a function of replica counts and cannot be customized independently. For example, if traffic is split between the two deployments with v1 having 3 replicas and v2 having 1 replica, the canary receives 25% of the traffic (1 of 4 pods).
to set the routing rules to configure the traffic distribution.
• Similar to the approach above, we have two deployments and svc objects for the same service, called v1 and v2.
• The rule will look something like the sketch below.
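A hedged sketch of such a rule, using the common subset-based pattern (the my-app host name and the 90/10 weights are illustrative assumptions): an Istio VirtualService splits traffic by weight across two subsets that a DestinationRule maps to the version labels:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app
        subset: v1
      weight: 90              # 90% of traffic stays on the stable version
    - destination:
        host: my-app
        subset: v2
      weight: 10              # 10% goes to the canary
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-app
spec:
  host: my-app
  subsets:
  - name: v1
    labels:
      version: v1             # pods of the v1 deployment
  - name: v2
    labels:
      version: v2             # pods of the v2 deployment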
quite some time, i.e., battle-tested.
• Flagger can be added for automated canary promotion.
• GKE has an add-on feature which can be used to install Istio in our clusters.
• Traffic routing and replica deployment are two completely orthogonal functions.
• Focused canary testing, e.g., instead of exposing the canary to an arbitrary number of users, if you want only the users from some-company-name.com to hit the canary version, leaving the other users unaffected, you can do that too with a rule that matches the request headers and checks the relevant cookie; a sketch follows this list.
Disadvantages
• Another add-on to manage inside the cluster if we go down the route of installing Istio.
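For the focused-canary bullet above, a hedged sketch (the cookie regex and the domain check are illustrative assumptions; Istio's header match on the cookie is the real mechanism): requests whose cookie matches are routed to the v2 subset, and everything else falls through to v1:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - match:
    - headers:
        cookie:
          regex: '.*email=.*@some-company-name\.com.*'   # illustrative cookie check
    route:
    - destination:
        host: my-app
        subset: v2            # matched users hit the canary
  - route:
    - destination:
        host: my-app
        subset: v1            # everyone else stays on stable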
canary promotion/demotion.
• Battle-tested and has been in use by virtue of being the first service mesh.
Disadvantages
• Another component inside the k8s cluster to be maintained.
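If the mesh meant here is Linkerd (commonly called the first service mesh), a hedged sketch of how the split is typically declared in Linkerd 2.x, via the SMI TrafficSplit CRD; the apex and backend service names and the weights are illustrative assumptions:

apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: my-app
spec:
  service: my-app             # apex svc that clients address
  backends:
  - service: my-app-v1
    weight: 90                # stable share
  - service: my-app-v2
    weight: 10                # canary share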
deployment and svc objects for the service which needs to have canary enabled for it.
• It makes use of the Ingress object in k8s to define the traffic split between the services, as in the sketch below.
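A hedged sketch, assuming the controller is nginx-ingress-controller (Contour would express this through its own CRDs instead): a second Ingress carrying the canary annotations diverts a fixed share of requests to the v2 svc; the host and service names are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-v1   # stable svc
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"        # mark this Ingress as the canary
    nginx.ingress.kubernetes.io/canary-weight: "10"   # send ~10% of traffic here
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-v2   # canary svc
            port:
              number: 80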
as it’s just an ingress controller in the k8s cluster (like Contour or nginx-ingress-controller).
• Doesn’t need the pods to be scaled to shift traffic.
• Support for tracing included.
Disadvantages
• No inbuilt process to shift weights from v1 to v2 or to revert traffic in case of increased error rates, i.e., no clear-cut way to integrate with Flagger for automated canary promotion/demotion.