Multi Data Center Deployment with Kubernetes
When releasing an application to Kubernetes or deploying updates to that application, you have several methods for deployment. This topic discusses several methods that are popular in application deployment, including Big Bang, Blue-Green, Canary, and Rolling deployment methodologies. You will also see an example of how these methods are applied using Kubernetes.
Big Bang Deployment
Traditional method
Flip the switch to a new application version
All systems are in lock step, no variation in versions

In the Big Bang release deployment model, the entire application is upgraded in one window. This methodology is common in traditional deployments. There is a single maintenance window to complete the upgrade and downtime is required.
In Kubernetes, this deployment means that all traffic to the application will fail until the pods on the new deployment are available. If downtime is not an issue, then this method is a cost-effective and easy deployment process.
Big Bang, or Re-Create, Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    type: Recreate
```
In Kubernetes, the Big Bang deployment is called Recreate. This strategy is built into Kubernetes deployments. This deployment reuses the existing cluster resources, so there should be no extra costs. To re-create the application, the manifest for the deployment needs to be adjusted. Under the spec field, you will add a new line for strategy, and under strategy, set the type to Recreate. It is important that the name field in the metadata does not change.
Use the kubectl apply -f (filename) command to apply the changes.
Rolling Deployment
The new version is slowly rolled out.
Automated deployment removes old pods as new pods become available.
In Kubernetes, the Rolling deployment works a little differently than a traditional deployment. In this method, Kubernetes replaces some of the replicas with pods from the new deployment. As each new pod comes up, another old pod is replaced.
The replacement settings can be configured before the deployment starts, but once started, there is no way to control these settings. The configurable settings are as follows:
maxSurge: This setting specifies how many new pods can be added at one time.
maxUnavailable: This setting specifies the number of pods that can be down during the update.
There is no downtime with this deployment. Depending on the settings, this method may not use up any more system resources, so costs are minimized during upgrades. This system is difficult to test during deployment, and rollbacks cannot occur until after the deployment completes.
Rolling Deployment or Rolling Update
When deploying a new version, the rollout happens slowly. New pods are brought up and old pods are terminated, but not all at once.
Configurable using maxSurge and maxUnavailable
This strategy is built into Kubernetes deployments. To update the application, the manifest for the deployment needs to be adjusted. Under the spec field, you will add a new line for strategy, and under strategy, set the type to RollingUpdate. It is important that the name field in the metadata does not change. Optionally, you can add the maxSurge and maxUnavailable settings.
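A minimal sketch of the adjusted manifest, reusing the nginx-deployment example from the Recreate strategy (the maxSurge and maxUnavailable values shown are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment    # the name must not change
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most 1 extra pod during the update
      maxUnavailable: 1     # at most 1 pod down during the update
```

Both maxSurge and maxUnavailable also accept percentages, such as 25%, which Kubernetes calculates against the replica count.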
Use the kubectl apply -f (filename) command to apply the changes.
Blue-Green Deployment
Multiple versions are live at the same time.
Migrate the application from the blue to the green set, with the ability to move back to blue if there are issues with the green application.

The Blue-Green deployment is not a configurable option in Kubernetes, so you need to set it up yourself, which makes the Blue-Green deployment a little more complex. With the Blue-Green deployment, the goal is to have both deployments available and then perform a hard cutover between them. Although this option has a higher resource cost, you benefit from having no downtime and from being able to test before cutting over to the green option.
In Kubernetes, there are a few options for Blue-Green deployments. One is to use the Service and the other is to use Ingress. The choice depends on which option was deployed to provide external access.
Blue-Green Deployment Using Service or Ingress
The native way of configuring Blue-Green deployments in Kubernetes is to use a deployment with a different name and a different label, in this case, the version number. The name value must be unique between the two different deployments, because both will remain active. In this example, the existing deployment is configured with a name nginx-v1, and has two labels that are configured, app: nginx and version: v1.0.0. The Service has just the base name without the version number, because the Service will not need two instances. The selector for the Service will match the existing deployment. It is important that both the app and the version match.
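A hedged sketch of what those two resources might look like, based on the names and labels described above (the container image and port values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
  labels:
    app: nginx
    version: v1.0.0
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: v1.0.0
  template:
    metadata:
      labels:
        app: nginx
        version: v1.0.0
    spec:
      containers:
      - name: nginx
        image: nginx:1.0.0   # illustrative image tag
---
apiVersion: v1
kind: Service
metadata:
  name: nginx                # base name, no version number
spec:
  selector:
    app: nginx
    version: v1.0.0          # both labels must match the deployment
  ports:
  - port: 80
```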
First, deploy the new application. The new version should be tagged in the label, just as the old version is.
The deployment name should be unique as well.
Because the names are different, both deployments can exist in the same cluster.
To deploy the update to the application, you will make a new deployment for the application. The snippet in the figure shows the updates to the deployment. The updates to the containers for the new code are not shown.
The metadata will need to be changed to nginx-v2 because you are deploying version 2. This change will allow nginx-v1 to be up at the same time as nginx-v2. In addition, the version label will need to be updated to the new version number.
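The name and label changes described above can be sketched as follows (only the updated metadata is shown; the container updates for the new code are omitted, as in the figure):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v2             # unique name, so nginx-v1 can stay up
  labels:
    app: nginx
    version: v2.0.0          # updated version label
```

In a full manifest, the spec.selector and pod template labels would carry the same version update, so that the Service selector change made later matches the new pods.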
Use the kubectl apply -f (filename.yml) command to deploy the new version.
Second, wait for all pods in the new service to finish deploying.
kubectl rollout status deploy nginx-v2 -w
After deploying to Kubernetes, you need to verify that the deployment was rolled out successfully. You can verify this by running the kubectl rollout status deploy nginx-v2 -w command.
If the deployment was successful, the output of the command will show that the deployment rolled out successfully.
Third, update the existing service.
In the first command, you are supplying a JSON patch.
Once everything is verified, remove the existing deployment.
Once the deployment is successful, you may want to verify operation by port-forwarding because the Service still points to the previous deployment. When ready, you will want to update the Service. This update can be done easily by running the kubectl patch command and supplying the changes in JSON patch format. Here is an example:
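A hedged sketch of such a patch, assuming the Service is named nginx and its selector's version label is moving from v1.0.0 to v2.0.0, as in the examples above:

```shell
# Repoint the Service selector at the new deployment's pods
kubectl patch service nginx --type=json \
  -p '[{"op":"replace","path":"/spec/selector/version","value":"v2.0.0"}]'
```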
This command will update the existing Service resource in Kubernetes and set the selector to look for the new deployment. Once everything has been verified, you can safely remove the old deployment using the kubectl delete deploy nginx-v1 command.
Canary Deployment
Route a subset of users to an application.
Different branches are used for different Canary servers.
Verify that new features and functions are working without impacting most users.

Canary deployment in Kubernetes is similar to a rolling update, but it gives you control over the pace of the rollout. The Canary deployment is fairly complex and can be challenging to troubleshoot. Costs are minimal because you use the same cluster resources. In this configuration, both versions are active at the same time, so there is zero downtime.
Like the Blue-Green deployment, the Canary deployment is not built into Kubernetes, but there are several ways to do one. One option uses only the base features of Kubernetes, and the other options require an ingress controller such as NGINX or a service mesh such as Istio.
Canary Deployment Using Replica Scale
An example of the existing deployment is provided in the figure. The existing deployment should look similar to the Blue-Green deployment. The replicas have been increased to 10 to provide better examples.
The new deployment will also be similar to the Blue-Green deployment, because the name and version number are updated. The replicas on the new deployment have been set to 1. With a Canary deployment, it is good to start small, with a subset of traffic, and then increase as you gain confidence in the update.
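A sketch of the two deployments' replica settings, following the naming convention from the Blue-Green example (the selector, template, and container fields are omitted for brevity):

```yaml
# Existing deployment (abridged)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
  labels:
    app: nginx
    version: v1.0.0
spec:
  replicas: 10
---
# New canary deployment (abridged)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v2
  labels:
    app: nginx
    version: v2.0.0
spec:
  replicas: 1               # start small; 1 of 10 pods carries canary traffic
```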
Here, you will deploy version 2 and scale down version 1. In the example, version 2 has one replica, so one replica is removed from version 1.
You can continue increasing replicas on version 2 and reducing them on version 1 until all are on version 2.
When you apply the new Canary deployment, you will also want to scale down the existing deployment. In the example, the scale will be reduced from 10 to 9 on the existing deployment, because one replica is being added on the new deployment.
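The step described above might look like the following, using the deployment names from the earlier examples (the manifest filename is illustrative):

```shell
# Bring up one canary replica of version 2
kubectl apply -f nginx-v2.yml     # illustrative filename
# Remove one replica from version 1, keeping the total pod count at 10
kubectl scale --replicas=9 deploy nginx-v1
```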
After verifying the application, you can optionally increase the scale to 50 percent of each version for more testing. Or you could skip this step and go to the last step.
kubectl scale --replicas=5 deploy nginx-v1
kubectl scale --replicas=5 deploy nginx-v2
Finally, once you are confident and ready to fully transition to the new deployment, you can increase the scale of the new deployment to match the scale of the old deployment before you started. In this case, there will be 10 replicas. You will also delete the old deployment.
kubectl scale --replicas=10 deploy nginx-v2
kubectl delete deploy nginx-v1
Release Strategies Comparison

There are benefits and risks that are associated with the various release strategies. There are financial costs that are involved with the size of the system environment and complexity. There are trade-offs among the categories that are listed, and others that are not listed. The release deployment strategy that is right for your organization will differ based on the values that are associated with the strategy. You may be able to have a minimal amount of downtime that is accounted for in a planned maintenance window to keep complexity and costs down. Or you may want to have 100 percent uptime with no outage and fast rollbacks.
The Big Bang deployment method has downtime, but is the least complex and requires the least amount of system resources to complete the strategy.
Rolling deployments have a bit more complexity and keep the costs down. The downside is that no real traffic is hitting the application until it is in production. The deployment also has to complete before rollbacks can be made.
The Blue-Green deployment has the quickest rollback capabilities. If there are issues on the new application version (green side), you can switch back to the previous production instance because it is still 100 percent in production. This methodology has the highest cost because you are maintaining two instances of the environment.
The Canary deployment has some complexity trade-offs because the rollout includes different versions within the production environment. This deployment has a longer period of version mismatch than a rolling release, but it gives you the ability to see real-world traffic on the Canary systems before continuing to increase the production load of the new application version.