An Overview of a Kubernetes Deployment

With Kubernetes, your containerized application is deployed to the servers in the cluster (the worker nodes) using pods. Each pod is an instance of your application or of a particular microservice that forms part of your app. By grouping pods into services (not to be confused with microservices!) and exposing only the service, Kubernetes makes pods interchangeable and ensures they can be replaced automatically.
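
To make the grouping concrete, here is a minimal sketch of a Service that selects pods by label; the name my-app and the port numbers are hypothetical placeholders rather than values from this article.

```yaml
# Hypothetical Service that groups every pod labeled app: my-app
# behind a single, stable address inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # any pod carrying this label belongs to the service
  ports:
    - port: 80         # port the service exposes
      targetPort: 8080 # port the application container listens on
```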

A deployment is a description of the desired state of the pods, which Kubernetes (via the deployment's associated ReplicaSet) then works to make a reality. You can use deployments to roll out a new application or microservice or to update an existing one.
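
As a sketch of what that desired state looks like, the following is a minimal Deployment manifest; the name, image, and replica count are illustrative assumptions, not values taken from this article.

```yaml
# Hypothetical Deployment describing a desired state: three replicas of a
# container image, grouped and managed via the app: my-app label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
```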

Why Does Deployment Matter?

In the past, releasing a change to an application was a big deal, potentially involving several hours of downtime while servers were taken offline, updated, and redeployed, followed by many more hours of nervous watching and waiting to see if everything was still working as expected. The experience for end users was poor: several hours of unavailable service if things went well, or downtime followed by further interruptions and a potentially buggy system if things went badly.

For developers, the arduous release process and the need to give plenty of notice (or further degrade the user experience) were a deterrent to releasing small, regular changes that could have provided valuable feedback from users. Meanwhile, the effort required to script each release to make it repeatable meant that best practice was often an aspiration rather than a reality.

Kubernetes changes all of that. It leverages cluster resources to avoid downtime, automatically monitors the health of the infrastructure hosting the application (the worker nodes and the pods they contain), and rolls back or replaces instances as needed, without manual intervention. Because each deployment is recorded as configuration in a YAML file, it is versioned and repeatable, and the same steps can be trialed in pre-production environments before going live.
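
As an illustration, a versioned manifest can be applied, watched, and (if necessary) reverted with standard kubectl commands; the file and deployment names below are placeholders.

```sh
# Apply the versioned manifest, watch the rollout, and undo it if needed.
kubectl apply -f deployment.yaml
kubectl rollout status deployment/my-app
kubectl rollout history deployment/my-app   # list recorded revisions
kubectl rollout undo deployment/my-app      # revert to the previous revision
```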

Kubernetes is particularly well suited to microservice architectures, as each microservice can be deployed, updated, and scaled independently by addressing the pods associated with it. The various deployment strategies give teams options for testing the water before replacing all instances of a service, or for rolling back if something goes wrong. Deployments also make it easy to scale individual services independently of one another, and because automated deployments are quicker and more reliable, it's much easier for developers to roll out regular updates to each service.
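
As a small illustration of that independence, a single microservice's deployment can be resized without touching any other service; the deployment name below is a placeholder.

```sh
# Scale only this one microservice; other services keep their replica counts.
kubectl scale deployment/checkout --replicas=5
```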

Deployment Strategies

There are multiple strategies for deploying your application to your Kubernetes cluster, each with different advantages. The best one to use will depend on the situation.

Ramped (Rolling) -- A ramped or rolling deployment is the default deployment strategy in Kubernetes. New pods are brought online gradually, and traffic is directed to them only once they are confirmed to be working as expected, at which point the old pods are removed. This is particularly useful for stateful applications, as Kubernetes keeps old pods alive for a grace period after redirecting traffic to the new pods, allowing any open transactions to complete.
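
The behavior of a rolling update can be tuned in the Deployment spec; the values in this fragment are illustrative assumptions rather than recommendations.

```yaml
# Hypothetical rolling-update settings inside a Deployment spec.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate     # the default strategy
    rollingUpdate:
      maxSurge: 1           # at most one extra pod during the rollout
      maxUnavailable: 0     # keep the full replica count serving traffic
  minReadySeconds: 10       # wait before treating a new pod as available
```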

Recreate -- Unlike the other deployment strategies, a recreate strategy does involve downtime, as all pods are terminated before new pods are brought online. This avoids having two versions of a container running at the same time.
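
Choosing this strategy is a one-line change in the Deployment spec, as in this fragment.

```yaml
# Hypothetical Deployment fragment: terminate all old pods
# before any new pods are created.
spec:
  strategy:
    type: Recreate
```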

Blue/Green -- With a blue/green deployment, new pods are deployed alongside the existing pods, and traffic is only redirected to them once they have been tested. Although this strategy requires double the resources, it makes it much easier to roll back if a problem arises with the new deployment.
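
Kubernetes has no built-in blue/green strategy; a common approach is to run two Deployments side by side and switch the Service's label selector between them. The labels and names in this sketch are hypothetical.

```yaml
# Hypothetical Service currently pointing at the "blue" Deployment.
# Once the new pods (labeled version: green) have been tested, changing
# this selector to version: green redirects all traffic to them;
# changing it back is an instant rollback.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue
  ports:
    - port: 80
      targetPort: 8080
```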

Conclusion

Kubernetes provides a range of options for rolling out changes to containerized applications automatically. All deployments are versioned and automated, making them faster and more reliable than manual releases. If you're developing microservices, Kubernetes enables you to deliver updates to individual services rapidly and frequently, while also allowing you to scale those services independently.