Containerization and orchestration technologies have contributed immensely to the transformation of mainstream IT and continue to play a pivotal role in the rise of DevOps culture across organizations. Before we start working with containers, it is important to understand what they are and how they work; only then does it make sense to dive into the implementation side of the technology.
In this post, we will explain what containers and orchestration tools are and why you should consider using them.


Why use containers?
Containers are an extremely powerful tool and have had a tremendous impact on how the industry does IT, with good reason. Containers help us to:

1) Move faster and adapt more quickly to changing requirements.
2) Automate more of the work of managing, maintaining and updating software.
3) Increase reliability: containers are easy to create, destroy and replicate, which helps reduce downtime.
4) Reduce cost: containers are extremely lightweight and do not require many resources to function.


What are containers?
Simply put, containers are all about portable software. Containers are a technology that allows you to deploy software on a variety of systems, from a developer’s laptop all the way to a production server. When we can run our software in this portable fashion across many different systems, deployment speeds up, automation becomes simpler and our code runs consistently in production as well as everywhere else. Similar to virtual machines (VMs), containers wrap your code in a standardized environment that allows it to run consistently on varied machines. Unlike virtual machines, however, containers are smaller, use fewer resources and are significantly easier to automate. With this, we’ve completed our basic, jargon-free introduction to containers. We’ll now talk about orchestration.
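To make the idea of portable software concrete, here is a minimal Dockerfile sketch. The application name (app.py) and image tag are placeholders invented for illustration, not something from a real project:

```dockerfile
# Sketch: package a hypothetical Python script so that the exact same
# image runs on a laptop, a test server or a production host.
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

You would build and run this with `docker build -t myapp .` followed by `docker run myapp`; the resulting image carries the code and its runtime together, which is what makes it portable across machines.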


What is orchestration?
Container orchestration simply refers to the processes and tools used to automate the management of containers. Let’s consider an example in which I want to start up a set of twenty containers in production. I could spin up each container manually, or I could tell an orchestration tool like Kubernetes that I want twenty containers and let it take care of the deployment. For the sake of redundancy and fault tolerance, it makes sense to spin up my twenty containers on twenty different hosts; this consideration for redundancy and reliability can fairly easily be expressed to the orchestration tool when deploying the containers. The more complex our requirements for managing containers become, the more useful orchestration tools become!
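As a sketch of how this looks in practice, a Kubernetes Deployment manifest can declare "I want twenty replicas, spread across hosts" and leave the rest to the orchestrator. The names (web, web-image:1.0) are placeholders for illustration:

```yaml
# Sketch: ask Kubernetes for twenty replicas of a hypothetical image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 20
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: web-image:1.0
      # Spread the replicas across different hosts for redundancy.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web
```

Note that we only describe the desired end state; Kubernetes decides where each container runs and recreates any that fail.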


Zero downtime deployments: An important use case for orchestration tools
Before the emergence of containers, a deployment would typically involve bringing the system down for maintenance, during which it would be unavailable to customers. We would then perform the deployment and bring the system back up.

A zero-downtime deployment (with containers) goes like this:
1. Spin up containers running the new code.
2. When they are fully up, direct user traffic to the new containers.
3. Remove the old containers running the old code. No downtime for users!

Orchestration tools help coordinate the different steps involved in the above scenario in a quick and efficient manner.
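The three steps above map directly onto a rolling-update strategy. As a sketch, the fragment below shows how a Kubernetes Deployment spec could express it (field names are real Kubernetes settings; the surrounding Deployment is assumed):

```yaml
# Sketch: rolling update settings that mirror the zero-downtime steps.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # step 1: start new-code containers first
      maxUnavailable: 0  # steps 2-3: never drop below full capacity
```

With `maxUnavailable: 0`, an old container is only removed after its replacement is up and receiving traffic, so users never see an outage.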

Here are some advantages and limitations of using containers:

Advantages:
• The isolation and portability of VMs.
• More lightweight than VMs – less resource usage.
• Faster than VMs – containers can start up in seconds, not minutes.
• Smaller than VMs – container images can be measured in megabytes, not gigabytes.
• All of these add up to faster and simpler automation!

Limitations:
• Less flexibility than VMs – you can’t run a Windows container on a Linux machine.
• New challenges around orchestration and automation.



This concludes our introduction to containerization and orchestration. This will be the first post in a series in which we will focus on containers, particularly Docker. We hope you found this post useful, and we look forward to your suggestions and feedback.

Sahil Suri

He started his career in IT in 2011 as a system administrator. He has since worked with HP-UX, Solaris and Linux operating systems along with exposure to high availability and virtualization solutions. He has a keen interest in shell, Python and Perl scripting and is learning the ropes on AWS cloud, DevOps tools, and methodologies. He enjoys sharing the knowledge he's gained over the years with the rest of the community.