In our previous article, we shared an introduction to containers and orchestration to familiarize you with the terms and talked briefly about the advantages of containers. In this post, we will explain some of the important use cases of containers in greater detail.
Microservices

Microservices are a type of application architecture that involves splitting the application into a series of small, independent services. Microservices can be built, modified, and scaled separately, with relatively little impact on one another. Containers excel when it comes to managing a large number of small, independent workloads, and containers and orchestration make it easier to manage and automate the process of deploying, scaling, and connecting lots of microservice instances. For example, I may have one microservice that needs additional resources. With containers, all I need to do is create more containers for that service to handle the load. With orchestration, that can even be done automatically and in real time.
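As a sketch of what this looks like in practice, in a Kubernetes-based orchestration setup, scaling a microservice to handle extra load can be as simple as raising the replica count in its Deployment (the service name, image, and port below are hypothetical):

```yaml
# Hypothetical Deployment for a single microservice. Scaling it is just a
# matter of changing spec.replicas (or running
# `kubectl scale deployment orders-service --replicas=5`).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 5          # raise this number to add container instances
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.4.2
          ports:
            - containerPort: 8080
```

Each microservice gets its own Deployment like this, so each one can be scaled independently of the others.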
Cloud Transformation

Cloud transformation is the process of migrating your existing IT infrastructure to the cloud. Many companies today are making the transition away from locally hosted services and toward services hosted in the cloud. However, moving your infrastructure into the cloud can come with challenges. Containers can help you make the move, because it is relatively easy to wrap existing software in containers. While containers may not be the answer for every type of existing software, they are a powerful tool. Running your software in containers in the cloud lets you take advantage of the flexibility and automation that containers offer, and because containers consume fewer resources, they can also help cut down on your cloud bill.
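To illustrate how little it can take to wrap existing software, here is a minimal Dockerfile sketch for a hypothetical Python web application (the file names and start command are assumptions, not a real project):

```dockerfile
# Hypothetical Dockerfile wrapping an existing Python web app
FROM python:3.9-slim

WORKDIR /app

# Install the app's dependencies first so this layer is cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the existing application code unchanged
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

Once an application is packaged like this, the same image runs unchanged on a laptop, an on-premises server, or any cloud that runs containers.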
Automated Scaling

Automated scaling refers to automatically provisioning resources in response to real-time metrics. Without automated scaling, you must provision enough resources to cover your peak resource needs at all times. If I need 10 servers to handle my peak usage times, then I need 10 servers all the time. With automated scaling, the system can automatically detect (or even predict) an increase in usage, create new servers to handle the peak, then remove those servers when usage returns to normal levels. Automated scaling depends on the ability to spin up new instances quickly and efficiently. Since containers are small and start up quickly, they are ideal for this purpose: if the system detects an increase in usage, it can spin up new containers in a few seconds. Your users see less downtime due to high loads, and you don’t consume and pay for resources unnecessarily!
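In Kubernetes, for instance, this kind of automated scaling can be declared with a HorizontalPodAutoscaler. The sketch below uses hypothetical names and thresholds:

```yaml
# Hypothetical autoscaler: Kubernetes adds or removes replicas of the
# web-frontend Deployment based on observed CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend
  minReplicas: 2        # baseline capacity during normal usage
  maxReplicas: 10       # cap for peak usage
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

With a declaration like this in place, the orchestrator handles the detect-scale-up-scale-down cycle described above with no human involvement.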
Continuous Deployment Pipelines
Continuous deployment is the practice of deploying new code automatically and frequently. Instead of writing new code for months and doing one big deployment, continuous deployment means constantly doing many small deployments. Some companies even do multiple deployments a day! This allows you to get new functionality in front of customers faster, and it also reduces the risk associated with big deployments containing a large number of changes. To maintain stability while doing continuous deployment, it is important to make use of automation to ensure that deployments are stable and consistent. Containers work very well in the context of continuous deployment. They make it easy to test code in an environment that is the same as production, because the code can be automatically tested inside the container itself. An automation pipeline for continuous deployment can automatically build a container image with the new code, test it, then automatically ship that same container image to production. Because the production environment, in this case the container itself, is built right into this automated process, developers even have the ability to use it for testing and troubleshooting.
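A pipeline along these lines might look like the following sketch, written here as a GitLab CI configuration (the image name, registry, test script, and deploy step are all hypothetical):

```yaml
# Hypothetical CI/CD pipeline: build one image, test that exact image,
# then ship that same image to production.
stages: [build, test, deploy]

build:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA

test:
  stage: test
  script:
    # Run the test suite inside the same image that will ship to production
    - docker run --rm registry.example.com/myapp:$CI_COMMIT_SHA ./run-tests.sh

deploy:
  stage: deploy
  script:
    # Roll the exact image that passed testing out to production
    - kubectl set image deployment/myapp myapp=registry.example.com/myapp:$CI_COMMIT_SHA
```

The key property is that the image is built once: the bits that were tested are byte-for-byte the bits that run in production.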
Self-Healing Applications

Self-healing applications are able to automatically detect when something is broken and automatically take steps to correct the problem without the need for human involvement. System administrators will tell you that, in the past, it was common to have to wake up in the middle of the night to reboot a server.
What if an automated system could detect a problem and reboot the server automatically? That’s an example of self-healing! Since containers start up quickly, it’s easy enough to restart them automatically. However, containers take the concept of rebooting the server a step further. Because it is so quick and easy to start new container instances, when something goes wrong with a container it can often be destroyed and replaced within a few seconds. That means that if something goes wrong, you can have a brand new, clean, working instance quickly replace the broken one!
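Orchestrators implement this pattern with automated health checks. In Kubernetes, for example, a liveness probe like the sketch below (the endpoint, port, and timings are hypothetical) tells the system to kill and replace a container that stops responding:

```yaml
# Container spec fragment: if GET /healthz stops returning success,
# Kubernetes restarts the container automatically.
containers:
  - name: api-server
    image: registry.example.com/api-server:2.1.0
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10   # give the app time to start
      periodSeconds: 5          # check every 5 seconds
      failureThreshold: 3       # replace after 3 consecutive failures
```

No one gets paged at 3 a.m.; the broken instance is simply replaced with a fresh one.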
In more traditional environments, it can be difficult for everyone to get access to a production system to troubleshoot when something goes wrong. That may be due to security concerns, or simply due to the fact that when something goes wrong, the first priority is to fix it as quickly as possible, not to find out why it happened. Anyone in the organization who does not have direct access to a production system has no idea why code may or may not be working in production. This leads to the constant refrain, “Well, it works on my machine!” With containers, the container is the production environment. This means that anyone can spin up an environment that is exactly like production, even on their own laptop. Developers (and others) can test their code and see exactly how it will behave in production. The additional visibility offered by containers can help your organization develop and troubleshoot code much more efficiently!
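As a sketch, spinning up that production-like environment locally can be as simple as a small Compose file that runs the exact image production runs (the image name, tag, and port are hypothetical):

```yaml
# Hypothetical docker-compose.yml: `docker compose up` on a laptop starts
# the very same image that is running in production.
services:
  myapp:
    image: registry.example.com/myapp:1.4.2   # exact production image tag
    ports:
      - "8080:8080"
    environment:
      - APP_ENV=production
```

Because the image is pulled rather than rebuilt, “works on my machine” and “works in production” become the same statement.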
We hope that you found the use cases and scenarios explained in this post interesting, and that it gave you some more ideas for exploring the possibilities of containers.