Docker is, in essence, a way of creating lightweight 'virtual machines' within a Linux machine, each running as just a single process. The literal meaning of 'docker' is 'a person who loads and unloads containers from a ship'. If you don't understand how Docker creates and manages these virtual machines (aka containers), then this post is for you. In this post we will see what Docker is in layman's terms and what problem it solves in IT. Before explaining Docker, we should know what containers and virtualization are.
What is a container?
A container is an enclosed process which is isolated from the other processes running on a Linux machine. In this way we have full control over what the enclosed process can and cannot do, and we can run multiple containers side by side in this fashion. This is achieved using kernel features such as cgroups (which limit the resources a process can consume) and namespaces (which isolate what a process can see), both of which have been built into the Linux kernel for a long time. In other words, these features confine a running process. The concept of containers is old, and it has long proven its worth in securing systems.
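As a rough illustration of this kernel machinery (assuming a Linux box with util-linux installed), you can see which cgroups your own shell already belongs to, and even hand-roll a minimal isolated process with `unshare` — this is only a sketch of the underlying mechanism, not how Docker itself is invoked:

```shell
# Every process on Linux already belongs to a set of cgroups:
cat /proc/self/cgroup

# Create a new PID namespace so the child believes it is PID 1
# and cannot see the host's processes (usually needs root):
sudo unshare --pid --fork --mount-proc ps aux
```

The second command prints only a couple of processes instead of the hundreds visible on the host, which is exactly the isolation a container relies on.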
What is virtualization?
Virtualization is the concept of creating virtual machines within a single physical system. This way we can effectively utilize system resources, which are getting cheaper by the day. Instead of buying a new machine to deploy a new application, we just create a VM to run that application.
What is the disadvantage of virtualization?
Yes, virtualization too has a disadvantage. To run a single application (like Apache or a DNS server), we create a whole virtual machine, which ties up virtual resources like CPU, RAM and hard disk. This is not an efficient use of resources: to run just one application we are installing a whole operating system. This is where Docker comes into the picture. Have a look at the virtualization and Docker architectures below.
Idea of docker
If you look closely, there is redundancy in virtualization: the host operating system code and the virtual machine operating system code. Most of the time virtual machines run the same or an equivalent kernel to the one their host is running. People who worked in the virtualization space thought: why not let the virtual machines we create share the host OS, while still protecting the base OS? From here the idea of Docker came into existence. To see what Docker can save, look at the rough calculation below. This is just a ballpark figure and may not be exact.
For running an Apache application
Note: the calculations below depend on how big your application is.
With a full virtual machine: a hard disk of 10GB to 20GB, RAM of 1GB to 4GB, and a CPU of around 500MHz are required.
With a Docker container: a hard disk of 50MB to 100MB suffices for installing Apache, since the host operating system is reused; RAM of 250MB to 500MB; and a CPU of a few hundred MHz or even less, depending on the app.
Either way, Docker comes out ahead in utilizing resources effectively.
Relation between docker, containers and virtualization
Docker uses the container concept when creating its virtual machines. In order to separate one process (think of it as a lightweight VM) from another and keep them secure, Docker uses the cgroup and namespace machinery of containers, and emulates a complete machine inside that single process.
What is a docker?
Docker is a tool which lets you quickly create lightweight VMs with your code and deploy it as fast as possible through different services in various containers. Docker consists of various types of containers (Docker VMs) and Docker Hub (an online service for sharing Docker images). From within a Docker container we see a whole system, but from the base machine we just see one process running per container. Working inside Docker containers is very smooth: the system is built in such a way that developers, testers and administrators can work together to deploy code faster.
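To make the idea concrete, here is a minimal, hedged example of that workflow (assuming Docker is installed and its daemon is running): a single command downloads an image from Docker Hub and starts an isolated container from it.

```shell
# Start a container from the official hello-world image;
# Docker pulls it automatically if it is not already cached:
docker run hello-world

# From the host, each container is just one entry here,
# even though inside it looks like a whole system:
docker ps -a
```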
More about Docker containers: Docker is divided into three main parts.
1) The Docker software, which implements the 'VMs' as containers.
2) Containers, where Docker deploys your lightweight VMs.
3) Docker Hub, an online repository of preconfigured Docker images.
The docker software:
This is the essence of the whole of Docker. It provides the Docker CLI to access, create and download containers, and acts as the middleman between the base OS and all the containers Docker runs.
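For example, the same `docker` CLI is used to inspect the daemon, list what is on the machine, and start new containers (these are the standard Docker client commands; output will vary by installation):

```shell
docker version                     # show client and daemon versions
docker images                      # list images cached on this machine
docker run -it ubuntu /bin/bash    # start an Ubuntu container and open a shell in it
```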
The most widely used part of Docker is the container. Containers existed before Docker, but Docker has made building new containers (aka lightweight virtual machines) an easy and fast process.
A very big advantage of containers is that they can be used almost everywhere: physical machines, virtual machines, data centres, the cloud and many more places. So we can easily and efficiently move our applications, along with their settings and configurations, from one place to another. Scaling Docker containers is also really easy: we can quickly launch more containers when required and shut them down as soon as the work is finished. This makes it very easy to roll out small changes with minimal risk and higher uptime.
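Scaling up and down really is just starting and stopping processes. A rough sketch, assuming Docker is available (the image name `nginx` and the container names are only examples):

```shell
# Launch three web-server containers in the background:
docker run -d --name web1 nginx
docker run -d --name web2 nginx
docker run -d --name web3 nginx

# When the load drops, stop and remove the extra containers:
docker stop web2 web3
docker rm web2 web3
```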
Docker Hub is an online repository (https://hub.docker.com) of public and private Docker images which you can download from the command line to set up a new 'VM' with preconfigured software. In the old days, to build a machine we had to procure hardware, assemble it, install the OS, install the required software and then configure it. With Docker Hub, we need to do none of this: we just download a preconfigured image, which is typically only 10MB to 300MB.
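For instance (assuming network access to hub.docker.com and a running Docker daemon), finding and running a preconfigured Apache image takes just a few commands:

```shell
docker search apache            # find Apache-related images on Docker Hub
docker pull httpd               # download the official Apache httpd image
docker run -d -p 8080:80 httpd  # serve Apache on host port 8080
```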
History of Docker
Docker was started by Solomon Hykes in France inside the company dotCloud, and it essentially represents the evolution of that company's proprietary technology, built on open-source projects such as cloudlets. Docker was first released in March 2013, and by October 2015 it had around 25,600 GitHub stars, 6,800 forks and nearly 1,100 contributors. By late 2015, an analysis showed the main contributors to Docker were Red Hat, Google, IBM, Cisco and the Amadeus IT Group.
Some other advantages of using Docker
First of all, it lightens the administrator's workload: containers share the host operating system, so they can be started really quickly.
It readily avoids problems such as version conflicts, driver compatibility issues, web server issues, database query problems and many more, because we can easily create isolated environments and resources using Docker containers.
It can run anywhere: on physical machines, virtual machines, in data centres, in the cloud and many more such places.
It gives us fast productivity plus a rapid deployment solution when it comes to actually working with Docker containers.
We can easily share containers remotely by using remote repositories, or by setting up our own private repository.
Since a container bundles its application dependencies, there is less risk of failure and other problems of that kind.
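Sharing an image through a repository looks roughly like this (the user name `myuser` and image name `myapp` are placeholders for illustration):

```shell
# Save a running container's state as a new image:
docker commit web1 myuser/myapp:1.0

# Log in and push the image to Docker Hub (or a private registry):
docker login
docker push myuser/myapp:1.0

# Anyone can now pull and run it elsewhere:
docker pull myuser/myapp:1.0
```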