We’ve touched on different aspects of Docker in our previous posts. In this post, we’ll look at Docker’s architecture in more detail and introduce the different components that make up the Docker ecosystem. Docker is based on a client-server architecture, which means that the Docker client and the Docker daemon are separate binaries; a single client can be used to communicate with different daemons. We use the Docker client to execute commands, but it is the Docker daemon that performs the majority of the work behind the scenes. The daemon is responsible for building, running, and distributing Docker containers. The client and the daemon communicate over a REST API, either through a UNIX socket or over a network interface.

What happens when we run a docker command?
When we run a docker command, the client translates it into a request to the Docker API, and the Docker daemon carries out the corresponding action.
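To make the client-daemon split concrete, we can bypass the client entirely and query the daemon's REST API ourselves. This is a sketch that assumes the daemon is running and listening on its default UNIX socket, and that we have permission to read it:

```shell
# Query the daemon's REST API directly over the default UNIX socket:
curl --unix-socket /var/run/docker.sock http://localhost/version

# The docker client issues an equivalent API request under the hood:
docker version
```

Both commands return the same version information, because the client is just a convenient front end for the same API.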


The docker daemon (dockerd)
dockerd is a persistent process that manages containers. It listens for Docker API requests and performs actions on Docker objects. These objects include:

  • Images
  • Containers
  • Networks
  • Volumes
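Each of these object types has its own set of management commands in the CLI. For example, we can list the objects of each type (these assume a running daemon):

```shell
docker image ls      # list images
docker container ls  # list running containers (add -a for stopped ones too)
docker network ls    # list networks
docker volume ls     # list volumes
```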


The docker client (docker)
The Docker client is what we use to interact with the daemon. From the command line standpoint, docker is the command that we’ll be executing. For example, when we run ‘docker container run’, the Docker client sends this command to the Docker daemon, dockerd.
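As an illustration, the command below asks the client to send a run request to the daemon; hello-world is a small test image published on Docker Hub:

```shell
# The client forwards this request to dockerd, which pulls the image
# if it is not present locally, creates the container, and starts it.
# --rm removes the container once it exits.
docker container run --rm hello-world
```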


Docker registries
When we start building our own images, we need a place to store them. A registry is very similar to a GitHub repository, except that it holds Docker images rather than code. By default, Docker is configured to use Docker Hub, a public registry where we can upload and download our own images along with images created by other people. For more on Docker Hub, we encourage you to check out our post on the topic. Docker does not ship with any images by default, as we’ve already seen in previous posts where we set up Docker from scratch. If you do not wish to use Docker Hub, you can set up a private registry and configure Docker to communicate with it.
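A quick sketch of how registries come into play. Pulling defaults to Docker Hub; pushing elsewhere is done by prefixing the image name with the registry's address (the registry hostname below is an example, not a real endpoint):

```shell
# Pull an image from Docker Hub, the default registry:
docker pull nginx

# Push the same image to a private registry by tagging it with the
# registry's address first (registry.example.com:5000 is illustrative):
docker tag nginx registry.example.com:5000/nginx
docker push registry.example.com:5000/nginx
```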


Now let’s talk about Docker objects. When we talk about Docker objects, we are referring to anything that is a first-class citizen in Docker and has its own set of APIs to manage it.

A Docker image is a read-only template that contains a set of instructions for creating a Docker container. An image is typically built on top of another image, called the base image. Other instructions can be used to copy code over to the image, set the working directory, and define what command will run once the container starts. To build an image we use a Dockerfile, which contains all the instructions required to build that image.
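A minimal sketch of what such a Dockerfile might look like; the base image, paths, and command here are illustrative, not from a specific project:

```dockerfile
# Base image that this image is built on top of
FROM python:3.11-slim

# Set the working directory inside the image
WORKDIR /app

# Copy code from the build context into the image
COPY . /app

# Command executed when a container starts from this image
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` in the directory containing this file would produce an image named myapp.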


A container is a runnable instance of an image. Using the Docker CLI or API, we can create, start, stop, move, and delete containers. We can create networks and attach multiple containers to a network, and a container can be attached to more than one network. We can also attach persistent storage to a container, which allows us to preserve important data and prevent it from being lost when the container is stopped or recreated. In addition, we can create a new image from a container in its current running state: we can launch a container, configure it as per our requirements, and then create a new image from it. Containers are isolated from the host on which they are created, and there are certain options we can use at creation time to control this level of isolation.
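The lifecycle described above can be sketched with the CLI. This assumes a running daemon; the network, volume, container, and image names are illustrative:

```shell
# Create a network and a volume to attach to the container:
docker network create app-net
docker volume create app-data

# Run a container attached to that network, with the volume mounted
# so data in /var/lib/data survives container removal:
docker container run -d --name web \
    --network app-net \
    -v app-data:/var/lib/data \
    nginx

# After configuring the running container, capture its current
# state as a new image:
docker container commit web my-configured-nginx
```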


Services allow containers to scale across multiple Docker hosts, or Docker daemons. We can run multiple Docker hosts together using Docker swarm mode, and each host in the swarm has its own Docker daemon. All the hosts in the swarm communicate with each other using the Docker API. There are two types of nodes in a Docker swarm: managers, which are responsible for managing the cluster, and workers, which are responsible for executing tasks. Using services also allows us to maintain a desired state. For example, if we are maintaining three replicas in our cluster and one host goes down, the swarm will schedule a replacement replica on another node. Services also give us load balancing, with the load distributed among the workers in the cluster. Docker swarm clusters are available with Docker versions 1.12 and higher.
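A short sketch of setting up a swarm and a replicated service; this assumes a running daemon on Docker 1.12 or later, and the service name and image are examples:

```shell
# Initialize a swarm on the current host, which becomes a manager:
docker swarm init

# Create a service with three replicas; the swarm schedules them
# across nodes and reschedules replicas if a node goes down:
docker service create --name web --replicas 3 -p 80:80 nginx

# Verify that the desired state (3/3 replicas) is being maintained:
docker service ls
```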



This concludes our post about Docker architecture. We hope that you found the explanation of the different Docker components useful, and we look forward to your suggestions and feedback.


Sahil Suri

He started his career in IT in 2011 as a system administrator. He has since worked with HP-UX, Solaris and Linux operating systems along with exposure to high availability and virtualization solutions. He has a keen interest in shell, Python and Perl scripting and is learning the ropes on AWS cloud, DevOps tools, and methodologies. He enjoys sharing the knowledge he's gained over the years with the rest of the community.