Container Technology: Docker

Anay Dongre · Published in DevOps.dev · Jan 14, 2023 · 8 min read

Containers provide a way of creating an isolated environment, sometimes called a sandbox, in which applications and their dependencies can live.

Why Containers?

  • Portability: Containers allow developers to package and deploy software in a portable way, ensuring that the application will run consistently across different environments.
  • Isolation: Containers isolate the dependencies and configurations of an application from the host system, preventing conflicts and reducing the risk of system-wide problems.
  • Resource Efficiency: Containers are lightweight and fast to start up, which allows multiple applications to run on the same host, making efficient use of hardware resources.
  • Improved Developer Productivity: Containers enable developers to focus on writing code, while the operations team can handle the deployment and scaling of the application.
  • Cost Savings: Containers allow for more efficient use of resources, which can result in cost savings when running applications in cloud environments.
  • Scalability: Containers can be easily scaled up or down to meet changing demand, making it easy to handle unpredictable workloads and traffic spikes.
  • Flexibility: Containers can be run on a variety of platforms, including virtual machines, bare metal servers, and cloud environments, giving organizations the freedom to choose the best infrastructure for their needs.

Docker

Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run, including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and be confident your code will run.

A. How Docker Works?

Docker works by providing a standard way to run your code. Docker is an operating system for containers. Similar to how virtual machines virtualize (remove the need to directly manage) server hardware, containers virtualize the operating system of a server. Docker is installed on each server and provides simple commands you can use to build, start, or stop containers.
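As a quick illustration, those core commands look roughly like this; the image and container names are placeholders:

```bash
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Start a container from that image, mapping host port 8080 to container port 80
docker run -d --name myapp -p 8080:80 myapp:1.0

# Stop and remove the container
docker stop myapp
docker rm myapp
```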

[Image from section.io]

B. Why use Docker?

Using Docker lets you ship code faster, standardize application operations, seamlessly move code, and save money by improving resource utilization. With Docker, you get a single object that can reliably run anywhere. Docker’s simple and straightforward syntax gives you full control. Wide adoption means there’s a robust ecosystem of tools and off-the-shelf applications that are ready to use with Docker.

C. When to use Docker?

You can use Docker containers as a core building block for creating modern applications and platforms. Docker makes it easy to build and run distributed microservices architectures, deploy your code with standardized continuous integration and delivery pipelines, build highly scalable data processing systems, and create fully managed platforms for your developers. The recent collaboration between AWS and Docker makes it easier for you to deploy Docker Compose artifacts to Amazon ECS and AWS Fargate.
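For context, a Docker Compose artifact is a small YAML file describing a multi-container application. A minimal sketch, with illustrative service names and images, might look like this:

```yaml
# docker-compose.yml: a minimal two-service application (illustrative)
services:
  web:
    build: .             # build the web service from the local Dockerfile
    ports:
      - "8080:80"        # expose container port 80 on host port 8080
    depends_on:
      - db
  db:
    image: postgres:15   # an off-the-shelf database image
    environment:
      POSTGRES_PASSWORD: example
```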

D. Components of Docker Architecture

Docker comprises the following different components within its core architecture:

  1. Images
  2. Containers
  3. Registries
  4. Docker Engine

Docker Images

  • Docker images are the building blocks of containerization in Docker. They are snapshots of an application and its dependencies, including the code, runtime, system tools, libraries, and settings.
  • A Docker image is created using a Dockerfile, which is a script that contains instructions for building the image. The Dockerfile includes information such as the base image to use, the software to install, and the configurations to set (a minimal example is sketched after this list).
  • Images can also be versioned and tracked, so that users can roll back to previous versions or update to the latest version of the image.
  • Docker images are lightweight and efficient, which makes them well suited for distributed architectures, such as microservices. They are also easy to distribute and run, which makes them well suited for CI/CD (Continuous Integration and Deployment) pipelines.
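As a concrete sketch, a minimal Dockerfile for a small Python web application might look like the following; app.py and requirements.txt are assumed placeholder files:

```Dockerfile
# Start from an official base image
FROM python:3.11-slim

# Set the working directory inside the image
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define the startup command
COPY app.py .
CMD ["python", "app.py"]
```

Building it with docker build -t myapp:1.0 . produces a tagged image that can be versioned and pushed to a registry.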

Docker Containers

  • Docker containers are the executable instances of a Docker image. They are isolated environments that run the application and its dependencies. A container is created using the docker run command, which takes an image as an argument and creates a container from it (see the example commands after this list).
  • When a container is created, the Docker Engine creates a new namespace for the container, which isolates the container from the host system and other containers. The namespace includes a file system, network, and process space, among other things.
  • The container shares the host’s kernel, but has its own isolated process and network space. This allows multiple containers to run on the same host, while isolating the applications and their dependencies.
  • Containers are also lightweight and fast to start up, which makes them well suited for microservices and other distributed architectures. They are also easy to distribute and run, which makes them well suited for CI/CD (Continuous Integration and Deployment) pipelines.
  • Containers can be stopped, started, and deleted. Changes made inside a container are not saved to the underlying image; to preserve them, you can create a new image from the container using the docker commit command.
  • Docker also provides an API for interacting with the Docker Engine, which allows developers to automate and orchestrate the deployment and scaling of containers.
  • Docker also integrates with orchestration tools such as Docker Swarm and Kubernetes to manage container clusters and automate the scaling, load balancing, and self-healing of containerized applications.
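To make the lifecycle concrete, here is a sketch of the commands described above; the image and container names are placeholders:

```bash
# Create and start a container from an image
docker run -d --name web nginx:latest

# List running containers
docker ps

# Stop and restart the container
docker stop web
docker start web

# Save the container's current filesystem as a new image
docker commit web mynginx:customized

# Remove the container (its writable layer is discarded)
docker rm -f web
```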

Docker Registries

  • A Docker registry is a service that stores and distributes Docker images. A registry can be either public or private, depending on whether it is open to the public or restricted to specific users or organizations.
  • Docker Hub is the default public registry for Docker images; it allows developers to store and distribute their images publicly. Developers can also create and manage their own private registries using solutions such as Docker Trusted Registry, AWS Elastic Container Registry (ECR), or Google Container Registry (GCR).
  • When a developer creates an image, they can push it to a registry using the docker push command. Other developers can then pull the image from the registry using the docker pull command and use it to create containers (see the example workflow after this list).
  • Registries also provide versioning and tagging features, which allow developers to track different versions of an image and easily roll back to previous versions or update to the latest version.
  • Registries also provide access control features that allow restricting access to certain images. This comes in handy when an organization wants to keep its images private and share them only with specific users or teams.
  • In addition, registries can also be integrated with continuous integration and continuous delivery (CI/CD) tools to automate the process of building, testing, and deploying images to different environments.
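A typical push/pull workflow looks roughly like this; the repository name myorg/myapp is a placeholder:

```bash
# Log in to a registry (Docker Hub by default)
docker login

# Tag a local image with the repository it should be pushed to
docker tag myapp:1.0 myorg/myapp:1.0

# Upload the tagged image to the registry
docker push myorg/myapp:1.0

# On another machine, download the image and create a container from it
docker pull myorg/myapp:1.0
docker run -d myorg/myapp:1.0
```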

Docker Engine

  • The Docker Engine is the core component of Docker that is responsible for managing the containers. It is a daemon that runs on a host machine and is responsible for creating, starting, stopping, and managing containers.
  • The Docker Engine is responsible for creating a container from an image, and it uses the namespaces and control groups (cgroups) features of the Linux kernel to isolate the container from the host system and other containers.
  • The Docker Engine is also responsible for managing the container’s networking, storage, and security. It can be configured to use different networking and storage backends, such as overlay networks and storage volumes, to meet the needs of different applications.
  • Docker Engine can be installed on various operating systems such as Windows, Linux, and macOS. It also supports different architectures such as x86, ARM, and IBM Power.
  • The Docker Engine also communicates with registries such as Docker Hub, where images are stored and shared, and with orchestration tools such as Docker Swarm and Kubernetes to manage container clusters and automate the scaling, load balancing, and self-healing of containerized applications.
  • The Docker Engine uses a client-server architecture and consists of the following sub-components:

1. The Docker Daemon:

  • The Docker daemon is the background process that runs on the host machine and manages the containers. It is responsible for creating, starting, stopping, and managing containers, and it exposes the API through which the Docker client (CLI) and other tools interact with it.
  • The Docker daemon is responsible for creating a container from an image, and it uses the namespaces and control groups features of the Linux kernel to isolate the container from the host system and other containers. It also manages the container’s networking, storage, and security.
  • The Docker daemon communicates with Docker registries, such as Docker Hub, where images are stored and shared, and with orchestration tools such as Docker Swarm and Kubernetes.
  • The Docker daemon also manages the container lifecycle: it starts, stops, and removes containers. It listens on the Docker API and responds to requests from the Docker client or other tools that use the API.
  • It’s worth noting that the Docker daemon runs on the host machine and not inside the containers; it creates and manages the containers but is not part of them.
  • The Docker daemon can be configured using a configuration file (typically /etc/docker/daemon.json on Linux), which allows you to set various options such as the storage backend, network settings, and logging options (a sketch follows this list).
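As a sketch, a daemon configuration file might set storage and logging options like this; the specific values are illustrative:

```json
{
  "data-root": "/var/lib/docker",
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```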

2. The Docker Client:

  • The Docker client is a command-line tool that allows developers to interact with the Docker daemon. The client sends commands to the daemon, which then carries out the corresponding actions on the host machine.
  • The Docker client provides a simple and consistent interface for managing containers, images, and networks, and it also provides commands for managing the Docker registry.
  • The Docker client can be used to perform various actions such as pulling images from the registry, creating and starting containers, and inspecting the state of running containers.
  • It’s also possible to use the Docker client to automate and orchestrate the deployment and scaling of containers, using tools like Docker Compose or Docker Swarm.
  • The Docker client can be installed on various operating systems such as Windows, Linux, and macOS. It also supports different architectures such as x86, ARM, and IBM Power.
  • The Docker client communicates with the Docker daemon to perform the actions requested by the user; the daemon carries out these actions on the host machine, and the client displays the results to the user.
  • The Docker client and the Docker daemon can be installed on the same machine, or they can be installed on different machines and communicate over a network (see the sketch after this list).
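A few everyday client commands, including pointing the client at a remote daemon, might look like this; the hostname is a placeholder, and the remote daemon must be configured to expose its API:

```bash
# Pull an image and start a container on the local daemon
docker pull nginx:latest
docker run -d --name web -p 8080:80 nginx:latest

# Inspect the state of the running container
docker inspect web

# Point the client at a daemon on another machine
export DOCKER_HOST=tcp://docker-host.example.com:2375
docker ps   # now lists containers on the remote host
```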

3. The REST API:

  • Docker provides a REST API that allows developers to interact with the Docker daemon using a simple and consistent interface. This API can be used to perform various actions such as creating and managing containers, images, and networks, and managing the Docker registry.
  • The API can be accessed using HTTP requests, which can be sent to the Docker daemon using the curl command-line tool or a library that implements HTTP in a programming language.
  • The API is organized into a set of endpoints, each of which corresponds to a specific resource or action. For example, the API has an endpoint for managing containers, another endpoint for managing images, and another endpoint for managing networks.
  • The API also provides authentication and authorization features that allow you to restrict access to certain endpoints or users.
  • The API documentation is available on the Docker website; it provides detailed information about the available endpoints, the parameters that each endpoint accepts, and the expected responses.
  • The Docker API allows you to perform almost all the actions available through the command-line interface, which means the Docker daemon can be managed remotely, giving users more flexibility (a curl sketch follows).
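As an illustration, on a Linux host where the daemon listens on its default Unix socket, the container-list endpoint can be queried with curl like this; the API version in the path may differ on your installation:

```bash
# List running containers via the Engine API over the local Unix socket
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json

# Query the daemon's version information
curl --unix-socket /var/run/docker.sock http://localhost/version
```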
