What is Docker?
This software service runs on top of the operating system to create "virtual containers", each with its own minimal operating system running on top of the Linux kernel. While we can open ports into a container to transmit data, it otherwise operates independently from the host system – sequestered into its own kernel namespace for security.

This is the same technology that companies use to host their web-based "cloud" services. Containers allow administrators to quickly deploy software on nearly any hardware environment. As your user base grows, Docker makes it simple to deploy multiple servers – each running the same software – so the load can be balanced between these independent systems.
Docker Engine
Docker interfaces directly with the Linux kernel to access the drivers that communicate with your computer's hardware. This enables software to be deployed regardless of the underlying hardware. The mechanism that virtual containers employ is fundamentally different from a virtual machine, an older technology that performs a similar function.
A virtual machine uses a "hypervisor" to emulate the hardware necessary to run its own "guest" kernel and operating system. This happens under the supervision of your "host" operating system and incurs a great deal of computational overhead.

By comparison, containers share their host operating system's kernel and directly utilize the existing hardware infrastructure. This allows a container to ship only the smallest possible set of operating system files required for its software.

Developers build a 'container image' that contains every operating system file required for their application. Alpine Linux is the foundation of many Docker containers, requiring only about 5 MB of storage space and 120 MB of RAM.
These images act as a template for quickly creating a containerized operating system that interfaces with your hardware through the host operating system. Because Docker interfaces with the kernel, it can grant individual containers access to specific devices and files.
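As a sketch of how such an image is defined, here is a hypothetical Dockerfile building a small web server on top of Alpine Linux (the base image is standard; the specific package and paths are just an illustration):

```dockerfile
# Start from the minimal Alpine Linux base image
FROM alpine:3.19

# Install a web server into a new image layer
RUN apk add --no-cache nginx

# Persistent configuration is expected to live in /config
VOLUME /config

# Document the port the service listens on
EXPOSE 80

# Command the container runs at startup
CMD ["nginx", "-g", "daemon off;"]
```

Building this file with `docker build` produces a read-only image that any number of containers can be created from.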

Container images are read-only – the files of the image cannot be changed – a property known as "immutability". Any changes you make inside a running container will be reset once the container is restarted. To keep data between restarts, we need to designate persistent storage space for the container.
Docker can automatically create virtual disk volumes tied to the container, which can be deleted when the container is no longer needed. You can also mount a directory from the host computer inside the container. For security and speed, you can even create a temporary filesystem in memory that is erased when the container is stopped.
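These three storage options correspond to flags on `docker run`; a sketch, where the container names, volume name, and host paths are placeholders:

```shell
# Named volume managed by Docker (removed later with `docker volume rm appdata`)
sudo docker run -d --name app1 -v appdata:/config lscr.io/linuxserver/nginx:latest

# Bind mount: a directory from the host appears inside the container
sudo docker run -d --name app2 -v /srv/nginx:/config lscr.io/linuxserver/nginx:latest

# tmpfs: an in-memory filesystem, erased when the container stops
sudo docker run -d --name app3 --tmpfs /tmp lscr.io/linuxserver/nginx:latest
```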

Docker-optimized applications will often store all of their persistent data within a single directory, commonly called "/app" or "/config". This makes services easy to update: all you need to do is download the latest Docker image and restart the container using it.
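That update workflow looks like this in practice (the container name, image, and paths below are placeholders from a hypothetical nginx deployment):

```shell
# Fetch the newest image
sudo docker pull lscr.io/linuxserver/nginx:latest

# Remove the old container; the data bind-mounted at /srv/nginx survives the swap
sudo docker stop nginx && sudo docker rm nginx

# Recreate the container from the updated image
sudo docker run -d --name nginx -p 80:80 -v /srv/nginx/:/config lscr.io/linuxserver/nginx:latest
```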
When creating a container, we can open access to network ports that allow communication with the service you are hosting. For many self-hosted cloud services, this includes access to the browser-based graphical user interface served over HTTP. Services like qBittorrent use ports to communicate with the outside internet through your router. Each application defines the function and purpose of the ports it uses.
These ports can also enable communication between multiple containers – such as an application frontend and its database. Docker can increase security by letting your services communicate behind the scenes, inaccessible from outside your local computer.

Modern operating systems have a theoretical maximum of 65,535 ports that can be individually allocated to hosted services. While a few are reserved (such as port 80 for HTTP), the majority are freely available for use. As a metaphor, consider how specific telephone numbers are reserved for emergency services, while others are available as residential or business numbers. While a computer may never realistically host that many individual services, the number illustrates the flexibility of modern software.
By leveraging ports, we can access multiple services hosted from the same machine. This is a common practice – known as a Docker Stack – that allows you to deploy new containers as well as define the virtual private networks connecting them. Conceptually, a Stack sits one level above containers and can consist of several containers working in tandem.
For example, we could create two stacks hosting two independent websites. Each stack would have an nginx container allocated sequential ports – such as 3000 and 3001. Additionally, each stack would have a MariaDB container for storing web application data. Each nginx container can communicate with the MariaDB container within its own Stack, but is completely unaware of the MariaDB container in the other Stack.
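Sketched with plain Docker commands, the isolation comes from giving each pair of containers its own private network (all names, ports, and the placeholder password here are hypothetical):

```shell
# Stack 1: its own network, with nginx published on port 3000
sudo docker network create site1
sudo docker run -d --name site1-db --network site1 -e MYSQL_ROOT_PASSWORD=changeme mariadb:latest
sudo docker run -d --name site1-web --network site1 -p 3000:80 lscr.io/linuxserver/nginx:latest

# Stack 2: the same layout on port 3001; neither stack can reach the other's database
sudo docker network create site2
sudo docker run -d --name site2-db --network site2 -e MYSQL_ROOT_PASSWORD=changeme mariadb:latest
sudo docker run -d --name site2-web --network site2 -p 3001:80 lscr.io/linuxserver/nginx:latest
```

Note that neither database publishes a port to the host, so each one is reachable only from containers on its own network.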

Docker containers are controlled through the terminal, allowing you to easily start, stop, and restart them. Similarly, you can connect to the operating system running inside a container to perform tasks and gather information.
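The everyday lifecycle commands look like this (the container name `nginx` is a placeholder):

```shell
# Start, stop, and restart a container by name
sudo docker start nginx
sudo docker stop nginx
sudo docker restart nginx

# Open an interactive shell inside the running container
sudo docker exec -it nginx /bin/sh

# View the service's log output
sudo docker logs nginx
```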
You can run a Docker container from the terminal with a single command.
sudo docker run -it -d -p 80:80 --name nginx -v /srv/nginx/:/config lscr.io/linuxserver/nginx:latest
This is the basic syntax for creating any Docker container. The command has several important parameters that define how our container is created and how it functions. The above command follows the basic syntax:
sudo [[program]] [[command]] [[parameters]]
Running 'sudo' tells the shell to run the command as root – 'superuser do'. We are telling the 'docker' program to 'run' a container with the following parameters:
-it | Keeps the container's shell accessible through an interactive terminal |
-d | Runs the container in the background ("detached") |
-p | Connects a port inside the container to a port on our host computer. This allows the service to be accessible by other computers on your network. |
--name | The name to use for the container |
-v | Mounts a directory or file from our host computer into the container so it can be accessed. |
lscr.io/linuxserver/nginx:latest | The Docker image to use for creating the container |
We can check the status of running Docker containers by entering the command:
sudo docker ps
Docker Compose
This Docker Engine add-on makes it easy to quickly spin up containers using an easy-to-read syntax. Compose uses a markup language known as YAML, commonly used as a human-readable format for storing software configuration files.
people:
  person1:
    name: Sally
    age: 32
    interests:
      - "Watching movies"
      - Linux
  person2:
    name: John
    age: 46
    interests:
      - Music
      - "Eating out"
Using the Docker Compose YAML syntax, we can quickly define a Stack with one or more containers. This makes it simple to automatically define private and isolated networks for each Stack on the system.
services:
  nginx:
    image: lscr.io/linuxserver/nginx:latest
    container_name: nginx
    volumes:
      - /srv/nginx/:/config
    ports:
      - 80:80
This Docker Compose snippet creates the same container as our earlier Docker Engine command-line example. We can also create a Stack with multiple connected containers, such as WordPress, which uses a web server and a database.
---
services:
  db:
    image: mariadb:10.6.4-focal
    command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - /srv/wordpress/db:/var/lib/mysql
    restart: always
    environment:
      # Placeholder credentials - change these before deploying
      - MYSQL_ROOT_PASSWORD=changeme
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=changeme
    expose:
      - 3306
  wordpress:
    image: wordpress:latest
    ports:
      - 8080:80
    restart: always
    depends_on:
      - db
    volumes:
      - /srv/wordpress/html:/var/www/html
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=changeme
      - WORDPRESS_DB_NAME=wordpress
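A Compose file like this is typically saved as `docker-compose.yml` and managed with two commands (newer Docker installations bundle Compose as `docker compose`; older ones use a separate `docker-compose` binary):

```shell
# Create and start every container in the stack, in the background
sudo docker compose up -d

# Stop and remove the stack's containers and private networks
sudo docker compose down
```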
For convenience, we will be focusing on Portainer. This service can be installed through Docker and offers fully browser-based management of your containers.
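One common way to install Portainer follows its documented defaults – the image tag, port, and volume names below reflect Portainer CE's published instructions, but verify them against the current official documentation before running:

```shell
# Create a named volume for Portainer's own data
sudo docker volume create portainer_data

# Run Portainer CE; its web interface is served on port 9443
sudo docker run -d -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

Mounting the Docker socket (`/var/run/docker.sock`) is what lets Portainer create and manage other containers on the host.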