How to Install and Configure Docker & Docker Compose on Ubuntu Server

In the world of modern software development and system administration, few tools have revolutionized the landscape quite like Docker. Gone are the days of the dreaded “it works on my machine” excuse. With Docker, developers can package applications and their dependencies into standardized units called containers, ensuring they run consistently across any environment—from a developer’s laptop to a massive production server.

If you are managing an Ubuntu server, mastering Docker is practically a rite of passage. Whether you are deploying a simple WordPress blog, a complex microservices architecture, or a self-hosted media server, Docker provides the isolation, portability, and efficiency you need.

This guide will walk you through every step of installing and configuring Docker Engine and Docker Compose on an Ubuntu Server. We will cover not just the “how,” but the “why,” ensuring you understand the architecture you are building.


What are Docker and Docker Compose?

Before we dive into the terminal, it is crucial to understand what we are installing.

Docker Engine

At its core, Docker is a platform for developing, shipping, and running applications. It uses OS-level virtualization to deliver software in packages called containers. Unlike traditional Virtual Machines (VMs) which require a full operating system for each instance, containers share the host machine’s OS kernel but run in isolated userspaces. This makes them incredibly lightweight and fast.

Docker Compose

While Docker manages individual containers, Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. It is the orchestrator that makes managing complex stacks (like a web server + database + caching layer) simple and reproducible.


Prerequisites

To follow this tutorial, you will need:

  1. An Ubuntu Server: This guide is optimized for Ubuntu 20.04 LTS, 22.04 LTS, and the newer 24.04 LTS.
  2. User Privileges: You need a user account with sudo privileges.
  3. Internet Access: Your server must be able to download packages from the official Docker repositories.
  4. Terminal Access: You should be comfortable running commands in the command line interface (CLI).

Step 1: Preparing the System

Before installing any new software, it is best practice to ensure your existing system packages are up-to-date. This prevents potential conflicts with outdated dependencies.

Open your terminal and run the following command to update your package index and upgrade installed packages:

Bash

sudo apt update && sudo apt upgrade -y

Next, we need to install a few prerequisite packages that allow apt (the package manager) to use packages over HTTPS. This is a security requirement for accessing the official Docker repository.

Run the following command:

Bash

sudo apt install ca-certificates curl gnupg lsb-release -y

  • ca-certificates: Allows the system to verify the validity of SSL certificates.
  • curl: A tool for transferring data with URLs, which we will use to download the Docker GPG key.
  • gnupg: The GNU Privacy Guard, used for verifying the authenticity of the software packages.
  • lsb-release: Reports your Ubuntu release codename, which we will use when adding the Docker repository.

Step 2: Adding the Official Docker Repository

While Ubuntu’s default repositories often contain a version of Docker, it is frequently outdated. To ensure we get the latest features, security patches, and bug fixes, we will install Docker Community Edition (Docker CE) directly from Docker’s official repository.

1. Add Docker’s Official GPG Key

First, we need to add the GPG key to verify the integrity of the packages we are about to download.

Bash

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

2. Set Up the Repository

Now, we will add the Docker repository to our system’s software sources list. Copy and paste the following command entirely; it automatically detects your specific Ubuntu version (using lsb_release -cs) so you don’t have to type it manually.

Bash

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
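For reference, on a 64-bit Ubuntu 22.04 (codename jammy) machine, the generated /etc/apt/sources.list.d/docker.list would contain a single line roughly like this:

```
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable
```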

Once this is done, update the package index again so your system recognizes the newly added Docker repository:

Bash

sudo apt update

Step 3: Installing Docker Engine

With the repository set up, installing Docker is straightforward. We will install the latest version of Docker Engine, the CLI (Command Line Interface), and containerd (the container runtime).

Run the following command:

Bash

sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

Note: You might notice we are installing docker-compose-plugin. In the past, Docker Compose was a separate standalone binary (docker-compose). The modern standard (Compose V2) is integrated directly into the Docker CLI as a plugin, allowing you to run docker compose (with a space) instead of docker-compose (with a hyphen).

Verify the Installation

Check that Docker is installed and the daemon is running:

Bash

sudo systemctl status docker

You should see an output indicating Active: active (running). Press q to exit the status view.
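You can also confirm that the client and the Compose plugin respond. The exact version numbers will differ on your machine; this small sketch just checks that the commands resolve, and falls back to a notice if docker is not yet on your PATH:

```shell
# Print the Docker CLI version if available; otherwise report
# that docker is missing from PATH.
if command -v docker >/dev/null 2>&1; then
    status="installed: $(docker --version)"
    docker compose version || true
else
    status="missing"
fi
echo "docker: $status"
```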


Step 4: Configuring Docker (Post-Installation)

By default, the Docker daemon runs as the root user. This means every time you want to run a docker command, you must preface it with sudo. This can be tedious and can lead to permission issues in your development workflow.

We can fix this by adding your current user to the docker group.

1. Create the Docker Group

The group may already exist from the installation, but let’s ensure it’s there:

Bash

sudo groupadd docker

2. Add Your User to the Group

Run this command to add your current user ($USER) to the docker group:

Bash

sudo usermod -aG docker $USER

3. Apply the Changes

To make these group changes effective, you usually need to log out and log back in. However, you can force the system to recognize the group change in your current session by running:

Bash

newgrp docker

Now, try running a simple test command without sudo:

Bash

docker ps

If you see a list of headers (CONTAINER ID, IMAGE, COMMAND, etc.) and no “permission denied” errors, you have successfully configured your user permissions.
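To double-check the group change without relying on docker itself, you can ask the system directly which groups your current session belongs to (a small sketch; id and grep are standard on Ubuntu):

```shell
# Report whether the current session belongs to the docker group.
if id -nG | grep -qw docker; then
    in_group="yes"
else
    in_group="no"
fi
echo "docker group membership: $in_group"
```

If this prints "no" even after running usermod, remember that group changes only take effect in new login sessions (or after newgrp docker).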

4. Enable Docker on Boot

You likely want your containers to start automatically if your server reboots. Ensure the Docker systemd service is enabled:

Bash

sudo systemctl enable docker.service
sudo systemctl enable containerd.service

Step 5: Testing with “Hello World”

To ensure the entire pipeline—from the client to the daemon to the Docker Hub registry—is working correctly, we will run the famous “hello-world” image.

Bash

docker run hello-world

What happens when you run this?

  1. Local Check: Docker checks if the hello-world image exists locally on your machine.
  2. Download: Since you just installed Docker, it won’t find it. It reaches out to Docker Hub (the default registry), downloads the image, and stores it locally.
  3. Execution: It creates a container from that image and runs the application inside it.
  4. Output: The application prints a “Hello from Docker!” message and some explanation text, then exits.

If you see the message, your installation is working end to end.


Step 6: Using Docker Compose (A Practical Example)

Installing Docker is just the beginning. The real power comes from Docker Compose, which allows you to define your infrastructure as code. Let’s create a simple web server setup to demonstrate how this works.

1. Create a Project Directory

Keep your home directory organized by creating a folder for this project.

Bash

mkdir ~/my-web-server
cd ~/my-web-server

2. Create the docker-compose.yml file

This file is the instruction manual for Docker Compose. We will use nano, a simple text editor, to create it.

Bash

nano docker-compose.yml

Paste the following configuration into the file:

YAML

version: '3.8'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html
    restart: always

Breaking down the file:

  • version: Declares the Compose file format. The modern docker compose plugin ignores this field (you may see a warning that it is obsolete), so it is safe to omit.
  • services: Defines the containers we want to run. We have one service named web.
  • image: Tells Docker to use the nginx image (a popular web server).
  • ports: Maps port 8080 on your host machine to port 80 inside the container. This means you will access the site via port 8080.
  • volumes: Maps a folder named html in your current directory to the default Nginx web folder. This allows you to edit files on your host and see changes instantly in the container.
  • restart: always: Ensures the container restarts automatically if it crashes or if the server reboots.

Save the file by pressing CTRL+O, Enter, and then CTRL+X to exit.
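Compose really shines once a stack has more than one service. As a sketch of where this file could go next, a caching layer could be added alongside web by appending another service block (the redis image is a real Docker Hub image, but wiring an application to actually use it is beyond this example):

```yaml
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html
    restart: always

  cache:
    image: redis:7
    restart: always
```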

3. Create a Custom Index Page

Since we mapped a volume to ./html, we need to create that content so Nginx has something to serve.

Bash

mkdir html
nano html/index.html

Paste this simple HTML:

HTML

<!DOCTYPE html>
<html>
<head>
    <title>My Docker Site</title>
</head>
<body>
    <h1>Success! Docker Compose is running correctly.</h1>
    <p>This page is being served from an Nginx container.</p>
</body>
</html>

Save and exit (CTRL+O, Enter, CTRL+X).

4. Run Docker Compose

Now, bring the environment to life with a single command:

Bash

docker compose up -d

The -d flag stands for “detached,” meaning the containers will run in the background, leaving your terminal free for other tasks.

5. Verify the Result

Open your web browser and navigate to your server’s IP address followed by port 8080:

http://your_server_ip:8080

You should see the “Success! Docker Compose is running correctly” page.
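If you are working over SSH without a browser handy, you can perform the same check from the server itself. This is a sketch assuming curl is installed; it prints the HTTP status code, or 000 if nothing is listening on the port:

```shell
# Ask the local Nginx container for its HTTP status code.
# Prints 200 when the stack is up, 000 when nothing answers.
if command -v curl >/dev/null 2>&1; then
    code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080 || true)
else
    code="curl-missing"
fi
echo "HTTP status: ${code:-000}"
```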


Essential Docker Commands Cheat Sheet

As you continue your journey, these commands will become your daily bread and butter.

Managing Images

  • docker images: Lists all images stored locally on your machine.
  • docker pull <image_name>: Downloads an image from Docker Hub without running it.
  • docker rmi <image_id>: Deletes a specific image to free up space.
  • docker image prune: Removes dangling images; add -a to remove every image not used by a container.

Managing Containers

  • docker ps: Lists currently running containers.
  • docker ps -a: Lists all containers, including those that have stopped.
  • docker stop <container_id>: Gracefully stops a running container.
  • docker rm <container_id>: Deletes a stopped container.
  • docker logs <container_id>: View the logs/output of a specific container (crucial for debugging).
  • docker exec -it <container_id> bash: Opens an interactive shell inside the container, allowing you to browse its file system.
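Stopping and removing often go together, so it can be handy to wrap them in a tiny helper in your shell profile (a sketch; the function name is our own invention):

```shell
# Stop and remove a container by name or ID, ignoring errors if it
# is already stopped or already gone.
drop_container() {
    docker stop "$1" 2>/dev/null || true
    docker rm "$1" 2>/dev/null || true
}

# Usage: drop_container my-old-nginx
```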

Managing Compose Stacks

  • docker compose up -d: Starts the services defined in your docker-compose.yml.
  • docker compose down: Stops and removes the containers and networks defined in the file (add -v to also remove its named volumes).
  • docker compose logs -f: Follows the log output of all services in the stack in real-time.
  • docker compose restart: Restarts the services.
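A workflow you will repeat constantly is pulling newer images and recreating the stack. Sketched as a helper, to be run from the directory containing docker-compose.yml (the function name is hypothetical):

```shell
# Pull the latest images referenced by docker-compose.yml, then
# recreate only the containers whose image actually changed.
update_stack() {
    docker compose pull &&
    docker compose up -d
}
```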

Why Use Docker on Ubuntu Server?

You might be wondering, “Why go through all this trouble instead of just installing Nginx directly on Ubuntu?”

  1. Isolation: If you install Node.js v14 for one app and Node.js v18 for another directly on the server, you will eventually face version conflicts. With Docker, App A gets its own container with Node 14, and App B gets a separate one with Node 18. They never touch each other.
  2. Cleanliness: Experimenting with software often leaves behind configuration files and dependencies even after uninstalling. With Docker, deleting a container removes everything associated with it. Your host OS stays pristine.
  3. Security: If a hacker compromises a web server running in a container, they are trapped inside that container. They do not automatically gain access to your entire host server (provided you have followed security best practices).
  4. Backup and Migration: Moving a Docker setup to a new server is as simple as copying the docker-compose.yml file and the data volumes. You run docker compose up on the new server, and everything is restored exactly as it was.
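Point 4 can be sketched concretely. Assuming your data lives in bind mounts under the project directory (as in the ~/my-web-server example above) and you can SSH to the new machine, a migration is roughly this (user@new-server is a placeholder for your real destination):

```shell
# Copy the project directory (compose file plus bind-mounted data)
# to a new host, then start the stack there.
migrate_stack() {
    rsync -az ~/my-web-server/ user@new-server:~/my-web-server/ &&
    ssh user@new-server 'cd ~/my-web-server && docker compose up -d'
}
```

Note that containers using named volumes (rather than bind mounts) need their volume data exported separately.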

Conclusion

Congratulations! You have successfully installed Docker Engine and Docker Compose on your Ubuntu server. You have also configured your user permissions, verified the installation, and even deployed a live web server using a Compose file.

You now possess the foundational tools to deploy virtually any modern software application. Whether you plan to host a Plex media server, a Nextcloud instance, or your own custom web application, the workflow remains the same: define it in Docker Compose, and spin it up.


