Install Docker and Portainer in a VM using Ansible

Ákos Takács - Jun 2 - Dev Community

Introduction

This episode is actually why I started this series in the first place. I am an active Docker user and Docker fan, but I like containers and DevOps topics in general. I am a moderator on the official Docker forums, and I see that people often struggle with the installation process of Docker CE or Docker Desktop. Docker Desktop starts a virtual machine, and its GUI manages the Docker CE instance inside that virtual machine, even on Linux. Even though I prefer not to use a GUI for creating containers, I admit it can be useful in some situations, but you always need to be ready to use the command line, where all the commands are available. In this episode I will use Ansible to install Docker CE in the previously created virtual machine, and I will also install a web-based graphical interface, Portainer.

If you want to be notified about other videos as well, you can subscribe to my YouTube channel: https://www.youtube.com/@akos.takacs

Table of contents

Before you begin

Requirements

» Back to table of contents «

Download the already written code of the previous episode

» Back to table of contents «

If you started the tutorial with this episode, clone the project from GitHub:



git clone https://github.com/rimelek/homelab.git
cd homelab



If you have just cloned the project, or you want to make sure you are using the exact same code I did, check out the previous episode's code in a new branch:



git checkout -b tutorial.episode.8b tutorial.episode.9.1



Have the inventory file

» Back to table of contents «

Copy the inventory template:



cp inventory-example.yml inventory.yml



Activate the Python virtual environment

» Back to table of contents «

How you activate the virtual environment depends on how you created it. The episode The first Ansible playbook describes how to create and activate the virtual environment using the "venv" Python module, and in the episode The first Ansible role we created helper scripts as well, so if you haven't created the environment yet, you can create it by running



./create-nix-env.sh venv



Optionally start an SSH agent:



ssh-agent $SHELL



and activate the environment with



source homelab-env.sh



Small improvements before we start today's main topic

You can skip this part if you joined the tutorial at this episode and don't want to improve the other playbooks.

Disable gathering facts automatically

» Back to table of contents «

We discussed facts before in the "Using facts and the GitHub API in Ansible" episode, but we left this setting at its default in the other playbooks. Let's quickly add gather_facts: false to all playbooks except playbook-hello.yml, as that one was only meant to demonstrate how a playbook runs.
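If you haven't seen the option in context yet, here is a minimal sketch of a play with automatic fact gathering disabled (the play name, hosts value and task are placeholders, not taken from the actual playbooks):


- name: Example play without automatic fact gathering
  hosts: all
  gather_facts: false
  tasks:
    - name: Show that the play runs without gathered facts
      ansible.builtin.debug:
        msg: Facts are only collected if a task explicitly runs ansible.builtin.setup
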

Create an inventory group for LXD playbooks

» Back to table of contents «

Now that we have a separate group for the virtual machines that run Docker, we can also create a new group for LXD, so when we add more machines, we will not install LXD on every single machine, and we will not remove it from a machine on which it was not installed. Let's add the following to inventory.yml:



lxd_host_machines:
  hosts:
    YOURHOSTNAME:



NOTE: Replace YOURHOSTNAME with your actual hostname which you used in the inventory under the special group called "all".

In my case, it is the following:



lxd_host_machines:
  hosts:
    ta-lxlt:



And now, replace hosts: all in playbook-lxd-install.yml and playbook-lxd-remove.yml with hosts: lxd_host_machines.
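For reference, after the change the top of playbook-lxd-install.yml would look roughly like this (the play name is an assumption based on earlier episodes; only the hosts line matters here):


- name: Install LXD
  hosts: lxd_host_machines
  gather_facts: false
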

Reload ZFS pools after removing LXD to make the playbook more stable

» Back to table of contents «

When I wrote the "Remove LXD using Ansible" episode, the playbook worked for me every time. Since then, I noticed that sometimes it cannot delete the ZFS pool, because the pool appears to be missing. I couldn't actually figure out why it happens, but a workaround can be implemented to make the playbook more stable. We have to restart the zfs-import-cache Systemd service, which reloads the ZFS pools, so the next task can delete the pool and the disks can be wiped as well.

Open roles/zfs_destroy_pool/tasks/main.yml and look for the following task:



- name: Get zpool facts
  ignore_errors: true
  community.general.zpool_facts:
    name: "{{ zfs_destroy_pool_name }}"
  register: _zpool_facts_task



All we have to do is add a new task before it:



# To fix the issue of missing ZFS pool after uninstalling LXD
- name: Restart ZFS import cache
  become: true
  ansible.builtin.systemd:
    state: restarted
    name: zfs-import-cache



The built-in systemd module can restart a Systemd service if the "state" is "restarted".

Add a host to a dynamically created inventory group

» Back to table of contents «

Since we already configured our SSH client for the newly created virtual machine last time, we could simply add a new group to the inventory file and put the virtual machine in it. But sometimes you don't want to do that, or you can't. That's why I wanted to show you a way to create a new inventory group without changing the inventory file, and then add a host to this group. Since last time we also used Ansible to get the IP address of the new virtual machine, we can add that IP to the inventory group. The next task shows how you can do that.



    # region task: Add Docker VM to Ansible inventory
    - name: Add Docker VM to Ansible inventory
      changed_when: false
      ansible.builtin.add_host:
        groups: _lxd_docker_vm
        hostname: "{{ vm_inventory_hostname }}"
        ansible_user: "{{ config_lxd_docker_vm_user }}"
        ansible_become_pass: "{{ config_lxd_docker_vm_pass }}"
        # ansible_host is not necessary, inventory hostname will be used
        ansible_ssh_private_key_file: "{{ vm_ssh_priv_key }}"
        #ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
        ansible_ssh_host_key_checking: false
    # endregion



You need to add the above task to the "Create the VM" play in the playbook-lxd-docker-vm.yml playbook. The built-in add_host module is what we need here. Despite what you see in the task, it has only two parameters; everything else is a variable which you could also set in the inventory file. The groups and name parameters have aliases, and I thought that using the hostname alias of name would be better, as we indeed add a hostname or an IP as its value. groups can be a list or a string; I defined it as a string, as I have only one group to which I add the VM.

I mentioned before that I like to start the names of helper variables with an underscore. The name of the group is not a variable, but I start it with an underscore too, so I will know it is a temporary, dynamically created inventory group. We also rely on some variables that we defined in the previous episode:

  • vm_inventory_hostname: The inventory hostname of the VM, which is also used in the SSH client configuration. It actually comes from the value of config_lxd_docker_vm_inventory_hostname with a default in case it is not defined.
  • config_lxd_docker_vm_user: The user that we created using cloud-init and with which we can SSH into the VM.
  • config_lxd_docker_vm_pass: The sudo password of the user. It comes from a secret.
  • vm_ssh_priv_key: The path of the SSH private key used for the SSH connection. It is just a short alias for config_lxd_docker_vm_ssh_priv_key which can be defined in the inventory file.

Using these variables we could configure the SSH connection parameters like ansible_user, ansible_become_pass and ansible_ssh_private_key_file. We also have a new variable, ansible_ssh_host_key_checking. When we first SSH to a remote server, we need to accept the fingerprint of the server's SSH host key, which means we know and trust the server. Since we dynamically created this virtual machine and detected its IP address, we would need to accept the fingerprint every time we recreate the VM, so I simply disable host key checking by setting the value to the boolean false.

Use a dynamically created inventory group

» Back to table of contents «

We have a new inventory group, but we still don't use it. Now we need a new play in the playbook which will actually use this group. Add the following play skeleton to the end of playbook-lxd-docker-vm.yml.



# region play: Configure the OS in the VM
- name: Configure the OS in the VM
  hosts: _lxd_docker_vm
  gather_facts: false
  pre_tasks:
  roles:
# endregion



Even though we forced Ansible to wait until the virtual machine gets an IP address, having an IP address doesn't mean the SSH daemon is ready in the VM. So we need the following pre task:



    - name: Waiting for SSH connection
      ansible.builtin.wait_for_connection:
        timeout: 20
        delay: 0
        sleep: 3
        connect_timeout: 2



The built-in wait_for_connection module can be used to retry connecting to the servers. We start checking it immediately, so we set the delay to 0. If we are lucky, it will be ready right away. If it does not connect in 2 seconds (connect_timeout), Ansible will "sleep" for 3 seconds and try again. If the connection is not made in 20 seconds (timeout), the task will fail.

While I was testing the almost finished playbook, I realized that sometimes the installation of some packages failed as if they were not in the APT cache yet, so I added a new pre task to update the APT cache before we start installing anything. We already discussed this module in the "Using facts and the GitHub API in Ansible" episode.



    - name: Update APT cache
      become: true
      changed_when: false
      ansible.builtin.apt:
        update_cache: true



If you don't want to add it, you can just rerun the playbook and it will probably work. We also have an already written Ansible role, cli_tools, which we want to use here too. So this is how our second play looks in playbook-lxd-docker-vm.yml:



# region play: Configure the OS in the VM
- name: Configure the OS in the VM
  hosts: _lxd_docker_vm
  gather_facts: false
  pre_tasks:
    - name: Waiting for SSH connection
      ansible.builtin.wait_for_connection:
        timeout: 20
        delay: 0
        sleep: 3
        connect_timeout: 2
    - name: Update APT cache
      become: true
      changed_when: false
      ansible.builtin.apt:
        update_cache: true
  roles:
    - role: cli_tools
# endregion



Now you could delete the virtual machine if you already created it last time, and run the playbook again with the new play to recreate the virtual machine and immediately install the command line tools in it.

Install Docker CE in a VM using Ansible

Using 3rd-party roles to install Docker

» Back to table of contents «

I admit that there are already great existing roles we could use to install Docker, for example the one made by Jeff Geerling, which supports multiple Linux distributions, so feel free to use it. But we are still practicing writing our own roles, so I made a simple one for you, although it works only on Ubuntu. On the other hand, I will add something that even Jeff Geerling's role doesn't do.

Default variables for the docker role

» Back to table of contents «

We will create some default variables in roles/docker/defaults/main.yml:



docker_version: "*.*.*"
docker_sudo_users: []



The first, docker_version, defines which version we want to install. When you are just starting to play with Docker but don't want to use Play with Docker, you probably want to install the latest version. That's why the default value is "*.*.*", which means the latest major, minor and patch version of Docker CE. You will see the implementation soon. The second variable is docker_sudo_users, which is an empty list by default. We will be able to add users to it who should be able to use Docker. We will discuss it in more detail later.
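Although we will only add the role to the playbook in the next section, it is worth seeing what overriding the default would look like; a minimal sketch (the 26.1.* value is just an illustration, not something this episode uses):


  roles:
    - role: docker
      # pin to the latest 26.1.x patch release instead of the newest version
      docker_version: "26.1.*"
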

Add docker to the Ansible roles

» Back to table of contents «

Before we continue, let's add "docker" as a new role to our second play in playbook-lxd-docker-vm.yml:



  # ...
  roles:
    - role: cli_tools
    - role: docker
      docker_sudo_users: "{{ config_lxd_docker_vm_docker_sudo_users | default([]) }}"



You know by now that this way we can add this new config variable to inventory.yml:



all:
  vars:
    # ...
    config_lxd_docker_vm_user: manager
    config_lxd_docker_vm_docker_sudo_users:
      - "{{ config_lxd_docker_vm_user }}"



Note that config_lxd_docker_vm_user is probably already defined if you followed the previous episodes as well.

Install the dependencies of Docker

» Back to table of contents «

As I often say, we should always start with the official documentation. It starts with uninstalling old and unofficial packages. Our role will not include that step, so you can do it manually or write your own role for it as homework. The documentation then updates the APT cache, which we just did as a pre task, so we will install the dependencies first. The official documentation says:



sudo apt-get install ca-certificates curl



Which looks like this in Ansible in roles/docker/tasks/main.yml:



- name: Install dependencies
  become: true
  ansible.builtin.apt:
    name:
      - ca-certificates
      - curl



Note: We know that the "cli_tools" role already installed curl, but we don't care, because when we create a role, we try to make it work without depending on other roles. So even if we decide later not to use the "cli_tools" role, our "docker" role will still work perfectly.

Configure the official APT repository

» Back to table of contents «

The official documentation continues with creating a folder, /etc/apt/keyrings. It uses the



sudo install -m 0755 -d /etc/apt/keyrings



command, but it really just creates a folder this time, which looks like this in Ansible:




- name: Make sure the folder of the keyrings exists
  become: true
  ansible.builtin.file:
    state: directory
    mode: 0755
    path: /etc/apt/keyrings



The next step is downloading the APT key for the repository. Previously, the documentation used the apt-key command, which is deprecated, so it was replaced with the following:



sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc



Which looks like this in Ansible:



- name: Install the APT key of Docker's APT repo
  become: true
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/ubuntu/gpg
    dest: /etc/apt/keyrings/docker.asc
    mode: a+r



Then the official documentation shows how you can add the repository to APT depending on the CPU architecture and Ubuntu release code name, like this:



# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update



In the "Using facts and the GitHub API in Ansible" episode we already learned to get the architecture. We also need the release code name, so we gather the distribution_release subset of Ansible facts as well.



- name: Get distribution release fact
  ansible.builtin.setup:
    gather_subset:
      - distribution_release
      - architecture



Before we continue with the next task, we will add some variables to roles/docker/vars/main.yml.



docker_archs:
  x86_64:  amd64
  amd64:   amd64
  aarch64: arm64
  arm64:   arm64
docker_arch: "{{ docker_archs[ansible_facts.architecture] }}"
docker_distribution_release: "{{ ansible_facts.distribution_release }}"



Again, this is very similar to what we have done before to separate our helper variables from the tasks, and now we can generate docker.list under /etc/apt/sources.list.d/. To do that, we add the new task in roles/docker/tasks/main.yml:



- name: Add Docker APT repository
  become: true
  ansible.builtin.apt_repository:
    filename: docker
    repo: "deb [arch={{ docker_arch }} signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu {{ docker_distribution_release }} stable"
    state: present
    update_cache: true



The built-in apt_repository module can also update the APT cache after adding the new repo. The filename is automatically generated if we don't set it, but the "filename" parameter is actually a name without the extension, so do not add .list at the end of the name.
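If everything went well, the task above leaves a file at /etc/apt/sources.list.d/docker.list containing a single line similar to the following (the architecture and release code name depend on your VM; this example assumes an amd64 Ubuntu 24.04 guest):


deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable
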

Install a specific version of Docker CE

» Back to table of contents «

The official documentation recommends using the following command to list the available versions of Docker CE on Ubuntu.



apt-cache madison docker-ce | awk '{ print $3 }'



It shows the full package version string, which includes more than just the version number of Docker CE. Fortunately, the actual version number can be matched like this:



apt-cache madison docker-ce \
  | awk '$3 ~ /^([0-9]+:)([0-9]+\.[0-9]+\.[0-9]+)(-[0-9]+)?(~.*)$/ {print $3}'



Where the version number is the second expression in parentheses.



[0-9]+\.[0-9]+\.[0-9]+



We can replace it with an actual version number, keeping the backslashes:



26\.1\.3



Let's search for only that version:



apt-cache madison docker-ce \
  | awk '$3 ~ /^([0-9]+:)26\.1\.3(-[0-9]+)?(~.*)$/ {print $3}'



Output:



5:26.1.3-1~ubuntu.22.04~jammy



This is what we will implement in Ansible:



- name: Get full package version for {{ docker_version }}
  changed_when: false
  ansible.builtin.shell: |
    apt-cache madison docker-ce \
    | awk '$3 ~ /^([0-9]+:){{ docker_version | replace('*', '[0-9]+') | replace('.', '\.') }}(-[0-9]+)?(~.*)$/ {print $3}'
  register: _docker_versions_command


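To make the Jinja2 substitution more tangible, this is roughly what the rendered awk pattern becomes for two example values of docker_version (these renderings are my own illustration, not output of the playbook):


docker_version: "*.*.*"  ->  ^([0-9]+:)[0-9]+\.[0-9]+\.[0-9]+(-[0-9]+)?(~.*)$
docker_version: "26.0.*" ->  ^([0-9]+:)26\.0\.[0-9]+(-[0-9]+)?(~.*)$
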

I hope it now starts to make sense why the default value of docker_version was *.*.*: we replace the wildcards with regular expressions. We also escape all dots, as otherwise a dot would mean "any character" in the regular expression. This solution allows us to install the latest version unless we override the default value with an actual version number. Even if we override it, we can use a version like 26.0.* to get the list of available patch versions of Docker CE 26.0 instead of the latest major version. Of course, this is still a list of versions unless we set a specific version number, but we can take the first line in the next task. According to the official documentation, we would install Docker CE and related packages like this:



VERSION_STRING=5:26.1.0-1~ubuntu.24.04~noble
sudo apt-get install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING containerd.io docker-buildx-plugin docker-compose-plugin



Let's do it in Ansible:



- name: Install Docker CE
  become: true
  vars:
    _full_version: "{{ _docker_versions_command.stdout_lines[0] }}"
  ansible.builtin.apt:
    name:
      - docker-ce={{ _full_version }}
      - docker-ce-cli={{ _full_version }}
      - docker-ce-rootless-extras={{ _full_version }}
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin



What is not mentioned in the documentation is marking Docker CE packages as held. In the terminal it would be like this:



apt-mark hold docker-ce docker-ce-cli docker-ce-rootless-extras containerd.io docker-compose-plugin



It will be almost the same in Ansible as we need to use the built-in command module:



- name: Hold Docker CE packages
  become: true
  ansible.builtin.command: apt-mark hold docker-ce docker-ce-cli docker-ce-rootless-extras containerd.io docker-compose-plugin
  changed_when: false



You can check the list of held packages:



apt-mark showheld



Output:



containerd.io
docker-ce
docker-ce-cli
docker-ce-rootless-extras
docker-compose-plugin



Note: I have never actually seen an upgraded containerd cause problems, but it is a very important component of Docker CE, so I decided to hold it too. If it causes any problems, you can "unhold" it any time by running the following command:



apt-mark unhold containerd.io



Allow non-root users to use the docker commands

» Back to table of contents «

This is the part where I don't follow the documentation. The official documentation mentions this on the Linux post-installation steps for Docker Engine page:

The Docker daemon binds to a Unix socket, not a TCP port. By default it's the root user that owns the Unix socket, and other users can only access it using sudo. The Docker daemon always runs as the root user.

If you don't want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group. On some Linux distributions, the system automatically creates this group when installing Docker Engine using a package manager. In that case, there is no need for you to manually create the group.

Of course, using the docker group is not really secure, which is also mentioned right after the previous quote in the documentation:

The docker group grants root-level privileges to the user. For details on how this impacts security in your system, see Docker Daemon Attack Surface.

If we want a slightly more secure solution, we don't use the docker group, which can directly access the Docker socket; instead we create another group, like docker-sudo, and allow all users in this group to run the docker command as root by using sudo docker without a password. It involves creating a new rule in /etc/sudoers.d/docker like:



%docker-sudo ALL=(root) NOPASSWD: /usr/bin/docker



This would force users to always run sudo docker, not just docker, and they would often forget it and get an error message. We could add an alias like



alias docker='sudo \docker'



to ~/.bash_aliases, but that would work only when the user uses the Bash shell. Instead, we can add a new script at /usr/local/bin/docker, which usually takes precedence over /usr/bin/docker because /usr/local/bin comes earlier in the PATH, and put this command in the script:



#!/usr/bin/env sh

exec sudo /usr/bin/docker "$@"



This is what we will do with Ansible, so our new script will be executed, and it will call the original docker command as an argument of sudo. Now even when we use Visual Studio Code's Remote Explorer to connect to the remote virtual machine and use Docker in the VM from VSCode, /var/log/auth.log on Debian-based systems will contain exactly which docker commands were executed. If you don't find this file, it may be called /var/log/secure on your system. This is, for example, how browsing files in containers from VSCode looks in the log:



May 19 20:14:36 docker sudo:  manager : PWD=/home/manager ; USER=root ; COMMAND=/usr/bin/docker container exec --interactive d71cae80db867ee79ba66fa947ab126ac6f7b0e482ebb8b3320d9f3bfa3fb3e6 /bin/sh -c 'stat -c \'%f %h %g %u %s %X %Y %Z %n\' "/"* || true && stat -c \'%f %h %g %u %s %X %Y %Z %n\' "/".*'



This is useful when you want to investigate accidental damage caused by a Docker user who executed a command they should not have, and who may not even know what they executed. It will not protect you from intentional harm, as an actual attacker could also delete the logs. On the other hand, if you have a remote logging server where you collect logs from all machines, you will probably still have the logs to figure out what happened.

Now let's configure this in Ansible.

First we will create the docker-sudo group:



- name: Ensure group "docker-sudo" exists
  become: true
  ansible.builtin.group:
    name: docker-sudo
    state: present



Now we can finally use the docker_sudo_users variable which we defined in roles/docker/defaults/main.yml, and check whether any of the listed users does not exist.



- name: Check if docker sudo users are existing users
  become: true
  ansible.builtin.getent:
    database: passwd
    key: "{{ item }}"
  loop: "{{ docker_sudo_users }}"



The built-in getent module basically calls the getent command on Linux:



getent passwd manager



If the user exists, it returns the passwd record of the user, and fails otherwise. Now let's add the group to the users:



- name: Add users to the docker-sudo group
  become: true
  ansible.builtin.user:
    name: "{{ item }}"
    # users must be added to docker-sudo group without removing them from other groups
    append: true
    groups:
      - docker-sudo
  loop: "{{ docker_sudo_users }}"



We used the built-in user module to add the docker-sudo group to the users defined in docker_sudo_users. We are close to the end. The next step is creating the wrapper script, using a similar solution to the one we used in the hello_world role at the beginning of the series.



- name: Create a sudo wrapper for Docker
  become: true
  ansible.builtin.copy:
    content: |
      #!/usr/bin/env sh

      exec sudo /usr/bin/docker "$@"
    dest: /usr/local/bin/docker
    mode: 0755



And finally, using the same method, we create the sudoers rule:



- name: Allow executing /usr/bin/docker as root without password
  become: true
  ansible.builtin.copy:
    content: |
      %docker-sudo ALL=(root) NOPASSWD: /usr/bin/docker
    dest: /etc/sudoers.d/docker



We could now run the playbook to install Docker in the virtual machine:



./run.sh playbook-lxd-docker-vm.yml


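Once the playbook finishes, a quick sanity check is to SSH into the VM (docker.lxd.ta-lxlt is the SSH host alias from my SSH client configuration in the previous episode, so yours will be different) and run the docker client through the sudo wrapper we just installed:


ssh docker.lxd.ta-lxlt
docker version
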

Install Portainer CE, the web-based GUI for containers

» Back to table of contents «

Installing Portainer CE is actually the easy part. We will need the built-in pip module to install a Python dependency that Ansible needs to manage Docker, and then we will also use a community module called docker_container. Let's create the tasks file first at roles/portainer/tasks/main.yml:



- name: Install Python requirements
  become: true
  ansible.builtin.pip:
    name: docker

- name: Install portainer
  become: true
  community.docker.docker_container:
    name: "{{ portainer_name }}"
    state: started
    container_default_behavior: no_defaults
    image: "{{ portainer_image }}"
    restart_policy: always
    ports:
      - "{{ portainer_external_port }}:9443"
    volumes:
      - "{{ portainer_volume_name }}:/data"
      - /var/run/docker.sock:/var/run/docker.sock



After defining the variables, this will be basically equivalent to running the following in a shell:



docker run -d \
  -p 9443:9443 \
  --name portainer \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:2.20.2-alpine



Let's add our defaults at roles/portainer/defaults/main.yml:



portainer_name: portainer
portainer_image: portainer/portainer-ce:2.20.2-alpine
portainer_external_port: 9443
portainer_volume_name: "{{ portainer_name }}_data"



That's it, and now we add the role to the playbook:



  # ...
  roles:
    - role: cli_tools
    - role: docker
      docker_sudo_users: "{{ config_lxd_docker_vm_docker_sudo_users | default([]) }}"
    - role: portainer



When you have finished installing Portainer, you need to open it in a web browser on port 9443 fairly quickly. If the machine is directly reachable on your LAN, you can simply open it like https://192.168.4.58:9443. In this tutorial, our virtual machine needs an SSH tunnel like the one below, so you can use https://127.0.0.1:9443:



ssh -L 9443:127.0.0.1:9443 -N docker.lxd.ta-lxlt



Your hostname will be different. If you already have other containers with a forwarded port, you can add more ports to the tunnel:



ssh \
  -L 9443:127.0.0.1:9443 \
  -L 32768:127.0.0.1:32768 \
  -N \
  docker.lxd.ta-lxlt



If you have containers without forwarded ports from the host, you can forward your local port directly to the container IP.



ssh \
  -L 9443:127.0.0.1:9443 \
  -L 32768:127.0.0.1:32768 \
  -L 8080:172.17.0.4:80 \
  -N \
  docker.lxd.ta-lxlt



When you can finally open Portainer in your browser, create your first user and configure the connection to the local Docker environment. If you wait too long, the web interface will show an error message, and you will need to go to the terminal in the virtual machine and restart Portainer:



docker restart portainer



After that, you can start the configuration. The timeout exists so that if you install Portainer on a publicly reachable server, others have less time to create the admin user before you do, and once you have initialized Portainer, it is no longer possible to log in without a password.

Conclusion

» Back to table of contents «

I hope this episode helped you install Docker and allow non-root users to use the docker command in a slightly more secure way than the documentation suggests. Now you can have a web-based graphical interface for containers; however, Portainer is definitely not Docker Desktop, so you will not get extra features like Docker Desktop extensions. Using Ansible can help you deploy your entire dev environment, destroy it, and recreate it any time. Using containers, you can have pre-built and pre-configured applications that you can try out and learn about before customizing their configuration for your needs. When you have a production environment, you need to focus much more on security, but now that you have the tools to begin with a dev environment, you can take that next step more easily.

The final source code of this episode can be found on GitHub:

https://github.com/rimelek/homelab/tree/tutorial.episode.10

