In the previous posts of the series we learned the basics of Ansible by creating a hello world role. In an independent post I wrote about LXD to run virtual machines and containers without Docker. In this post I will show you how you can use Ansible to install LXD. This time I will focus on simplicity rather than on the best possible solution, so we can understand more easily what we are doing and improve it later.
You will also need to create a virtual Python environment. In this tutorial I used the "venv" Python module, and the folder of the virtual environment will also be named "venv".
You will also need an Ubuntu remote server. I recommend an Ubuntu 22.04 virtual machine.
Download the already written code of the previous episode
Then change ansible_host to the IP address of the Ubuntu server that you use for this tutorial, and change ansible_user to the username on the remote server that Ansible can use to log in. If you still don't have an SSH private key, read the "Generate an SSH key" part of Ansible playbook and SSH keys.
How you activate the virtual environment depends on how you created it. The episode The first Ansible playbook describes the way to create and activate the virtual environment using the "venv" Python module, and in the episode The first Ansible role we created helper scripts as well. If you haven't created the environment yet, you can create it by running:
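If you don't have the helper scripts from the earlier episodes at hand, a minimal sketch using only the "venv" module could look like this (the helper scripts may do more than these commands):

```bash
# Create the virtual environment in the "venv" folder
python3 -m venv venv

# Activate it in the current shell
source venv/bin/activate

# Install Ansible into the virtual environment
pip install ansible
```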
An Ansible role can be very simple or very complicated. When I started to learn about Ansible, I thought I had to do everything with Ansible and nothing manually, but that's wrong. It is ideal if you can implement everything in Ansible roles, but the main goal is to use Ansible to simplify the deployment and make it repeatable. When something is simple enough and implementing it in an Ansible role would make it harder to maintain or less reliable, then documenting it and doing it manually is just fine.
You should also make sure that the role is doing what you need and not what you think it could do with a little more work that you would probably never use. For example, don't add a parameter just to create a role that you could share and let other people customize when it is unlikely that you will ever share it. Add a new parameter when you have a new use case and actually need it, or when it is likely that you will need it soon and you feel it is easier to add it now than later. Otherwise, the role will be harder to maintain with nothing to gain.
We need a new role called "zfs_pool" which can create a new ZFS pool for LXD and also installs the dependencies that make it possible. Besides installing dependencies, the role has to be able to replace the following command:
sudo zpool create "$name" "${disks[@]}"
So we will need two default variables. You already know what a basic Ansible role looks like. We will need a task file and a file for the default variables.
zfs_pool/defaults/main.yml
zfs_pool_name: default
zfs_pool_disks: []
I use "default" as default value for the zfs pool name because this is a general role. Otherwise, I would have called it "lxd_zfs_pool". I know I have just told you not to implement a feature that you don't need, and you probably don't need to create other zfs pools which are not used by ansible, but separating the zfs pool creation from the LXD installation will actually make it simpler and easier to understand.
The default value of zfs_pool_disks is an empty list.
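Next comes the task file. Here is a minimal sketch of what the install task in zfs_pool/tasks/main.yml could look like; the task name matches the output shown later in this post:

```yaml
# Install the ZFS userspace tools required by zpool create
- name: Install zfsutils-linux
  become: true
  ansible.builtin.package:
    name: zfsutils-linux
    state: present
```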
This is the task which replaces apt-get install zfsutils-linux. The builtin module called "package" would work on other Linux distributions as well, but the package name could be different on those distributions. Since for now I support only Ubuntu, I don't need a more complicated task. Since I added become: true, it will be executed as root.
We should create the ZFS pool now. There is actually some ZFS support in Ansible 8.0.0, but it doesn't support creating pools. So we need to check if the pool already exists and create it only if it doesn't, which means we will use some conditional tasks, and that will require using the register keyword again, which we have already learned about.
First of all, we need to determine whether the pool exists or not:
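A sketch of this task could look like the following; the name of the registered variable is my own choice:

```yaml
# Query the pool status; fails if the pool does not exist yet
- name: Get zpool facts
  become: true
  community.general.zpool_facts:
    name: "{{ zfs_pool_name }}"
  register: zfs_pool_facts_result
  ignore_errors: true
```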
We need ignore_errors: true because otherwise the task would fail if the pool doesn't exist. The zpool_facts module in the community.general collection will also set the ansible_zpool_facts and ansible_facts.zpool_facts variables, but we don't need those. However, we need to save the status information into a variable. That's why we use the register keyword again. By the way, that status information also contains the facts, so you would have three ways to get them.
As a next step we need a conditional task that runs only if the pool is not created yet:
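A sketch of the task, assuming the variable name registered by the previous task:

```yaml
# Create the pool only when the facts task above failed (pool missing)
- name: "Create ZFS pool: {{ zfs_pool_name }}"
  become: true
  ansible.builtin.command: zpool create {{ zfs_pool_name }} {{ zfs_pool_disks | join(' ') }}
  when: zfs_pool_facts_result.failed
```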
To make the logs more informative I used the zfs_pool_name variable in the task name. The when keyword expects a boolean value or a list of boolean values, but we need only one. The previously registered variable will contain "failed" as a boolean property, so the task will run when the previous task failed. And finally, we use the builtin command module to execute our zpool create command. The join(' ') filter takes the list of disks and converts it to a string containing the disks separated by a space character.
We need a playbook to call this role. Our first playbook was simply called playbook.yml, but now let's rename it to playbook-hello.yml so we can have more playbooks.
mv playbook.yml playbook-hello.yml
Although we have a role for creating the zfs pool, our final goal is to install LXD, so our new playbook will be "playbook-lxd-install.yml".
We still have only one host, so the "hosts" parameter can refer to all the hosts. We have two parameters for the role, but for now we want a statically set pool name. lxd-default will be fine, but obviously I can't hardcode the paths of the disks, since they will be different for everyone and probably on every machine, unless you already added aliases. It means we need some global parameters. Although you could easily set zfs_pool_name and zfs_pool_disks in the inventory file, I usually find it a good practice to set role parameters in playbooks and create project-level configuration variables. This is optional: setting role parameters in inventory files makes the playbooks shorter and cleaner, but it also makes it much harder to follow where the parameters are set, and there are many places where you can set them. So choose the way that you find more maintainable in your project.
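Based on that, a sketch of playbook-lxd-install.yml could look like this; the config_lxd_zfs_pool_disks variable will be defined in the inventory shown below:

```yaml
- name: Install LXD
  hosts: all
  roles:
    - role: zfs_pool
      zfs_pool_name: lxd-default
      zfs_pool_disks: "{{ config_lxd_zfs_pool_disks }}"
```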
In my case I had to change my inventory file, so the "inventory.yml" in the project root looks like this now:
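Since the actual inventory depends on my environment, here is only a sketch; the host name matches the output later in this post, but the IP address, the username and the disk path are placeholders you have to replace:

```yaml
all:
  hosts:
    ta-lxlt:
      ansible_host: 192.168.1.50    # placeholder IP address
      ansible_user: ta              # placeholder remote username
      config_lxd_zfs_pool_disks:
        - /home/ta/lxd-default.img  # placeholder disk path
```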
If you don't understand what this inventory file is, please read the previous posts to learn more about it. The new config_lxd_zfs_pool_disks variable has to contain the list of your disks. If you don't have a physical partition, you can create a virtual disk for testing and set the size in gigabytes after -s:
truncate -s 50G <PATH>/lxd-default.img
Then refer to its absolute path in the inventory file. In the example I set 50G, but make sure you set a size that is appropriate for your free disk space.
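Now you can run the playbook. Assuming the inventory file is inventory.yml in the project root and sudo requires a password on the remote server, the command could be:

```bash
ansible-playbook -i inventory.yml playbook-lxd-install.yml --ask-become-pass
```

The output should be similar to this: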
BECOME password:
PLAY [Install LXD] *******************************************************************************************
TASK [Gathering Facts] ***************************************************************************************
ok: [ta-lxlt]
TASK [zfs_pool : Install zfsutils-linux] *********************************************************************
ok: [ta-lxlt]
TASK [zfs_pool : Get zpool facts] ****************************************************************************
fatal: [ta-lxlt]: FAILED! => {"changed": false, "msg": "ZFS pool lxd-default does not exist!"}
...ignoring
TASK [zfs_pool : Create ZFS pool: lxd-default] ***************************************************************
changed: [ta-lxlt]
PLAY RECAP ***************************************************************************************************
ta-lxlt : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
Notice that we had a fatal error, which we ignored, and the role detected the missing pool and created it. If you run the same command again, the error will not appear and the pool creation will be skipped.
As a next step, we will install LXD using a config file. If you don't have the config file yet, please read Creating virtual machines with LXD first. I will use the same config file that I exported in that post.
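The new role will be called "lxd_install", as you can see in the output later in this post. A sketch of its lxd_install/defaults/main.yml could look like this; apart from lxd_install_init_enabled and the /opt/lxd/init.yml path mentioned below, the variable names and the channel value are my assumptions:

```yaml
# Snap channel of LXD; the LTS channel at the time of writing (assumption)
lxd_install_channel: 5.0/stable
# Whether LXD should be initialized after installation
lxd_install_init_enabled: true
# Where the init config will be copied on the remote server
lxd_install_init_file: /opt/lxd/init.yml
```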
As you can see, I saved the LTS version as the default channel. Using the LTS version as the default value could give you a more stable LXD, but you can still override it in the playbook. We also need to initialize LXD after installing it, although you might not want to initialize it every time. Admittedly, making that optional does not add real value to our role, but we can practice conditional tasks. By default, the init config will be copied to /opt/lxd/init.yml, and you need to override this value if you don't like it. It's time to create our task file.
A block is a list of tasks. Since we have multiple tasks that we have to skip if lxd_install_init_enabled is not true, it is easier to set the condition on the block. We also use "| bool" after the variable name, because the variable can also come from the command line by passing -e lxd_install_init_enabled=false, and then it will always be a string, so we have to convert it to a boolean type. If you don't do that, the string "false" will be evaluated as boolean "true" as well. In the block we have to indent the tasks of course. The first task in the block will create the config directory:
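Here is a sketch of lxd_install/tasks/main.yml up to this point. The task names match the output shown later in this post; using the community.general.snap module, the block name, and the dirname filter are my assumptions:

```yaml
# Install LXD from the configured snap channel
- name: Install LXD snap package
  become: true
  community.general.snap:
    name: lxd
    channel: "{{ lxd_install_channel }}"

# Everything below runs only if initialization is enabled
- name: Initialize LXD
  become: true
  when: lxd_install_init_enabled | bool
  block:
    # Create the parent directory of the init config file
    - name: Create LXD config folder
      ansible.builtin.file:
        path: "{{ lxd_install_init_file | dirname }}"
        state: directory
        owner: root
        group: root
        mode: "0700"
```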
The builtin file module is for creating directories, setting permissions and ownership, and creating links as well. We don't want anyone else to read our config, so we allow only the owner (root in this case) to access files in the directory.
Now that the directory is created, we can use the copy module to copy the init config to the remote server:
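These two tasks continue the block above, so they are indented the same way as the previous task. The source file name is an assumption (the copy module looks it up in the files directory of the role), and so is the apply command; lxd init --preseed reads the configuration from its standard input:

```yaml
    # Copy the exported init config to the remote server
    - name: Copy LXD config
      ansible.builtin.copy:
        src: init.yml
        dest: "{{ lxd_install_init_file }}"
        owner: root
        group: root
        mode: "0600"

    # Initialize LXD from the copied preseed file
    - name: Apply LXD config
      ansible.builtin.shell: lxd init --preseed < {{ lxd_install_init_file }}
```

Running the playbook again, the end of the output should be similar to this: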
TASK [lxd_install : Install LXD snap package] ****************************************************************
[DEPRECATION WARNING]: The DependencyMixin is being deprecated. Modules should use
community.general.plugins.module_utils.deps instead. This feature will be removed from community.general in
version 9.0.0. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [ta-lxlt]
TASK [lxd_install : Create LXD config folder] ****************************************************************
changed: [ta-lxlt]
TASK [lxd_install : Copy LXD config] *************************************************************************
changed: [ta-lxlt]
TASK [lxd_install : Apply LXD config] ************************************************************************
changed: [ta-lxlt]
PLAY RECAP ***************************************************************************************************
ta-lxlt : ok=7 changed=4 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
The only problem is that if you run the playbook again, it will initialize LXD again even though the init config hasn't changed. Let's modify the last two tasks:
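A sketch of the modified tasks, reusing the assumptions from the previous sketch; the name of the registered variable is again my own choice:

```yaml
    - name: Copy LXD config
      ansible.builtin.copy:
        src: init.yml
        dest: "{{ lxd_install_init_file }}"
        owner: root
        group: root
        mode: "0600"
      # Save the result so we know whether the file changed
      register: lxd_install_init_file_result

    - name: Apply LXD config
      ansible.builtin.shell: lxd init --preseed < {{ lxd_install_init_file }}
      # Reinitialize only when the config file actually changed
      when: lxd_install_init_file_result.changed
```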
All we added was the register keyword to the copy task and the when keyword to the apply task.
Note: This is usually a good way to determine whether a file changed or not, but if for some reason the second task fails and you need to rerun the playbook, the config file will already exist and the "Apply LXD config" task will not run. If that happens, you need to remove or change the init config on the remote server and run the playbook again.
We finally learned how we can install LXD using Ansible, but we still need to remove it manually. That's okay, but we use Ansible to create a home lab that we will probably want to reinstall many times, so next time we will learn how we can use Ansible to remove LXD.
The final source code of this episode can be found on GitHub:
Source code to create a home lab. Part of a video tutorial
README
This project was created to help you build your own home lab where you can test
your applications and configurations without breaking your workstation, so you can
learn on cheap devices without paying for more expensive cloud services.
The project contains code written for the tutorial, but you can also use parts of it
if you refer to this repository.
Note: The inventory.yml file is not shared since it depends on the actual environment,
so it will be different for everyone. If you want to learn more about the inventory file,
watch the videos on YouTube or read the written version on https://dev.to. Links are in
the video descriptions on YouTube.
You can also find an example inventory file in the project root. You can copy that and change
the content, so you will use your IP…