Remove LXD using Ansible

Ákos Takács - Oct 7 '23 - Dev Community

Intro

Last time we installed LXD using Ansible, but that is not enough: we want to be able to install, remove, and reinstall dev environments multiple times a day if necessary, so we also need to be able to remove everything we installed. When you create a virtual machine to install something in it, you can just remove the virtual machine, but in this case we configure the physical host so we can run virtual machines on it.

When we installed LXD, we used two roles: one for configuring the ZFS pool for LXD and another to install LXD itself and initialize the configuration. Now we need roles to do the opposite: one for removing the LXD package and one for destroying the ZFS pool. We also want to wipe the filesystem signatures on the disks, so we can use them again for anything else. Note that this is not a secure way to destroy data on the disk; we only remove the information about the filesystem it had.

If you want to be notified about my other videos, please, subscribe to my YouTube channel: https://www.youtube.com/@akos.takacs

Table of contents

  • Before you begin
  • Requirements
  • Download the already written code of the previous episode
  • Have the inventory file
  • Activate the Python virtual environment
  • Ansible role to remove LXD
  • Ansible role to delete a ZFS pool
  • Overview of the zfs pool destroyer role
  • Destroying the pool without confirmation
  • Require confirmation before dangerous operations
  • Checking empty parameters
  • Ask for confirmation before deleting
  • Run the playbook
  • Wrapper script for running Ansible playbooks
  • Conclusion

Before you begin

Requirements

» Back to table of contents «

  • The project requires Python 3.11. If you have an older version and you don't know how you could install a newer one, read about Nix in Install Ansible 8 on Ubuntu 20.04 LTS using Nix.
  • You will also need to create a virtual Python environment. In this tutorial I use the "venv" Python module, and the folder of the virtual environment will be named "venv".
  • You will also need a remote Ubuntu server. I recommend an Ubuntu 22.04 virtual machine.

Download the already written code of the previous episode

» Back to table of contents «

If you started the tutorial with this episode, clone the project from GitHub:

git clone https://github.com/rimelek/homelab.git
cd homelab

If you cloned the project just now, or you want to make sure you are using the exact same code I did, switch to the code of the previous episode in a new branch:

git checkout -b tutorial.episode.5b tutorial.episode.5

Have the inventory file

» Back to table of contents «

Copy the inventory template

cp inventory-example.yml inventory.yml
  • Change ansible_host to the IP address of the Ubuntu server that you use for this tutorial,
  • and change ansible_user to the username on the remote server that Ansible can use to log in (a minimal sketch of the result follows this list).
  • If you still don't have an SSH private key, read the Generate an SSH key part of Ansible playbook and SSH keys.
  • If you want to run the playbook called playbook-lxd-install.yml, you will need to configure a physical or virtual disk, which I wrote about in The simplest way to install LXD using Ansible. If you don't have a usable physical disk, look for truncate -s 50G <PATH>/lxd-default.img to create a virtual disk.
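
Here is a minimal sketch of what the finished inventory.yml could look like. The host alias, IP address, and username below are placeholders; follow the structure of inventory-example.yml:

all:
  hosts:
    ta-lxlt:                      # placeholder host alias
      ansible_host: 192.168.1.50  # placeholder IP address of your Ubuntu server
      ansible_user: ta            # placeholder remote username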

Activate the Python virtual environment

» Back to table of contents «

How you activate the virtual environment depends on how you created it. The episode The first Ansible playbook describes the way to create and activate the virtual environment using the "venv" Python module, and in the episode The first Ansible role we created helper scripts as well, so if you haven't created it yet, you can create the environment by running

./create-nix-env.sh venv

Optionally start an ssh agent:

ssh-agent $SHELL

and activate the environment with

source homelab-env.sh

Ansible role to remove LXD

» Back to table of contents «

We will create a role to remove the LXD that we installed with our other role, not one that properly removes any kind of LXD installation. Removing the snap package on Ubuntu would be the same on every machine, but again, you may use a different distribution, even one without snap, or you may not configure LXD with ZFS storage, so let's keep that in mind.

As always, we will need a task file, and the most obvious thing we need to do is remove the LXD snap package. We used the snap module before; we need it again.

roles/lxd_remove/tasks/main.yml

- name: Remove LXD snap package
  become: true
  community.general.snap:
    name: lxd
    state: absent

The difference is that the state is "absent" and not "present". Normally I would also pass the --purge option so snap will not save a snapshot before removing the package, but I couldn't figure out how it should be done with this module, so we are going to remove the snapshot in another task.
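
For reference, a possible alternative is to call the snap command directly, since snap remove does accept the --purge flag on the command line. The following is only a sketch of that idea, not what the role uses, and unlike the snap module it fails when the package is already absent:

- name: Remove LXD snap package without saving a snapshot (sketch)
  become: true
  # "snap remove --purge" skips creating the automatic snapshot,
  # but the command fails if the lxd snap is not installed
  ansible.builtin.command: snap remove --purge lxd

Since the role sticks with the snap module, we remove the saved snapshot in a separate task instead.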

We will need to use the builtin "command" module and run "snap forget <snap_id>", but first we need to figure out the snap ID. So before running snap forget, we should not forget to list the saved snapshots. We could run "snap saved" in Ansible and parse the output, but I always try to avoid parsing text output. Fortunately, the snap daemon has an API that returns json, and we can use it. On the machine where the snap daemon is running, you can run the following command:

curl -sS --unix-socket /run/snapd.socket http://localhost/v2/snapshots

This returns all snapshots as json:

{"type":"sync","status-code":200,"status":"OK","result":[{"id":17,"snapshots":[{"set":17,"time":"2023-09-21T11:22:50.293321518Z","snap":"yq","revision":"2243","snap-id":"b1xa1ED1Aw4HN9BnJVP3Je95pyEVN6gu","epoch":{"read":[0],"write":[0]},"summary":"","version":"v4.35.1","sha3-384":{"archive.tgz":"112ec12c5cd74fad5e3f29eb41717943f51131c105b7b5ce9897808ca3c26e4b2d2d003011a64fc5a51d5acc2ec3d2c5","user/root.tgz":"4869608dedea6589ee189e478887944dfac4ea3050b01060946edef6396e16cb9d759978d4fdc694b44a618d1a5452e4","user/ta.tgz":"93cfc7725b225a4071da24dc048f64e1777d386bb6b5746a2054ec51e5cdc72ff241939e5fa977663157ae54b06958da"},"size":373,"auto":true}]}]}

That is not very user-friendly, and we need only the IDs, so I also use jq to get the IDs:

curl -sS --unix-socket /run/snapd.socket http://localhost/v2/snapshots \
      | jq -r '[.result[].id]'

Of course, it shows the IDs of all snapshots, and I need only the snapshots of LXD, so we need to add snaps=lxd as an argument to the URL:

curl -sS --unix-socket /run/snapd.socket http://localhost/v2/snapshots?snaps=lxd \
      | jq -r '[.result[].id]'

Now if you don't have any LXD snapshot yet, the result will be an empty json list:

[]

If you have LXD snapshots, then you get the IDs:

[
  23,
  24
]

Yes, you can have multiple LXD snapshots, so we will delete all of them, not just the one that was saved by the last LXD uninstallation. If we want to remove one, we probably don't want to keep the others either. If you want to keep the snapshots, don't add the following snapshot-related tasks.

So we need curl and jq to communicate with the API, which means we need to make sure that these packages are installed. Let's use the builtin "package" module again, but in this case we will pass multiple package names as a list:

- name: Install requirements to use the snapd API
  ansible.builtin.package:
    name:
      - curl
      - jq
    state: present

Now we can finally run the curl command through Ansible:

- name: Get the IDs of the saved LXD snapshots
  changed_when: false
  # snap saved
  ansible.builtin.shell: |
    curl -sS --unix-socket /run/snapd.socket http://localhost/v2/snapshots?snaps=lxd \
      | jq -r '[.result[].id]'
  register: _snap_lxd_snapshot_command

We needed the "shell" module, so we could use pipes. Using the "register" keyword is not new, so you know that we will get the output from that variable. I also used "changed_when: false" again so this task will not be reported as "changed", since there is nothing to change here.

We will have a list of IDs as a json string, so we will learn about a new filter called "from_json". It will convert the json string to a list object that Ansible can work with in the next task like this (don't add it yet):

  loop: "{{ _snap_lxd_snapshot_command.stdout | from_json }}"

So it turns out Ansible can read json strings. Why did we need "jq" then? We could probably have used Ansible to get the IDs, but since jq is often useful on the command line, it's likely that we already have it, and it makes the Ansible side a little simpler. You don't have to be an Ansible pro in one day, and as I stated before, you don't have to do everything with Ansible. Keep it simple when you can.
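
That said, if you do want to skip jq, a sketch of the jq-free variant could parse the whole API response with from_json and extract the IDs with the map filter:

- name: Get the IDs of the saved LXD snapshots (jq-free sketch)
  changed_when: false
  # the command module doesn't use a shell, so no pipes are needed here
  ansible.builtin.command: >-
    curl -sS --unix-socket /run/snapd.socket
    http://localhost/v2/snapshots?snaps=lxd
  register: _snap_lxd_snapshot_command

- name: Forget saved snapshots
  become: true
  ansible.builtin.command: "snap forget {{ item }}"
  # parse the whole response and collect the "id" field of each result entry
  loop: "{{ (_snap_lxd_snapshot_command.stdout | from_json).result | map(attribute='id') | list }}"

We will stay with the jq version, though.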

The task is the following:

- name: Forget saved snapshots
  become: true
  ansible.builtin.command: "snap forget {{ item }}"
  loop: "{{ _snap_lxd_snapshot_command.stdout | from_json }}"

"item" is the default loop variable, and we need to use it in the command.

There is one thing left. If we finish this role now, the init config file will be left at /opt/lxd/init.yml. If we leave it there, the next time you reinstall LXD, it will not be initialized, since the initialization depends on the changed state of the saved init config. Let's remove the file then:

- name: Remove init config
  become: true
  ansible.builtin.file:
    path: "{{ lxd_remove_init_config_file_path }}"
    state: absent

Last time we used the "file" module to create the base directory for the init config. Now we use it to remove a file. For that we need to pass the path of the file and set the state as "absent" instead of "directory". We could also remove the directory, but it doesn't affect the installation and if you set an existing directory for the init config, you may delete something you don't want. Creating it was a requirement but removing it is not.

There is a variable there to set the path of the init config, so we need to set at least the default value. We can just set the same value that we set in the "lxd_install" role. As long as these are our roles to deploy our home lab, we don't immediately need to pass parameters from external configs. Not immediately, but eventually it is probably better to do that, so we don't have to remember in how many places we need to change the same value.

roles/lxd_remove/defaults/main.yml

lxd_remove_init_config_file_path: /opt/lxd/init.yml
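
Eventually, that could mean defining the value in one place and passing it to the role as a parameter. A sketch, assuming a hypothetical shared variable named lxd_init_config_path:

- hosts: all
  vars:
    lxd_init_config_path: /opt/lxd/init.yml  # hypothetical single source of truth
  roles:
    - role: lxd_remove
      lxd_remove_init_config_file_path: "{{ lxd_init_config_path }}"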

Before we use the role, I will run the LXD installation again. If LXD is already installed with the same config, nothing will happen. If it is not installed yet, it will be, and if the config is different, it will be reinitialized, so don't run it on just any machine; run it only on a machine where you used the installer we created in the previous post!

ansible-playbook playbook-lxd-install.yml \
  -i inventory.yml \
  --ask-become-pass

We could run the remove playbook now, but it doesn't exist yet, so let's create it:

playbook-lxd-remove.yml

- name: Remove LXD
  hosts: all
  roles:
    - role: lxd_remove

Output:

BECOME password:

PLAY [Remove LXD] ********************************************************************************************

TASK [Gathering Facts] ***************************************************************************************
ok: [ta-lxlt]

TASK [lxd_remove : Remove LXD snap package] ******************************************************************
[DEPRECATION WARNING]: The DependencyMixin is being deprecated. Modules should use
community.general.plugins.module_utils.deps instead. This feature will be removed from community.general in
version 9.0.0. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [ta-lxlt]

TASK [lxd_remove : Install requirements to use the snapd API] ************************************************
ok: [ta-lxlt]

TASK [lxd_remove : Get the IDs of the saved LXD snapshots] ***************************************************
ok: [ta-lxlt]

TASK [lxd_remove : Forget saved snapshots] *******************************************************************
changed: [ta-lxlt] => (item=27)

TASK [lxd_remove : Remove init config] ***********************************************************************
changed: [ta-lxlt]

PLAY RECAP ***************************************************************************************************
ta-lxlt                    : ok=6    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Ansible role to delete a ZFS pool

Overview of the zfs pool destroyer role

» Back to table of contents «

Basically, we need Ansible to run two commands: one to delete the ZFS pool and one to remove the filesystem signatures on the disks used in the pool.

zpool destroy <pool_name>
wipefs --all <disk_path>

Since this is a dangerous operation, we can't just let the Ansible role delete everything without precautions, but first let's see how it would work without confirmation.

Destroying the pool without confirmation

» Back to table of contents «

roles/zfs_destroy_pool/tasks/main.yml

- name: Destroy ZFS pool
  become: true
  ansible.builtin.command: "zpool destroy {{ zfs_destroy_pool_name }}"

- name: Wipe filesystem signatures
  become: true
  ansible.builtin.command: "wipefs --all {{ item }}"
  loop: "{{ zfs_destroy_pool_disks }}"

It shows that we would need at least two variables: one for the pool name and one for the list of disks, the same as we used in the zfs_pool role to create the pool. But the pool already knows which disks it uses, so why don't we get the list from the pool? I couldn't find a way to get the list as json, so I had to parse the output of zpool list, but at least I could make it easier to parse by using some parameters:

zpool list -H -P -v lxd-default \
  | tail -n +2 \
  | awk '{print $1}' \
  | grep '^/'

Let me explain it.

  • We can pass the pool name, so we will get a list only for the specified pool.
  • We use -v so we get a verbose output including the disks.
  • We use -P so we get the full path of the disks.
  • And -H removes the header from the output and replaces spaces with tabs.

At the end I get this output:

lxd-default 298G    604K    298G    -   -   0%  0%  1.00x   ONLINE  -
    /dev/disk/by-id/scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RBNX0J103301N-part6   298G    604K    298G    --  0%  0.00%   -   ONLINE

Since I need only the disks and not the pool name, I use tail -n +2 to skip the first line, so that awk '{print $1}' can get the disks for me without the statistics. The grep command at the end is just to make sure I won't get anything else but the disks in case the output changes in the future. Also, if there is no disk in the output even though the ZFS pool existed, grep will make the task fail.
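
Applied to the example output above, the pipeline leaves only the disk path:

/dev/disk/by-id/scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RBNX0J103301N-part6

Let's get the disks then: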

- name: "Get disks in zpool: {{ zfs_destroy_pool_name }}"
  ansible.builtin.shell: |
    zpool list -H -P -v {{ zfs_destroy_pool_name }} \
      | tail -n +2 \
      | awk '{print $1}' \
      | grep '^/'
  register: _zpool_disks_command

- name: Destroy ZFS pool
  become: true
  ansible.builtin.command: "zpool destroy {{ zfs_destroy_pool_name }}"

- name: Wipe filesystem signatures
  become: true
  ansible.builtin.command: "wipefs --all {{ item }}"
  loop: "{{ _zpool_disks_command.stdout_lines }}"

Note that we obviously had to place the task that destroys the ZFS pool after the one that lists the disks, otherwise there would be no pool to get the disks from. But what if something happens, the ZFS pool is gone, and you still need to wipe the disks? Maybe we should support defining the disks manually, not just autodetecting them. The second problem is that the "zpool destroy" command would always run, not just when we have a pool to destroy. So we need our old "zpool_facts" module that we used when creating the pool. This is how the beginning of the new task file would look:

- name: Get zpool facts
  ignore_errors: true
  community.general.zpool_facts:
    name: "{{ zfs_destroy_pool_name }}"
  register: _zpool_facts_task

- name: "Get disks in zpool: {{ zfs_destroy_pool_name }}"
  when: not _zpool_facts_task.failed
  # ...

This means the second task will run only if the previous one didn't fail. That's good, but we also need a parameter to enable or disable autodetection:

- name: Get zpool facts
  ignore_errors: true
  community.general.zpool_facts:
    name: "{{ zfs_destroy_pool_name }}"
  register: _zpool_facts_task

- name: "Get disks in zpool: {{ zfs_destroy_pool_name }}"
  when:
    - zfs_destroy_pool_disks_autodetect | bool
    - not _zpool_facts_task.failed
  # ...

Now we have all our variables, so let's create the defaults:

roles/zfs_destroy_pool/defaults/main.yml

zfs_destroy_pool_name:
zfs_destroy_pool_disks_autodetect: true
zfs_destroy_pool_disks: []

In this case I don't want to set a default pool name. A role should not delete something by default; make sure the user always defines exactly what should be deleted. We will get to that soon, but for now let's see how the whole task file looks:

- name: Get zpool facts
  ignore_errors: true
  community.general.zpool_facts:
    name: "{{ zfs_destroy_pool_name }}"
  register: _zpool_facts_task

- name: "Get disks in zpool: {{ zfs_destroy_pool_name }}"
  when:
    - zfs_destroy_pool_disks_autodetect | bool
    - not _zpool_facts_task.failed
  ansible.builtin.shell: |
    zpool list -H -P -v {{ zfs_destroy_pool_name }} \
      | tail -n +2 \
      | awk '{print $1}' \
      | grep '^/'
  register: _zpool_disks_command

- name: Destroy ZFS pool
  when: not _zpool_facts_task.failed
  become: true
  ansible.builtin.command: "zpool destroy {{ zfs_destroy_pool_name }}"

- name: Wipe filesystem signatures
  become: true
  ansible.builtin.command: "wipefs --all {{ item }}"
  loop: |
    {{
      _zpool_disks_command.stdout_lines | default([])
        if zfs_destroy_pool_disks_autodetect | bool
        else zfs_destroy_pool_disks
    }}

In the above task file, we finally check whether the pool exists, and we get the list of disks only when there was a pool and autodetection was enabled. In any other case, nothing happens. Destroying the pool only requires an existing pool, which means the zpool facts didn't fail, and wiping the disks depends only on the autodetection setting. I also had to use the default filter after getting the lines from the standard output, so it will not throw an error when the ZFS pool doesn't exist but autodetection was enabled.

Require confirmation before dangerous operations

Checking empty parameters

» Back to table of contents «

The ZFS pool name should not be empty, yet that is the default value. If the empty value makes the commands that use it invalid, at least you won't do something you don't want, but it is not always easy to interpret Ansible's error messages, so let's create our own by checking whether the pool name is empty:

- name: Fail if pool name is not provided
  when: zfs_destroy_pool_name | default('', true) | trim == ''
  ansible.builtin.fail:
    msg: "zfs_destroy_pool_name must not be empty"

The builtin "fail" module can stop the execution of the playbook anywhere, and it also lets you add your error message to explain why it was stopped. The problem is that the pool name can b wrong in multiple different ways. It can be null, None, empty string or whitespaces. Of course, it could also have invalid characters, but let's just deal with these more obvious situations. Passing the | default('', true) filter will set empty string as default value when the pool name is null or None or empty string. But if it is not empty, but a space character, you need to trim it with the trim filter. IF the final result is an empty string, that means the definition is missing.

Ask for confirmation before deleting

» Back to table of contents «

We can also ask for confirmation. It is a little bit tricky, since we don't have a "confirm" module, but we have "pause" which is actually using the "wait_for" module.

- name: Confirmation of destroying the pool and purging the disks
  ansible.builtin.pause:
    prompt: 'Type "yes" and press ENTER to continue or press CTRL+C and "a" to abort'
  register: _confirmation_prompt

- name: 'Fail if the user did not type "yes"'
  when: _confirmation_prompt.user_input != "yes"
  ansible.builtin.fail:
    msg: 'User input was: {{ _confirmation_prompt.user_input | to_json }}, not "yes". Aborting.'

This way we pause the execution until the user confirms. If the user types anything other than "yes" before pressing ENTER, the next task will fail, since we can get the "user_input" from the result of the "pause" task.

We could also implement a new parameter to skip the confirmation when we need to run the playbook in a non-interactive way, but I don't think we will need that for our home lab.
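
If we ever do, a sketch of the idea: a hypothetical zfs_destroy_pool_confirm variable (defaulting to true in defaults/main.yml) could gate both confirmation tasks:

- name: Confirmation of destroying the pool and purging the disks
  # skip the prompt entirely in non-interactive runs
  when: zfs_destroy_pool_confirm | bool
  ansible.builtin.pause:
    prompt: 'Type "yes" and press ENTER to continue or press CTRL+C and "a" to abort'
  register: _confirmation_prompt

- name: 'Fail if the user did not type "yes"'
  when:
    - zfs_destroy_pool_confirm | bool
    - _confirmation_prompt.user_input != "yes"
  ansible.builtin.fail:
    msg: 'User input was: {{ _confirmation_prompt.user_input | to_json }}, not "yes". Aborting.'

Let's stop here for now and see the whole task file: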

- name: Fail if pool name is not provided
  when: zfs_destroy_pool_name | default('', true) | trim == ''
  ansible.builtin.fail:
    msg: "zfs_destroy_pool_name must not be empty"

- name: Confirmation of destroying the pool and purging the disks
  ansible.builtin.pause:
    prompt: 'Type "yes" and press ENTER to continue or press CTRL+C and "a" to abort'
  register: _confirmation_prompt

- name: 'Fail if the user did not type "yes"'
  when: _confirmation_prompt.user_input != "yes"
  ansible.builtin.fail:
    msg: 'User input was: {{ _confirmation_prompt.user_input | to_json }}, not "yes". Aborting.'

- name: Get zpool facts
  ignore_errors: true
  community.general.zpool_facts:
    name: "{{ zfs_destroy_pool_name }}"
  register: _zpool_facts_task

- name: "Get disks in zpool: {{ zfs_destroy_pool_name }}"
  when:
    - zfs_destroy_pool_disks_autodetect | bool
    - not _zpool_facts_task.failed
  ansible.builtin.shell: |
    zpool list -H -P -v {{ zfs_destroy_pool_name }} \
      | tail -n +2 \
      | awk '{print $1}' \
      | grep '^/'
  register: _zpool_disks_command

- name: Destroy ZFS pool
  when: not _zpool_facts_task.failed
  become: true
  ansible.builtin.command: "zpool destroy {{ zfs_destroy_pool_name }}"

- name: Wipe filesystem signatures
  become: true
  ansible.builtin.command: "wipefs --all {{ item }}"
  loop: |
    {{
      _zpool_disks_command.stdout_lines | default([])
        if zfs_destroy_pool_disks_autodetect | bool
        else zfs_destroy_pool_disks
    }}

Let's add the role to the playbook and set "lxd-default" as the pool name in playbook-lxd-remove.yml:

- name: Remove LXD
  hosts: all
  roles:
    - role: lxd_remove
    - role: zfs_destroy_pool
      zfs_destroy_pool_name: lxd-default

Run the playbook

» Back to table of contents «

And now we are ready to run it:

ansible-playbook playbook-lxd-remove.yml \
  -i inventory.yml \
  --ask-become-pass

The output will be something like this:

BECOME password:

PLAY [Remove LXD] ********************************************************************************************

TASK [Gathering Facts] ***************************************************************************************
ok: [ta-lxlt]

TASK [lxd_remove : Remove LXD snap package] ******************************************************************
[DEPRECATION WARNING]: The DependencyMixin is being deprecated. Modules should use
community.general.plugins.module_utils.deps instead. This feature will be removed from community.general in
version 9.0.0. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [ta-lxlt]

TASK [lxd_remove : Install requirements to use the snapd API] ************************************************
ok: [ta-lxlt]

TASK [lxd_remove : Get the IDs of the saved LXD snapshots] ***************************************************
ok: [ta-lxlt]

TASK [lxd_remove : Forget saved snapshots] *******************************************************************
skipping: [ta-lxlt]

TASK [lxd_remove : Remove init config] ***********************************************************************
ok: [ta-lxlt]

TASK [zfs_destroy_pool : Fail if pool name is not provided] **************************************************
skipping: [ta-lxlt]

TASK [zfs_destroy_pool : Confirmation of destroying the pool and purging the disks] **************************
[zfs_destroy_pool : Confirmation of destroying the pool and purging the disks]
Type "yes" and press ENTER to continue or press CTRL+C and "a" to abort:
ok: [ta-lxlt]

TASK [zfs_destroy_pool : Fail if the user did not type "yes"] ************************************************
skipping: [ta-lxlt]

TASK [zfs_destroy_pool : Get zpool facts] ********************************************************************
ok: [ta-lxlt]

TASK [zfs_destroy_pool : Get disks in zpool: lxd-default] ****************************************************
changed: [ta-lxlt]

TASK [zfs_destroy_pool : Destroy ZFS pool] *******************************************************************
changed: [ta-lxlt]

TASK [zfs_destroy_pool : Wipe filesystem signatures] *********************************************************
changed: [ta-lxlt] => (item=/dev/disk/by-id/scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RBNX0J103301N-part6)

PLAY RECAP ***************************************************************************************************
ta-lxlt                    : ok=10   changed=3    unreachable=0    failed=0    skipped=3    rescued=0    ignored=0

Wrapper script for running Ansible playbooks

» Back to table of contents «

This bonus tip is completely optional. You can set all the parameters by simply running the Ansible commands in the terminal, but you can also create a small script that sets the default parameters. The other option would be using an automatically detected Ansible configuration file (a sketch of that closes this section), but for now, a script will be perfectly fine. Let's call it run.sh in the project root:

#!/usr/bin/env bash

ansible-playbook \
  -i inventory.yml \
  --ask-become-pass \
  "$@"

Now you can run playbooks like this:

./run.sh playbook-lxd-remove.yml
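
For reference, the configuration-file alternative mentioned above could look something like this. A sketch: an ansible.cfg in the project root is picked up automatically by ansible-playbook:

[defaults]
inventory = inventory.yml

[privilege_escalation]
become_ask_pass = true

With that in place, ansible-playbook playbook-lxd-remove.yml would behave like the wrapper script.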

Conclusion

» Back to table of contents «

Now you can install LXD, delete it, and repeat it as many times as you want. You can imagine how many times I had to reinstall LXD while I was developing these roles. I made mistakes, I had to correct them, and I had to test everything again.

These roles helped me to show you a relatively simple way to install and remove LXD, but in my environment I needed a more complex installation that lets me install an LXD cluster. The playbook-lxd-remove.yml playbook could work for me too, since I'm using ZFS. Once I decide to test other configurations as well, I will need to improve my installer, and I will need to support deleting the other storage backends too.

Sometimes you will need multiple roles for similar purposes instead of using a lot of conditions in one role. Even while I was working on this playbook, there were some conditions where I couldn't explain how the task was supposed to work. Eventually I realized it was because the condition was completely wrong and I almost kept it in the tutorial.

So as a final note, use Ansible when it helps, try to keep it simple, and be careful when you need to complicate it. And one more thing. Test, test, test, test... you get the idea.

The final source code of this episode can be found on GitHub:

https://github.com/rimelek/homelab/tree/tutorial.episode.6
