4 Mistakes to Avoid When Setting Up a CI/CD Pipeline

Wilson - Aug 15 - Dev Community

Continuous Integration and Continuous Deployment (CI/CD) pipelines automate the process of building, testing, and deploying code to remote servers, streamlining software delivery. Having built CI/CD pipelines for many applications in real business environments, I’ve made mistakes and seen colleagues make them too — each providing valuable lessons.

These experiences build up the expertise of any DevOps engineer, so I wanted to share what I’ve learned.

Making these mistakes can ruin a project, disrupt your application environment, or cause work to drag on far longer than it should.

Based on my experience, here’s what to do, and what to watch out for, when setting up your CI/CD pipeline. These tips are vendor-agnostic: they apply whether you’re using GitHub Actions, Jenkins, Travis CI, AWS Amplify, CircleCI, or another tool.

Let’s explore these common mistakes and how to avoid them.

It’s common practice to use environment variables or secrets in your pipeline instead of hard-coding passwords, SSH keys, connection strings, and other sensitive details. Since this is widely understood, I won’t dwell on it here.

Take a Snapshot of Your Server Before You Begin

If you’re planning to make changes to your server environment — such as adding or deleting files — especially if you’re new to this, it’s crucial to take a snapshot of your server before setting up your CI/CD pipeline. Snapshots are quick and straightforward to create, and they provide an easy way to restore your server to its previous state if something goes wrong.

I once witnessed a colleague accidentally delete our server environment and critical system files while configuring a CI/CD pipeline to sync code changes. This could have been easily avoided with a simple snapshot.
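
For instance, if your server is an AWS EC2 instance backed by an EBS volume, a snapshot is a single CLI call; the volume ID and description below are placeholders for your own:

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Before CI/CD pipeline setup"

Most providers offer an equivalent, such as Lightsail or DigitalOcean snapshots, either in the console or through their CLI.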

Set Up Your SSH Key Properly

To connect to your remote server where you’ll deploy your application, you’ll need both a private key and a public key. You can either create a new key pair specifically for your pipeline or use the existing public key you already use to access your instance. Either option works, but you’ll need to copy the contents of your private key file and paste them into GitHub Secrets.
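
If you decide to create a dedicated key pair for the pipeline, ssh-keygen is all you need; the file name and comment below are just examples:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/pipeline_key -C "ci-cd-pipeline"

This produces ~/.ssh/pipeline_key (the private key you paste into GitHub Secrets) and ~/.ssh/pipeline_key.pub (the public key you add to the server).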

[Image: connecting to the server through SSH]

Here are two important things to keep in mind:

  1. Avoid Opening Your Key File as a Text File
    When copying your private key, it’s better to display its contents in your terminal and copy it from there, especially if you’re using a Windows system. Opening the file directly as a text file can lead to formatting issues. Use the following commands to display the private key file content: cat key.pem for Linux or type key.pem for Windows.

  2. Copy the Entire Key Content, Including Headers
    Make sure you copy the entire content of the key file, including the header and footer lines, -----BEGIN RSA PRIVATE KEY----- and -----END RSA PRIVATE KEY-----. Paste the complete content into your GitHub Secrets field without adding any extra spaces.

Additionally, it’s better to append the key content to a file using commands like echo or cat rather than copying and pasting it manually into a text editor. For example, if you need to add a new SSH public key to your ~/.ssh/authorized_keys file, using the command line approach is more reliable and less error-prone. Here’s how to do it:

echo "ssh-rsa ***your key content***" >> ~/.ssh/authorized_keys

or using cat:

cat ~/.ssh/rsa_key.pub >> ~/.ssh/authorized_keys

This method reduces the risk of errors compared to manual editing.

Carefully Review Your Deployment Path

It’s crucial to thoroughly review and understand the deployment path where your code or files will be affected, especially when deleting or syncing files on your server. If you’re resyncing files to your server, create a dedicated folder where your code files will be resynced, and always double-check the path before testing the pipeline.

I once worked on a project where a colleague mistakenly resynced files directly into the user’s home directory (/home/bitnami/). This error deployed the code at the top level of the home directory and inadvertently deleted other essential folders, including our .ssh/ directory, environment paths, and other critical files.

This led to significant work to regain SSH access to the server and recreate the SSH public and private keys. Since we had done extensive configurations on the server, starting from scratch would have been far more stressful and time-consuming.

- name: Upload new files
  run: |
    rsync -avz --no-times --delete-after --exclude '.git' ./ bitnami@${{ secrets.YOUR_SERVER_IP }}:/home/bitnami

For example, the step above will delete every file in the home directory (/home/bitnami) that isn’t part of the repository being synced, including the .ssh directory where your authorized SSH keys usually live.
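
A safer version of that step syncs into a dedicated application folder so that --delete-after can only touch files inside it; the /home/bitnami/myapp destination below is just an illustrative choice:

rsync -avz --no-times --delete-after --exclude '.git' ./ bitnami@${{ secrets.YOUR_SERVER_IP }}:/home/bitnami/myapp/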

To avoid such scenarios, always keep a snapshot of your server as a backup.

Here are the high-level steps to recreate SSH keys for your server if you find yourself in a similar situation:

Prerequisite: Ensure you can connect to the server via SSH through an alternative method, such as browser-based sessions.

Generate an SSH RSA Key Pair:
Use your terminal to generate a new SSH RSA key pair.

Add the Public Key Content to Your Server:
Append the public key file content (.pub) to your ~/.ssh/authorized_keys file. It’s recommended to use the echo or cat commands, as discussed earlier:

echo "ssh-rsa ***your key content***" >> ~/.ssh/authorized_keys

or

cat ~/.ssh/rsa_key.pub >> ~/.ssh/authorized_keys

Add the Private Key to Your Local Machine or Pipeline Secrets:
Store the private key content on your local machine or copy it to the secrets environment of your pipeline.
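
If you use the GitHub CLI, you can load the private key into a repository secret straight from the file instead of pasting it manually; run this from inside your repository clone, and note that the secret name and key path here are examples:

gh secret set SSH_PRIVATE_KEY < ~/.ssh/pipeline_key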

Test the New Key:
Attempt to connect to your server using the new SSH key. It should work now.
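
A quick way to check is to connect with the new key explicitly; the key path and address below are placeholders:

ssh -i ~/.ssh/pipeline_key bitnami@your-server-ip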

By following these steps and being meticulous about your deployment paths, you can avoid costly mistakes and ensure smoother operations.

Use Server Configuration Management Tools for Your Environment

In one of the challenging experiences I mentioned earlier, we could have saved ourselves a lot of stress if we had set up our environment using configuration management tools like Ansible, Chef, or Puppet.

These tools would have allowed us to easily replicate the same configuration on another server when we lost SSH access to the previous one.

Instead of struggling to regain access, we could have simply spun up a new server and run the configuration playbook or cookbook to restore our setup.

Although DevOps engineers typically don’t write configuration scripts for a single server, it’s still a best practice to do so: they make it easy to recreate your server configuration in all kinds of scenarios.
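
As a minimal sketch, assuming you keep an Ansible inventory and a playbook that describe your server setup (the inventory.ini and site.yml names here are hypothetical), rebuilding a fresh server comes down to one command:

ansible-playbook -i inventory.ini site.yml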

Build Fast, Fail Fast, and Enable Detailed Monitoring

Building your code quickly, testing it promptly, and making necessary changes is essential. This approach enables you to deploy more often, reducing context switching, which is a best practice in DevOps. Regular deployments ensure that code is tested in staging and production as soon as possible.

Detailed monitoring of your builds and deployments allows you to quickly spot issues and address them directly, minimizing guesswork. Trust me, this will save you a significant amount of time.

Some Additional Useful Tips

Build Once: Build your code once, run your tests, and deploy the same artifact to staging and production if the tests pass. Avoid building the code separately for each stage, as this can introduce inconsistencies. You can store your artifacts in a registry or object store such as Docker Hub, Amazon ECR, or S3. Also make sure to version your builds, so that the artifact you deploy is exactly what you built and tested and will behave consistently.
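
As a sketch of the build-once idea, assuming a Docker-based workflow where the commit SHA is available as GIT_SHA (the registry and image names are placeholders), you tag the image once and promote that exact tag through every stage:

docker build -t registry.example.com/myapp:$GIT_SHA .
docker push registry.example.com/myapp:$GIT_SHA
# staging and production later pull and run the same immutable tag
docker pull registry.example.com/myapp:$GIT_SHA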

Code, Build, and Deploy Frequently: Frequent coding, building, and deployment are at the heart of DevOps. This approach ensures that mistakes and errors are spotted and corrected quickly, and it provides immediate feedback from both testing teams and customers.

These are the tips I have for you. I have personally experienced how these practices can save you time, improve your DevOps experience, prevent unnecessary mistakes, and help you quickly remediate any errors that do occur.

Conclusion

By implementing these best practices, you’ll streamline your DevOps workflow, minimize costly errors, and enhance your deployment efficiency. Embrace these tips to improve your DevOps experience, ensuring faster, more reliable, and consistent software delivery.

Please share any additional tips you might have, or let me know if I missed something important!
