Playwright testing on CI using GitHub Actions and Vercel

Sergey Labut - Aug 7 - Dev Community

In this article, we will share our experience of testing on CI using Playwright and GitHub Actions.

Intro

If you are new to Playwright, please check out the previous article. If you are new to the concept of CI/CD (Continuous Integration and Continuous Deployment), don't worry, this article doesn't require any special knowledge and introduces each concept step by step.

Overview of GitHub Actions

GitHub Actions is a CI/CD tool integrated directly into GitHub. It allows developers to automate their workflows for building, testing, and deploying code.

Key concepts:

  • workflow: A workflow is an automated process made up of one or more jobs. It's defined by a YAML file in the .github/workflows directory of your repository. Workflows can be triggered by various events, such as pushing code, creating pull requests, or on a schedule.
  • job: A job is a set of steps executed on the same runner. Jobs run in parallel by default, but can be configured to run sequentially if dependencies exist between them.
  • step: A step represents a single task to be performed, such as running a command or using an action.
  • action: Actions are custom applications for the GitHub Actions platform that can be used to perform complex tasks. Actions can be created by anyone and shared via the GitHub Marketplace, and they can have inputs and outputs.
  • runner: Runners are the servers that run your workflows. GitHub provides hosted runners with commonly used software preinstalled, or you can use self-hosted runners.
  • event: A specific activity that triggers a workflow to run. Examples include push, pull_request, schedule, and workflow_dispatch.
  • artifact: Files generated during a workflow that can be retained and shared between jobs or used as a record of workflow execution. Artifacts are useful for persisting build outputs, test results, and other files for later analysis or download.

Default example

Now we can look at the example from the official documentation and understand what's going on there.

# Workflow name
name: Playwright Tests

# Trigger the workflow on push and pull request events to the main or master branches
on:
  push:
    branches: [ main, master ]
  pull_request:
    branches: [ main, master ]

jobs:
# Job name
  test:
    # Set a timeout of 60 minutes for the job
    timeout-minutes: 60
    # Use the latest Ubuntu runner
    runs-on: ubuntu-latest

    steps:
      # Step to check out the repository
      - uses: actions/checkout@v4

      # Step to set up Node.js environment
      - uses: actions/setup-node@v4
      # Inputs for the action
        with:
          node-version: 18  # Specify Node.js version 18

      # Step to install project dependencies using npm
      - name: Install dependencies
        run: npm ci

      # Step to install Playwright browsers along with necessary dependencies
      - name: Install Playwright Browsers
        run: npx playwright install --with-deps

      # Step to run Playwright tests
      - name: Run Playwright tests
        run: npx playwright test

      # Step to upload Playwright test report as an artifact
      - uses: actions/upload-artifact@v4
        if: ${{ !cancelled() }}  # Upload only if the job is not cancelled
      # Inputs for the action
        with:
          name: playwright-report  # Name of the artifact
          path: playwright-report/  # Path to the report
          retention-days: 30  # Retention period for the artifact

This workflow is basic, but it works for almost everything except snapshot testing. Snapshot testing can be part of regression testing: you take a screenshot of a page or component as a reference (master branch or production) and compare it with the current version (the changes in a PR). This means that for this type of testing we first need to create control screenshots, then create test screenshots and compare them. And we need a successful preview deployment before testing can start.

There are four main steps to consider here:

  • preview deployment,
  • run tests on deployed master branch,
  • run tests on deployed current PR branch,
  • upload artifact to see detailed report.
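
For context, here is roughly what a snapshot test looks like in Playwright (a minimal sketch; the test name, URL path, and screenshot file name are placeholders):

import { test, expect } from '@playwright/test';

test('homepage has not changed visually', async ({ page }) => {
  // The path is resolved against baseURL from playwright.config.ts
  await page.goto('/');
  // Compares the page against a stored reference image;
  // running with --update-snapshots (re)generates the reference instead
  await expect(page).toHaveScreenshot('homepage.png');
});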

The simplest version would look like this.

.github/workflows/e2e-test.yml

name: Playwright Tests
on:
  deployment_status:
jobs:
  test:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    # We need a successful deployment of preview to start testing.
    if: github.event.deployment_status.state == 'success'
    steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 18
    - name: Install dependencies
      run: npm ci
    - name: Install Playwright
      run: npx playwright install --with-deps
    - name: Create control screenshots
      run: npx playwright test --update-snapshots
      env:
        # We should assign baseURL in playwright.config.ts to the value of this variable.
        # You also need to set an action secret named E2E_MASTER_BRANCH_PREVIEW_URL to the
        # URL of your deployed master branch.
        PLAYWRIGHT_TEST_BASE_URL: ${{ secrets.E2E_MASTER_BRANCH_PREVIEW_URL }}
    - name: Run Playwright tests
      run: npx playwright test
      env:
      # In this case we can get the deployed preview URL from the event
        PLAYWRIGHT_TEST_BASE_URL: ${{ github.event.deployment_status.target_url }}
    - uses: actions/upload-artifact@v4
      with:
        name: playwright-report
        path: playwright-report/
        retention-days: 30
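
As the comment in the workflow mentions, baseURL in playwright.config.ts should be wired to PLAYWRIGHT_TEST_BASE_URL. A sketch of the relevant config fragment (the localhost fallback is an assumption):

import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Set by the workflow on CI; assumed local dev server otherwise
    baseURL: process.env.PLAYWRIGHT_TEST_BASE_URL || 'http://localhost:3000',
  },
});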

What are the potential downsides here? The most obvious one is that on every run we re-run the tests on the master branch just to get the control screenshots, which is not very efficient. It is much better to run these tests once on the master branch, save the results, and reuse them for all subsequent runs. Caching node_modules can also save some time. Caching Playwright itself is not easy, but we found a way to reduce the time by using the official Docker image. We also added HTML report deployment and Slack notifications for failed tests. For recurring tasks, we used composite actions.

We can still keep everything in one workflow, but we will have jobs that will be triggered by different events: master branch preview deployment and current branch preview deployment.

We will have two completely independent jobs:

  • on master preview deployment: run tests to create control screenshots, then save the screenshots and node_modules as artifacts,
  • on current branch preview deployment: pull the artifacts with screenshots and node_modules, run the tests, and upload an artifact with the detailed report.

Note: A cache may seem like the more natural storage for such things, but not quite. If we check the official documentation, we will see that a cache cannot be used for screenshots, for example. And since we cannot use the cache everywhere, the choice was made in favor of artifacts.

name: Playwright Tests
on:
  deployment_status:
env:
  PLAYWRIGHT_TEST_BASE_URL: ${{ secrets.E2E_MASTER_BRANCH_PREVIEW_URL }}
  NEXT_PREVIEW_TOKEN: ${{ secrets.NEXT_PREVIEW_TOKEN }}
  SLACK_INCOMING_WEBHOOK_URL_SNAPSHOTS: ${{ secrets.SLACK_INCOMING_WEBHOOK_URL_SNAPSHOTS }}
jobs:
  create-screenshots-master:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    # Run only on successful deployment of your master branch preview. your-vercel-project should be replaced with the name of your project.
    if: ${{ github.event.deployment_status.state == 'success' && github.event.deployment.environment == 'Production – your-vercel-project'}}
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: yarn --prefer-offline
      # We use the official Docker image to speed things up
      - name: Run Playwright tests
        uses: docker://mcr.microsoft.com/playwright:v1.45.0-jammy
        with:
          args: npx playwright test --update-snapshots
      # Delete all ts files to leave only images
      - name: Prepare a screenshots folder
        run: |
          cd e2e
          find . -name "*.ts" -type f -delete
          cd ..
      - uses: actions/upload-artifact@v4
        with:
          name: 'screenshots-master'
          path: e2e/
          retention-days: 7
      - uses: actions/upload-artifact@v4
        with:
          name: node_modules
          path: node_modules/
          retention-days: 7
  create-screenshots-pull-request:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    # Run only on successful deployment of your current branch preview. your-vercel-project should be replaced with the name of your project.
    if: ${{ github.event.deployment_status.state == 'success' && github.event.deployment.environment == 'Preview – your-vercel-project' && github.event.deployment.ref != 'refs/heads/master'}}
    steps:
      - uses: actions/checkout@v4
      # Find the ID of the last successful run of this workflow (e2e-test.yml) and store it in the run_id output
      - name: Get Run ID
        id: get_run_id
        run: |
          # Fetch the latest run of the workflow
          response=$(curl -X GET \
          -H "Authorization: Bearer ${{ github.token }}" \
          -H "Accept: application/vnd.github.v3+json" \
          "https://api.github.com/repos/${{ github.repository }}/actions/workflows/e2e-test.yml/runs?status=success&branch=master&per_page=1")

          # Extract the run ID from the response
          run_id=$(echo "$response" | jq -r '.workflow_runs[0].id')

          echo "run_id=${run_id}" >> $GITHUB_OUTPUT
      - name: Print Run ID
        run: |
          echo "Latest Run ID: ${{ steps.get_run_id.outputs.run_id }}"
      - name: Check if artifact with master screenshots exists
        # Check whether the run with that ID has an artifact with master screenshots, and store the result in the is_exist output
        if: steps.get_run_id.outputs.run_id != 'null'
        id: check_screenshots_artifact
        run: |
          response=$(curl -s \
          -H "Authorization: Bearer ${{ github.token }}" \
          -H "Accept: application/vnd.github.v3+json" \
          "https://api.github.com/repos/${{ github.repository }}/actions/runs/${{ steps.get_run_id.outputs.run_id }}/artifacts" \
          | jq ".artifacts[] | select(.name == \"screenshots-master\")")

          if [ "$(echo "$response" | jq -e '. != null')" ]; then
            echo "The artifact exists."
            echo "is_exist=true" >> $GITHUB_OUTPUT
          else
            echo "The artifact does not exist."
            echo "is_exist=false" >> $GITHUB_OUTPUT
          fi
      # Download the artifact if it exists
      - name: Download screenshots artifact
        if: steps.check_screenshots_artifact.outputs.is_exist != 'false'
        uses: actions/download-artifact@v4
        with:
          name: screenshots-master
          github-token: ${{ github.token }}
          repository: ${{ github.repository }}
          run-id: ${{ steps.get_run_id.outputs.run_id }}
          path: e2e/
      # Check if the run with that ID has an artifact with node_modules, and store the result in the is_exist output
      - name: Check if artifact with node_modules exists
        if: steps.get_run_id.outputs.run_id != 'null'
        id: check_modules_artifact
        run: |
          response=$(curl -s \
          -H "Authorization: Bearer ${{ github.token }}" \
          -H "Accept: application/vnd.github.v3+json" \
          "https://api.github.com/repos/${{ github.repository }}/actions/runs/${{ steps.get_run_id.outputs.run_id }}/artifacts" \
          | jq ".artifacts[] | select(.name == \"node_modules\")")

          if [ "$(echo "$response" | jq -e '. != null')" ]; then
            echo "The artifact exists."
            echo "is_exist=true" >> $GITHUB_OUTPUT
          else
            echo "The artifact does not exist."
            echo "is_exist=false" >> $GITHUB_OUTPUT
          fi
      - name: Download node_modules artifact
        if: steps.check_modules_artifact.outputs.is_exist != 'false'
        uses: actions/download-artifact@v4
        with:
          name: node_modules
          github-token: ${{ github.token }}
          repository: ${{ github.repository }}
          run-id: ${{ steps.get_run_id.outputs.run_id }}
          path: node_modules
      # We need to explicitly set permissions on this file to make it possible to run in a Docker container. But there is one catch, please check the note right after the code block.
      - run: chmod +x node_modules/.bin/playwright
        if: steps.check_modules_artifact.outputs.is_exist != 'false'
      # If node_modules is not cached, install it as usual.
      - name: Cached dependencies are not found. Install dependencies
        if: steps.check_modules_artifact.outputs.is_exist == 'false'
        run: yarn --prefer-offline
      # Here we run the screenshot tests again, even if the master screenshots were downloaded.
      # The only reason is to cover the case where we add a new test that is not in master,
      # so no control screenshot exists for it yet. If this is not your case, just skip this
      # step; if it is, handle it in your test code by skipping a test when its screenshot
      # already exists, otherwise the screenshot will be overwritten.
      - name: Run Playwright tests
        uses: docker://mcr.microsoft.com/playwright:v1.45.0-jammy
        with:
          args: npx playwright test --update-snapshots
      - name: Prepare a screenshots folder
        run: |
          cd e2e
          find . -name "*.ts" -type f -delete
          cd ..
      - uses: actions/upload-artifact@v4
        with:
          name: screenshots
          path: e2e/
          retention-days: 7
  test:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    needs: create-screenshots-pull-request
    if: success()
    steps:
      - uses: actions/checkout@v4
      - name: Get Run ID
        id: get_run_id
        run: |
          # Fetch the latest run of the workflow
          response=$(curl -X GET \
          -H "Authorization: Bearer ${{ github.token }}" \
          -H "Accept: application/vnd.github.v3+json" \
          "https://api.github.com/repos/${{ github.repository }}/actions/workflows/e2e-test.yml/runs?status=success&branch=master&per_page=1")

          # Extract the run ID from the response
          run_id=$(echo "$response" | jq -r '.workflow_runs[0].id')

          echo "run_id=${run_id}" >> $GITHUB_OUTPUT
      - name: Print Run ID
        run: |
          echo "Latest Run ID: ${{ steps.get_run_id.outputs.run_id }}
      - name: Check if artifact with node_modules exists
        if: steps.get_run_id.outputs.run_id != 'null'
        id: check_modules_artifact
        run: |
          response=$(curl -s \
          -H "Authorization: Bearer ${{ github.token }}" \
          -H "Accept: application/vnd.github.v3+json" \
          "https://api.github.com/repos/${{ github.repository }}/actions/runs/${{ steps.get_run_id.outputs.run_id }}/artifacts" \
          | jq ".artifacts[] | select(.name == \"node_modules\")")

          if [ "$(echo "$response" | jq -e '. != null')" ]; then
            echo "The artifact exists."
            echo "is_exist=true" >> $GITHUB_OUTPUT
          else
            echo "The artifact does not exist."
            echo "is_exist=false" >> $GITHUB_OUTPUT
          fi
      - name: Download node_modules artifact
        if: steps.check_modules_artifact.outputs.is_exist != 'false'
        uses: actions/download-artifact@v4
        with:
          name: node_modules
          github-token: ${{ github.token }}
          repository: ${{ github.repository }}
          run-id: ${{ steps.get_run_id.outputs.run_id }}
          path: node_modules
      - run: chmod +x node_modules/.bin/playwright
        if: steps.check_modules_artifact.outputs.is_exist != 'false'
      - name: Cached dependencies are not found. Install dependencies
        if: steps.check_modules_artifact.outputs.is_exist == 'false'
        run: yarn --prefer-offline
      - name: Download artifacts
        uses: actions/download-artifact@v4
        with:
          name: screenshots
          path: e2e/
      - name: Run Playwright tests
        uses: docker://mcr.microsoft.com/playwright:v1.45.0-jammy
        with:
          # The expression in args means: run the tests and, if they fail, create an
          # is_test_failed.txt file, then set permissions on the playwright-report folder.
          # Think of this file as a flag that lets later steps see whether the tests failed.
          args: >
            sh -c "
             yarn test:e2e || echo 'is_test_failed=true' > is_test_failed.txt &&
             chmod -R 755 playwright-report"
        env:
          PLAYWRIGHT_TEST_BASE_URL: ${{ github.event.deployment_status.target_url }}
        id: screenshot_test
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 1
      # Manually check if the flag file exists and fail the job if it does
      - name: Check if tests failed
        run: |
          if [ -f "is_test_failed.txt" ]; then
            echo "Tests failed"
            exit 1
          else
            echo "Tests were successful"
          fi
  deploy-report:
    # Run only if tests fail
    if: failure()
    env:
      VERCEL_ORG_ID: ${{ secrets.VERCEL_TEAM_ID }}
      VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
    needs: test
    runs-on: ubuntu-latest
    environment:
      name: preview
      url: ${{ env.UI_TEST_REPORT_URL }}
    steps:
      - uses: actions/checkout@v4
      - name: Check if artifact with playwright-report exists
        id: check_artifact
        run: |
          response=$(curl -s \
          -H "Authorization: Bearer ${{ github.token }}" \
          -H "Accept: application/vnd.github.v3+json" \
          "https://api.github.com/repos/${{ github.repository }}/actions/runs/${{ github.run_id }}/artifacts" \
          | jq ".artifacts[] | select(.name == \"playwright-report\")")

          if [ "$(echo "$response" | jq -e '. != null')" ]; then
            echo "The artifact exists."
            echo "is_exist=true" >> $GITHUB_OUTPUT
          else
            echo "The artifact does not exist."
            echo "is_exist=false" >> $GITHUB_OUTPUT
          fi
      # No report. Nothing to deploy.
      - name: Check if report exists
        if: steps.check_artifact.outputs.is_exist == 'false'
        run: exit 1
      - name: Install Vercel CLI
        run: npm install --global vercel@latest
      - name: Pull Vercel Environment Information
        run: vercel pull --yes --environment=preview --token=${{ secrets.VERCEL_UI_TEST_REPORT_TOKEN }}
      - name: Download artifacts
        uses: actions/download-artifact@v4
        with:
          name: playwright-report
      # Prepare report for static deploy
      - name: Copy artifacts
        run: |
          mkdir -p .vercel/output/static
          mv * .vercel/output/static/
      # Create a config.json required for static deployment
      - name: Create config.json
        run: |
          touch .vercel/output/config.json
          echo '{"version": 3}'  >> .vercel/output/config.json
      - name: Deploy Project Artifacts to Vercel
        run: echo "UI_TEST_REPORT_URL=$(vercel deploy --prebuilt --token=${{ secrets.VERCEL_UI_TEST_REPORT_TOKEN }})" >> $GITHUB_ENV
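
The comment before the "Run Playwright tests" step above mentions skipping a test when its control screenshot already exists. A minimal sketch of such a guard (the SEED_RUN variable and the screenshot name are assumptions, not part of the workflow above):

import fs from 'node:fs';
import { test, expect } from '@playwright/test';

test('homepage snapshot', async ({ page }, testInfo) => {
  // snapshotPath resolves where Playwright stores the reference image for this test
  const reference = testInfo.snapshotPath('homepage.png');
  // Hypothetical guard: while updating snapshots, keep an existing control screenshot untouched
  test.skip(process.env.SEED_RUN === 'true' && fs.existsSync(reference),
    'Control screenshot already exists');
  await page.goto('/');
  await expect(page).toHaveScreenshot('homepage.png');
});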

Note: A few words about a potential issue with the Playwright CLI. Playwright ships two packages with a similar structure: playwright and @playwright/test. If you have both installed and use the CLI to run tests, you may get an error. In our case, the npx playwright test command started failing without any changes on our side: for some reason we had both packages installed, and it seems that at some point the CLI got linked to the wrong package. The fix for our case was as described in these comments: first and second.

The code above is too verbose. We can use composite actions to make it look prettier. A composite action is basically a way to extract a repetitive piece of code and make it reusable across jobs. An action can have inputs and outputs, and it can live in the same repository or in any public repository. We have already used ready-made actions, for example to upload and download artifacts; we can create our own for getting the last run ID, checking whether an artifact exists, and so on.

Below is a real-world example of using composite actions.

Real world example

name: E2E test
on:
  deployment_status:
env:
  PLAYWRIGHT_TEST_BASE_URL: ${{ secrets.E2E_MASTER_BRANCH_PREVIEW_URL }}
  NEXT_PREVIEW_TOKEN: ${{ secrets.NEXT_PREVIEW_TOKEN }}
  SLACK_INCOMING_WEBHOOK_URL_SNAPSHOTS: ${{ secrets.SLACK_INCOMING_WEBHOOK_URL_SNAPSHOTS }}
  CI_E2E_TEST: true
jobs:
  seed-screenshots-master:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    if: ${{ github.event.deployment_status.state == 'success' && github.event.deployment.environment == 'Production – easy-park-front'}}
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/e2e-seed
        with:
          store-period: 7
          artifact-name: 'screenshots-master'
          is-master: true
      - uses: actions/upload-artifact@v4
        with:
          name: node_modules
          path: node_modules/
          retention-days: 7
  seed-screenshots-pull-request:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    if: ${{ github.event.deployment_status.state == 'success' && github.event.deployment.environment == 'Preview – easy-park-front' && github.event.deployment.ref != 'refs/heads/master'}}
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/e2e-seed
  test:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    needs: seed-screenshots-pull-request
    if: success()
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/e2e-get-last-run-id
        id: last-run
      - uses: ./.github/actions/e2e-install
        with:
          run-id: ${{ steps.last-run.outputs.run_id }}
      - name: Download artifacts
        uses: actions/download-artifact@v4
        with:
          name: screenshots
          path: e2e/
      - name: Run Playwright tests
        uses: docker://mcr.microsoft.com/playwright:v1.45.0-jammy
        with:
          args: >
            sh -c "
             yarn test:e2e || echo 'is_test_failed=true' > is_test_failed.txt &&
             chmod -R 755 playwright-report &&
             node src/utils/patchPlaywrightReport.js"
        env:
          PLAYWRIGHT_TEST_BASE_URL: ${{ github.event.deployment_status.target_url }}
        id: screenshot_test
      - uses: actions/upload-artifact@v4
        with:
          name: is_test_failed
          path: is_test_failed.txt
          retention-days: 1
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 1
  deploy-report:
    if: success() || failure()
    env:
      VERCEL_ORG_ID: ${{ secrets.VERCEL_TEAM_ID }}
      VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
    needs: test
    runs-on: ubuntu-latest
    environment:
      name: preview
      url: ${{ env.UI_TEST_REPORT_URL }}
    steps:
      - uses: actions/checkout@v4
      - name: Install Vercel CLI
        run: npm install --global vercel@latest
      - name: Pull Vercel Environment Information
        run: vercel pull --yes --environment=preview --token=${{ secrets.VERCEL_UI_TEST_REPORT_TOKEN }}
      - name: Download artifacts
        uses: actions/download-artifact@v4
        with:
          name: playwright-report
      - name: Copy artifacts
        run: |
          mkdir -p .vercel/output/static
          mv * .vercel/output/static/
      - name: Create config.json
        run: |
          touch .vercel/output/config.json
          echo '{"version": 3}'  >> .vercel/output/config.json
      - name: Deploy Project Artifacts to Vercel
        run: echo "UI_TEST_REPORT_URL=$(vercel deploy --prebuilt --token=${{ secrets.VERCEL_UI_TEST_REPORT_TOKEN }})" >> $GITHUB_ENV
      - uses: ./.github/actions/e2e-check-artifact
        with:
          run-id: ${{ github.run_id  }}
          artifact-name: 'is_test_failed'
        id: check-if-test-failed
      - name: Get Deployment Info
        id: get-info
        run: |
          # Fetch the commit info from the API
          commit_name=$(curl -s -H "Authorization: token ${{ github.token }}" \
            "https://api.github.com/repos/${{ github.repository }}/commits/${{ github.event.deployment.sha }}" | jq -r '.commit.message')
          # Fetch the PR info if it exists
          pr_number=$(curl -s -H "Authorization: token ${{ github.token }}" \
            "https://api.github.com/repos/${{ github.repository }}/commits/${{ github.event.deployment.sha }}/pulls" | jq -r '.[0].number')
          pr_link=""
          if [ "$pr_number" != "null" ]; then
            pr_link="https://github.com/${{ github.repository }}/pull/${pr_number}"
          fi
          # Print the results
          echo "Commit name: ${commit_name}"
          echo "Commit SHA: ${{ github.event.deployment.sha }}"
          echo "PR Link: ${pr_link}"
          # Set the output variables
          echo "COMMIT_NAME=${commit_name}" >> "$GITHUB_ENV"
          echo "COMMIT_SHA=${{ github.event.deployment.sha }}" >> "$GITHUB_ENV"
          echo "PR_LINK=${pr_link}" >> "$GITHUB_ENV"
      - uses: actions/checkout@v4
      - uses: ./.github/actions/report-error
        if: steps.check-if-test-failed.outputs.is_exist != 'false'
        env:
          SLACK_INCOMING_WEBHOOK_URL: ${{ secrets.SLACK_INCOMING_WEBHOOK_URL }}
          NEXT_PUBLIC_PRODUCTION_DOMAIN: ${{ secrets.NEXT_PUBLIC_PRODUCTION_DOMAIN }}
          ERROR_EMAIL_LIST: ${{ secrets.ERROR_EMAIL_LIST }}
        with:
          args: "'Snapshot testing' 'Changes detected. Click link in slug to review changes' ${{ env.UI_TEST_REPORT_URL }} ${{ env.PR_LINK }} '${{ env.COMMIT_NAME }}'"
          mode: CI_SNAPSHOTS_REPORT

Composite action to check for the presence of an artifact.

name: E2E check artifact
description: Check if an artifact with a specific name exists
inputs:
  artifact-name:
    description: 'Name of the artifact'
    required: true
  run-id:
    description: 'Last run id'
    required: true
    default: 'null'
outputs:
  is_exist:
    description: 'Does the artifact exist?'
    value: ${{ steps.check_artifact.outputs.is_exist }}
runs:
  using: 'composite'
  steps:
    - name: Check if Artifact Exists
      if: inputs.run-id != 'null'
      id: check_artifact
      run: |
        response=$(curl -s \
        -H "Authorization: Bearer ${{ github.token }}" \
        -H "Accept: application/vnd.github.v3+json" \
        "https://api.github.com/repos/${{ github.repository }}/actions/runs/${{ inputs.run-id }}/artifacts" \
        | jq ".artifacts[] | select(.name == \"${{ inputs.artifact-name }}\")")
        if [ "$(echo "$response" | jq -e '. != null')" ]; then
          echo "The artifact exists."
          echo "is_exist=true" >> $GITHUB_OUTPUT
        else
          echo "The artifact does not exist."
          echo "is_exist=false" >> $GITHUB_OUTPUT
        fi
      shell: bash

Composite action to get the ID of the last successful run in the master branch.

name: E2E get last run id
description: Get id of the last successful seed run on master branch
outputs:
  run_id:
    description: 'Last run id'
    value: ${{ steps.get_run_id.outputs.run_id }}
runs:
  using: 'composite'
  steps:
    - name: Get Run ID
      id: get_run_id
      run: |
        # Fetch the latest run of the workflow
        response=$(curl -X GET \
        -H "Authorization: Bearer ${{ github.token }}" \
        -H "Accept: application/vnd.github.v3+json" \
        "https://api.github.com/repos/${{ github.repository }}/actions/workflows/e2e-test.yml/runs?status=success&branch=master&per_page=1")
        # Extract the run ID from the response
        run_id=$(echo "$response" | jq -r '.workflow_runs[0].id')
        echo "run_id=${run_id}" >> $GITHUB_OUTPUT
      shell: bash
    - name: Print Run ID
      run: |
        echo "Latest Run ID: ${{ steps.get_run_id.outputs.run_id }}"
      shell: bash

Composite action to handle installation of node_modules.

name: E2E dependencies
description: Install all necessary dependencies to run e2e tests
inputs:
  run-id:
    description: 'Last run id'
    required: true
    default: 'null'
  is-master:
    description: 'Is master seed'
    required: true
    default: 'false'
runs:
  using: 'composite'
  steps:
    - uses: ./.github/actions/e2e-check-artifact
      with:
        run-id: ${{ inputs.run-id }}
        artifact-name: 'node_modules'
      id: check-dependencies-cache
    - name: Download artifacts
      if: inputs.is-master == 'false' && steps.check-dependencies-cache.outputs.is_exist != 'false'
      uses: actions/download-artifact@v4
      with:
        name: node_modules
        github-token: ${{ github.token }}
        repository: ${{ github.repository }}
        run-id: ${{ inputs.run-id }}
        path: node_modules
    - run: chmod +x node_modules/.bin/playwright
      if: inputs.is-master == 'false' && steps.check-dependencies-cache.outputs.is_exist != 'false'
      shell: bash
    - name: Cached dependencies are not found. Install dependencies
      if: inputs.is-master != 'false' || steps.check-dependencies-cache.outputs.is_exist == 'false'
      run: yarn --prefer-offline
      shell: bash

Composite action for taking screenshots in the main branch.

name: E2E seed
description: Seed master screenshots
inputs:
  store-period:
    description: 'Artifact retention period in days'
    required: true
    default: 1
  artifact-name:
    description: 'Name of the artifact'
    required: true
    default: 'screenshots'
  is-master:
    description: 'Is master seed'
    required: true
    default: 'false'
runs:
  using: 'composite'
  steps:
    - uses: ./.github/actions/e2e-get-last-run-id
      id: last-run
    - uses: ./.github/actions/e2e-install
      with:
        run-id: ${{ steps.last-run.outputs.run_id }}
        is-master: ${{ inputs.is-master }}
    - uses: ./.github/actions/e2e-check-artifact
      with:
        run-id: ${{ steps.last-run.outputs.run_id }}
        artifact-name: 'screenshots-master'
      id: check-seed-cache
    - name: Download artifacts
      if: inputs.is-master == 'false' && steps.check-seed-cache.outputs.is_exist != 'false'
      uses: actions/download-artifact@v4
      with:
        name: screenshots-master
        github-token: ${{ github.token }}
        repository: ${{ github.repository }}
        run-id: ${{ steps.last-run.outputs.run_id}}
        path: e2e/
    - name: Run Playwright tests
      uses: docker://mcr.microsoft.com/playwright:v1.45.0-jammy
      with:
        args: yarn test:seed
    - name: Prepare a screenshots folder
      run: |
        cd e2e
        find . -name "*.ts" -type f -delete
        cd ..
      shell: bash
    - uses: actions/upload-artifact@v4
      with:
        name: ${{ inputs.artifact-name }}
        path: e2e/
        retention-days: ${{ inputs.store-period }}

Summary

There are dedicated tools for visual regression testing, but they tend to be paid. With Playwright you can start testing your app today, and not everyone needs a solution as complex as the one above: you can start simple. With some of the optimizations presented above, your tests will run in a reasonable time.
