Automated Testing Made Simple: Setting Up and Exploring pytest

Harshil Patel - Nov 6 - Dev Community

This blog is about integrating a testing framework into my ResumeEnhancer CLI tool. As my work on ResumeEnhancer has progressed, I realized that adding test cases would be essential to maintain code quality as the project grows. I considered a few options like pytest, nose, and unittest, and chose pytest since it’s widely used and a good framework to get comfortable with.

Setting up pytest is straightforward—just run pip install pytest, and you’re ready to go.

Testing

I began by researching how to write and run test cases with pytest. I created a tests folder and added my first test file called test_utils.py. I decided to follow the test_filename.py naming convention and wrote my initial test case for a utility function called read_txt_file(). This function reads text from a .txt file, as the name suggests. I explored three different ways to test this:

  1. Mock the open function – This approach pretends to open a file without actually creating one.
  2. Create a temporary text file – Here, I’d create a small .txt file in the test specifically for this function (pytest’s tmp_path fixture makes this easy; see the sketch after this list).
  3. Use a test.txt file in the project folder – I’d store a test file in the main folder to use as needed.
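
For reference, option 2 is simple with pytest’s built-in tmp_path fixture, which hands each test its own temporary directory. A quick sketch, assuming read_txt_file() lives in a utils module and just returns the file’s contents:

from utils import read_txt_file  # assumed module name


def test_read_txt_file_with_tmp_file(tmp_path):
    # tmp_path is a built-in pytest fixture: a unique temporary directory (pathlib.Path)
    sample = tmp_path / "sample.txt"
    sample.write_text("Sample resume text")
    assert read_txt_file(str(sample)) == "Sample resume text"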

Finally, I decided to mock the open function and proceeded to write the test cases.
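
Here’s roughly what the mocked-open version in tests/test_utils.py looks like (a simplified sketch, again assuming read_txt_file() simply returns the file’s text):

from unittest import mock

from utils import read_txt_file  # assumed module name


def test_read_txt_file_returns_contents():
    fake_content = "Sample resume text"
    # mock.mock_open builds a stand-in for the built-in open(), preloaded with read_data
    with mock.patch("builtins.open", mock.mock_open(read_data=fake_content)):
        assert read_txt_file("resume.txt") == fake_content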

Next, I explored ways to run specific test cases. I found that you can run a particular test with pytest path_to_test_file::class_name::test_name, and if you haven’t used classes, you can simply omit the class name.

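For example, with the test written above (the second command assumes a hypothetical class-based test file):

pytest tests/test_utils.py::test_read_txt_file_returns_contents
pytest tests/test_example.py::TestExample::test_something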

pytest also supports a watch-style mode that automatically reruns the previously failing tests whenever the code changes. It comes from the pytest-xdist plugin and is enabled with the --looponfail option.

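Using it just takes an extra install:

pip install pytest-xdist
pytest --looponfail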

Mocking API Responses

Next, I wanted to add test cases for the get_response() method. This function takes a resume and job description as inputs, along with settings to customize the response. It uses an AI model to review the resume and suggests ways to make it a better fit for the job description. The feedback focuses on highlighting important skills, experiences, and keywords that could be added or emphasized to improve alignment with the job requirements.

Since get_response() uses the Groq API, I needed to mock the API call for testing. Here’s how I initially tried it:

import responses

# Fake payload shaped like a chat-completion chunk from the Groq API
sample_llm_response = {"choices": [{"delta": {"content": "Mocked response content"}}]}

# Register a mocked 200 response for the Groq chat completions endpoint
responses.add(
    responses.POST,
    "https://api.groq.com/openai/v1/chat/completions",
    json=sample_llm_response,
    status=200,
)

However, when I ran the test, it failed with an error saying the API_KEY was invalid.

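For context, get_response() is structured roughly like this (a simplified sketch, not the exact code; the model name and prompt are placeholders):

from groq import Groq


def get_response(resume, description, api_key, **settings):
    # The client is created first, then the streaming API call is made through it
    client = Groq(api_key=api_key)
    stream = client.chat.completions.create(
        model="llama3-8b-8192",  # placeholder model name
        messages=[
            {
                "role": "user",
                "content": f"Resume:\n{resume}\n\nJob description:\n{description}",
            }
        ],
        stream=True,
    )
    for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")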

After some investigation, I realized that the code initializes the Groq client first with client = Groq(api_key) and then makes the API call through that client, so mocking the HTTP endpoint never took effect. Instead, I decided to mock the entire Groq client like this:

from io import StringIO
from unittest import mock

from resume_enhancer import get_response


# Patch Groq where resume_enhancer looks it up, not in the groq package itself
@mock.patch("resume_enhancer.Groq")
def test_get_response_valid(mock_groq):
    # Mock the client instance and its chat.completions.create method
    mock_client_instance = mock.Mock()
    mock_groq.return_value = mock_client_instance  # Groq() returns the mock client instance
    mock_client_instance.chat.completions.create.return_value = [  # Mocked stream of response chunks
        mock.Mock(
            choices=[mock.Mock(delta=mock.Mock(content="Mocked response content"))]
        )
    ]

    # Capture stdout to check the printed output
    with mock.patch("sys.stdout", new_callable=StringIO) as mock_stdout:
        get_response(
            resume="Sample Resume",
            description="Sample Job Description",
            api_key="test_api_key",
        )
        output = mock_stdout.getvalue()
    # Assert that the expected content is printed in the output
    assert "Mocked response content" in output

I wrote 11 test cases to check different scenarios for get_response(), and all the tests passed. However, I found myself repeating the Groq mocking code in each test, which made things messy. I looked for ways to avoid this repetition and found I could use a class-based setup to share common mocking logic among all tests. Here’s the improved setup:

class Test_get_response:

    def setup_method(self):
        # Mock the Groq client and its completions method
        self.patcher = mock.patch("resume_enhancer.Groq")
        self.mock_groq = self.patcher.start()  # Start the patch

        self.mock_client_instance = mock.Mock()
        self.mock_groq.return_value = self.mock_client_instance
        self.mock_client_instance.chat.completions.create.return_value = [
            mock.Mock(
                choices=[mock.Mock(delta=mock.Mock(content="Mocked response content"))],
                x_groq=mock.Mock(
                    usage=mock.Mock(
                        completion_tokens=100,
                        prompt_tokens=50,
                        total_tokens=150,
                        completion_time=0.35,
                        prompt_time=0.15,
                        queue_time=0.1,
                        total_time=0.6,
                    )
                ),
            )
        ]

    def teardown_method(self):
        self.patcher.stop()  # Stop the patch after each test

This setup removed the repetitive mock code, and everything worked smoothly.
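
With the shared setup in place, each test inside Test_get_response only has to call get_response() and check the output. For example, the “valid input” case from earlier shrinks to roughly this:

    def test_get_response_valid(self):
        # Capture stdout and check that the mocked content was printed
        with mock.patch("sys.stdout", new_callable=StringIO) as mock_stdout:
            get_response(
                resume="Sample Resume",
                description="Sample Job Description",
                api_key="test_api_key",
            )
        assert "Mocked response content" in mock_stdout.getvalue()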

Conclusion

I learned a lot about pytest, including how to mock API calls, function calls, and even third-party classes like Groq. Testing has shown me how useful it is for keeping the code reliable as I add features.

Before this, I had written test cases in JavaScript using Jest, but this was my first experience mocking both functions and an API. Previously, I only did basic tests without much setup for dependencies, so learning to mock an entire API client in pytest was new for me. This process has helped me understand the value of mocking for isolating parts of the code in testing.

Now, I see why testing is so valuable, and I’ll make it a habit in future projects. It catches issues early, makes the code easier to update, and builds confidence that everything works as expected. This experience has shown me that putting effort into testing upfront is well worth it for creating better, more reliable projects.
