Hey and welcome back to The Adventures of Blink! We're continuing Season 2 today, building Hangman in Python, exploring tons of software engineering concepts and DevOps techniques as we go.
TL;DR: YouTube
Don't feel like reading? Come join me on YouTube, where we'll build our way through this episode. Leave me a like, a subscribe, and make sure you don't miss upcoming episodes!
A quick architecture review
Our app is going to be built something like this:
We have a database in a container. We have a frontend app. And we made a containerized Flask API to sit between them and provide the "glue" from front-end to back-end.
Today we're going to see how to use our API in our front-end application, and use a little Test-Driven Development (TDD) to do it!
Step 1: Start up the backend systems
Remember how to do this? We haven't touched this since Episode 3:
First make sure you've started Docker on your system. Then you need one command:
# Docker Compose V1 (the standalone docker-compose command, common on older Windows & Linux installs)
docker-compose up --build
# Docker Compose V2 (bundled with Docker Desktop, e.g. on Mac)
docker compose up --build
Pretty simple! This should leave you with a working database and API.
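If you want to be sure the stack has finished starting before you move on, a little polling script can help. Here's a minimal sketch (assuming the API listens on localhost:5001 as in Episode 3 - adjust the URL if yours differs):

# wait_for_api.py - keep polling the API until it answers (or we give up)
import time
import requests

API_URL = "http://localhost:5001/random"  # assumed from our earlier setup

for attempt in range(10):
    try:
        if requests.get(API_URL, timeout=2).status_code == 200:
            print("API is up!")
            break
    except requests.exceptions.ConnectionError:
        pass  # the containers are probably still starting
    time.sleep(2)
else:
    print("API never responded - check the docker compose output")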
Step 2: Manually call the API to confirm it works
Next we want to know that our API will respond correctly. Here's how you test it from your terminal:
curl http://localhost:5001/random
This should respond with an object containing a board entry.
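If you'd rather check it from Python than from curl, here's a quick sketch that does the same thing with requests (the exact shape of the JSON beyond the board entry depends on how we built the API back in Episode 3):

# quick_check.py - a one-off manual check, not part of our test suite
import requests

response = requests.get("http://localhost:5001/random")
print(response.status_code)  # we expect 200
print(response.json())       # we expect an object containing a board entry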
Step 3: Write a test
Now in true TDD fashion, we want to create a test describing the end-to-end behavior we want to see.
We have two database interactions: /add and /random. We only need to test one to ensure we've established our connectivity correctly... but they both present some interesting challenges, don't they?
/random is going to be hard to test because it's random. We can't determine what the response will be, so how will we assert that it behaved successfully? If we can solve that problem, this method will let us make a simple call that reaches all the way from the frontend code into the database.
/add is also going to be hard to test because it will alter our database every time it runs. We need a way to ensure we're passing the correct data without actually executing the command, or every test cycle will flood our database with junk data. Since that's not something we want, we won't use /add for integration testing.
Mocks
If you've ever done a lot of unit testing, you might be thinking "Wait a sec... we should be mocking our database responses!"
That is true in the context of unit tests. But we're talking about integration testing today... we want to run something end-to-end that actually communicates with the database. It's definitely worthwhile to unit test our frontend as well, and in those cases we would want to mock the response because we're validating frontend logic - but today's assignment is to verify that everything connects.
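For contrast, a mocked unit test might look something like the sketch below. The fake payload and patch target here are illustrative only, and this is not part of today's test suite:

# A *unit* test that mocks the HTTP call instead of hitting the real API.
# We're NOT doing this today - it's here to show the difference.
from unittest.mock import Mock, patch

from hangman.db_api import HangmanDB_Integration

def test_random_with_mock():
    fake_response = Mock(status_code=200)
    fake_response.json.return_value = {"board": "example"}  # made-up payload

    # Patch requests.get where db_api uses it, so no real network call happens
    with patch("hangman.db_api.requests.get", return_value=fake_response):
        api = HangmanDB_Integration()
        assert api.random().status_code == 200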
Separating Unit & Integration Tests in Pytest
The big challenge here is that if we just write more unit tests, they're going to try to run whenever we call pytest. We need to be able to specify that our integration tests are not part of the unit test suite, and we will call them separately. Here's how we do that:
# We need pytest and the class that contains the database api calls
# (which we haven't written yet)
import pytest
from hangman.db_api import HangmanDB_Integration
# This fixture will instantiate the class that calls the API for us
@pytest.fixture
def db_integration():
    return HangmanDB_Integration()
# This annotation marks the test as being part of the "integration"
# test suite. This will allow us flexibility in how we call the
# tests.
@pytest.mark.integration
def test_example_integration(db_integration):
    api = db_integration
    # The contents of the api aren't being tested, just our ability
    # to retrieve them. So we'll just look for a 200 response
    # from the API call
    assert api.random().status_code == 200
Now when we're ready to run our integration test, we run
pytest -m integration
This command will skip everything in our test suite that isn't decorated with @pytest.mark.integration. If we just want to run our unit tests, we can still do that with
pytest
But we'll have to add a pytest.ini file to our project root and configure it as follows:
# pytest.ini
[pytest]
markers =
    integration: mark a test as an integration test.
addopts = -m "not integration"
This will ensure that our marked tests DO NOT run when we call pytest, and ONLY our marked tests run when we call pytest -m integration.
Step 4: Code until we pass the tests
Our db_api.py file looks like the following:
# We need requests because we're issuing HTTP calls
import requests
# We're writing this code into a class so we have an easy handle
# to grab hold of the database by in our frontend code
class HangmanDB_Integration:
    def __init__(self):
        ROOT_URL = 'http://localhost:5001'
        self.random_url = f"{ROOT_URL}/random"
        self.add_url = f"{ROOT_URL}/add"

    def random(self):
        return requests.get(self.random_url)
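Once the integration test passes, using the class from our frontend code is straightforward. Here's a rough sketch of what that might look like (the shape of the JSON payload still depends on the API we built in Episode 3):

# Somewhere in our frontend code (illustrative)
from hangman.db_api import HangmanDB_Integration

db = HangmanDB_Integration()
response = db.random()

if response.status_code == 200:
    puzzle = response.json()  # the object containing our board entry
    print(puzzle)
else:
    print(f"API call failed with status {response.status_code}")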
Wrapping up
Today's adventure may not feel like a lot of progress toward a working app, but it's a necessary side quest! Testing is a critical part of releasing your app to the world and an end-to-end integration test is a great way to ensure that all the various parts of the app are wired up to each other correctly.
You might be wondering when we're going to get to the user interface... well, it's what's coming up next! Tune in next time as we continue the adventures with our first app screen build!