Different types of testing explained

jess unrein - Dec 6 '18 - Dev Community

In standup the other day, my team's DBA was talking about running smoke tests for his most recent project. I've heard people talk about smoke tests before, but for some reason it never really clicked that I have no idea what a smoke test is. How is it different from a unit test? An integration test? A regression test?

It feels a little embarrassing at this point that I can't articulate the difference between these things, so I decided to do a little research and write up an explainer so that I can reference it in the future and not feel like an ignorant dingus. I figured, since I've been working as a dev for almost 5 years and had this question, there are probably others out there who are similarly too shy to ask.

After reading a bunch of different blog posts, Stack Overflow questions, and random resources, I've constructed a Frankenstein approximation of a consensus for several different categories of tests. For each category, I think there are three good questions to ask:

1.) What kind of thing do they test?
2.) When are these tests written and run?
3.) What information does a test failure provide?

Different people have different definitions, and a single test suite might include multiple types of tests. For example, you might have a set of tests you run that combine integration tests and regression tests into a single suite. That's fine. There are grey areas, and teams have a habit of developing their own, team-specific vocabulary. You don't need to have a comprehensive suite for each of these categories. You should test at the level that makes sense for:

  • the complexity of your app
  • the amount of traffic your app sees
  • the size of your team

If you think I've radically mischaracterized or omitted something important, especially if you work in testing, please let me know in the comments!

Unit tests

What do they test?

Unit tests verify that each atomic unit of code performs the way it's supposed to. Ideally, when you're planning and writing unit tests, you should isolate functionality that can't be broken down any further, and then test that.
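For example, here's what a test of a single, isolated unit might look like. This is a minimal sketch in Python using pytest; `add_sales_tax` is a made-up function, not from any particular codebase:

```python
def add_sales_tax(subtotal: float, rate: float) -> float:
    """Hypothetical atomic unit: one small calculation, no dependencies."""
    return round(subtotal * (1 + rate), 2)

def test_add_sales_tax():
    # Each assertion checks one expected behavior of the single unit.
    assert add_sales_tax(100.00, 0.08) == 108.00
    assert add_sales_tax(0.00, 0.08) == 0.00
```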

Unit tests should not test external dependencies or interactions. You should definitely mock out API calls. Unit test purists would also have you mock out database calls and only ensure that your code operates correctly given correct inputs from outside sources. Depending on your existing codebase or your manager's preferences, this might not be possible. If you aren't able to exclude database functionality from your unit test suite, be mindful of performance and look for potential optimizations. I can tell you from experience that long-running unit test suites are extremely unpleasant and slow down development significantly.
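To illustrate the mocking point, here's a sketch of a unit test that stubs out an external API call using Python's built-in `unittest.mock`. The function and URL are hypothetical:

```python
from unittest.mock import patch

import requests

def get_temperature(city: str) -> int:
    """Hypothetical unit that depends on a third party weather API."""
    resp = requests.get(f"https://api.example.com/weather/{city}")
    return resp.json()["temp_c"]

@patch("requests.get")
def test_get_temperature_without_hitting_the_network(mock_get):
    # The mock stands in for the real API call, so the test stays fast
    # and deterministic even if the real service is down.
    mock_get.return_value.json.return_value = {"temp_c": 21}
    assert get_temperature("chicago") == 21
```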

When do I run them?

You should write and run unit tests in parallel with your code. When people refer to Test Driven Development, they're referring to unit tests: writing the tests first and using them as the spec for what your code should accomplish.

What happens when they fail?

A failing unit test lets you know that a specific piece of code is busted. If you've broken it down far enough, your failure should zoom in on the exact piece of code that isn't working as intended.

Failures should help you identify and fix problems quickly, and let you know when your specs need to be updated. They're probably a good guide for when to update your code documentation as well.

Integration tests

What do they test?

Integration tests check the interaction between two or more atomic units of code. Your application is composed of individual units that perform specific small functions, and each of those small functions might work in isolation but break when you knit them together.

Integration tests also cover how your code interacts with outside dependencies, like database connections or third party APIs.
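As a rough sketch, an integration test might exercise two small units plus a real (in-memory) database connection, instead of mocking the database out the way a unit test would. The functions here are made up for illustration:

```python
import sqlite3

def save_user(conn, name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

def load_users(conn):
    return [row[0] for row in conn.execute("SELECT name FROM users")]

def test_save_and_load_users_together():
    # Exercises both units against a real database connection
    # (in-memory SQLite) to check that they work when knitted together.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    save_user(conn, "jess")
    assert load_users(conn) == ["jess"]
```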

When do I run them?

Integration tests should be the next step after unit tests.

What happens when they fail?

When an integration test fails, it tells you that two or more core functions of your application aren't working together. These might be two modules you've written that clash in some complicated business logic, or a failure resulting from a third party API changing the structure of their response. It might alert you to bad error handling in the case of a database connection failure.

Failures might be easy to pin down, or they might require some manual validation and experimentation to identify. Difficult-to-solve integration test failures are a good indication of where you can improve your logging and error handling.

Regression testing

What do they test?

Regression tests check a set of scenarios that worked in the past and should remain stable as your code changes.

When do I run them?

You should run your regression tests after your integration tests pass. Do not add your new feature to the regression test suite until existing regression tests pass.

What happens when they fail?

A regression test failure means that new functionality has broken some existing functionality, causing a regression.

The failure should let you know what old capabilities are broken, and indicate that you need to write additional integration tests between your new feature and the old, broken feature.

A regression test failure might also indicate that you have inadvertently reintroduced a bug that you fixed in the past.
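For example, a regression test often starts life as a bug fix: once the bug is fixed, you pin the corrected behavior in the suite so the bug can't quietly come back. The function and ticket number below are invented for illustration:

```python
import re

def slugify(title: str) -> str:
    """Turn a post title into a URL slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_handles_consecutive_punctuation():
    # Regression test for hypothetical bug #482: consecutive
    # punctuation used to produce doubled hyphens in slugs.
    assert slugify("Hello -- World!") == "hello-world"
```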

Smoke testing

What do they test?

Smoke tests are a high level, tightly curated set of automated tests that live somewhere in the space between integration and regression tests. They're there as a sanity check that your site's core functionality isn't wrecked.

The term smoke test seems to be a holdover from plumbing. If you could see smoke or steam coming out of a pipe, it was leaky and needed to be fixed.
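In practice, a smoke test can be as simple as hitting a few critical pages and checking that they respond at all. Here's a minimal Python sketch; the base URL is a placeholder:

```python
import requests

BASE_URL = "https://staging.example.com"  # placeholder environment URL

def test_homepage_is_up():
    # If this fails, something big-picture is wrong; don't dig deeper yet.
    assert requests.get(f"{BASE_URL}/").status_code == 200

def test_login_page_is_up():
    assert requests.get(f"{BASE_URL}/login").status_code == 200
```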

When do I run them?

Smoke tests exercise your whole system together, ensuring that core functionality remains intact. They shouldn't be comprehensive; they're your significant, big picture, no-go checks. You should run them early and often, ideally daily, in both staging and production environments.

What happens when they fail?

If a smoke test fails, there's a significant problem with your site's core functionality. You should not deploy new changes until these failures are addressed. If they fail in production, fixing them should be a very high priority.

Acceptance testing

(I've also heard this called QA/BV/Manual testing, etc.)

What do they test?

Acceptance tests are usually a set of manual checks performed after end-to-end development is finished. They make sure that the feature as written actually meets all of the initial specifications, or acceptance criteria.

What happens when they fail?

Looks like you missed a bit of functionality when writing your code. You'll need to go back to development and fix that. :(

If acceptance tests fail, you probably need to decide on acceptance criteria earlier in your planning process next time.

When do I run them?

Since these are manual tests, not tests run as code, the timing is a little different. You and your project owner should draft a set of acceptance criteria before work begins on a project. Any additional scope that's discovered or added to the project should be reflected in the acceptance criteria.

Acceptance tests should happen fairly soon after development is complete so that you can go back and iterate quickly if something isn't quite right. It makes sense to do these right after unit or integration testing, so you haven't gone too far down the testing process before finding out that significant changes need to be made.

Performance testing

What do they test?

Performance tests check the stability, scalability, and usability of your product and infrastructure. You might measure things like errors per second or how long it takes to load a page. There aren't necessarily pass/fail criteria associated with a performance test; this stage is more about gathering data and looking for areas of improvement.
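As a small illustration, a performance check might just time a critical code path and compare it against a budget you've picked. Everything here is a stand-in example, not a prescribed tool or threshold:

```python
import time

def render_homepage() -> str:
    """Stand-in for whatever code path you want to benchmark."""
    return "".join(str(i) for i in range(100_000))

def test_homepage_renders_within_budget():
    # Less a pass/fail test than a benchmark with a generous budget;
    # the real value is tracking this number over time.
    start = time.perf_counter()
    render_homepage()
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5  # arbitrary example budget, in seconds
```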

What happens when they fail?

Performance tests don't exactly fail the way a unit test suite would fail. Instead, you collect a set of benchmarks and assess them against where you want those numbers to be. If a performance test "fails," it might tell you that you need to pay more attention to infrastructure scaling, database query time, etc.

When do I run them?

Performance tests are a good idea after major releases and refactors.

Load testing

What do they test?

Load testing is a specialized kind of performance test that checks how your product performs under significant stress over a predetermined period of time.
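As one concrete way to do this, here's a minimal sketch using Locust, a Python load testing tool. The endpoints and simulated behavior are placeholders:

```python
from locust import HttpUser, task, between

class SiteUser(HttpUser):
    # Each simulated user waits 1-5 seconds between requests.
    wait_time = between(1, 5)

    @task
    def browse_homepage(self):
        self.client.get("/")

    @task
    def view_product(self):
        self.client.get("/products/1")
```

You'd point a swarm of these simulated users at a staging environment (e.g. `locust -f loadtest.py --host https://staging.example.com`) and watch throughput and error rates as the user count climbs.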

What happens when they fail?

Load tests assess how prepared you are for a significant increase in traffic. If a load test fails, it doesn't mean that your site is broken, but it does mean that you aren't prepared for a viral hit or a DDoS attack. This is probably not a big deal for small products just starting out, but failures should be a concern as your userbase starts to scale.

When do I run them?

Load tests shouldn't be your first concern right out of the gate, but as your product becomes bigger and more established, you should probably run load tests on new features to see whether they affect the overall performance of the site and whether they can be optimized.

I can no longer say I don't know what a smoke test is, and hopefully you learned something along the way too! As I mentioned above, I am not a tester, so if you notice something I've missed or misinterpreted, let me know in the comments!

:)
