Unit and Integration Testing

bob.ts - Jun 14 '19 - Dev Community

Introduction

With my current client, we are examining how to write effective Unit and Integration Tests, as well as how to rework the code to make it more testable. They are moving toward a Continuous Integration / Continuous Delivery (CI/CD) pipeline. My intent here is to show where we are heading. This article has a front-end focus, but the concepts can certainly be applied in other areas.

First, they do have front-end tests; there was a drive a while back to introduce testing. From what I can see, they learned TDD (Test-Driven Development) and focused on pushing code coverage to 100%. While that knowledge is valuable, they are fairly strict in their application of TDD (to the point that they would rather not test at all than deviate from it), and the strong push for 100% coverage seems to have produced brittle tests in the upper range (the final 10-20% of code that is generally hard to test).

Additionally, there was little consideration of the type of code being written, which in turn impacts the types of tests that can be written.

The Future ...

I expect that I will write in more detail on many of the sections that follow as more details come to light or I come up with better explanations.

Testing Dorito

Adapted from Kent C. Dodds (Twitter Post)

I don't plan to go into the testing pyramid here (most of us have seen it). Just realize that we are differentiating between Developer Integration Testing and QA Integration Testing (the latter is not discussed further in this article).

Our Goals

  • Have CONFIDENCE that the code can be released without defects, and PROOF that it works.
  • Be PRACTICAL about how we test.
  • Be PRUDENT about what should be tested.
  • DECREASE the defects (find BUGS earlier) that reach QA.
  • Achieve FASTER, EASIER deployments.

DEFINITION: Integration Testing

  • Does not require a live version of all services, environments, and access (having only parts can be OK).
  • Not necessarily BROAD IN SCOPE; it can be more effective with a NARROW SCOPE.
  • Verifies that independent units of code work correctly when they are connected to each other.
  • Can use a “faithful” test double (illustrated in the sketch below).
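To make the "narrow scope plus faithful double" idea concrete, here is a minimal sketch, assuming Jest as the test runner; OrderService and PricingClient are hypothetical names invented for illustration:

```typescript
// Two real units wired together: a service and the pricing contract it uses.
interface PricingClient {
  getBasePrice(sku: string): Promise<number>;
}

class OrderService {
  constructor(private pricing: PricingClient) {}

  // Applies a 10% bulk discount for quantities of 10 or more.
  async total(sku: string, quantity: number): Promise<number> {
    const base = await this.pricing.getBasePrice(sku);
    const discount = quantity >= 10 ? 0.9 : 1.0;
    return base * quantity * discount;
  }
}

// A "faithful" test double: honors the same contract as the real client,
// but is backed by an in-memory lookup instead of a live HTTP service.
const fakePricing: PricingClient = {
  getBasePrice: async (sku) => (sku === "WIDGET" ? 5 : 0),
};

test("OrderService applies the bulk discount through the real code path", async () => {
  const service = new OrderService(fakePricing);
  expect(await service.total("WIDGET", 10)).toBe(45); // 5 * 10 * 0.9
});
```

The test is narrow (one service, one collaborator), yet it still exercises real integration: the service's actual code path runs against the contract it depends on.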

DEFINITION: Unit Testing

  • ATOMIC, Lowest Level (small and fast).
  • Single Responsibility Principle (SRP): “do one thing well.”
  • Repeatable, Reliable, and Deterministic.
  • Demonstrate concrete progress.
  • Fails on a bug or changed requirements.
  • Easy to understand why it fails.
  • Reduce the cost of bugs.

Standards

  • Easy to write
  • Readable
  • Reliable
  • Fast (the sketch below aims for all four)
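As a sketch of those standards, assuming Jest; formatCurrency is a hypothetical pure function used only for illustration:

```typescript
// Easy to write, readable, reliable (no shared state or I/O), and fast.
// formatCurrency is a hypothetical pure function under test.
function formatCurrency(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

describe("formatCurrency", () => {
  test("formats whole dollars", () => {
    expect(formatCurrency(500)).toBe("$5.00");
  });

  test("formats fractional amounts deterministically", () => {
    expect(formatCurrency(1999)).toBe("$19.99");
  });
});
```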

Overall Testing Strategies

There are no “SET IN STONE” rules for testing.

  • Testing is not about finding bugs; however, a bug should trigger additional tests along with the bug fix.
  • Tests capture the developer’s understanding of the requirements.
  • Separate business logic from server-side or client-side integration logic.
  • Think through how to test before writing code (design for testability).
  • Improving production code often simplifies test code.
  • Do NOT test a prototype, proof-of-concept, or experimental code (until you know it will become production code), but DO still design it for testability.
  • BEST PRACTICE in production code does not equal BEST PRACTICE in testing.

Industry Testing BETTER Practices

  1. Know where the pain points are in the code.
  2. Work and test in parallel.
  3. Make testing part of the workflow (providing accountability / review).
  4. Naming Conventions (1st line of defense / understanding).
  5. Embrace magic numbers (inline literal values read as a specification in tests; see the sketch after this list).
  6. Duplication is OK (tests may violate the DRY principle).
  7. Test trivial cases.
  8. Test boundary cases:
    • Negative cases
    • Catch known bugs
  9. Priorities:
    • Algorithm Engines
    • Utility Methods
    • Core Business Logic Methods
    • High-Risk Services
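Here is a minimal sketch of points 5 through 8, assuming Jest; validateAge is a hypothetical function invented for illustration:

```typescript
// Hypothetical age validator under test.
function validateAge(age: number): boolean {
  return Number.isInteger(age) && age >= 0 && age <= 130;
}

// Inline "magic numbers" and near-duplicate cases are fine here:
// each test reads as a standalone specification.
test("accepts a typical age (trivial case)", () => {
  expect(validateAge(35)).toBe(true);
});

test("accepts both boundaries", () => {
  expect(validateAge(0)).toBe(true);
  expect(validateAge(130)).toBe(true);
});

test("rejects values just outside the boundaries (negative cases)", () => {
  expect(validateAge(-1)).toBe(false);
  expect(validateAge(131)).toBe(false);
});
```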

Complexity

CODE SIZE (“This function is doing too much.”) ...

  • The overall amount of project code will not change, but the number of statements in any one method can.
  • Methods with fewer than 20 lines: 28% contain an error.
  • Methods with 20 or more lines: 78% contain an error.

JSLINT ...

  • Identify bad style, syntax, and semantics; refactor the bad code and replace it with good code.

CYCLOMATIC COMPLEXITY ...

  • Number of independent paths through the code.
  • Also, the minimum number of Unit Tests that should be needed to exercise the code (see the sketch below).
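For instance, this hypothetical function has a cyclomatic complexity of 3 (two decision points plus the baseline path), so it needs at least three unit tests:

```typescript
// Cyclomatic complexity of 3: each `if` adds one independent path.
function shippingCost(weightKg: number, express: boolean): number {
  if (weightKg <= 0) {
    throw new Error("weight must be positive"); // path 1
  }
  if (express) {
    return weightKg * 10; // path 2
  }
  return weightKg * 4; // path 3
}
```

Three tests (an invalid weight, an express order, and a standard order) would exercise every independent path.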

REUSE ...

  • Code repeated more than twice is a candidate for being pulled into its own function (a sketch follows).
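A minimal sketch of that extraction; the names are hypothetical:

```typescript
// Before: `typeof value === "string" && value.trim().length > 0`
// was repeated at three call sites. After extraction there is one
// implementation to test instead of three copies to keep in sync.
function isNonEmptyString(value: unknown): value is string {
  return typeof value === "string" && value.trim().length > 0;
}

function greet(name: unknown): string {
  return isNonEmptyString(name) ? `Hello, ${name}` : "Hello, guest";
}
```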

THE HUMAN TEST ...

  • Complexity boils down to how difficult it will be for someone else to read the code.
  • Code Review Process: shown to find 60-90% of all defects.

Metrics

(beyond Code Coverage and Branches)

There are two test metrics to be concerned about:

  1. Percentage of TEST FAILURES that are only a problem with the test, not a problem with the code. This often happens when various elements are mocked.
  2. Percentage of times that a BUG FIX or otherwise NON-BREAKING CHANGE requires an update to a unit test.

The goal would be to have both of these percentages at 0%. This would indicate that every test failure represents a real issue in the code and that only feature changes require updates to the tests.

What Makes Code Hard To Test?

  • Tightly coupled code (contrast the two sketches after this list)
  • Hidden or embedded dependencies
  • Required data and databases
  • Insane amounts of setup code for the test
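The contrast below is a sketch, assuming Jest; loadUser, its fetcher type, and the URL are hypothetical names used for illustration:

```typescript
// Hard to test: the network dependency is hidden inside the function.
async function loadUserHardToTest(id: string): Promise<unknown> {
  const res = await fetch(`https://api.example.com/users/${id}`); // embedded dependency
  return res.json();
}

// Easier to test: the dependency is injected, so a test can pass a
// simple in-memory double -- no network, no database, minimal setup.
type FetchUser = (id: string) => Promise<{ id: string; name: string }>;

async function loadUser(id: string, fetchUser: FetchUser) {
  const user = await fetchUser(id);
  return { ...user, displayName: user.name.toUpperCase() };
}

test("loadUser derives displayName", async () => {
  const fakeFetch: FetchUser = async (id) => ({ id, name: "Ada" });
  expect((await loadUser("1", fakeFetch)).displayName).toBe("ADA");
});
```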

Design Patterns

These are the design patterns we want to be concerned with. Developers need to write code for testability; the particular pattern they use should flex and change as the process moves forward.

Test Driven Development (TDD)

  • Inside-Out Perspective (unit level first).
  • Code grows via refactoring; tests stay one step ahead of development.

Issues:

  • Can miss breaking patterns.
  • Writing Integration Tests before the code exists is daunting.
  • Tests that reproduce a bug are nearly impossible to write before the cause is understood.

Behavior Driven Development (BDD)

  • Outside-In Perspective.
  • Based on requirements and scenarios.
  • Expected behavior of the user (a sketch follows this list).
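A sketch of the outside-in style, assuming Jest; Cart is a hypothetical class:

```typescript
// The nesting mirrors a given/when/then scenario, so the test
// reads like the requirement it verifies.
class Cart {
  private prices: number[] = [];
  add(price: number): void {
    this.prices.push(price);
  }
  get total(): number {
    return this.prices.reduce((sum, p) => sum + p, 0);
  }
}

describe("Given an empty cart", () => {
  describe("when the user adds two items", () => {
    it("then the total is the sum of their prices", () => {
      const cart = new Cart();
      cart.add(3);
      cart.add(4);
      expect(cart.total).toBe(7);
    });
  });
});
```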

Boundary Testing

We are advocating for testing the "boundaries" of any third-party code. The boundary is "how" we use their code (hopefully they have good tests on their own code). What we want to see is whether our implementation (the gate) breaks when an upgrade occurs.

  • 3rd-Party Use Cases: in their current state, upgrades are difficult.
  • Test where and how we USE THEIR functionality.
  • BOUNDARY TESTING reduces the pain of upgrades; running the boundary tests identifies the issues.
  • We don't need to test 3rd-Party code ... they do that (a sketch follows this list).
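As a sketch, assuming Jest and the date-fns library (v2 format tokens); toDisplayDate is our hypothetical "gate" around the third-party call:

```typescript
import { format } from "date-fns";

// All application code calls our wrapper, never date-fns directly.
export function toDisplayDate(date: Date): string {
  return format(date, "yyyy-MM-dd");
}

// The boundary test pins down the behavior WE rely on. If an upgrade
// changes format tokens or output, this fails immediately -- we are not
// re-testing their library, only our use of it.
test("toDisplayDate renders ISO-style dates", () => {
  expect(toDisplayDate(new Date(2019, 5, 14))).toBe("2019-06-14");
});
```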

Conclusion

Writing effective Unit and Integration Tests begins with writing TESTABLE code. When moving toward a Continuous Integration / Continuous Delivery (CI/CD) pipeline, there needs to be solid testing to show that the code works as expected.
