The developer’s primary responsibility is to ensure the delivery of software that functions correctly. In the realm of Continuous Integration and Continuous Delivery (CI/CD), testing plays a pivotal role in enhancing software quality. It serves not only as a means of verifying our solutions and preventing errors but also as a valuable tool for shaping software design, identifying architectural flaws, and fostering the creation of superior solutions.
In this session, Marit covered how, as a developer, you can get the right feedback at the right time to speed up software development, how to classify what to test, how to test it, and how tests can be automated.
About the Speaker
Marit van Dijk is a Developer Advocate at JetBrains with 20 years of experience as a software developer in different roles at different companies. She is passionate about building amazing software with amazing people and has contributed to open-source projects such as Cucumber, among others. Alongside developing software, she loves to learn new technologies and is happy to share her knowledge on programming, test automation, Cucumber/BDD, and software engineering.
Marit is a speaker at international conferences, webinars, and podcasts. She occasionally writes blog posts and has contributed to the book “97 Things Every Java Programmer Should Know” (O’Reilly Media).
If you couldn’t catch all the sessions live, don’t worry! You can access the recordings at your convenience by visiting the LambdaTest YouTube Channel.
The Importance of Testing
Marit provided an overview of testing, emphasizing its importance in ensuring software quality. She noted that there were instances when she encountered projects lacking any tests, which was less than ideal. As a developer, she empathized with the temptation to postpone testing until after completing a feature. However, she pointed out that adding tests at the end could become tedious and lead to tests that prioritized implementation details over intended behavior.
Marit emphasized the importance of building quality into software by starting to consider testing from the beginning. She stressed that achieving quality required the inclusion of testing activities throughout the development process.
She highlighted various testing activities that could be carried out at each stage of the DevOps cycle. By incorporating these activities, she and her team received the appropriate feedback at the right times, allowing them to proceed confidently in their development process.
Marit acknowledged that the required confidence level could vary depending on the context.
For instance, in her experience working on a retail platform, any system failure could lead to customers being unable to place orders or financial losses. Conversely, when dealing with medical equipment or self-driving cars, the consequences of failure were significantly more severe.
Building Quality Into Software
Marit discussed various strategies for building quality into software. One approach she mentioned involved asking questions before embarking on feature development. This process helped the team thoroughly understand the feature’s purpose and the problem it aimed to address, ultimately leading to more effective solutions.
Another technique she touched upon was Test-Driven Development (TDD), where tests were created before implementing a feature. TDD was a valuable tool for reproducing and comprehending bugs, ensuring the correct issues were addressed, and preventing regressions.
Marit stressed that even if TDD was not strictly followed, it remained crucial to consider what aspects to test before starting implementation. This entailed evaluating both the expected behavior and potential edge cases or failure scenarios.
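To make this concrete, here is a minimal TDD-style sketch using JUnit 5. The test is written before the production code and pins down both the expected behavior and one failure scenario; the DiscountCalculator class and its method are hypothetical names chosen for illustration, not code from the talk.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Written first: these tests describe what the (not yet existing)
// DiscountCalculator should do, including an edge case, so the
// implementation can be driven from them.
class DiscountCalculatorTest {

    @Test
    void appliesTenPercentDiscountToOrdersOfOneHundredOrMore() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(90.0, calculator.priceAfterDiscount(100.0), 0.001);
    }

    @Test
    void rejectsNegativePrices() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertThrows(IllegalArgumentException.class,
                () -> calculator.priceAfterDiscount(-1.0));
    }
}
```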
By integrating these practices and conducting testing activities throughout the DevOps cycle, Marit and her team were able to expedite the development of higher-quality software.
Have I Tried Enough Weird Stuff?
Marit noted that testers played a crucial role in identifying problems that could disrupt an application. She mentioned Elizabeth Zagroba’s insightful blog post titled “Have I Tried Enough Weird Stuff?” as a valuable resource for testing.
Marit also highlighted the impact of programmers’ misconceptions about elements like names, addresses, email addresses, and time zones on code quality. It was crucial to consider domain objects and rectify any misconceptions. The Big List of Naughty Strings was a useful resource for testing various inputs the system may or may not handle.
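As one way to put that idea into practice, a parameterized JUnit 5 test can feed a handful of awkward inputs (a short stand-in for the Big List of Naughty Strings) into whatever validation code handles them. The NameValidator class below is hypothetical and used only for illustration.

```java
import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class NameInputTest {

    // Hypothetical validator under test; a fuller suite could load the
    // complete Big List of Naughty Strings from a file instead.
    private final NameValidator validator = new NameValidator();

    @ParameterizedTest
    @ValueSource(strings = {
            "",                               // empty input
            "   ",                            // whitespace only
            "Ngoc-Anh O'Brien-Smith",         // hyphens and apostrophes occur in real names
            "𝔘𝔫𝔦𝔠𝔬𝔡𝔢",                        // unusual Unicode code points
            "Robert'); DROP TABLE users;--"   // classic injection attempt
    })
    void handlesWeirdInputWithoutCrashing(String input) {
        assertDoesNotThrow(() -> validator.validate(input));
    }
}
```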
What does the Testing Pyramid suggest about the necessity of UI tests?
She shared an interesting scenario where a QA engineer ordered different quantities of beer and other items at a bar, generating diverse inputs for testing various scenarios.
Marit discussed the testing pyramid, emphasizing the importance of unit tests for rapid feedback, the need for integration tests at the service level, and the lower priority of UI tests due to longer rendering and interaction times. However, she noted that testing functionality as close as possible to where it lives in the code was ideal, though some UI testing might still be necessary for specific cases.
What does the Honeycomb Model suggest about testing for microservices?
Marit introduced the Honeycomb model, which suggested focusing on integration testing for microservices. However, she acknowledged that the approach could vary depending on the project’s context, and it was essential to determine what made sense for each specific situation.
Marit acknowledged that unit tests had their flaws, but that integrating them into the development process was crucial. She underscored the importance of building testability into code from the outset, especially when dealing with poorly designed codebases.
What should developers be involved in creating and maintaining?
Marit emphasized the involvement of developers in creating and maintaining acceptance tests, which improved testability and ensured developers’ commitment to fixing and managing those tests.
Right Tools for the Job
Using the right tools for testing, including automated testing frameworks and continuous integration tools, was crucial, as emphasized by Marit.
Marit discussed various testing frameworks she had worked with, including:
JUnit and TestNG for unit testing and integration testing
Mockito and WireMock for mocking
RestAssured and Postman for API testing
Serenity BDD and Cucumber for behavior-driven development
Spring Cloud Contract and Pact for contract testing
Testcontainers for interacting with databases and other resources
JMeter, Gatling, and Locust for performance testing
Cypress and Selenium for UI testing
She noted that these were just a few examples, and the availability of frameworks might vary depending on the programming language and ecosystem.
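As a small taste of what such frameworks look like in practice, the sketch below uses REST Assured with JUnit 5 to check a hypothetical /orders endpoint on a local test instance; the URL, path, and expected response body are assumptions for illustration only.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import io.restassured.RestAssured;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class OrderApiTest {

    @BeforeAll
    static void configure() {
        // Assumed local test instance; in a real project this would point
        // at the environment started by the build.
        RestAssured.baseURI = "http://localhost:8080";
    }

    @Test
    void returnsExistingOrderById() {
        given()
                .accept("application/json")
        .when()
                .get("/orders/42")
        .then()
                .statusCode(200)
                .body("status", equalTo("SHIPPED"));
    }
}
```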
Using Tools as Intended
As Marit pointed out, it was important to use testing tools for their intended purpose. For example, while it might have been possible to use Cucumber for performance testing, it was not built specifically for that purpose and might not have had the necessary functionality. It was best to use tools that aligned with their intended purpose to get the most benefit.
Increasing Confidence
Marit emphasized that tests should increase confidence in the software being developed: when the automated tests pass, the team can be confident that the software is releasable. A tester’s primary concern, she noted, should be working with the team to increase confidence in the product being built.
Reliable Tests
As Marit mentioned during the session, flaky tests that fail intermittently are frustrating and time-consuming to debug, so it is essential to have reliable tests that provide accurate results. If a test keeps failing, it is worth either investing time in fixing it or reconsidering whether it is necessary at all.
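One common source of flakiness is asynchronous behavior tested with fixed sleeps. Below is a sketch of one possible remedy, using the Awaitility library to poll until a condition holds; the OrderService class and its methods are hypothetical.

```java
import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;

import java.time.Duration;
import org.junit.jupiter.api.Test;

class OrderProcessingTest {

    // Hypothetical asynchronous service under test.
    private final OrderService orderService = new OrderService();

    @Test
    void orderEventuallyReachesProcessedState() {
        orderService.submit("order-42");

        // Polling with an explicit timeout instead of Thread.sleep(...)
        // removes the timing guesswork that often makes async tests flaky.
        await().atMost(Duration.ofSeconds(5))
               .untilAsserted(() ->
                       assertThat(orderService.statusOf("order-42"))
                               .isEqualTo("PROCESSED"));
    }
}
```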
What are some practical tips for test automation?
Marit shared some valuable tips for test automation during the session:
Always run your tests to ensure they provide value.
If tests are slow, consider optimizing them or moving some tests down the testing pyramid.
If tests are flaky, invest time in making them more reliable.
Don’t disable or comment out tests without a good reason, and explain that reason when you do.
Never trust a test that has never failed. Tests without assertions may add to code coverage, but they won’t catch any bugs (see the sketch after this list).
As Marit pointed out, these tips were essential for effective test automation.
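That last tip is easy to see in code. In the sketch below, built around a hypothetical ShoppingCart class, the first test exercises the code and counts toward coverage but can never fail, while the second states the expected outcome and fails as soon as the behavior breaks.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class ShoppingCartTest {

    // Runs the code and bumps coverage, but asserts nothing:
    // a bug in the cart logic would go completely unnoticed.
    @Test
    void assertionFreeTestProvesNothing() {
        new ShoppingCart().addItem("beer", 2);
    }

    // States the expected outcome, so it fails the moment the behavior changes.
    @Test
    void cartCountsAddedItems() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("beer", 2); // hypothetical API: item name and quantity
        assertEquals(2, cart.itemCount());
    }
}
```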
Coding Practice
Marit demonstrated how code should be written and which coding standards should be followed. She highlighted the importance of using meaningful names for functions, adding appropriate comments, and writing code so that it remains easy for developers to read.
Ignore or Not, this is the tester’s dilemma when they come across disabled tests? Marit van Dijk recommends at least add comments that highlight ‘why’ tests were ignored from the test execution. A wise advice that is often ignored by developers & testers! It’s time to revisit…
— LambdaTest (@lambdatesting) August 24, 2023
Readability of Tests
Marit emphasized the importance of considering the readability of tests. Tests sometimes served as documentation for what the system should do, and it was crucial to describe the intended behavior rather than the current implementation. This approach facilitated an understanding of the current behavior both at present and in the future.
Furthermore, Marit highlighted the need to account for the costs associated with tests, including not only their initial development but also the resources required to run them, analyze failures, and perform debugging. She emphasized the importance of maintenance for both test code and production code.
Marit also pointed out that testing the implementation, rather than the behavior, could lead to complications when implementing changes. This issue was a reason why some individuals had reservations about unit tests, as they could become too tightly coupled to the implementation. Quick and informative test feedback was crucial to expedite debugging processes.
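A small example of what a behavior-focused, readable test might look like: the name describes the business rule rather than the implementation, so it reads as documentation and survives internal refactoring. The LoyaltyAccount class and the rule itself are hypothetical, chosen only to illustrate the naming style.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class LoyaltyPointsTest {

    // Named after the intended behavior, not the current implementation.
    @Test
    void customerEarnsOnePointPerTenEurosSpent() {
        LoyaltyAccount account = new LoyaltyAccount();
        account.recordPurchase(35.00);
        assertEquals(3, account.points());
    }
}
```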
She shared insights from her former coworkers Mark and Clark, who stressed that test code was at least as important as production code.
This perspective highlighted the necessity of maintaining test code with the same diligence as production code, although it didn’t necessarily imply that they had to adhere to all the same principles.
Marit wrapped up the session by answering a few questions posted by attendees.
Q & A Session
Q. How do you recommend maintaining a strong focus on software security during a rapid development cycle? What role does security testing play in this scenario?
Marit: There are lots of tools that can help you with security testing or with keeping your application secure. One approach is to ensure that you keep your dependencies up to date and don’t lag behind on older versions that might have known vulnerabilities.
Q. How can performance testing be interwoven with development to proactively address performance issues and prevent delays?
Marit: Performance testing can be integrated with development to proactively address performance issues and prevent delays. Tools like JMeter and Gatling can be used for Java applications. These tools can be integrated into your build process to check the performance of your application. However, it’s important to note that the performance on actual production hardware may differ. Depending on the nature of your application, full-on performance tests may be required on production hardware to ensure everything performs as needed.
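As a rough sketch of what that integration might look like, a Gatling simulation written with its Java DSL can run as part of the build and fail it when response times regress; the endpoint, load profile, and threshold below are illustrative assumptions, not values from the session.

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class ProductListingSimulation extends Simulation {

    // Assumed test environment started by the build, not production hardware.
    HttpProtocolBuilder httpProtocol = http.baseUrl("http://localhost:8080");

    ScenarioBuilder browseProducts = scenario("Browse products")
            .exec(http("list products")
                    .get("/products")
                    .check(status().is(200)));

    {
        // Ramp up 50 virtual users over 30 seconds; the assertion fails the
        // build if the maximum response time exceeds one second.
        setUp(browseProducts.injectOpen(rampUsers(50).during(30)))
                .protocols(httpProtocol)
                .assertions(global().responseTime().max().lt(1000));
    }
}
```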
Have a question? Feel free to drop it on the LambdaTest Community.