I did not get any feedback on the approach I used to write a Data-Driven Test that depends on the output of another Data-Driven Test, so I decided to do a self-evaluation against the excellent xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.
Goals of Test Automation
First, I looked at the overlap between my test requirements and Meszaros's Chapter 3: Goals of Test Automation.
- Goal: Separation of Concerns. The testing of web service status was a separate concern from the verification of the properties of the XML payload, so each concern had its own test (Req 1.1, 1.2).
- Goal: Simple Tests. Tests should be small and strive to Verify One Condition per Test, so each property was tested as a separate test (Req 1.2); a minimal sketch follows this list.
- Goal: Defect Localization. Keeping the tests simple lets you quickly pinpoint the bug from which test fails, what Meszaros calls Defect Localization. Req 2.2 was that each web service failure and each property failure was separately visible on the test report.
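To make the first two goals concrete, here is a minimal TestNG sketch of the shape of the tests: the status check and each property check live in separate test methods, each verifying a single condition. The class, method, and element names are illustrative stand-ins, not the actual test code.

```java
import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertTrue;

import org.testng.annotations.Test;

public class PayloadGoalsSketch {

    // Separate concern: only the web service status is verified here
    @Test
    public void testWebserviceStatus() {
        int statusCode = 200; // stand-in for the real service call
        assertEquals(statusCode, 200, "web service status");
    }

    // Separate concern, one condition per test: a failure points at exactly one property
    @Test
    public void testCustomerIdProperty() {
        String xmlPayload = "<order><customerId>42</customerId></order>"; // stand-in payload
        assertTrue(xmlPayload.contains("<customerId>"), "customerId element present");
    }
}
```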
Test Smells
Next, I examined whether my approach had any Test Smells.
A smell is a symptom of a problem. A smell doesn’t necessarily tell us what is wrong, because a particular smell may originate from any of several sources. – Gerard Meszaros
Fortunately, it did not have the most common behavior smell, Fragile Test, nor any of the following code smells:
- Conditional Test Logic
- Hard-Coded Test Data
- Test Code Duplication
- Fragile Fixtures
- Hard-to-Test Code
It also did not suffer from being a Slow Test because, regardless of the number of properties that were tested, each web service was only called once. However, the most frequently used solution to a Slow Test, a Shared Fixture, could itself result in behavior smells, e.g. Erratic Tests such as Lonely Tests and Interacting Tests.
My tests were Interacting Tests, where one test depends in some way on the outcome of another test, but this was by design. Interacting Tests are generally frowned upon because most test frameworks do not support them. Meszaros does note that
TestNG, however, promotes interdependencies between tests by providing features to manage the dependencies
The testWebserviceProperties test was also a Lonely Test, a test that cannot be run by itself because it depends on something in a Shared Fixture, but this too was by design – I did not want to test a property of a web service with a bad status code.
In my tests the Shared Fixture, the XMLPayload, was declared final, and hence an Immutable Shared Fixture, which prevents test interaction.
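A sketch of how these pieces fit together in TestNG is below. The service URL, the element name, and the use of java.net.http are my stand-ins, and the final AtomicReference holder only approximates the final XMLPayload field described above; it is not the actual test code.

```java
import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertNotNull;
import static org.testng.Assert.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.atomic.AtomicReference;

import org.testng.annotations.Test;

public class CustomerServiceTest {

    // Shared Fixture: the holder is final, so later tests can read it but never replace it
    private final AtomicReference<String> xmlPayload = new AtomicReference<>();

    @Test
    public void testWebserviceStatus() throws Exception {
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("https://example.org/customers")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        assertEquals(response.statusCode(), 200, "web service status");
        xmlPayload.set(response.body()); // the dependent test below relies on this
    }

    // Lonely Test by design: TestNG skips it, rather than failing it, when the status test fails
    @Test(dependsOnMethods = "testWebserviceStatus")
    public void testWebserviceProperties() {
        String payload = xmlPayload.get();
        assertNotNull(payload, "payload captured by the status test");
        assertTrue(payload.contains("<customerId>"), "customerId element present");
    }
}
```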
Other things that I learned
From Ch. 10, Result Verification, I learned that I was using State Verification but not Behavior Verification, e.g. with a Test Spy or Mock Object.
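For contrast, here is a small sketch of the two verification styles. My tests used only State Verification; the Mockito-based Behavior Verification test and the AuditLog collaborator are purely hypothetical, added just to illustrate the distinction.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.testng.Assert.assertTrue;

import org.testng.annotations.Test;

public class VerificationStyleSketch {

    // Hypothetical collaborator, only here to illustrate Behavior Verification
    interface AuditLog {
        void record(String serviceName, int statusCode);
    }

    @Test
    public void stateVerification() {
        // State Verification: assert on the output the system under test produced
        String payload = "<order><customerId>42</customerId></order>";
        assertTrue(payload.contains("<customerId>"), "customerId element present");
    }

    @Test
    public void behaviorVerification() {
        // Behavior Verification: assert on the calls made to a Mock Object
        AuditLog log = mock(AuditLog.class);
        log.record("customers", 200); // stands in for the system under test exercising the collaborator
        verify(log).record("customers", 200);
    }
}
```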
From Ch. 18, Test Strategy Patterns, I learned more about Data-Driven Test:
We store all the information needed for each test in a data file and write an interpreter that reads the file and executes the tests
This was a critical requirement because the intent was for testers to be able to easily add new web services and properties without involving developers.
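A sketch of what such a Data-Driven Test can look like with a TestNG DataProvider is below. The file path, the two-column comma-separated format, and the stubbed payload are assumptions for illustration; the real data file format and payload lookup differ.

```java
import static org.testng.Assert.assertTrue;

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataDrivenPropertyTest {

    // Hypothetical data file: one line per check, e.g. "customers,<customerId>"
    @DataProvider(name = "propertyChecks")
    public Object[][] propertyChecks() throws Exception {
        List<String> lines = Files.readAllLines(Paths.get("src/test/resources/property-checks.csv"));
        return lines.stream()
                .map(line -> (Object[]) line.split(","))
                .toArray(Object[][]::new);
    }

    // Testers add new rows to the data file; no developer involvement needed for new checks
    @Test(dataProvider = "propertyChecks")
    public void testWebserviceProperty(String serviceName, String expectedElement) {
        // In the real tests the payload comes from the Shared Fixture; a stub stands in here
        String payload = "<order><customerId>42</customerId></order>";
        assertTrue(payload.contains(expectedElement),
                serviceName + " payload contains " + expectedElement);
    }
}
```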
From Ch. 20, Fixture Setup Patterns, I learned that I was using a Chained Test:
We let the other tests in a test suite set up the test fixture.
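In TestNG terms, the chain can also be expressed with groups, as in the simplified sketch below: the property test runs only after the tests in the "status" group have set up the Shared Fixture. The group names and the static map are stand-ins, not the actual implementation.

```java
import static org.testng.Assert.assertNotNull;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.testng.annotations.Test;

public class ChainedPropertyTest {

    // Shared Fixture populated by the earlier link in the chain (simplified to a static map)
    private static final Map<String, String> payloads = new ConcurrentHashMap<>();

    @Test(groups = "status")
    public void testWebserviceStatus() {
        // stand-in for calling the web service and checking its status
        payloads.put("customers", "<order><customerId>42</customerId></order>");
    }

    // Chained Test: runs only after every test in the "status" group has passed
    @Test(dependsOnGroups = "status")
    public void testWebserviceProperties() {
        assertNotNull(payloads.get("customers"), "payload set up by the status tests");
    }
}
```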
Conclusion
Overall, I could not find any egregious issues with the test approach I had taken (but if you do, please let me know).
More importantly, I got a chance to re-read parts of xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros. It allowed me to describe my approach using the pattern names from the book, making it easier for others to understand.
It is an excellent reference book that I recommend to anyone interested in testing, from testers to developers to project managers and even IT executives.
References/Further Reading
Meszaros, Gerard. xUnit Test Patterns: Refactoring Test Code. Addison-Wesley, 2007.