I think there is a fundamental problem in our industry’s approach to test case design. It may seem obvious that software testing is all about verifying that your application does what it is intended to do, or what I call ‘Should-Do’ behavior. However, in my experience, most testing is really about verifying that the running application is not broken. This is especially true in automated testing.
Designing tests of the intended behavior can only be done well if the business requirements are clear and unambiguous, which is often a challenge. More on that in a future post.
This approach of testing to requirements is somewhat counter to how testing has been done since the beginning of the computer age. Most testing in the past, and sadly to this day, does not start until the system to be tested is mostly built and running.
Tests are traditionally designed by exercising the running system, learning what it does, and then writing tests that follow the behavior observed. Testers then try different combinations to see if the system breaks or does the same thing it did last time. If a test doesn’t break, or the system behaves the same as before: PASS. Far too often, testers assume the developers and configurators have read, understood, and implemented the application based on the requirements at hand. I call this testing the Does-Do behavior, and it can cause serious quality issues if the requirements were not clear or development missed some of the intended behaviors.
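To make the distinction concrete, here is a minimal sketch of a Does-Do style test in Python. Everything in it is hypothetical: calculate_discount stands in for the system under test, and the expected value of 5.00 was simply captured from a run of the system rather than derived from any requirement.

```python
# A minimal sketch of a Does-Do (characterization) test.
# calculate_discount is a hypothetical function standing in for the
# system under test; the expected value below was captured by
# observing the running system, not derived from a requirement.

def calculate_discount(order_total: float, member: bool) -> float:
    """Hypothetical as-built behavior: 5% member discount on $100+ orders."""
    if member and order_total >= 100.00:
        return order_total * 0.05
    return 0.00

def test_discount_matches_last_observed_run():
    # PASS here means only "it behaves the same as it did last time".
    # If the original implementation misread the requirement, this
    # test happily locks in the wrong behavior.
    assert calculate_discount(order_total=100.00, member=True) == 5.00
```

A test like this only asserts that the system still does what it did yesterday; if yesterday’s behavior was wrong, the test enshrines the bug.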
A better approach to testing is centered around designing tests directly from requirements, which results in testing of the intended behavior, or Should-Do. Test design can and should be done before or in parallel with development. My premise is that if the requirements are good enough to begin code and configuration, they are good enough to begin test design.
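Here is the same hypothetical example reworked as a Should-Do test. Assume a made-up requirement that reads: “Members receive a 10% discount on orders of $100 or more.” Every expected value comes straight from that sentence, so the tests can be written before calculate_discount exists.

```python
# A minimal sketch of Should-Do test design against the hypothetical
# requirement: "Members receive a 10% discount on orders of $100 or
# more." The expected values are taken from the requirement itself,
# not from observing the running system, so these tests can be
# written before calculate_discount is implemented.

def test_member_discount_at_threshold():
    # $100.00 member order: requirement says 10% off.
    assert calculate_discount(order_total=100.00, member=True) == 10.00

def test_no_discount_just_below_threshold():
    # $99.99 member order: below the threshold, so no discount.
    assert calculate_discount(order_total=99.99, member=True) == 0.00

def test_no_discount_for_non_members():
    # Non-member order of any size earns no discount.
    assert calculate_discount(order_total=150.00, member=False) == 0.00
```

Run against the hypothetical 5% implementation sketched earlier, the first test fails immediately, surfacing exactly the deviation from intended behavior that the Does-Do test silently accepted.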
There are two key results from this approach to test design.
Better Requirements – If the requirements are not clear enough to design tests, we need to ask questions and get clarification from the business. Finding and addressing ambiguities in requirements will help prevent defects in the delivered system. It will also provide the same clarifications to development, helping the delivered functionality better match the clarified Should-Do behavior.
Better Testing – Should-Do test design avoids the pitfalls of traditional Does-Do testing. Does-Do test design may inadvertently enshrine misunderstood requirements, missing functionality, or plain coding bugs that were not intended but seem to work correctly anyway. Tests designed directly from clear requirements will find those deviations from intended behavior.
Your application will be better if you think more about Should-Do vs. Does-Do and recognize the difference.
I encourage Business Analysts to think about writing testable requirements that can be understood in a Should-Do context. This will have a very positive impact on the success of the implementation.
I encourage Test Analysts to think about Should-Do vs. Does-Do design and to recognize ambiguity in requirements and seek clarification.