We recently went through a round of test spec reviews on my team. Having read a good number of test specs in a short period of time, I came to a realization: to write a good test case, it is imperative to know the failure condition. This is at least as important as, if not more important than, understanding what success looks like.
Too often I saw a test case described by calling out what it would do, but not listing or even implying what the failure would look like. If a case cannot fail, passing has no meaning. I might see a case such as (simplified): "call API to sort 1000 pictures by date." Great. How is the test going to determine whether the sort took place correctly?
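As a hedged illustration (the picture-sorting API and data below are invented for this sketch, not from any real product), a case that closes the loop might look like the following. The assertions at the end, not the API call itself, are what give a pass any meaning.

```python
# Minimal sketch, assuming a hypothetical sort_pictures_by_date API.
from datetime import datetime, timedelta
import random


def sort_pictures_by_date(pictures):
    """Stand-in for the API under test."""
    return sorted(pictures, key=lambda p: p["date"])


def test_sort_1000_pictures_by_date():
    base = datetime(2008, 1, 1)
    pictures = [
        {"name": f"img{i}.jpg", "date": base + timedelta(minutes=random.randrange(100_000))}
        for i in range(1000)
    ]

    result = sort_pictures_by_date(pictures)

    # Failure condition 1: pictures were lost or duplicated during the sort.
    assert sorted(p["name"] for p in result) == sorted(p["name"] for p in pictures)

    # Failure condition 2: some adjacent pair is out of date order.
    for earlier, later in zip(result, result[1:]):
        assert earlier["date"] <= later["date"], (
            f"{earlier['name']} sorted after {later['name']} despite a later date"
        )
```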
The problem is even more acute in stress or performance cases. A case such as "push buttons on this UI for 3 days" isn't likely to fail. Sure, the UI could fault, but what if it doesn't? What sort of failure is the author intending to find? Slow reaction time? Resource leaks? Drawing issues? Without calling these out, the test case could be implemented in a manner where failure will never occur. It won't be paying attention to the right state. The UI could run slow and the automation would not notice. How slow is too slow, anyway? The tester would feel comfortable that she had covered the stress scenario, but in reality the test would add no new knowledge about the quality of the product.
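A sketch of the alternative (press_button(), the response-time ceiling, and the memory budget below are all placeholders I am assuming, not values from the original post): the stress loop watches for the specific failures the author cares about instead of merely surviving the run.

```python
# Minimal sketch: the loop encodes explicit failure conditions for a stress run.
import time
import tracemalloc

MAX_RESPONSE_SECONDS = 0.5                   # "how slow is too slow" made explicit
MAX_MEMORY_GROWTH_BYTES = 50 * 1024 * 1024   # allowed leak budget over the run
RUN_SECONDS = 3 * 24 * 60 * 60               # the "3 days" from the case description


def press_button():
    """Stand-in for driving the UI under test."""
    time.sleep(0.01)


def test_ui_button_stress():
    # tracemalloc stands in for whatever process-memory counter the real harness uses.
    tracemalloc.start()
    baseline, _ = tracemalloc.get_traced_memory()
    deadline = time.monotonic() + RUN_SECONDS

    while time.monotonic() < deadline:
        start = time.monotonic()
        press_button()
        elapsed = time.monotonic() - start

        # Failure condition 1: the UI responds too slowly.
        assert elapsed <= MAX_RESPONSE_SECONDS, f"response took {elapsed:.3f}s"

        # Failure condition 2: memory grows past the leak budget.
        current, _ = tracemalloc.get_traced_memory()
        assert current - baseline <= MAX_MEMORY_GROWTH_BYTES, "possible resource leak"
```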
Another example: "Measure the CPU usage when doing X." This isn't a test case. There is no failure condition. Unless there is a threshold over which a failure is recorded, it is merely collecting data. Data without context is of little value.
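For example (do_x() and the 0.5 CPU-second budget are invented for illustration), the difference between data collection and a test case is the final assertion:

```python
# Minimal sketch: a CPU-cost measurement turned into a test by adding a threshold.
import time


def do_x():
    """Stand-in for the operation whose CPU cost is being measured."""
    sum(i * i for i in range(1_000_000))


def test_cpu_cost_of_x():
    cpu_before = time.process_time()   # CPU seconds used by this process so far
    do_x()
    cpu_used = time.process_time() - cpu_before

    # Failure condition: X burns more than 0.5 CPU-seconds.
    # Without this threshold, the measurement is just data with no context.
    assert cpu_used <= 0.5, f"X used {cpu_used:.2f} CPU-seconds, over the 0.5s budget"
```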
When coming up with test cases, whether writing them down in a test spec or devising them on the fly while writing or executing them, consider the failure condition. Knowing what success looks like is insufficient. It must also be possible to enumerate what failure looks like. Only when testing for the failure condition and not finding it does a passing result gain value.
http://blogs.msdn.com/b/steverowe/archive/2008/06/04/test-for-failure-not-success.aspx