One of my favorite TV shows right now is House.
In it, a brilliant but antisocial doctor and his staff try to solve
medical mysteries. If you don't watch it, you should. The writing is
great. Earlier this season, there was an episode
where a journalist hits his head and has slurred speech. The team is
trying to diagnose this and having no luck. Toward the end, House tells
them to go do the blood work again and "don't use a computer." When
they look at the blood under a microscope, the cause becomes readily
apparent. There are parasites in the blood. The patient has malaria.
Just like the computers examining the blood were not programmed to
look for parasites, so too the software that we write to test a
program is often not programmed to look for all of the potential
failures. When we write test automation, we are focused on one thing.
A test for a video renderer will make sure that the video output
is correct. What if that output causes the UI to be distorted? What if
it causes audio to glitch? The test wasn't programmed to look for
that.
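To make that concrete, here is a minimal sketch of what such a
narrowly focused automated test might look like. It is written in
Python, and every name in it (render_frame, reference_frame) is
hypothetical; the point is only that the test verifies exactly what it
was programmed to verify and nothing more.

    # A narrowly focused automated check for a video renderer. The names
    # render_frame and reference_frame are hypothetical stand-ins for the
    # real renderer and its known-good output.

    def reference_frame(n):
        # Known-good pixel data for frame n (stubbed out for this sketch).
        return [[0] * 640 for _ in range(480)]

    def render_frame(n):
        # The renderer under test (also stubbed out for this sketch).
        return [[0] * 640 for _ in range(480)]

    def test_video_output():
        for n in range(10):
            # The one thing this test was programmed to look for:
            assert render_frame(n) == reference_frame(n)
        # Nothing here inspects the rest of the UI or listens to the
        # audio, so a distorted window or a glitching soundtrack passes
        # unnoticed.

    test_video_output()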
About a year ago, I first wrote about the concept of test automation.
In that article I gave several reasons why the best testing must be a
mix of both manual testing and test automation. The idea that all tests
should be automated continues to pervade the industry. It is thought
that testers are expensive and automation is cheap. Over the long haul,
that may be true. It is especially true when a project is in
sustaining mode and you want primarily automated tests. However, when
developing a new product, relying solely on automated testing can be
disastrous. In addition to the issues I talked about in my last post,
there is a danger I didn't discuss: the danger of missing
something obvious but unforeseen.
Before diving in, let me get some definitions out of the way.
Manual testing is just that: manual. It involves a human being
interacting with the program and observing the results. Test automation
is the use of a programming language to drive the program and
automatically determine whether the right actions are taking place.
In addition to the higher cost and thus higher latency of
automated testing, it is also possible that automated testing will just
miss things. Sometimes really obvious things. Just like the computers
in House that missed the parasites, so too will test automation miss
things. Just recently I came across a bug that I'm convinced no amount
of automation would ever find. The issue was this: while playing a CD,
pressing next song caused the volume to maximize. To a human, this
jumped out. To a computer designed to test CD playback, all would seem
normal. The next song did indeed play. Even a sophisticated program
that knew what each track sounds like would probably not notice that
the volume was too high. A programmer would specifically have to go
looking for this.
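For illustration, here is a sketch of the kind of check such a CD
playback test might perform, again in Python with a made-up player
object standing in for whatever the real harness exposes. It passes as
long as the next track starts; the volume jump is invisible to it
unless someone specifically thinks to assert on it.

    # Hypothetical CD player and test; every name here is invented for
    # illustration and does not correspond to any real API.

    class FakeCdPlayer:
        def __init__(self):
            self.track = 1
            self.volume = 20

        def press_next(self):
            self.track += 1
            self.volume = 100  # the bug: skipping ahead maxes the volume

    def test_next_button_advances_track():
        player = FakeCdPlayer()
        volume_before = player.volume

        player.press_next()

        # The one behavior this test was programmed to verify:
        assert player.track == 2

        # A human notices the blast of sound immediately. The automation
        # only notices if a programmer thought to add a line like this:
        # assert player.volume == volume_before

    test_next_button_advances_track()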
There are a near-infinite number of things that can go wrong in
software. Side effects are common. Automation cannot catch all of
these; each one has to be specifically programmed for. At some point,
the returns diminish and the test suite is set in stone. Any bugs that
lie outside that circle will never be found. At least, not until the
program is shipped and people try to actually use it.
The moral of this story: never rely solely on automation. It is
costly to have people look at your product, but it is even costlier to
miss something. You have to fix it late in the process, perhaps after
you ship, which is really expensive. You lose credibility, which is even
more expensive. Deciding the mix of manual and automated testing is a
balancing act. If you go too far in either direction, you'll fall.
http://blogs.msdn.com/b/steverowe/archive/2006/03/17/553905.aspx