Should testing drive development or development drive testing?
Almost no-one disagrees with the idea of testing, writes David Norfolk, but many people fail to follow an uncompromising test-centric process. Recently, I had the chance to ask Richard Collins, Development Strategist at a specialist software vendor, why he believes that test-driven development is the way to build better software. Interestingly, most of Richard's ideas probably feature in Comp Sci courses, or did when I did one; which isn't to say Comp Sci grads remember those parts. However, it is good to have it confirmed that "good practice" isn't just dull theory used to pass exams, but actually helps to keep a real software company in business.
Over to you, Richard:
Well, would you trust a Comp Sci grad?
Developers are special; special in terms of being outside the usual economic boundaries that constrain the rest of us. I'm thinking particularly of software engineers with a Computer Science background (as opposed to Physics majors, who seem to understand which side of a piece of buttered bread to lick).
Working with Computer Science grads in a variety of areas and sectors, I've found that they can be intensely competitive [they can be pretty bright too – Ed]; however, when it comes to competition at company level they are liable to hand over the crown jewels to even a half-witted industrial spy. There's not a lot of 'Them and Us' in the average developer mindset [I might disagree with that in part; sometimes, "them" is anybody in the business management hierarchy and "us", as Richard goes on to point out, is anyone in IT; but that situation is often a symptom of poor management – Ed].
If you work in the games sector or in building office desktop applications, say, then you may have a strong understanding that if your output is buggy or non-intuitive to use, it simply won't sell.
But where the end user is a programmer, someone who needs a tool to make a tool, the temptation on the vendors' part is often to assume, "Hey we're all smart guys. You're going to have some fun getting this sucker to work properly. In fact I wouldn't want to patronise you by assuming that you'd need any help in making it work".
Paying customers
Why should software developers be treated as anything other than paying customers? If they're writing commercial code then they have some of the toughest deadlines around, with on-time bonuses attached. Is there something funny with the colour of their money?
The main excuse for not treating them as "real" customers is cost. If test engineering is left out, for example – and a great way to leave it out is to call it QA – then monthly salary bills can be almost halved [I'm not actually sure that this approach is unique to software engineering tools vendors – Ed]. Many companies enjoy this apparent 'cost benefit', only to find it's no benefit at all. In fact, it turns up as an unquantifiable cost: poor reputation, which is likely to cost you more in the long run than the "overheads" of rigorous test engineering.
Software tools are generally more subject to word-of-mouth (excited recommendation and its opposite, dire warning) than any other branch of IT. It's partly the collegiate way of thinking amongst developers – they have a need to feel part of a cutting-edge peer group and therefore to share information – and it's partly the flipside of this: the relative isolation and lonely nature of knocking out code in a language no-one speaks aloud.
So sending a colleague a link with "Check this out, it's awesome!" is the most natural way to do three things in one: a) make a friend, b) solicit a reciprocal recommendation and, last but not least, c) assert 'insider' status.
Reputation risk
The way for a vendor to ensure that its reputation is (at the very least) untarnished on this word-of-mouth circuit is for it to employ sufficient test engineering talent that customers don't end up feeling like they're doing the vendor's testing for it. Irrespective of the function and ingenuity of a "cool tool", if it hasn't been crashed at high speed a thousand times to see which bits fly off or get jammed, then customers are never going to develop a thoughtless dependence on the product. Being "taken for granted" is, after all, just about the very best a brand can achieve.
However, moving up a gear from 'at least it doesn't crash your machine' and into the positive recommendation space requires more than just keeping an eye on the bugs. It means that at the very beginning, at the back-of-envelope stage of any project, someone important/respected has asked the question: "How do we design this so that bugs, which are probably inevitable, at least have nowhere to hide? Anything less wouldn't be fair to our customers. How do we design software so that it is completely transparent?" It is this key design question that distinguishes the small software vendors which rely on a reputation for reliability and resilience to compete with the big players in this game.
Jonathan Watts is a Lead Test Engineer at Red Gate and has been instrumental in ensuring that its developers design nothing that his test team can't get full access to at any time in the development process. It's a test-driven process, which is, basically, a local, closed-circuit equivalent of the Open Source mantra: 'release early and often'.
One tester per developer…
With a ratio of one test engineer for every development engineer, it's hard for design flaws to stay flawed. Particularly as, from 'Day One' of the development cycle, Watts and his colleagues submit all new builds to an overnight hosedown of test data.
But it's more than process design – it's about how software comes apart. When a tool performs a relatively straightforward function – comparing two databases, say – the temptation is to cut straight to the chase and build the logic into the UI, a single testable entity. While this seems sensible and economical, the downside is that it's only testable after the whole thing is finished. Imagine trying to build a motorbike without being able to test that the parts are sound before you bolt them in place.
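To make the point concrete, here is a minimal sketch of that separation. It is illustrative only – written in Java for convenience, with hypothetical class and method names, not taken from Red Gate's (.NET-based) codebase – but it shows the comparison logic living in a plain class with no UI dependencies, so it can be tested on its own from day one:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class SchemaComparer {

    // A single difference between the two schemas.
    public record Difference(String tableName, String detail) {}

    // Compares two schemas, each represented here as a simple map of
    // table name to column definition. Pure logic: no UI, no live
    // database connection, trivially testable in isolation.
    public List<Difference> compare(Map<String, String> left,
                                    Map<String, String> right) {
        List<Difference> diffs = new ArrayList<>();
        for (Map.Entry<String, String> e : left.entrySet()) {
            String other = right.get(e.getKey());
            if (other == null) {
                diffs.add(new Difference(e.getKey(), "missing on right"));
            } else if (!other.equals(e.getValue())) {
                diffs.add(new Difference(e.getKey(), "definition differs"));
            }
        }
        for (String table : right.keySet()) {
            if (!left.containsKey(table)) {
                diffs.add(new Difference(table, "missing on left"));
            }
        }
        return diffs;
    }
}

The UI then becomes a thin layer that merely displays the list of differences; the 'motorbike parts' can be crash-tested long before they are bolted into place.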
This process of 'relentless testing' also stretches beyond the development and test engineers. A team of usability engineers, who work with customers and designers from the earliest phase of developing a new tool through to the polishing of the final button, is also part of the process. They ensure that, from the splash-screen onwards, the tool is completely self-explanatory and needs no more thinking about than a pair of scissors.
This relentless focus on testing probably means more time has to be spent on making sure that internal team relationships are working as well as they should be. As mentioned earlier, a software developer is quite often prone to thinking in a rather "community of geeks" kind of way, and has a hard time seeing the commercial wood for the trees of fascinating code.
You only have to compare software developers with test engineers and you see the difference instantly. The test engineer sees everything in terms of 'Them and Us'; 'Them' being the soft-hearted software developers and 'Us' being the hard-bitten bastids whose job it is to make them cry. And cry they do. I would too, if I'd spent days coding up a spiffy new interface or regular expression, only to have the person sitting at the desk across from me break it in the first five minutes [I might even be driven to build it right in the first place – Ed]. It must be like having the Cousin from Hell come visiting on Christmas morning and grab your lovingly assembled F-117, with an evil glint in his eye; except that, in the test engineer's case, s/he also arrives with a lovingly packed toolbox of infernal instruments to help speed the disassembly along.
Along with the CruiseControl continuous-integration build rig and the equally widely used NUnit framework (used for automated regression testing), all the testers have their own preferred pliers, callipers and drills to hand: hard-wearing 'building site' tools with very specific jobs. The main 'using point' (and it's curious how often this is different from the selling point) is that the tools work first time and don't stop. Michelle Taylor, for example, is one of Jonathan Watts' fellow test engineers and regularly uses Xenu's Link Sleuth, PassMark's TestLog and StudioSpell, an add-in for Visual Studio from Keyoti. The sight of any of these can bring a developer out in hives.
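For readers who haven't met NUnit, the flavour of such automated regression tests is easy to convey. The following is a minimal sketch in the same style, written with JUnit (NUnit's Java relative) against the hypothetical SchemaComparer above – illustrative names, not anyone's actual test suite:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.Map;
import org.junit.Test;

public class SchemaComparerTest {

    @Test
    public void identicalSchemasProduceNoDifferences() {
        Map<String, String> schema = Map.of("orders", "id INT, total DECIMAL");
        assertTrue(new SchemaComparer().compare(schema, schema).isEmpty());
    }

    @Test
    public void missingTableIsReported() {
        Map<String, String> left = Map.of("orders", "id INT");
        Map<String, String> right = Map.of();
        var diffs = new SchemaComparer().compare(left, right);
        assertEquals(1, diffs.size());
        assertEquals("orders", diffs.get(0).tableName());
    }
}

Hooked into the continuous-integration rig, tests like these run on every build, so a regression shows up overnight rather than in a customer's inbox.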
Happy endings
There is a happy ending: typically, when you put these two attitudes in close proximity, the good money drives out the bad and a shared concern for durability prevails. Developers quickly learn to build stuff that defeats their colleagues' meanest efforts, and a healthy quality arms race ensues [well, it does if management understands the people issues involved and sets suitable goals and rewards – Ed].
In many software companies, after unit testing, nothing happens until system testing begins. But there's a real opportunity to test earlier if you set things up as I recommend, because all products have an API that allows access for functionality testing without the UI being in place; and the earlier issues are found, the better the payback.
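As a sketch of what 'functionality testing without the UI' can look like in practice – again using the hypothetical SchemaComparer rather than any real product's API – a tester can drive the whole feature from a small headless harness long before a single screen exists:

import java.util.Map;

public class HeadlessFunctionalCheck {
    public static void main(String[] args) {
        // Two fixture schemas standing in for real databases.
        Map<String, String> staging = Map.of(
                "orders", "id INT, total DECIMAL",
                "customers", "id INT, name VARCHAR(100)");
        Map<String, String> production = Map.of(
                "orders", "id INT, total DECIMAL");

        // Exercise the product through its API alone: no UI needed.
        var diffs = new SchemaComparer().compare(staging, production);
        for (var d : diffs) {
            System.out.println(d.tableName() + ": " + d.detail());
        }
        // A non-zero exit code fails the nightly build.
        System.exit(diffs.isEmpty() ? 0 : 1);
    }
}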
It all feeds into the bottom line. It's well known that a problem fixed at the requirements or design stage costs far less than the same problem found later on. Investment at each stage turns into payback: a simpler, more usable product that is easier to test and therefore less likely to let customers down, which means customers recommend it to colleagues, and a virtuous circle starts to turn.
http://www.theregister.co.uk/2007/01/08/test_test_red_gate/