Joel Spolsky, of Joel On Software, has a series on management styles. He details three styles of management: Command and Control, Econ 101, and Identity.
Command and Control is where management dictates everything that happens. Econ 101 is where management uses economic incentives to get the right behavior. Identity is getting people to do what you want by making them like you, personally. These are all straw men; no one really uses only one style, but they are useful to understand and help to shape a good mixture of styles. The command and control model, at its extreme, is micromanagement. It doesn't allow for any innovation in the leaf nodes. The polar opposite is Econ 101, where management identifies some outcomes and pays people to achieve them. It does not, however, give much instruction on how to achieve them. Identity is getting people to like you and the organization so they'll do what is best. This has a similar pitfall to Econ 101 in that it doesn't necessitate helping employees.
The best solution is a mixture. Some people are motivated by money, and rewarding people economically for contributing to the company can help them. That cannot be the only motivation, however. Internal motivation can be even more powerful. If people like you, they'll give you their best. If people are excited about the company's direction, they'll give it their all. A little command and control can be useful too; it helps to give the team more than just direction. A very experienced team can get by with direction alone, but most teams have inexperienced people who will benefit from being told how to get things done. The important thing is to make sure the instruction is done with an eye to making an independent worker rather than just getting the immediate work done. One of the biggest jobs of a manager is to grow his or her team. Helping the team mature will help the individuals on it, but it will also improve the output of the whole team.
http://blogs.msdn.com/b/steverowe/archive/2006/08/26/725560.aspx
Monday, December 31
Thursday, December 27
The Identity Management Method
When you’re trying to get a team all working in the same direction, we’ve seen that Command and Control management and Econ 101 management both fail pretty badly in high-tech, knowledge-oriented teams.
That leaves a technique that I’m going to have to call The Identity Method. The goal here is to manage by making people identify with the goals you’re trying to achieve. That’s a lot trickier than the other methods, and it requires some serious interpersonal skills to pull off. But if you do it right, it works better than any other method.
The problem with Econ 101 management is that it subverts intrinsic motivation. The Identity Method is a way to create intrinsic motivation.
To be an Identity Method manager, you have to summon all the social skills you have to make your employees identify with the goals of the organization, so that they are highly motivated, then you need to give them the information they need to steer in the right direction.
How do you make people identify with the organization?
It helps if the organizational goals are virtuous, or perceived as virtuous, in some way. Apple creates almost fanatic identification, almost entirely through a narrative that started with a single Super Bowl ad in 1984: we are against totalitarianism. Doesn’t seem like a particularly bold position to take, but it worked. Here at Fog Creek, we stand bravely in opposition to killing kittens. Yaaaay!
A method I’m pretty comfortable with is eating together. I’ve always made a point of eating lunch with my coworkers, and at Fog Creek we serve catered lunches for the whole team every day and eat together at one big table. It’s hard to overstate what a big impact this has on making the company feel like a family, in the good way, I think. In six years, nobody has ever quit.
I’m probably going to freak out some of our summer interns by admitting this, but one of the goals of our internship program is to make people identify as New Yorkers, so they’re more comfortable with the idea of moving here after college and working for us full-time. We do this through a pretty exhausting list of extra-curricular summer activities: two Broadway shows, a trip to the Top of the Rock, a boat ride around Manhattan, a Yankees game, an open house so they can meet more New Yorkers, and a trip to a museum; Michael and I host parties in our apartments, both as a way of welcoming the interns but also as a way for interns to visualize living in an apartment in New York, not just the dorm we stuck them in.
In general, Identity Management requires you to create a cohesive, jelled team that feels like a family, so that people have a sense of loyalty and commitment to their coworkers.
The second part, though, is to give people the information they need to steer the organization in the right direction.
Earlier today Brett came into my office to discuss ship dates for FogBugz 6.0. He was sort of leaning towards April 2007; I was sort of leaning towards December 2006. Of course, if we shipped in April, we would have time to do a lot more polishing, and improve a lot of areas of the product; if we shipped in December, we’d probably have to cut a bunch of nice new features.
What I explained to Brett, though, is that we want to hire six new people in the spring, and the chances that we’ll be able to afford them without FogBugz 6.0 are much smaller. So the way I concluded the meeting with Brett was to make him understand the exact financial motivations I have for shipping earlier, and now that he knows that, I’m confident he’ll make the right decision... not necessarily my decision. Maybe we’ll have a big upswing in sales without FogBugz 6.0, and now that Brett understands the basic financial parameters, he’ll realize that maybe that means we can hold 6.0 for a few more features. The point being that by sharing information, I can get Brett to do the right thing for Fog Creek even if circumstances change. If I tried to push him around by offering him a cash reward for every day before April that he ships, his incentive would be to dump the existing buggy development build on the public tonight. If I tried to push him around using Command and Control management by ordering him to ship bug free code on time, dammit, he might do it, but he’d hate his job and leave.
Conclusion
There are as many different styles of management as there are managers. I’ve identified three major styles: two easy, dysfunctional styles and one hard, functional style, but the truth is that many development shops manage in more of an ad-hoc, “whatever works” way that may change from day to day or person to person.
http://www.joelonsoftware.com/items/2006/08/10.html
Tuesday, December 25
Software Testing Certifications
As a test engineer or QA engineer, it is valuable to hold at least one software testing certification. A certification helps broaden your software testing knowledge, and it can also help testing professionals earn promotions in their field at large multinational companies.
Here is a list of some important software testing certifications, each with a short description of why one might pursue it:
CQA-Certified Quality Analyst:
For Professional level of competence in the principles and practices of quality assurance in the IT profession.
CSTE-Certified Software Test Engineer:
Intended to establish standards for initial qualification and provide direction for the testing function through an aggressive educational program.
CSTP- Certified Software Test Professional:
To teach individuals from different disciplines sound and effective testing techniques and methods and to certify them as Software Testing Professionals.
CQE- Quality Engineer Certificate:
CQE is designed for those who understand the principles of product and service quality evaluation and control.
Quality Manager Certification:
For those who understand quality principles and standards in relation to organization and human resource management
CSQE- Certified Software Quality Engineer:
CSQE is designed for those who have a comprehensive understanding of software quality development and implementation; have a thorough understanding of software inspection and testing, verification, and validation; and can implement software development and maintenance processes and methods
CQIA- Quality Improvement Associate Certificate:
CQIA is designed to assess basic knowledge of quality tools and their uses by individuals who are involved in quality improvement projects, but do not necessarily come from traditional quality areas.
ASQ (American Society for Quality) Certified Software Quality Engineer (CSQE)
ASQ Quality Improvement Associate
SSBB (Six Sigma Black Belt) certification
ISEB qualification in software testing
ISTQB Certified Tester (administered by the International Software Quality Institute)
Mercury tools certifications
CSQA (Certified Software Quality Analyst)
Rational certifications
Segue tools certifications
International Certifications:
1) Foundation Level:
Know more at: http://www.istqb.org/fileadmin/media/SyllabusFoundation.pdf
2) Advanced Level:
Know more at: http://www.istqb.org/fileadmin/media/SyllabusAdvanced.pdf
QA Institute certifications:
1) Certified Software Tester (CSTE):
Know more at: http://www.softwarecertifications.org/qai_cste.htm
2) Certified Software Quality Analyst (CSQA):
Know more at: http://www.softwarecertifications.org/qai_cqa.htm
3) Certified Software Project Manager (CSPM):
Know more at: http://www.softwarecertifications.org/qai_cspm.htm
International Institute of Software Testing:
1) Certified Software Test Professional (CSTP):
Know more at: http://www.testinginstitute.com/cstp.php
2) Certified Test Manager (CTM):
Know more at: http://www.testinginstitute.com/ctm.php
http://www.softwaretestinghelp.com/software-testing-certifications-2/
Monday, December 24
Software testing FAQ
What is software testing? A basic definition to start with:
Software testing is the process used to help identify the correctness, completeness, security, and quality of developed computer software.
Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate.
This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behaviour of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of software quality assurance, which encompasses all business process areas, not just testing.
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following routine procedure. One definition of testing is “the process of questioning a product in order to evaluate it”, where the “questions” are operations the tester attempts to execute with the product, and the product answers with its behavior in reaction to the probing of the tester.
Although most of the intellectual processes of testing are nearly identical to that of review or inspection, the word testing is connoted to mean the dynamic analysis of the product—putting the product through its paces. The quality of the application can, and normally does, vary widely from system to system but some of the common quality attributes include capability, reliability, efficiency, portability, maintainability, compatibility and usability. A good test is sometimes described as one which reveals an error; however, more recent thinking suggests that a good test is one which reveals information of interest to someone who matters within the project community.
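To make the idea of "executing a program with the intent of finding errors" concrete, here is a minimal sketch in Python. The price parser and its specification are hypothetical, invented purely for illustration: the test "questions" the product with inputs and compares its behavior against the spec.

```python
# Hypothetical product under test. Spec: accept a price string like
# "$1,234.50" and return the amount in cents as an integer.
def parse_price(text):
    # Naive implementation that forgets to strip thousands separators.
    return int(float(text.lstrip("$")) * 100)

def test_parse_price():
    # The obvious case passes, so the code "looks" correct...
    assert parse_price("$12.50") == 1250
    # ...but probing a less obvious input reveals an error: the spec
    # says "$1,234.50" should yield 123450, yet the comma makes
    # float() raise ValueError instead.
    try:
        result = parse_price("$1,234.50")
        assert result == 123450, "wrong amount: %r" % result
    except ValueError:
        print("bug found: thousands separator not handled")

test_parse_price()
```

The first assertion alone would give false confidence; it is the deliberately probing second input that reveals quality-related information about the product, which is exactly the investigative attitude the definition above describes.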
http://www.softwaretestinghelp.com/software-testing-faq/
The Econ 101 Management Method
Joke: A poor Jew lived in the shtetl in 19th
century Russia. A Cossack comes up to him on horseback.
“What are you feeding that chicken?” asks the Cossack.
“Just some bread crumbs,” replies the Jew.
“How dare you feed a fine Russian chicken such lowly food!” says the Cossack, and hits the Jew with a stick.
The next day the Cossack comes back. “Now what are you feeding that chicken?” asks the Cossack.
“Well, I give him three courses. There’s freshly cut grass, fine sturgeon caviar, and a small bowl of heavy cream sprinkled with imported French chocolate truffles for dessert.”
“Idiot!” says the Cossack, beating the Jew with a stick. “How dare you waste good food on a lowly chicken!”
On the third day, the Cossack again asks, “What are you feeding that chicken?”
“Nothing!” pleads the Jew. “I give him a kopeck and he buys whatever he wants.”
(pause for laughter)
(no?)
(ba dum dum)
(still no laughter)
(oh well).
I use the term “Econ 101” a little bit tongue-in-cheek. For my non-American readers: most US college departments have a course numbered “101” which is the basic introductory course for any field. Econ 101 management is the style used by people who know just enough economic theory to be dangerous.
The Econ 101 manager assumes that everyone is motivated by money, and that the best way to get people to do what you want them to do is to give them financial rewards and punishments to create incentives.
For example, AOL might pay their call-center people for every customer they persuade not to cancel their subscription.
A software company might give bonuses to programmers who create the fewest bugs.
It works about as well as giving your chickens money to buy their own food.
One big problem is that it replaces intrinsic motivation with extrinsic motivation.
Intrinsic motivation is your own, natural desire to do things well. People usually start out with a lot of intrinsic motivation. They want to do a good job. They want to help people understand that it’s in their best interest to keep paying AOL $24 a month. They want to write less-buggy code.
Extrinsic motivation is a motivation that comes from outside, like when you’re paid to achieve something specific.
Intrinsic motivation is much stronger than extrinsic motivation. People work much harder at things that they actually want to do. That’s not very controversial.
But when you offer people money to do things that they wanted to do, anyway, they suffer from something called the Overjustification Effect. “I must be writing bug-free code because I like the money I get for it,” they think, and the extrinsic motivation displaces the intrinsic motivation. Since extrinsic motivation is a much weaker effect, the net result is that you’ve actually reduced their desire to do a good job. When you stop paying the bonus, or when they decide they don’t care that much about the money, they no longer think that they care about bug free code.
Another big problem with Econ 101 management is the tendency for people to find local maxima. They’ll find some way to optimize for the specific thing you’re paying them, without actually achieving the thing you really want.
So for example your customer retention specialist, in his desire to earn the bonus associated with maintaining a customer, will drive the customer so crazy that the New York Times will run a big front page story about how nasty your customer “service” is. Although his behavior maximizes the thing you’re paying him for (customer retention) it doesn’t maximize the thing you really care about (profit). And then you try to reward him for the company profit, say, by giving him 13 shares of stock, and you realize that it’s not really something he controls, so it’s a waste of time.
When you use Econ 101 management, you’re encouraging developers to game the system.
Suppose you decide to pay a bonus to the developer with the fewest bugs. Now every time a tester tries to report a bug, it becomes a big argument, and usually the developer convinces the tester that it’s not really a bug. Or the tester agrees to report the bug “informally” to the developer before writing it up in the bug tracking system. And now nobody uses the bug tracking system. The bug count goes way down, but the number of bugs stays the same.
Developers are clever this way. Whatever you try to measure, they’ll find a way to maximize, and you’ll never quite get what you want.
Robert Austin, in his book Measuring and Managing Performance in Organizations, says there are two phases when you introduce new performance metrics. At first, you actually get what you wanted, because nobody has figured out how to cheat. In the second phase, you actually get something worse, as everyone figures out the trick to maximizing the thing that you’re measuring, even at the cost of ruining the company.
Worse, Econ 101 managers think that they can somehow avoid this situation just by tweaking the metrics. Dr. Austin’s conclusion is that you just can’t. It never works. No matter how much you try to adjust the metrics to reflect what you think you want, it always backfires.
The biggest problem with Econ 101 management, though, is that it’s not management at all: it’s really more of an abdication of management. A deliberate refusal to figure out how things can be made better. It’s a sign that management simply doesn’t know how to teach people to do better work, so they force everybody in the system to come up with their own way of doing it.
Instead of training developers on techniques of writing reliable code, you just absolve yourself of responsibility by paying them if they do. Now every developer has to figure it out on their own.
For more mundane tasks, working the counter at Starbucks or answering phone calls at AOL, it’s pretty unlikely that the average worker will figure out a better way of doing things on their own. You can go into any coffee shop in the country and order a short soy caramel latte extra-hot, and you’ll find that you have to keep repeating your order again and again: once to the coffee maker, again to the coffee maker when they forgot what you said, and finally to the cashier so they can figure out what to charge you. That’s the result of nobody telling the workers a better way. Nobody figures it out, except Starbucks, where the standard training involves a complete system of naming, writing things on cups, and calling out orders which ensures that customers only have to specify their drink orders once. The system, invented by Starbucks HQ, works great, but workers at the other chains never, ever come up with it on their own.
Your customer service people spend most of the day talking to customers. They don’t have the time, the inclination, or the training to figure out better ways to do things. Nobody in the customer retention crew is going to be able to keep statistics and measure which customer retention techniques work best while pissing off the fewest bloggers. They just don’t care enough, they’re not smart enough, they don’t have enough information, and they are too busy with their real job.
As a manager it’s your job to figure out a system. That’s Why You Get The Big Bucks.
If you read a little bit too much Ayn Rand as a kid, or if you took one semester of Economics, before they explained that utility is not measured in dollars, you may think that setting up simplified bonus schemes and Pay For Performance is a pretty neat way to manage. But it doesn’t work. Start doing your job managing and stop feeding your chickens kopecks.
“Joel!” you yell. “Yesterday you told us that the developers should make all the decisions. Today you’re telling us that the managers should make all the decisions. What’s up with that?”
Mmm, not exactly. Yesterday I told you that your developers, the leaves in the tree, have the most information; micromanagement or Command and Control barking out orders is likely to cause non-optimal results. Today I’m telling you that when you’re creating a system, you can’t abdicate your responsibility to train your people by bribing them. Management, in general, needs to set up the system so that people can get things done, it needs to avoid displacing intrinsic motivation with extrinsic motivation, and it won’t get very far using fear and barking out specific orders.
Now that I’ve shot down Command and Control management and Econ 101 management, there’s one more method managers can use to get people moving in the right direction. I call it the Identity method and I’ll talk about it more tomorrow.
http://www.joelonsoftware.com/items/2006/08/09.html
Developers are clever this way. Whatever you try to measure, they’ll find a way to maximize, and you’ll never quite get what you want.
Robert Austin, in his book Measuring and Managing Performance in Organizations, says there are two phases when you introduce new performance metrics. At first, you actually get what you wanted, because nobody has figured out how to cheat. In the second phase, you actually get something worse, as everyone figures out the trick to maximizing the thing that you’re measuring, even at the cost of ruining the company.
Worse, Econ 101 managers think that they can somehow avoid this situation just by tweaking the metrics. Dr. Austin’s conclusion is that you just can’t. It never works. No matter how much you try to adjust the metrics to reflect what you think you want, it always backfires.
The biggest problem with Econ 101 management, though, is that it’s not management at all: it’s really more of an abdication of management. A deliberate refusal to figure out how things can be made better. It’s a sign that management simply doesn’t know how to teach people to do better work, so they force everybody in the system to come up with their own way of doing it.
Instead of training developers on techniques of writing reliable code, you just absolve yourself of responsibility by paying them if they do. Now every developer has to figure it out on their own.
For more mundane tasks, working the counter at Starbucks or answering phone calls at AOL, it’s pretty unlikely that the average worker will figure out a better way of doing things on their own. You can go into any coffee shop in the country and order a short soy caramel latte extra-hot, and you’ll find that you have to keep repeating your order again and again: once to the coffee maker, again to the coffee maker when they forgot what you said, and finally to the cashier so they can figure out what to charge you. That’s the result of nobody telling the workers a better way. Nobody figures it out, except Starbucks, where the standard training involves a complete system of naming, writing things on cups, and calling out orders which insures that customers only have to specify their drink orders once. The system, invented by Starbucks HQ, works great, but workers at the other chains never, ever come up with it on their own.
Your customer service people spend most of the day talking to customers. They don’t have the time, the inclination, or the training to figure out better ways to do things. Nobody in the customer retention crew is going to be able to keep statistics and measure which customer retention techniques work best while pissing off the fewest bloggers. They just don’t care enough, they’re not smart enough, they don’t have enough information, and they are too busy with their real job.
As a manager it’s your job to figure out a system. That’s Why You Get The Big Bucks.
If you read a little bit too much Ayn Rand as a kid, or if you took one semester of Economics, before they explained that utility is not measured in dollars, you may think that setting up simplified bonus schemes and Pay For Performance is a pretty neat way to manage. But it doesn’t work. Start doing your job managing and stop feeding your chickens kopecks.
“Joel!” you yell. “Yesterday you told us that the developers should make all the decisions. Today you’re telling us that the managers should make all the decisions. What’s up with that?”
Mmm, not exactly. Yesterday I told you that your developers, the leaves in the tree, have the most information; micromanagement or Command and Control barking out orders is likely to cause non-optimal results. Today I’m telling you that when you’re creating a system, you can’t abdicate your responsibility to train your people by bribing them. Management, in general, needs to set up the system so that people can get things done, it needs to avoid displacing intrinsic motivation with extrinsic motivation, and it won’t get very far using fear and barking out specific orders.
Now that I’ve shot down Command and Control management and Econ 101 management, there’s one more method managers can use to get people moving in the right direction. I call it the Identity method and I’ll talk about it more tomorrow.
Saturday, December 22
Prescriptive Advice For Successful Unit Testing
At the beginning of the Vista (then Longhorn) project our team
decided that we would implement unit tests. This was the first attempt
in our locale to try to use them. We had some successes and some
failures. Out of that I have learned several things. This is an
attempt to codify what I have learned and to try to set out a
prescription for what I feel it would take to leverage them fully. What
follows are my recommended practices for implementing unit tests:
- Unit tests must be written by the developers of the code. Having a separate test team implementing them doesn't work as well. First, they take longer to write. The person developing the code knows how it is supposed to operate and can write tests for it quickly. Anyone else has to spend time learning how it is intended to be used from the minimal (at best) documentation or conversations with the developer. Having the developer write the unit tests acts as a form of documentation for everyone to follow. Second, the tests take a lot longer to come online. They are available days, weeks, or even months after the code is checked into the product.
- Unit tests must be written and passing before the code is checked in. Writing unit tests should be considered part of the development process. Nothing is code complete unless there are passing unit tests for it. Checking in code without passing unit tests should be treated like checking in a build break. That is, it is unacceptable.
- The unit tests must never be allowed to fail. Just recently I saw a bug in our database about a unit test that was failing. The bug was filed several months ago. This just cannot be allowed to happen if you are going to get the full value out of the unit tests. Unit tests should act as canaries. They are the first things to fall over when something goes wrong. If they ever break, they must be fixed immediately.
- Before any checkin, existing unit tests must be run and made to pass 100%. This is a corollary to points 2 and 3 but I want to make it explicit. Just as you would get a buddy build, a code review, and a smoke test before every checkin, you must also pass the unit tests.
- Unit tests must be granular and comprehensive. A test case which plays back some media is not really a unit test for a media pipeline and it certainly isn't sufficient. At a minimum, each external interface should be tested for all expected values. An even better system of unit tests would verify internal interfaces directly as well. A rule of thumb is that unit tests should achieve at least 60% code coverage before the first checkin is made.
- Standardize a mechanism for storing, building, and running unit tests. Don't leave it up to each individual or small team. There should be a standard harness for the tests to be written in. There should be a convention followed by all for where to check them into the source tree. Unit tests must be built regularly. In my opinion, the unit tests should be written in the same harness used by the test team for their tests. The unit tests should be checked into the build right alongside the code that they are testing and should be built with each build. A build break in a unit test is just as bad as a build break in the shipping code.
- The unit test framework must be lightweight. If the framework is to be the same one the test team uses (highly recommended), it must be one that can be run easily. If running unit tests requires anything more than copying some files and running an executable, it is too heavy. Expecting developers to install a whole test framework to run their tests is a prescription for disaster.
- Run unit tests as part of the daily testing process. If the tests use the same harness, they can be leveraged as part of the daily tests. The tests for external interfaces can be especially useful in assessing the quality of the product.
Thursday, December 20
Three Management Methods (Introduction)
If you want to lead a team, a company, an army, or a country, the primary
problem you face is getting everyone moving in the same direction, which is
really just a polite way of saying “getting people to do what you want.”
Think of it this way. As soon as your team consists of more than one person, you’re going to have different people with different agendas. They want different things than you want. If you’re a startup founder, you might want to make a lot of money quickly so you can retire early and spend the next couple of decades going to conferences for women bloggers. So you might spend most of your time driving around Sand Hill Road talking to VCs who might buy the company and flip it to Yahoo!. But Janice the Programmer, one of your employees, doesn’t care about selling out to Yahoo!, because she’s not going to make any money that way. What she cares about is writing code in the latest coolest new programming language, because it’s fun to learn a new thing. Meanwhile your CFO is entirely driven by the need to get out of the same cubicle he has been sharing with the system administrator, Trekkie Monster, and so he’s working up a new budget proposal that shows just how much money you would save by moving to larger office space that’s two minutes from his house, what a coincidence!
The problem of getting people to move in your direction (or, at least, the same direction) is not unique to startups, of course. It’s the same fundamental problem that a political leader faces when they get elected after promising to eliminate waste, corruption, and fraud in government. The mayor wants to make sure that it’s easy to get city approval of a new building project. The city building inspectors want to keep getting the bribes they have grown accustomed to.
And it’s the same problem that a military leader faces. They might want a team of soldiers to charge at the enemy, even when every individual soldier would really just rather cower behind a rock and let the others do the charging.
Here are three common approaches you might take:
- The Command and Control Method
- The Econ 101 Method
- The Identity Method
http://www.joelonsoftware.com/items/2006/08/07.html
Monday, December 17
Tester titles/skill levels - what do they tell you?
I want to discuss a topic seldom touched on this site: the recognition of a tester’s skills. There was quite a discussion on SQAForums about the best tester. The dilemma: whoever is best known is not necessarily the best at what they do. I want, however, to recognize the testers in our company who are the best at what they do and to motivate others to become better at what they do. I also want to hire those who are good, or who have the potential and willingness to become good, at what they will do.
There is a formal activity designed to evaluate performance and productivity, the performance review, and there are tester titles/skill levels, another formality. I'm going to analyze both in this post.
Experiences that led me to talk about this
I’ve participated in describing the tester skill levels (several times) and later in improving them. I’m now part of a special “working party” (consisting of two people) whose job is to approve each increase in a tester’s skill level. I’ve done a lot of performance reviews and was one of the few people in the company who found any value in that process beyond following the corporate guidelines. And still I’m not happy about the processes I’m taking part in.
Details, examples, what’s wrong
For those who do not work in a large company with a defined list of skill levels or positions: these lists are quite simple. They contain titles like functional tester, test engineer, or QA analyst, optionally complemented with advanced, specialist, or expert. Each skill level has a description, typically including the types of tasks and duties the person is capable of (and, optionally, the typical tasks the person is supposed to perform).
I’ve never seen such a description include an evaluation of “performance/speed”. I don’t mean diligence; I mean the ability to achieve the desired results with less effort. Testers in my context are mostly problem solvers. Skills determine not only which problems they can solve, but also how fast. Suppose tester A can solve 7 out of 10 problems and tester B can solve 8, but B takes twice as much time per problem. If A solves his 7 problems twice as fast as B, he will have plenty of time left to ask or search for solutions to the remaining 3, while B will still be left with 2 unsolved.
There are a lot of testing tasks that take a less skilled tester several times more effort, even though each tester is capable of doing them. Unfortunately, there is also the vision of a tester whose only task is to press buttons in a certain sequence, so that the only skill you need to improve your performance is typing. That may be the case in certain contexts, like 100% scripted manual regression testing. Fortunately, it is not my context; I’m afraid I wouldn’t survive it.
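A few lines make the arithmetic of that tester A versus tester B comparison concrete. The 16-unit time budget is an assumption, chosen so that B exactly finishes his 8 solvable problems:

```python
# Hypothetical numbers: tester B needs 2 units of time per problem,
# tester A only 1. Budget of 16 units lets B just finish his 8.
time_budget = 16
a_time_per_problem, a_solvable = 1, 7
b_time_per_problem, b_solvable = 2, 8

a_spare = time_budget - a_time_per_problem * a_solvable
b_spare = time_budget - b_time_per_problem * b_solvable

# A has 9 units left to chase help on his remaining 3 problems;
# B has nothing left for his remaining 2.
assert a_spare == 9
assert b_spare == 0
```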
The purpose of evaluating skill levels
I’m not against it. I know at least a few good reasons for having those skill levels evaluated:
- it evaluates a tester’s usefulness to the company and even helps to decide the salary
- it makes a tester’s transition from one project to another (which happens quite frequently, at least in my company) smooth, so the new boss knows what to expect from the tester and what tasks to give him
- it recognizes a tester’s skills and shows them a path for further growth, especially since certifications are not seen among practitioners as recognition of skill (which is also my attitude)
Skills in computer games (RPG)
I used to play computer games a lot, and RPGs were among my favorites. The idea is to spend endless hours developing your character by killing monsters (well, there are also those quests). Killing brings experience, which enables you to improve your skills. There are a lot of different skills in different disciplines, like fighting: sword, dodging, parry, etc.; or magic: fire, illusions, destruction, and so on. Skills serve two basic purposes: 1) you are able to learn certain actions (like specific spells) only once your skill is great enough; 2) your efficiency at those actions (damage done on target, failure rate, how fast you get exhausted, etc.) increases as your skill increases.
Few games have only one of these two, though many are designed to make one of them more significant. I loved the most balanced ones. In those games, one very experienced and correctly developed character is better than two less experienced or wrongly developed ones, better than a pack of slightly experienced characters, and better than hundreds of novices.
What is so specific about a tester?
Tester credibility is issue number one. I’ve observed through 10 years of experience that once I earn credibility, my performance improves, because I don’t have to waste time proving that I’m doing the right things, especially when I follow a context-driven methodology.
Issue number two is the wide variety of tasks a functional tester is really supposed to do, since a tester is supposed to test any software, written in any language, following any architecture, for any business.
For a developer, the main skill is the language; for a business analyst, the business domain; for a designer, the architecture type. For a tester, it is the generic skills: testing, analytical/critical thinking, communication, documentation, and so on.
I tend to think of a tester as a jack-of-all-trades. And still there is a lot of room for specialization. In our company, some areas of deeper specialization include performance testing, API-level testing, and exploratory testing. Scripted black-box testing is never considered a specialization worth mentioning; it is assumed to be a generic testing skill.
http://testingreflections.com/node/4569
How to get all your bugs resolved without any ‘Invalid bug’ label?
I hate getting an “Invalid bug” label from developers on the bugs I report; don’t you? I think every tester should try to get 100% of his or her bugs resolved. This requires bug-reporting skill. See my previous post, “How to write a good bug report? Tips and Tricks”, for how to report bugs professionally and without ambiguity.
The main reason a bug gets marked as invalid is insufficient troubleshooting by the tester before reporting it. In this post I will focus only on the troubleshooting needed to find the root cause of a bug. Troubleshooting will help you decide whether the anomaly you found in your application under test is really a bug or just a mistake in your test setup.
Yes, 50% of bugs get marked as “invalid” purely because of the tester’s incomplete test setup. Let’s say you found an anomaly in the application under test, and you are now preparing the steps to report it as a bug. But wait! Have you done enough troubleshooting before reporting this bug? Have you confirmed that it is really a bug?
What troubleshooting do you need to perform before reporting a bug?
Troubleshoot:
- What’s not working?
- Why isn’t it working?
- How can you make it work?
- What are the possible reasons for the failure?
Before reporting any bug, make sure it isn’t your own mistake while testing: perhaps you missed setting an important flag or did not configure your test setup properly.
Only after properly troubleshooting the reasons for the failure should you report the bug. I have compiled a troubleshooting list. Check it out: these are the different reasons for failure.
Reasons for failure:
1) If you are using a configuration file to test your application, make sure it is up to date with the application requirements: often a global configuration file is used to pick up or set application flags, and failing to keep this file in line with your software's requirements will make the application under test malfunction. You can’t report that as a bug.
2) Check that your database is in a proper state: a missing table is a common reason an application fails to work properly.
I have a classic example of this: one of my projects queried many monthly user database tables to show user reports. First the table’s existence was checked in a master table (which maintained only the monthly table names), and then the data was queried from the individual monthly tables. Many testers selected a big date range to see the user reports, and this often crashed the application: the monthly tables were not present in the test server’s database, so the queries returned SQL errors. The testers reported this as a bug, and the developers subsequently marked it invalid.
3) If you are working on an automation project, debug your script twice before concluding that the application failure is a bug.
4) Check that you are not using invalid access credentials for authentication.
5) Check that the software versions involved are compatible.
6) Check whether there is a hardware issue unrelated to your application.
7) Make sure your application’s hardware and software prerequisites are met.
8) Check that all software components are installed properly on your test machine, and that the registry entries are valid.
9) For any failure, look in the system event viewer for details. You can trace many failure causes from the system event log.
10) Before you start testing, make sure you have deployed all the latest files to your test environment.
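The master-table scenario from item 2 can be sketched in a few lines. The schema and table names are invented for illustration, using an in-memory SQLite database:

```python
import sqlite3

# Hypothetical schema mirroring the example: a master table listing
# which monthly report tables actually exist in this environment.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (table_name TEXT)")
conn.execute("CREATE TABLE reports_2007_01 (user TEXT, hits INTEGER)")
conn.execute("INSERT INTO master VALUES ('reports_2007_01')")

def monthly_report(conn, table_name):
    """Query a monthly table only after confirming it is registered."""
    row = conn.execute(
        "SELECT 1 FROM master WHERE table_name = ?", (table_name,)
    ).fetchone()
    if row is None:
        # The table was never created for this month: a test-setup
        # condition to troubleshoot, not an application bug to report.
        return None
    return conn.execute(f"SELECT * FROM {table_name}").fetchall()

assert monthly_report(conn, "reports_2007_01") == []   # table exists
assert monthly_report(conn, "reports_2007_02") is None  # missing month
```

The same existence check, run by the tester against the test server before filing, would have avoided the invalid bug reports described above.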
These are all small and common mistakes, but they can seriously affect your relationships and credibility in your team. When you find that your bug has been marked invalid for a reason from the list above, it is a silly mistake, and it will definitely hurt. (At least it hurts me!)
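Several of the checklist items (configuration files, version compatibility, prerequisites) can be folded into a small pre-report script. Everything here, the paths, version strings, and specific checks, is an invented illustration of the idea, not a tool from the post:

```python
import os
import sys

def preflight_checks(config_path, required_version, found_version):
    """Run a few test-setup checks before filing a bug.

    Returns a list of setup problems; an empty list means the setup
    looks clean and the failure is worth reporting.
    """
    problems = []
    if not os.path.exists(config_path):
        problems.append(f"config file missing: {config_path}")
    if found_version != required_version:
        problems.append(
            f"version mismatch: need {required_version}, have {found_version}")
    if sys.version_info < (3, 0):
        problems.append("unsupported interpreter version")
    return problems

# A non-empty result points at the test setup, not the application.
issues = preflight_checks("/nonexistent/missing.cfg", "2.1", "2.0")
```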
http://www.softwaretestinghelp.com/how-to-get-your-all-bugs-resolved/
The main reason for bug being marked as invalid is “Insufficient troubleshooting” by tester before reporting the bug. In this post I will focus only on troubleshooting to find main cause of the bug. Troubleshooting will help you to decide whether the ambiguity you found in your application under test is really a bug or any test setup mistake.
Yes, 50% bugs get marked as “Invalid bugs” only due to testers incomplete testing setup. Let’s say you found an ambiguity in application under test. You are now preparing the steps to report this ambiguity as a bug. But wait! Have you done enough troubleshooting before reporting this bug? Or have you confirmed if it is really a bug?
What troubleshooting you need to perform before reporting any bug?
Troubleshooting of:
- What’s not working?
- Why it’s not working?
- How can you make it work?
- What are the possible reasons for the failure?
Before reporting any bug, make sure it isn’t your mistake while testing, you have missed any important flag to set or you might have not configured your test setup properly.
Troubleshoot the reasons for the failure in application. On proper troubleshooting report the bug. I have complied a troubleshooting list. Check it out – what can be different reasons for failure.
Reasons of failure:
1) If you are using any configuration file for testing your application then make sure this file is upto date as per the application requirements: Many times some global configuration file is used to pick or set some application flags. Failure to maintain this file as per your software requirements will lead to malfunctioning of your application under test. You can’t report it as bug.
2) Check if your database is proper: Missing table is main reason that your application will not work properly.
I have a classic example for this: One of my projects was querying many monthly user database tables for showing the user reports. First table existence was checked in master table (This table was maintaining only monthly table names) and then data was queried from different individual monthly tables. Many testers were selecting big date range to see the user reports. But many times it was crashing the application as those tables were not present in database of test machine server, giving SQL query error and they were reporting it as bug which subsequently was getting marked as invalid by developers.
3) If you are working on automation testing project then debug your script twice before coming to conclusion that the application failure is a bug.
4) Check if you are not using invalid access credentials for authentication.
5) Check if software versions are compatible.
6) Check if there is any other hardware issue that is not related to your application.
7) Make sure your application hardware and software prerequisites are correct.
8 ) Check if all software components are installed properly on your test machine. Check whether registry entries are valid.
9) For any failure look into ‘system event viewer’ for details. You can trace out many failure reasons from system event log file.
10) Before starting to test make sure you have uploaded all latest version files to your test environment.
These are all small and common mistakes, but they can seriously damage your relationships and credibility within your team. Finding your bug marked as invalid for one of the reasons listed above is a silly mistake, and it definitely hurts. (At least it does for me!)
http://www.softwaretestinghelp.com/how-to-get-your-all-bugs-resolved/
The Command and Control Management Method
Frederick the Great [PDF]: “Soldiers should fear their officers more than all the dangers to which they are exposed.... Good will can never induce the common soldier to stand up to such dangers; he will only do so through fear.”
The Command and Control form of management is based on military management. Primarily, the idea is that people do what you tell them to do, and if they don’t, you yell at them until they do, and if they still don’t, you throw them in the brig for a while, and if that doesn’t teach them, you put them in charge of peeling onions on a submarine, sharing two cubic feet of personal space with a lad from a farm who really never quite learned about brushing his teeth.
There are a million great techniques you can use. Rent the movies Biloxi Blues and An Officer and a Gentleman for some ideas.
Some managers use this technique because they actually learned it in the military. Others grew up in authoritarian households or countries and think it’s a natural way to gain compliance. Others just don’t know any better. Hey, it works for the military, it should work for an internet startup!
There are, it turns out, three drawbacks with this method in a high tech team.
First of all, people don’t really like it very much, least of all smarty-pants software developers, who are, actually, pretty smart and are used to thinking they know more than everyone else, for perfectly good reasons, because it happens to be true, and so it really, really bothers them when they’re commanded to do something “because.” But that’s not really a good enough reason to discard this method… we’re trying to be rational here. High tech teams have many goals but making everyone happy is rarely goal number one.
A more practical drawback with Command and Control is that management literally does not have enough time to micromanage at this level, because there simply aren’t enough managers. In the military, it’s possible to give an order simultaneously to a large team of people because it’s common that everyone is doing the same thing. “Clean your guns!” you can say, to a squad of 28, and then go take a brief nap and have a cool iced tea on the Officer’s Club veranda. In software development teams everybody is working on something else, so attempts to micromanage turn into hit and run micromanagement. That’s where you micromanage one developer in a spurt of activity and then suddenly disappear from that developer’s life for a couple of weeks while you run around micromanaging other developers. The problem with hit and run micromanagement is that you don’t stick around long enough to see why your decisions are not working or to correct course. Effectively, all you accomplish is to knock your poor programmers off the train track every once in a while, so they spend the next week finding all their train cars and putting them back on the tracks and lining everything up again, a little bit battered from the experience.
The third drawback is that in a high tech company the individual contributors always have more information than the “leaders,” so they are really in the best position to make decisions. When the boss wanders into an office where two developers have been arguing for two hours about the best way to compress an image, the person with the least information is the boss, so that’s the last person you’d want making a technical decision. I remember when Mike Maples was my great grand-boss, in charge of Microsoft Applications, he was adamant about refusing to take sides on technical issues. Eventually people learned that they shouldn’t come to him to adjudicate. This forced people to debate the issue on the merits and issues were always resolved in favor of the person who was better at arguing, er, I mean, issues were always resolved in the best possible way.
If Command and Control is such a bad way to run a team, why does the military use it?
This was explained to me in NCO school. I was in the Israeli paratroopers in 1986. Probably the worst paratrooper they ever had, now that I think back.
There are several standing orders for soldiers. Number one: if you are in a mine field, freeze. Makes sense, right? It was drilled into you repeatedly during basic training. Every once in a while the instructor would shout out “Mine!” and everybody had to freeze just so you would get in the habit.
Standing order number two: when attacked, run towards your attackers while shooting. The shooting makes them take cover so they can’t fire at you. Running towards them causes you to get closer to them, which makes it easier to aim at them, which makes it easier to kill them. This standing order makes a lot of sense, too.
OK, now for the Interview Question. What do you do if you’re in a minefield, and people start shooting at you?
This is not such a hypothetical situation; it’s a really annoying way to get caught in an ambush.
The correct answer, it turns out, is that you ignore the minefield, and run towards the attackers while shooting.
The rationale behind this is that if you freeze, they’ll pick you off one at a time until you’re all dead, but if you charge, only some of you will die by running over mines, so for the greater good, that’s what you have to do.
The trouble is that no rational soldier would charge under such circumstances. Each individual soldier has an enormous incentive to cheat: freeze in place and let the other, more macho soldiers do the charging. It’s sort of like a Prisoners’ Dilemma.
In life or death situations, the military needs to make sure that they can shout orders and soldiers will obey them even if the orders are suicidal. That means soldiers need to be programmed to be obedient in a way which is not really all that important for, say, a software company.
In other words, the military uses Command and Control because it’s the only way to get 18 year olds to charge through a minefield, not because they think it’s the best management method for every situation.
In particular, in software development teams where good developers can work anywhere they want, playing soldier is going to get pretty tedious and you’re not really going to keep anyone on your team.
http://www.joelonsoftware.com/items/2006/08/08.html
What To Unit Test
Several months back I wrote
about unit testing. Following that I received a question from a reader
about how to actually carry out writing unit tests. What should be
tested? How much is enough? There is no single answer to these
questions but I can give guidance. The actual answer depends on the
specifics of what you are trying to test and how much time you have to
devote to unit testing. What follows is a list of the items you should
test, in the order you should test them. If you only have limited time,
start at the top of the list and work down until you run out of time.
In each case below, you should begin by writing tests for the positive cases and move on to the negative ones. By positive cases I mean those things you expect a user to do. This is the way the feature will be used if it is being used properly. Call each function with all the equivalence classes of data that they would be expected to see in normal circumstances. Ensure that the behavior you observe is what you expect. This means actually verifying that the right behavior took place. If short on time, you can often use the interface to verify itself. Calling a Get after a Set to verify that the value was set correctly is a good example of this. Even better is verifying through some alternative mechanism that the right behavior took place. This could be looking at internal structures, observing behavior, querying a database, etc.
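As a concrete illustration of the Get-after-Set idea, here is a minimal sketch using Python’s unittest; the `Settings` class is a hypothetical stand-in for whatever code you are testing.

```python
import unittest

class Settings:
    """Hypothetical stand-in for the class under test."""
    def __init__(self):
        self._values = {}

    def set_value(self, key, value):
        self._values[key] = value

    def get_value(self, key):
        return self._values[key]

class TestSettingsPositiveCases(unittest.TestCase):
    def test_set_then_get(self):
        # Weakest check: the interface verifies itself (Get after Set).
        s = Settings()
        s.set_value("timeout", 30)
        self.assertEqual(s.get_value("timeout"), 30)

    def test_set_updates_internal_state(self):
        # Stronger check: verify through an alternative mechanism, here
        # by peeking at the internal structure directly.
        s = Settings()
        s.set_value("timeout", 30)
        self.assertEqual(s._values, {"timeout": 30})

if __name__ == "__main__":
    unittest.main()
```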
Negative cases might also be called bad data. These are the inputs you don't expect. This includes out of range values, invalid objects, null pointers, improperly initialized structures, etc. Once upon a time we could ignore these and consider the result of these inputs to be undefined. In the days of networked computers and the security implications now present in many flaws, they cannot be treated lightly. It is important to test for these and ensure that the proper behavior is observed. That behavior could be to ignore the data, return an error, raise an exception, or even crash safely. The important thing is to understand what the behavior should be and to verify that it indeed takes place.
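A sketch of negative-case tests, again with a hypothetical function under test; the point is that each bad input produces a deliberate, verified behavior rather than an undefined one.

```python
import unittest

def parse_percentage(value):
    """Hypothetical function under test: accepts a number from 0 to 100."""
    if not isinstance(value, (int, float)):
        raise TypeError("percentage must be a number")
    if not 0 <= value <= 100:
        raise ValueError("percentage out of range")
    return value / 100.0

class TestParsePercentageNegativeCases(unittest.TestCase):
    def test_out_of_range_value(self):
        with self.assertRaises(ValueError):
            parse_percentage(250)

    def test_null_input(self):
        # None plays the role of a null pointer: the failure must be a
        # deliberate error, not an accidental crash deep inside the code.
        with self.assertRaises(TypeError):
            parse_percentage(None)

if __name__ == "__main__":
    unittest.main()
```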
The first and most important thing to hit is the point where your customers will interact with your code. This could be an API or a UI* or a web service. This is the highest level of abstraction and perhaps not the easiest or even the best place for unit tests, but if you only have a little time, this is where you'll get the most bang for your buck.
The next place to go testing is at the internal interface level. This means testing the places where the internal objects/functional blocks interact. If you are programming OO, this means testing all of the public (or friend) interfaces on all of your objects.
Finally, unit test all of your internal utility functions and private object methods. Crack the abstraction layer on your objects and probe their internal functionality.
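The priority order above can be sketched like this; `ReportBuilder` is a hypothetical feature with a public entry point (test first) and a private helper (test last, by cracking the abstraction).

```python
import unittest

class ReportBuilder:
    """Hypothetical feature: a public entry point built on a private helper."""
    def build(self, rows):
        return "\n".join(self._format_row(row) for row in rows)

    def _format_row(self, row):
        return ", ".join(str(cell) for cell in row)

class TestPublicInterface(unittest.TestCase):
    # Highest priority: the surface that customers and other components use.
    def test_build(self):
        self.assertEqual(ReportBuilder().build([[1, 2], [3, 4]]), "1, 2\n3, 4")

class TestPrivateHelpers(unittest.TestCase):
    # Lowest priority: probe the internal functionality directly.
    def test_format_row(self):
        self.assertEqual(ReportBuilder()._format_row([1, 2]), "1, 2")

if __name__ == "__main__":
    unittest.main()
```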
If you do all of this, you'll have a pretty good idea that your feature is functioning as desired and will have warning when something breaks. Remember to run all of the (relevant) tests before *every* checkin. Also, be vigilant. Any failures at all should be fixed before checking in. It is not okay to ignore a failure. If it is a false failure, fix the test. If it is a valid one, fix the code.
* You probably cannot unit test the UI itself, but if you properly abstract the actual UI, you can write unit tests for the functionality of each UI element.
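One common way to get that abstraction is a presenter (sometimes called a "humble view") layer: the UI logic lives in a plain class that talks to the real widget through a narrow interface, so tests can substitute a fake. A minimal sketch, with all names hypothetical:

```python
class LoginPresenter:
    """UI logic pulled out of the widget layer (all names hypothetical)."""
    def __init__(self, view):
        self.view = view

    def on_submit(self, username, password):
        if not username or not password:
            self.view.show_error("Both fields are required.")
        else:
            self.view.show_welcome(username)

class FakeView:
    """Test double standing in for the real UI toolkit."""
    def __init__(self):
        self.error = None
        self.welcome = None

    def show_error(self, message):
        self.error = message

    def show_welcome(self, username):
        self.welcome = username
```

The production code wraps the real widget behind the same narrow interface, so the only untested part is the thin layer that forwards clicks and paints text.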
http://blogs.msdn.com/b/steverowe/archive/2006/12/01/what-to-unit-test.aspx
Finding Great Developers
Where are all those great developers?
The first time you try to fill an open position, if you’re like most people, you place some ads, maybe browse around the large online boards, and get a ton of resumes.
As you go through them, you think, “hmm, this might work,” or, “no way!” or, “I wonder if this person could be convinced to move to Buffalo.” What doesn’t happen, and I guarantee this, what never happens is that you say, “wow, this person is brilliant! We must have them!” In fact you can go through thousands of resumes, assuming you know how to read resumes, which is not easy, and I’ll get to that on Friday, but you can go through thousands of job applications and quite frankly never see a great software developer. Not a one.
Here is why this happens.
The great software developers, indeed, the best people in every field, are quite simply never on the market.
The average great software developer will apply for, total, maybe, four jobs in their entire career.
The great college graduates get pulled into an internship by a professor with a connection to industry, then they get early offers from that company and never bother applying for any other jobs. If they leave that company, it’s often to go to a startup with a friend, or to follow a great boss to another company, or because they decided they really want to work on, say, Eclipse, because Eclipse is cool, so they look for an Eclipse job at BEA or IBM and then of course they get it because they’re brilliant.
If you’re lucky, if you’re really lucky, they show up on the open job market once, when, say, their spouse decides to accept a medical internship in Anchorage and they actually send their resume out to what they think are the few places they’d like to work at in Anchorage.
But for the most part, great developers (and this is almost a tautology) are, uh, great, (ok, it is a tautology), and, usually, prospective employers recognize their greatness quickly, which means, basically, they get to work wherever they want, so they honestly don’t send out a lot of resumes or apply for a lot of jobs.
Does this sound like the kind of person you want to hire? It should.
The corollary of that rule—the rule that the great people are never on the market—is that the bad people—the seriously unqualified—are on the market quite a lot. They get fired all the time, because they can’t do their job. Their companies fail—sometimes because any company that would hire them would probably also hire a lot of unqualified programmers, so it all adds up to failure—but sometimes because they actually are so unqualified that they ruined the company. Yep, it happens.
These morbidly unqualified people rarely get jobs, thankfully, but they do keep applying, and when they apply, they go to Monster.com and check off 300 or 1000 jobs at once trying to win the lottery.
Numerically, great people are pretty rare, and they’re never on the job market, while incompetent people, even though they are just as rare, apply to thousands of jobs throughout their career. So now, Sparky, back to that big pile of resumes you got off of Craigslist. Is it any surprise that most of them are people you don’t want to hire?
Astute readers, I expect, will point out that I’m leaving out the largest group yet, the solid, competent people. They’re on the market more than the great people, but less than the incompetent, and all in all they will show up in small numbers in your 1000 resume pile, but for the most part, almost every hiring manager in Palo Alto right now with 1000 resumes on their desk has the same exact set of 970 resumes from the same minority of 970 incompetent people that are applying for every job in Palo Alto, and probably will be for life, and only 30 resumes even worth considering, of which maybe, rarely, one is a great programmer. OK, maybe not even one. And figuring out how to find those needles in a haystack, we shall see, is possible but not easy.
Can I get them anyway?
Yes!
Well, Maybe!
Or perhaps, It Depends!
Instead of thinking of recruiting as a “gather resumes, filter resumes” procedure, you’re going to have to think of it as a “track down the winners and make them talk to you” procedure.
I have three basic methods for how to go about this.
You can probably come up with your own ideas, too. I’m just going to talk about three that worked for me.
To the mountain, Jeeves!
Think about where the people you want to hire are hanging out. What conferences do they go to? Where do they live? What organizations do they belong to? What websites do they read? Instead of casting a wide net with a job search on Monster.com, use the Joel on Software job board and limit your search to the smart people that read this site. Go to the really interesting tech conferences. Great Mac developers will be at Apple’s WWDC. Great Windows programmers will be at Microsoft’s PDC. There are a bunch of open source conferences, too.
Look for the hot new technology of the day. Last year it was Python; this year it’s Ruby. Go to their conferences where you’ll find early adopters who are curious about new things and always interested in improving.
Slink around in the hallways, talk to everyone you meet, go to the technical sessions and invite the speakers out for a beer, and when you find someone smart, BANG!—you launch into full-fledged flirt and flattery mode. “Ooooh, that’s so interesting!” you say. “Wow, I can’t believe you’re so smart. And handsome too. Where did you say you work? Really? There? Hmmmmmmm. Don’t you think you could do better? I think my company might be hiring…”
The corollary of this rule is to avoid advertising on general-purpose, large job boards. One summer, I inadvertently advertised our summer internships using MonsterTRAK, which offered the option to pay a little extra to make the internship visible to students at every school in the USA. This resulted in literally hundreds of resumes, not one of which made it past the first round. We ended up spending a ton of money to get a ton of resumes that stood almost no chance at finding the kind of people we wanted to hire. After a few days of this, the very fact that MonsterTRAK was the source of the resume made me think the candidate was probably not for us. Similarly, when Craigslist first started up and was really just visited by early-adopters in the Internet industry, we found great people by advertising on Craigslist, but today, virtually everyone who is moderately computer-literate uses it, resulting in too many resumes with too low of a needle-to-haystack ratio.
Internships
One good way to snag the great people who are never on the job market is to get them before they even realize there is a job market: when they’re in college.
Some hiring managers hate the idea of hiring interns. They see interns as unformed and insufficiently skilled. To some extent, that’s true. Interns are not as experienced as experienced employees (no. Really?!). You’re going to have to invest in them a little bit more and it’s going to take some time before they’re up to speed. The good news about our field is that the really great programmers often started programming when they were 10 years old. And while everyone else their age was running around playing “soccer” (this is a game that many kids who can’t program computers play that involves kicking a spherical object called a “ball” with their feet (I know, it sounds weird)), they were in their dad’s home office trying to get the Linux kernel to compile. Instead of chasing girls in the playground, they were getting into flamewars on Usenet about the utter depravity of programming languages that don’t implement Haskell-style type inference. Instead of starting a band in their garage, they were implementing a cool hack so that when their neighbor stole bandwidth over their open-access WIFI point, all the images on the web appeared upside-down. BWA HA HA HA HA!
So, unlike, say, the fields of law or medicine, over here in software development, by the time these kids are in their second or third year in college they are pretty darn good programmers.
Pretty much everyone applies for one job: their first one, and most kids think that it’s OK to wait until their last year to worry about this. And in fact most kids are not that inventive, and will really only bother applying for jobs where there is actually some kind of on-campus recruiting event. Kids at good colleges have enough choices of good jobs from the on-campus employers that they rarely bother reaching out to employers that don’t bother to come to campus.
You can either participate in this madness, by recruiting on campus, which is a good thing, don’t get me wrong, or you can subvert it, by trying to get great kids a year or two before they graduate.
I’ve had a lot of success doing it that way at Fog Creek. The process starts every September, when I start using all my resources to track down the best computer science students in the country. I send letters to a couple of hundred Computer Science departments. I track down lists of CS majors who are, at that point, two years away from graduating (usually you have to know someone in the department, a professor or student, to find these lists). Then I write a personal letter to every single CS major that I can find. Not email, a real piece of paper on Fog Creek letterhead, which I sign myself in actual ink. Apparently this is rare enough that it gets a lot of attention. I tell them we have internships and personally invite them to apply. I send email to CS professors and CS alumni, who usually have some kind of CS-majors mailing list that they forward it on to.
Eventually, we get a lot of applications for these internships, and we can have our pick of the crop. In the last couple of years I’ve gotten 200 applications for every internship. We’ll generally winnow that pile of applications down to about 10 (per opening) and then call all those people for a phone interview. Of the people getting past the phone interview, we’ll probably fly two or three out to New York for an in-person interview.
By the time of the in-person interview, there’s such a high probability that we’re going to want to hire this person that it’s time to launch into full-press recruitment. They’re met at the airport here by a uniformed limo driver who grabs their luggage and whisks them away to their hotel, probably the coolest hotel they’ve ever seen in their life, right in the middle of the fashion district with models walking in and out at all hours and complicated bathroom fixtures that are probably a part of the permanent collection of the Museum of Modern Art, but good luck trying to figure out how to brush your teeth. Waiting in the hotel room, we leave a hospitality package with a T-shirt, a suggested walking tour of New York written by Fog Creek staffers, and a DVD documentary of the 2005 summer interns. There’s a DVD player in the room so a lot of them watch how much fun was had by previous interns.
After a day of interviews, we invite the students to stay in New York at our expense for a couple of days if they want to check out the city, before the limo picks them up at their hotel and takes them back to the airport for their flight home.
The first time you try to fill an open position, if you’re like most people, you place some ads, maybe browse around the large online boards, and get a ton of resumes.
As you go through them, you think, “hmm, this might work,” or, “no way!” or, “I wonder if this person could be convinced to move to Buffalo.” What doesn’t happen, and I guarantee this, what never happens is that you say, “wow, this person is brilliant! We must have them!” In fact you can go through thousands of resumes, assuming you know how to read resumes, which is not easy, and I’ll get to that on Friday, but you can go through thousands of job applications and quite frankly never see a great software developer. Not a one.
Here is why this happens.
The great software developers, indeed, the best people in every field, are quite simply never on the market.
The average great software developer will apply for, total, maybe, four jobs in their entire career.
The great college graduates get pulled into an internship by a professor with a connection to industry, then they get early offers from that company and never bother applying for any other jobs. If they leave that company, it’s often to go to a startup with a friend, or to follow a great boss to another company, or because they decided they really want to work on, say, Eclipse, because Eclipse is cool, so they look for an Eclipse job at BEA or IBM and then of course they get it because they’re brilliant.
If you’re lucky, if you’re really lucky, they show up on the open job market once, when, say, their spouse decides to accept a medical internship in Anchorage and they actually send their resume out to what they think are the few places they’d like to work at in Anchorage.
But for the most part, great developers (and this is almost a tautology) are, uh, great, (ok, it is a tautology), and, usually, prospective employers recognize their greatness quickly, which means, basically, they get to work wherever they want, so they honestly don’t send out a lot of resumes or apply for a lot of jobs.
Does this sound like the kind of person you want to hire? It should.
The corollary of that rule—the rule that the great people are never on the market—is that the bad people—the seriously unqualified—are on the market quite a lot. They get fired all the time, because they can’t do their job. Their companies fail—sometimes because any company that would hire them would probably also hire a lot of unqualified programmers, so it all adds up to failure—but sometimes because they actually are so unqualified that they ruined the company. Yep, it happens.
These morbidly unqualified people rarely get jobs, thankfully, but they do keep applying, and when they apply, they go to Monster.com and check off 300 or 1000 jobs at once trying to win the lottery.
Numerically, great people are pretty rare, and they’re never on the job market, while incompetent people, even though they are just as rare, apply to thousands of jobs throughout their career. So now, Sparky, back to that big pile of resumes you got off of Craigslist. Is it any surprise that most of them are people you don’t want to hire?
Astute readers, I expect, will point out that I’m leaving out the largest group yet, the solid, competent people. They’re on the market more than the great people, but less than the incompetent, and all in all they will show up in small numbers in your 1000 resume pile, but for the most part, almost every hiring manager in Palo Alto right now with 1000 resumes on their desk has the same exact set of 970 resumes from the same minority of 970 incompetent people that are applying for every job in Palo Alto, and probably will be for life, and only 30 resumes even worth considering, of which maybe, rarely, one is a great programmer. OK, maybe not even one. And figuring out how to find those needles in a haystack, we shall see, is possible but not easy.
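The arithmetic behind that skew can be sketched with a toy model. Every number below is an illustrative assumption, not data from the article; only the structure of the argument (great and incompetent developers are equally rare, but apply at wildly different rates) comes from the text.

```python
# Toy model of why resume piles skew toward weak candidates.
# Assumption: great and incompetent developers are equally rare,
# but apply for very different numbers of jobs over a career.
GREAT_DEVS = 1_000          # hypothetical population of great developers
INCOMPETENT_DEVS = 1_000    # equally rare, per the article
APPS_PER_GREAT = 4          # a great developer applies maybe four times, ever
APPS_PER_INCOMPETENT = 600  # hundreds of applications per weak candidate

great_apps = GREAT_DEVS * APPS_PER_GREAT
bad_apps = INCOMPETENT_DEVS * APPS_PER_INCOMPETENT

share = bad_apps / (great_apps + bad_apps)
print(f"Share of applications from weak candidates: {share:.1%}")
# With these assumptions, over 99% of the pile comes from the weak
# candidates, even though the two populations are the same size.
```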
Can I get them anyway?
Yes!
Well, Maybe!
Or perhaps, It Depends!
Instead of thinking of recruiting as a “gather resumes, filter resumes” procedure, you’re going to have to think of it as a “track down the winners and make them talk to you” procedure.
I have three basic methods for how to go about this:
- Go to the mountain
- Internships
- Build your own community*
You can probably come up with your own ideas, too. I’m just going to talk about three that worked for me.
To the mountain, Jeeves!
Think about where the people you want to hire are hanging out. What conferences do they go to? Where do they live? What organizations do they belong to? What websites do they read? Instead of casting a wide net with a job search on Monster.com, use the Joel on Software job board and limit your search to the smart people that read this site. Go to the really interesting tech conferences. Great Mac developers will be at Apple’s WWDC. Great Windows programmers will be at Microsoft’s PDC. There are a bunch of open source conferences, too.
Look for the hot new technology of the day. Last year it was Python; this year it’s Ruby. Go to their conferences where you’ll find early adopters who are curious about new things and always interested in improving.
Slink around in the hallways, talk to everyone you meet, go to the technical sessions and invite the speakers out for a beer, and when you find someone smart, BANG!—you launch into full-fledged flirt and flattery mode. “Ooooh, that’s so interesting!” you say. “Wow, I can’t believe you’re so smart. And handsome too. Where did you say you work? Really? There? Hmmmmmmm. Don’t you think you could do better? I think my company might be hiring…”
The corollary of this rule is to avoid advertising on general-purpose, large job boards. One summer, I inadvertently advertised our summer internships using MonsterTRAK, which offered the option to pay a little extra to make the internship visible to students at every school in the USA. This resulted in literally hundreds of resumes, not one of which made it past the first round. We ended up spending a ton of money to get a ton of resumes that stood almost no chance at finding the kind of people we wanted to hire. After a few days of this, the very fact that MonsterTRAK was the source of the resume made me think the candidate was probably not for us. Similarly, when Craigslist first started up and was really just visited by early-adopters in the Internet industry, we found great people by advertising on Craigslist, but today, virtually everyone who is moderately computer-literate uses it, resulting in too many resumes with too low of a needle-to-haystack ratio.
Internships
One good way to snag the great people who are never on the job market is to get them before they even realize there is a job market: when they’re in college.
Some hiring managers hate the idea of hiring interns. They see interns as unformed and insufficiently skilled. To some extent, that’s true. Interns are not as experienced as experienced employees (no. Really?!). You’re going to have to invest in them a little bit more and it’s going to take some time before they’re up to speed. The good news about our field is that the really great programmers often started programming when they were 10 years old. And while everyone else their age was running around playing “soccer” (this is a game that many kids who can’t program computers play that involves kicking a spherical object called a “ball” with their feet (I know, it sounds weird)), they were in their dad’s home office trying to get the Linux kernel to compile. Instead of chasing girls in the playground, they were getting into flamewars on Usenet about the utter depravity of programming languages that don’t implement Haskell-style type inference. Instead of starting a band in their garage, they were implementing a cool hack so that when their neighbor stole bandwidth over their open-access WIFI point, all the images on the web appeared upside-down. BWA HA HA HA HA!
So, unlike, say, the fields of law or medicine, over here in software development, by the time these kids are in their second or third year in college they are pretty darn good programmers.
Pretty much everyone applies for one job: their first one, and most kids think that it’s OK to wait until their last year to worry about this. And in fact most kids are not that inventive, and will really only bother applying for jobs where there is actually some kind of on-campus recruiting event. Kids at good colleges have enough choices of good jobs from the on-campus employers that they rarely bother reaching out to employers that don’t bother to come to campus.
You can either participate in this madness, by recruiting on campus, which is a good thing, don’t get me wrong, or you can subvert it, by trying to get great kids a year or two before they graduate.
I’ve had a lot of success doing it that way at Fog Creek. The process starts every September, when I start using all my resources to track down the best computer science students in the country. I send letters to a couple of hundred Computer Science departments. I track down lists of CS majors who are, at that point, two years away from graduating (usually you have to know someone in the department, a professor or student, to find these lists). Then I write a personal letter to every single CS major that I can find. Not email, a real piece of paper on Fog Creek letterhead, which I sign myself in actual ink. Apparently this is rare enough that it gets a lot of attention. I tell them we have internships and personally invite them to apply. I send email to CS professors and CS alumni, who usually have some kind of CS-majors mailing list that they forward it on to.
Eventually, we get a lot of applications for these internships, and we can have our pick of the crop. In the last couple of years I’ve gotten 200 applications for every internship. We’ll generally winnow that pile of applications down to about 10 (per opening) and then call all those people for a phone interview. Of the people getting past the phone interview, we’ll probably fly two or three out to New York for an in-person interview.
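A minimal sketch of that funnel, using the numbers above plus an assumed one-in-three in-person pass rate:

```python
# Back-of-the-envelope internship recruiting funnel.
# The pass rate is an assumption; the other numbers are from the text.
applications = 200   # applications per internship opening
phone_screens = 10   # winnowed down from the applications
flyouts = 3          # flown to New York after the phone screen
pass_rate = 1 / 3    # assumed: about one in three in-person candidates passes

offers = flyouts * pass_rate
print(f"Applications per offer: {applications / offers:.0f}")  # 200
```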
By the time of the in-person interview, there’s such a high probability that we’re going to want to hire this person that it’s time to launch into full-press recruitment. They’re met at the airport here by a uniformed limo driver who grabs their luggage and whisks them away to their hotel, probably the coolest hotel they’ve ever seen in their life, right in the middle of the fashion district with models walking in and out at all hours and complicated bathroom fixtures that are probably a part of the permanent collection of the Museum of Modern Art, but good luck trying to figure out how to brush your teeth. Waiting in the hotel room, we leave a hospitality package with a T-shirt, a suggested walking tour of New York written by Fog Creek staffers, and a DVD documentary of the 2005 summer interns. There’s a DVD player in the room so a lot of them watch how much fun was had by previous interns.
After a day of interviews, we invite the students to stay in New York at our expense for a couple of days if they want to check out the city, before the limo picks them up at their hotel and takes them back to the airport for their flight home.
Even though only about one in three applicants who make it to the in-person interview stage passes all our interviews, it’s really important that the ones that do pass have a positive experience. Even the ones that don’t make it go back to campus thinking we’re a classy employer and tell all their friends how much fun they had staying in a luxury hotel in the Big Apple, which makes their friends apply for an internship the next summer, if only for the chance at the trip.
During the summer of the internship itself, the students generally start out thinking, “ok, it’s a nice summer job and some good experience and maybe, just maybe, it’ll lead to a full-time job.” We’re a little bit ahead of them. We’re going to use the summer to decide if we want them as a full-time employee, and they’re going to use the summer to decide if they want to work for us.
So we give them real work. Hard work. Our interns always work on production code. Sometimes they’re working on the coolest new stuff in the company, which can make the permanent employees a little jealous, but that’s life. One summer we had a team of four interns build a whole new product from the ground up. That internship paid for itself in a matter of months. Even when they’re not building a new product, they’re working on real, shipping code, with some major area of functionality that they are totally, personally responsible for (with experienced mentors to help out, of course).
And then we make sure they have a great time. We host parties and open houses. We get them free housing in a rather nice local dorm where they can make friends from other companies and schools. We have some kind of extra-curricular activity or field trip every week: Broadway musicals (this year they went crazy about Avenue Q), movie openings, museum tours, a boat ride around Manhattan, a Yankees game, and believe it or not one of this year’s favorite things was a trip to Top of the Rock. I mean, it’s just a tall building where you go out on the roof in the middle of Manhattan. You wouldn’t think it would be such an awe-inspiring experience. But it was. A few Fog Creek employees go along on each activity, too.
At the end of the summer, there are always a few interns who convinced us that they are the truly great kinds of programmers that we just have to hire. Not all of them, mind you—some are merely great programmers that we are willing to pass on, and others would be great somewhere else, but not at Fog Creek. For example we’re a fairly autonomous company without a lot of middle management, where people are expected to be completely self-driven. Historically it has happened a couple of times where a summer intern would be great in a situation where they had someone to guide them, but at Fog Creek they wouldn’t get enough direction and would flounder.
Anyway, for the ones we really want to hire, there’s no use in waiting. We make an early offer for a full time job, conditional on their graduating. And it’s a great offer. We want them to be able to go back to school, compare notes with their friends, and realize that they’re getting a higher starting salary than anyone else.
Does this mean we’re overpaying? Not at all. You see, the average first year salary has to take into account a certain amount of risk that the person won’t work out. But we’ve already auditioned these kids, and there’s no risk that they won’t be great. We know what they can do. So when we hire them, we have more information about them than any other employer who has only interviewed them. That means we can pay them more money. We have better information, so we’re willing to pay more than employers without that information.
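That pricing argument is just expected value. Here is a sketch with entirely invented dollar figures and probabilities; only the structure (better information shifts the expected value, which creates room to pay more) follows the text.

```python
# Why better information justifies a higher offer: an expected-value toy.
# All dollar figures and probabilities below are invented for illustration.
value_if_great = 150_000   # value a great first-year hire produces
value_if_not = 40_000      # value produced if the hire doesn't work out

def expected_value(p_great):
    """Expected first-year value of a hire who is great with probability p_great."""
    return p_great * value_if_great + (1 - p_great) * value_if_not

print(expected_value(0.5))    # interview-only employer: 95000.0
print(expected_value(0.95))   # employer who auditioned the intern: ~144500
# The gap between the two expected values is the room the better-informed
# employer has to raise its offer without overpaying.
```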
If we’ve done our job right, and we usually have, by this point the intern completely gives up and accepts our offer. Sometimes it takes a little more persuading. Sometimes they want to leave their options open, but the outstanding offer from Fog Creek ensures that the first time they have to wake up at 8:00am and put on a suit for an interview with Oracle, when the alarm goes off, there’s a good chance that they’ll say “why the heck am I getting up at 8:00am and putting on a suit for an interview with Oracle when I already have an excellent job waiting for me at Fog Creek?” And, my hope is, they won’t even bother going to that interview.
By the way, before I move on, I need to clarify something about internships in computer science and software development. In this day and age, in this country, it is totally expected that these are paid internships, and the salaries are usually pretty competitive. Although unpaid internships are common in other fields from publishing to music, we pay $750 a week, plus free housing, plus free lunch, plus free subway passes, not to mention relocation expenses and all the benefits. The dollar amount is a little bit lower than average but it includes the free housing so it works out being a little bit better than average. I thought I’d mention that because every time I’ve talked about internships on my website somebody inevitably gets confused and thinks I’m taking advantage of slave labor or something. You there—young whippersnapper! Get me a frosty cold orange juice, hand-squeezed, and make it snappy!
An internship program creates a pipeline for great employees, but it’s a pretty long pipeline, and a lot of people get lost along the way. We basically calculate we’re going to have to hire two interns for every full-time employee that we get out of it, and if you hire interns with one year left in school, there’s still a two year pipeline between when you start hiring and when they show up for their first day of full time work. That means we hire just about as many interns as we can physically fit in our offices each summer. The first three summers, we tried to limit our internship program to students with one year left in school, but this summer we finally realized that we were missing out on some great younger students so we opened the program to students in any year in college. Believe it or not, I’m even trying to figure out how to get high school kids in here, maybe setting up computers after school for college money, just to start to build a connection with the next generation of great programmers, even if it becomes a six year pipeline. I have a long horizon.
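The pipeline math in that paragraph can be made concrete. The hiring target below is hypothetical; the two-interns-per-hire ratio and the two-year delay are as stated.

```python
# Intern pipeline arithmetic from the paragraph above.
interns_per_fulltime_hire = 2        # stated: two interns per full-timer
target_fulltime_hires_per_year = 3   # hypothetical hiring goal

interns_per_summer = interns_per_fulltime_hire * target_fulltime_hires_per_year
print(interns_per_summer)  # 6

# A student hired with one year of school left starts recruiting in
# September, interns the next summer, and shows up full-time after
# graduating a year later.
years_of_school_left = 1
pipeline_years = years_of_school_left + 1  # outreach-to-first-day delay
print(pipeline_years)  # 2
```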
Build the community (*hard)
The idea here is to create a large community of like-minded smart developers who cluster around your company, somehow, so you have an automatic audience to reach out to every time you have an opening.
This is, to tell the truth, how we found so many of our great Fog Creek people: through my personal website, the one you’re reading right now. Major articles on this site can be read by as many as a million people, most of them software developers in some capacity. With a large, self-selecting audience, whenever I mention that I’m looking for someone on the home page, I’ll usually get a pretty big pile of very good resumes.
This is that category with the asterisk that means “hard,” since I feel like I’m giving you advice that says, “to win a beauty pageant, (a) get beautiful, and (b) enter the pageant.” That’s because I’m really not sure why or how this site became so popular or why the people who read it are the best software developers.
I really wish I could help you more here. Derek Powazek wrote a good book on the subject (Design for Community). A lot of companies tried various blogging strategies and unfortunately a lot of them failed to build up any kind of audience, so all I can say is that what worked for us may or may not work for you and I’m not sure what you can do about it. I did just open a job board on the site (jobs.joelonsoftware.com) where, for $350, you can list a job that Joel on Software readers will see.
Employee referrals: may be slippery when wet
The standard bit of advice on finding great software developers is to ask your existing developers. The theory is, gosh, they’re smart developers, they must know other smart developers.
And they might, but they also have very dear friends who are not very good developers, and there are about a million land mines in this field, so the truth is I generally consider the idea of employee referrals to be one of the weakest sources of new hires.
One big risk, of course, is non-compete agreements. If you didn’t think these mattered, think about the case of Crossgain, which had to fire a quarter of its employees, all ex-Microsoft, when Microsoft threatened them with individual lawsuits. No programmer in their right mind should ever sign a non-compete agreement, but most of them do because they can never imagine that it would be enforced, or because they are not in the habit of reading contracts, or because they already accepted the employment offer and moved their families across the country and the first day of work is the first time they’ve seen this agreement and it’s a little bit too late to try to negotiate it. So they sign, but this is one of the slimiest practices of employers and they are often enforceable and enforced.
The point being, non-compete agreements may mean that if you rely too heavily on referrals and end up hiring a block of people from the same ex-employer, which is where your employees know the other star programmers from in the first place, you’re taking a pretty major risk.
Another problem is that if you have any kind of selective hiring process at all, when you ask your employees to find referrals, they’re not going to even consider telling you about their real friends. Nobody wants to persuade their friends to apply for a job at their company only to get rejected. It sort of puts a damper on the friendship.
Since they won’t tell you about their friends and you may not be able to hire the people they used to work with, what’s left is not very many potential referrals.
But the real problem with employee referrals is what happens when recruiting managers with a rudimentary understanding of economics decide to offer cash bonuses for these referrals. This is quite common. The rationale goes like this: it can cost $30,000 to $50,000 to hire someone good through a headhunter or outside recruiter. If we can pay our employees, say, a $5000 bonus for every hire they bring in, or maybe an expensive sports car for every 10 referrals, or whatever, think how much money that will save? And $5000 sounds like a fortune to a salaried employee, because it is. So this sounds like a win-win all-around kind of situation.
The trouble is that suddenly you can see the little gears turning, and employees start dragging in everyone they can think of for interviews, and they have a real strong incentive to get these people hired, so they coach them for the interview, and Quiet Conversations are held in conference rooms with the interviewers, and suddenly your entire workforce is trying to get you to hire someone’s useless college roommate.
And it doesn’t work. ArsDigita got a lot of publicity for buying a Ferrari and putting it in the parking lot and announcing that anyone who got 10 referrals could have it. Nobody ever got close, the quality of new hires went down, and the company fell apart, but probably not because of the Ferrari, which, it turns out, was rented, and not much more than a publicity stunt.
When a Fog Creek employee suggests someone that might be perfect to work for us, we’ll be willing to skip the initial phone screen, but that’s it. We still want them going through all the same interviews and we maintain the same high standards.
http://www.joelonsoftware.com/articles/FindingGreatDevelopers.html
Monday, December 10
Mind Mapping in Software Testing – Ways to Make Testing More Fun!
As we all know, visual aids are more powerful than any other mode of learning. It has been proven many times that people remember a creative visual aid better than material learned the traditional way.
Usually we see people explaining presentations by chalking lines, circles, and squares on a board, or through PowerPoint slides.
But have we ever thought of representing them in a more creative way? Have we ever thought of making them more colorful?
If not, please read the article below to learn how to present your ideas more creatively.
What is Mind Mapping?
A mind map is a graphical representation of ideas and concepts. It is a creative and logical way of advanced note-taking using symbols, colors, shapes, words, lines, and images. It helps you structure information, understand requirements better, analyze them, and cover the data comprehensively; moreover, it's fun!
Why Is a Mind Map Required?
When we already have many conventional methods, why do we need a mind map? And how is it different from a concept map? A mind map:
- Increases creativity.
- Makes it simple to implement an idea creatively.
- Is very flexible and easy to maintain.
- Provides more coverage.
- Puts all the data in one place (no need to visit different portals every time).
- Can be presented to management without any hurdles or confusion.
- Lets you mark different areas in different ways to make the map more attractive.
A mind map is not limited to particular problems or ideas. You can create a map for any idea you have; you just need a good idea and intuitive knowledge of the subject. Common uses include:
- Problem solving
- Structural representations
- Team planning
- Condensing material into compact and effective format
- Graphing team activity.
Mind Map in Software Testing
Testing is a huge area of ideas and creativity. Every phase of testing has its own methods and terminology. It is up to the individual where to apply mind maps in software testing. It is always advisable to have a good understanding of, and to do the groundwork on, the internal branches of the testing phase you plan to chart, and to collect all those thoughts in one place. The few examples below should help you frame your ideas.
Work Assignment on Software Project – Mind Map
This map covers the total work assigned to you for one release and the data you need to collect to map your ideas. A simple sample:
- Start with the release name and year (e.g., June'12 Major Release).
- Collect all requirements assigned to you (e.g., CRs, SRs, ITRs).
- Collect the requirement numbers.
- Collect the requirement names and the program under which each requirement falls.
- Collect the charge codes provided for each requirement.
- Collect the developer, development lead, and development manager names (this helps you reach the development team when you face an issue).
- Similarly, collect the testing team details, so you don't have to hit internal websites every time someone asks you for them.
- Collect analyst details, for getting clarification on your requirements.
- Collect the iteration details (start date, end date, number) for each requirement.
- Collect all the links and credentials needed to access these details.
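Before drawing the map in a tool, it can help to capture the same branches as plain data. A minimal sketch (not from the article; the release, requirement, and team names are hypothetical placeholders):

```python
# One way to capture the work-assignment mind map as a nested dict
# before laying it out visually in a mind-mapping tool.
release_map = {
    "June'12 Major Release": {
        "Requirements": {
            "SR12345": {
                "program": "Billing",     # hypothetical program name
                "charge_code": "CC-001",  # hypothetical charge code
                "iteration": 1,
            },
        },
        "Dev team": ["dev lead", "dev manager"],
        "Test team": ["tester 1", "tester 2"],
        "Links and Credentials": {"wiki": "https://example.com/wiki"},
    }
}

def print_map(node, depth=0):
    """Walk the tree and print each branch with indentation."""
    for key, value in node.items():
        if isinstance(value, dict):
            print("  " * depth + str(key))
            print_map(value, depth + 1)
        elif isinstance(value, list):
            print("  " * depth + str(key))
            for item in value:
                print("  " * (depth + 1) + str(item))
        else:
            print("  " * depth + f"{key}: {value}")

print_map(release_map)
```

The nesting mirrors the branches of the map, so adding a requirement or a team member is just adding a key.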
A small analysis of this mind map:
- The requirement square has a small pen-and-book symbol, indicating that it has notes; move the cursor over the requirement to see those notes at the bottom.
- Requirement numbers carry flags indicating severity; here the red flag marks a critical requirement, in iteration number 1.
- See the graphical hyperlink joining 'Dev team' and 'SR12345': it means this dev team developed that requirement.
- One more graphical link joins 'Tester' to 'SR12345': these testers are responsible for testing that requirement.
- Also notice the local hyperlink symbol (green arrow) in the 'Links and Credentials' square, connecting 'Links and Credentials' to Requirements; clicking it redirects you to the 'Requirements' square.
Traceability is a very important concept from a testing perspective. It maps requirements to test cases via a test traceability matrix, through which we ensure that all the testable functionality of the application has been covered.
Traceability between requirements and other downstream components, such as tests, tasks, teams, and milestones, can also be captured in a mind map.
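The traceability matrix idea can be sketched in a few lines. This is a minimal illustration with hypothetical requirement and test-case IDs, not the article's tool:

```python
# Build a requirements-to-test-cases traceability matrix and flag
# any requirement with no covering test.
requirements = ["SR12345", "SR12346", "SR12347"]
test_cases = {
    "TC-01": ["SR12345"],             # each test case lists the reqs it covers
    "TC-02": ["SR12345", "SR12346"],
}

# Invert the mapping: requirement -> list of covering test cases.
coverage = {req: [] for req in requirements}
for tc, covered in test_cases.items():
    for req in covered:
        coverage[req].append(tc)

uncovered = [req for req, tcs in coverage.items() if not tcs]
print("Uncovered requirements:", uncovered)
```

Requirements that come back with an empty list are exactly the gaps the traceability matrix exists to expose.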
Here is a simple example we mapped using a mind map tool. This is how I mapped my requirements; again, it depends on the user how he or she assigns values to the symbols.
Similarly, mind maps can be used in any phase of testing. A few more scenarios are listed here; you can try them in your company.
- Test case creation from Use case / Requirements.
- General report management.
- Automation test script management.
- Team management.
- Daily or weekly meetings.
Tony Buzan (the inventor of mind mapping) suggests seven steps for making a successful mind map.
Mind Mapping Examples in Software Testing
Please access the links below for more testing mind maps:
- Example 1 – Test Planning using mind map
- Example 2 – Software Testing Interactive Mind map
- Example 3 – Software testing types mind map
Free Mind Mapping Software
Many freeware mind mapping tools are available on the market. You can try any mind map tool that works for your ideas. A few free tools I'm familiar with are listed below.
http://www.softwaretestinghelp.com/mind-mapping-software-testing/
Test, test and test again
Should testing drive development or development drive testing?
Almost no-one disagrees with the idea of testing, writes David Norfolk, but many people fail to follow an uncompromising test-centric process. Recently, I had the chance to ask Richard Collins, Development Strategist at a specialist software vendor, why he believes that test-driven development is the way to build better software. Interestingly, most of Richard's ideas probably feature in Comp Sci courses, or did when I did one, which isn't to say Comp Sci grads remember those parts. However, it is good to have it confirmed that "good practice" isn't just dull theory used to pass exams, but actually helps to keep a real software company in business.
Over to you Richard:
Well, would you trust a Comp Sci grad?
Developers are special; special in terms of being outside the usual economic boundaries that constrain the rest of us. I'm thinking particularly of software engineers with a Computer Science background (as opposed to Physics majors who seem to understand which side of a piece of buttered bread to lick).
Working with Computer Science grads in a variety of areas and sectors I've found that they can be intensely competitive [they can be pretty bright too – Ed]; however, when it comes to competition at company level they are liable to hand over the crown jewels to even a half-witted industrial spy. There's not a lot of 'Them and Us' in the average developer mindset [I might disagree with that in part; sometimes, "them" is anybody in the business management hierarchy and "us", as Richard goes on to point out, is anyone in IT; but that situation is often a symptom of poor management – Ed].
If you work in the games sector or in building office desktop applications, say, then you may have a strong understanding that if your output is buggy or non-intuitive to use, it simply won't sell.
But where the end user is a programmer, someone who needs a tool to make a tool, the temptation on the vendors' part is often to assume, "Hey we're all smart guys. You're going to have some fun getting this sucker to work properly. In fact I wouldn't want to patronise you by assuming that you'd need any help in making it work".
Paying customers
Why should software developers be treated as anything other than paying customers? If they're writing commercial code then they have some of the toughest deadlines around, with on-time bonuses attached. Is there something funny with the colour of their money?
The main excuse for not treating them as "real" customers is cost. If test engineering is left out, for example – and a great way to leave it out is to call it QA – then monthly salary bills can be almost halved [I'm not actually sure that this approach is unique to software engineering tools vendors - Ed]. Many companies enjoy this apparent 'cost benefit', only to find it's no benefit at all. In fact, it turns up as an unquantifiable cost: poor reputation, which is likely to cost you more in the long-run than the "overheads" of rigorous test engineering.
Software tools are generally more subject to word-of-mouth (excited recommendation and its opposite, dire warning) than any other branch of IT. It's partly the collegiate way of thinking amongst developers – they have a need to feel part of a cutting-edge peer group and therefore to share information – and it's partly the flipside of this: the relative isolation and lonely nature of knocking out code in a language no-one speaks aloud.
So sending a link, with "Check this out, it's awesome!" to a colleague is then the most natural way to do three things in one: a) make a friend, b) solicit a reciprocal recommendation, and last but not least, c) assert 'insider' status.
Reputation risk
The way for a vendor to ensure that its reputation is (at the very least) untarnished on this word-of-mouth circuit is for it to employ sufficient test engineering talent for customers not to end up feeling like they're doing the vendor's testing for it. Irrespective of the function and ingenuity of a "cool tool", if it hasn't been crashed at high speed a thousand times to see which bits fly off or get jammed, then customers are never going to develop a thoughtless dependence on the product. Being "taken for granted" is, after all, just about the very best a brand can achieve.
However, moving up a gear from 'at least it doesn't crash your machine' and into the positive recommendation space requires more than just keeping an eye on the bugs. It means that at the very beginning, at the back-of-envelope stage of any project, someone important/respected has asked the question: "How do we design this so that bugs, which are probably inevitable, at least have nowhere to hide; anything less wouldn't be fair to our customers. How do we design software so that it is completely transparent?" This key design question distinguishes the small software vendors that rely on a reputation for reliability and resilience in order to compete with the big players in this game.
Jonathan Watts is a Lead Test Engineer at Red Gate and has been instrumental in ensuring that its developers design nothing that his test team can't get full access to at any time in the development process. It's a test-driven process, which is, basically, a local, closed-circuit equivalent of the Open Source mantra: 'release early and often'.
One tester per developer…
With a ratio of one test engineer for every development engineer it's hard for design flaws to stay flawed. Particularly as, from 'Day One' of the development cycle, Watts and his colleagues submit all new builds to an overnight hosedown of test data.
But it's more than process design – it's about how software comes apart. When tools are performing a relatively straightforward function - comparing two databases - the temptation is to cut straight to the chase and build the logic into the UI, a single testable entity. While this seems sensible and economical, the downside is that it's only testable after the whole thing is finished. Imagine trying to build a motorbike without being able to test that the parts are sound before you bolt them in place.
This process of 'relentless testing' also stretches beyond the development and test engineers. A team of usability engineers who work with customers and designers from the earliest phase of developing a new tool through to the polishing of the final button are also part of the process. They ensure that from the splash-screen onwards, the tool is completely self-explanatory and that it needs no more thinking about than using a pair of scissors.
This relentless focus on testing probably results in more time spent on making sure that internal team relationships are working as well as they should be. As mentioned earlier a software developer is quite often prone to thinking in a rather "community of geeks" kind of way, and has a hard time seeing the commercial wood for the trees of fascinating code.
You only have to compare Software Developers with Test Engineers and you see the difference instantly. The Test Engineer sees everything in terms of 'Them and Us'; 'Them' being the soft-hearted software developers and 'Us' being the hard-bitten bastids whose job it is to make them cry. And cry they do. I would too if I'd spent days coding up a spiffy new interface or regular expression, only to have the person sitting at the desk across from me break it in the first five minutes [I might even be driven to build it right in the first place – Ed]. It must be like having the Cousin from Hell come visiting on Christmas morning and grab your lovingly assembled F-117, with an evil glint in his eye; except that, in the test engineer's case, s/he also arrives with a lovingly packed toolbox of infernal instruments to help speed the disassembly along.
Along with the CruiseControl continuous-integration build rig and the equally widely used NUnit framework (used for automated regression testing), all the testers have their own preferred pliers, callipers and drills to hand: hard-wearing 'building site' tools with very specific jobs. The main 'using point' (and it's curious how often this is different from the selling point) is that the tools work first time and don't stop. Michelle Taylor, for example, is one of Jonathan Watts' fellow test engineers and uses on a regular basis: Xenu's LinkSleuth, PassMark's TestLog and StudioSpell, an add-in for Visual Studio from Keyoti. The sight of any of these can bring a developer out in hives.
Happy endings
There is a happy ending: typically, when you put these two attitudes in close proximity, the good money drives out the bad and a shared concern for durability prevails. Developers quickly learn to build stuff that defeats their colleagues' meanest efforts and a healthy quality arms race ensues [well, it does if management understands the people issues involved and sets suitable goals and rewards - Ed].
In many software companies, after unit testing, nothing happens until system testing begins. But there's a real opportunity to test earlier if you set things up as I recommend, because all products have an API that allows access for functionality testing without the UI being in place; and the earlier issues are found, the better the payback.
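The point about testing through a product's API before any UI exists can be sketched as follows. This uses Python's unittest rather than the NUnit/C# stack the article mentions, and the database-comparison function is a hypothetical stand-in for the product's core logic:

```python
import unittest

# Hypothetical stand-in for the product's core logic, exposed as a plain
# function (the "API") so it can be exercised long before any UI exists.
def compare_databases(db_a, db_b):
    """Return the table names present in one schema but not the other."""
    return set(db_a) ^ set(db_b)  # symmetric difference

class CompareDatabasesTest(unittest.TestCase):
    def test_identical_schemas_have_no_diff(self):
        self.assertEqual(
            compare_databases({"users", "orders"}, {"users", "orders"}),
            set())

    def test_missing_table_is_reported(self):
        self.assertEqual(
            compare_databases({"users", "orders"}, {"users"}),
            {"orders"})

# Run the suite programmatically, as an overnight build rig might.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CompareDatabasesTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the logic lives behind a callable API rather than inside the UI, checks like these can run unattended from day one of the development cycle.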
It all feeds into the bottom line. It's well known that fixing a problem at the design or requirements level saves far more money than fixing it later on. Investment at each stage turns into payback: a simpler, more usable product that is easier to test and therefore less likely to let customers down, which means customers recommend it to colleagues and a virtuous circle starts to turn.
http://www.theregister.co.uk/2007/01/08/test_test_red_gate/