If you are a tester or a QA executive, or you want to move into the testing field, see which of the following qualities describe you.
What makes a good test engineer?
A good test engineer has:
- A 'test to break' attitude.
- An ability to take the point of view of the customer.
- A strong desire for quality and an attention to detail.
- Tact and diplomacy, which are useful in maintaining a cooperative relationship with developers.
- An ability to communicate with both technical (developers) and non-technical (customers, management) people.
- Previous software development experience can also be helpful: it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming.
- Judgment skills, needed to assess the high-risk areas of an application on which to focus testing efforts when time is limited.
What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer.
Additionally:
- They must be able to understand the entire software development process and how it fits into the business approach and goals of the organization.
- Communication skills and the ability to understand various sides of issues are important.
- In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.
http://www.softwaretestinghelp.com/what-makes-a-good-test-engineer/
Tuesday, July 31
Smoke testing and sanity testing – Quick and simple differences
Despite the hundreds of web articles on smoke and sanity testing, many people are still confused by these terms and keep asking me about them. Here is a simple, understandable explanation that should clear up the confusion between smoke testing and sanity testing.
Here are the differences you can see:
SMOKE TESTING:
- Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are covered without going too deep into any of them.
- A smoke test is scripted, either using a written set of tests or an automated test
- A Smoke test is designed to touch every part of the application in a cursory way. It’s shallow and wide.
- Smoke testing is conducted to check whether the most crucial functions of a program work, without bothering with finer details (as in build verification).
- Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing.
SANITY TESTING:
- A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
- A sanity test is usually unscripted.
- A sanity test is used to determine whether a small section of the application is still working after a minor change.
- Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
- Sanity testing verifies whether the requirements are met or not, checking all features breadth-first.
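To make the distinction concrete, here is a minimal sketch in Python (pytest-style test functions; the `app` module and its functions are hypothetical placeholders, not a real API) contrasting a wide-and-shallow smoke suite with a narrow-and-deep sanity check:

# Hypothetical illustration: a wide-and-shallow smoke suite vs. a
# narrow-and-deep sanity check. The `app` module and its functions
# are placeholders, not a real API.
import app

# --- Smoke tests: touch every major area, but only superficially ---
def test_smoke_login_page_loads():
    assert app.load_page("/login") is not None

def test_smoke_search_returns_something():
    assert app.search("anything") is not None

def test_smoke_report_page_loads():
    assert app.load_page("/reports") is not None

# --- Sanity test: one area, checked in depth after a small change ---
def test_sanity_discount_calculation():
    # A recent fix touched discount logic; verify that area thoroughly.
    order = app.create_order(items=[("book", 2, 10.00)])
    app.apply_discount(order, code="SAVE10")
    assert order.subtotal == 20.00
    assert order.discount == 2.00
    assert order.total == 18.00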
http://www.softwaretestinghelp.com/smoke-testing-and-sanity-testing-difference/
Testing Jokes - Part 1
A group of managers were given the assignment of measuring the height of a flagpole. So they go out to the flagpole with ladders and tape measures and they’re struggling to get the correct measurement; dropping the tape measures and falling off the ladders.
A tester comes along and sees what they’re trying to do, walks over, pulls down the flagpole, lays it flat, measures it from end to end, gives the measurement to one of the managers and walks away.
After the tester is gone, one manager turns to another and laughs, “Isn’t that just like a tester? We’re looking for the height and he gives us the length.”
Damage Testing
The Aviation Department had a unique device for testing the strength of windshields on airplanes. The device was a gun that launched a dead chicken at a plane’s windshield at approximately the speed the plane flies. The theory was that if the windshield does not crack from the impact of the chicken, it will survive a real collision with a bird during flight.
The Railroad Department heard of this device and decided to use it for testing a windshield on a locomotive they were developing.
So the Railroad Department borrowed the device, loaded a chicken and fired at the windshield of the locomotive. The chicken not only shattered the windshield but also went right through and made a hole on the back wall of the engine cab – the unscathed chicken’s head popping out of the hole. The Railroad Department was stunned and contacted the Aviation Department to recheck the test to see if everything was done correctly.
The Aviation Department reviewed the test thoroughly and sent a report. The report consisted of just one recommendation and it read “Use a thawed chicken.”
A Tester’s Courage
The Director of a software company proudly announced that flight software developed by the company had been installed in an airplane, and the airline was offering free first flights to members of the company. "Who is interested?" the Director asked. Nobody came forward. Finally, one person volunteered. The brave Software Tester stated, "I will do it. I know that the airplane will not be able to take off."
Light Bulb
Question: How many testers does it take to change a light bulb?
Answer: None. Testers do not fix problems; they just find them.
Question: How many programmers does it take to change a light bulb?
Answer1: What’s the problem? The bulb at my desk works fine!
Answer2: None. That’s a hardware problem.
Source: http://softwaretestingfundamentals.com/software-testing-jokes/
Tags:
Jokes,
Humor,
Software Testing,
Testing
Location:
Tlaquepaque, JAL, Mexico
Monday, July 30
Difference between Internet Explorer 7 and Firefox
Is web testing your occupation? Have you observed these kinds of differences yourself? If so, then you can probably relate to this.
There are three major things that have been bothering me in terms of website interoperability testing between browsers. I'm talking about two browser giants: Internet Explorer 7 (IE7) and Firefox (FX).
1. Pixel Display
Sometimes, due to the browsers' different pixel-fetching, the arrangement of letters in a paragraph also differs. Some of the letters, especially where hyphenation is involved, move up or down as you shift from IE7 to FX.
1.1. Font Display
I guess this is probably due to the pixel display of the two different browsers. In Internet Explorer 7, you can see the "emboss-styled" texts up close. You can easily spot the difference if you are using Firefox's IE Tab Add-On. Upon switching from one tab to another (from IE7 to FX), you can see how the font's display changes from "embossed" to "un-embossed".
1.2. Horizontal and Vertical Spacing Display
This is, again, probably due to the pixel display of the two different browsers. As I mentioned above, opening the same site in two different browsers only seems to display the same buttons, frames, and image spaces. However, upon close inspection, you can easily spot the difference by using Firefox's IE Tab Add-On. For super-techies, you can see the difference by overlaying screenshots (e.g., in Photoshop CS3) or by manual screenshot pixel measurement to measure the exact distance or pixel difference.
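For what it's worth, the screenshot-overlay comparison can also be automated. Here is a minimal Python sketch using the Pillow imaging library (assuming you have already captured one screenshot per browser and that both images are the same size; the file names are just examples):

# Compare two browser screenshots pixel by pixel using Pillow.
# The file names are examples; capture the screenshots yourself first,
# and note that both images must have the same dimensions.
from PIL import Image, ImageChops

ie7 = Image.open("page_ie7.png").convert("RGB")
fx = Image.open("page_firefox.png").convert("RGB")

# Difference image: black wherever the two screenshots match exactly.
diff = ImageChops.difference(ie7, fx)

# Bounding box of all non-matching pixels (None means identical images).
bbox = diff.getbbox()
if bbox is None:
    print("Screenshots are pixel-identical.")
else:
    print(f"Rendering differs inside region {bbox}")
    diff.save("diff.png")  # inspect this difference image manually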
2. Loading Time
There is definitely a difference between the two browsers in terms of loading time. Since loading-time testing is included in our procedures, I was able to observe that Internet Explorer's loading time is much quicker than Firefox's. On average, there is around a 2-second difference per page, given that the page elements are behaving normally.
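One way to collect comparable numbers would be to drive each browser with Selenium WebDriver and time the page load. A rough Python sketch follows (the URL is a placeholder, browser drivers must be installed, and a modern WebDriver will not actually drive a 2007-era IE7, so treat this purely as an illustration of the measurement idea):

# Rough page-load timing sketch with Selenium WebDriver.
# driver.get() blocks until the document has loaded, so wall-clock
# time around it is a crude but comparable load-time measurement.
import time
from selenium import webdriver

URL = "http://example.com/"  # placeholder page under test

def time_page_load(make_driver, url, runs=3):
    timings = []
    for _ in range(runs):
        driver = make_driver()
        start = time.monotonic()
        driver.get(url)
        timings.append(time.monotonic() - start)
        driver.quit()
    return sum(timings) / len(timings)

print("Firefox avg seconds:", time_page_load(webdriver.Firefox, URL))
print("IE avg seconds:     ", time_page_load(webdriver.Ie, URL))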
3. CMS Compatibility
A Content Management System (CMS) is a program used to create a framework for the content of a website. Our company is one of those using this kind of tool, and one thing I observed is that some of the functions created (specifically buttons) don't work well with Firefox. During one of my test runs, I encountered non-functional buttons. I thought at first that this was one of our developers' mistakes, but later found out that it was due to the tool's incompatibility with Firefox. It was explained to me that some of the heavy scripts embedded in the CMS cannot be properly interpreted by Firefox. In IE7, however, things worked out just fine.
Now my question is, why can't they just follow strict standards (or at least come close)? I've read articles saying these two browsers have corresponding or equivalent add-ons and other features, so how come there is still a real difference in the things I've mentioned above? Following standards would make things much easier for everyone: users, developers, designers, and testers as well.
Thursday, July 26
Bug counts as key performance indicators (KPI) for testers
Every once in a while I meet testers who say their manager rates individual performance based on bug metrics. It is no secret that management is constantly looking at bug metrics. But bug numbers are generally a poor indication of any direct, meaningful measure, especially individual human performance. Yet some managers continue this horrible practice and even create fancy spreadsheets with all sorts of formulas to analyze bug data in relation to individual performance. Number of bugs reported, fix rates, severity, and other data points are tracked in a juvenile attempt to come up with some comparative performance indicator among testers. Perhaps this is because bug numbers are an easy metric to collect, or perhaps it is because management maintains the antiquated view that the purpose of testing is simply to find bugs!
Regardless of the reasons, using bug numbers as a direct measure of individual performance is ridiculous. There are simply too many variables in bug metrics to use these measures in any form of comparative analysis of performance. Even for a team of testers of equal skills, experience, and domain knowledge, there are several factors that affect the number of defects or defect resolutions, such as:
· Complexity – the complexity coefficient for a feature area under test impacts risk. For example, a feature with a high code complexity measure has higher risk and may have a greater number of potential defects as compared to a feature with a lower code complexity measure.
· Code maturity – a product or feature with a more mature code base may have fewer defects than a newer product or feature.
· Defect density – a new developer may inject more defects than an experienced developer. A developer who performs code reviews and unit tests will likely produce fewer defects in their area as compared to a developer who simply throws his or her code over the wall. Are defect density ratios used to normalize bug counts?
· Initial design – if the customer needs are not well understood, or if the requirements are not thought out before the code is written, then there will likely be lots of changes. Changes in code are more likely to produce defects as compared to 'original' code.
Attempting to use bug counts as performance indicators must also take into account the relative value of reported defects. For example, surely more severe issues such as data loss are given more weight compared to simple UI problems such as a misspelled word. And we all know that the sooner defects are detected, the cheaper they are in the grand scheme of things. So, defects reported earlier are certainly valued more than defects reported later in the cycle. Also, we all know that not all defects will be fixed. Some defects reported by testers will be postponed, some simply will not be fixed, and others may be resolved as "by design." A defect that the management team decides not to fix is still a defect! Just because the management team decides not to fix the problem doesn't totally negate the value of the bug.
The bottom line is that using bug metrics to analyze trends is useful, but using them to assess individual performance or comparative performance among testers is absurd. Managers who continue to use bug counts as performance indicators are simply lazy, or don't understand testing well enough to evaluate key performance indicators of professional testers.
http://blogs.msdn.com/b/imtesty/archive/2006/06/26/647628.aspx
Tuesday, July 24
In Defense of Logic Questions
Microsoft has a history
of asking logic questions in its interviews. Because of this, there
are many web sites talking about the questions and giving answers to
them. There is even an entire book dedicated to the subject called How Would You Move Mount Fuji?
There are those who believe strongly in their usefulness. There are
others who think they have no value. I fall into the first camp, but
only up to a point. There are two sorts of logic questions. One is
very useful, the other not. There are logic questions I will call
complete. These are the questions where everything needed to solve them
is presented in the problem description. There are also those I will
call incomplete which require knowledge beyond the scope of the problem
to solve.
Logic questions are useful when hiring for computer-related jobs because they represent the same sort of thought process required to program or test software. Computers are pure logic. They will do exactly what they are told to do and nothing more. They are unforgiving. They do not draw inferences. Close enough doesn't count. To make them do what you want, you have to be explicit. To understand why they break, you must comprehend where the logic broke down. Logic questions tend to exercise the same pathways in the mind. Being able to figure them out is a good indicator that someone can figure out software.
Logic questions get their poor reputation from the use of incomplete questions. These questions require you to "think outside the box." That may be good, but they don't really allow the interviewee to be creative. Instead, they look for one particular piece of information outside the box. Solving them is more about getting lucky than it is about applying logic.
Here is a good example of a bad logic question: Assume there is a room with three light bulbs in it. Outside the room is a light switch with 3 switches on it labeled A, B, and C. Once you enter the room, you will no longer be able to access the switches. How can you, upon entering the room, tell me which of the three switches controls each of the light bulbs?
Think about this question for a moment. Do you have the answer? With merely the given information, it is not possible to formulate the answer. Instead, one must start thinking about characteristics of the light bulb and switches. Can I take apart the switch? Can I somehow see into the room before opening the door? Is there something about the bulbs themselves that I can use to my advantage?
The answer is the following: Turn on switch A for 5 minutes or so. Now turn it off and turn on switch B. Enter the room. Upon entering, the lit bulb is clearly connected to B. Feel the other two bulbs. The one which is warm is attached to A. By process of elimination, the cool one is connected to C.
Is it really fair to fail someone in an interview for not thinking about the properties of light bulbs? Other questions require understanding of math, physics, geometry, etc. that, while the person may once have taken, probably aren't fresh in their mind. Assume that a candidate gets these questions wrong. What do you know? You know that they either are not a logical person or that they don't have a complete understanding of light bulbs. Unless the knowledge of light bulbs is critical to the job at hand, avoid the question. It doesn't provide useful information.
There is a better sort of logic question to use. These I call complete questions. They contain within them all the information one needs to divine the answer. The candidate needs only apply the rules of logic to the problem statement and they will succeed.
Here is an example of a simple, yet complete question: There is a river. You start on one side with a wolf, a pig, and a carrot. There is a raft which you can use to cross the river. Unfortunately, the raft is too small to hold more than 2 things (you being one of them) at a time. You need to get yourself and all three of the others to the opposite side to continue your journey. Unfortunately, without you present, the pig will eat the carrot and the wolf will eat the pig. How do you get all 3 to the other side?
If you are reading this, hopefully the right solution comes to mind fairly quickly. This question is a bit more simple than I would normally ask but it does demonstrate the point. The candidate merely needs to find the right combination of things and trips across the river to succeed. He doesn't need to understand the nature of pigs beyond the fact that they eat carrots. He doesn't need to recall vector math to calculate the path across the flowing river.
The answer is as follows: Take the pig across the river and leave it there. Now come back and get the wolf. Take him across. On the return trip, bring the pig back to the original side of the river. This time, take the carrot across and leave it safely with the wolf. Finally, go back for the pig and bring it across. All three are now on the other side of the river. Journey on.
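Incidentally, because the question is "complete," it can even be solved mechanically by a state-space search. Here is a small breadth-first-search sketch in Python (my own illustration, not part of the original post):

# Breadth-first search over raft-puzzle states: each state records which
# bank (0 = start, 1 = far side) holds you, the wolf, the pig, and the
# carrot. A state is unsafe if the pig is left with the carrot, or the
# wolf with the pig, while you are on the other bank.
from collections import deque

ITEMS = ("wolf", "pig", "carrot")

def unsafe(state):
    you, wolf, pig, carrot = state
    return (pig == carrot != you) or (wolf == pig != you)

def neighbors(state):
    you = state[0]
    positions = list(state)
    # Cross alone, or take exactly one item that is on your bank.
    for i in (None, 1, 2, 3):
        if i is not None and positions[i] != you:
            continue
        nxt = positions[:]
        nxt[0] = 1 - you
        if i is not None:
            nxt[i] = 1 - you
        yield tuple(nxt), ("alone" if i is None else ITEMS[i - 1])

def solve(start=(0, 0, 0, 0), goal=(1, 1, 1, 1)):
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for nxt, move in neighbors(state):
            if nxt not in seen and not unsafe(nxt):
                seen.add(nxt)
                queue.append((nxt, path + [move]))

print(solve())  # e.g. ['pig', 'alone', 'wolf', 'pig', 'carrot', 'alone', 'pig']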
Hopefully you can see and understand the distinction between the light bulb question and the pig question. With the first, it is ambiguous why the failure to answer happened. Failure to answer the second can only be a failure to apply the rules of logic correctly.
One point that should be made is that you should never judge someone on the failure to answer a single logic question. Each question requires a particular train of thought, and just because someone doesn't reach it doesn't mean that they are illogical. Sometimes their mind is so busy going down other paths that it never reaches the right one to solve the problem. In short, it is easy to get hung up on one approach to a problem. Whenever you ask a question such as those I advocate here, you should always have a second question handy. Only if someone fails both questions should you pass judgement. Missing one solution is understandable. Missing two is a sign that the individual in question may not be able to handle this sort of problem solving.
Much of the recent anti-logic-question backlash has, IMHO, been caused by failing to make the distinctions I make in this post. Logic questions, if used correctly, are an invaluable arrow in the quiver of an interviewer. Especially when interviewing someone for a non-coding position, the question can provide valuable information that is very difficult to glean otherwise.
http://blogs.msdn.com/b/steverowe/archive/2007/02/23/in-defense-of-logic-questions.aspx
Monday, July 23
We don't make the software you use, we make the software you use better....
BASF used a variation on the above as its corporate tagline. Michael Hunter, the technical lead for my test team, used this as part of his auto-signature for quite some time. It should be the motto for every software test/QA organization.
For those who don't know him, Adam Barr worked for Microsoft from 1990 to 2000 and wrote a book called Proudly Serving My Corporate Masters. He commented on a comment on a post I'd made about testing at Microsoft. Here are a couple of excerpts from his comments.
So what I am trying to argue in the book is that testers are just as important as developers, in fact they may be more important, but it's not because they are as good developers as the developers, it's because they are good testers! That should be reason enough to respect them. In fact I make the claim that Microsoft started out in the "era of the developer", moved to the "era of the program manager" around 1990 when Windows 3.0 shipped, and has now moved to the "era of the tester" -- except nobody realizes this, and it is hurting the company.
Anyway you can read the book (it's linked to online from my website if you don't want to buy a copy) to get the full argument, it's scattered throughout chapter 3, on pages 40-60. But please understand that I was trying to raise testers up to equality with dev and PM. I'm not sure what "He further rants negatively about testers especially in chapter 4" means but I don't recall doing any negative ranting about testers. I rant negatively about the ATTITUDE towards testers. The quote on p. 50 about "for a variety of reasons they are viewed as lower on the pecking order than program managers and developers" is not me bragging about the way things should be, it's me lamenting about the way they are.
Adam, thanks for clarifying your book's points. I think that Adam and I are in agreement about the way things were, that attitudes needed changing, and that testing is currently a bottleneck in delivering quality software. I believe that the company's views have changed significantly in the last 2-3 years. Bill Gates quote from May 2002: http://www.informationweek.com/story/IWK20020517S0011
INFORMATIONWEEK: When we were on Microsoft's campus in March, it became clear that some people there think they've been developing quality software for years; in other words, it's not a new concept to them. And we got a sense that a few people were a little bit defensive about it.
GATES: Well, let's separate out quality from security. Microsoft in terms of this quality stuff--we have as many testers as we have developers. And testers spend all their time testing, and developers spend half their time testing. We're more of a testing, a quality software organization than we're a software organization.
Now where Adam Barr and I may disagree: I want to hire people who are both great testers and great developers; I don't see these skills as mutually exclusive. I took my current job to start a new group 2 years ago. From day one we've worked to hire people who have strong testing experience or aptitude and are skilled developers. Given the resource constraints and the longevity of software servicing, as a manager I must pursue 100% test automation as aggressively as possible, and have the automation completed as close to feature completion as possible. To build the framework necessary to deliver on this goal, you have to have strong developers. It's simply not a simple task.
Think about it like this: students at school, or developers from other companies, have lots of knowledge of how to solve typical problems in software engineering. How to design a 2-tier or 3-tier data-bound application; how to draw Bézier curves; how to implement pixel shading support, etc. There are books for every aspect of software engineering. Need a sort? Just look it up. How about JPEG compression? Again, just look it up. I'm not saying that everything developers do is regurgitate code. There is room for lots of creativity. But there just aren't books or classes yet on how to write a test harness that can be shared by a development team and a QA team to run both unit tests and functional tests, and that runs in process and out of process with respect to the application being tested. How about designing software that automates the installation of operating systems and other prerequisites? How about building a verification system that can test that your Bézier curve is actually rendered correctly?
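Just to illustrate the kind of problem meant by a harness that runs in process and out of process, here is a toy Python sketch (my own illustration, not Microsoft's harness; a real harness also handles setup, reporting, distribution, and so on):

# Toy sketch of a harness that can run the same test function either
# in-process (fast, easy to debug) or out-of-process (isolated, so a
# crash in the code under test cannot take down the whole test run).
import multiprocessing
import queue
import traceback

def run_in_process(test_fn):
    try:
        test_fn()
        return True, None
    except Exception:
        return False, traceback.format_exc()

def _child(test_fn, result_queue):
    result_queue.put(run_in_process(test_fn))

def run_out_of_process(test_fn, timeout=60):
    result_queue = multiprocessing.Queue()
    child = multiprocessing.Process(target=_child, args=(test_fn, result_queue))
    child.start()
    child.join(timeout)
    if child.is_alive():              # hung test: kill the child process
        child.terminate()
        return False, "test timed out in child process"
    try:
        return result_queue.get(timeout=5)
    except queue.Empty:               # child crashed before reporting a result
        return False, f"child exited with code {child.exitcode}"

def test_addition():                  # trivial example test
    assert 2 + 2 == 4

if __name__ == "__main__":
    print("in-process:     ", run_in_process(test_addition))
    print("out-of-process: ", run_out_of_process(test_addition))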
That's why I believe that during the next decade, the hardest problems we have in software engineering are in the realm of testing. It may not be glamorous work; the code we write never gets put onto a single customer's computer, yet quality is still the single most important feature we ship in every product. If it works as expected, people will have the ability to love it (of course, it also has to do something they care about for them to actually love it); if it is buggy, people will hate it, no matter how many killer features are in the box. Getting the product to market in a timely fashion at that quality bar, and putting the product in a position to be sustained without continually pulling resources away from the next version, unless you have unlimited resources (hah!), requires you to have quality test automation. To implement that, of course, brings me full circle back to my point on hiring people who are both great testers and great developers.
http://blogs.msdn.com/b/adamu/archive/2004/07/23/192366.aspx
Loop Testing Technique - White-Box Testing
Loops are the cornerstone of most algorithms implemented in software. Loop testing is a white-box testing technique that focuses on the validity of loop constructs.
A) Simple loops: The following set of tests should be applied, where n is the maximum number of passes allowed through the loop:
1. Skip the loop entirely
2. Make only one pass through the loop
3. Make two passes through the loop
4. Make m passes through the loop, where m < n
5. Make n-1, n, and n+1 passes through the loop
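As an illustration, here is a minimal Python sketch of what those five test cases might look like for a made-up function whose loop makes at most n = 5 passes:

# Illustrative test cases for the simple-loop strategy. sum_first() loops
# over at most `limit` items, so the maximum number of passes is n = limit.
def sum_first(values, limit=5):
    total = 0
    for i, value in enumerate(values):
        if i >= limit:      # the loop never makes more than `limit` passes
            break
        total += value
    return total

N = 5                                        # maximum passes allowed
assert sum_first([]) == 0                    # 1. skip the loop entirely
assert sum_first([7]) == 7                   # 2. one pass through the loop
assert sum_first([1, 2]) == 3                # 3. two passes
assert sum_first([1, 2, 3]) == 6             # 4. m passes, m < n
assert sum_first([1] * (N - 1)) == N - 1     # 5. n-1 passes
assert sum_first([1] * N) == N               #    n passes
assert sum_first([1] * (N + 1)) == N         #    n+1 passes (extra item ignored)
print("all simple-loop tests passed")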
B) Nested loops:
1. Start with the innermost loop. Set all the other loops to their minimum values.
2. Conduct the simple-loop tests for the innermost loop while holding the iteration parameters (e.g., loop counters) of the outer loops at their minimum values. Add other tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all the outer loops at their minimum values and the other nested loops at their "typical" values.
4. Continue until all the loops have been tested.
C) Concatenated loops: These can be tested using the approach defined above for simple loops, as long as each loop is independent of the rest (if the counter of loop 1 is used as the initial value of loop 2, then the loops are not independent).
D) Unstructured loops: This class of loops should be redesigned to conform to structured programming constructs.
Thursday, July 19
Measuring testers by test metrics doesn't.
This one's likely to get a bit controversial. :)
There is an unfortunate tendency among test leads to measure the performance of their testers by the number of bugs they report.
As best as I’ve been able to figure out, the logic works like this:
Test Manager 1: “Hey, we want to have concrete metrics to help in the performance reviews of our testers. How can we go about doing that?”
Test Manager 2: “Well, the best testers are the ones that file the most bugs, right?”
Test Manager 1: “Hey that makes sense. We’ll measure the testers by the number of bugs they submit!”
Test Manager 2: “Hmm. But the testers could game the system if we do that – they could file dozens of bogus bugs to increase their bug count…”
Test Manager 1: “You’re right. How do we prevent that then? – I know, let’s just measure them by the bugs that are resolved “fixed” – the bugs marked “won’t fix”, “by design” or “not reproducible” won’t count against the metric.”
Test Manager 2: “That sounds like it’ll work, I’ll send the email out to the test team right away.”
Sounds good, right? After all, the testers are going to be rated by an absolute value based on the number of real bugs they find – not the bogus ones, but real bugs that require fixes to the product.
The problem is that this idea falls apart in reality.
Testers are given a huge incentive to find nit-picking bugs – instead of finding significant bugs in the product, they try to find the bugs that increase their number of outstanding bugs. And they get very combative with the developers if the developers dare to resolve their bugs as anything other than “fixed”.
So let’s see how one scenario plays out using a straightforward example:
My app pops up a dialog box with the following:
Plsae enter you password: _______________
Where the edit control is misaligned with the text.
Without a review metric, most testers would file a bug with a title of “Multiple errors in password dialog box” which then would call out the spelling error and the alignment error on the edit control.
They might also file a separate localization bug because there’s not enough room between the prompt and the edit control (separate because it falls under a different bug category).
But if the tester has their performance review based on the number of bugs they file, they now have an incentive to file as many bugs as possible. So the one bug morphs into two bugs – one for the spelling error, the other for the misaligned edit control.
This version of the problem is a total and complete nit – it’s not significantly more work for me to resolve one bug than it is to resolve two, so it’s not a big deal.
But what happens when the problem isn’t a real bug – remember, bugs that are resolved “won’t fix” or “by design” don’t count against the metric, so that the tester doesn’t flood the bug database with bogus bugs artificially inflating their bug counts.
Tester: “When you create a file when logged on as an administrator, the owner field of the security descriptor on the file’s set to BUILTIN\Administrators, not the current user”.
Me: “Yup, that’s the way it’s supposed to work, so I’m resolving the bug as by design. This is because NT considers all administrators as idempotent, so when a member of BUILTIN\Administrators creates a file, the owner is set to the group to allow any administrator to change the DACL on the file.”
Normally the discussion ends here. But when the tester’s going to have their performance review score based on the number of bugs they submit, they have an incentive to challenge every bug resolution that isn’t “Fixed”. So the interchange continues:
Tester: “It’s not by design. Show me where the specification for your feature says that the owner of a file is set to the BUILTIN\Administrators account”.
Me: “My spec doesn’t. This is the way that NT works; it’s a feature of the underlying system.”
Tester: “Well then I’ll file a bug against your spec since it doesn’t document this.”
Me: “Hold on – my spec shouldn’t be required to explain all of the intricacies of the security infrastructure of the operating system – if you have a problem, take it up with the NT documentation people”.
Tester: “No, it’s YOUR problem – your spec is inadequate, fix your specification. I’ll only accept the “by design” resolution if you can show me the NT specification that describes this behavior.”
Me: “Sigh. Ok, file the spec bug and I’ll see what I can do.”
So I have two choices – either I document all these subtle internal behaviors (and security has a bunch of really subtle internal behaviors, especially relating to ACL inheritance) or I chase down the NT program manager responsible and file bugs against that program manager. Neither of which gets us closer to shipping the product. It may make the NT documentation better, but that’s not one of MY review goals.
In addition, it turns out that the “most bugs filed” metric is often flawed in the first place. The tester that files the most bugs isn’t necessarily the best tester on the project. Often the tester that is the most valuable to the team is the one that goes the extra mile and spends time investigating the underlying causes of bugs and files bugs with detailed information about possible causes. But they’re not the most prolific testers because they spend the time to verify that they have a clean reproduction and have good information about what is going wrong. They spent the time that they would have spent finding nit bugs and instead spent it making sure that the bugs they found were high quality – they found the bugs that would have stopped us from shipping, and not the “the florblybloop isn’t set when I twiddle the frobjet” bugs.
I’m not saying that metrics are bad. They’re not. But basing people’s annual performance reviews on those metrics is a recipe for disaster.
Somewhat later: After I wrote the original version of this, a couple of other developers and I discussed it a bit at lunch. One of them, Alan Ludwig, pointed out that one of the things I missed in my discussion above is that there should be two halves of a performance review:
MEASUREMENT: Give me a number that represents the quality of the work that the user is doing. And
EVALUATION: Given the measurement, is the employee doing a good job or a bad job? In other words, you need to assign a value to the metric – how relevant is the metric to your performance.
He went on to discuss the fact that any metric is worthless unless it is reevaluated every time to determine how relevant the metric is – a metric is only as good as its validity.
One other comment that was made was that absolute bug count metrics cannot be a measure of the worth of a tester. The tester that spends two weeks and comes up with four buffer overflow errors in my code is likely to be more valuable to my team than the tester that spends the same two weeks and comes up with 20 trivial bugs. Using the severity field of the bug report was suggested as a metric, but Alan pointed out that this only worked if the severity field actually had significant meaning, and it often doesn’t (it’s often very difficult to determine the relative severity of a bug, and often the setting of the severity field is left to the tester, which has the potential for abuse unless all bugs are externally triaged, which doesn’t always happen).
By the end of the discussion, we had all agreed that bug counts were an interesting metric, but they couldn’t be the only metric.
http://blogs.msdn.com/b/larryosterman/archive/2004/04/20/116998.aspx
Monday, July 16
Exploratory testing and philosophical psycho-babble
Last year James Bach wrote that he "…'invented' testing…mainly by discovering that the problems of testing have already been solved in the fields of cognitive psychology, epistemology, and general systems thinking." Well, there are simply two things wrong with the two points James makes in his statement. First, I am pretty sure James didn't invent testing. He has, however, been very successful at taking the most commonly used testing method, wrapping it with the fancy moniker of 'exploratory testing,' and having us believe that we really don't understand it (until of course we drink the 'kool-aid' and concede exploratory testing is the best thing since sliced bread). Secondly, the problems of software testing are complex and they certainly aren't already solved. I won't pretend to know what the solutions are to the problems we encounter in software testing, but I am pretty confident the solutions are not going to be found by studying cognitive psychology, epistemology, or systems thinking.
Now, don't assume that I simply dislike James Bach. I agree with James on several points regarding the need for better tester education, the value (or lack thereof) of tester certifications, and some of his earlier writings. However, I do admit that James and I disagree on several issues. Professional disagreement is not bad; in fact it sometimes sparks healthy debate, and motivates thought so we can reach our own logical conclusion and reconcile alternative points of view.
One point of contention is the correlation of software testing with branches of philosophical, psychological, and sociological fields of study. The idea that solutions to the problems in software testing practices and processes can be found by studying philosophy, psychology, and systems theory is twaddle. These are interesting topics and can certainly heighten a person's ability to think abstractly and question assumptions. But are these subjects really unique to software testing, or do they apply equally to development, or to any other job requiring a high degree of innovation and ingenuity?
Epistemology
Epistemology is the philosophical study of the origin, the nature, and the scope of knowledge. When I find a defect in software testing, I must admit I really don't spend a lot of time philosophizing about how I know that I know it is a defect. Current theories in epistemology tend to focus on the subjects of truth and belief. Imagine a Venn diagram that illustrates the intersection between belief and truth as knowledge. So, I guess that if I believe an unexpected behavior to be a defect, and the requirements confirm it to be a defect, then I now know it is a defect. But, of course, if the requirements are wrong, then I can only theorize that it is a defect, and I may be wrong because then it is only my belief (which may or may not be correct). Frankly, I am not too sure that understanding the difference between a priori knowledge and a posteriori knowledge will help me decide whether or not the unexpected behavior is a defect. I am also not too sure how understanding the differences between the foundationalist and coherentist approaches to knowledge will help me design better software tests. Personally, if I were to correlate software testing to a non-engineering field of study, I would say that software testing is more akin to archaeology than it is to philosophy. To quote Indiana Jones, "Archaeology is the search for fact... not truth. If it's truth you're looking for, Dr. Tyree's philosophy class is right down the hall."
Cognitive psychology
Cognitive psychology is the branch of psychology that studies mental processes. Basically, it studies how we learn and how we process that knowledge into behavior. As an educator and consultant I am very interested in how we learn and how we use the knowledge we acquire to reach logical conclusions, make reasonable decisions, and solve complex problems. Understanding cognition helps me design and develop better curricula to train new software testers into professional testers, and to work with teams to resolve problems or implement new ideas. But do new testers really need to study how people learn and how they process information, or do they simply need the capacity to learn engineering principles and apply that knowledge to design effective tests?
Systems thinking
Systems thinking is related to general systems theory, which studies the organization and interdependent relationships of systems. This seems logical in computer science, but general systems theory is usually applied to the natural sciences, and systems thinking is typically used to model human organizational behavior. Systems thinking views the 'big picture' of organizational relationships and the effect of the environment on a system. Systems thinking is mostly applied to change management processes, because small events in an organization can sometimes cause catastrophic consequences. Change management is about effectively communicating decisions and continuously managing the processes effecting the change in an organization so as to minimize the isolation of any part or element of the organization. Consultants must understand change management and systems thinking in order to be successful, because their primary role is to introduce change to an organization. But from a software testing viewpoint, I am thinking that systems thinking may not be the best field of study to pursue. Perhaps a better focus would be systems philosophy, which is a form of systems thinking that focuses on design and root cause analysis. Systems philosophy studies the development of systems expressed as models. We often create models to help understand complex systems. In software testing we create models to help us design state transition tests.
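As a small aside on that last point, here is a toy Python sketch (the login-session model is invented purely for illustration, not taken from the original post) of deriving state-transition test cases from a model:

# Tiny state-transition model of a hypothetical login session.
# Enumerating every (state, event) pair in the model gives an
# all-transitions test list that a tester can execute or automate.
MODEL = {
    ("logged_out", "login_ok"):   "logged_in",
    ("logged_out", "login_fail"): "logged_out",
    ("logged_in",  "logout"):     "logged_out",
    ("logged_in",  "timeout"):    "logged_out",
}

def transition_test_cases(model):
    """One test case per transition: start state, event, expected end state."""
    return [
        {"start": start, "event": event, "expect": end}
        for (start, event), end in model.items()
    ]

for case in transition_test_cases(MODEL):
    print(f"From {case['start']!r}, apply {case['event']!r}, "
          f"expect {case['expect']!r}")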
I am not an expert in systems thinking, cognitive psychology, or epistemology. And the simple fact is that I don't need to be an expert in the social sciences to be successful as a professional tester. (Of course, taking courses in the social sciences at university helps one think abstractly.) And I certainly wouldn't try to convince anyone who is serious about a career in software testing that they should spend a lot of time studying various branches of philosophy, psychology, or other human sciences to be an effective tester. As I said above, while these topics are certainly fascinating, they are of no greater importance to the practice of software testing than they are to software development or any other discipline that requires a high degree of intellectual horsepower and creative innovation.
http://blogs.msdn.com/b/imtesty/archive/2006/06/01/612734.aspx
Thursday, July 12
Customer focused test automation
We often limit our test automation projects to internal use only. This restricted use of test automation has contributed to the typically sub-standard quality of automated test code. If you think production code is poorly documented, you haven't reviewed very much automated test code. But since the test code is for internal use only, we really don't need to put as much effort into it as we do production code. Or do we?
Unfortunately, the restricted view of test automation as an internal-only tool in the testing process has perhaps limited its potential and its overall value. But as testing matures into a profession, testers are going to have a greater impact not only during the product development lifecycle; they will also extend their scope of influence directly to the customers. You may ask how or why test automation would be used by customers. Read the following scenarios, and think about the possibilities!
A company contracts for a customized software solution based on a specific set of requirements. At the completion of the project, the product is handed off to the company, which then conducts several suites of tests, such as user acceptance tests and integration tests, to test compatibility with the existing environment. Additional 'acceptance' testing by the company costs them time and uses their valuable IT resources to re-verify the product's functional capabilities and other quality attributes. Now imagine the software company also provides well-documented automated test suites along with the product that demonstrate the product does what it is supposed to do, and does it in the customer's environment. The customer's 'acceptance' testing time is greatly decreased, their overall cost of adoption is decreased, and (just possibly) customer satisfaction is increased.
Another company purchases several Microsoft products such as Windows Server, Exchange Server, and SQL. But it is often not as simple as installing these platforms and setting the preferred configuration. So the company either staffs an IT department that rivals an IT team assembled by a Fortune 500 company, or hires a company similar to Avanade to put the pieces together and make them work with the desired configuration settings. Of course, either solution is going to cost the company a lot of money and time, and potentially use internal resources to get things set up correctly. Now suppose Microsoft creates an enterprise Software Test Kit (STK) that contains thousands of automated tests. The IT team identifies the specific configuration settings for its platforms, identifies applications in the corporate environment, and selects other requirements for adoption. Based on the IT department's criteria, the STK will automatically select a suite of tests the company's IT department can execute to verify their platform configuration and other acceptance tests. Automated test suites in the STK can even be used for non-functional testing, such as performance and stress testing, by the company based on their configuration settings.
Are these potential uses of test automation by customers realistic? I believe these are only some of the possibilities that exist. However, to realize this potential our test automation must evolve. The quality of our automated tests must rival that of our production code. In fact, test code must be better than production code because there is no room for errors or false positives. If customers use our automated tests to reduce their overall costs of adoption, the tests must be bullet-proof, or customer confidence decreases and customer satisfaction plummets.
So, what must change in order to increase the overall significance and potential usefulness of test automation beyond its current value? To begin with, automated test cases must be considered a product with a meaningful shelf-life instead of a disposable artifact or something we simply pass on to the maintenance or sustained engineering team. The automated tests must be well-planned and well-documented (preferably with XML). The testers who develop the tests must understand the product and be able to develop the automated tests with a modern programming language. (Sorry, but scripting languages, keyword or data-driven approaches, and record/playback are simply too limited in scope for test automation of this magnitude.) Testers will need to use static analysis tools and perform code reviews to verify the quality of the test code. We will also need a framework that can select and run the appropriate set of tests to satisfy the customer's needs, but is extensible enough for the customer to easily create and import additional automated tests if necessary.
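As a rough illustration of what such a criteria-driven, extensible test kit could look like, here is a minimal sketch; every class, field, and test name in it is hypothetical and only demonstrates the selection idea, not any actual Microsoft STK.

```python
# Minimal sketch of criteria-driven test selection (all names hypothetical).
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AutomatedTest:
    name: str
    applies_to: Dict[str, str]      # configuration this test targets, e.g. {"role": "Exchange"}
    run: Callable[[], bool]         # returns True when the check passes

class TestKit:
    """Holds a catalog of automated tests and picks a suite matching customer criteria."""

    def __init__(self) -> None:
        self._tests: List[AutomatedTest] = []

    def register(self, test: AutomatedTest) -> None:
        # Customers can import their own additional tests through the same call.
        self._tests.append(test)

    def select(self, criteria: Dict[str, str]) -> List[AutomatedTest]:
        # A test is selected when everything it declares matches the stated configuration.
        return [t for t in self._tests
                if all(criteria.get(key) == value for key, value in t.applies_to.items())]

    def run_suite(self, criteria: Dict[str, str]) -> Dict[str, bool]:
        return {t.name: t.run() for t in self.select(criteria)}

# Usage: the IT department states its configuration; the kit picks and runs the matching suite.
kit = TestKit()
kit.register(AutomatedTest("smtp_roundtrip", {"role": "Exchange"}, lambda: True))
kit.register(AutomatedTest("sql_failover", {"role": "SQL Server"}, lambda: True))
print(kit.run_suite({"platform": "Windows Server", "role": "Exchange"}))
# {'smtp_roundtrip': True}
```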
Perhaps this is all just wishful thinking, or perhaps it is a vision of things to come in the professional world of software testing.
http://blogs.msdn.com/b/imtesty/archive/2006/05/19/601692.aspx
Condition Testing Technique - White-Box Testing
Condition testing is a test-case design method that exercises the logical conditions contained in a program module. It is based on the premise that if a set of tests for a program P is effective at detecting errors in the conditions found in P, it is likely that the same set of tests will also be effective at detecting other errors in P.
For a relational expression of the form:
E1 <relational operator> E2
three tests are required: the value of E1 greater than, less than, and equal to the value of E2. The tests that make E1 greater than or less than E2 should make the difference between the two values as small as possible.
If the expression has the form B1 & B2, where B1 and B2 are Boolean variables, it is covered by testing B1 true and B2 false, B1 false and B2 true, and both B1 and B2 true.
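To make the two rules concrete, here is a small sketch in Python; the function and the chosen values are hypothetical, but the test cases follow the rules above.

```python
# Hypothetical module function whose condition combines a relational expression
# (age >= 18) with a Boolean factor (has_license).
def eligible(age: int, has_license: bool) -> bool:
    return age >= 18 and has_license

# Relational expression E1 >= E2: three tests, keeping the difference between
# E1 and E2 as small as possible.
assert eligible(19, True) is True     # E1 greater than E2 by the smallest margin
assert eligible(18, True) is True     # E1 equal to E2
assert eligible(17, True) is False    # E1 less than E2 by the smallest margin

# Boolean expression B1 & B2: (true, false), (false, true), (true, true).
assert eligible(18, False) is False
assert eligible(17, True) is False
assert eligible(18, True) is True
```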
Monday, July 9
Advice for Computer Science College Students
Despite the fact that it was only a year or two ago that I was
blubbering about how rich Windows GUI clients were the wave of the
future, college students nonetheless do occasionally email me asking for
career advice, and since it's recruiting season, I thought I'd write up
my standard advice which they can read, laugh at, and ignore.
Most college students, fortunately, are brash enough never to bother asking their elders for advice, which, in the field of computer science, is a good thing, because their elders are apt to say goofy, antediluvian things like "the demand for keypunch operators will exceed 100,000,000 by the year 2010" and "lisp careers are really very hot right now."
I, too, have no idea what I'm talking about when I give advice to college students. I'm so hopelessly out of date that I can't really figure out AIM and still use (horrors!) this quaint old thing called "email" which was popular in the days when music came on flat round plates called "CDs."
So you'd be better off ignoring what I'm saying here and instead building some kind of online software thing that lets other students find people to go out on dates with.
Nevertheless.
If you enjoy programming computers, count your blessings: you are in a very fortunate minority of people who can make a great living doing work they enjoy. Most people aren't so lucky. The very idea that you can "love your job" is a modern concept. Work is supposed to be something unpleasant you do to get money to do the things you actually like doing, when you're 65 and can finally retire, if you can afford it, and if you're not too old and infirm to do those things, and if those things don't require reliable knees, good eyes, and the ability to walk twenty feet without being out of breath, etc.
What was I talking about? Oh yeah. Advice.
Without further ado, then, here are Joel's Seven Pieces of Free Advice for Computer Science College Students (worth what you paid for them):
- Learn how to write before graduating.
- Learn C before graduating.
- Learn microeconomics before graduating.
- Don't blow off non-CS classes just because they're boring.
- Take programming-intensive courses.
- Stop worrying about all the jobs going to India.
- No matter what you do, get a good summer internship.
Learn how to write before graduating.
Would Linux have succeeded if Linus Torvalds hadn't evangelized it? As brilliant a hacker as he is, it was Linus's ability to convey his ideas in written English via email and mailing lists that made Linux attract a worldwide brigade of volunteers.
Have you heard of the latest fad, Extreme Programming? Well, without getting into what I think about XP, the reason you've heard of it is because it is being promoted by people who are very gifted writers and speakers.
Even on the small scale, when you look at any programming organization, the programmers with the most power and influence are the ones who can write and speak in English clearly, convincingly, and comfortably. Also it helps to be tall, but you can't do anything about that.
The difference between a tolerable programmer and a great programmer is not how many programming languages they know, and it's not whether they prefer Python or Java. It's whether they can communicate their ideas. By persuading other people, they get leverage. By writing clear comments and technical specs, they let other programmers understand their code, which means other programmers can use and work with their code instead of rewriting it. Absent this, their code is worthless. By writing clear technical documentation for end users, they allow people to figure out what their code is supposed to do, which is the only way those users can see the value in their code. There's a lot of wonderful, useful code buried on sourceforge somewhere that nobody uses because it was created by programmers who don't write very well (or don't write at all), and so nobody knows what they've done and their brilliant code languishes.
I won't hire a programmer unless they can write, and write well, in English. If you can write, wherever you get hired, you'll soon find that you're getting asked to write the specifications and that means you're already leveraging your influence and getting noticed by management.
Most colleges designate certain classes as "writing intensive," meaning, you have to write an awful lot to pass them. Look for those classes and take them! Seek out classes in any field that have weekly or daily written assignments.
Start a journal or weblog. The more you write, the easier it will be, and the easier it is to write, the more you'll write, in a virtuous circle.
Learn C before graduating.
Part two: C. Notice I didn't say C++. Although C is becoming increasingly rare, it is still the lingua franca of working programmers. It is the language they use to communicate with one another, and, more importantly, it is much closer to the machine than "modern" languages that you'll be taught in college like ML, Java, Python, whatever trendy junk they teach these days. You need to spend at least a semester getting close to the machine or you'll never be able to create efficient code in higher level languages. You'll never be able to work on compilers and operating systems, which are some of the best programming jobs around. You'll never be trusted to create architectures for large scale projects. I don't care how much you know about continuations and closures and exception handling: if you can't explain why while (*s++ = *t++); copies a string, or if that isn't the most natural thing in the world to you, well, you're programming based on superstition, as far as I'm concerned: a medical doctor who doesn't know basic anatomy, passing out prescriptions based on what the pharma sales babe said would work.
Learn microeconomics before graduating
Super quick review if you haven't taken any economics courses: econ is one of those fields that starts off with a bang, with many useful theories and facts that make sense, can be proven in the field, etc., and then it's all downhill from there. The useful bang at the beginning is microeconomics, which is the foundation for literally every theory in business that matters. After that things start to deteriorate: you get into Macroeconomics (feel free to skip this if you want) with its interesting theories about things like the relationship of interest rates to unemployment which, er, seem to be disproven more often than they are proven, and after that it just gets worse and worse and a lot of econ majors switch out to Physics, which gets them better Wall Street jobs, anyway. But make sure you take Microeconomics, because you have to know about supply and demand, you have to know about competitive advantage, and you have to understand NPVs and discounting and marginal utility before you'll have any idea why business works the way it does.
Why should CS majors learn econ? Because a programmer who understands the fundamentals of business is going to be a more valuable programmer, to a business, than a programmer who doesn't. That's all there is to it. I can't tell you how many times I've been frustrated by programmers with crazy ideas that make sense in code but don't make sense in capitalism. If you understand this stuff, you're a more valuable programmer, and you'll get rewarded for it, for reasons which you'll also learn in micro.
Don't blow off non-CS classes just because they're boring.
Blowing off your non-CS courses is a great way to get a lower GPA.
Never underestimate how big a deal your GPA is. Lots and lots of recruiters and hiring managers, myself included, go straight to the GPA when they scan a resume, and we're not going to apologize for it. Why? Because the GPA, more than any other one number, reflects the sum of what dozens of professors over a long period of time in many different situations think about your work. SAT scores? Ha! That's one test over a few hours. The GPA reflects hundreds of papers and midterms and classroom participations over four years. Yeah, it's got its problems. There has been grade inflation over the years. Nothing about your GPA says whether you got that GPA taking easy classes in home economics at Podunk Community College or taking graduate level Quantum Mechanics at Caltech. Eventually, after I screen out all the 2.5 GPAs from Podunk Community, I'm going to ask for transcripts and recommendations. And then I'm going to look for consistently high grades, not just high grades in computer science.
Why should I, as an employer looking for software developers, care about what grade you got in European History? After all, history is boring. Oh, so, you're saying I should hire you because you don't work very hard when the work is boring? Well, there's boring stuff in programming, too. Every job has its boring moments. And I don't want to hire people that only want to do the fun stuff.
I took this course in college called Cultural Anthropology because I figured, what the heck, I need to learn something about anthropology, and this looked like an interesting survey course.
Interesting? Not even close! I had to read these incredibly monotonous books about Indians in the Brazilian rain forest and Trobriand Islanders, who, with all due respect, are not very interesting to me. At some point, the class was so incredibly wearisome that I longed for something more exciting, like watching grass grow. I had completely lost interest in the subject matter. Completely, and thoroughly. My eyes teared I was so tired of the endless discussions of piling up yams. I don't know why the Trobriand Islanders spend so much time piling up yams, I can't remember any more, it's incredibly boring, but It Was Going To Be On The Midterm, so I plowed through it. I eventually decided that Cultural Anthropology was going to be my Boredom Gauntlet: my personal obstacle course of tedium. If I could get an A in a class where the tests required me to learn all about potlatch blankets, I could handle anything, no matter how boring. The next time I accidentally get stuck in Lincoln Center sitting through all 18 hours of Wagner’s Ring Cycle, I could thank my studies of the Kwakiutl for making it seem pleasant by comparison.
I got an A. And if I could do it, you can do it.
Take programming-intensive courses.
I remember the exact moment I vowed never to go to graduate school.
It was in a course on Dynamic Logic, taught by the dynamic Lenore Zuck at Yale, one of the brightest of an array of very bright CS faculty.
Now, my murky recollections are not going to do proper credit to this field, but let me muddle through anyway. The idea of Formal Logic is that you prove things are true because other things are true. For example thanks to Formal Logic, "Everyone who gets good grades will get hired" plus "Johnny got good grades" allows you to discover the new true fact, "Johnny will get hired." It's all very quaint and it only takes ten seconds for a deconstructionist to totally tear apart everything useful in Formal Logic so you're left with something fun, but useless.
Now, dynamic logic is the same thing, with the addition of time. For example, "after you turn the light on, you can see your shoes" plus "The light went on in the past" implies "you can see your shoes."
Dynamic Logic is appealing to brilliant theoreticians like Professor Zuck because it holds up the hope that you might be able to formally prove things about computer programs, which could be very useful, if, for example, you could formally prove that the Mars Rover's flash card wouldn't overflow and cause itself to be rebooted again and again all day long when it's supposed to be driving around the red planet looking for Marvin the Martian.
So in the first day of that class, Dr. Zuck filled up two entire whiteboards and quite a lot of the wall next to the whiteboards proving that if you have a light switch, and the light was off, and you flip the switch, the light will then be on.
The proof was insanely complicated, and very error-prone. It was harder to prove that the proof was correct than to convince yourself of the fact that switching a light switch turns on the light. Indeed the multiple whiteboards of proof included many skipped steps, skipped because they were too tedious to go into formally. Many steps were reached using the long-cherished method of Proof by Induction, others by Proof by Reductio ad Absurdum, and still others using Proof by Graduate Student.
For our homework, we had to prove the converse: if the light was off, and it's on now, prove that you flipped it.
I tried, I really did.
I spent hours in the library trying.
After a couple of hours I found a mistake in Dr. Zuck's original proof which I was trying to emulate. Probably I copied it down wrong, but it made me realize something: if it takes three hours of filling up blackboards to prove something trivial, allowing hundreds of opportunities for mistakes to slip in, this mechanism would never be able to prove things that are interesting.
Not that that matters to dynamic logicians: they're not in it for useful, they're in it for tenure.
I dropped the class and vowed never to go to graduate school in Computer Science.
The moral of the story is that computer science is not the same as software development. If you're really really lucky, your school might have a decent software development curriculum, although they might not, because elite schools think that teaching practical skills is better left to the technical-vocational institutes and the prison rehabilitation programs. You can learn mere programming anywhere. We are Yale University, and we Mold Future World Leaders. You think your $160,000 tuition entitles you to learn about while loops? What do you think this is, some fly-by-night Java seminar at the Airport Marriott? Pshaw.
The trouble is, we don't really have professional schools in software development, so if you want to be a programmer, you probably majored in Computer Science. Which is a fine subject to major in, but it's a different subject than software development.
If you're lucky, though, you can find lots of programming-intensive courses in the CS department, just like you can find lots of courses in the History department where you'll write enough to learn how to write. And those are the best classes to take. If you love programming, don't feel bad if you don't understand the point of those courses in lambda calculus or linear algebra where you never touch a computer. Look for the 400-level courses with Practicum in the name. This is an attempt to hide a useful (shudder) course from the Liberal Artsy Fartsy Administration by dolling it up with a Latin name.
Stop worrying about all the jobs going to India.
Well, OK, first of all, if you're already in India, you never really had to worry about this, so don't even start worrying about all the jobs going to India. They're wonderful jobs, enjoy them in good health.
But I keep hearing that enrollment in CS departments is dropping perilously, and one reason I hear for it is "students are afraid to go into a field where all the jobs are going to India." That's so wrong for so many reasons. First, trying to choose a career based on a current business fad is foolish. Second, programming is incredibly good training for all kinds of fabulously interesting jobs, such as business process engineering, even if every single programming job does go to India and China. Third, and trust me on this, there's still an incredible shortage of the really good programmers, here and in India. Yes, there are a bunch of out of work IT people making a lot of noise about how long they've been out of work, but you know what? At the risk of pissing them off, really good programmers do have jobs. Fourth, you got any better ideas? What are you going to do, major in History? Then you'll have no choice but to go to law school. And there's one thing I do know: 99% of working lawyers hate their jobs, hate every waking minute of it, and they're working 90 hour weeks, too. Like I said: if you love to program computers, count your blessings: you are in a very fortunate minority of people who can make a great living doing work they love.
Anyway, I don't think students really think about this. The drop in CS enrollment is merely a resumption of historically normal levels after a big bubble in enrollment caused by dotcom mania. That bubble consisted of people who didn't really like programming but thought the sexy high paid jobs and the chances to IPO at age 24 were to be found in the CS department. Those people, thankfully, are long gone.
No matter what you do, get a good summer internship.
Smart recruiters know that the people who love programming wrote a database for their dentist in 8th grade, and taught at computer camp for three summers before college, and built the content management system for the campus newspaper, and had summer internships at software companies. That's what they're looking for on your resume.
If you enjoy programming, the biggest mistake you can make is to take any kind of job--summer, part time, or otherwise--that is not a programming job. I know, every other 19-year-old wants to work in the mall folding shirts, but you have a skill that is incredibly valuable even when you're 19, and it's foolish to waste it folding shirts. By the time you graduate, you really should have a resume that lists a whole bunch of programming jobs. The A&F graduates are going to be working at Enterprise Rent-a-Car "helping people with their rental needs." (Except for Tom Welling. He plays Superman on TV.)
To make your life really easy, and to underscore just how completely self-serving this whole essay is, my company, Fog Creek Software, has summer internships in software development that look great on resumes. "You will most likely learn more about software coding, development, and business with Fog Creek Software than any other internship out there," says Ben, one of the interns from last summer, and not entirely because I sent a goon out to his dorm room to get him to say that. The application deadline is February 1st. Get on it.
If you follow my advice, you, too, may end up selling stock in Microsoft way too soon, turning down jobs at Google because you want your own office with a door, and other stupid life decisions, but they won't be my fault. I told you not to listen to me.
http://www.joelonsoftware.com/articles/CollegeAdvice.html
Basis Path Technique - White-Box Testing
The basis path method, proposed by Tom McCabe, allows the test-case designer to derive a measure of the logical complexity of a procedural design and to use that measure as a guide for defining a basis set of execution paths. Test cases derived from the basis set guarantee that every program statement is executed at least once during testing.
Any representation of the procedural design can be translated into a flow graph. The cyclomatic complexity of this graph (as defined in the previous class) gives the number of independent paths in the basis set of a program and provides an upper bound for the number of tests that must be run to ensure that every statement is executed at least once.
An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
In terms of the flow graph, an independent path must traverse at least one edge that has not been traversed by any previously defined path.
The complexity can be computed in three ways:
1.- The number of regions of the flow graph
2.- Edges - Nodes + 2
3.- Predicate nodes + 1 (a predicate node is one that represents a conditional, such as an if or case, i.e., one from which several paths emerge)
The value of V(G) gives the number of linearly independent paths in the program's control structure. Test cases are then prepared that will force the execution of each path in the basis set.
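As a small illustration of two of these formulas, here is a sketch; the tiny if/else flow graph used in it is generic, not the flow graph of the example that follows.

```python
# Illustrative helpers for V(G); the if/else flow graph below is generic,
# not the flow graph of the stack-overflow example in this post.
def cyclomatic_from_edges(edges, num_nodes):
    # V(G) = edges - nodes + 2
    return len(edges) - num_nodes + 2

def cyclomatic_from_predicates(num_predicate_nodes):
    # V(G) = predicate nodes + 1
    return num_predicate_nodes + 1

# Flow graph of a single if/else: nodes 1..4, where node 1 is the only predicate node.
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
assert cyclomatic_from_edges(edges, num_nodes=4) == 2
assert cyclomatic_from_predicates(num_predicate_nodes=1) == 2
# The region count gives the same value: one bounded region plus the outer region = 2.
```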
Example:
Let us design an algorithm capable of handling an overflow situation when a set of N stacks shares a common memory area of M locations numbered L0 to LM.
L0  L1  L2  . . .  LM   (shared memory area of M locations)
Program inputs:
N - number of stacks
b(j), t(j), j = 1, ..., N - the base and top pointers of each stack
i - the stack into which an insertion is to be made
If t(i) > b(i+1), overflow occurs. Three cases are possible:
1.- Is there space to the left? Decrement j from i down to 1 until finding t(j) <= b(j+1). If such a j is found, all stacks from j+1 up to stack i are shifted one position to the left, and the abandoned insertion is then carried out.
2.- If there is no space to the left, is there space to the right? Increment j from i up to M until finding t(j) <= b(j+1). If so, all stacks from i+1 up to j are shifted one position to the right, and the abandoned insertion is carried out.
3.- If there is no space to the right or to the left, the overflow cannot be resolved.
Paths:
1 2 9
1 3 4 5 7 9
1 3 4 6 8 7 9
1 3 4 6 9
The test cases must exercise each of the above paths.
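The following is a minimal Python sketch of the three-case repacking idea described above. The data layout is my own assumption for illustration (1-based stack numbering, a base[N+1] sentinel marking the end of memory, and the convention that stack j occupies memory[base[j]+1 .. top[j]]); it is not the original formulation, but it exercises the same decision structure the basis paths cover.

```python
# Illustrative sketch only: N stacks packed into one memory area, with overflow handled
# by shifting neighbouring stacks left or right before giving up (the three cases above).
def push(memory, base, top, i, value):
    """base and top are 1-based lists; base[N+1] is a sentinel for the end of memory.
    Stack j occupies memory[base[j]+1 .. top[j]] and is full when top[j] == base[j+1]."""
    n = len(top) - 1                                   # number of stacks

    if top[i] == base[i + 1]:                          # overflow: no room left in stack i
        # Case 1: look for free space to the left of stack i.
        j = i - 1
        while j >= 1 and top[j] == base[j + 1]:
            j -= 1
        if j >= 1:                                     # shift stacks j+1..i one cell left
            for k in range(base[j + 1] + 1, top[i] + 1):
                memory[k - 1] = memory[k]
            for s in range(j + 1, i + 1):
                base[s] -= 1
                top[s] -= 1
        else:
            # Case 2: look for free space to the right of stack i.
            j = i + 1
            while j <= n and top[j] == base[j + 1]:
                j += 1
            if j > n:
                return False                           # Case 3: unrecoverable overflow
            for k in range(top[j], base[i + 1], -1):   # shift stacks i+1..j one cell right
                memory[k + 1] = memory[k]
            for s in range(i + 1, j + 1):
                base[s] += 1
                top[s] += 1

    # Room now exists: carry out the (possibly abandoned) insertion.
    top[i] += 1
    memory[top[i]] = value
    return True

# Usage: M = 10 cells, N = 3 stacks; stack 1 gets cells 0..3, stack 2 cells 4..6, stack 3 cells 7..9.
memory = [None] * 10
base = [None, -1, 3, 6, 9]           # base[4] = 9 is the end-of-memory sentinel
top = [None, -1, 3, 6]
for v in (20, 21):                   # two items in stack 2 (cells 4 and 5)
    assert push(memory, base, top, 2, v)
for v in (10, 11, 12, 13, 14):       # fifth push overflows stack 1; stack 2 shifts right
    assert push(memory, base, top, 1, v)
assert memory[:7] == [10, 11, 12, 13, 14, 20, 21]
```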
Thursday, July 5
Top 5 recommended books on software testing for new software testers
It now astounds me that
some individuals can spend years in a testing role and never read a
single book on the subject. I say 'now' because when I first started in
software testing I hadn't read a book on the subject either. In
fact, it wasn't until I had to start teaching software testing to new
testers at Microsoft that I began reading books on the subject in earnest
instead of just skimming through chapters in Kaner's book, Testing Computer Software.
When I thought the purpose of testing was simply to find bugs, I didn't need books on software testing to help me find bugs. I was very good at finding errors in Microsoft products (it really isn't that hard). But, as I matured in this profession, books on software testing provided me with new perspectives on the application of time-proven techniques to help me become more effective and more efficient in my testing endeavors. Books on software testing presented diverse insight into various methodologies and approaches to solve problems and improve processes. Books on software testing portrayed a breadth of information from a myriad of outlooks that enabled me to draw logical conclusions based on a wealth of industry experiences rather than from a limited (Microsoft-ish) view of testing based on personal trial and error in large-market computer software.
Now, I read a lot of books on software testing. Some books deliver less than I'd expect, most are as dry as English humor, and there are a mere handful I'd consider great treatises on the subject of software testing. But the important point is that I learn something from each new book I read. The good news is that today there is a plethora of books on software testing. (This is great compared to 10 years ago, when there were but a few books that broached the subject.) But for the folks who have chosen to enter into the professional discipline of software testing, it is sometimes hard to know where to begin. To complicate matters a bit more, the books I recommend differ according to one's primary role in the organization. But since most new testers (including those with CS degrees) receive very little formal education in software testing, I think it is important for all new testers (and developers) to become familiar with the basic techniques used in the practice of software testing.
This list of software testing books was compiled based on input from Test Managers and Test Architects at Microsoft as recommended reading for new testers at the company; they are books I constantly reference in our internal training.
1. A Practitioner's Guide to Software Test Design, Lee Copeland, 2003
2. The Art of Software Testing, 2nd edition, Glenford Myers, et al., 2004
3. Software Testing Techniques, 2nd edition, Boris Beizer, 1990
4. How to Break Software: A Practical Guide to Testing, James Whittaker, 2002
5. Testing Object-Oriented Systems: Models, Patterns, and Tools, Robert V. Binder, 1999
http://blogs.msdn.com/b/imtesty/archive/2006/05/02/588125.aspx