The sample bug/defect report below will give you an exact idea of how to report a bug in a bug tracking tool.
Here is an example scenario that causes a bug:
Let's assume that in your application under test you want to create a new user with user information. To do that, you need to log on to the application and navigate to the USERS menu > New User, then enter all the details in the ‘User form’: First Name, Last Name, Age, Address, Phone, etc. Once you have entered all this information, you click the ‘SAVE’ button in order to save the user, and you should see a success message saying, “New User has been created successfully”.
But when you logged in to the application, navigated to USERS menu > New User, entered all the required information to create a new user, and clicked the SAVE button: BANG! The application crashed, and you got an error page on screen. (Capture this error message window and save it as an image file, e.g., using Microsoft Paint.)
Now this is the bug scenario and you would like to report this as a BUG in your bug-tracking tool.
How will you report this bug effectively?
Here is the sample bug report for the above example:
(Note that some ‘bug report’ fields might differ depending on your bug tracking system)
SAMPLE BUG REPORT:
Bug Name: Application crash on clicking the SAVE button while creating a new user.
Bug ID: (It will be automatically created by the BUG Tracking tool once you save this bug)
Area Path: USERS menu > New Users
Build Number: Version Number 5.0.1
Severity: HIGH (High/Medium/Low) or 1
Priority: HIGH (High/Medium/Low) or 1
Assigned to: Developer-X
Reported By: Your Name
Reported On: Date
Reason: Defect
Status: New/Open/Active (Depends on the Tool you are using)
Environment: Windows 2003/SQL Server 2005
Description:
The application crashes on clicking the SAVE button while creating a new user; hence it is not possible to create a new user in the application.
Steps To Reproduce:
1) Log on to the application
2) Navigate to the Users menu > New User
3) Fill in all the user information fields
4) Click the ‘Save’ button
5) Observe the error page “ORA1090 Exception: Insert values Error…”
6) See the attached logs for more information (attach further logs related to the bug, if any)
7) Also see the attached screenshot of the error page
Expected result: On clicking the SAVE button, the success message “New User has been created successfully” should be displayed.
(Attach the ‘application crash’ screenshot, if any)
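Where the repro steps are UI-driven like these, attaching a small automation script to the report can help the developer replay the crash on demand. Below is a minimal, hypothetical sketch using Selenium WebDriver in Python; the URL and element IDs are invented stand-ins, not taken from any real application.

```python
# Hypothetical repro script for the "crash on SAVE" bug.
# All locators (username, users-menu, save-btn, ...) are assumed names;
# adapt them to the real application under test.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://app-under-test.example/login")  # assumed URL
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-btn").click()

    # Navigate to USERS menu > New User
    driver.find_element(By.ID, "users-menu").click()
    driver.find_element(By.ID, "new-user").click()

    # Fill in the 'User form'
    fields = {"first-name": "John", "last-name": "Doe", "age": "30",
              "address": "1 Main St", "phone": "555-0100"}
    for element_id, value in fields.items():
        driver.find_element(By.ID, element_id).send_keys(value)

    driver.find_element(By.ID, "save-btn").click()

    # Capture the resulting page so the error can be attached to the report
    driver.save_screenshot("save-crash.png")
finally:
    driver.quit()
```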
Save the defect/bug in the bug tracking tool. You will get a bug ID, which you can use for further reference to the bug. A default ‘new bug’ mail will go to the respective developer and the default module owner (team leader or manager) for further action.
http://www.softwaretestinghelp.com/sample-bug-report/
Friday, August 31
Thursday, August 30
What's your test automation strategy?
I overheard some test managers discussing problems with their test automation effort, so I couldn't refrain from asking the redundant question, "What is your test automation strategy?" They looked at me as if I had just beamed down from another planet and said, "C'mon, you know our strategy is to automate everything!"
It is unfortunately true that some managers drink the proverbial kool-aid and blindly regurgitate the 100% automation mantra or similar incantations such as "no manual testing" popular among agile pundits like Lisa Crispin.
Let me be clear. A goal of 100% automation is not a test strategy; it is a fantasy! Similar to the Disney fairytales where fairy dust causes magical transformations, evil is defeated, the prince marries the maiden, and everyone lives happily ever after forever, automating everything is not practical or realistic.
Perhaps the single biggest problem with most test automation efforts is lack of a practical strategy. A practical test automation strategy is one that provides a pragmatic solution to address specific business needs, with well-defined, measurable goals based upon realistic expectations.
Business needs drive a lot of the change in any organization, and usually involve cost-saving measures, quality improvement, or increased customer satisfaction. A business need for test automation includes reduced testing time. (This doesn’t mean reduced ship cycles; it simply means the time it takes to perform certain tests during the product life cycle can be shortened.) For example, the Build Verification Test (BVT) is a necessary test suite to verify the stability of each new build. Depending on the size and complexity of the product, a manual BVT suite can be very time-consuming. An automated BVT suite (which should be 100% automated, including results validation, because it establishes a baseline measure of build stability and the tests remain relatively static over the duration of the development life cycle) can substantially reduce the time spent in this phase of testing, especially in iterative build environments where the team is getting daily or even weekly builds. It doesn’t take long to realize the cost savings over the product life cycle.
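To make the idea concrete, a fully automated BVT suite does not need to be elaborate. Here is a minimal sketch in pytest style; the `product` module and its functions are hypothetical stand-ins for whatever wraps your build under test.

```python
# Minimal BVT-style smoke suite (pytest conventions).
# `product` is a hypothetical module representing the build under test.
import product

def test_build_starts():
    # The build is unusable if the application cannot even start.
    session = product.start()
    assert session is not None
    session.shutdown()

def test_reports_expected_version():
    # Guards against accidentally testing a stale binary from an old build.
    assert product.version().startswith("5.0")

def test_core_scenario_smoke():
    # One cheap end-to-end check of the most central feature, with
    # results validation so the suite needs no human interpretation.
    session = product.start()
    try:
        assert session.create_user("smoke-user") is True
    finally:
        session.shutdown()
```

Because the checks validate their own results, a run reduces to a single pass/fail signal per build, which is what lets the suite keep up with daily or weekly build cadences.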
Test automation strategies must also have realistic expectations. For example, I have never been convinced that finding “new” bugs is a realistic expectation for test automation. (Yes, it will occasionally find some new bugs, but let’s face it: the majority of the 5-15% of bugs exposed by test automation in production environments are regressions.) I have never seen data that suggests increased automation reduces the overall development cycle. Nor will test automation eliminate testers. (This is a false hope imagined only by prima donna developers and bean-counting managers scheming of ways to find value in their Masters in Business Mismanagement degrees.) So, what are realistic expectations for test automation? Well, I can reasonably expect test automation to identify stress issues such as mean time to failure (MTTF) and mean time between failures (MTBF). I can reasonably require test automation to establish baseline measures such as BVT suites or regression suites. Test automation is a pragmatic solution for any type of load testing or other forms of concurrency testing.
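The stress-related expectations above reduce to simple arithmetic once automation records when failures occur. A minimal sketch, assuming failure times (hours into a long-haul run) have already been collected:

```python
# Compute MTBF from a long-haul automation run.
# failure_times: hours into the run at which each failure occurred (invented data).
failure_times = [12.5, 40.0, 71.25, 110.0]
total_hours = 120.0  # total duration of the stress run

# MTBF = operating time divided by the number of failures observed.
mtbf = total_hours / len(failure_times)
print(f"MTBF: {mtbf:.1f} hours across {len(failure_times)} failures")
```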
Finally, a good test automation strategy must have measurable goals so we clearly understand what success looks like (or identify where we need to improve). Without goals we are developing automated tests just to say we are automating. Unfortunately, I occasionally see teams with goals of automating n% of existing tests. This really doesn’t make much sense because it doesn’t take into account logical decisions about which tests should be automated (remember, not all tests need to be or should be automated), so some redundant tests or run-once type tests get automated (which may not be the best use of your limited testing resources). Also, the ‘existing’ set of tests is usually a moving target, so the goal is a moving target, which means we can never achieve it. Goals for test automation should be specific, measurable, achievable, realistic, and timely (SMART). Set short-term and long-range SMART goals for your test automation effort. For example, a short-term goal might be 100% automation of the BVT suite within one week after the first build drop. Long-term goals might include design elements and processes to transfer automation to sustained engineering or maintenance teams, or 100% language-neutral automation that will execute on any localized (or pseudo-localized) language version.
Test automation is expensive. Testers have a lot of work to do in a very limited timeframe, so it is important that we use our testing resources effectively. A well-defined automation strategy will establish clear goals, set expectations, and provide practical, automated solutions.
http://blogs.msdn.com/b/imtesty/archive/2006/05/08/593395.aspx
Monday, August 27
Living life as a Software Tester!
Recently I read a very interesting article, “All I Ever Need to Know About Testing,” by Lee Copeland.
I was so impressed with its concept of comparing our day-to-day work with software testing.
I will extract only the points related to software testing. As a software tester, keep these simple points in mind:
Share everything:
If you are an experienced tester on a project, then help the new developers on it. Some testers have a habit of keeping known bugs hidden until they get implemented in code, and then writing a big defect report about them. Don’t just try to pump up your bug count; share everything with the developers.
Build trust:
Let the developers know about any bug you find in the design phase. Do not log the same bug repeatedly with small variations just to pump up the bug count. Build trust in the developer-tester relationship.
Don’t blame others:
As a tester, you should not always blame developers for bugs. Concentrate on the bug, not on pointing it out in front of everyone. Attack the bug and its cause, not the developer!
Clean up your own mess:
When you finish any test scenario, reconfigure the machine to its original configuration. The same applies to bug reports: write a clean, effective bug report so the developer finds it easy to reproduce and fix.
Give credit to others for their work:
Do not take credit for others’ work. If you have referred to anyone else’s work, give that person credit immediately. Do not get frustrated if a bug you missed is later reported by a client. Keep working hard and use your skills.
Remember to flush:
Like toilets, all software flushes at some point. While doing performance testing, remember to flush the system cache.
Take a nap every day:
We need time to think, refresh, and regenerate our energy.
Sometimes it is important to take one step back in order to gain fresh insight and find a different working approach.
Always work in teams; a team’s score is always better and more powerful than an individual’s.
Now it’s time to take a nap. Happy Testing!
http://www.softwaretestinghelp.com/living-life-as-a-software-tester/
Pass Rates Don’t Matter
It seems obvious that test pass rates are important. The higher the pass rate, the better the quality of the product. The lower the pass rate, the more known issues there are and the worse the quality of the product. It then follows that teams should drive their pass rates to be high. I’ve shipped many products where the exit criteria included some specified pass rate—usually 95% passing or higher. For most of my career I agreed with that logic. I was wrong. I have come to understand that pass rates are irrelevant. Pass rates don’t tell you the true state of the product. It is important which bugs remain in the product, but pass rates don’t actually show this.
The typical argument for pass rates is that they represent the quality of the product. This makes the assumption that the tests represent the ideal product. If they all passed, the product would be error-free (or free enough). Each case is then an important aspect of this ideal state, and any deviation from 100% pass is a failure to achieve the ideal. This isn’t true though. How many times have you shipped a product with 100% passing tests? Why? You probably rationalized that certain failures were not important. You were probably right. Not every case represents this ideal state. Consider a test that calls a COM API and checks the return result. Assume you pass in a bad argument and the return result is E_FAIL. Is that a pass? Perhaps. A lot of testers would fail this because it wasn’t E_INVALIDARG. Fair point. It should be. Would you stop the product from shipping because of this, though? Perhaps not. The reality is that not all cases are important. Not all cases represent whether the product is ready to ship or not.
Another argument is that 100% passing is a bright line that is easy to see. Anything less is hard to see. Did we have 871 or 872 passing tests yesterday? If it was 871 and today is 871, are they the same 129 failures? Determining this can be hard and it’s a good way to miss a bug. It is easy to remember that everything passed yesterday and no bugs are hiding in the 0 failures. I’ve made this argument. It is true as far as it goes, but it only matters if we use humans to interpret the results. Today we can use programs to analyze the failures automatically and to compare the results from today to those from yesterday.
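That last point is easy to demonstrate. A minimal sketch of letting a program do the comparison, assuming each run produces a set of failing test IDs (the IDs below are invented):

```python
# Compare today's failures against yesterday's to surface new breaks,
# rather than eyeballing raw pass counts.
yesterday_failures = {"test_save_user", "test_locale_fr", "test_report_export"}
today_failures = {"test_save_user", "test_locale_fr", "test_login_timeout"}

new_failures = today_failures - yesterday_failures  # regressions to triage
fixed = yesterday_failures - today_failures         # progress since yesterday

print("New failures:", sorted(new_failures))
print("Fixed since yesterday:", sorted(fixed))
```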
As soon as the line is not 100% passing, rates do not matter. There is no inherent difference in the quality of a product with 99% passing tests and the quality of a product with 80% passing tests. “Really?” you say. “Isn’t there a difference of 18%? That’s a lot of test cases.” Yes, that is true. But how valuable are those cases? Imagine a test suite with 100 test cases, only one of which touches on some core functionality. If that case fails, you have a 99% passing rate. You also don’t have a product that should ship. On the other hand, imagine a test suite for the same software with 1000 cases. Imagine that the testers were much more zealous and coded 200 cases that intersected that one bug. Perhaps it was in some activation code. These two pass rates then represent the exact same situation. The pass rate does not correlate with quality. Likewise, one could imagine a test suite of 1000 cases where the 200 failures were all bugs in the tests themselves. That is an 80% pass rate and a shippable product.
The critical takeaway is that bugs matter, not tests. Failing tests represent bugs, but not equally. There is no way to determine, from a pass rate, how important the failures are. Are they the “wrong return result” sort or the “your API won’t activate” sort? You would hold the product for the second, but not the first. Test pass/fail rates do not provide the critical context about what is failing, and without that context, it cannot be known whether the product should ship or not. Test cases are a means to an end. They are not the end in themselves. Test cases are merely a way to reveal the defects in a product. After they do so, their utility is gone. The defects (bugs) become the critical information. Rather than worrying about pass rates, it is better to worry about how many critical bugs are left. When all of the critical bugs are fixed, it is time to ship the product, whether the pass rate is high or low.
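Put concretely, a release gate built on this idea queries the remaining bugs by severity instead of computing a pass percentage. A hedged sketch, with invented bug records:

```python
# Ship decision driven by remaining critical bugs, not by pass rate.
open_bugs = [
    {"id": 101, "severity": "critical", "title": "API won't activate"},
    {"id": 102, "severity": "low", "title": "wrong return result for bad arg"},
]

blocking = [bug for bug in open_bugs if bug["severity"] == "critical"]
if blocking:
    print("Hold the release for:", [bug["id"] for bug in blocking])
else:
    print("No critical bugs remain; ship, whatever the pass rate is.")
```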
All that being said, there is some utility in driving up pass rates. Failing cases can mask real failures. Much like code coverage, the absolute pass rate does not matter, but the act of driving the pass rate up can yield benefits.
http://blogs.msdn.com/b/steverowe/archive/2010/03/17/pass-rates-don-t-matter.aspx
Thursday, August 23
Test Developers Are Real Developers
Through a few twists of fate, I ended up at Microsoft as a test developer (lead). It’s not something I ever considered doing before landing here, and I’m sure it is not something a lot of you have thought much about. The goal of this post is to introduce you to who test developers are, where they fit into the team, and what sort of work they do.
Most people seem to have a view of the software world divided into two sorts of people: developers and testers. Devs are the real heroes of software. They get to write code. Testers are a necessary evil. They help to keep the developers from screwing up too badly but don’t produce any positive results. I won’t go into the role of test in this post. I’ll save that for another day. The thing is, most people think of testers as the people who push a lot of buttons and try to break things. They run tests. Perhaps they even cobble together some scripts and batch files or a VisualTest “program” to help them out. That sort of tester does exist. I’ll call them runtime testers or software test engineers (STEs). At Microsoft, however, a lot of our testing ranks are filled with the rare breed known as the test developer (SDE/T).
So what is this test developer then? Is he someone who just isn’t quite good enough to be a “real” developer? I posit that the answer is no. A good test developer is as good as any developer writing production code. A good test developer is worth his weight in gold. A test developer’s job is to develop software which exercises a product to ensure its correctness and gauge its stability. He has to exercise every corner of the code before it ships, and usually before it is documented. On my team, test developers are developers who know how to test, not testers who know how to develop.
In my business, which is multimedia (audio and video rendering), we produce not polished applications but the APIs which underlie such programs as Windows XP Media Center Edition and Windows Media Player. Countless other applications use our APIs underneath their games, video editing suites, and media players. Our work product forms the building blocks of many an application. Whenever you see video or hear sound from your computer, we’re probably underneath doing something.
So, who is it that makes sure that these APIs actually work as advertised? It is the role of the test developer to make sure that developers using our APIs have a pleasant experience. We are the front line of defense for the developers out there in the “real world” trying to use Windows to make a buck or two. We are the first ones to try to implement anything on top of those interfaces. It is our job to verify that they work as advertised.
We accomplish this by writing a lot of code to exercise the interfaces. While it starts with the fairly mundane testing of each method, it doesn’t end there. It includes writing sample code (which may ship in the SDK) to do exactly what those coming after will try to do. It involves trying to simulate the end-to-end scenarios this component is likely to be used for. Most interesting is trying to validate that the right action just took place. Anyone can call an API and determine if it reports that it did the right thing. It is a whole other ballgame to try to verify that the audio mixing component you are testing really did mix the audio correctly, or that the video decoder really output the picture you expected and not some unintelligible pattern of greens and pinks. The role of the test developer is to do all of this, and to do it without an SDK to guide him. He’s the first one trying it, after all.
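As one illustration of validating that the right action took place, the test usually compares the produced output against an independently computed reference instead of trusting a return code. A minimal sketch with invented sample data for an audio mixer:

```python
# Verify an audio mixer actually mixed correctly by comparing its output
# to an independently computed reference, within a tolerance.
# All sample data here is invented for illustration.

def rms_error(produced, reference):
    # Root-mean-square difference between two equal-length signals.
    assert len(produced) == len(reference)
    return (sum((p - r) ** 2 for p, r in zip(produced, reference)) / len(produced)) ** 0.5

track_a = [0.2, 0.4, -0.1, 0.0]
track_b = [0.1, -0.2, 0.3, 0.5]
reference_mix = [(a + b) / 2 for a, b in zip(track_a, track_b)]

# Pretend this came from the component under test.
produced_mix = [0.15, 0.10, 0.10, 0.25]

# Small numeric drift is acceptable; structural errors are not.
assert rms_error(produced_mix, reference_mix) < 0.01, "mixer output deviates from reference"
print("mix verified against reference")
```

Video output can be checked the same way, for example by comparing decoded frames against reference frames with a similarity metric rather than an exact byte match.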
Test developers are tasked with one thing: writing code, and writing it first. They are always breaking new ground, going where no one has gone before. It is a challenging and yet rewarding endeavor. Great software is not written by some developers and their unit tests. It takes someone to go beyond the granular and ensure that the whole package works together. When your deliverable is not a whiz-bang UI but rather a set of building blocks for someone else, that task falls upon the shoulders of the test developer.
That is probably enough for now. Anyway, if you are out there looking for work as a developer, don't skip over all those SDE/T jobs. Our code may not ship in the box, but it's not a job for someone who isn't good enough to be a “real” dev.
http://blogs.msdn.com/b/steverowe/archive/2004/03/26/96663.aspx
Wednesday, August 22
Recommended Books On Testing
Another question I was asked via e-mail: "Do you know of any good books that explain the testing process in detail? I noticed you mentioned Debugging Applications for .NET and Windows, but I am looking for a book that really explains the basics of 'I have this API/application and I want to test it'."
Let me say up front that I’m not a big fan of testing books. Most of what they say is obvious and they are often long-winded. You can learn to test more quickly by doing than by reading. Unlike programming, the barrier to entry is low and exploration is the best teacher. That said, Cem Kaner’s Testing Computer Software gets a lot of good marks. I’m not a huge fan of it (perhaps I'll blog on my disagreements later) but if you want to speak the language of the testing world, this is probably the book to read. Managing the Testing Process by Rex Black gives a good overview of the testing process from a management perspective. Most testing books are very process-intensive. They teach process over technique. How to Break Software by James Whittaker is more practical. I have read some of the book and have heard James Whittaker speak. As the title indicates, the intent of the book is to explain where software is most vulnerable and the techniques it takes to break it at those points.
Part of the difficulty with testing books is that there are so many kinds of testing. Testing a web service is fundamentally different than testing a GUI application like Microsoft Word, which is again wholly different than testing a multimedia API like DirectShow. Approaches to testing differ also. Some people have a lot of manual tests. Some automate everything with testing tools like SilkRunner or Visual Test. Others write code by hand to accomplish their testing. The latter is what my team does. Most books on testing will either distill this down to the basics--at which point you have no real meat left--or they will teach you a little about everything and not much about anything. Read the three books I call out above but make sure to adapt everything they say to your specific situation. Treat them as food for thought, not instruction manuals.
Do you have a favorite testing book that I haven't mentioned? Share your knowledge with others in the comments.
http://blogs.msdn.com/b/steverowe/archive/2005/02/24/recommended-books-on-testing.aspx
Monday, August 20
How to Write a Software Testing Weekly Status Report
Writing an effective status report is as important as the actual work you did! So how do you write an effective report on your weekly work at the end of each week?
Here I am going to give some tips. The weekly report is important for tracking important project issues, project accomplishments, pending work, and milestone analysis. You can even use these reports to track the team's performance to some extent. From this report, prepare future action items according to their priorities and make the list of next week's actionables.
So how do you write the weekly status report?
Follow the template below:
Prepared By:
Project:
Date of preparation:
Status:
A) Issues:
Issues holding the QA team back from delivering on schedule:
Project:
Issue description:
Possible solution:
Issue resolution date:
You can mark these issues in red. These are the issues that require management's help in resolving them.
Issues that management should be aware of:
These are the issues that do not hold the QA team back from delivering on time, but management should be aware of them. Mark these issues in yellow. You can use the same template above to report them.
Project accomplishments:
Mark them in green. Use the template below.
Project:
Accomplishment:
Accomplishment date:
B) Next week Priorities:
List next week's actionable items in two categories:
1) Pending deliverables (mark them in blue): these are the previous week's deliverables, which should be released as soon as possible this week.
Project:
Work update:
Scheduled date:
Reason for extending:
2) New tasks:
List all of next week's new tasks here. You can use black for these.
Project:
Scheduled Task:
Date of release:
C) Defect status:
Active defects:
List all active defects here, with reporter, module, severity, priority, and assignee.
Closed Defects:
List all closed defects, with reporter, module, severity, priority, and assignee.
Test cases:
List the total number of test cases written, test cases passed, test cases failed, and test cases yet to be executed.
This template should give you an overall idea of the status report. Don't ignore the status report: even if your managers are not forcing you to write these reports, they are most important for your future work assessment.
Try to follow a report-writing routine. Use this template, or at least try to report in your own words on the overall work, so that you can keep track of it.
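If your team already tracks this data, the weekly report can be assembled mechanically. Here is a minimal sketch that renders part of the template above from structured data; all names and numbers are invented examples.

```python
# Render a weekly QA status report from structured data.
report = {
    "prepared_by": "Your Name",
    "project": "Project-X",
    "issues_blocking": [
        {"description": "Test environment down", "resolution_date": "2012-09-03"},
    ],
    "accomplishments": ["Regression suite completed for module A"],
    "defects": {"active": 12, "closed": 30},
    "test_cases": {"written": 150, "passed": 120, "failed": 10, "pending": 20},
}

lines = [f"Prepared by: {report['prepared_by']}",
         f"Project: {report['project']}", ""]
lines.append("A) Issues blocking delivery:")
for issue in report["issues_blocking"]:
    lines.append(f"  - {issue['description']} (target: {issue['resolution_date']})")
lines.append("Project accomplishments:")
lines.extend(f"  - {item}" for item in report["accomplishments"])
lines.append(f"C) Defect status: {report['defects']['active']} active, "
             f"{report['defects']['closed']} closed")
tc = report["test_cases"]
lines.append(f"Test cases: {tc['written']} written, {tc['passed']} passed, "
             f"{tc['failed']} failed, {tc['pending']} to be executed")
print("\n".join(lines))
```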
http://www.softwaretestinghelp.com/how-to-write-software-testing-weekly-status-report/
Thursday, August 16
Approaches To Unit Testing
I was recently involved in a discussion about unit testing. I'll simplify the issues; there are many more arguments each way, and the area is more complex than presented here. I will lay out the supposed advantages and disadvantages of each side of the issue.
Unit testing has been made quite popular lately with the advent of XP (eXtreme Programming). The idea is fairly simple: have developers write tests to verify that their code works as intended. It is after this point that views diverge. Well, some diverge before that and think that developers shouldn't be bothered to write tests, but that, as they say, is a topic for another day. The XP community seems to think that unit testing should be done at a very granular level. Each individual function or object should be tested. Others think that the unit tests should be more holistic.
Granular unit testing is often synonymous with unit testing. In it, developers test their code directly, trying each input and failing if the expected output isn't returned. Often scaffolding is used to instantiate the object or call the function apart from the surrounding code. Mock objects might also be used to insulate the real code from its reliance upon systems below, above, or beside it. The advantage of this technique is that it is fast and thorough. Each function, method, or class can be tested fully. Each piece of code, even those difficult or impossible to reach from the public interfaces, can be tested. The disadvantage is that testing each piece in isolation doesn't test the system as a whole. It doesn't test the interaction between the code that will be interacting in the real world. It is also not useful to test the shipping binary. As each object is tested by standalone code, you don't get to see how the whole system really works.
The other form of unit testing is testing only from exposed interfaces. This is sometimes known as functional testing. The only item being tested is the shipping binary and the only entry points used are those available on that shipping binary. The advantages of this type of testing are that it tests the whole system. It tests the interaction between each part of the system as it will be used in the real world. These tests are also easily utilized by the test team to run as part of their testing suite. The disadvantage is that it can be hard or even impossible to test many of the system internals. Sometimes a simple interface can have a lot of code hidden behind it. It also requires a greater amount of code to be written before the testing can even begin.
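The contrast between the two approaches is easiest to see side by side. Below is a minimal sketch built around an invented toy system: the granular tests isolate an internal function directly (one of them with a mock), while the functional test goes only through the public entry point.

```python
# A toy system: an internal pricing function plus the public interface that ships.
import unittest
from unittest import mock

def _compute_tax(amount):    # internal detail, not part of the public API
    return round(amount * 0.08, 2)

def checkout_total(amount):  # the exposed, "shipping" entry point
    return amount + _compute_tax(amount)

class GranularTests(unittest.TestCase):
    def test_tax_calculation_in_isolation(self):
        # Tests one function directly, including paths hard to reach from outside.
        self.assertEqual(_compute_tax(100.0), 8.0)

    def test_checkout_with_mocked_tax(self):
        # The mock insulates checkout_total from the tax logic it relies upon.
        with mock.patch(f"{__name__}._compute_tax", return_value=5.0):
            self.assertEqual(checkout_total(100.0), 105.0)

class FunctionalTests(unittest.TestCase):
    def test_checkout_through_public_interface_only(self):
        # Exercises the shipping surface exactly as a real caller would.
        self.assertEqual(checkout_total(100.0), 108.0)

if __name__ == "__main__":
    unittest.main()
```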
In this particular discussion that I had, the advocacy was for the second type of testing only. I advocated a blended approach thinking that would cover all bases. It was argued that the granular form of testing tests at too low a level--that most bugs are found in the interaction between components and not in the functions themselves. It therefore made little sense to even write them.
What do you think? Do isolated unit tests have much return on investment or are they better off left undone? Have you had any experience with either of these approaches? If so, did it work out well or poorly?
http://blogs.msdn.com/b/steverowe/archive/2004/10/30/249913.aspx
Software Testing Metrics
What are the types of metrics?
Explanation: Metrics are used to track and measure the entire testing process. These test metrics are collected at each phase of the testing life cycle/SDLC and analyzed, and appropriate process improvements are determined and implemented as a result. They are constantly collected and evaluated as a parallel activity alongside testing, for both manual and automated testing, irrespective of the type of application.
They are broadly classified into three categories:
Project-related metrics: such as test size, number of test cases created per day, number of test cases tested per day (for both manual and automated testing), total number of review defects (RD), total number of testing defects (TD), etc.
Process-related metrics: such as schedule variance, effort variance, and review defects.
Schedule Variance = ((Actual No. of Days - Planned No. of Days) / Planned No. of Days) * 100
The data source would be your project plan.
Effort Variance = ((Actual Effort - Planned Effort) / Planned Effort) * 100
The data source would be your project plan.
Review Defects = (Total Number of Defects Found During Review / Size (No. of Pages)) * 100
The data source would be the review records.
Customer-related metrics: such as percentage of defects leaked per release, percentage of automation per release, application stability index, etc.
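These formulas are simple enough to compute mechanically. A minimal sketch; the input numbers are invented examples and would in practice come from the project plan and the review records.

```python
# Compute the process-related metrics from the formulas above.

def variance_pct(actual, planned):
    # (Actual - Planned) / Planned * 100
    return (actual - planned) / planned * 100

schedule_variance = variance_pct(actual=22, planned=20)  # days
effort_variance = variance_pct(actual=180, planned=160)  # person-hours

review_defects_found = 12
document_pages = 48
review_defect_density = review_defects_found / document_pages * 100

print(f"Schedule variance: {schedule_variance:+.1f}%")
print(f"Effort variance: {effort_variance:+.1f}%")
print(f"Review defects: {review_defect_density:.1f} per 100 pages")
```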
Source: http://ssoftwaretesting.blogspot.mx/2010/08/what-are-two-types-of-metrics.html
Thursday, August 9
So You Want to Be a Test Developer
So you have an interest in becoming a test developer, eh? You've come to the right place. In this paper I will lay out what I think is the best path to becoming a test developer, or really any sort of developer. Test developers are really just developers who happen to report in the test org. Becoming a good test developer requires learning not only a programming language but also what to do with it. You'll need to learn not only C++ but also about operating systems, algorithms, object-oriented programming, etc. Becoming a test dev won't be easy. It is going to take a lot of reading and a lot of practice. If you stick with the regimen, you'll be a real developer. You'll have much of what someone would have if they had acquired a CS degree. If not, well, you'll be where you are today.
Starting With a Language
The place to start is obviously with a language. If you don’t know how to program, you won’t be much of a programmer. It is important to know, however, that knowing a language is just the beginning. Too many people equate knowing C/C++ with knowing how to program. Sadly, this is not the case. Think of an author as an example. Learning the English vocabulary and the rules of grammar will make you a proficient English speaker. That does not, however, make you a great writer. Mark Twain wasn’t a great writer because he knew how to diagram sentences. Knowing what to say is even more important than knowing how to say it. It is the same way with a programming language. You can know C/C++ forward and backward and still be a bad programmer.
The first decision you should make is what language to use to start your journey. As you progress, you will probably pick up other languages, but you have to start somewhere. You should start with a language that encourages you to do the right things. C# is an easy language to begin with. C is simple and easy to learn. C++ is its more complex big brother. Most system-level work is done in C++. That's probably where you want to start. If something like VB or Perl strikes your fancy, you can use them, but know that you'll be limited in where you can utilize your skills.
To learn C++, I recommend starting with a book. The two best ones I know of are C++ Primer by Stanley Lippman and Thinking in C++ by Bruce Eckel. I prefer these over the “24 Hours” or “21 Days” sort of books. They may claim to teach you C++ quickly but they won’t. C++ is huge. It will take some time to master. Pick one and read it cover to cover. Do the exercises if you are that sort of person. These are both thick books. Don’t be intimidated. They’ll both cover everything you need to move on to the next phase of programming.
Don’t stop there though. You need to put this into action for it to make sense. Come up with a project and start implementing it. The project can be anything. Implement a tool you’ve been itching to have, write a game, whatever. I started by trying to implement the game of Robo Rally. I never finished. You might not either. The point isn’t to complete the project. It is to come to an understanding of how the parts work together. Books teach us to program in discrete little packets. Each packet makes sense atomically. The point of writing a real program is to understand how these packets interact.
If you want to become wiser in the ways of C++, there are several other books you might want to read including Effective C++ by Scott Meyers and Inside the C++ Object Model by Stanley Lippman. Neither book is essential but both will help make you a better programmer.
Going Beyond Language
As I said, learning a language is just a first step. Once you are here, you are proficient in some language (C++ most likely). You are conversant in a language. You are still a terrible writer. The next step is to learn what makes a good program.
This is where things get more complicated. Learning pointers is hard but understanding algorithms is harder. If your math is weak, you’ll struggle with some of these concepts. Fear not. None of this is rocket science. It just may take you a bit longer. If necessary, put down the book you are reading, read some math, then come back. This should help you get further. If you run into a roadblock, learn more math and continue until you get through.
There are three major subjects you need to come to understand. The first is data structures and algorithms. The second is operating systems and processors. Finally, there is object-oriented programming. Once you have these three areas understood, you’ll be well on your way to becoming a good programmer.
Algorithms are what make your programs run fast or slow. When you are dealing with only a small amount of data, the algorithm you use doesn’t matter too much. However, when you are dealing with hundreds of thousands or millions of items, the algorithm can matter a lot. It is important to understand the types of algorithms available and to understand how to measure their efficiency. Different ways of sorting would be an example of algorithms. Data structures are ways to organize large amounts of data and maintain accessibility. Examples are linked lists, trees, hash tables, etc. Two books which are used a lot for this are Introduction to Algorithms by Thomas Cormen and Algorithms in C++ by Robert Sedgewick.
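To see why this matters at scale, here is a small sketch contrasting a quadratic sort with the library sort as the input grows; the sizes are arbitrary.

```python
# Contrast an O(n^2) insertion sort with Python's built-in O(n log n) sort.
import random
import time

def insertion_sort(items):
    items = list(items)
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:  # shift larger elements right
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

for n in (500, 1000, 2000):
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    insertion_sort(data)
    slow = time.perf_counter() - start
    start = time.perf_counter()
    sorted(data)
    fast = time.perf_counter() - start
    print(f"n={n}: insertion sort {slow:.4f}s vs built-in {fast:.5f}s")
```

On small inputs the difference is invisible; double n a few times and the quadratic cost becomes impossible to ignore.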
Understanding how processors and operating systems work will help you understand why the language and the OS do what they do. Many things that C/C++ do make more sense once you understand how the underlying system works. Understanding the OS and being able to read assembly language will come in invaluable when it comes time to debug. The best OS book I know of is Operating System Concepts by Abraham Silberschatz. It is accessible and covers the critical topics. For processor/assembly topics, look to Computer Organization and Design by David Patterson.
Finally, there is object-oriented programming. Learning about OO techniques will help you understand how to write a program that is maintainable and extensible. The best place to start is with Design Patterns by Gamma et al. I suggest using the book to understand how patterns work in general, not to memorize every pattern. A good book showing the application of patterns and good OO design techniques is Design Patterns Explained by Alan Shalloway.
Where to Next?
So you read all 5,000 pages and you still want more? What can you study next? The tree branches from here. There are lots of possibilities. Some good places to look for general programming are debugging and good programming practices. You can also start going down a specific path such as Windows programming, graphics, ATL/COM programming, etc.
Programmers spend an inordinate amount of time debugging the code they just wrote or even code written by others. Debugging is an art, not a science. The only way to get good at it is practice. There are some books you can read which will give you a jump start though. The best of these is Debugging Applications by John Robbins.
Learning how to become a better programmer also takes practice, but there are a few books which give you some good pointers. I won’t list them all here. Everyone raves about Code Complete by Steve McConnell. I don’t. I find it long-winded. A better book is The Practice of Programming by Brian Kernighan and Rob Pike. Another book which is really useful is Refactoring by Martin Fowler. He explains how to take old code and make it better. Like Design Patterns, I find the first several chapters more useful than the reference that makes up the bulk of the book. Finally, go find a copy of your group’s coding standards document. Oftentimes these are chock-full of good techniques. They will usually tell you what sort of things to avoid and what good practices to emulate.
http://blogs.msdn.com/b/steverowe/archive/2005/01/06/so-you-want-to-be-a-test-developer.aspx
Boundary Value Analysis Technique - Black Box Testing
This technique complements equivalence partitioning. Instead of selecting any element of an equivalence class, boundary value analysis (BVA) leads to the selection of test cases "at the edges" of the class. Rather than focusing solely on input conditions, BVA also derives test cases for the output domain.
1. If an input condition specifies a range delimited by the values a and b, test cases should be designed for the values a and b, and for values just below and just above a and b, respectively.
2. If an input condition specifies a number of values, test cases should be developed that exercise the maximum and minimum values.
The values just below the maximum and the minimum should also be tested.
3. Apply guidelines 1 and 2 to the output conditions.
For example, suppose a table is required as the output of a program; then test cases should be designed to create an output report that produces the maximum (and the minimum) allowed number of entries in the table.
4. If internal data structures have prescribed limits (for example, an array of 100 entries), be sure to design a test case that exercises the data structure at its limits.
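Guideline 1 lends itself directly to automation. A minimal sketch that derives the boundary test inputs for an integer range [a, b]:

```python
# Boundary value analysis inputs for an integer range [a, b]:
# the boundaries themselves plus the values just below and just above each.

def boundary_values(a, b):
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# Example: a field that accepts ages 18 through 65.
print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]
```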
Labels:
Boundary Value Analysis,
Black Box,
Tools,
Techniques,
Testing Techniques,
Testing
Location:
45500 Tlaquepaque, JAL, Mexico
Tuesday, August 7
SQA Processes: How to Test a Complete Application?
Is there any standard procedure to test an application as a whole? And how can I test a complete application, right from requirement gathering?
Here are the broad steps to test an application. These are the standard SQA processes to be followed in any application testing:
- Marketing Requirements
Objectives remaining to be completed are carried forward to the next release.
- Branching for the development cycle
- Objective setting for the major and customized releases
- A detailed project plan and the release of design specifications
- QA – Develop a test plan based on the design specifications
This includes objectives, the methodology adopted while testing, features to be tested and not to be tested, risk criteria, the testing schedule, cross-platform support, and the resource allocation for testing.
Feature test plan: this document describes how the testing will be carried out for each type of testing.
- QA – Functional test specifications
Application development: considering the product features, the tester has to prepare his/her own test suite.
- QA – Writing of Test Cases (CM)
- Sanity Test cases
- Regression Test Cases
- Extended Test Cases
- Negative Test Cases
- Development – Proceeds module by module.
- Installer binding and building
- Release engineering is responsible for releasing the builds.
- Release engineering collects the fresh/developed code from the developers, initially module-wise.
- Installers have to take care of jar file updates, registry entries, and additional software installations.
- Build procedure:
- For each build, one or two CDs are made, and the builds are pushed to Pune/Chennai based on priority.
- Each product build is received in the form of zip files under http://pun-qaftp/ftproot/QA-Builds (Pune-specific). The same procedure applies for Chennai. San Jose QA picks up builds from Nexus (a build-specific server).
- The zip files are unzipped to get the desired installer, which is put in the Pun_fps1/QA/Builds folder (Pune-specific).
- QA – Testing:
- Smoke test results have to be posted on http://Chaquita, which is the official site for posting smoke test results and sharing basic testing information.
- Prepare the smoke test procedure.
- QA – Extensive testing:
- Document review
- Cross-platform testing
- Stress testing and memory leak testing
- QA – Bug reporting:
- Further FET (fixed effectiveness testing) of the filed bugs
- Exception study and verification
- Development – Code freeze
- QA – Testing
- QA – Media verification
- Decision to release the product
- Post-release scenario:
A post-release review meeting is held to decide upon the objectives for the next release.
http://www.softwaretestinghelp.com/sqa-processes-how-to-test-complete-application/
Monday, August 6
How to Improve Tester Performance?
Many companies don't have the resources, or can't afford, to hire the required number of testers on a project. So what could be the solution in this case?
The answer is simple: companies will prefer to have skilled testers instead of an army of testers!
So how can you build skilled testers on any project?
You can improve a tester's performance by assigning him/her to a single project.
Due to this, the tester will gain detailed knowledge of the project domain, can concentrate well on that project, and can do R&D work during the early development phase of the project.
This builds not only his/her functional testing knowledge but also the project domain knowledge.
Company can use following methods to Improve the Testers performance:
1) Assign one tester to one project for long duration or to the entire project. Doing this will build testers domain knowledge, He/She can write better test cases, Can cover most of the test cases, and eventually can find the problem faster.
2) Most of the testers can do the functional testing, BV analysis but they may not know how to measure test coverage,How to test a complete application, How to perform load testing. Company can provide the training to their employees in those areas.
3) Involve them in all the project meetings, discussions, project design so that they can understand the project well and can write the test cases well.
4) Encourage them to do the extra activities other than the regular testing activities. Such activities can include Inter team talk on their project experience, Different exploratory talks on project topics.
Most important is to give them freedom to think outside the box so that they can take better decision on Testing activities like test plan, test execution, test coverage.
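As a concrete refresher for point 2 above, here is a minimal boundary-value sketch; the is_valid_age function and the 18-60 range are hypothetical, not from the article. BV analysis probes each boundary and its immediate neighbors rather than arbitrary mid-range values.

    # Hypothetical boundary-value sketch: a field that must accept
    # ages 18..60 inclusive is probed just below, on, and just above
    # each edge of the valid range.
    def is_valid_age(age):
        return 18 <= age <= 60

    cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

    for age, expected in cases.items():
        assert is_valid_age(age) == expected, f"unexpected result for age {age}"
    print("all boundary-value checks passed")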
http://www.softwaretestinghelp.com/how-to-improve-tester-performance/
Too Much Test Automation?
There was a time when testing software was synonymous with manual testing. Now with the rise of test development and the advent of unit testing, automation is becoming more and more prevalent. Test automation has many benefits but it is not a silver bullet. It has its costs and is not always the right answer. Software development organizations must be careful not to let the pendulum swing too far in the direction of automation. Automating everything is a way to guarantee you miss many bugs. It is incumbent upon those in charge of test organizations to find a balance between automated and manual testing.
It is best to start out with a definition. Test automation is code or script which executes tests without human intervention. Simple automation will merely exercise the various features of the product. More advanced automation will verify that the right actions took place. Results of these tests are stored in log files or databases where they can be rolled up into reports listing a pass/fail rate.
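As a minimal illustration of that definition (assumed code, not from the post), the sketch below exercises a stand-in product function without human intervention, verifies the results, and appends pass/fail records to a log file from which a report could later be rolled up. The product_add function and the results.log path are hypothetical.

    # Minimal automation sketch: run tests unattended, verify results,
    # and log pass/fail records for later roll-up into a report.
    import datetime

    def product_add(a, b):  # hypothetical stand-in for a product feature
        return a + b

    tests = [
        ("add_small", lambda: product_add(2, 3) == 5),
        ("add_negative", lambda: product_add(-2, 2) == 0),
    ]

    with open("results.log", "a") as log:
        for name, check in tests:
            outcome = "PASS" if check() else "FAIL"
            log.write(f"{datetime.datetime.now().isoformat()} {name} {outcome}\n")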
Automation has many advantages. Manual testing is expensive. It requires people to click buttons and observe results. This isn’t terribly expensive the first time through but the costs stay fixed as the product progresses. If you have a daily build (as you should), you pay the cost daily. This quickly adds up. Automation is expensive to create but the incremental cost is very cheap. Automation is also consistent. No one will ever forget to run a test case with automation.
Because it is cheap, automation can be run in places where manual testing cannot. Extensive tests can be run on daily builds. These can even be run in the wee hours of the morning before everyone shows up for the day. The inexpensive nature of automated testing is what allows unit tests and test driven development to be possible.
Some test cases cannot easily be done manually. When testing an API such as the DirectShow API I often work with, there is no UI to drive it. At least some minimal coding must be done to expose the API to a user. In these cases, automation is the obvious choice.
Despite its advantages, automation is not a panacea. First, it is not free. Automation is expensive to create. If you are able to amortize that cost over a lot of runs, the incidental cost becomes low. On the other hand, if the test is something that will only be run a few times, automation may be more expensive than manual testing. Decision makers must consider the high initial cost before committing their organization to automated tests.
Secondly, automation is limited in scope. After you have run your automated tests for the first time, you are done finding new bugs. Never again will you find a new issue. You might catch a regression, but if you missed the fact that clicking a particular combination of buttons causes the app to crash, you'll never find that issue. This is true no matter how many times you run the automation. On the other hand, high quality manual testers will take the opportunity to explore corner cases. In doing this, they will find many issues that would otherwise go unnoticed until real users find them after the product is shipped. This lack of exploratory ability is, in my mind, the Achilles heel of automated testing.
The third drawback of automation is that there are things that simply cannot be automated well. I work in the field of audio-video playback. Trying to automate video quality tests is hard. It is very expensive: specific media must be paired with specific test cases. If you have to account for variable content (say, testing television tuners) or if the algorithm is not fixed (varying video cards or decoders), the task becomes even harder. Using something like PSNR is possible but is not well tuned to the human visual system and thus is susceptible to false positives and false negatives. Sometimes there is no replacing a person staring at the screen or listening to the speakers.
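For readers unfamiliar with it, PSNR is the ratio 10*log10(MAX^2/MSE) between the maximum pixel value and the mean squared error of a decoded frame against a reference. The sketch below is a generic computation in Python with NumPy (not the author's tooling); the weakness described above is that a single global decibel number says nothing about where, or how visibly, the error appears to a human viewer.

    # Generic PSNR sketch: compare a decoded frame against a reference;
    # a higher decibel value means a closer pixel-level match.
    import numpy as np

    def psnr(reference, decoded, max_val=255.0):
        mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10(max_val ** 2 / mse)

    # Two synthetic 8-bit "frames" stand in for real video content.
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
    noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(ref, noisy):.1f} dB")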
Other problems also exist with automated tests. Bugs in an automated test may mask a bug in the product and go unnoticed for long periods of time. Automated testing does not emulate real users using a product nearly as well as a real person. Manual tests can be brought online more quickly than automated tests allowing bugs to be found sooner.
What then, is a test manager to do? Manual testing is too expensive but automated testing is imperfect. It is imperative that the two sorts of testing are mixed. Automated testing should be used to verify basic quality but cannot be used exclusively. There need to be real people simulating the experience of real users. With the hiring of developers rather than scripters as test developers, the quality of automated tests can be made very high. Unit testing can ensure that automated tests are available early in the cycle. With the increased sophistication of test developers and automation techniques, automated testing can take a larger role than in the past. It cannot, however, replace real testers.
http://blogs.msdn.com/b/steverowe/archive/2005/01/28/362668.aspx
What is the difference between regression testing and re-testing?
This is one of the most commonly asked questions in any software testing interview. In this article I will explain the difference between regression testing and retesting.
Regression Testing is the execution of all, or a select group, of test cases that have passed on an earlier build or release of the application under test, so that you can validate that the original functions and features are still working as they did in the past.
The test cases we use are derived from the functional specification, the user manuals, user tutorials, and defect reports in relation to corrected problems.
The first regression test should be done using functional test cases to make sure no new defect has been introduced. In addition, we repeat some of the tests using a performance tool to see how the new version behaves with regard to time and memory usage.
In most organizations, regression testing is incremental: as we add new functionality or correct a defect, we add test cases, which are included in subsequent rounds of regression testing.
In addition to new software releases at our end, we repeat the tests when MS introduces a new service pack, and to check ongoing compatibility with supported legacy operating systems.
As you can probably see from the above, automation is the key for regression testing. If you haven't gone down the automation route yet, regression testing is a good time to consider it. Manual regression testing tends to get more expensive with each new release.
Some possible failures of a regression test could be due to new functionality, changes in other applications or hardware that interface with your app, or even changes in the environment, like an updated browser, security updates, or a change in the screen controls.
Retesting is the execution of one or a set of test cases that previously failed due to a suspected defect in the software which is now documented as remedied.
However, the fix may introduce a fault elsewhere in the software. The way to detect these "unexpected side effects" of fixes is to do regression testing, as mentioned above.
In conclusion: in retesting we check the defects that were logged by the tester and fixed by the developer, verifying that each particular bug has actually been fixed. In regression testing, we not only check that the bug has been fixed but also make sure that the code change made to fix it has not impacted any other part of the application. Regression testing is thus a kind of testing of the whole scenario after every bug fix; a small illustrative sketch follows.
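Here is a small hypothetical sketch (runnable with pytest) of that distinction. Suppose bug #4321 was "discount ignored for orders of exactly 100 units": the retest re-runs exactly the case that failed, while the regression cases confirm that previously passing behavior around it still holds. The order_total function and the bug number are illustrative.

    # Hypothetical sketch of retesting vs. regression testing.
    def order_total(quantity, unit_price):
        subtotal = quantity * unit_price
        # Fix for bug #4321: discount now applies at 100 units, not only above.
        return subtotal * 0.9 if quantity >= 100 else subtotal

    def test_retest_bug_4321():
        # Retesting: re-run exactly the previously failed case.
        assert order_total(100, 1.0) == 90.0

    def test_regression_below_threshold():
        # Regression: cases that passed before the fix must still pass.
        assert order_total(99, 1.0) == 99.0

    def test_regression_above_threshold():
        assert order_total(150, 1.0) == 135.0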
Thursday, August 2
TDD - First Impressions
I spent the last two days in a class covering test driven development, unit testing, and refactoring. I hope to provide a more detailed discussion of what I learned at some later point, but for now I thought I'd post my initial impressions. I've read a lot about TDD, unit testing, refactoring, etc., but I'd never actually *done* test driven development. The class had several hands-on exercises. We were using C# and NUnit. Let me say right here that NUnit is a slick app. It is unobtrusive, which is exactly what you want in a test harness. This is the report of that experience.
First off, it felt really strange. Generally when developing a program, you think about the big picture first and work your way to the details. With TDD, you end up taking the opposite tack. Because you have to write a test, fail the test, then write the code, you cannot start with the big parts; instead you start with the small internals. Of course it is necessary to put some thought into the bigger picture, but very quickly you are driven into the implementation details and that creates a feedback loop for your big-picture design. This feels strange but you become accustomed to it. The designs that come out seem clean. Forcing testability causes you to think about cohesion, coupling, redundancy, etc.
When you are doing test driven development, you are constantly switching between the editor, compiler, and test harness. You are compiling often. This is a good thing. It means that your code is always working. You don't go on long coding sprees followed by long bug fixing sprees. Instead, you intermingle the two a lot. I find it easier to fix an issue as I'm writing it rather than later when I compile some larger chunk of code.
TDD feels good. Every time you run the unit tests, you get positive feedback that things are working. When you make a change, even a radical change, you know that if all the test cases pass that everything is working. It gives you peace of mind that you cannot get in older coding models. If you've never done any test driven development, give it a try. Who knows, you might like it.
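The class used C# and NUnit; as an illustrative translation of the same red-green rhythm into Python's unittest (a sketch, not an exercise from the class), you first write a failing test and then just enough code to make it pass. The fizzbuzz example is hypothetical.

    # Red-green sketch: the tests below were (notionally) written first
    # and failed; fizzbuzz was then written just to make them pass.
    import unittest

    def fizzbuzz(n):
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)

    class FizzBuzzTests(unittest.TestCase):
        def test_multiples_of_three(self):
            self.assertEqual(fizzbuzz(3), "Fizz")

        def test_multiples_of_fifteen(self):
            self.assertEqual(fizzbuzz(15), "FizzBuzz")

    if __name__ == "__main__":
        unittest.main()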
The class was taught by Net Objectives. This is the 3rd class I've taken from them. I've also attended a few of their free seminars. If you have an interest in OO or Agile techniques, check them out. I highly recommend their work.
http://blogs.msdn.com/b/steverowe/archive/2005/02/03/366681.aspx
Equivalence Partitioning Technique - Black Box Testing
An equivalence partition is a black-box testing technique that divides the input domain of a program into classes of data from which test cases can be derived.
Test case design for equivalence partitioning is based on an evaluation of the equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.
Typically, an input condition is a specific numeric value, a range of values, a set of related values, or a logical condition.
Equivalence classes can be defined according to the following guidelines: if an input parameter must lie within a certain range, three equivalence classes appear: below, within, and above the range.
If an input requires a specific value, three equivalence classes appear: below, at, and above the value.
If an input requires a value from a set, two equivalence classes appear: in the set or outside it.
If an input is boolean, there are two classes: yes or no.
The same criteria apply to the expected outputs: you should try to generate results in each and every class.
Applying these guidelines, test cases are derived for each data element of the input field. The cases are selected so that they exercise the largest number of attributes of each equivalence class at once.
To apply this testing technique, the following steps are taken:
- First, identify the equivalence classes; this is done by taking each input condition and applying the guidelines stated above.
To define the equivalence classes, a set of rules must be taken into account:
If an input condition specifies a range, then one valid and two invalid equivalence classes are defined.
If an input condition specifies a number of values, identify one valid and two invalid equivalence classes.
If an input condition specifies a set of input values and there is reason to believe the program handles each one differently, identify one valid class for each of them plus one invalid class.
If an input condition specifies a "must be" situation, identify one valid class and one invalid class.
If there is reason to believe the program does not handle certain elements of a class identically, split it into smaller equivalence classes.
- Once the valid and invalid classes are defined, proceed to define the test cases; before doing so, a unique identifier must be assigned to each equivalence class.
- The cases can then be defined with the following in mind:
- Write a new case covering as many uncovered valid equivalence classes as possible, until all valid equivalence classes have been covered by test cases.
- Write a new test case covering one and only one invalid equivalence class, until all invalid equivalence classes have been covered by test cases.
Applying this technique yields a set of tests that reduces the number of test cases while still telling us something about the presence or absence of errors. It is often said that software testing never ends; it is simply transferred from the developer to the customer. Every time the customer uses the program, a test is being carried out.
By applying test case design to the software in question, a more complete test can be achieved, and the greatest number of errors can be discovered and corrected before the "customer tests" begin.
As an example, consider the data handled by a banking automation application. This software accepts data in the following form:
Area code: blank or a 3-digit number
Prefix: a 3-digit number not beginning with 0 or 1
Suffix: a 4-digit number
Password: a 6-character alphanumeric value
Commands: "Check", "Deposit", "Pay bill", etc.
The input conditions related to each element of the banking application can be specified as:
Area code: boolean input condition - the area code may or may not be present
Range input condition - values defined between 200 and 999
Prefix: range input condition - specified value > 200
Suffix: value input condition - 4-digit length
Password: boolean input condition - the password may or may not be present
Value input condition - a six-character string
Command: set input condition, contained in the commands listed above
Test cases are selected so as to exercise the largest number of attributes of each equivalence class at once.
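Here is a minimal sketch (illustrative, not part of the original text) of how the area-code classes above turn into concrete test values: one representative per equivalence class, covering the valid blank and in-range classes plus the invalid below-range, above-range, and non-numeric classes.

    # Illustrative sketch: one representative value per equivalence
    # class for the area-code input (blank, or a number in 200..999).
    def valid_area_code(code):
        if code == "":
            return True  # blank is an allowed (valid) class
        return code.isdigit() and 200 <= int(code) <= 999

    representatives = {
        "": True,      # valid class: blank
        "555": True,   # valid class: within 200..999
        "150": False,  # invalid class: below the range
        "1000": False, # invalid class: above the range
        "abc": False,  # invalid class: non-numeric
    }

    for code, expected in representatives.items():
        assert valid_area_code(code) == expected, code
    print("equivalence-class representatives verified")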
Labels:
Black Box,
Testing,
Software Testing,
Techniques,
Testing Techniques,
Tester