Friday, September 28

Why Does Software Have Bugs?

What is a software bug?
A software bug is a failure or flaw in a program that produces undesired or incorrect results. It’s an error that prevents the application from functioning as it should.
Why does software have bugs?
There are many reasons for software bugs. The most common is human error in software design and coding. Once you know the causes of software defects, it is easier to take corrective action to minimize them.

Top 20 reasons for software bugs

1. Miscommunication or no communication
The success of any software application depends on communication among stakeholders and the development and testing teams. Unclear requirements and misinterpretation of requirements are two major factors causing defects in software. Defects are also introduced at the development stage if the exact requirements are not communicated properly to the development team.
2. Software complexity
The complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well-engineered.
3. Programming errors
Programmers, like anyone else, can make mistakes. Not all developers are domain experts, and inexperienced programmers or programmers without proper domain knowledge can introduce simple mistakes while coding. A lack of basic coding practices, unit testing, and debugging is one of the most common reasons issues are introduced at the development stage.
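As a minimal illustration of the unit testing this paragraph mentions, consider the guard an inexperienced programmer often omits. The function and values below are illustrative only, not from the article:

```python
# A classic beginner mistake: forgetting the empty-input case.
def average(values):
    if not values:          # the guard that is often omitted
        return 0.0
    return sum(values) / len(values)

# Two tiny unit checks; without the guard, the second one would
# crash with ZeroDivisionError instead of passing.
assert average([2, 4, 6]) == 4.0
assert average([]) == 0.0
print("unit checks passed")
```

Cheap checks like these, run on every build, are exactly the "simple coding practices" whose absence lets trivial defects reach testing.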
4. Changing requirements
The customer may not understand the effects of changes, or may understand and request them anyway – redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected.
In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.
5. Time pressures
Scheduling software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made. Unrealistic schedules, while not universal, are a major concern in small-scale projects and companies, and they result in software bugs. If there is not enough time for proper design, coding, and testing, defects will inevitably be introduced.
6. Egotistical or overconfident people
People prefer to say things like:
‘no problem’
‘piece of cake’
‘I can whip that out in a few hours’
‘it should be easy to update that old code’
instead of:
‘that adds a lot of complexity and we could end up making a lot of mistakes’
‘we have no idea if we can do that; we’ll wing it’
‘I can’t estimate how long it will take, until I take a close look at it’
‘we can’t figure out what that old spaghetti code did in the first place’
If there are too many unrealistic ‘no problems’, the result is software bugs.

7. Poorly documented code
It’s tough to maintain and modify code that is badly written or poorly documented; the result is software bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it’s usually the opposite: they get points mostly for quickly turning out code, and there’s job security if nobody else can understand it (‘if it was hard to write, it should be hard to read’).
A new programmer starting on such code can easily get confused by the complexity of the project and the poor documentation. Small changes to poorly documented code often take much longer, because there is a steep learning curve before any code change can be made.
8. Software development tools
Visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs. The tools programmers use also change continuously, and keeping pace with the different versions and their compatibility is a major ongoing issue.
9. Obsolete automation scripts
Writing automation scripts takes a lot of time, especially for complex scenarios. If an automation team records or writes a test script but neglects to update it over time, the test can become obsolete. And an automated test that does not validate its results properly will never catch a defect.
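The last point can be sketched in a few lines: an automated check that exercises code but never validates the result always "passes", so it catches nothing. The search() function below is a hypothetical function under test:

```python
# Hypothetical function under test, with a regression:
# search is broken and returns no results at all.
def search(term):
    return []

def obsolete_test():
    search("widgets")        # runs the code, validates nothing
    return "PASS"            # so it always "passes"

def proper_test():
    results = search("widgets")
    return "PASS" if len(results) > 0 else "FAIL"

print(obsolete_test(), proper_test())  # PASS FAIL
```

Only the test that asserts on the result surfaces the defect; the obsolete one quietly hides it.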
10. Lack of skilled testers
Having skilled testers with domain knowledge is extremely important to the success of any project, but not every company can staff a team entirely with experienced testers. Domain knowledge and a tester’s ability to find defects produce high-quality software; compromising on either can result in buggy software.
Here are a few more reasons for software bugs, mostly applicable to the software testing life cycle:
11. Not having a proper test setup (test environment) for testing all requirements.
12. Starting to write code or test cases without clearly understanding the requirements.
13. Incorrect design, which leads to issues being carried through all phases of the software development cycle.
14. Releasing software patches frequently without completing the software testing life cycle.
15. Not training staff in the skills needed to develop or test the application properly.
16. Allowing little or no time for regression testing.
17. Not automating repetitive test cases and depending on testers for manual verification every time.
18. Not prioritizing test execution.
19. Not tracking development and test execution progress continuously; last-minute changes are likely to introduce errors.
20. Wrong assumptions made during the coding and testing stages.
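Item 17 above can be sketched as a small data-driven regression test that replaces repetitive manual verification; the discount rule here is purely hypothetical:

```python
# Hypothetical business rule under test.
def discount(total):
    if total >= 100:
        return 0.10
    if total >= 50:
        return 0.05
    return 0.0

# Each (input, expected) pair is one repetitive manual check,
# now re-run automatically on every build instead of by a tester.
CASES = [(0, 0.0), (49, 0.0), (50, 0.05), (99, 0.05), (100, 0.10)]

failures = [(t, e, discount(t)) for t, e in CASES if discount(t) != e]
print("regression:", "PASS" if not failures else f"FAIL {failures}")
```

Adding a new boundary case is one line in CASES, which is why repetitive checks are cheap to automate and expensive to keep doing by hand.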

http://www.softwaretestinghelp.com/why-does-software-have-bugs/

Thursday, September 27

What is a Test Architect?

I was asked a few questions via mail.  Here is the first of some quick answers to these:
   What is the role of a Test Architect?  There is not a single definition of the test architect role.  A test architect is an advanced test developer whose scope is larger and who solves harder problems than the average SDE/T.  The specifics of what they do vary greatly.  They might be solving hard, particular problems.  They might be the one called in to solve the seemingly intractable issue.  They may be designing a new test platform.  Or, they may be determining group test policy.  Any and all of these fall within the scope of the test architect.  The work they do is often similar to that of an SDE/T.  The difference is often one of scope.  An SDE/T will often own a specific technology or be responsible for implementing a part of a system.  A test architect will own the approach to testing or developing entire systems.
   Are you a test architect or do you have a different idea of what one is?  Please leave your opinions in the comments section.  I'd love to see a dialog develop on this subject.  It's something I'm interested in but that isn't all that well understood yet in most quarters.

http://blogs.msdn.com/b/steverowe/archive/2005/02/23/378710.aspx

Monday, September 24

Why building software isn’t like building bridges


I was having a conversation with a friend the other night and we came across the age-old “software should be like building buildings” argument.  It goes something like this:  Software should be more like other forms of engineering like bridges or buildings.  Those, it is argued, are more mature engineering practices.  If software engineering were more like them, programs would be more stable and projects would come in more on time.  This analogy is flawed.
Before I begin, I must state that I’ve never engineered buildings or bridges before.  I’m sure I’ll make some statements that are incorrect.  Feel free to tell me so in the comments section.
First, making software, at least systems software, is nothing like making buildings.  Engineering a bridge does not involve reinventing the wheel each time.  While there may be some new usage of old principles, there isn’t a lot of research involved.  The problem space is well understood and the solutions are usually already known.  On the other hand, software engineering, by its very nature, is new every time.  If I want two bridges, I need to engineer and build two bridges.  If I want two copies of Windows XP, I only engineer and build it once.  I can then make infinite perfect copies.  Because of this, software engineering is more R&D than traditional engineering.  Research is expected to have false starts, to fail and backtrack.  Research cannot be put on a strict timeline.  We cannot know for certain that we’ll find the cure for cancer by March 18, 2005.
Second, the fault tolerances for buildings are higher than for software.  More often than not, placing one rivet or one brick a fraction off won’t cause the building to collapse.  On the other hand, a buffer overflow of even a single byte could allow for a system to be exploited.  Buildings are not built flawlessly.  Not even small ones.  I have a friend who has a large brick fireplace inside their room rather than outside the house because the builders were wrong when they built it.  In large buildings, there are often lots of small things wrong.  Wall panels don’t line up perfectly and are patched over, walls are not square to each other, etc.  These are acceptable problems.  Software is expected to be perfect.  In software, small errors are magnified.  It only takes one null pointer to crash a program or a small memory leak to bring a system to its knees.  In building skyscrapers, small errors are painted over.
Third, software engineering is incredibly complex—even compared to building bridges and skyscrapers.  The Linux kernel alone has 5.7 million lines of code.  Windows 98 had 18 million lines of code.  Windows XP reportedly has 40 million lines of code.  By contrast, the Chrysler building has 391,881 rivets and 3.8 million bricks.
Finally, it is a myth that bridge and building engineering projects come in on time. One has to look no further than Boston's [thanks Mike] Big Dig project to see that.  Software development often takes longer and costs more than expected.  This is not a desirable situation and we, as software engineers, should do what we can to improve our track record.  The point is that we are not unique in this failing.
It is incorrect to compare software development to bridge building.  Bridge building is not as perfect as software engineers like to think it is and software development is not as simple as we might want it to be.  This isn’t to excuse the failings of software projects.  We can and must explore new approaches like unit tests, code reviews, threat models, and scrum (to name a few).  It is to say that we shouldn’t ever expect predictability from what is essentially an R&D process.  Software development is always doing that which has not been done before.  As such, it probably will never reliably be delivered on time, on budget, and defect free.  We must improve where we can but hold the bar at a realistic level so we know when we've succeeded.

http://blogs.msdn.com/b/steverowe/archive/2005/02/28/why-building-software-isn-t-like-building-bridges.aspx

Friday, September 21

Disadvantages of Ruby for Test Automation

In the past few years there has been a push to use Ruby as a programming language for test automation. Ruby as a programming language has some benefits compared to other scripting languages. However, for test automation or any other serious application development the disadvantages of Ruby certainly outweigh any of the recent sensational hype promoting the language. I may be a bit biased, but I honestly can’t fathom why a tester wanting to learn a programming language would spend time learning Ruby, especially with the ease of use, availability of resources, and broad adoption of C#.
1. Ruby lacks informational resources. A search on Barnes & Noble, or Amazon reveals about 50 or so books on Ruby programming. But, that is barely a drop in the bucket compared to more than 400 books written about C#. These numbers certainly don’t inspire a lot of confidence in Ruby as a broadly accepted programming language in the industry at large. Web searches for available online resources also reflect this tremendous disparity. Sure, the Ruby zealots support the few websites and respond to requests for assistance. But, there are a greater number of C# forums with greater numbers of registered members who frequently participate and provide solutions to questions. Additionally, you won't find too many universities or community colleges offering courses in Ruby programming. If Ruby is so good then why are there such limited resources? The answer is because there simply isn't the business demand, or other compelling reasons for adoption.
2. Ruby is not a high demand skill among employers. Take a look at any of the technical job sites such as Dice, Monster, etc. and you will not find a plethora of jobs asking for Ruby programming skills. For example, of the approximately 16,500 software testing jobs on Dice only 65 contain the keyword Ruby as compared to 1,668 job listings containing the keyword C#.  That means there are 25 times more employers desiring C# as compared to Ruby. IT Jobs Watch in the UK has an interesting site with lots of statistics on software testing positions in the UK. Looking at software testing jobs, C# is listed among the top 5 desired programming languages. (Ruby doesn’t even make the top 20 list anywhere on this site.) The job trends on Indeed provide a visual perspective comparing jobs with C#, Ruby, Perl, and VB.NET keywords.

3. Ruby has performance problems. Scripting languages are notoriously slower than compiled languages, but it seems that Ruby is often slower (CPU time) and requires a larger memory footprint than other scripting languages. The Computer Language Shootout site provides some very interesting benchmark results for various languages. One benefit of automation is the decreased time to execute a specific test. Now, you may be thinking that it only takes a few more seconds to execute a test in Ruby as compared to Java or C#. So, let’s arbitrarily say that each automated script in Ruby takes 5 seconds longer to execute than the same automated test in C#. That may not seem like a big deal. But, instead of running one or two automated tests you want to run a test library of 200,000 tests. That’s roughly 278 additional hours (over 11 days) needed to run the test automation written in Ruby. Now, I am sure Ruby advocates will discuss the reduced cost of development time, but the time to develop an automated test is a one-time expense (this does not include sustained maintenance, which is a variable cost compounded over time for all automation libraries). Depending on your product’s shelf life, you may need to rerun your test automation suites for the next 7 to 10 years for sustained engineering and product maintenance.

4. Ruby has not been widely adopted in application development. So, you may ask why this is a weakness for testers writing test automation? The simple fact is that if the development team is programming in, say, C/C++ or Java, and the test automation is in Ruby, you probably won’t get a lot of support from the development team to help review or debug test automation. It is also very likely that developers will not want to install the Ruby interpreter on their Windows machines to use test automation to reproduce a defect, and will instead ask for the manual steps. The test libraries the development team creates will require porting to Ruby, which increases cost and effort. Since many developers are familiar with at least the basic syntax of C/C++ and Java, it is easier for them to pick up C# syntax and understand automated test code.

5. Ruby is just as cryptic as any other programming language. All programming languages use unique syntax, and users must learn the language’s syntax to code effectively. Now, I am no expert in Ruby, but let’s compare a Ruby script that launches Windows calc.exe with the equivalent C# program.

--------------------------------------------------------------

# Launch calc.exe in Ruby
require 'win32/guitest'
include Win32::GuiTest

run_cmd 'calc.exe'

-----------------------------------------------------------------

// Launch calc.exe in C#
using System;
using System.Diagnostics;

namespace MyNameSpace
{
    class MyClass
    {
        static void Main()
        {
            Process.Start("calc.exe");
        }
    }
}

-----------------------------------------------------------------

Obviously there are more lines of code in the C# program than in the Ruby script. But, considering that the template in Visual Studio auto-generates the framework for a console application (the primary method of writing an automated test case), the only things I need to add to the .cs file are the ‘using System.Diagnostics;’ namespace declaration and the ‘Process.Start("calc.exe");’ statement. Additionally, the IntelliSense feature of the Visual Studio IDE references language elements and even inserts the selected element into the code. Also, perhaps it is a matter of personal taste, but Process.Start() seems a lot more ‘readable’ than run_cmd.

Ruby activists boast how quickly they can teach Ruby scripting to non-programming testers. I have been teaching C# to testers for more than 3 years. I have been very successful at teaching testers with no previous programming skills to write automated GUI tests in C# that will launch a Windows application, manipulate the application, generate and send data, verify and log results, and clean up the test environment within a day.

There may be some interesting features of Ruby, but don’t get sucked in by all the fanatical propaganda. Ruby has been around for more than 10 years and hasn't replaced any language or garnered a significant following! The simple fact is that Ruby simply doesn't offer anything revolutionary, and thus hasn't compelled the development community to rush to adopt or support it. All programming languages have strengths and weaknesses depending on the application. But, for test automation Ruby is not the best choice as a programming language.

In my (biased) opinion, I think C# is a much better choice, and in a later post I will outline the benefits of C# as a programming language for test automation.

http://blogs.msdn.com/b/imtesty/archive/2006/06/08/621755.aspx

Friday, September 14

How to find a bug in an application? Tips and Tricks

A very good and important point, right? If you are a software tester or a QA engineer, you must be thinking every minute about how to find a bug in an application. And you should be!
Is finding a blocker bug, like a system crash, the most rewarding? No, I don’t think so. You should try to find the bugs that are most difficult to find and that most often mislead users.
Finding such subtle bugs is the most challenging work, and it gives you satisfaction in your work. It should also be rewarded by your seniors. I will share my experience with one such subtle bug that was not only difficult to catch but also difficult to reproduce.
I was testing one module in my search engine project. I do most of the testing on this project manually, as it is a bit complex to automate. The module consists of traffic and revenue stats for different affiliates and advertisers, so testing its reports is always a difficult task. When I first tested one report it showed accurately processed data, but when I tested it again some time later it showed misleading results. It was strange and confusing to see.
There was a cron (a cron is an automated script that runs at a specified time or on a specified condition) to process the log files and update the database. Multiple such crons run over the log files and the DB to synchronize the data. Two crons were running against one table at different intervals, and a column in that table was being overwritten by the other cron, producing inconsistent data. It took us a long time to figure out the problem because of the many DB processes and different crons involved.
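The lost-update problem described above can be reproduced in a few lines. This is a simplified sketch, not the actual project's schema: two "jobs" each read the same row, recompute their own column, and write the whole row back, so the slower job silently overwrites the faster one's update.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (id INTEGER PRIMARY KEY, traffic INT, revenue INT)")
conn.execute("INSERT INTO stats VALUES (1, 0, 0)")

def read_row():
    return conn.execute("SELECT traffic, revenue FROM stats WHERE id = 1").fetchone()

# Both cron-like jobs read the row before either writes.
snapshot_a = read_row()
snapshot_b = read_row()

# Job A updates traffic, writing back the stale revenue it read.
conn.execute("UPDATE stats SET traffic = ?, revenue = ? WHERE id = 1",
             (snapshot_a[0] + 100, snapshot_a[1]))

# Job B updates revenue, writing back the stale traffic it read.
conn.execute("UPDATE stats SET traffic = ?, revenue = ? WHERE id = 1",
             (snapshot_b[0], snapshot_b[1] + 50))

# Job A's traffic update has been lost: (0, 50) instead of (100, 50).
print(read_row())
```

The fix in cases like this is for each job to update only its own column (or to read and write under one transaction) rather than writing the entire row.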
My point is: try to find the hidden bugs in the system, the ones that occur only under special conditions and have a strong impact on the system. You can find such bugs with a few tips and tricks.
So what are those tips:
1) Understand the whole application or module in depth before you start testing.
2) Prepare good test cases before you start testing. In particular, put stress on the functional test cases that cover the major risks of the application.
3) Create sufficient test data before the tests; this data set should include the test case conditions and, if you are going to test a DB-related application, the database records.
4) Perform repeated tests in different test environments.
5) Try to find the pattern in the results, and then compare your results against those patterns.
6) When you think you have covered most of the test conditions and are getting somewhat tired, do some monkey testing.
7) Use your previous test data patterns to analyse the current set of tests.
8) Try some standard test cases for which you have found bugs in other applications. For example, if you are testing an input text box, try inserting some HTML tags as input and check the output on the display page.
9) Last and best: try very hard to find the bug ;-) as if you are testing only to break the application!
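Tip 8 can be sketched as a quick check with an HTML-tag payload. The render_comment_* functions below are hypothetical page functions, with the safe version escaping user input before display:

```python
import html

# Hypothetical page functions: one echoes user input raw,
# the other escapes it before display.
def render_comment_unsafe(text):
    return "<p>" + text + "</p>"

def render_comment_safe(text):
    return "<p>" + html.escape(text) + "</p>"

payload = "<script>alert('xss')</script>"
assert "<script>" in render_comment_unsafe(payload)    # bug: tag survives
assert "<script>" not in render_comment_safe(payload)  # tag is escaped
print(render_comment_safe(payload))
```

The same payload that exposed a bug in one application is worth trying in every text box of the next one.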

http://www.softwaretestinghelp.com/how-to-find-a-bug-in-application-tips-and-tricks/

The STE vs. SDET Debate

What is the difference between a software test engineer (STE) and a software design engineer in test (SDET)? This is a question that is often asked because there is a lot of ambiguity regarding the roles of the STE and the SDET. Is the difference the primary role in the organization, the skill set of a particular tester, or seniority in the team or company? As confusing as this subject is for many people outside the company, believe it or not, it was just as puzzling for many of us within Microsoft.

About 13 years ago, when I first joined Microsoft, the role of an SDET wasn’t so convoluted. In those days, it was very clear that an SDET was a tester whose primary role in the organization was to design, develop, and maintain test tools and other testing infrastructure. But it was also expected that many full-time testers in the STE role were capable of writing test automation, designing white box tests, and performing basic debugging.

But, over the years things changed. Some testers hired into some groups in the company lacked sufficient skills or aptitude to write automated tests or perform in-depth technical analysis of a software program. Sure, they were good at finding bugs. But the less technically skilled testers had difficulty performing additional testing tasks, such as API testing or analyzing code coverage results to design additional white box tests to reduce under-tested areas of the product.

To differentiate the less technical testers from the testers capable of performing technical types of testing some groups decided to use the SDET title for almost any tester on the team who could write code in a modern programming language (particularly C/C++). This diluted the job requirements for both the STE role and the SDET roles. Also, the increased number of testers with the SDET title led to problems managing career growth.

Diluting job requirements leads to organizational nightmares. In some groups all testers with the STE title were still expected to understand a programming language or possess very specialized expert domain knowledge in a technical area. But, in other groups the STE testers were only relied upon to simply perform exploratory or ad hoc testing. Likewise, some groups continued to reserve the SDET title for those whose primary role in the organization was to build and maintain the testing infrastructure while other groups used the SDET title for practically any tester who knew a programming language.

This made it virtually impossible to establish standardized expectations for testers at comparable skill levels across the company. It also made internal transfers between some groups within the company very difficult for some testers, and it led some testers with coding skills to migrate between groups simply to get the SDET title. Let’s face it; people want job titles that actually reflect their peer group. And, SDET sounds way sexier than STE. Of course, the disparity in skills within the same group meant the more technically adept testers were getting higher review scores and being promoted at a much faster rate than their less technical counterparts.

The problem with career management primarily occurred because the SDET job role was never officially adopted by Microsoft HR, so our managers lacked guidelines for expectations based on an individual’s level. Some managers used the developer level guidelines, and some managers used a mix of the test guidelines and the developer guidelines. Similar to STE testers, SDET testers were not evaluated equitably across the various groups during annual reviews. Also, the proverbial ‘glass ceiling’ for promotions in testing forced a large majority of SDET testers at higher skill levels to transfer into developer positions where expectations and career progression requirements were better defined.

Recently, Microsoft implemented new career profiles designed to eliminate ambiguity in job titles, and establish clear guidelines for career progression. The new career profiles are well-intentioned, but haven’t solved all the problems. But, that is a discussion for a different day.

For more perspectives on the whole STE vs. SDET debate see Adam Ulrich’s posting and also Steve Rowe’s posting on the subject.

http://blogs.msdn.com/b/imtesty/archive/2006/05/19/601600.aspx

QA, Test engineer’s Payscale

Friends,
If you are working as a Test engineer or a QA engineer, this might be shocking for you, specifically my Indian friends.
See the salary chart below:
This chart shows salaries for Test/QA engineers by experience. The salary structure applies to those whose job is to design, implement, execute, and debug information technology test cases and scripts; automate test cases; find bugs, defects, and regressions; verify fixes; and validate and document the completion of testing and development. The linked article breaks the salary structure down by company and by US city.
[Chart: QA/Test engineer pay scale]
Come on, what are you thinking now?



http://www.softwaretestinghelp.com/qa-test-engineers-payscale/


Thursday, September 13

Three Reasons To Consider Being a Test Developer

When it comes to careers in the world of software most people think of programmers or what are more formally known as developers.  Developers are the people who write the software which is consequently sold or utilized by the organization.  I’ll call them dev-developers to distinguish them from test developers.  Sometimes people will also think of testers.  Testers are the people who don’t program but run the software written by developers to find bugs.  What gets lost in the shuffle is a specialized class of developers who are also testers.  We like to call them test-developers.  These are the people that write software to test software.  It is they who will be the subject of this essay. 
            The term “test developer” is sometimes used to refer to a tester who knows a scripting language like JavaScript or Perl or maybe even knows VB.  Usually this person has no formal training and takes on only simple tasks.  That is not what I refer to in this essay.  The test developers I am referring to are a specialization of developers.  They write complex code in production languages utilizing computer science techniques.  See my previous essay on why test developers are real developers. 
            So, why should you consider becoming a test developer?  Why not just become a dev-dev instead of a test-dev?  That is the subject of this essay.  There are three primary reasons to become a test-dev rather than a dev-dev.  These are that it makes you a better programmer, the code you write is more broad, and it is at a sweet spot in the development cycle.
            Programmers who are or have been test developers are, on average, better programmers than those who have not.  They have a feel for what is likely to go wrong with software and so code for failure instead of coding for success.  All too often those who have not been testers write code until it works and then stop.  They write so that it can work but not so it will always work.  Test-developers have practice breaking programs and so know where they will encounter problems.  They are thus more successful at anticipating what error conditions may happen and writing resilient code.
            Secondly, test developers tend to write code which exercises the product at a higher level.  Instead of focusing all of your effort on a way to make a widget spin, you get to see what happens if multiple widgets spin at the same time or how spinning widgets interact with the IFoo class.  Test developers write code that exercises the product as a whole, which is often more interesting and more rewarding than spending time optimizing one little corner case.  You get a more holistic view of the product, which leads to better understanding of how the various pieces interact.  Additionally, you often get to do more fun work like determining ways to break the system, put various pieces together, analyze its performance, etc.
            Finally, test development is at a sweet spot in the product release cycle.  Dev-developers work furiously in the early cycle getting their pieces code complete.  They then work furiously late in the cycle fixing the final few bugs.  The end result is often very long hours.  Test developers, on the other hand, are generally under less pressure.  In the early product cycle, you can only work as fast as code becomes available to you.  In the late cycle, your tests are already in place.  If they aren’t, it is too late to add them.  I don’t mean to imply that test developers don’t work hard.  They do.  They just tend to feel less deadline pressure than dev-devs.

http://blogs.msdn.com/b/steverowe/archive/2005/01/19/356361.aspx

What is the difference between smoke testing and functional testing?

Smoke testing is done to make sure the application is testable, i.e., that its basic and critical features are working fine. Here the application is tested only with positive (correct) values, not negative ones, and smoke testing does not take much time. It is done because finding the same bugs at a later point in time would lead to a lot of rework.

But in functional testing we thoroughly test the functionality of all the features against the requirements.
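One minimal way to sketch the split, without assuming any particular framework: smoke tests are the small, positive-value subset run first as a testability gate, while the functional pass covers every case. The login() function here is hypothetical:

```python
# Hypothetical feature under test.
def login(user, password):
    return user == "admin" and password == "secret"

def test_login_happy_path():            # smoke: critical feature, positive value
    assert login("admin", "secret")

def test_login_rejects_bad_password():  # functional: negative case as well
    assert not login("admin", "wrong")

SMOKE = [test_login_happy_path]
FUNCTIONAL = [test_login_happy_path, test_login_rejects_bad_password]

for test in SMOKE:        # quick gate: is the build testable at all?
    test()
for test in FUNCTIONAL:   # thorough pass against the requirements
    test()
print("all suites passed")
```

If the smoke gate fails, the full functional pass is skipped and the build is returned, which is what saves the rework the answer mentions.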

Source: http://ssoftwaretesting.blogspot.mx/2010/07/what-is-diff-between-smoke-testing-and.html

Monday, September 10

Quality Assurance != Testing

Stuart Feldman of IBM research has some thoughts on the issue of testing and quality assurance.  He contends that QA is a process whereas testing tends to be technology-based and an afterthought.  He describes the difference here:
"What goes into QA? Testing, of course, is a key activity. There is, however, an adage that “you can’t test quality into a product.” A solid test plan should catch errors and give a measure of quality. A good QA plan ensures that the design is appropriate, the implementation is careful, and the product meets all requirements before release. An excellent QA plan in an advanced organization includes analysis of defects and continuous improvement. "
I think what we call the work testers do is largely a matter of semantics, but I agree with Stuart about what the activity should be.  If testing is merely about finding bugs, it is insufficient.  It needs to be about assessing the quality of the product as a whole. 
Stuart also spends some time detailing the different levels of testing required for various levels of products.  This ties in well with the software engineering discussion we're having elsewhere on this blog.
hat tip to /.


http://blogs.msdn.com/b/steverowe/archive/2005/03/03/384637.aspx

Integration Testing

Integration testing is performed after unit testing, and its focus is the design and construction of the software architecture. 

After integration come validation and, finally, system testing; the latter consists of testing the software together with the other elements of the enterprise or organization, such as people, departments, and the database.

Integration testing can be top-down, bottom-up, or sandwich, but those names really only make sense for systems written in structured languages, where the diagram of the system's module structure lets you define a given type of integration.

In object-oriented systems, the use case again becomes important: it guides testing toward the requirements that must be satisfied by a given set of classes that have to interact with one another, governed by some control class for the use case.
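A minimal sketch of the top-down variant: the high-level service is integrated first, while a lower-level module that is not yet ready is replaced by a stub. All class and method names here are hypothetical illustrations, not from the original post.

```python
# Sketch of top-down integration testing: OrderService is integrated first,
# and the not-yet-built payment module is replaced by a stub.

class PaymentStub:
    """Stands in for the real payment module until it is integrated."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class OrderService:
    def __init__(self, payment):
        self.payment = payment     # dependency injected: real module or stub

    def place_order(self, amount):
        receipt = self.payment.charge(amount)
        return receipt["status"] == "approved"

# Integration test of the upper layer against the stubbed lower layer.
service = OrderService(PaymentStub())
assert service.place_order(100) is True
```

When the real payment module arrives, the stub is swapped out and the same test exercises the true integration point.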

Thursday, September 6

How to write a good bug report? Tips and Tricks

Why write a good bug report?
If your bug report is effective, chances are higher that it will get fixed; fixing a bug depends on how effectively you report it. Reporting a bug is a skill, and I will tell you how to acquire it.
“The point of writing a problem report (bug report) is to get bugs fixed.” – Cem Kaner. If a tester does not report a bug correctly, the programmer will most likely reject it as irreproducible. This can hurt the tester's morale, and sometimes ego too. (I suggest dropping ego altogether: thoughts like “I reported the bug correctly,” “I can reproduce it,” “Why did he/she reject the bug?,” “It's not my fault,” and so on.)
What are the qualities of a good software bug report?
Anyone can write a bug report, but not everyone can write an effective one. You should be able to distinguish an average bug report from a good one. How? It's simple: apply the following characteristics and techniques when you report a bug.
1) Having clearly specified bug number:
Always assign a unique number to each bug report. This helps identify the bug record. If you are using an automated bug-reporting tool, this unique number will be generated automatically each time you report a bug. Note the number and a brief description of each bug you report.
2) Reproducible:
If your bug is not reproducible, it will never get fixed. Clearly mention the steps to reproduce it, and do not assume or skip any step. A problem described step by step is easy to reproduce and fix.
3) Be Specific:
Do not write an essay about the problem. Be specific and to the point. Try to summarize the problem in the fewest words that still convey it effectively. Do not combine multiple problems even if they seem similar; write a separate report for each problem.
How to Report a Bug?
Use the following simple bug report template:
This is a simple bug report format. It may vary depending on the bug-reporting tool you are using. If you are writing the report manually, some fields, such as the bug number, need to be filled in explicitly.
Reporter: Your name and email address.
Product: The product in which you found the bug.
Version: The product version, if any.
Component: The major sub-module of the product in which the bug appears.
Platform: The hardware platform on which you found the bug, e.g. ‘PC’, ‘Mac’, ‘HP’, ‘Sun’.
Operating system: All operating systems on which you found the bug: Windows, Linux, Unix, SunOS, Mac OS. Mention specific OS versions where applicable, such as Windows NT, Windows 2000, or Windows XP.
Priority:
When should the bug be fixed? Priority is generally set from P1 to P5, with P1 meaning “fix with the highest priority” and P5 meaning “fix when time permits.”
Severity:
This describes the impact of the bug.
Types of Severity:
  • Blocker: No further testing work can be done.
  • Critical: Application crash, Loss of data.
  • Major: Major loss of function.
  • Minor: Minor loss of function.
  • Trivial: Some UI enhancements.
  • Enhancement: Request for a new feature or an enhancement to an existing one.
Status:
When you log the bug in a bug-tracking system, its status is ‘New’ by default.
Later the bug goes through stages such as Fixed, Verified, Reopened, and Won't Fix.
Assign To:
If you know which developer is responsible for the module in which the bug occurred, you can specify that developer's email address. Otherwise leave it blank, and the module owner or manager will assign the bug to a developer. Consider adding the manager's email address to the CC list.
URL:
The URL of the page on which the bug occurred.
Summary:
A brief summary of the bug, ideally in 60 words or fewer. Make sure your summary reflects both what the problem is and where it is.
Description:
A detailed description of the bug. Use the following fields in the description:
  • Reproduce steps: Clearly mention the steps to reproduce the bug.
  • Expected result: How the application should behave for the steps above.
  • Actual result: The actual result of running the above steps, i.e., the buggy behavior.
These are the important parts of the bug report. You can also add a “Report type” field describing the type of the bug.
The report types are typically:
1) Coding error
2) Design error
3) New suggestion
4) Documentation issue
5) Hardware problem
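As a rough sketch (not from the original article), the template above could be captured as a data structure, the way a bug tracker might store it. Field names mirror the template; all values below are illustrative.

```python
# Sketch: the bug report template as a data structure. All sample values
# (product, component, emails) are hypothetical.
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    BLOCKER = "Blocker"
    CRITICAL = "Critical"
    MAJOR = "Major"
    MINOR = "Minor"
    TRIVIAL = "Trivial"
    ENHANCEMENT = "Enhancement"

@dataclass
class BugReport:
    bug_id: int                 # unique, per tip 1
    reporter: str
    product: str
    component: str
    platform: str
    operating_system: str
    priority: str               # P1 (fix first) .. P5 (when time permits)
    severity: Severity
    summary: str                # 60 words or fewer: what and where
    reproduce_steps: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    status: str = "New"         # default status when the bug is logged
    assigned_to: str = ""       # blank -> module owner or manager assigns

report = BugReport(
    bug_id=1042,
    reporter="tester@example.com",
    product="WebStore",
    component="Checkout",
    platform="PC",
    operating_system="Windows XP",
    priority="P2",
    severity=Severity.MAJOR,
    summary="Checkout page crashes when the coupon field is left empty",
    reproduce_steps=["Add item to cart", "Open checkout",
                     "Leave coupon field empty", "Click Pay"],
    expected_result="Order is placed",
    actual_result="Server error 500",
)
print(report.status)   # a freshly logged report starts as "New"
```

Keeping reproduce steps, expected result, and actual result as separate fields makes it hard to file the vague one-line reports the article warns against.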
Some Bonus tips to write a good bug report:
1) Report the problem immediately: If you find a bug while testing, do not wait and write a detailed report later; write the bug report immediately. This ensures a good, reproducible report. If you decide to write the report later, chances are high that you will miss important steps.
2) Reproduce the bug three times before writing the report: Your bug should be reproducible. Make sure your steps are robust enough to reproduce the bug without any ambiguity. If the bug is not reproducible every time, you can still file it, noting its intermittent nature.
3) Test the same bug occurrence on other similar module:
Sometimes developers use the same code in different, similar modules, so chances are high that a bug in one module also occurs in the similar ones. You can even try to find a more severe version of the bug you found.
4) Write a good bug summary:
A good bug summary helps developers quickly grasp the nature of the bug. A poor-quality report unnecessarily increases development and testing time. Communicate well through your bug report summary, and keep in mind that the summary is used as a reference to search for the bug in the bug inventory.
5) Read bug report before hitting Submit button:
Read all the sentences, wording, and steps in your report. See if any sentence creates ambiguity that could lead to misinterpretation. Avoid misleading words or sentences so the report stays clear.
6) Do not use Abusive language:
It's nice that you did good work and found a bug, but do not use that credit to criticize the developer or attack any individual.
Conclusion:
No doubt your bug report should be a high-quality document. Focus on writing good bug reports and spend some time on the task, because the report is the main communication point between tester, developer, and manager. Managers should make their teams aware that writing a good bug report is a primary responsibility of any tester. Your effort toward writing good bug reports will not only save company resources but also build a good relationship between you and the developers.
For better productivity write a better bug report.

http://www.softwaretestinghelp.com/how-to-write-good-bug-report/

Quality Assurance: Much More than Testing

Good QA is not only about technology, but also methods and approaches.

STUART FELDMAN, IBM RESEARCH

Quality assurance isn’t just testing, or analysis, or wishful thinking. Although it can be boring, difficult, and tedious, QA is nonetheless essential.
Ensuring that a system will work when delivered requires much planning and discipline. Convincing others that the system will function properly requires even more careful and thoughtful effort. QA is performed through all stages of the project, not just slapped on at the end. It is a way of life.

WHAT IS SOFTWARE QUALITY ASSURANCE?

IEEE Standard 12207 defines QA this way: “The quality assurance process is a process for providing adequate assurance that the software products and processes in the product life cycle conform to their specific requirements and adhere to their established plans.”
This sentence uses the word process three times. That is a key aspect of QA: it is not a single technology, but a method and an approach.
Another key point is that quality is not treated as a philosophical issue, but, rather, as measurably meeting expectations and conforming to requirements. The rigor of the process should be chosen to suit the needs of the product and organization.
Finally, QA is about providing assurance and credibility: the product should work right, and people should believe that it will work right.
What goes into QA? Testing, of course, is a key activity. There is, however, an adage that “you can’t test quality into a product.” A solid test plan should catch errors and give a measure of quality. A good QA plan ensures that the design is appropriate, the implementation is careful, and the product meets all requirements before release. An excellent QA plan in an advanced organization includes analysis of defects and continuous improvement. (This feedback loop is characteristic of mature organizations.)
For physical products, QA involves manufacturing process control, design reviews, test plans, statistical methods, and much more. In relaxed implementations, there is occasional monitoring of the production line, and a few pieces are examined at each stage. In extreme cases, every step is monitored and recorded, intermediate products are torture tested with stresses exceeding the specifications, and many final products are destroyed. (Crash testing isn’t always a metaphor.) Only a few outputs make it into the field.
For software products, there are many QA process choices, depending on the structure of the organization, the importance of the software, the risks and costs of failure, and available technologies. These should be conscious decisions that are recorded and revisited periodically.

HOW STRINGENT MUST QA BE?

In an ideal world, perfection would be the norm. In the real world, you must make trade-offs. Although some people claim that “quality is free,” that is rarely the case. After much trial and error, you may arrive at a well-honed process that delivers high quality reliably and efficiently. Until you achieve that stage, demonstrably higher quality usually involves a longer and more expensive process than simply pushing the product out the door.
There are many types of requirements to be QA’d. Some involve meeting basic functional specifications: the system or program does the right thing on expected (or unexpected) inputs. Some involve performance measures such as throughput, latency, reliability, and availability.
Other major considerations depend on the operating environment. If the users will have limited understanding or ability to repair problems, the system must be validated on novices. If the system must operate in many contexts, interoperability and environmental tolerance must be verified.
In certain applications, the costs of failure are so high that it is acceptable to delay until every imagined test and cross-check has been done. In others, repairs are acceptable or affordable, or misbehaviors are tolerated. Just as a bank runs different credit checks on people who want to borrow $1,000 and those who want $1 million, different QA processes are appropriate for spelling checkers and cardiac pacemakers. Much of the fundamental work on high-reliability systems was done for military, aerospace, and telecommunications applications that had extremely rigorous requirements (and large project budgets); telephone switches and mainframes rarely fail.
The spectrum of QA rigor covers a wide range:
Research and experimental software. Requirements for quality may be quite low, and the process may be little better than debugging and a few regression tests. Nonetheless, the risks of embarrassment from failed public demos and withdrawn papers suggest a greater investment.
Business and productivity and entertainment tools. These are expected to work, but the occasional failure is (alas) no surprise. When the consequences of a crash or invalid result are acceptable, it may not be worthwhile to invest in a long QA cycle (or so many vendors say).
Business-critical tools. A much higher standard of planning and testing is required for key organizational software. Software that manages transactions for significant amounts of money, affects people directly, or is required for legal compliance needs to be credible, as well as functional. Errors can destroy an organization or its executives. Any development plan needs a significant investment in quality assurance, including careful record keeping and analysis.
Systems that are widely dispersed or difficult to repair. When it is difficult or expensive to access all the products, there is justification for extensive testing and design for remote repair. If 1 million copies of a game are sold with a highly visible flaw, the cost of upgrading and repairing could easily exceed the profit. A chip with a design flaw or an erroneous boot ROM can lead to the same unfortunate result. Heavy testing in a wide variety of environments is needed to build confidence, even if product launch is repeatedly delayed. In the extreme, it may be impossible to get to the product because it is embedded in equipment or extremely distant; if the mission is important, the products must be designed for remote repair and/or have unusually high quality standards. Examples include famous exploits for repairing space missions millions of miles from home.
Life- and mission-critical software. Failures of some systems can cause loss of life (braking systems, medical devices) or large-scale collapses (phone switching systems, lottery management systems). In such cases, elaborate QA is appropriate to avert disaster. It is not unusual for testing and other QA steps to absorb more than half of the elapsed development time and budget. Analysis must extend far beyond single components and functions—the behavior of the entire system must be assured.

ORGANIZATIONAL IMPLICATIONS AND APPROACHES

Since QA is a process, it is natural to expect special roles and organizations to be assigned to it. In simple and undemanding projects, the designers and developers may also perform QA tasks, just as they do in traditional debugging and unit testing. Unfortunately, people are usually loath to spend a lot of time on assurance tasks; developing new features is much more exciting. Furthermore, the people who miss a special case during design will also be likely to miss it during testing.
Therefore, in larger organizations or for products with stringent requirements, QA is usually the responsibility of a separate group. Ideally, that group is independent of the development organization and has authority to require redevelopment and retesting when needed. The independent QA people are typically responsible for defining the process and monitoring the details of execution. Sadly, QA people rarely remain best friends with developers.
A separate organization is capable of the deep analysis that supports improvement of the process and the product. High levels of SEI CMM (Software Engineering Institute Capability Maturity Model) certification and ISO quality certification require significant levels of analysis and feedback; the QA organization is the natural home for those activities.
A QA organization need not be huge to be effective. Relatively small groups can do a good job, so long as they have independence, knowledge of the process, and understanding of the product. The QA staff also needs to be experienced in the many ways that products can be botched, processes can be short-circuited, and people can be careless. Assigning QA tasks to the most junior member of the team dooms the product and the staff.

ACTIVITIES

Numerous textbooks and standards documents define the stages of QA. If you are going to be responsible for assuring quality, read them.
QA touches all stages of a software project. It typically requires careful capture and control of many artifacts, as well as strict version management. It is impossible to have a solid and replicable test plan without agreed-upon requirements and specifications.
In a traditional development process, the QA organization requires reviews at each stage, with careful records, verification, and signatures. Tests and release criteria are based on requirements, and release is based on test results. If there are product requirements for reliability and availability, you will need special testing environments and adequate amounts of time to acquire data.
In an agile programming environment, where requirements may be updated every few weeks as customers examine drafts of the software, the QA process needs to be more flexible than in a traditional environment. Nonetheless, someone must be responsible for assuring testing of basic requirements, rapidly updating and recording regression tests, and ensuring progress reviews. (Extreme programming does not excuse curdled databases.)

IS SOFTWARE REALLY HARDER THAN HARDWARE?

People are accustomed to software having more bugs than hardware. There are many reasons for this:
  • The difficult, irregular, human-oriented parts of a system are left to the software.
  • Conversely, the hardware usually has replicated components and considerable regularity, so the logical complexity may be much lower than the size of the design suggests. On the other hand, analog and mechanical issues can introduce new dimensions to the problem.
  • Despite decades of experience, managers often plan the software after the hardware designs and tests have been completed. There is then neither time nor budget to support appropriate QA of the software.
  • Hardware engineers have a deeply ingrained respect for quality, both through their education and because they know how hard it is to change a badly designed physical object.
Software engineers can learn a lot from their hardware colleagues about rigorous planning, process, and testing. Hardware people, on the other hand, can learn a lot about usability, flexibility, and complexity.

MAXIMIZE THE REWARD

The details of the QA process depend on the organization, staff, and expected use of the product. It can be difficult, tedious, odious, time-consuming, and expensive.
But it is also necessary. Learn to do QA well to minimize the pain and maximize the reward.

http://queue.acm.org/detail.cfm?id=1046943

Validation Testing

Validation is achieved when the software works according to the customer's expectations.

Validation is accomplished through a series of black-box tests that demonstrate conformance with the requirements. If deficiencies appear, a method for resolving them must be negotiated with the customer.

Alpha testing is conducted by a customer at the development site. The software is used in a natural way, with the developer "looking over the user's shoulder" and recording errors and usage problems. Alpha tests are carried out in a controlled environment.

Beta testing is carried out at one or more customer sites by the end users of the software. Unlike alpha testing, the developer is normally not present. 

Beta testing is a "live" application of the software in an environment that the development team cannot control. The customer records all problems (real or imagined) encountered and reports them at regular intervals. 
Based on these reports, the development team makes modifications and then prepares a release of the product for the entire customer base.

Wednesday, September 5

What if there isn’t enough time for thorough testing?

Sometimes a tester needs common sense to test an application!
I say this because most of the time it is not possible to test the whole application within the specified time. In such situations, it's better to identify the risk factors in the project and concentrate on them.
Here are some points to consider when you are in such a situation:
1) Which functionality is most important to the project?
2) Which modules of the project are highest-risk?
3) Which functionality is most visible to the user?
4) Which functionality has the largest safety impact?
5) Which functionality has the largest financial impact on users?
6) Which aspects of the application are most important to the customer?
7) Which parts of the code are most complex, and thus most subject to errors?
8) Which parts of the application were developed in rush or panic mode?
9) What do the developers think are the highest-risk aspects of the application?
10) What kinds of problems would cause the worst publicity?
11) What kinds of problems would cause the most customer service complaints?
12) What kinds of tests could easily cover multiple functionalities?
By considering these points, you can greatly reduce the risk of releasing the project under tight time constraints.
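One way to act on such a checklist is to score each module on a few of these factors and test the highest scorers first. The factors, weights, and module names below are illustrative assumptions, not from the article.

```python
# Sketch of risk-based test prioritization: score modules on a few checklist
# factors, then test in descending-risk order. All data here is made up.

modules = {
    # (user visibility, financial impact, code complexity, built in rush), each 1-5
    "checkout":   (5, 5, 4, 3),
    "search":     (4, 2, 3, 1),
    "admin_logs": (1, 1, 2, 2),
}

def risk_score(factors):
    visibility, financial, complexity, rush = factors
    return visibility + financial + complexity + rush

ranked = sorted(modules, key=lambda m: risk_score(modules[m]), reverse=True)
print(ranked)   # spend the limited testing time on the front of this list
```

A real assessment would weight the factors and involve developers' own risk estimates (point 9 above), but even a crude sum beats testing modules in arbitrary order when time is short.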

http://www.softwaretestinghelp.com/what-if-there-isnt-enough-time-for-thorough-testing/

Monday, September 3

All I Ever Need to Know about Testing I Learned in Kindergarten

Summary:
In addition to presenting a tutorial and a keynote address at the EuroSTAR testing conference in Copenhagen, Lee Copeland was asked to give the after dinner speech at the closing gala reception overlooking Tivoli Gardens. He chose to model his comments after Robert Fulghum's book "All I Really Need to Know I Learned in Kindergarten." But in his speech, Lee changes the rules of childhood into guidelines for living life as a tester.

In 1986, Robert Fulghum published a book, "All I Really Need to Know I Learned in Kindergarten." It contains some wonderful ideas. I'd like to discuss how those might apply to us as testers.
Share everything
Once I observed a situation in which a tester, with better knowledge of an application domain than an inexperienced developer, used his knowledge to find and report bugs in a system. He could have shared this knowledge with the developer, but wanted to stroke his own ego and pump up his bug report count.
Our profession advances when we share information instead of using it for our own purposes.
Play fair
Here are some other things I've seen testers do:
One tester reported the same defect over and over again with slight variations to pump up her bug report count.
Another tester discovered a significant defect during a design review but did not inform the developers. He waited until the defect was implemented in code and then filed a scathing defect report.
What goes around, comes around. When we don't play fair, we become untrustworthy. Then others won't play fair with us. It's lose-lose all around.
Don't hit people
If you find a defect in someone's work, first tell him informally, personally, and discreetly.
Once a co-worker gave me a document he had written and asked for my review. I didn't get to it until the last minute. Rather than talk with him in private, I blasted his work publicly in a meeting. Later, he came to me and simply asked, "Why?" I still remember the look in his eyes, and I have never done that again.
As a tester, remember that we are paid to "hit" software, not the people who wrote it. It's the software that's buggy, full of holes, not worth the ink used to print it, and, as James Whittaker likes to quote Neil Young, "A piece of crap."
Rather, remember Norm Kerth's gentle words: "Regardless of what we discover, we understand and believe that everyone did the best job they could, given what they knew at the time, with their skills, abilities, and the resources available."
Put things back where you found them
You probably use a test lab. It's probably a common resource used by other testers. When you are finished, put things back--reconfigure the hardware, restore the software, reload the test data, set up the accounts, and reset the parameters.
In one organization I visited, the lab had a sign on the door that read "Test Lab." Everyone else in the organization read it as "Spare Parts Room."
Clean up your own mess
And while you're at it, throw away those pizza boxes and coffee cups.
We have a rule at my house, "It's OK to spill." No one ever gets yelled at for spilling. But we have another rule, "Clean up your mess." That one you will get yelled at for not doing.
Even better, try not to create messes in the first place. One way to do this is to write clear bug reports--ones that will really help your developers find defects quickly; not reports that will lead them on wild goose chases for your amusement.
Don't take things that aren't yours
One thing people take that isn't theirs is credit. Once my boss asked me to research something. Later, I wrote a memo, which began, "To: Boss, From: Lee." The next time I saw the memo it read "To: Big Boss, From: Boss." He took my work and didn't give me any credit. I learned something from that experience. From then on, I always took memos that my staff had prepared and put a sticky note on them that read, "My staff member wrote this . . . I think it's good work . . . I hope you concur."
Another thing people take that isn't theirs is guilt. You will not find every defect. Try hard, use your skills, do a good job; but remember, some will sneak by you and that's OK. As Boris Beizer says, "We need devious testers." But sometimes, as devious as we are, our developers and users exceed our capacity.
Say you're sorry when you hurt someone
No matter how careful we are, at some place and time, we will hurt someone. Most of us will never intentionally hurt anyone physically, but we will hurt him emotionally. We'll say something or do something--perhaps intentional, perhaps in ignorance, or perhaps in jest--that will reach into his chest and rip out his heart.
As testers, we're in the error-discovery business. Our job is to find other people's mistakes. When we find them, we report them publicly. We know to always focus our reports on the errors, not the person who made the errors. But still, sometimes egos are bruised; sometimes feelings are hurt.
Say "I'm sorry." It is one of the most powerful, healing phrases in the human language.
Wash your hands before you eat
In other words--start clean. Once the system fails, it may not be in a stable state to look for more defects. Reboot or reload often.
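In test automation terms, "start clean" is a setup/teardown discipline around every test. A minimal sketch, with a hypothetical environment object standing in for the lab machine or test database:

```python
# Sketch: reset the environment before AND after each test, so one test's
# mess never contaminates the next. TestEnvironment is hypothetical.

class TestEnvironment:
    def __init__(self):
        self.state = "clean"

    def reset(self):
        self.state = "clean"       # reload data, restore config, reboot, etc.

def run_test(env, test):
    env.reset()                    # wash your hands before you eat
    try:
        return test(env)
    finally:
        env.reset()                # clean up your own mess afterwards

env = TestEnvironment()

def messy_test(env):
    env.state = "dirty"            # the test corrupts shared state...
    return True

run_test(env, messy_test)
assert env.state == "clean"        # ...but the next test still starts clean
```

Resetting on both sides is deliberate: the pre-test reset protects you from the previous tester's mess, and the post-test reset honors the rule above.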
Flush
This is always good advice. And, as a professional user of airport toilets, I am amazed at the number of men who don't know to do this. Of course, a real tester would flush all the toilets at once, just to see what happens. Could you do that with your software too?
Also, always remember to flush the cache when doing performance testing.
Sometimes features need to be flushed from the product before shipment because they are so problematic. Sometimes entire projects need to be flushed. Perhaps you can help--maybe you can even pull the handle.
Warm cookies and cold milk are good for you
Yes, they are. Enough said. (Oh, it's better if your employer furnishes them. And chocolate chip cookies are the best.)
Live a balanced life
There are things in life in addition to testing--friends, family, travel, sex, food, rest, sex, health, fitness, art, recreation, good deeds, sex, spirituality, learning, play, and, of course, introspection.
It is difficult, especially in the early years of our careers, to put work aside and focus our attention on other things.
But, as the great philosopher Ferris Bueller once said, "Life moves pretty fast. If you don't stop and look around once in a while, you could miss it."
From a testing viewpoint, create diversified test teams and develop diversified test strategies.
Learn some, think some, draw some, paint, sing, dance, play, and work every day
This one is more difficult to apply. How about "Learn some, think some, model some, explore some, document some, communicate, and test every day"?
Take a nap every afternoon
If you work in an office with cubicles, taking a nap in plain sight is probably not a good way to win friends and influence people. However, we all need quiet time to be with ourselves--time to think, time to reflect, time to rest, time to regenerate. Try to establish your own quiet time--a time when you don't read email, answer the phone, attend meetings, or allow interruptions.
Taking a step away from your project will give you fresh insight and a different outlook. When
you come back to the problem, you often have your own "a ha!" moment.
When you go out in the world, watch for traffic, hold hands, and stick together
There is great strength in teams. The days of "us vs. them" are over. The days of "throw it over the wall to the testers" are over. That idea turned out to be about as successful as Communism.
Synergy is the concept that the whole of us is more than the sum of us. In years past I ran an experiment in one of my seminars. It was based on a "Lost in the Desert" exercise in which individuals are given a problem to solve, and then they solve the same problem again in teams. When working together rather than as individuals, 98 percent of the time, the team score was better than the average of the individual scores. And 95 percent of the time, the team score was better than every one of the individual scores on the team. Working together as a team is better, smarter, and more powerful than working as individuals.
Be aware of wonder
I have a four-year-old granddaughter and a two-year-old grandson who live with me. Imagine, at my age, I'm doing the "father" thing all over again. And it is a fabulous experience. You see, I had forgotten the "wonders" in the world: the wonder of butterflies and bugs; the wonder of the rainbow; the wonder of first words; the wonder of fire trucks and cement trucks and bulldozers and diggers of all kinds; the wonder of heartfelt hugs; and the wonder in a child's eyes and smile.
Be aware of wonder as a tester: the wonder that they made so many stupid mistakes; the wonder that so much actually does work; the wonder that your organization is still in business; the wonder of your own talent as you discovered an amazingly convoluted bug in the code; and the wonder that you have so much fun and get paid for it.
The world is full of wonder. It is a wonder-full world. I wish you a wonderful life. Good night.

http://www.stickyminds.com/article/all-i-ever-need-know-about-testing-i-learned-kindergarten

Black Box Testing

I attended a talk on campus yesterday discussing various aspects of testing.  Part of the talk discussed the need for testers to become better versed in the formalities of testing.  I'll leave that subject for another day.  A portion of the talk, however, discussed an experiment done with some inexperienced testers.  They were asked to create test cases for the Myers Triangle Test.  A lot of the test cases they came up with were not useful.  By that I mean they didn't test the algorithm or they were redundant with other tests.  Some would try inputting something like an 'A' which is invalid and won't pass the string->int conversion function or they would try lots of different numbers that all went down the same code path.  If you look at the underlying code, it is obvious why these tests don't make sense.  Too often though, test plans are full of cases like these.  Why is that?
   I contend that we often test things only at the surface level and don't consider the implementation.  At some point in time I was told that black box testing was a good idea because if you looked at the underlying code, you might make the same flawed assumptions that the developer made.  This is probably also where we got the notion that you shouldn't test your own code.  I never really agreed with the concept of purposeful black box testing but didn't fully challenge the assumption in my mind.  After some reflection though, I am pretty sure that black box testing is almost always less useful than white box testing. 
   Just in case you don't follow, let me define some terms.  Black box testing is testing where you don't understand the implementation details of the item you are testing.  It is a black box.  You put in data, you get out different data; how it transforms the data is unknown.  White box testing is testing where you have the source code available (and look at it).  You can see that there are 3 distinct comparisons going on in the Myers Triangle Test. 
   Black box testing can be useful if we don't have the time or the ability to understand what we are testing, but if we do, it is always better to take advantage of it.  Without knowing the details, I have to try every potential input to a program to verify that all of the outputs are correct.  If I know the internals, however, I can just test each code path.  If all triangles are tested for A == B == C, I don't need to try Triangle(3,3,3), Triangle(4,4,4), and Triangle(5,5,5).  After the first one, I'm not trying anything new.  Without looking at the code, however, I don't know that.  Not only does white box testing allow you to see where you don't need to test, it lets you see where you do.  Some years ago I was testing our DirectShow DVD Navigator software.  There is a function for fast forward that takes a floating point number.  From a black box perspective, one would have no idea what numbers to pass.  Just try some and call it good.  In this particular implementation, however, there were different behaviors depending on which number you put in.  For a certain range of numbers, all frames were decoded and just played quickly.  For a higher range, only I-frames were played.  For everything above that range, the navigator started playing only some of the I-frames.  Without looking at the code, I could not have known which test cases were interesting.  I couldn't guarantee that I tried something from every range.
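The code-path argument above can be sketched with a toy version of the triangle classifier (a hypothetical implementation for illustration; the original article shows no code, and real versions of the Myers exercise vary):

```python
# A sketch of the Myers Triangle Test as three distinct comparisons,
# i.e. three code paths plus input validation.
def triangle(a: int, b: int, c: int) -> str:
    # Reject inputs that cannot form a triangle at all.
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    # The three comparisons a white-box tester can see in the source.
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# White-box view: one test per path is enough.  Triangle(4,4,4) adds
# nothing once Triangle(3,3,3) has exercised the equilateral branch.
assert triangle(3, 3, 3) == "equilateral"
assert triangle(3, 3, 5) == "isosceles"
assert triangle(3, 4, 5) == "scalene"
assert triangle(1, 2, 3) == "invalid"  # degenerate: fails the triangle inequality
```

Seen this way, a pile of extra inputs like (5,5,5) or (6,6,6) is redundant: they all flow down a branch already covered.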
   What about making wrong assumptions if you look at the code?  Won't that cause you to miss things?  Perhaps.  However, test driven development, unit testing, etc. have proven that testing done by the developer is quite effective.  Testers should also have a spec outlining what proper behavior should be.  If the code deviates from that spec, you found a bug (somewhere--it might be in the spec).  If you use common sense, you are unlikely to miss a bug because you make the same assumption as the developer.  If you do, the trade-off for greater test efficiency is probably worth it.  You'll have found many new bugs for each one you miss.

http://blogs.msdn.com/b/steverowe/archive/2005/04/28/413093.aspx

Basic Steps of Functional Testing

I was having a chat with my friend, who is a beginner in testing. He shared his problem with me: how to initiate testing. Although there are many books and articles on the theories and concepts of testing, there is little help available on how to actually start testing and how a tester should proceed. I thought of penning down my experience and sharing my style of starting testing. There are no specific norms, but based on my experience across various projects I have listed the things I do when performing functional testing.

During project initiation, when the "Business Requirement Document", "Software Requirement Document" or "Functional Specifications" are defined, the tester should be involved to understand the background of the project. This also helps the tester get first-hand information about user needs, their pain areas, and their expectations from the system that is yet to be developed.

While the project is in the second stage of the cycle, the Development Stage, Test Plans should be prepared. This helps the tester analyze the requirements from a testing aspect: Test Case preparation based on the features to be tested, Test Data requirements, and defining defect priority and severity norms.

The Test Plan comprises the schedule and timeline for test case writing and execution, entry and exit criteria for testing, the number of regression test cycles to be performed, and the test cases to be executed in each cycle. The Test Plan also defines the status reporting template and the frequency of reporting.

After preparing test plans, we should not rush into writing test cases; we first need to analyze the needs of the system beyond its functional requirements. We need to identify test scenarios covering alternate flows, boundary values, and business rules/validation needs before writing test cases. During our testing, we need to ensure that those requirements are covered as well. One example of such a requirement is the set of browsers to be supported by a web application. Further, we need to prioritize Test Scenarios based on user needs and business criticality. This keeps the focus on the critical areas of the application and helps in performing comprehensive testing.

The test cases provide details about the steps to be performed during testing. Usually every test case comprises information about the steps to be performed, the data to be consumed, and the expected and actual results. Test cases are written for both positive and negative scenarios. While positive test cases help the tester find obvious defects, negative test cases reveal defects and the behavior of the software/application under abnormal circumstances. This checks the failure conditions and how error handling is performed in the application.
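A minimal sketch of positive and negative test cases against a hypothetical input validator (the function, its rules, and its boundaries are illustrative, not taken from the article):

```python
def validate_age(value: str) -> int:
    """Parse an age field; accept whole numbers in the range 1-120."""
    if not value.strip().isdigit():
        raise ValueError("age must be a whole number")
    age = int(value)
    if not 1 <= age <= 120:
        raise ValueError("age out of range")
    return age

# Positive test cases: expected input gives the expected output.
assert validate_age("30") == 30
assert validate_age("1") == 1      # lower boundary value
assert validate_age("120") == 120  # upper boundary value

# Negative test cases: error handling is part of the spec too.
for bad in ("0", "121", "-5", "abc", ""):
    try:
        validate_age(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except ValueError:
        pass  # rejection is the expected result
```

Note that the boundary values (1 and 120) appear on the positive side and their neighbors (0 and 121) on the negative side, which is how boundary-value scenarios turn into concrete test cases.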

To ensure full coverage, we can maintain a traceability matrix mapping use cases or requirements to test cases. This helps us ensure that all the requirements are covered while writing test cases.
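In its simplest form, a traceability matrix is just a mapping from requirement IDs to test case IDs, which makes coverage gaps mechanical to find (the IDs below are made up for illustration):

```python
# A sketch of a requirements-to-test-cases traceability matrix.
traceability = {
    "REQ-001 login":           ["TC-01", "TC-02"],
    "REQ-002 password reset":  ["TC-03"],
    "REQ-003 browser support": [],  # gap: no test cases written yet
}

# Coverage check: every requirement should map to at least one test case.
uncovered = [req for req, cases in traceability.items() if not cases]
assert uncovered == ["REQ-003 browser support"]
```

In practice the same matrix is usually kept in a spreadsheet or test management tool, but the coverage rule is the same: an empty row means an untested requirement.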

Once the test cases are written, we execute them as per the test plan defined earlier. While executing test cases, we compare the actual results with the expected results. In case of any discrepancy between actual and expected output, we log it as a defect. The priority and severity of the defect are defined, and it is reported to the development team.

Further, as a practice, you should maintain test results for each test case for every cycle of testing performed. We need to ensure that a new code drop does not cause any defect to reopen during the development cycles.

After defects are reported, the development team will deploy new code once the defects are fixed. The next step is to perform regression testing, based on executing the high-priority test scenarios and those related to the areas where the development team has made code changes.

During the last and final stage of testing, we need to ensure that all test cases are executed and no defects are found in verification. You can use automated testing, specifically for the areas of the application which are stable and will undergo minimal changes. This keeps the maintenance effort for the automation suite to a minimum and reduces the manual testing effort while providing coverage for the critical areas. My general approach is always to re-run the test cases against which defects were detected, along with the test cases for the related areas of the application. This helps in detecting defects which could have been injected by the developer while fixing the reported defect. In case of 100% success, all the defects are closed and the testing report is submitted. However, if any defect is re-opened or new defects are logged, then the above-mentioned steps are executed again after the development team deploys fresh code.

These are the steps usually followed under normal circumstances for performing functional testing. However, in certain projects, testers have to perform functional testing under the following scenarios:

Incomplete requirement documentation or no business requirements defined.
Testing is initiated after the project is developed.
Time allocated for testing is too short to perform thorough testing of the software.
The approach for functional testing will be different in each of the above cases. Let’s discuss each scenario and the approach a tester should follow:

1. Due to the lack of functional or business requirements, it is very difficult to write test cases or define test scenarios. In such cases, it is recommended that the tester communicate with the business users and developers to define valid test scenarios. The tester should envision the features and functionality of the software being developed and create scenarios based on them, then get these test scenarios validated by the users and developers. Based on the signed-off scenarios, the tester should create test cases. The execution cycle in this scenario remains the same as explained above.

2. The second case is relatively simple: since the software has already been developed, the testing time in the test plan is easier to arrange and the plan is more straightforward to execute. The basis for writing test cases should be a mix of the functional specifications and the actual software. This helps in ensuring that developers have not misinterpreted any requirement and that the development team has not missed implementing any requirement.

3. For the third scenario, we can perform a risk-based assessment of the application. With the help of business users, we define the most critical areas of the application from the business aspect. Similarly, after coordinating with the development team, we define the most complicated areas of the application from the development perspective. Our test scenarios and test cases will focus on these two aspects and cover the areas identified in both. If you have experience in testing, you can further save time by not writing test cases and instead performing testing according to the identified test scenarios. The report will then comprise only the test scenarios covered and the defects found.

In case you are stuck with a project with no requirements, a tight timeline, and no experience in similar testing, then you need to burn the midnight oil to prepare for it.

These are my personal opinions on software functional testing. Certain aspects may differ on a case-by-case basis; for example, for a project following an agile development methodology, the approach for Agile testing will be different than stated above. This article is for your reference only on how to initiate testing and move forward with test plans, test cases and test reports.

Source: http://softwareqatestings.com/introduction-to-software-testing/basic-steps-of-functional-testing.html