Friday, November 30

Testing Jokes - Part 2

Signs That You’re Dating A Tester
  • Your love letters get returned to you marked up with red ink, highlighting your grammar and spelling mistakes.
  • When you tell him that you won’t change something he has asked you to change, he’ll offer to allow you two other flaws in exchange for correcting this one.
  • When you ask him how you look in a dress, he’ll actually tell you.
  • When you give him the “It’s not you, it’s me” breakup line, he’ll agree with you and give the specifics.
  • He won’t help you change a broken light bulb because his job is simply to report and not to fix.
  • He’ll keep bringing up old problems that you’ve since resolved just to make sure that they’re truly gone.
  • In the bedroom, he keeps “probing” the incorrect “inputs”.
Who Is Who
  • A Project Manager is the one who thinks 9 women can deliver a baby in 1 month.
  • An Onsite Coordinator is the one who thinks 1 woman can deliver 9 babies in 1 month.
  • A Developer is the one who thinks it will take 18 months to deliver 1 baby.
  • A Marketing Manager is the one who thinks he can deliver a baby even if no man and woman are available.
  • A Client is the one who doesn’t know why he wants a baby.
  • A Tester is the one who always tells his wife that this is not the right baby.
Programmer Responses
Some sample replies that you get from programmers when their programs do not work:
  • “It works fine on MY computer”
  • “It worked yesterday.”
  • “It must be a hardware problem.”
  • “What did you type in wrong to get it to crash?”
  • “You must have the wrong version.”
  • “Somebody must have changed my code.”
  • “Why do you want to do it that way?”
  • “I thought I fixed that.”
Assessment Of An Opera
A CEO of a software company was given a ticket for an opera. Since he was unable to go, he passed the invitation to the company’s Quality Assurance Manager.
The next morning, the CEO asked him how he enjoyed it, and he was handed a report, which read as follows:
For a considerable period, the oboe players had nothing to do. Their number should be reduced, and their work spread over the whole orchestra, thus avoiding peaks of inactivity. All twelve violins were playing identical notes. This seems unnecessary duplication, and the staff of this section should be drastically cut. If a large volume of sound is really required, this could be obtained through the use of an amplifier. Much effort was involved in playing the demi-semiquavers. This seems an excessive refinement, and it is recommended that all notes be rounded up to the nearest semiquaver. No useful purpose is served by repeating with horns the passage that has already been handled by the strings. If all such redundant passages were eliminated, the concert could be reduced from two hours to twenty minutes.

 Source: http://softwaretestingfundamentals.com/software-testing-jokes/

Thursday, November 29

Web Testing: Complete guide on testing web applications

In my previous post I outlined points to consider while testing web applications. Here we will look at web application testing in more detail, along with web testing test cases. I always like to share practical knowledge that can be useful in your career. This is quite a long article, so sit back and relax to get the most out of it.
Let's start with the web testing checklist:
1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing
1) Functionality Testing:
Test: all the links in web pages, database connections, forms used in the web pages for submitting or getting information from users, and cookies.
Check all the links:
  • Test the outgoing links from all the pages of the specific domain under test.
  • Test all internal links.
  • Test links jumping within the same page (anchor links).
  • Test links used to send email to the admin or other users from web pages.
  • Test to check whether there are any orphan pages.
  • Lastly, check for broken links in all of the above-mentioned links (a small link-checker sketch follows this list).
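As a rough illustration of how the broken-link part of this checklist might be automated, here is a minimal Python sketch. It assumes the third-party requests and beautifulsoup4 packages are available, and the page URL is only a placeholder; a real checker would crawl more than one page.

# Minimal broken-link check for a single page (assumes requests + beautifulsoup4).
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

PAGE = "http://www.example.com/"  # placeholder: page under test

def check_links(page_url):
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for anchor in soup.find_all("a", href=True):
        link = urljoin(page_url, anchor["href"])
        if link.startswith("mailto:"):
            continue  # email links need a different kind of check
        try:
            status = requests.head(link, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            print("BROKEN:", link, status)

if __name__ == "__main__":
    check_links(PAGE)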
Test forms in all pages:
Forms are an integral part of any web site. They are used to get information from users and to keep interacting with them. So what should be checked on these forms?
  • First check all the validations on each field.
  • Check for the default values of fields.
  • Check wrong inputs to the fields in the forms.
  • Check options to create, delete, view, or modify forms, if any.
Let's take the example of the search engine project I am currently working on. In this project we have advertiser and affiliate signup steps. Each signup step is different but depends on the other steps, so the signup flow should execute correctly. There are different field validations, such as email IDs and user financial information. All these validations should be checked in manual or automated web testing; a small example follows.
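As a minimal illustration of the kind of field validation a tester might automate, here is a Python sketch using only the standard library. The regular expression and the validate_email helper are invented for the example; they are not the project's actual validation rules.

# Illustrative field-validation check: a simple email format rule plus test inputs.
import re

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")  # simplified rule for the sketch

def validate_email(value: str) -> bool:
    return bool(EMAIL_RE.match(value))

# (input, expected result) pairs a tester might feed into the signup form
cases = [
    ("user@example.com", True),
    ("user.name+tag@example.co.uk", True),
    ("no-at-sign.example.com", False),
    ("user@", False),
    ("", False),
]

for value, expected in cases:
    actual = validate_email(value)
    print("PASS" if actual == expected else "FAIL", repr(value))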
Cookies testing:
Cookies are small files stored on the user's machine. They are basically used to maintain sessions, mainly login sessions. Test the application by enabling and disabling cookies in your browser options. Test whether cookies are encrypted before being written to the user's machine. If you are testing session cookies (i.e. cookies that expire after the session ends), check login sessions and user stats after the session ends. Check the effect on application security of deleting cookies. (I will soon write a separate article on cookie testing.)
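One cookie check that is easy to automate is inspecting the Set-Cookie response header of a login request for the Secure and HttpOnly attributes. Here is a minimal sketch with the requests package; the URL and form field names are placeholders.

# Minimal cookie-attribute check (assumes requests; URL and form fields are placeholders).
import requests

LOGIN_URL = "https://www.example.com/login"  # placeholder

response = requests.post(LOGIN_URL, data={"username": "tester", "password": "secret"})

# requests folds repeated Set-Cookie headers into one comma-separated string
set_cookie = response.headers.get("Set-Cookie", "")
print("Set-Cookie:", set_cookie or "(none)")
print("Secure flag present:  ", "secure" in set_cookie.lower())
print("HttpOnly flag present:", "httponly" in set_cookie.lower())

# the parsed cookies themselves are available for session checks
for cookie in response.cookies:
    print("cookie:", cookie.name, "expires:", cookie.expires)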
Validate your HTML/CSS:
If you are optimizing your site for search engines, then HTML/CSS validation is very important. Mainly validate the site for HTML syntax errors. Check whether the site is crawlable by different search engines.
Database testing:
Data consistency is very important in a web application. Check for data integrity and errors while you edit, delete, or modify forms or perform any DB-related functionality.
Check that all database queries execute correctly, and that data is retrieved and updated correctly. Database testing can also cover load on the DB; we will address this under web load and performance testing below.
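To make the idea concrete, here is a tiny, self-contained sketch using sqlite3 from the Python standard library: perform an update the way the application's edit form would, then verify the data landed correctly. The table and column names are invented for the example.

# Tiny data-consistency check using an in-memory SQLite database (illustrative schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.execute("INSERT INTO users (id, email) VALUES (1, 'old@example.com')")
conn.commit()

# simulate the 'edit form' operation the application would perform
conn.execute("UPDATE users SET email = ? WHERE id = ?", ("new@example.com", 1))
conn.commit()

# verify the data was updated correctly and nothing else changed
row = conn.execute("SELECT email FROM users WHERE id = 1").fetchone()
assert row == ("new@example.com",), f"unexpected row: {row}"
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
assert count == 1, f"unexpected row count: {count}"
print("database update verified")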
2) Usability Testing:
Test for navigation:
Navigation means how the user surfs the web pages and uses different controls like buttons and boxes, and how the user follows the links on the pages to move between pages.
Usability testing includes:
The web site should be easy to use. Instructions should be provided clearly. Check whether the provided instructions are correct, i.e. whether they satisfy their purpose.
The main menu should be provided on each page and should be consistent.
Content checking:
Content should be logical and easy to understand. Check for spelling errors. Dark colors annoy users and should not be used in the site theme. You can follow the standards that are commonly accepted for web page and content building, covering things like the annoying colors, fonts, frames, etc. mentioned above.
Content should be meaningful. All the anchor text links should be working properly. Images should be placed properly with proper sizes.
These are some basic standards that should be followed in web development. Your task is to validate all of them during UI testing.
Other information for user help:
This includes the search option, sitemap, help files, etc. The sitemap should be present, with all the links in the web site and a proper tree view of navigation. Check all links on the sitemap.
A "search in the site" option helps users find the content pages they are looking for easily and quickly. These are optional items and, if present, should be validated.
3) Interface Testing:
The main interfaces are:
  • Web server and application server interface
  • Application server and database server interface
Check that all interactions between these servers are executed properly and that errors are handled properly. If the database or web server returns an error message for any query from the application server, then the application server should catch and display these error messages appropriately to the users. Check what happens if the user interrupts a transaction in between. Check what happens if the connection to the web server is reset in between.
4) Compatibility Testing:
Compatibility of your web site is a very important testing aspect. These are the compatibility tests to be executed:
  • Browser compatibility
  • Operating system compatibility
  • Mobile browsing
  • Printing options
Browser compatibility:
In my web testing career I have found this to be the most influential part of web site testing.
Some applications are very dependent on browsers. Different browsers have different configurations and settings that your web page should be compatible with. Your web site code should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI functionality, security checks, or validations, then put more emphasis on browser compatibility testing of your web application.
Test the web application on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari, and Opera, with different versions.
OS compatibility:
Some functionality in your web application may not be compatible with all operating systems. New technologies used in web development, such as graphic designs and interface calls to different APIs, may not be available on all operating systems.
Test your web application on different operating systems like Windows, Unix, Mac, Linux, and Solaris, with different OS flavors.
Mobile browsing:
This is a new technology age, and mobile browsing will only grow. Test your web pages on mobile browsers; compatibility issues may show up on mobile.
Printing options:
If you provide page-printing options, make sure fonts, page alignment, and page graphics are printed properly. Pages should fit the paper size or the size specified in the printing options.
5) Performance testing:
The web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing
Test application performance on different internet connection speeds.
In web load testing, test what happens when many users access or request the same page. Can the system sustain peak load times? The site should handle many simultaneous user requests, large input data from users, simultaneous connections to the DB, heavy load on specific pages, etc.
Stress testing: Generally, stress means stretching the system beyond its specified limits. Web stress testing is performed to break the site by applying stress and checking how the system reacts to that stress and how it recovers from crashes.
Stress is generally applied to input fields, and to login and signup areas.
In web performance testing, web site functionality on different operating systems and different hardware platforms is checked for software and hardware memory leak errors. A minimal load-test sketch follows.
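Here is the promised minimal load-test sketch in Python: it fires a number of concurrent GET requests at one page and reports errors and response times. It assumes the requests package; the URL and user count are placeholders, and a real load test would use a proper tool and much larger numbers.

# Very small load-test sketch: N concurrent GET requests against one page,
# reporting error count and response times (assumes requests; URL is a placeholder).
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://www.example.com/"  # placeholder page under load
USERS = 20                        # simulated simultaneous users

def hit(_):
    start = time.time()
    try:
        status = requests.get(URL, timeout=15).status_code
    except requests.RequestException:
        status = None
    return status, time.time() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(hit, range(USERS)))

errors = [s for s, _ in results if s is None or s >= 500]
times = [t for _, t in results]
print(f"requests: {len(results)}, errors: {len(errors)}")
print(f"slowest: {max(times):.2f}s, average: {sum(times)/len(times):.2f}s")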
6) Security Testing:
Following are some test cases for web security testing:
  • Test by pasting an internal URL directly into the browser address bar without logging in. Internal pages should not open.
  • If you are logged in with a username and password and browsing internal pages, try changing URL options directly. For example, if you are checking some publisher site statistics with publisher site ID=123, try directly changing the URL site ID parameter to a different site ID that is not related to the logged-in user. This user should be denied access to view other users' stats.
  • Try some invalid inputs in input fields like login username, password, and input text boxes. Check the system's reaction to all invalid inputs.
  • Web directories or files should not be accessible directly unless a download option is given.
  • Test the CAPTCHA against automated script logins.
  • Test whether SSL is used as a security measure. If it is, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages and vice versa.
  • All transactions, error messages, and security breach attempts should be logged to log files somewhere on the web server. (A couple of these checks are sketched below.)
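Here are two of the checks above sketched in Python with the requests package. The URLs, form fields, site ID, and expected status codes are placeholders; the right expectation depends on how the application under test is supposed to behave.

# Two of the checks above, sketched with requests (all URLs/IDs are placeholders,
# and the expected behaviour depends on the application under test).
import requests

BASE = "https://www.example.com"

# 1) Internal page requested without logging in: expect a redirect to the login
#    page or an explicit 401/403, never a 200 with private content.
r = requests.get(f"{BASE}/admin/reports", allow_redirects=False)
assert r.status_code in (301, 302, 401, 403), f"internal page served without login: {r.status_code}"

# 2) Parameter tampering: log in as one publisher, then ask for another site ID.
session = requests.Session()
session.post(f"{BASE}/login", data={"username": "tester", "password": "secret"})
r = session.get(f"{BASE}/stats", params={"site_id": 999})  # not our site
assert r.status_code in (401, 403, 404), f"stats of another user were exposed: {r.status_code}"
print("both access-control checks passed")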

http://www.softwaretestinghelp.com/web-application-testing/

The Dangers of Test Automation

One of my favorite TV shows right now is House.  In it a brilliant but antisocial doctor and his staff try to solve medical mysteries.  If you don't watch it, you should.  The writing is great.  Earlier this season, there was an episode where a journalist hits his head and has slurred speech.  The team is trying to diagnose this and having no luck.  Toward the end, House tells them to go do the blood work again and "don't use a computer."  When they look at the blood under a microscope, the cause becomes readily apparent.  There are parasites in the blood.  The patient has malaria.
   Just like the computers examining the blood were not programmed to look for parasites, so too the software that we write to test a program is often not programmed to look for all of the potential failures.  When we write test automation, we are focused on one thing.  If testing a video renderer, the test will make sure that video output is correct.  What if that output causes the UI to be distorted?  What if it causes audio to glitch?  The test wasn't programmed to look for that.
   About a year ago I first visited the concept of test automation.  In that article I gave several reasons why the best testing must be a mix of both manual testing and test automation.  The idea that all tests should be automated continues to pervade the industry.  It is thought that testers are expensive and automation is cheap.  Over the long haul, that may be true.  It is especially true when projects are in sustaining mode that you want primarily automated tests.  However, when developing a new product, relying solely on automated testing can be disastrous.  In addition to the issues I talked about in my last post, there is a danger I didn't discuss.  That is the danger of missing something obvious but unforeseen.
   Before diving in, let me get some definitions out of the way.  Manual testing is just that:  manual.  It involves a human being interacting with the program and observing the results.  Test automation is the use of a programming language to drive the program and automatically determine whether the right actions are taking place. 
   In addition to the higher cost and thus higher latency of automated testing, it is also possible that automated testing will just miss things.  Sometimes really obvious things.  Just like the computers in House that missed the parasites, so too will test automation miss things.  Just recently I came across a bug that I'm convinced no amount of automation would ever find.  The issue was this:  while playing a CD, pressing next song caused the volume to maximize.  To a human, this jumped out.  To a computer designed to test CD playback, all would seem normal.  The next song did indeed play.  Even a sophisticated program that knew what each chapter sounds like would probably not notice that the volume was too high.  A programmer would specifically have to go looking for this. 
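   As a purely hypothetical illustration of that blind spot, compare the two checks below. The player object and its attributes are invented; the point is that the first test verifies exactly what it was written to verify and nothing more, so the volume side effect sails right past it.

# Hypothetical illustration: the first check verifies only what it was written to
# verify (the track changed), so the volume side effect slips straight past it.
# The 'player' object and its attributes are invented.
def test_next_track(player):
    before = player.current_track
    player.next_track()
    assert player.current_track == before + 1   # this is all the automation "sees"

def test_next_track_with_side_effect_check(player):
    volume_before = player.volume
    before = player.current_track
    player.next_track()
    assert player.current_track == before + 1
    # only because a human noticed the bug do we now know to look for this:
    assert player.volume == volume_before, "volume changed while skipping tracks"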
   There are a near-infinite number of things that can go wrong in software.  Side effects are common.  Automation cannot catch all of these.  Each one has to be specifically programmed for.  At some point, the returns diminish and the test is set in stone.  Any bugs which lie outside that circle will never be found.  At least, not until the program is shipped and people try to actually use it.
   The moral of this story:  never rely solely on automation.  It is costly to have people look at your product but it is even costlier to miss something.  You have to fix it late in the process--perhaps after you ship--which is really expensive.  You lose credibility which is even more expensive.  Deciding the mix of manual and automated testing is a balancing act.  If you go too far in either direction, you'll fall.

http://blogs.msdn.com/b/steverowe/archive/2006/03/17/553905.aspx

Thursday, November 22

What is SEI? CMM? ISO? IEEE? ANSI? Will it help?

· SEI = ‘Software Engineering Institute’ at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
· CMM = ‘Capability Maturity Model’, developed by the SEI. It’s a model of 5 levels of organizational ‘maturity’ that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.
Level 2 – software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
Level 3 – standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 – metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
Level 5 – the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
· ISO = ‘International Organization for Standardization’ – The ISO 9001, 9002, and 9003 standards concern quality systems that are assessed by outside auditors, and they apply to many kinds of production and manufacturing organizations, not just software. The most comprehensive is 9001, and this is the one most often used by software development organizations. It covers documentation, design, development, production, testing, installation, servicing, and other processes. ISO 9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software development organizations. The U.S. version of the ISO 9000 series standards is exactly the same as the international version, and is called the ANSI/ASQ Q9000 series. The U.S. version can be purchased directly from the ASQ (American Society for Quality) or the ANSI organizations. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO 9000 certification does not necessarily indicate quality products – it indicates only that documented processes are followed.
· IEEE = ‘Institute of Electrical and Electronics Engineers’ – among other things, creates standards such as ‘IEEE Standard for Software Test Documentation’ (IEEE/ANSI Standard 829), ‘IEEE Standard for Software Unit Testing’ (IEEE/ANSI Standard 1008), ‘IEEE Standard for Software Quality Assurance Plans’ (IEEE/ANSI Standard 730), and others.
· ANSI = ‘American National Standards Institute’, the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

http://www.softwaretestinghelp.com/what-is-sei-cmm-iso-ieee-ansi-will-it-help/

Advanced Test Harness Features

In a past post I talked about the basic functionality of a test harness.  That is, it should be something that provides a reusable framework for an application, for running and reporting the results of test cases.  There is much more that a test harness can do, however.  It can provide mechanisms for lightweight test development, model based testing, and scripting.

Basic test harnesses like the Shell98 I spoke of last time or cppunit are what I would call heavyweight harnesses.  By that I mean that the testing code is statically bound to the harness at compile time.  In the most basic form the harness comes in the form of source code that is compiled along with the test code.  A slightly more advanced model involves a separate library that is statically linked with the test code to form a standalone executable.  This is fine but it means longer compile times, larger binaries, and potentially less flexibility.

There is a better way to do this.  The test harness and test cases can be separated.  The harness is compiled into an executable and the test cases are loaded by it dynamically.  I call this a lightweight harness.  As the harness no longer knows what test cases it will be tasked with executing, this requires that the tests are discoverable in some manner by the test harness.  The test cases are usually collected in dlls, jar files, or assemblies which are loaded by the harness.  The harness uses reflection (C# or Java) or a custom interface to discover which tests are in the test file.  This system is much more complex than the static binding but it offers several advantages.  It separates the test cases from the harness, allowing them to be varied independently.  It decreases compile times and reduces binary size.  It can also allow a single instance of a test harness to run many different types of tests.  With a static model, this scenario would require running many different executables.  Nunit is an example of a lightweight test harness.
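A rough Python analogue of this lightweight model might look like the sketch below: the harness is its own program, loads a test module by name at run time, and uses reflection to discover anything named test_*. The module name passed on the command line is a placeholder.

# Sketch of 'lightweight' discovery: the harness executable loads a test module at
# run time and uses reflection to find callables named test_* (module name is a placeholder).
import importlib
import sys

def discover_and_run(module_name):
    module = importlib.import_module(module_name)   # dynamic load, no static linking
    tests = [getattr(module, name) for name in dir(module)
             if name.startswith("test_") and callable(getattr(module, name))]
    failures = 0
    for test in tests:
        try:
            test()
            print("PASS", test.__name__)
        except AssertionError as exc:
            failures += 1
            print("FAIL", test.__name__, "-", exc)
    return failures

if __name__ == "__main__":
    sys.exit(discover_and_run(sys.argv[1]))  # e.g. python harness.py my_tests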

Another feature that advanced test harnesses will have is support for model based testing.  Model based testing is a method of testing where test cases are generated automatically by the test framework based on a well-defined finite state machine.  A good description can be found on Nihit Kaul’s blog.  A test harness which supports model based testing will provide mechanisms for defining states and state transitions (actions) which it will use to generate and execute test cases.  The harness will also need to support a mechanism for verifying that the system is still correct after each transition.  Setting up model based testing usually requires a lot of work up front.  The payoff can be quite high though.
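Here is a toy sketch of the idea, not any particular harness's API: a small finite state machine for a hypothetical media player, from which the framework enumerates action sequences to execute as test cases.

# Toy model-based testing sketch: a finite state machine for a media player is used
# to generate short action sequences; the model and the verification step are invented.
TRANSITIONS = {               # state -> {action: next_state}
    "stopped": {"play": "playing"},
    "playing": {"pause": "paused", "stop": "stopped"},
    "paused":  {"play": "playing", "stop": "stopped"},
}

def generate_cases(start="stopped", length=3):
    """Enumerate all valid action sequences of the given length from the model."""
    cases = []
    def walk(state, actions):
        if len(actions) == length:
            cases.append(actions)
            return
        for action, nxt in TRANSITIONS[state].items():
            walk(nxt, actions + [action])
    walk(start, [])
    return cases

for case in generate_cases():
    # each generated case would be executed against the real system, verifying after
    # every transition that the observed state still matches the model
    print(" -> ".join(case))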

In a simple test harness, a test case is not provided any context in which to run.  Imagine the test cases as functions which take no parameters.  They will run exactly the same each time they are executed.  An advanced test harness will provide a mechanism to modify the test cases via parameters or scripting.  The simple method is to allow parameters to be passed to the test cases via a configuration file.  This is useful when you want several tests which vary only on a single parameter.  I wished for this feature when I was testing DVD playback.  In order to test the performance of our DVD navigator on different discs, I needed test cases to play different chapters.  I was forced to create a test case per chapter I wanted to play.  It would have been much simpler to write one test case and then allow a parameter which supplied the chapter to play.  Some harnesses might even provide a full scripting model where you could programmatically call test cases and provide parameters.  It is easy to envision embedding VB, Python, or even Lisp into a test harness and using that to control the test case execution.  The advantages to both of these methods are the ability to easily vary the test coverage without being required to program.  This makes test case creation accessible to non-programmers and saves programmers a lot of time over compilation.
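A minimal sketch of the configuration approach, with the DVD-chapter scenario in mind: one test case, parameterized by values read from a configuration source. The config contents, its format, and play_chapter are all invented for the illustration.

# One test case, many parameters: chapters come from configuration instead of being
# hard-coded into separate test cases (config format and play_chapter() are illustrative).
import json

CONFIG = '{"chapters": [1, 2, 5, 12]}'   # in real use this would live in a config file

def play_chapter(chapter):
    # stand-in for the real playback call; here we only pretend it succeeded
    return True

def test_play_chapter(chapter):
    assert play_chapter(chapter), f"chapter {chapter} failed to play"

if __name__ == "__main__":
    params = json.loads(CONFIG)
    for chapter in params["chapters"]:
        test_play_chapter(chapter)
        print("PASS chapter", chapter)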

Very advanced test harnesses provide mechanisms to execute the tests on a pool of machines.  We might call these test systems rather than test harnesses.  I’ll discuss these in another post.

http://blogs.msdn.com/b/steverowe/archive/2006/05/06/591764.aspx

List of Certified Testers in Hispanic America (HASTQB)


 

List of certified testers in Hispanic America
Last updated: October 2013


List of HASTQB Foundation Level certified people
1,714 people from 13 countries, mainly from Argentina, Colombia, and Mexico

List of HASTQB Advanced Level certified people
39 people from 7 countries, practically all of them as Test Manager (except two people)

Tuesday, November 20

REMEMBER SOFTWARE TESTING 10 RULES

The way to become a good tester:
Remember these ten rules and I am sure you will gain very good testing skills.
1. Test early and test often.
2. Integrate the application development and testing life cycles. You’ll get better results and you won’t have to mediate between two armed camps in your IT shop.
3. Formalize a testing methodology; you’ll test everything the same way and you’ll get uniform results.
4. Develop a comprehensive test plan; it forms the basis for the testing methodology.
5. Use both static and dynamic testing.
6. Define your expected results.
7. Understand the business reason behind the application. You’ll write a better application and better testing scripts.
8. Use multiple levels and types of testing (regression, systems, integration, stress and load).
9. Review and inspect the work; it will lower costs.
10. Don’t let your programmers check their own work; they’ll miss their own errors.

http://www.softwaretestinghelp.com/remember-software-testing-10-rules/

Monday, November 19

Test Harness Basics

A short while back one of my readers asked what a test harness was.  I will answer that in a pair of posts.  This first post will describe the basics of a test harness.  The second post will talk about more advanced features that a test harness might have.
   When you are writing test cases, whether they are unit tests, tests for test-driven development, regression tests, or any other kind, there are some functions you will find yourself doing each time.  These include the basics of an application, a mechanism for launching tests, and a way to report results.  You could write these functions each time you start testing a new feature or you could write them once and leverage them every time.  A test harness, at its most simple, is just that—a system to handle the elements you’ll repeat each time you write a test.  Think of it as scaffolding upon which you will build your tests.
   A basic test harness will contain at least the following elements:  application basics, test launching, and result reporting.  It may also include a graphical user interface, logging, and test case scripting.  It should be noted that harnesses will generally be written for one language or runtime (C/C++, Java, .Net, etc.).  It is hard to write a good harness which will work across runtimes.
   To run a test, the first thing you need is an application.  Each operating system has a different way to write an application.  For example, on Windows if you want any GUI, you need things like a message pump and a message loop.  This handles the interaction with the operating system.  The harness will include code to start the program, open any required files, select which cases are run, etc.
   The next thing you need is a way to actually launch the tests.  Most of the time a test case is just a function or method.  This function does all of the actual work of the test case.  It calls the API in question, verifies the results, and informs the framework whether the case passed or failed.  The test harness will provide a standardized way for test cases to advertise themselves to the system and an interface by which they will be called.  In the most simple system, a C/C++ harness might advertise its cases by adding function pointers to a table.  The harness may provide some extra services to the test cases by allowing them to be called in a specified order or to be called on independent threads.
   The third basic pillar of a test harness is a way to inform the user of which cases pass and which fail.  They provide a way for test cases to output messages and to report pass/fail results.  In the most basic harness, this could be just a console window in which test cases will print their own messages.  Better than that is a system which automatically displays each test case name and its results.  Usually there is a summary at the end of how many cases passed or failed.  This could be textual or even a big red or green bar.  More advanced systems have built-in logging systems.  They provide a standardized way for the test cases to output trace messages informing the user of each important call as it is being made.  The harness may simply log to a text file but it may also provide a parsable format like XML or even interact directly with a database for result retention.
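   Pulling the three pillars together, here is a deliberately tiny Python sketch of a harness: tests "advertise" themselves by registering in a table (the analogue of the function-pointer table mentioned above), and the harness launches them and reports a summary. It is an illustration of the shape, not a real harness.

# Minimal harness sketch: tests register themselves in a table, the harness runs
# them and reports pass/fail results plus a summary.
TESTS = []

def testcase(func):
    TESTS.append(func)        # 'advertise' the test to the harness
    return func

@testcase
def addition_works():
    assert 1 + 1 == 2

@testcase
def subtraction_works():
    assert 3 - 1 == 2

def run_all():
    passed = failed = 0
    for test in TESTS:
        try:
            test()
            passed += 1
            print("PASS", test.__name__)
        except AssertionError as exc:
            failed += 1
            print("FAIL", test.__name__, "-", exc)
    print(f"summary: {passed} passed, {failed} failed")

if __name__ == "__main__":
    run_all()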
   At Microsoft, many groups write their own test harnesses which are specialized to support the unique needs of the organization.  For example, my team uses a harness called Shell98 which has, among other things, support for device enumeration.  Good examples of freely available test harnesses are the xUnit series like cppUnit, nUnit, and jUnit.  These are designed for unit testing and are not very feature-rich.  I’ve used cppUnit which is very basic and nUnit which is pretty slick.  The xUnit harnesses do not do any logging and you do not have control over the order in which tests are run.  They are intended for a user to run and visually inspect the results.  The harness I use allows for scripting and outputs its results to a database.
 http://blogs.msdn.com/b/steverowe/archive/2006/04/27/585758.aspx

Thursday, November 15

Testing Systems

This is the third in my series on test harnesses. In this post, I'll talk about systems that do much more than simple test harnesses. Test harnesses provide a framework for writing and executing test cases. Test harnesses focus on the actual execution of test cases. For complex testing tasks, this is insufficient. A test system can enhance a test harness by supplying a back end for results tracking and a mechanism to automatically run the tests across multiple machines.
A test harness provides a lot of functionality to make writing tests easier, but it still requires a lot of manual work to run them. Typically a test application will contain dozens or even hundreds of test cases. When run, it will automatically execute all of them. It still requires a person to set up the system under test, copy the executable and any support files to the system, execute them, record results, and then analyze those results. A test system can be used to automate these tasks.
The most basic service provided by a testing system is that of a database. The test harness will log test results to a database instead of (or in addition to) a file on the drive. The advantages to having a database to track test results are numerous. The results can be compared over time. The results of multiple machines running the same tests can be combined. The aggregate pass/fail rate can be easily determined. An advanced system might even have the ability to send mail or otherwise alert users when a complete set of tests is finished or when certain tests fail.
It is imperative that any database used to track testing data have good reporting capabilities. The first database I was forced to use to track test results was, unfortunately, not strong in the reporting area. It was easy to log results to the database, but trying to mine for information later was very difficult. You basically had to write your own ASP page which made your own SQL calls to the database and did your own analysis. I lovingly called this system the "black hole of data." A good system has a query builder built in (probably on a web page) which lets users get at any data they want without the necessity of knowing the database schema and the subtleties of SQL. The data mining needs to go beyond simple pass/fail results. It is often interesting to see data grouped by a piece of hardware on a machine or a particular OS. The querying mechanism needs to be flexible enough to handle pivoting on many different fields.
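As a sketch of what the results back end might look like at its simplest, here is a Python example that logs results to a SQLite database and then queries the aggregate pass rate per run. The schema and the sample data are invented.

# Sketch of a results back end: log each run to SQLite and query the pass rate
# (schema and data are invented for the example).
import sqlite3

db = sqlite3.connect("results.db")
db.execute("""CREATE TABLE IF NOT EXISTS results (
                  run_id TEXT, machine TEXT, test_name TEXT, passed INTEGER)""")

def log_result(run_id, machine, test_name, passed):
    db.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
               (run_id, machine, test_name, int(passed)))
    db.commit()

log_result("build-1234", "lab-pc-07", "playback_basic", True)
log_result("build-1234", "lab-pc-07", "playback_seek", False)

# aggregate pass rate per run, grouped the way a report page might show it
for run_id, total, passed in db.execute(
        "SELECT run_id, COUNT(*), SUM(passed) FROM results GROUP BY run_id"):
    print(f"{run_id}: {passed}/{total} passed ({100.0 * passed / total:.0f}%)")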
Another feature often provided by a test system is the ability to automatically run tests across a pool of machines. For this to work, there is a specified set of machines set aside for use by the testing system. Upon a specified event, the test system invokes the test harness on specific machines where they execute the tests and record the results back to the database. These triggering events might be a specified time, the readiness of a build of the software, or simply a person manually scheduling a test.
Part of this distributed testing feature is preparing the machines for testing. This may involve restoring a drive image, copying down the test binaries and any supporting files they might need, and conducting setup tasks. Setup tasks might be setting registry entries, registering files, mapping drives, and installing drivers. After the tests are run, the testing system will execute tasks to clean up and restore the machine to a state ready to run more tests.
Having a testing system with these capabilities can be invaluable on a large project. It can be used to automate what we at Microsoft call BVTs or Build Verification Tests. These are tests that are run at each build and verify basic functionality before more extensive manual testing is done. Through the automation of distributed testing, substantial time can be saved setting up machines and executing tests. People can spend more time analyzing results and investigating failures instead of executing tests.
It is important that I note here the downside of testing systems. Once you have a full-featured testing system, it is tempting to try to automate everything. Rather than spending money having humans run tests, it is possible to just use the test system to run everything. This is fine to a point but beyond that, it is dangerous. It is very easy to get carried away and automate everything. This has two downsides. First, it means you'll miss bugs. Remember, once you have run your automated tests the first time, you will never, ever find a new bug. You may find a regression, but if you missed a bug, you'll never find it. As I've discussed before, it is imperative to have someone manually exploring a feature. Second, it means that your testers will not develop a solid understanding of the feature and thus will be less able to find bugs and help with investigation. When a system is overly automated, testers tend to spend all of their time working with the test system and not with the product. This is a prescription for disaster.
When used properly, a good test harness coupled with a good test system can save substantial development time, improve the amount of test coverage you are able to do in a given period of time, and make understanding your results much easier. When used poorly, they can lull you into a false sense of security.

http://blogs.msdn.com/b/steverowe/archive/2006/05/18/600735.aspx

Monday, November 12

Should companies charge the clients on the basis of number of bugs?

I came across an interesting software testing outsourcing technique here, based on a pay-per-bug approach. This testing consultancy is offering testing services where clients are charged on the basis of the number of bugs found in the application.
Why outsourcing?
Many companies are outsourcing software testing work to concentrate on their core business competencies. The company saves time and money on testing processes that are tedious to perform in-house.
Current offshore teams work on a per-project deal, meaning the cost of software testing is decided according to:
  • Project size
  • Time for testing
  • Number of resources
  • Deliverables.
What if offshore teams start charging the clients on the basis of number of bugs?
Many companies don't want to pay for documentation, as they might not require it in the future. Such clients are more interested in getting the testing work done at a lower cost and within their budget. A pay-per-bug approach can work well for small projects where the requirements are straightforward and clear. In this approach the client has the freedom to select the testing area, e.g. only UI, only functional, or only security testing, as per their requirements.
Will Pay per bug outsourcing technique work?
But I see many complexities in this model. It might not work for all kinds of projects. What if the tester works to find only small, easy-to-find bugs to increase the bug count and does not concentrate on the complex modules? One solution for this could be to charge the client a fixed price per bug, or to vary this amount according to the bug complexity and the type of testing.
Another problem with this model is how to decide the severity of a bug. Testers will log bugs with high severity if they earn a higher rate based on bug complexity, while the client will try to reduce the bug severity to reduce the testing cost. It's also difficult to decide whether something is a bug or a feature. And what about the rejected or 'won't fix' type of bugs?
The worst case: if you are getting paid on the basis of the number of bugs and you spend enough time to find them, but the application is robust enough that you end up with very few or no bugs, how would you cover the cost of the testing resources?

There are many such areas of controversy, but the concept is good and can be a good outsourcing solution if handled effectively.

http://www.softwaretestinghelp.com/charge-client-on-basis-of-number-of-bugs/

Making Hard Choices - Which Bugs Do We Fix?

We just shipped Vista Beta 2 and we are now on the march toward shipping what we call RCs or Release Candidates.  During this time, we are forced to make many hard choices about what bugs to fix and what bugs we can live with.  There are those who believe that software should ship without bugs but those people have probably never tried to ship a product of any significant size.  There will be bugs in software, just like there are bugs in a new house or a new car.  The important thing is to make sure there are no important bugs. 
Eric Sink is a software developer at SourceGear and I recently ran across an article by him about shipping with bugs.  Eric makes a few points I think worth repeating.  He says there are 4 questions that must be asked:
1) How bad is its impact? (Severity)
2) How often does it happen? (Frequency)
3) How much effort is required to fix it? (Cost)
4) What is the risk of fixing it? (Risk)
The first two determine whether the bug is worth fixing at all.  If the bug doesn't pass muster there, you don't even look at the 3rd and 4th.  A bug being easy to fix is never a reason to take the fix late in a product cycle.  Just because it is a single line doesn't mean we should do it.  There is risk with all fixes.  I've seen the easiest ones go wrong.
The second two can be reasons to reject even righteous bugs.  If the bug is bad and happens often but is very hard or very risky, it still might not be worth fixing.  This is a hard balancing act.  Sometimes the bug is bad enough that you delay the product.  Other times it is possible to live with it.  Making that choice is never a fun one.  As owners of a product, we want all bugs to be easy to fix and have low risk.  Unfortunately, that isn't always the case and when it isn't, hard choices have to be made.
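As a hypothetical illustration only, the four questions can be imagined as a tiny triage helper like the one below. The 1-to-5 scales and the thresholds are invented; in practice the hard part is the judgment behind the numbers, not the arithmetic.

# The four questions as a small decision helper. The 1-5 scales and thresholds are
# invented for illustration; the judgment they encode is the hard part in practice.
def should_fix(severity, frequency, cost, risk, late_in_cycle=True):
    # Questions 1 and 2: is the bug even worth considering?
    if severity * frequency < 6:          # low impact or rarely hit
        return False
    # Questions 3 and 4: can even a 'righteous' bug be rejected?
    if late_in_cycle and (cost >= 4 or risk >= 4):
        return False
    return True

print(should_fix(severity=5, frequency=4, cost=2, risk=2))  # bad, common, cheap, safe -> True
print(should_fix(severity=5, frequency=4, cost=2, risk=5))  # bad and common, but risky -> False
print(should_fix(severity=1, frequency=2, cost=1, risk=1))  # trivial and easy -> still False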

http://blogs.msdn.com/b/steverowe/archive/2006/05/28/609149.aspx

Tester Foundation Level - ISTQB CTFL Certification

The ISTQB Certified Tester Foundation Level (CTFL) certification is the most widely accepted worldwide. It improves the general view of the main practices that support the execution of software testing within software engineering, and it provides solid foundations for the management and implementation of testing.

Objectives

·  Provide principles, skills, and terminology for software testing processes.
·  Define structures and roles for test management.
·  Determine risk levels for the quality of a software product.
·  Use a project's requirements documentation to define test scripts and test cases.
·  Provide methods for the design, planning, and management of tests throughout the software life cycle.
·  Select the appropriate testing techniques and tools.
·  Raise, report, and track defects accurately.

Chapter I – Fundamentals of testing

  1. Why is testing necessary?
  2. What is testing?
  3. General testing principles
  4. Fundamental test process
  5. The psychology of testing
  6. Code of ethics

Chapter II – Testing throughout the software life cycle

  1. Software development models
  2. Test levels
  3. Test types
  4. Maintenance testing

Chapter III – Static techniques

  1. Static techniques and the test process
  2. The review process
  3. Static analysis by tools

Chapter IV – Test design techniques

  1. The test development process
  2. Categories of test design techniques
  3. Specification-based or black-box techniques
  4. Structure-based or white-box techniques
  5. Experience-based techniques
  6. Choosing test techniques

Chapter V – Test management

  1. Test organization
  2. Test planning and estimation
  3. The test process
  4. Test status monitoring and control
  5. Configuration management
  6. Risk and testing

Chapter VI – Test tools

  1. Incident management
  2. Types of test tools
  3. Effective use of test tools
  4. Introducing test tools into an organization
  5. Summary

Thursday, November 8

Requirements

Maybe it was that southern drawl.
Or maybe it was because I got mad.
I'm not sure why I still remember this moment so clearly, but I do.  It happened when I was at Spyglass, over ten years ago.  Several of us developers were in a meeting with Steve Stone, then recently-hired as director of the Champaign office.  We were talking about a possible new feature.  Steve, in his Alabama accent, asked,
"So is that a requirement?"
A couple years later, I realized that I misunderstood the question.  I didn't have enough project management background to know the particular way that he was using the word "requirement".  For me at the time, the word "requirement" had connotations of absolute necessity.  So when Steve asked the question, here is what I heard:
"So is this feature something that absolutely must be in the next release of the product?"
On top of that, I'll confess I was sort of generally crabby at that point in my life, especially with respect to Steve Stone.  Instead of promoting me or one of the other lead developers to run the Champaign office, Spyglass had hired Steve from the outside.  In fact, Spyglass asked me to interview Steve, but only after the interview did they tell me I had actually been interviewing my new boss.
Anyway, I was in a generally foul mood when I misunderstood this question.  I suppose that's why I answered Steve by saying something like this:
"How the @%$* should I know if this feature has to be in the product or not?  You're new here, so let me explain how things go.  Management moved the headquarters to Chicago after years of promising that they never would.  Here in Champaign, nobody tells us anything.  We've got no marketing people except the team who spent 3 months deciding which Pantone color is the right shade of red for our company logo, which nobody ever sees because our product is an OEM component.  The only way we ever know that a feature absolutely must be in the product is when one of our Sales Guys calls up and tells us that he already promised it."
Steve was a very patient man.  I assume anybody who lived in Alabama would have to be.  :-)  He just smiled as he listened to my rant (footnote 1).
But my career with Spyglass didn't last too much longer after that.  A few months later, in a moment when I was ready to throw another tantrum, I decided to just quit instead.
And I went out on my own and founded SourceGear.  We started out doing contracting projects.  One of our first clients asked me for a Software Requirements Specification (SRS) and a Traceability Matrix.  That wasn't a very good day.
But not long after that, I learned what the word "requirement" means when used in the context of software project management.
And I learned what Steve Stone had really meant when he asked that infuriating question.  When Steve said:
"So is that a requirement?"
What he was really asking was:
"So it sounds like we just identified something that should become part of our spec.  You guys have a spec around here somewhere, right?  Who is responsible for updating that spec to capture this new item?"

What is a Requirement?

I define a requirement as "one piece of a spec".  Is that definition complete and immune to attack?  No, but I think it's the simplest definition that works.
Of course, it relies on the definition of a "spec", so let's go there.

What is a Spec?

A spec is short for "specification".  A spec is something that describes what a piece of software should do.
For the moment I am being deliberately broad and inclusive.  If you are experienced in software project management, you probably have something very specific in mind when you think of the words "spec" or "requirement".  In fact, it is possible that you are not willing to acknowledge that something is a spec unless it matches up fairly well with your image of same.  That's okay.  Just stay with me.
For now, I'm saying that anything that is a "description of what a piece of software should do" can be considered a spec.  This may include:
  • A document
  • A bunch of 3x5 note cards
  • A spreadsheet containing a list of features
I am currently involved in a project where my role is "The Walking Spec".  In other words, I am the person who mostly knows everything about how this piece of software should mostly behave.  When people need a spec, they ask me a question (footnote 2).  I'm not saying that I am a good spec, but I don't think I'm the worst spec I have ever seen, and I am certainly better than no spec at all.  :-)
Seriously, a spec needs to be in a form which is accessible to more than one person.  It needs to be written down, either in a computer or on paper.
But how?

Document or Database?

There is a constant tension over the form of a spec.  Should it be a document or a database?
I'm using the words "document" and "database" as names for the two extremes which create this tension. 
  • When a spec is more like a document, it looks like a bunch of paragraphs and prose and pictures. 
  • When a spec is more like a database, it looks like a bunch of bullets and lists and outlines.
When a spec is being written, it wants to be a document.  It's easier to describe what a piece of software should do when we can use paragraphs and prose and formatting.
Maybe this is because the primary content of a spec is usually coming from someone other than a developer.  We developers sometimes write apps for ourselves, but that's not the common case.  More often, we're writing software that somebody else wants.  We don't know how the software should behave.  They do.  In order for the software to be born, they need to express to us everything they know about what the software should do.  That expression is a spec.
And in all likelihood, that expression is more naturally going to be like a document and less like a database.  The person will want to tell stories and give examples and rationale.  They may want to include pictures or video to explain.
But right after a spec is written, a document is usually the wrong form.  It started out as a document only because that form was most convenient for the author.  But a document is not the most convenient form for the people who are reading or using the spec, and those people have the author outnumbered.  Most of those readers/users want that spec to be a database instead of a document.
They want the spec to be logically broken up into a bunch of little pieces.  Each piece should be a self-contained statement about one single detail of how the software should behave.
Breaking a spec into little pieces allows us to use that spec more effectively.  We can more easily divide the software construction tasks across a team by assigning different pieces to different people.  We can then print the pieces as a list, put boxes to the left of each one and use it as a checklist to make sure we're getting everything done.
So, let's return to the original question.  What is a "requirement"?
A requirement is a piece of a spec.  When we take a spec and put it into its more useful form by breaking it into bite-sized pieces, each of those pieces is a requirement.

Corollary

If you are in the habit of ignoring specs, you can ignore requirements in exactly the same way.  They're no different.  :-)

Writing Requirements

"Dad, where do products come from?"
"Well, son, when a company and a market segment really love each other, they..."
Every software product starts out as a gleam in the eye of some guy who wants to make money.  He sees a bunch of people who have money.  He pauses to reflect upon how much nicer life would be if that money were moved from their wallets into his own.
So, he pursues a process which involves the following two steps:
  1. Find an idea for a product
  2. Build that product
Things usually fall apart between steps 1 and 2, mostly because these two steps are done by different people.  The product is not being built by the same person who had the idea and the gleam.  Step 1 is usually somebody in marketing.  Step 2 is a team of developers.
So, in order for the developers to know what product to build and how, we need to describe it to them (via a spec) with lots of details (requirements).

Construction and Testing

With a well-written requirements spec, the development of a software project is easy.
Let's assume the project starts out with a spec that is:
  • Complete.  The spec describes everything the product needs to do.  Nothing was forgotten.
  • Stable.  The spec isn't in flux.  It's not going to change along the way.
  • List-oriented.  The spec is like a database; each item being a self-contained requirement.  All the prose has been appropriately broken up into little pieces.
This is the dream scenario for a development manager.  Translate all the requirements into a set of tasks.  Divide up all the tasks between the developers on the team.  How hard can that be?
Similarly, the testing lead has a very straightforward path with this kind of a requirements spec.  For every requirement, create one or more tests that can be used to verify that the software meets that requirement.  Automate as many of those tests as possible.  Every time the developers create a new build, run the tests and report what happened.  Easy, right?
Unfortunately, projects don't always work that way.
In fact, projects almost never work that way, because most requirements specs are badly written.

Bad Requirements

A bad requirements spec is considerably more likely than a perfect one.  Certain kinds of problems are common.
For example, let's suppose we are building a game which is designed to be played by middle school girls in a library.  The following examples show some typical problems with requirements:
Missing Requirements
Very often, the spec simply isn't complete.  Somebody forgot to include an important detail.
For example, since we know the game is supposed to be played in libraries, users will need to turn the sound down or off.  So we need the game to be playable without sound.  If we forget to mention this requirement specifically, there's a decent chance the dev team will create a game where sound is important to game play.
Unclear Requirements
Sometimes requirements are ambiguous.  Here's an unclear requirement:

  • The game must be compatible with DirectX.

Which version?  Can we use DirectX 10, thus requiring Windows Vista?  Or should we target DirectX 9 and stay compatible with Windows XP?  It's not clear.
Non-prioritized Requirements
A good requirements spec contains priority information to help the dev team make the right tradeoffs.  If some requirements are more important than others, the spec should say so.
Consider these two requirements:

  • The user must be allowed to save a game in progress and resume it later.
  • The main character in the game must resemble Dakota Fanning without looking exactly like her.

The schedule is getting tight.  Only one of these two features is going to make it.  Do you want to leave this choice entirely up to the dev team?  Or do you want to make it clear that save/load is a more important feature than making the main character resemble a certain child actress? (footnote 3)
Missing Anti-Requirements
Sometimes the problem is that the development team tries to go above and beyond the call of duty and sneak something in that wasn't part of the spec.  This can be a good thing, but it can also be a bad thing.  A good requirements spec will contain "anti-requirements", explicitly spelling out things that should not be done.  For example:

  • This game must not have a grenade launcher.

Believe me, if you leave too much latitude on a game project like this, we developers will turn it into a first person shooter.  Yes, we can see from the spec that the target customer is a 12 year old girl playing in a library.  But still, our intuition is that all games need a grenade launcher, so you're gonna get one if you don't explicitly tell us otherwise.

Changing Requirements

If a project gets all the way to completion with bad requirements, the likelihood is that the software will be disappointing.  When this happens, the resulting assignment-of-blame exercise can be fun to watch.  From a safe distance.
More often, during the project somebody notices a problem with the requirements and changes them along the way.
Marketing:       By the way, I forgot to mention that the application has to be compatible with Windows 95.
Development:     Windows 95?  You're kidding, right?  People stopped using Win95 over a decade ago!
Marketing:       Oh, and Mac OS 7.6 too.
Development:     What?  We're building this app with .NET 3.0 and we're already 40% done!
Marketing:       You're half done?  That's great!  Oh, and I forgot to mention we need compatibility with the Atari ST.
Development:     Why didn't you tell us this before we started?
Marketing:       Sorry.  I forgot.  It's no problem to change it now, right?
Changing requirements mid-project can be expensive and painful.
However, it is very rare to have a project where all the requirements are known and properly expressed before development begins.  So, it behooves us to prepare for changes.  If we choose a development process which rigidly requires a perfect spec before construction can begin, we are just setting ourselves up for pain.  We need to be a bit more agile.

Agile

I lament the loss of the word "agile".
A minute ago when I used the word "agile", most readers immediately thought I was talking about Agile software development practices such as Scrum or Extreme Programming.  That means your reaction was probably polarized toward one of the following two extremes:
  • Oh, great!  I'm five pages into this article and suddenly I find out Eric Sink is one of those Extreme Programming fanatics?  I guess that's 15 minutes of my life I'll never get back.  Sorry, I don't mind visiting once in a while like on Christmas or Easter, but I'm just not interested in having somebody tell me how to live my life.  And I don't want some Agile priest telling me that I'm not a true believer just because we don't do pair programming.
  • Oh, great!  Here's Eric Sink trying to pretend like he's a believer when everybody knows he's not.  Actually I guess I should check the Central Membership Roll just to be sure.  Nope, I was right.  He's not.  Even if he was, we would have to excommunicate him anyway.  Anybody who reads the drivel on his blog knows darn well that his doctrine is seriously screwed up.
I just want to use the word "agile" without all those connotations.  My copy of Merriam Webster's Tenth Edition says that "agile" means "marked by ready ability to move with quick easy grace".  At a high level, that's all I'm trying to say.  Sometimes requirements change.  Be ready.
In more practical terms, I'll admit that the body of wisdom literature produced by the Agile movement has some very good stuff in it.  But Agile is no different from any other major religion like Christianity or Buddhism.  You can learn some great principles and practices there, but formally becoming a member is a decision that should not be made lightly.
:-)

Traceability

I've tried to write this article at a fairly high level, focusing more on principles than practices, staying inclusive of the broad range of viable methods for getting projects done.  However, the truth is that the word "requirement" is usually associated with stricter and more formal ways of doing things.
We developers say that we don't like formality and strictness, but I think we're confused.
We don't like being told what to do.  We don't like stupid rules that don't make sense.  We don't like working for some stupid pointy-haired-boss who draws arbitrary boundaries that we're not allowed to cross.
But we spend our entire day using a compiler, and compilers are very formal and strict.  In C, if we type primtf instead of printf, the compiler will let us go no further until we stop and fix it.  In C#, if we try to use an uninitialized local variable, our compiler will scold us for stepping outside the boundaries.
Do we go out after work and gripe about our compiler?
"I am sick and tired of that stupid compiler!  When I do something right it never says a word, but if I do the slightest little thing wrong, it throws a fit.  Why does it have to nitpick about every little mistake I make?"
Nope.  Actually, we like compilers.  We like the formality and strictness.  We know having a compiler to catch our mistakes is a good thing because it allows us to go faster.  It's safe to sit down and crank out a thousand lines of code as fast as we can because we know the compiler will find a lot of the little errors that happen.
Wouldn't it be great if every phase of the software development process had a compiler?
  • I want a piece of software that tells me if I forget to implement one of the requirements.
  • When my requirements conflict with each other, my "spec compiler" should output an error.
  • When one of my requirements isn't being verified by anything in the test suite, some piece of software should tell me.
The compiler I want doesn't exist today, but there are things we can do to approximate that style of work.  For example, code coverage can be used to help verify that things are getting tested.  Automated testing can help catch bugs that slip in.
The concept which may eventually get us the compiler I want is called "traceability".  The idea is that everything should be traceable back to something else.
  • Every piece of code in the project should exist because it helps meet one or more requirements.  Traceability should allow us to ask, "Which requirement motivates this piece of code?"  If the answer is "none", then that piece of code should be excised.
  • Every requirement needs to be tested.  Traceability should allow us to ask, "Which tests verify that this requirement is being met?"  If the answer is "none", then we need to write some more tests.
Lacking my super-duper application lifecycle compiler that verifies that everything is traceable, we can keep track of some of this stuff using a traceability matrix.
    When it functions more like a compiler than a pointy-haired-boss, a little extra formality and strictness can be very helpful.
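    To make the idea concrete, here is a minimal sketch, in Python, of the kind of bookkeeping a traceability matrix captures.  The requirement IDs, module names, and test names are all hypothetical, and this is only an approximation of the "spec compiler" described above, not a real one: it simply reports requirements that nothing verifies and code that nothing motivates.

    # A minimal traceability check (sketch only; all IDs below are hypothetical).
    requirements = {"REQ-1", "REQ-2", "REQ-3"}

    # Which requirement(s) motivate each piece of code?
    code_to_reqs = {
        "save_game.py": {"REQ-1"},
        "scoreboard.py": {"REQ-2"},
        "easter_egg.py": set(),          # no requirement motivates this code
    }

    # Which requirement(s) does each test verify?
    test_to_reqs = {
        "test_save_game": {"REQ-1"},
        "test_scoreboard": {"REQ-2"},
        # nothing verifies REQ-3 yet
    }

    def check_traceability():
        """Report the two gaps a traceability matrix is meant to expose."""
        verified = set().union(*test_to_reqs.values())
        for req in sorted(requirements - verified):
            print(f"WARNING: {req} is not verified by any test")
        for module, reqs in code_to_reqs.items():
            if not reqs:
                print(f"WARNING: {module} is not motivated by any requirement")

    check_traceability()

    In a real project the two mappings would live in a spreadsheet or be extracted from annotations in the code and tests; the point is only that the check itself is mechanical, which is what makes a compiler-like tool plausible.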

    Requirements Management Software

    Naturally, we want to use software to manage our requirements.  Many folks do this with a general-purpose tool like a word processor or a spreadsheet.  That works fine.
    Some people track requirements in a bug-tracking system.  This can work, but it's not a perfect solution.  Requirements and bugs are different.  For example, requirements don't change status from Open to Closed to Verified.
    Another approach is to use something which is specifically designed to track requirements.  Application Lifecycle Management (ALM) software often contains features for managing and tracking requirements.  The ALM solutions from companies like IBM Rational, Serena and Borland are examples, but it should be noted that these solutions are very expensive and designed for large enterprise environments.
    My own company will soon be releasing an ALM solution which is designed specifically for smaller teams.  We call it SourceGear Fortress.  However, the 1.0 release will not have any features specifically designed for tracking requirements.  We do intend to include this and other features in the future as we evolve Fortress into a mature and complete ALM solution.
    Microsoft made a similar choice with Visual Studio Team System.  However, since their product is enterprise-focused, it has been criticized for not having any requirements features in the first release (footnote 4).  I suspect that this is a hole they plan to plug at some point in the future.
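    To make the bug-versus-requirement difference concrete, here is a small Python sketch with invented field names (not a description of any particular tool): a bug record is built around a status workflow, while a requirement record mostly accumulates priority and traceability links, which is why a bug tracker is an awkward fit.

    # Sketch only: the fields are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Bug:
        bug_id: str
        summary: str
        status: str = "Open"            # Open -> Closed -> Verified

    @dataclass
    class Requirement:
        req_id: str
        statement: str                  # e.g. "Pressing Escape offers to save the game."
        priority: str = "Must"          # e.g. a MoSCoW-style priority
        verified_by: list = field(default_factory=list)      # test IDs
        implemented_in: list = field(default_factory=list)   # code modules

    # A requirement is never really "Closed"; it stays in force, and at any
    # moment it is either traceable to passing tests or it is not.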

    Additional Reading

    This short article barely scratches the surface of a very complex topic.  For additional information, I recommend the book Software Requirements by Karl E. Wiegers.

    Footnotes

    (1)           I have no hard feelings toward my old boss at Spyglass.  I lost touch with Steve Stone, but I understand he later left the company and joined Microsoft.  A little searching with Google reveals that he is currently the CEO of a startup company called InfoFlows.  Steve, if you are reading this article, best regards.
    (2)           Rest assured that this project is not one of SourceGear's products.  It's a revision to one of our internal systems.
    (3)           Hypothetically, the reason this save feature might be so important is to ensure that when the hypothetical father of the hypothetical middle school girl arrives at the hypothetical library to pick her up, she can save her game and go promptly so her Dad doesn't have to wait.  Hypothetically.
    (4)           Third-party products are available to add requirements management features to VSTS.


    http://www.ericsink.com/articles/Requirements.html

    ISTQB Tester Certifications

    Benefits for Individuals
    ISTQB® Certified Testers:
    • Gain independently assessed knowledge and skills.
    • Increase their marketability across the industry.
    • Have greater career opportunities and potentially higher earnings.
    • May add the "ISTQB® Certified Tester" logo and credential to their résumé.
    • Are recognized for having subscribed to a Code of Ethics.
    Benefits for Employers
    Employers of ISTQB® Certified Testers can gain many benefits, including the following:
    • Having certified staff can be a competitive advantage for organizations, which can benefit from adopting more structured testing practices and optimizing their testing activities based on ISTQB® competencies.
    • For consulting organizations, certified staff can offer higher-level services to clients, increasing revenue and brand value.
    • Adopting the ISTQB® certification scheme can help an organization recruit and retain highly qualified staff and keep up with innovations in testing.
    • Formal recognition of organizations that have adopted the ISTQB® certification scheme will be available in the future (Partner Program).
    Benefits for Training Providers
    ISTQB® training providers can:
    • Access an international market that recognizes ISTQB®.
    • Distinguish themselves through the independently assessed professionalism of their instructors and the quality and coverage of their training materials.
    • Offer their clients the most up-to-date testing knowledge.
    • Provide their clients with a continually expanding professional development path in the testing field.
    • Participate in early reviews of ISTQB® syllabi and in other activities organized by ISTQB®.
    • Use the ISTQB® "Accredited Training Provider" logo and credentials in their marketing materials.

    Certification Objectives

    The courses lead to certification at two levels: Foundation and Advanced.
    Foundation Certification
    The prerequisites for this level are as follows:
    • Understanding of the concepts of general application types
    • No experience required
    • No prior certification required
    The objectives are:
    • Ensure a thorough understanding of the key, fundamental concepts of software testing
    • Provide a foundation for professional growth
    • Ensure that committed testing professionals understand the fundamental concepts of software testing

    Syllabus / Knowledge Covered

    Testing fundamentals, test management, testing tools, test planning approaches, and basic performance testing.

    Training courses based on this syllabus typically last 3 to 5 days.

    Advanced Certification

    The prerequisites for this level are as follows:
    • Understanding of specific application concepts
    • 18 months of experience plus a degree corresponding to 4 years of study

    Certification Types

    • Technical Test Analyst
    • Test Analyst
    • Test Manager

    Objectives

    • Ensure a solid understanding and application of advanced techniques by expert testing professionals
    • Promote professional growth
    • Advance the software testing profession

    Syllabus / Knowledge Covered

    Advanced structural and functional testing techniques for testers and programmers, sophisticated test management concepts, test process improvement, test automation, advanced performance testing, usability testing, and other types of non-functional testing.

    Training courses based on this syllabus last between 15 and 20 days.

    Exams

    Exams are taken at the premises of the certification course provider. The Foundation Level exam lasts 90 minutes. The Advanced Level exam lasts 270 minutes in total: 90 minutes for each of its three parts, with a 30-minute break between parts. Once the exam has been graded, the result is sent by email; a candidate who does not pass must register again for another attempt. Exam registration is handled through the course providers, normally after completing the corresponding course.