Wednesday, July 31

Testing Jokes - Part 3

  Part 1    Part 2    Part 3    Part 4
The Glass
  • To an optimist, the glass is half full.
  • To a pessimist, the glass is half empty.
  • To a good tester, the glass is twice as big as it needs to be.
Testing Definition
To tell somebody that he is wrong is called criticism. To do so officially is called testing.
Sign On Testers’ Doors
Do not disturb. Already disturbed!
Words
Developer: There is no I in TEAM
Tester: We cannot spell BUGS without U

Experience Counts
There was a software tester who had an exceptional gift for finding bugs. After serving his company for many years, he happily retired. Several years later, the company contacted him regarding a bug in a multi-million-dollar application which no one in the company was able to reproduce. They tried for many days to replicate the bug but without success.
In desperation, they called on the retired software tester and after much persuasion he reluctantly took the challenge.
He came to the company and started studying the application. Within an hour, he provided the exact steps to reproduce the problem and left. The bug was then fixed.
Later, the company received a bill for $50,000 from the software tester for his services. The company was stunned by the exorbitant bill for such a short duration of service and demanded an itemized accounting of his charges.
The software tester responded with the itemization:
  • Bug Report: $1
  • Knowing where to look: $49,999

Sandwich
Two software testers went into a diner and ordered two drinks. Then they produced sandwiches from their briefcases and started to eat. The owner became quite concerned and marched over and told them, “You cannot eat your own sandwiches in here!”
The testers looked at each other, shrugged their shoulders and then exchanged sandwiches.

Source: http://softwaretestingfundamentals.com/software-testing-jokes/

Monday, July 29

Why is testing necessary?

Chapter 1. Testing Fundamentals > 1. Why is testing necessary?
The economic importance of software
  • Today, across many different industries, the operation of machinery and equipment depends heavily on software.
  • It is impossible to imagine large systems, in any industry, running without software.
Software quality
  • Software quality has increasingly become a determining factor in the success of technical and commercial systems and products.
Testing to improve quality
  • Testing and reviews ensure the improvement of the quality of software products, as well as of the development process itself.
Details
In a software development project, errors (bugs) can appear at any stage of the software life cycle.
Even when we try to detect them after each phase using techniques such as inspection, some errors remain undiscovered.
Therefore, the final code is very likely to contain requirements and design errors in addition to those introduced during coding.
Software testing is an important but very costly part of the software development process.
It can account for between 30 and 50% of the total cost of software development [Myers, 2004].
However, the cost of failures in operational software can be much higher (even catastrophic).
Some of the worst bugs in history:
The Los Angeles airport collapses (2007)
More than 17,000 people were stranded by a software problem: a fault in a network card brought down the entire computer network.
The commercial launch and production of the Airbus A380 is delayed by more than a year (2006)
Differences between versions of the CAD (Computer Aided Design) tools used in the Hamburg and Toulouse factories caused a problem with the wiring (530 km of cables).
Radiation overdoses at the National Cancer Institute of Panama (2000)
Errors in procedures and a software failure caused incorrect radiation doses to be delivered; 8 people died and 20 suffered serious health problems.
The doctors responsible were charged with murder.
As you can see, software testing plays a very important role in quality assurance, since it makes it possible to detect the errors introduced in the earlier phases of the project.
Basic concepts
The most common way of organizing the activities of the software testing process [Burnstein, 2003] is:
-Planning: sets the goals and an overall testing strategy
-Preparation: describes the general testing procedure and generates the specific test cases
-Execution: includes observing and measuring the behavior of the product
-Analysis: includes verifying and analyzing the results to determine whether failures were observed
-Follow-up: if failures were detected, monitoring begins to ensure that their root cause is removed
Test cases and test criteria
-Generating effective test cases that reveal the presence of failures is fundamental to the success of the testing process (preparation stage)
-Ideally, we would determine a set of test cases whose successful execution implies that there are no errors in the developed software
-In practice, this ideal goal cannot be achieved, due to practical and theoretical limitations
-Each test case costs money: effort to generate it, computing time to execute it, and effort to evaluate the results
-Therefore, the number of test cases needed to detect the errors should be minimized in order to reduce costs
Goal of the testing process
-The testing process has two main goals:
–Maximize the number of errors detected (coverage)
–Minimize the number of test cases (cost)
-Since these goals frequently conflict, selecting the set of test cases with which a program should be tested becomes a very complex task.
Test levels
-Testing generally starts with the smallest parts and moves on to the larger ones
-For conventional software
–Modules (components) are tested first
–Module integration is tested next
-For object-oriented software
–Classes are tested first (attributes, methods, collaboration)
Unit Testing
-Focuses on testing each component individually, to ensure that it works properly as a unit
-Uses techniques that exercise specific paths through a component's control structure (structural testing)
-Tools
–JUnit
–TestNG (an improved take on JUnit)
–PHPUnit
–CPPUnit
–NUnit (.NET)
–MOQ (dynamic creation of mock objects)
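
To make the idea concrete, here is a minimal sketch using Python's built-in unittest module, which plays the same role as the JUnit-style tools listed above. The discount_price function is a made-up unit under test, not something from the original material.

```python
import unittest

def discount_price(price, percent):
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountPriceTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(discount_price(100.0, 25), 75.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(80.0, 150)

if __name__ == "__main__":
    unittest.main()
```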
Integration Testing
-Integration testing has two main goals:
–Discover errors associated with module interfaces
–Systematically assemble the individual modules into subsystems and, finally, a complete system
-It mainly uses techniques that verify the correct handling of the software's inputs and outputs (functional testing)
-Techniques that exercise specific paths through the software's control structure (structural testing) can also be used
-There are two main approaches to (incremental) integration testing:
–Top-down integration (functional components)
–Bottom-up integration (infrastructure components, e.g. DB access)
Integration Testing, top-down approach
-The main module is used as a driver, and all of its subordinate modules are replaced by simulated modules (stubs)
-The stubs are replaced with the real components one at a time (depth-first), testing as you go
Integration Testing, bottom-up approach
-Unlike the top-down approach, this one starts building and testing with the modules at the lowest levels of the program structure
-No simulated modules (stubs) are required
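
As a small illustration of the top-down idea, the sketch below tests a high-level module while its subordinate (say, database access) is replaced by a stub built with Python's unittest.mock. ReportService and its repository are invented for the example.

```python
from unittest.mock import Mock

class ReportService:
    """High-level module under test; depends on a subordinate repository."""
    def __init__(self, repository):
        self.repository = repository

    def total_sales(self, month):
        return sum(sale["amount"] for sale in self.repository.sales_for(month))

# The subordinate module (e.g. DB access) is replaced by a stub, so the
# high-level logic can be integrated and tested before the real module exists.
stub_repository = Mock()
stub_repository.sales_for.return_value = [{"amount": 100}, {"amount": 250}]

service = ReportService(stub_repository)
assert service.total_sales("2013-07") == 350
stub_repository.sales_for.assert_called_once_with("2013-07")
print("top-down integration check passed")
```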
High-level testing
-Validation testing focuses on the requirements
–Acceptance tests: developed by the customer
–Alpha tests: performed by the user, with the developer as an observer
–Beta tests: performed by the user in their own working environment (no observers)
-System testing focuses on the integration of the system (hardware, information, people)
–Recovery testing forces the software to fail in different ways and verifies that it recovers properly
–Security testing verifies that the protection mechanisms built into the system actually prevent improper intrusion
–Stress testing runs the system in ways that demand resources in abnormal quantities, frequencies, or volumes
–Performance testing checks the software's run-time performance within the context of an integrated system
Test methods
-There are two basic methods for designing test cases:
–White-box (structural)
–Black-box (functional)
Test methods: white-box
-Verify the correct implementation of the internal units and structures and the relationships between them
-Emphasize the reduction of internal errors
-White-box (structural) methods derive test cases that
–Guarantee that every independent path within the module is exercised at least once
–Exercise the true and false sides of every logical decision
–Execute every loop within, and at, its operational bounds
–Exercise the internal data structures to ensure their validity
-Some examples of white-box techniques
–Basis path testing
—Cyclomatic complexity
–Control structure testing
—Condition testing
—Data flow testing
—Simple loop testing
—Nested loop testing
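
As a toy illustration of basis path testing, the function below (invented for this example) has a chain of decisions; one test drives each decision true while the others take their false side, exercising a set of independent paths.

```python
def classify_triangle(a, b, c):
    """Toy unit under test with a chain of decisions to cover."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# One test per path: each decision is taken true once, plus the all-false path.
assert classify_triangle(0, 1, 1) == "invalid"      # first decision true
assert classify_triangle(2, 2, 2) == "equilateral"  # second decision true
assert classify_triangle(2, 2, 3) == "isosceles"    # third decision true
assert classify_triangle(2, 3, 4) == "scalene"      # every decision false
print("basis-path checks passed")
```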
Test methods: black-box
-Verify the correct handling of the external functions provided or supported by the software
-Verify that the observed behavior conforms to the product specifications and to the user's expectations
-Test cases are built from the system specifications
-Black-box (functional) methods derive test cases that look for the following kinds of errors:
–Incorrect or missing functions
–Interface errors
–Errors in data structures or in access to external DBs
–Behavior or performance errors
–Initialization or termination errors
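
A classic black-box tactic derived from a specification is equivalence partitioning with boundary values. Here is a minimal sketch against an invented spec ("registration is allowed for ages 18 to 65 inclusive"); the cases probe each partition and the boundaries where off-by-one defects cluster.

```python
def can_register(age):
    """Hypothetical spec: registration is allowed for ages 18 to 65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitions: below range, in range, above range.
# Boundary values: 17/18 and 65/66.
cases = [(17, False), (18, True), (40, True), (65, True), (66, False)]
for age, expected in cases:
    assert can_register(age) == expected, f"failed for age={age}"
print("all black-box cases passed")
```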
Test methods: exhaustive testing
-The most obvious black-box testing procedure is exhaustive testing
-To exhaustively test a system with 5 components (parameters), each with 2 values, you must execute 2^5 = 32 different configurations (test cases)
-For a system as simple as our example, an exhaustive test is feasible
-However, in general this approach is impractical and infeasible, because the number of test cases grows very quickly
-For example, exhaustively testing a system with 5 parameters, each with 10 values, requires executing 10^5 = 100,000 test cases
-If one test case is executed and evaluated per minute, that would take 69.44 days
-Are there other alternatives?
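
The arithmetic above is easy to reproduce; this short sketch enumerates the exhaustive configuration space with itertools and converts the one-case-per-minute assumption into days.

```python
import itertools

# Hypothetical system configuration: 5 parameters with 10 values each.
domains = [range(10)] * 5

exhaustive = list(itertools.product(*domains))
print(len(exhaustive))            # 10**5 = 100000 test cases
print(len(exhaustive) / 60 / 24)  # ~69.44 days at one test case per minute
```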
Test methods: combinatorial strategies, interaction testing
-Yes: interaction testing (see [Grindal et al., 2005])
-This approach identifies failures that arise from the interactions of t (or fewer) input parameters
-To do so, it creates test cases that include every t-way combination of these parameters and their values at least once
-Covering arrays (CAs) are combinatorial objects used to represent interaction test suites [Hartman, 2005]
-They make it possible to maximize the number of errors detected while minimizing the number of test cases [Kuhn et al., 2004]
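
To see why this pays off, here is a sketch (my own, not from the cited papers) that checks pairwise (t = 2) coverage: for 3 boolean parameters, exhaustive testing needs 2^3 = 8 cases, but the 4 rows below already cover every pair of values of every two parameters.

```python
from itertools import combinations, product

def covers_all_pairs(suite, domains):
    """Check that every value pair of every two parameters appears in the suite."""
    for i, j in combinations(range(len(domains)), 2):
        needed = set(product(domains[i], domains[j]))
        seen = {(case[i], case[j]) for case in suite}
        if needed - seen:
            return False
    return True

# A covering array with 4 rows instead of 8 exhaustive cases.
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(covers_all_pairs(suite, [(0, 1)] * 3))  # True
```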
Sources:
-ISTQB FL
-The importance of software testing
http://www.tamps.cinvestav.mx/~ertello/swe/swTestingTecZacatecas.pdf

http://testingbaires.com/por-que-son-necesarias-las-pruebas/

Flowchart Exercises - Part 2

Flowchart exercises on number sequences.

The other exercises: Part 1   Part 2   Part 3   Part 4   Part 5   Part 6   Example 1   Example 2   Example 3   Example 4   Example 5

Read a number and print its absolute value. Link


Print a count from 0 to 100. Link


Flowchart that produces and prints the sequence 0, 1, 10, 2, 9, 3, 8 ... 0,10. Link


Flowchart that counts from 1 to 100, and from 100 down to 1. Link


Flowchart that produces and prints the sequence 0, -2, 3, -4, 5, -6 ... to infinity. Link


Flowchart that produces and prints the sequence 0, 1, 4, 3, 16, 5, 36 ... to infinity. Link


Flowchart that produces the Fibonacci sequence. Link


Flowchart that computes the factorial of a number X. Link
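
For readers who want to check their flowcharts, here is an illustrative Python equivalent of the last two exercises (the function names are my own):

```python
def fibonacci(n):
    """Print the first n terms of the Fibonacci sequence."""
    a, b = 0, 1
    for _ in range(n):
        print(a, end=" ")
        a, b = b, a + b
    print()

def factorial(x):
    """Return the factorial of x, accumulating a product as the flowchart loops."""
    result = 1
    for i in range(2, x + 1):
        result *= i
    return result

fibonacci(10)        # 0 1 1 2 3 5 8 13 21 34
print(factorial(5))  # 120
```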


Source: http://mis-algoritmos.com/ejemplos/diagramas-flujo.html

Sanity, Regression, Smoke etc… Do we need all these testing types?

This topic may seem simple, but I've been asked about it a couple of times in the last month, and I also saw it in multiple questions on QA Forums, SoftwareTestingClub, and other places that I visit once in a while.
Let's start by admitting we have a problem in the Testing Community. Not only do we call the same process by multiple names, but sometimes some of us also use the same name for different processes. This is starting to sound like the Bible's story of the Tower of Babel, but it is a fact, and maybe part of the reason for the confusion.
So instead of trying to propose a Definitive Naming Convention, I will present mine, and I hope it will make sense (pay attention to the content and, if necessary, disregard the names!).
Since this can become a very long post, I will condense it by naming each Test Type and then answering 3 simple questions:
- When (do we perform it)?
- What (is the content of the test)?
- Why (is it important)?
1. Development Integration Testing
WHEN? Before each build is delivered to QA.
WHAT? End-to-end scenarios covering the main path of the application. It should not take more than 30 minutes to an hour.
WHY? Ensure that development does not "throw their builds over the wall" without making sure they meet the minimal stability requirements.
2. Build Acceptance Test
WHEN? The minute QA receives a build from development, and before the whole team starts deploying it on their testing environments.
WHAT? An intensive one-to-two-hour test suite that ensures the build is good enough for the whole team to commit to it. This is a great candidate for automation (and if automated, it can run each day as part of a nightly build system).
WHY? Even after developers perform their Integration Tests, the build is not always ready for QA to start testing. We need to make sure there are no blockers or other stability issues that would prevent the team from executing its test plan.
3. New Feature Testing
WHEN? On each build that includes new features or functionality.
WHAT? Test the new functionality as deep and wide as you can; go over all the scenarios and the functionality you can think of.
WHY? The trivial answer: new features come with a high risk of containing bugs and thus need to be thoroughly tested.
4. Bug Verification Testing
WHEN? I do it on each new build, together with or immediately after the New Feature Testing. Some organizations do it at the end of the project, but I think this is risky.
WHAT? For each bug that was fixed (or, in some cases, a family of bugs), you need to test the reported reproduction scenario. Then, using the information the developer provided about the root cause of the defect and the fix he implemented, you should test additional scenarios that may still contain bugs, or new bugs introduced as a side effect of the fix itself.
WHY? Similar to New Feature Testing: wherever people touch the code (even to fix a bug), they can introduce additional defects.
5. Regression Testing
WHEN? Once the new features of the product reach a minimal level of stability and you want to start testing and looking for bugs on other less-trivial areas.
WHAT? This is the most complex suite to design. On the one hand, it cannot be too extensive, since you never have enough time to run all your tests; on the other hand, it should cover all or most of the application and provide a good indication that the AUT, or the specific area being tested, is CLEAN and stable.
WHY? Software development is very risky; changes in one place can have unexpected negative effects in other areas, and it is our job to lower the risk of these bugs being released to the field.
6. Sanity Testing
WHEN? When you need to check the application at a high level and don't want to or cannot run a Regression Test (e.g. after all tests are done and you want to check the final CD that was burned and sent to mass production).
WHAT? Similar to Regression Testing in that you choose specific scenarios that cover the whole AUT or application area, but shorter and based on the highest risk factors.
WHY? Same as Regression Testing, but here you are required to make a risk judgment call and compromise on part of your testing.
7. User Acceptance Test
WHEN? When the user receives the final product in his environment; although sometimes, since the testing team already has these scenarios up front, they can be run before the release of the product.
WHAT? What the user decides to test. Usually End-to-End scenarios for their most important features and functionality.
WHY? Because they want to make sure the product works.
There is one additional test type I want to mention that does not really fit in the group of tests I already described, but since it is very useful I will add it anyway.

8. Smoke Test

WHEN? When you don't have time and need a fast analysis of the application in order to understand whether you should run more tests on a specific area.
WHAT? Even shorter than the Sanity Test, the smoke suite includes only quick scenarios that probe the major areas of the product. The idea is to look for smoke signaling there is a FIRE hiding below the surface.
WHY? You either don't have time, or you don't know where to start and need a quick test to serve as a very high-level indicator for your version or product.
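
In practice, one convenient way to keep these suites side by side is to tag tests and select them at run time. A minimal sketch, assuming pytest (the marker names are my own and would be registered in pytest.ini):

```python
# test_suites.py -- run e.g. `pytest -m smoke` or `pytest -m regression`.
import pytest

@pytest.mark.smoke
def test_application_responds_at_all():
    assert 1 + 1 == 2  # stand-in for "the app starts and answers"

@pytest.mark.sanity
def test_main_user_flow_is_reachable():
    assert "checkout" in "cart -> checkout -> payment"

@pytest.mark.regression
def test_old_rounding_bug_stays_fixed():
    assert round(2.675, 2) == 2.67  # float representation edge case
```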
I am sure I missed some test types, and I will be happy to include them in my personal list if you care to comment on them, but these are my main and most important test types.
At the end of the day, these types only serve as a starting template; you will need to create custom test suites that suit the needs of your product and project.
Good Luck & Have Fun!
qablog.practitest.com/2009/01/sanity-regression-smoke-etc-do-we-need-all-these-testing-types/

Thursday, July 25

You need to become a Project Trusted Adviser!

I was taking part in a discussion started by Rob Lambert on the SoftwareTestingClub, where he talked about "hitting a brick wall." By this he meant the all-too-familiar feeling that, regardless of how much you raise your voice and warn all the project stakeholders about the imminent danger of following a certain path, they decide against your advice and follow it anyway.
Who hasn’t gone through this before?
I’ve certainly been in the same situation enough times myself, and over the years I’ve learned some tricks and developed a couple of techniques that help me avoid such situations, or at least reduce their number significantly.
1.  I do my homework before the meetings
I always try to talk about the important issues with the project stakeholders one-on-one before the actual meeting where the decision will take place.  This way I can spend the necessary time to thoroughly explain the issues and go over the repercussions they may have for each person and team individually; this also gives them time to think about the issue quietly and ask additional questions if necessary.
If possible, I also try to understand the position of each person I meet, so I know where I stand before going into the meeting itself.
2.  I make it my objective to become the Trusted Adviser of my project Heavy-Weights
This one is easier said than done, but it is the most effective method.
In principle it means that you need to create a reputation for yourself that will make people listen whenever you speak your mind.
How to do this is a science by itself, and it’s definitely not something that happens overnight.
Personally, I try to achieve this using the principles of Testing Intelligence: providing correct and timely visibility into the product and process, which helps my Organization's Stakeholders make the correct tactical and strategic decisions.
Another good approach is to map your project heavy-weights (the stakeholders who usually carry the most weight in the important decisions) and start by becoming their trusted adviser; as soon as the other stakeholders realize these guys are listening to you, they will start doing so themselves.
3. I'm open to changing my mind when I understand new things about the problem at hand
Don't be hard-headed; learn to change your mind. This is why I make a point of really listening to the other side when I'm arguing.  One of the dumbest mistakes you can make is to keep fighting for an argument even when everyone realizes you are not right.
The most common case is when someone presents a new argument you were not aware of (e.g. customer pressure, market reality, etc.) and you need to understand that the good of the Company and the Product requires the team to take the path you did not initially support.
Smart people need to be smart enough to know when to change their minds…
4. I learned to pick my fights
Since I cannot fight over every decision, I make sure to choose which things are worth fighting for and which battles I need to concede in order to concentrate on the important stuff.
5.  Learn how to communicate
Maybe the most important, and yet one of the hardest things to achieve.  We all need to learn how to transmit our messages clearly to the other side.
Just to show a good example of how to do this I will point to another article by Rob Lambert that illustrates one of these techniques (and no, I am not getting any royalties from Rob today…).
To summarize, there are many things you can do in order to confront and even succeed in these kinds of situations, and all of them start by recognizing that you have other human beings in front of you.
Try to think about what you can do to help them understand your argument and your point of view; in the end, you will have no choice but to let them make their own decisions.

http://qablog.practitest.com/2009/04/you-need-to-become-a-project-trusted-adviser/

Monday, July 22

Failures/bugs that caused major damage in the last 5 years

In this post I am going to show you a list of bugs that have affected the lives of thousands of people and caused losses in the millions over the last 5 years (just to keep the list short ;)).
This list aims to raise awareness of the importance Software Testing has today: while it cannot guarantee that failures will not occur, it can minimize the probability of occurrence of the failures that pose the greatest risk to the correct functioning of the software:

2010
  • A smartphone application for online banking was reported in July 2010 to have a security bug that affected more than 100,000 customers. Users had to upgrade to a new version of the software that would fix the problem.
  • In July 2010, a major phone manufacturer reported that its software contained a long-standing error that resulted in incorrect signal-strength indicators on the phone's interface. Customers had reportedly been complaining about the problem for several years. The company provided a fix for the problem several weeks later.
2009
  • It was reported in August 2009 that a large suburban school district introduced a new computer system that was "plagued with bugs" and resulted in many students starting the school year without schedules or with incorrect ones. Upset students and parents started a social networking site to share their complaints.
  • In February 2009, users of a major search site were prevented from clicking through to the sites listed in the search results on a certain date. This was apparently due to software that did not handle an error effectively. Instead of being able to click through to the listed sites, users were redirected to an intermediate site which, as a result of the enormous load on it, was unusable.
2008
  • Software problems in the automated baggage sorting system of a major airport in February 2008 prevented thousands of passengers from checking their baggage for their flights. It was reported that the breakdown occurred during a software upgrade, despite pre-testing of the software. The system continued to have problems in the following months.
  • In August 2008 it was reported that more than 600 US airline flights were significantly delayed due to a software failure in the US air traffic control system. The problem was attributed to a "packet switch" that "failed due to a database mismatch", and it occurred in the part of the system that handles flight plans.
2007
  • In November 2007, a regional government announced a multi-million-dollar lawsuit against a software services vendor over the vendor's "poor quality" delivery of software for a large criminal justice information system, since the system did not meet the requirements. The vendor in turn sued its subcontractor on the software project.
  • In June 2007, news reports revealed that a software failure in a popular online stock-picking contest could be exploited to gain an unfair advantage in pursuit of the contest's prizes, which were large sums of money. Outside investigators were called in, and in July the contest winner was announced. Apparently the winner had previously been in 6th place, indicating that the top 5 contestants may have been disqualified.
2006
  • In September 2006, a report indicated problems with the software used in a US state government's primary elections, resulting in unexpected periodic restarts of the voter registration machines, which were separate from the electronic voting machines, and leading to major confusion and delays at polling places. The problem was reportedly due to insufficient testing.
  • A reported software error resulted in overbilling of up to several thousand dollars to each of 11,000 customers of a major telecommunications company in June 2006. It was reported that the software failure was fixed within days, but that correcting the billing errors would take much longer.
http://josepablosarco.wordpress.com/2010/12/29/fallasbugs-que-causaron-grandes-danos-en-los-ultimos-5-anos/

Thursday, July 18

Helpful Hints for Interviewing Experienced QA/Testing Candidates

Summary:
This article introduces suggested questions that can be presented to a candidate interviewing for a QA/testing position. The suggested questions will help a test manager assess a candidate's knowledge of QA concepts and technical skills. The test manager can create a sample set of questions from this article, which will help form a framework for efficiently interviewing future candidates.

Test managers often hold brief phone interviews to screen candidates for QA/testing positions. With resume in hand (which the test manager has probably not even reviewed before the interview commences), the test manager is expected to determine within an hour or less whether the candidate would be a good fit for the project. Trying to evaluate a candidate's ability to perform the project's testing and QA tasks in such a short period of time can be an inexact science. It can also be a subjective undertaking if the test manager fails to ask the candidate the appropriate questions. Even relying on a resume to determine the candidate's aptitude for a position is unreliable, since many candidates embellish their resumes.
A recommended approach is to draft, before the interview begins, a specific list of questions pertinent to the position that needs to be filled. These questions should be based on the candidate's QA accomplishments and skill sets as documented in the resume. Below are some sample questions that can help demonstrate a candidate's experience in the areas of quality assurance, as well as their creativity and ability to comprehend basic testing concepts. These questions can serve as criteria to screen out inexperienced candidates during the interviewing process.
Describing Testing/QA Terms
A candidate with several years of testing experience will exhibit knowledge and understanding of well-established testing principles and terminology. It behooves the test manager to ask the candidate to describe these concepts. Some suitable examples would be:
What is the objective of a peer review?
What is the Unified Modeling Language?
What are the components of a test plan?
What are the benefits of automated testing?
What are the benefits of Configuration Management?
What are the characteristics of a good test requirement?
What is a requirements traceability matrix, and why is it necessary?
What is the criterion for composing a test readiness review list (TRR)?
Provide descriptions of testing approaches (e.g. white box versus black box).
The test manager needs to ascertain whether the candidate is familiar with the industry-accepted terminology commonly used within the project for which the tester is being considered.
Thinking On Their Feet
In addition to understanding the testing requirements, a good testing candidate should show creativity and ingenuity when testing a software application. A tester should always be alert to potential scenarios that could cause an application to fail or yield defects and/or errors, even if such scenarios are not documented or presented in the requirements. A thorough tester executes a particular test scenario with different sets of data and conducts boundary testing to ensure that an application is not deployed into production with overlooked problems.
A suggested way to discern a candidate's testing meticulousness is to have the candidate provide use cases and test cases for a commonly used machine or a self-service application. For example: what test cases and use cases can the candidate think of for operating a beverage dispensing machine, or for purchasing books via a website? The candidate should be able to generate an extensive list of test cases and use cases for these two examples.
Development of Test Scenarios and Test Scripts
A well-written test scenario has information about: pre-conditions, post-conditions, traceability to a requirement, description, identification of authorship, a peer review and sign-off section, roles to be tested, etc. A test script or test procedure, on the other hand, has detailed test steps with valid data values and expected results for each test step. In addition, a test script provides information about the test execution results ("pass/fail") and the mapped requirements for each test step.
The test manager can ask the tester how test cases and test scripts were documented at a previous project. What level of detail was presented in the test cases and test scripts? What exactly was documented in them? Another suggestion is to have the tester send, for review, a sample test case and test script that he/she documented at a previous project. A well-documented test case and test script will demonstrate the tester's attention to detail.
Life Cycle Methodologies
An experienced tester should have experience working with one or more IT methodologies such as waterfall, spiral, evolutionary, incremental, rapid prototyping, etc. Some methodologies are more appropriate when requirements are well known/defined, others when requirements are not well known, or when the project has high risks, etc. The tester vying for the position should understand the differences between the main software development lifecycle methodologies. The test manager can present the tester with different hypothetical IT project scenarios and ask which methodology would be most fitting for each.
Test Procedures and Test Standards
Which testing standards and procedures the candidate has been exposed to is of paramount importance in determining whether the candidate would be a good fit for, or could adapt to, a new testing environment. Did the candidate come from a regimented, disciplined work environment, like a CMM environment with repeatable and defined processes? Or did the candidate work in a chaotic test team with no standards, procedures, or defined processes?
Testers who come from a loose testing environment sometimes have difficulty adjusting to regimented testing environments that have defined processes and standards for things such as: lessons learned, test plans, naming standards, version control, test case templates, test execution matrices, test logs, test folders, and reporting of test results. Conversely, a tester who follows strict testing standards and procedures may struggle in a work environment that has no defined processes, no well-documented scenarios, no structure for test cases, and no templates for test scripts. Questions that identify the candidate's experience with testing standards and procedures are critical for assessing how well the candidate would adjust to the current QA environment.
Defects
It's advisable to learn what sorts of defects a tester has identified and reported in previous projects. Based on the candidate's answers, the test manager can learn whether the tester focuses on cosmetic/minor defects or on show-stopper defects that would have caused havoc in a released or production-deployed application.
The test manager can have the candidate expound on a significant defect that was discovered, and on what the impact of the defect would have been had the candidate not caught it. In particular, the test manager can focus on the tester's approach to identifying the defect and how the tester re-tested it, subsequently leading to the closing of the defect.
Test Script Automation
Many candidates list in their resumes that they have experience with various automated testing tools. However, when confronted with technical questions about that experience, many candidates reveal that it is limited to capture and playback. Candidates who have only recorded and played back test scripts are, in fact, devoid of significant experience with test automation tools. Even candidates who claim to hold test tool certifications should have their background probed and examined with technical questions.
The test manager should find out whether the candidate understands what a data-driven test script is, how to create a parameterized test script, and why it would be necessary to construct a data-driven script. The test manager should also ask the candidate to explain what data correlation is, how to create a driver script, how to synchronize scripts, why it is necessary to synchronize scripts, and how to report the execution results from a test script. Other key questions would cover techniques for identifying and verifying recorded objects (including examples of object attributes that can be verified), how to create checkpoints, how to debug an automated test script, and in what modes a recorded script can be replayed. The candidate should also know how to map and learn custom objects that the recording test tool does not identify or recognize.
The test manager can present more technical questions based on the nuances of the recording test tool present at the project site. The main objective of the technical questions is to ensure that a candidate has experience that goes beyond merely recording and playing back scripts.
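
To anchor the "data-driven" and "parameterized" vocabulary, here is a small, tool-agnostic sketch in Python: the script is written once and replayed over a table of inputs and expected results. The login_allowed function and its data are invented for the example.

```python
import unittest

def login_allowed(username, password):
    """Hypothetical unit under test, standing in for a recorded login script."""
    return username == "admin" and password == "s3cret"

# The data table that drives the script; in a real tool this could be a CSV.
LOGIN_DATA = [
    ("admin", "s3cret", True),
    ("admin", "wrong", False),
    ("guest", "s3cret", False),
    ("", "", False),
]

class LoginDataDrivenTest(unittest.TestCase):
    def test_login_table(self):
        # One parameterized script body, executed once per data row.
        for username, password, expected in LOGIN_DATA:
            with self.subTest(username=username, password=password):
                self.assertEqual(login_allowed(username, password), expected)

if __name__ == "__main__":
    unittest.main()
```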

http://www.stickyminds.com/article/helpful-hints-interviewing-experienced-qatesting-candidates

Monday, July 15

Debate – What does it take to set up our own Testing Area?

Summary

A debate started in the TESTING & QA discussion group, a community of testers on LinkedIn, to discuss the main keys and obstacles when implementing a testing area. Debate title: What does it take to set up our own Testing Area?

Implementation, testing area implementation

Its contents are the following:
  • Introduction
  • Defining the area
  • HR
  • Training courses
  • Tools
  • Test evidence documentation

Introduction
The goal of this article is to point out the main keys and obstacles we will run into when building a Testing Area or Department.
We all know that in the whirlwind of our working lives we get very few opportunities to slow down, set aside our daily workload, and take some time to think about medium- or long-term goals.
That is why this article draws on the experience of the IT management of a mid-sized Argentine company, let's call it an SME, where one day the decision was made to create a specific area dedicated to the evaluation and testing of information systems.
There were many questions at the initial stage: whether it was justified, whether it would work or not, etc.
Fortunately, none of these hasty preconceptions came close to reality: thanks to the creation of this area, many issues that had been generating conflicts in the other work sections of this management came to an end.
For example: when programmers finished their developments, they would test those systems themselves, and these were the only tests run before going to production.
In short, forming an area dedicated exclusively to the thorough testing of systems does not just mean hiring more staff and/or merely satisfying the auditors; it means giving more importance to the quality of the systems, improving communication and coordination among the members of a team, and making testing present throughout the software life cycle, not only in the last stage.
To continue, we will highlight the main tips or keys to keep in mind when creating a Testing Area.
Defining the Area
Creating something new within an organization always brings challenges and, why not, conflicts.
For that, it is advisable to define the area's policies, functions, and roles.
These functions will let the staff know exactly what their main duties and obligations are, thereby minimizing possible friction or conflicts with the other work areas.
Sample functions
Functions of the Testing Area, by role (Test Lead, Senior Tester, Junior Tester):
Test Lead:
  • Test estimation and planning
  • Tracking and reporting test progress
  • Managing the work team
  • Evaluating requirements
  • Managing environment requests
  • Performing analysis and recording estimates
  • Defining the test plan
  • Defining strategies
  • Preparing periodic progress reports for IT management
  • Tracking the test plan
  • Executing the test plan
  • Coordinating the tests
  • Re-planning tasks and tests when the situation requires it
Senior and Junior Testers:
  • Setting up test conditions
  • Documenting and designing test cases
  • Controlling and tracking test execution
  • Recording test evidence
  • Performing functional tests
  • Performing integration tests
  • Ensuring that delivered requirements conform to expectations
  • Tracking open items
  • Following up on incidents
  • Reporting defects
All three roles:
  • Knowledge of SQL databases
  • Knowledge of tools for recording and tracking test cases
  • Knowledge of tools for recording and tracking defects
  • Knowledge of automation tools
  • Knowledge of document management tools
HR
First of all, or rather before anything else, you will have to select the staff who will be part of the area. So we will have to decide what the ideal profile of a tester is.
It is worth clarifying that anyone with access to ICT, even without technical knowledge, can be trained to be a tester.
If the candidate is an IT professional, such as a graduate or an engineer, they must focus on the tester role; that is, they should limit themselves to detecting and reporting the faults or errors of the systems they are testing.
Another viable alternative is a functional analyst, who is already immersed in the system's problem domain.
Functional analysts tend to be more open-minded and are always willing to collaborate with the different work groups (programmers, DBAs, or project leads).
Training Courses
Training is always very important in today's world, and even more so in IT.
You will have to define which kinds of technical training the staff of the IT management and of the Testing Area will receive. These courses or certifications vary depending on the level in question.
There are currently two ISTQB certification levels for testing professionals:
Foundation Certification.
Advanced Certification.
The official ISTQB courses offer an interesting path, giving professionals the knowledge they need to enter an internationally recognized testing scheme.
There are also other kinds of courses worth mentioning, such as "Operador de Testing" empleartec.org.ar/cursos/106/operador-de-testing.
It has the advantage of being free and having substantial technical content, but unfortunately it is only available to residents of the province of San Juan and of Rosario, Santa Fe.
Tools
Above all, you will need a bug tracker as your go-to tool. There are many commercial ones and other open source ones. For obvious reasons, this kind of solution is necessary not only to manage the software life cycle but also to improve the team's tasks and communication.
We should also mention the tools used for automated testing, some of which are:
  • Selenium IDE
  • JUnit and NUnit
  • Mercury QTP (Quick Test Professional)
  • GXtest
  • Etc.
Test Evidence Documentation
When testers are working through their test cases rigorously, it is extremely important to have documentation that records evidence of those tests.
In it, testers capture screenshots, note observations, etc., about the results obtained over the course of the tests.
This kind of documentation comes in various types and formats, but it basically contains the following information:
  • Estimated test time
  • Start date / End date
  • Name of the tester or user
  • Test case ID
  • Module under test
  • Case description
  • Prerequisites
  • Required data (values to enter)
  • Expected result (correct or incorrect)
  • Actual result
  • Observations or comments
  • Test analyst (responsible for the tests)
  • Execution date
  • Status (completed, pending, in progress)
Sample template
Id | Module under test | Case description | Prerequisites | Expected result | Actual result | Status
CP001 | CTAS.CTES | Verify that the sales file is generated correctly | Data exists for the file; the destination path exists | OK | OK | Completed
CP002 | PRESTAMOS | Verify that the entered data is saved in the Movimientos table | Data is entered; read permissions on the DB | OK |  | Pending
Regards,
Luis Alfonso Cutro
Blog contributor

http://testingbaires.com/debate-que-se-necesita-para-instalar-nuestra-area-de-testing/

Testers: Put on Your End-User Hat

Summary:
The more you know about the end-user, the more effective you will be as a tester. Here are some tips for adding value by thinking like your customer.

One of the biggest criticisms of testers and QA organizations is that they do not understand the business or the end-user. If this is true of your team, it sends a clear message that you do not have a value-adding team of testing professionals. The more you know about the ultimate customer or end-user, the more effective a risk-based tester you will become.
When I led QA teams in the past, I made "knowing your customer" a major performance criterion for my staff. To ensure this, I arranged field trips with business development to customer sites and had the testing team observe how and why the end-users actually used the system or application. Upon returning from these field trips, the QA team started to modify how it approached end-to-end and user acceptance tests. It was so beneficial that the number of critical end-user defects dropped by more than 20 percent in a very short period of time.
This result inspired me to continue my learning. I took the product management certification course from Pragmatic Marketing and was certified in pragmatic product management in December 2009. From the course, I learned how understanding the following questions will increase the effectiveness of tests and testing teams (note: It is your responsibility to ensure you are adding value to the delivery of the product):
  • What problem or problems will this upgrade, enhancement, or new feature solve? This is the value proposition.
  • For whom do we solve that problem? This is the target market.
  • How will we measure success? This is the business result. What metrics will be needed to validate success has been attained?
  • What alternatives are out there? What is the competition doing? Is it a "blue ocean industry” or a "red ocean industry”?
  • Why are we best suited to pursue this? What is our differentiator?
  • Why now, and when does it need to be delivered? This is the market window or window of opportunity.
  • How will we deploy this? What will be our deployment strategy?
  • What is the preliminary estimated cost and benefit? Will there be a return on investment, customer satisfaction increase, or cost avoidance?
If you understand these high-level questions, you will ensure a higher level of end-user quality by designing and executing tests from an end-user's perspective. By defining, quantifying, and weighing the many quality dimensions as perceived by your end-users, you will be able to approach testing in a very efficient and effective manner. Knowing what the user wants and needs to do with the system will enable a proactive mindset regarding requirements and feature reviews, acceptable behaviors, operational inconsistencies, interactions, and interoperability.
I have found the user manual to be a great source of knowledge for a test team. Granted, a newly developed application is devoid of a manual, as the manual gets developed along with the application. But, during my independent consulting years, I relied heavily on these manuals to gain an operational business perspective. Be careful, though, as they can be dated and may become stale depending upon how much the end-user relies upon them.
This perspective naturally leads to an understanding of where the potential risks are to the business:
  • What are the most common and critical areas of the functionality from the user’s point of view?
  • How accepting should the system be to incorrect input?
  • Is the description complete enough to proceed to design, implement, and test the requirement processes?
  • What is an acceptable response time for the users? Are there stated performance requirements?
  • Which problems and risks may be associated with these requirements?
  • Are there limitations in the software or hardware?
  • Which omissions are assumed directly or indirectly?
The test plan should document the techniques and methods that will be employed to validate the system under test. The test plan should detail the estimates of the test cycles in relation to the delivery plan. The test estimation algorithm and the estimates derived from it are best guesses at this point, so be sure these assumptions are reviewed with the delivery team.
The same holds true for the development team. If the project is to be delivered in iterations, then naturally the team will jointly develop the estimated costs and duration. It is important to highlight to the delivery team the estimated defect rate for the delivery. As this rate is approached or exceeded, the impact on remaining deployments and regression can cause not only the timeline to skew but also costs to escalate. Feedback from these defect trends should trigger a re-estimation of remaining iterations or sprints, thereby increasing accuracy and confidence. The methods and techniques documented in the plan will support the estimation of costs and duration. Examples of these techniques include requirements-based, combinatorial, model-based, and scenario-based testing. Each technique has unique attributes that will be associated with the various levels of structural and functional testing. Test estimates will be challenged, so teams need to stay focused. Should all business-critical features and functions have more than 90 percent of test case permutations covering the various combinations of valid, error, and user profiles? Do all the users view the feature set similarly and agree about what is critical to the operation of their needs and business?
Deciding at what level to stop testing is difficult, but there is the ever-present law of diminishing returns. Review the approach and risks with the business and product owners, and gain their insight into where there could be excessive testing and what is an acceptable level of risk. As a value-adding testing team, you must quantify the costs by articulating the number of test cases and how long it will take to deliver the quality the end-user is expecting. Understand where the greatest risks to the business exist—features frequently used by most or all customers, financial impacts from failure or errors, feature complexity, defect density, and historical data on problem areas or feature sets.
When it all comes together, the team can circle back to the quality dimensions highlighted earlier. Reliability, usability, and accuracy will manifest in the number of test cases and the techniques used to satisfy the level of quality that the end-user is expecting and that the business owner must plan to spend on. Complete transparency enables the team to make sound business decisions and decide on appropriate levels of risk and tradeoffs when plans are not being met. The cost-risk-benefit equation of quality will be used as the team makes adjustments to content, time, and cost. There should be no surprises when the team is faced with the tough decisions that always arise during the deployment of software.
In the delivery of software, testers can wear many hats. Teams that are able to think like the end-user will make a significant contribution in ensuring that the test team is adding value and focused on meeting the client’s expectations.


http://www.stickyminds.com/article/testers-put-your-end-user-hat

Flowchart Exercises - Part 1

Here is a series of basic flowchart exercises.

The other exercises
Part 1   Part 2   Part 3   Part 4   Part 5   Part 6   Example 1   Example 2   Example 3   Example 4   Example 5

Addition of two numbers. Link


Subtraction of two numbers. Link


Multiplication of two numbers. Link


Division of two numbers. Link

Source: http://mis-algoritmos.com/ejemplos/diagramas-flujo.html

Thursday, July 11

180+ Sample Test Cases for Testing Web and Desktop Applications – Comprehensive Testing Checklist

This is a testing checklist for web and desktop applications.
Note – This article is a little long (over 2,700 words). My goal is to share one of the most comprehensive testing checklists ever written, and it is not yet done. I'll keep updating this post in the future with more scenarios. If you don't have time to read it now, feel free to share it with your friends and bookmark it for later.
Make this testing checklist an integral part of your test case writing process. Using it, you can easily create hundreds of test cases for testing web or desktop applications. These are all general test cases and should be applicable to almost all kinds of applications. Refer to these tests while writing test cases for your project, and I'm sure you will cover most testing types, except the application-specific business rules provided in your SRS documents.
Software Testing Checklist
Though this is a common checklist, I recommend preparing a standard testing checklist tailored to your specific needs, using the test cases below in addition to application-specific tests.
Importance of Using Checklist for Testing:
- Maintaining a standard repository of reusable test cases for your application will ensure that the most common bugs are caught more quickly.
- A checklist helps you quickly finish writing test cases for new versions of the application.
- Reusing test cases helps save money on resources for writing repetitive tests.
- Important test cases will always be covered, making them almost impossible to forget.
- The testing checklist can be referred to by developers to ensure the most common issues are fixed in the development phase itself.
Few notes to remember:
1) Execute these scenarios with different user roles, e.g. admin user, guest user, etc.
2) For web applications, these scenarios should be tested on multiple browsers like IE, FF, Chrome, and Safari, with versions approved by the client.
3) Test with different screen resolutions like 1024 x 768, 1280 x 1024, etc.
4) The application should be tested on a variety of displays like LCD, CRT, notebooks, tablets, and mobile phones.
5) Test the application on different platforms like Windows, Mac, and Linux operating systems.
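
Notes 2 and 3 above are easy to automate. A minimal sketch, assuming the selenium package and a local Chrome driver; the URL is a placeholder for your application:

```python
# Illustrative only: sweep one browser across several screen resolutions.
from selenium import webdriver

RESOLUTIONS = [(1024, 768), (1280, 1024), (1920, 1080)]

for width, height in RESOLUTIONS:
    driver = webdriver.Chrome()  # repeat with webdriver.Firefox(), etc.
    try:
        driver.set_window_size(width, height)
        driver.get("https://example.com/app")  # placeholder URL
        # Stand-in assertion: the page rendered something at this size.
        assert driver.title, f"empty title at {width}x{height}"
        print(f"{width}x{height}: OK")
    finally:
        driver.quit()
```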

Comprehensive Testing Checklist for Testing Web and Desktop Applications:

Assumptions: Assuming that your application supports the following functionality
- Forms with various fields
- Child windows
- Application interacts with database
- Various search filter criteria and display results
- Image upload
- Send email functionality
- Data export functionality

General Test Scenarios

1. All mandatory fields should be validated and indicated by asterisk (*) symbol
2. Validation error messages should be displayed properly at correct position
3. All error messages should be displayed in the same CSS style (e.g. using red color)
4. General confirmation messages should be displayed using a CSS style other than the error message style (e.g. using green color)
5. Tool tips text should be meaningful
6. Dropdown fields should have first entry as blank or text like ‘Select’
7. Delete functionality for any record on page should ask for confirmation
8. Select/deselect all records options should be provided if page supports record add/delete/update functionality
9. Amount values should be displayed with correct currency symbols
10. Default page sorting should be provided
11. Reset button functionality should set default values for all fields
12. All numeric values should be formatted properly
13. Input fields should be checked for max field value. Input values greater than the specified max limit should not be accepted or stored in the database (see the sketch after this list, which also covers items 25 and 30)
14. Check all input fields for special characters
15. Field labels should be standard e.g. field accepting user’s first name should be labeled properly as ‘First Name’
16. Check page sorting functionality after add/edit/delete operations on any record
17. Check for timeout functionality. Timeout values should be configurable. Check application behavior after operation timeout
18. Check cookies used in an application
19. Check if downloadable files are pointing to correct file paths
20. All resource keys should be configurable in config files or database instead of hard coding
21. Standard conventions should be followed throughout for naming resource keys
22. Validate markup for all web pages (validate HTML and CSS for syntax errors) to make sure it is compliant with the standards
23. Application crash or unavailable pages should be redirected to error page
24. Check text on all pages for spelling and grammatical errors
25. Check numeric input fields with character input values. Proper validation message should appear
26. Check for negative numbers if allowed for numeric fields
27. Check amount fields with decimal number values
28. Check functionality of buttons available on all pages
29. User should not be able to submit page twice by pressing submit button in quick succession.
30. Divide by zero errors should be handled for any calculations
31. Input data with first and last position blank should be handled correctly
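To make items 13, 25 and 30 concrete, here is a minimal pytest-style sketch. The validate_amount helper, the 10-character limit and the error messages are invented for the example; they are not from the original checklist.

# Hypothetical input-validation helper illustrating checklist items 13, 25 and 30.
# MAX_LENGTH and the messages are assumptions for the example.

import pytest

MAX_LENGTH = 10

def validate_amount(raw: str) -> float:
    if len(raw) > MAX_LENGTH:
        raise ValueError("input exceeds maximum field length")  # item 13
    try:
        return float(raw)
    except ValueError:
        raise ValueError("input must be numeric") from None     # item 25

def safe_ratio(a: float, b: float) -> float:
    # Item 30: divide-by-zero must be handled, not crash the page.
    if b == 0:
        raise ValueError("division by zero is not allowed")
    return a / b

@pytest.mark.parametrize("raw", ["12345678901", "abc", ""])
def test_invalid_inputs_are_rejected(raw):
    with pytest.raises(ValueError):
        validate_amount(raw)

def test_divide_by_zero_is_handled():
    with pytest.raises(ValueError):
        safe_ratio(5, 0)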

GUI and Usability Test Scenarios

1. All fields on page (e.g. text box, radio options, dropdown lists) should be aligned properly
2. Numeric values should be right justified unless specified otherwise
3. Enough space should be provided between field labels, columns, rows, error messages etc.
4. Scroll bar should be enabled only when necessary
5. Font size, style and color for headline, description text, labels, infield data, and grid info should be standard as specified in SRS
6. Description text box should be multi-line
7. Disabled fields should be grayed out and user should not be able to set focus on these fields
8. Upon click of any input text field, mouse arrow pointer should get changed to cursor
9. User should not be able to type in drop down select lists
10. Information filled in by users should remain intact when an error message appears on page submit. The user should be able to submit the form again after correcting the errors
11. Check if proper field labels are used in error messages
12. Dropdown field values should be displayed in defined sort order
13. Tab and Shift+Tab order should work properly
14. Default radio options should be pre-selected on page load
15. Field specific and page level help messages should be available
16. Check if correct fields are highlighted in case of errors
17. Check if dropdown list options are readable and not truncated due to field size limit
18. All buttons on page should be accessible by keyboard shortcuts and user should be able to perform all operations using keyboard
19. Check all pages for broken images
20. Check all pages for broken links (see the link-checker sketch after this list)
21. All pages should have title
22. Confirmation messages should be displayed before performing any update or delete operation
23. An hourglass or busy indicator should be displayed when the application is busy
24. Page text should be left justified
25. User should be able to select only one radio option and any combination for check boxes.
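For item 20, a broken-link check can be automated. Below is a minimal standard-library sketch; the URL handling is deliberately simplified (one page, no robots.txt, no crawling), so treat it as a starting point rather than a complete tool.

# Minimal broken-link checker for a single page (stdlib only).
# The start URL passed to check_links is a placeholder chosen by the caller.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import Request, urlopen

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url: str) -> list:
    html = urlopen(Request(page_url), timeout=10).read().decode("utf-8", "replace")
    parser = LinkExtractor()
    parser.feed(html)
    broken = []
    for href in parser.links:
        url = urljoin(page_url, href)
        if not url.startswith("http"):
            continue  # skip mailto:, javascript:, in-page anchors
        try:
            status = urlopen(Request(url, method="HEAD"), timeout=10).status
        except Exception:
            status = None  # HTTP errors (4xx/5xx) and network failures land here
        if status is None or status >= 400:
            broken.append(url)
    return broken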

Test Scenarios for Filter Criteria

1. User should be able to filter results using all parameters on the page
2. Refine search functionality should load search page with all user selected search parameters
3. When at least one filter criterion is required to perform the search operation, make sure a proper error message is displayed when the user submits the page without selecting any filter criteria.
4. When filter criteria selection is not compulsory, the user should be able to submit the page, and the default search criteria should be used to query results
5. Proper validation messages should be displayed for invalid values for filter criteria

Test Scenarios for Result Grid

1. A page loading symbol should be displayed when the result page takes more than the default time to load
2. Check if all search parameters are used to fetch data shown on result grid
3. Total number of results should be displayed on result grid
4. Search criteria used for searching should be displayed on result grid
5. Result grid values should be sorted by default column.
6. Sorted columns should be displayed with sorting icon
7. Result grids should include all specified columns with correct values
8. Ascending and descending sorting functionality should work for columns supported with data sorting
9. Result grids should be displayed with proper column and row spacing
10. Pagination should be enabled when there are more results than the default result count per page (see the page-count sketch after this list)
11. Check for Next, Previous, First and Last page pagination functionality
12. Duplicate records should not be displayed in result grid
13. Check if all columns are visible and horizontal scroll bar is enabled if necessary
14. Check data for dynamic columns (columns whose values are calculated dynamically based on the other column values)
15. For result grids showing reports check ‘Totals’ row and verify total for every column
16. For result grids showing reports check ‘Totals’ row data when pagination is enabled and user navigates to next page
17. Check if proper symbols are used for displaying column values e.g. % symbol should be displayed for percentage calculation
18. Check result grid data if date range is enabled
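For item 10, the expected number of pages is a ceiling division of the total result count by the page size. A tiny sketch, assuming a default of 20 results per page (an invented value):

# Expected page count for a result grid; ceiling division.
# PAGE_SIZE = 20 is an assumed default, not from the source.

PAGE_SIZE = 20

def expected_pages(total_results: int, page_size: int = PAGE_SIZE) -> int:
    return max(1, -(-total_results // page_size))  # ceil without math.ceil

assert expected_pages(0) == 1    # empty grid still shows one page
assert expected_pages(20) == 1   # pagination not yet needed
assert expected_pages(21) == 2   # one extra record enables pagination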

Test Scenarios for a Window

1. Check if default window size is correct
2. Check if child window size is correct
3. Check if there is any field on page with default focus (in general, the focus should be set on first input field of the screen)
4. Check if child windows are getting closed on closing parent/opener window
5. If child window is opened, user should not be able to use or update any field on background or parent window
6. Check window minimize, maximize and close functionality
7. Check if window is re-sizable
8. Check scroll bar functionality for parent and child windows
9. Check cancel button functionality for child window


Database Testing Test Scenarios

1. Check if correct data is getting saved in database upon successful page submit
2. Check values for columns which are not accepting null values
3. Check for data integrity. Data should be stored in single or multiple tables based on design
4. Index names should be given as per the standards e.g. IND_<Tablename>_<ColumnName>
5. Tables should have primary key column
6. Table columns should have description information available (except for audit columns like created date, created by etc.)
7. For every database add/update operation log should be added
8. Required table indexes should be created
9. Check if data is committed to database only when the operation is successfully completed
10. Data should be rolled back in case of failed transactions (see the rollback sketch after this list)
11. Database name should be given as per the application type i.e. test, UAT, sandbox, live (though this is not a standard it is helpful for database maintenance)
12. Database logical names should be given according to database name (again this is not standard but helpful for DB maintenance)
13. Stored procedures should not be named with prefix “sp_”
14. Check if values for table audit columns (like createddate, createdby, updatedate, updatedby, isdeleted, deleteddate, deletedby etc.) are populated properly
15. Check if input data is not truncated while saving. Field length shown to user on page and in database schema should be same
16. Check numeric fields with minimum, maximum, and float values
17. Check numeric fields with negative values (for both acceptance and non-acceptance)
18. Check if radio button and dropdown list options are saved correctly in database
19. Check if database fields are designed with correct data type and data length
20. Check if all table constraints like Primary key, Foreign key etc. are implemented correctly
21. Test stored procedures and triggers with sample input data
22. Input field leading and trailing spaces should be truncated before committing data to database
23. Null values should not be allowed for Primary key column
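Items 9 and 10 — commit only on success, roll back on failure — can be demonstrated with the standard-library sqlite3 module. The accounts table and the amounts below are invented for the example:

# Transaction rollback sketch for checklist items 9 and 10 (sqlite3, stdlib).
# Table name and data are assumptions for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100), (2, 0)")
conn.commit()

try:
    with conn:  # commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
        # Simulate a failure midway through the transaction:
        raise RuntimeError("transfer failed before crediting account 2")
except RuntimeError:
    pass

# The debit above must have been rolled back:
balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
assert balance == 100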

Test Scenarios for Image Upload Functionality

(Also applicable for other file upload functionality)
1. Check for uploaded image path
2. Check image upload and change functionality
3. Check image upload functionality with image files of different extensions (e.g. JPEG, PNG, BMP etc.)
4. Check image upload functionality with images having space or any other allowed special character in file name
5. Check duplicate name image upload
6. Check image upload with image size greater than the max allowed size. Proper error message should be displayed.
7. Check image upload functionality with file types other than images (e.g. txt, doc, pdf, exe etc.). A proper error message should be displayed (see the validation sketch after this list)
8. Check if images of specified height and width (if defined) are accepted otherwise rejected
9. Image upload progress bar should appear for large size images
10. Check if cancel button functionality is working in between upload process
11. Check if file selection dialog shows only supported files listed
12. Check multiple images upload functionality
13. Check image quality after upload. Image quality should not be changed after upload
14. Check if user is able to use/view the uploaded images
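As a sketch of items 6 and 7, here is a minimal upload validator. The 2 MB limit and the set of allowed extensions are assumed values, not taken from the article:

# Upload validation sketch for items 6 and 7.
# The size limit and allowed extensions are assumptions for the example.

import os

ALLOWED_EXTENSIONS = {".jpeg", ".jpg", ".png", ".bmp"}
MAX_SIZE_BYTES = 2 * 1024 * 1024  # 2 MB, an assumed limit

def validate_upload(filename: str, size_bytes: int) -> None:
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"file type {ext or '(none)'} is not allowed")
    if size_bytes > MAX_SIZE_BYTES:
        raise ValueError("file exceeds the maximum allowed size")

# Expected outcomes:
validate_upload("photo with spaces.png", 1024)  # accepted (cf. item 4)
for bad in [("malware.exe", 10), ("huge.png", MAX_SIZE_BYTES + 1)]:
    try:
        validate_upload(*bad)
        raise AssertionError("invalid upload was accepted")
    except ValueError:
        pass  # proper rejection, as the checklist requires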

Test Scenarios for Sending Emails

(Test cases for composing or validating emails are not included)
(Make sure to use dummy email addresses before executing email related tests)
1. Email template should use standard CSS for all emails
2. Email addresses should be validated before sending emails
3. Special characters in email body template should be handled properly
4. Language specific characters (e.g. Russian, Chinese or German language characters) should be handled properly in email body template
5. Email subject should not be blank
6. Placeholder fields used in the email template should be replaced with actual values, e.g. {Firstname} {Lastname} should be replaced with each recipient’s first and last name (see the sketch after this list)
7. If reports with dynamic values are included in email body, report data should be calculated correctly
8. Email sender name should not be blank
9. Emails should be checked in different email clients like Outlook, Gmail, Hotmail, Yahoo! mail etc.
10. Check send email functionality using TO, CC and BCC fields
11. Check plain text emails
12. Check HTML format emails
13. Check email header and footer for company logo, privacy policy and other links
14. Check emails with attachments
15. Check send email functionality to single, multiple or distribution list recipients
16. Check if reply to email address is correct
17. Check sending high volume of emails
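Item 6 is easy to unit-test with plain string formatting. A minimal sketch, with invented template text and recipient data:

# Placeholder replacement sketch for item 6 (str.format, stdlib).
# The template text and recipients are invented for the example.

template = "Dear {Firstname} {Lastname},\nYour report is attached."
recipients = [
    {"Firstname": "Ada", "Lastname": "Lovelace"},
    {"Firstname": "Alan", "Lastname": "Turing"},
]

for person in recipients:
    body = template.format(**person)  # raises KeyError if a placeholder is missing
    assert "{" not in body, "an unreplaced placeholder survived"
    print(body)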

Test Scenarios for Excel Export Functionality

1. File should get exported in proper file extension
2. File name for the exported Excel file should be as per the standards e.g. if file name is using timestamp, it should get replaced properly with actual timestamp at the time of exporting the file
3. Check for date format if exported Excel file contains date columns
4. Check number formatting for numeric or currency values. Formatting should be same as shown on page
5. Exported file should have columns with proper column names
6. Default page sorting should be carried in exported file as well
7. Excel file data should be formatted properly with header and footer text, date, page numbers etc. values for all pages
8. Check if the data displayed on the page and in the exported Excel file is the same (see the round-trip sketch after this list)
9. Check export functionality when pagination is enabled
10. Check if export button is showing proper icon according to exported file type e.g. Excel file icon for xls files
11. Check export functionality for files with very large size
12. Check export functionality for pages containing special characters. Check if these special characters are exported properly in Excel file
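Item 8 amounts to a round-trip check: export the grid, read the file back, compare. The sketch below uses the csv module as a stand-in for a real Excel writer, with invented grid rows:

# Export round-trip sketch for item 8 (csv as a stand-in for Excel).
# The grid rows are invented for the example.

import csv
import io

grid_rows = [["Name", "Amount"], ["Widget", "10.50"], ["Gadget", "3.00"]]

# Export:
buffer = io.StringIO()
csv.writer(buffer).writerows(grid_rows)

# Re-import and compare with what the page displayed:
buffer.seek(0)
exported_rows = list(csv.reader(buffer))
assert exported_rows == grid_rows, "exported data differs from the page"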

Performance Testing Test Scenarios

1. Check if page load time is within the acceptable range (see the timing sketch after this list)
2. Check page load on slow connections
3. Check response time for any action under light, normal, moderate and heavy load conditions
4. Check performance of database stored procedures and triggers
5. Check database query execution time
6. Check for load testing of application
7. Check for stress testing of application
8. Check CPU and memory usage under peak load condition
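Item 1 can be automated with a simple timer around the page request. A minimal standard-library sketch; the URL and the 2-second budget are assumptions for the example, and a real test should average several runs:

# Page-load timing sketch for item 1 (stdlib only; needs network access).
# URL and BUDGET_SECONDS are assumed values.

import time
from urllib.request import urlopen

URL = "https://example.com/"   # placeholder URL
BUDGET_SECONDS = 2.0           # assumed acceptable range

start = time.perf_counter()
with urlopen(URL, timeout=10) as response:
    response.read()
elapsed = time.perf_counter() - start

assert elapsed <= BUDGET_SECONDS, f"page took {elapsed:.2f}s, budget is {BUDGET_SECONDS}s"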

Security Testing Test Scenarios

1. Check for SQL injection attacks (see the sketch after this list)
2. Secure pages should use HTTPS protocol
3. Page crash should not reveal application or server info. Error page should be displayed for this
4. Escape special characters in input
5. Error messages should not reveal any sensitive information
6. All credentials should be transferred over an encrypted channel
7. Test password security and password policy enforcement
8. Check application logout functionality
9. Check for Brute Force Attacks
10. Cookie information should be stored in encrypted format only
11. Check session cookie duration and session termination after timeout or logout
12. Session tokens should be transmitted over secured channel
13. Password should not be stored in cookies
14. Test for Denial of Service attacks
15. Test for memory leakage
16. Test unauthorized application access by manipulating variable values in browser address bar
17. Test file extension handling so that exe files are not uploaded and executed on the server
18. Sensitive fields like passwords and credit card information should not have auto complete enabled
19. File upload functionality should use file type restrictions and also anti-virus for scanning uploaded files
20. Check if directory listing is prohibited
21. Password and other sensitive fields should be masked while typing
22. Check if forgot password functionality is secured with features like temporary password expiry after specified hours and security question is asked before changing or requesting new password
23. Verify CAPTCHA functionality
24. Check if important events are logged in log files
25. Check if access privileges are implemented correctly
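To illustrate item 1, the classic ' OR '1'='1 payload shows why queries must be parameterized. A standard-library sqlite3 sketch with an invented users table:

# SQL injection sketch for item 1: string concatenation leaks every row,
# parameter binding treats the payload as a literal. Data is invented.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

payload = "nobody' OR '1'='1"

# Vulnerable: attacker-controlled input concatenated into the query.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + payload + "'"
).fetchall()
assert len(unsafe) == 2  # the injection returned every row

# Safe: parameter binding treats the payload as a plain value.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
assert safe == []        # no user is literally named the payload string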
Penetration testing test cases – I’ve listed around 41 test cases for penetration testing on this page.
I’d really like to thank Devanshu Lavaniya (Sr. QA Engineer working for I-link Infosoft) for helping me prepare this comprehensive testing checklist.
I’ve tried to cover all standard test scenarios for web and desktop application functionality. But I know this is still not a complete checklist. Testers on different projects maintain their own testing checklists based on their experience.

http://www.softwaretestinghelp.com/sample-test-cases-testing-web-desktop-applications/