Thursday, June 27

Cause and Effect Graph

Cause and Effect Graph – Dynamic Test Case Writing Technique For Maximum Coverage with Fewer Test Cases

Dynamic Testing Techniques – Cause and Effect Graph.

Test case writing forms an integral part of testing, whether it is manual or automated. Every project is unique and has any number of test conditions that need to be covered.
We should focus on two points whenever we write test cases. These are:
Mitigating the risk
Coverage
This article revolves around the second point, "Coverage" – to be precise, requirements coverage.

Test case writing techniques for dynamic testing

  1. Equivalence partitioning
  2. Boundary Value Analysis
  3. Decision table
  4. Cause and effect graph technique
  5. State transition diagram
  6. Orthogonal array testing (OATS)
  7. Error guessing.
We have some really good articles on the 1st, 2nd and 3rd points (equivalence partitioning, BVA and decision tables) here at STH. I am going to discuss point 4, the cause and effect graph.
Introduction
Cause and effect graph is a dynamic test case writing technique. Here the causes are the input conditions and the effects are the results of those input conditions.
Cause-effect graphing is a technique that starts with a set of requirements and determines the minimum possible test cases for maximum test coverage, which reduces test execution time and ultimately cost.
The goal is to reduce the total number of test cases while still achieving the desired application quality by covering the necessary test cases for maximum coverage.
At the same time, there are obviously some downsides to using this test case writing technique. It takes time to model all your requirements into the cause-effect graph before writing test cases.
The cause-effect graph technique restates the requirements specification in terms of the logical relationships between the input and output conditions. Since it is logical, it is natural to use Boolean operators like AND, OR and NOT.
Notations we are going to use:
Cause and effect graph testing 1
Now let's try to implement this technique with a couple of examples:
1. Draw a cause and effect graph based on a requirement/situation.
2. Given a cause and effect graph, derive a decision table from it to write the test cases.
Let's look at both of them one by one.

Let’s draw a cause and effect graph based on a situation

Situation:
The "Print message" software reads two characters and, depending on their values, prints messages.
  • The first character must be an “A” or a “B”.
  • The second character must be a digit.
  • If the first character is an “A” or “B” and the second character is a digit, the file must be updated.
  • If the first character is incorrect (not an “A” or “B”), the message X must be printed.
  • If the second character is incorrect (not a digit), the message Y must be printed.
Solution:
The causes for this situation are:
C1 – First character is A
C2 – First character is B
C3 – Second character is a digit
The effects (results) for this situation are:
E1 – Update the file
E2 – Print message “X”
E3 – Print message “Y”
LET’S START!!
First draw the causes and effects as shown below:
Cause and effect graph testing 2
Key – Always work backwards from effect to cause. That means: to get effect "E", which causes should be true?
In this example, let’s start with Effect E1.
Effect E1 is to update the file. The file is updated when
-  First character is “A” and second character is a digit
-  First character is “B” and second character is a digit
-  First character can either be “A” or “B” and cannot be both.
Now let’s put these 3 points in symbolic form:
For E1 to be true, the causes are:
-  C1 and C3 should be true
-  C2 and C3 should be true
-  C1 and C2 cannot be true together. This means C1 and C2 are mutually exclusive.
Now let’s draw this:
Cause and effect graph testing 3
So as per the above diagram, for E1 to be true the condition is
(C1 ∨ C2) ∧ C3
The circle in the middle is just an intermediate node, used to make the graph less messy.
There is a third condition where C1 and C2 are mutually exclusive. So the final graph for effect E1 to be true is shown below:
Cause and effect graph testing 4
Let's move to Effect E2:
E2 states that message "X" is printed. Message X will be printed when the first character is neither A nor B.
That means effect E2 will hold true when neither C1 nor C2 is true, i.e. ¬(C1 ∨ C2). The graph for effect E2 is shown as (in the blue line):
Cause and effect graph testing 5
For Effect E3:
E3 states that message "Y" is printed. Message Y will be printed when the second character is incorrect.
That means effect E3 will hold true when C3 is false, i.e. ¬C3. The graph for effect E3 is shown as (in the green line):
Cause and effect graph testing 6
This completes the Cause and Effect graph for the above situation.
Now let’s move to draw the Decision table based on the above graph.

Writing Decision table based on Cause and Effect graph

First, write down the causes and effects in a single column, as shown below:
Cause and effect graph testing 7
The key is the same: go from bottom to top, which means traversing from effect to cause.
Start with effect E1. For E1 to be true, the condition is: (C1 ∨ C2) ∧ C3.
Here we represent true as 1 and false as 0.
First, put effect E1 as true in the next column:
Cause and effect graph testing 8
Now, for E1 to be "1" (true), we have the two conditions below:
C1 AND C3 are true
C2 AND C3 are true
Cause and effect graph testing 9
For E2 to be true, both C1 and C2 have to be false, shown as:
Cause and effect graph testing 10
For E3 to be true, C3 should be false.
Cause and effect graph testing 11
So it's done. Let's complete the table by adding 0 in the blank cells and including the test case identifiers.
Cause and effect graph testing 12
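The whole derivation above can also be checked mechanically. Below is a minimal Python sketch (the structure and names are mine, not part of the original technique) that enumerates the cause combinations, skips the impossible ones, and prints the same decision table:

from itertools import product

# Causes: C1 = first character is "A", C2 = first character is "B",
# C3 = second character is a digit.
print("C1 C2 C3 | E1 E2 E3")
for c1, c2, c3 in product([1, 0], repeat=3):
    if c1 and c2:                 # C1 and C2 are mutually exclusive
        continue
    e1 = int((c1 or c2) and c3)   # E1: update the file
    e2 = int(not (c1 or c2))      # E2: print message "X"
    e3 = int(not c3)              # E3: print message "Y"
    print(f" {c1}  {c2}  {c3} |  {e1}  {e2}  {e3}")

Each printed row is a candidate column of the decision table; rows that trigger the same effects can then be merged into a single test case.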

Writing Test cases from the decision table

I am writing a sample test case for test case 1 (TC1) and Test Case 2 (TC2).
Cause and effect graph testing 13
In a similar fashion, you can create the other test cases.
(A test case contains many other attributes, like preconditions, test data, severity, priority, build, version, release, environment, etc. I assume all these attributes would be included when you write the test cases in a real situation.)

Conclusion

Summarizing the steps once again:
  1. Draw the circles for the causes and effects.
  2. Start from the effects and move towards the causes.
  3. Look for mutually exclusive causes.
This finishes the cause and effect graph dynamic test case writing technique. We have seen how to draw the graph and how to derive the decision table from it. The final step of writing test cases based on the decision table is comparatively easy.

http://www.softwaretestinghelp.com/cause-and-effect-graph-test-case-writing-technique/

We Need A Better Way To Test

Testing started simply.  Developers would run their code after they wrote it to make sure it worked.  When teams became larger and code more complex, it became apparent that developers could spend more time coding if they left much of the testing to someone else.  People could specialize in developing or testing.  Most testers in the early stages of the profession were manual testers.  They played with the user interface and made sure the right things happened.
This works fine for the first release but after several releases, it becomes very expensive.  Each release has to test not only the new features in the product but also all of the features put into every other version of the product.  What took 5 testers for version 1 takes 20 testers for version 4.  The situation just gets worse as the product ages.  The solution is test automation.  Take the work people do over and over again and let the computer do that work.  There is a limit to the utility of this, but I've spoken of that elsewhere and it doesn't need to be repeated here.  With sufficiently skilled testers, most products can be tested in an automated fashion.  Once a test is automated, the cost of running it every week or even every day becomes negligible. 
As computer programs became more complex over time, the old testing paradigm didn't scale and a new paradigm--automated testing--had to be found.  There is, I think, a new paradigm shift coming.  Most test automation today is the work of skilled artisans.  Programmers examine the interfaces of the product they are testing and craft code to exercise it in interesting and meaningful ways.  Depending on the type of code being worked on, a workforce of 1:1 testers to devs can usually keep up.  This was true at one point.  Today it is only somewhat true and tomorrow it will be even less true.  Some day, it will be false.  What has changed?  Developers are leveraging better programming models such as object-oriented code, larger code libraries, greater code re-use, and more efficient languages to get more done with less code.  Unfortunately, this merely increases the surface area for testers to have to cover.  Imagine, if you will, a circle.  When a developer is able to create 1 unit of code (r=1), the area which a tester must cover is only 3.14.  When the developer uses tools to increase his work and the radius stretches to 2, the tester must now cover an area of 12.56.  The area needing to be tested increases much faster than the productivity increase.  Using the same programming models as the developers will not allow test to keep up.  In the circle example, a 2x boost in tester performance would only cover 1/2 of the circle.
Is test doomed?  Is there any way to keep up, or are we destined to be outpaced by development and to need larger and larger teams of test developers just to keep pace?  The solution to the problem has the same roots as the solution to the manual testing problem.  That is, it is time to leverage the computer to do more work on behalf of the tester.  It will soon be too expensive to hand-craft test cases for each function call and the set of parameters it entails.  Writing code one test case at a time just doesn't scale--even with newer tools.  In the near future, it will be important to leverage the computer to write test cases by itself.  Can this be done?  Work is already beginning, but it is just in its infancy.  The tools and practices that will make this a widespread practice likely do not exist today.  Certainly not in a readily consumed form.
This coming paradigm shift makes testing a very interesting place to be working today.  On the one hand, it can be easy for testers to become overwhelmed with the amount of work asked of them.  On the other hand, the solutions to the problem of how to leverage the computer to test itself are just now being created.  Being in on the ground floor of a new paradigm means the ability to have a tremendous impact on how things will be done for years to come.

http://blogs.msdn.com/b/steverowe/archive/2008/05/30/we-need-a-better-way-to-test.aspx

Monday, June 24

Test a test case? A test of the test?

When we talk about automation, one of the most delicate issues is the lying result, also known as the False Positive and the False Negative. Those who have already automated know this is a problem, and to those about to start we can tell you in advance that you will run into it. What can we do to avoid it? What can we do to help ensure that a test case does what it is supposed to do? Doesn't that sound like testing?

Definitions from the field of medicine:

  • False Negative: a study indicates disease when there is none.
  • False Positive: a study indicates that everything is normal when the patient is sick.


In our case, a lying result occurs when a test case reports a result that is not correct: either the software is working fine and the test case reports an error (a false negative, the software is healthy), or the software is working badly and the test case reports that everything is correct (a false positive).

In other words, either the negative result is false, or the positive result is false.

Problems they cause:
  • False Negative: the test says there is an error when there isn't one, so we waste time hunting for the cause of the error until we realize that the error was in the test, the environment, the data, etc.
  • False Positive: there are errors and we remain so calm! Confident that those features are covered, that they are being tested, and therefore that they have no errors (watch out!)

Obviously, we want to keep the results from lying to us! We don't like liars. What we expect from an automated test case is that its result be faithful, so that we don't waste time verifying whether the result is correct or not.

There is no choice but to do a proactive analysis: verify the quality of our tests and anticipate error situations. In other words, put some thought into the test cases instead of relying on record and playback alone.

On the one hand, to lower the risk of environment or test data problems, we should have a controlled environment accessed only by the automated tests. That alone will save us headaches and the inability to reproduce problems detected by the tests, because if the data changes constantly, we won't know what is going on.

On the other hand, we should test the test cases themselves! Because who guarantees that they are well programmed? At the end of the day test code is code too, and as such it can have flaws and room for improvement. And who better than us testers to test them?

Testing for false negatives
If the software is healthy and we don't want errors to be reported, we must make sure the test is testing what it intends to test, and that means verifying the initial conditions as well as the final ones. That is, a test case executes a certain set of actions with certain input data in order to verify the output data and the final state, but it is very important (especially when the system under test uses databases) to ensure that the initial state is the one we expect.
So if, for example, we are going to create an instance of a certain entity in the system, the test should check whether that record already exists before starting to execute the actions under test, because if it does, the test will fail (due to a duplicate key or similar) and the problem is actually not in the system but in the test data. Two options: we check whether the record exists and, if so, we simply use it; or we end the test with a result of "inconclusive" (or are pass and fail really the only possible results of a test?).
If we make sure that the various things that can affect the result are in place, just as we expect them, we will greatly reduce the percentage of errors that are not errors.
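A minimal pytest sketch of that idea (the database, entity and helper names here are hypothetical stand-ins): verify the initial state first, and report an "inconclusive" result via a skip instead of a misleading failure:

import pytest

def entity_exists(db, key):
    # Hypothetical helper: look the record up in the system's data store.
    return key in db

def create_entity(db, key):
    if key in db:
        raise ValueError("duplicate key")
    db[key] = {"name": key}

def test_create_customer():
    db = {"ACME": {"name": "ACME"}}   # stand-in for the real initial state
    key = "NEWCO"
    # A dirty initial state is a test data problem, not a product bug,
    # so don't let it show up as a false negative.
    if entity_exists(db, key):
        pytest.skip("inconclusive: test data already present")
    create_entity(db, key)
    assert entity_exists(db, key)     # verify the final state as well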

Testing for false positives
If the software is sick, the test must fail! One possible way to detect false positives is to insert errors into the software and verify that the test case finds the fault. In a way this follows the idea of mutation testing. It is very difficult when you do not work directly with the developer who could introduce the errors into the system for you, and it is very costly to prepare each error, compile it, deploy it, etc., and then verify that the test finds it. Often you can get the same effect by varying the test data, or by playing with different things. For example: if the input is a plain text file, I change something in it to force a failure and verify that the test case finds that fault. In a parameterizable application, the same can be achieved by modifying some parameter.
The idea is always to verify that the test case is able to notice the error, which is why I try to make it fail with these modifications. At the very least we could think it through: what if the software fails here? Would this test case notice, or do I need to add some other validation?
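As a minimal sketch of that "force it to fail" check (a toy consistency check stands in for the real test, and a hand-edited input stands in for the injected error):

def totals_are_consistent(lines):
    # The verification our automated test performs on the input file:
    # the last line must equal the sum of the preceding ones.
    *items, total = [int(line) for line in lines]
    return sum(items) == total

healthy = ["10", "20", "30", "60"]
assert totals_are_consistent(healthy)        # the test passes on good input

mutated = ["10", "20", "30", "99"]           # fault injected into the "file"
assert not totals_are_consistent(mutated), \
    "false positive: the check missed an injected fault"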

Both strategies will give us more robust test cases but, beware, might they then be harder to maintain?

Disclaimer:
Of course we are not going to do this with every test case we automate. It will apply to the most critical ones, to those that are really worth the effort, or perhaps to those that we know keep giving us trouble.

http://blog.abstracta.com.uy/2013/11/probar-un-caso-de-prueba-test-del-test.html

Modularization vs. Integration - Which Is Best?

Clayton Christensen's second book, The Innovator's Solution, produces several important theories in the realm of innovation.  Like his first book, The Innovator's Dilemma, the second book should be required reading for anyone in technology and especially managers of technology.  Among the theories, one stands out as the most important and, I think, most applicable to the world of software development.  Christensen calls this the Law of Conservation of Attractive Profits.  In essence it states that the profits in a system move over time.  When a market segment is under-served, profits are made in vertically integrated products.  When a market becomes over-served, the profits instead flow to more modular solutions.  In this post I will lay out the theory.  In a future post, I'll apply it to software development.
For every market segment--whether PCs or winter jackets--there are certain metrics that matter.  In the world of PCs for a long time it was speed.  People bought one computer over another because it was faster.  In an early market, the products do not sufficiently satisfy the demand for that metric.  Computers were too slow.  In these markets, there is a performance gap.  To make the fastest computer required tightly integrating the hardware, the operating system, and often the application software.  The interfaces between each of the parts had to be flexible so they could evolve quickly.  This meant the parts were proprietary and interdependent.  Companies trying to work on only a single part of the problem found that they couldn't move fast enough.  Look at the world of computer workstations.  When WindowsNT first tried to take on the Sun and HP workstations of the world, it wasn't as fast.  Intel made the processors, Dell made PCs, Microsoft made the operating system.  By comparison, Sun made the Sparc processor, the Solaris operating system, and the Sparcstation.  It was difficult to squeeze as much speed out of an NT system as Sun could get out of its.  Because Sun's workstations provided more performance where the market wanted it, Sun was able to extract higher "rents" (economist-speak for profits).
Eventually every market's primary metric is sufficiently supplied by available solutions.  Products can be said to have a performance surplus.  At this point, customers no longer make purchasing decisions based on the metric--speed--because most solutions provide enough.  Customers are willing to accept higher performance, but they aren't willing to pay for it.  Instead, their purchasing criteria switches to metrics like price, customization and functionality.  Modular architectures trade off performance for the ability to introduce new products more quickly, lower costs, etc.  Products become more commoditized and it is hard to extract high rents for high performance.  However, Christensen says that the profits don't disappear, they only shift to another location in the value chain.  Those companies who are able to best provide the market's new metrics will make the most money.  In the example of the workstations, once computers became fast enough, the modular solutions based around WindowsNT began to make a lot more of the money.  The costs for these were lower, the ability to customize greater, and the support ecosystem (3rd party devices and software) larger.
Looking closely, it becomes apparent that markets are a lot like fractals.  No matter how close the zoom, there is still a complex world.  Each of the modular parts are themselves a market segment with their own primary metrics.  Each one is subject to the modularization/integration cycle.  When a system becomes ripe for modularization, the profits move to the integrated modules which best satisfy the new metrics.  The secret to continuing to gain attractive profits is to notice when this transition is taking place and give up vertical integration at the right moment, choosing instead to integrate across the parts of the value chain least able to satisfy customer demand.
This theory seems to explain Apple's success with the iPod.  The PlaysForSure approach taken by Microsoft was a modular approach.  Vendors like Creative supplied the hardware.  Microsoft supplied the DRM and the player.  Companies like Napster supplied the music.  There are 3 tiers and 2 seams between them that must be standardized.  In an emerging market where the technology on all fronts was not good enough, is it any wonder that this approach was beaten by an integrated strategy?  Of course, hindsight is 20-20 and what is obvious to us now may not have been obvious then.  Still, Apple came at the problem controlling all 3 pieces of the puzzle.  It was able to satisfy the metric--ease of use--much better than the competition.  We all know how that turned out.  The theory indicates that at some point the metric will be satisfied well enough and the advantage of the integrated player will dissipate.  With the move away from DRM'd music and the increased quality of the available hardware, this day may be upon us.  Amazon's MP3 store seems to be gaining traction.  Competitors like the Zune and the Sansa players are making some inroads in the player space.  A dis-integrated model may be emerging.

http://blogs.msdn.com/b/steverowe/archive/2008/02/07/modularization-vs-integration-which-is-best.aspx

Thursday, June 20

Testing A Daily Build

It is becoming accepted in the industry that teams should produce a build on a daily basis.  Every project at Microsoft does this as do most projects elsewhere.  If you happen to be on a project that does not, I suggest you work to get one implemented soon.  The benefits are great.  After a daily build is produced though, what next?  What do you do with it?  Here is my suggested work flow.  This is for a large project.  If yours is smaller, feel free to collapse some of the items.
  1. Deal with build breaks - If anything failed to compile, jump on it right away.  Drag developers out of bed and get it fixed.  Either that or back out the offending checkin (you are using a source control system, aren't you?) and build again.
  2. Build Authentication Tests (BATs) - The first thing to run against a build are the BATs.  These are test cases that merely ensure that the build is not entirely broken.  These should ensure that all the right files were produced, that the setup works and places files correctly, etc.  Some very basic functionality may be tested here as well.  If the BATs fail, have someone look into the cause immediately and get it fixed.  Do not proceed with any more work on this build until BATs pass.
  3. Build Verification Tests (BVTs) - BVTs are a set of tests to verify basic functionality in your build.  These are not comprehensive tests.  Rather, they are limited in scope and time to the things that matter most.  I'd recommend you ensure that these complete within an hour.  If these fail, the build must not be released for further testing.  Developers must be called in and fixes generated quickly.  I've talked about these previously.  It is worth repeating a little here though.  BVT failures are not acceptable.  These are the canaries we take with us into the mine.  If they fall over, it's time to head for the exits.  Only put tests into the BVTs that meet the criteria that you're willing to block a build's release when they fail.
  4. Functional Tests - This is most of the rest of your test collateral.  These tests are initiated upon completion of the BVTs.  The functional tests contain all of your automation and any manual tests you deem worthy of being run on a daily basis.  These can take all day to execute if you want.  It is acceptable for functional tests to fail.  Each failure should generate a bug that is tracked.  The point of the functionals is to get a feel for the true state of the product.  Everything you want to work should be tested here.  When most (all?) of your functional tests are passing, you know you have a product ready to release to the world. 
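The gating between these stages can be sketched in a few lines of Python (the shell scripts are placeholders for whatever your build system actually invokes):

import subprocess
import sys

def run_stage(name, command):
    print(f"running {name} ...")
    return subprocess.run(command, shell=True).returncode == 0

if not run_stage("BATs", "./run-bats.sh"):
    sys.exit("BATs failed: fix the build before doing anything else with it")
if not run_stage("BVTs", "./run-bvts.sh"):
    sys.exit("BVTs failed: do not release this build for further testing")
# Functional failures do not block the build; each should become a tracked bug.
run_stage("functional tests", "./run-functionals.sh")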
That's it for the daily testing regime.  However, that's not all for testing.  There are other tests you probably want before you are ready to release.  These include:
  • Stress Tests - Make sure your code can work under stressful conditions like repeated use, high CPU usage, and low memory.
  • Longhaul Tests - Ensure that your code will continue to work over long periods of time.  The exact amount of time depends on the usage model of your tests.
  • Customer Acceptance Tests - Make sure the customer is happy with the product.  This is usually a series of manual tests that verify that the usability is acceptable.
http://blogs.msdn.com/b/steverowe/archive/2007/10/25/testing-a-daily-build.aspx

Monday, June 17

How can you run a performance test without discovering so many problems?

There is a quote from Scott Barber that I like a lot and that reflects part of what I want to convey here:

Only performance testing at the conclusion of system or functional testing is like ordering a diagnostic blood test after the patient is dead.

It is typical for performance testing to be left to the end of a project (if there is still time and budget), without having investigated anything at all beforehand. This means that all the performance-related problems are found at the end, all at once.

Think about what happens with testing of functional aspects. In general (unfortunately not always), each developer runs unit tests before the system is integrated. Once it is integrated, integration tests are run, and then a new version is handed over to the external test team for system testing.

We believe that applying this approach to performance (and obviously to many other non-functional aspects as well) could get the application to the final test on the definitive platform in a much more polished state.


It does not seem very hard to implement a program that executes a method many times and records the times. While that runs, we could watch the database: how long the SQL statements took, whether they used the indexes correctly, what happens if the table grows, etc. Later, instead of simply executing a method many times, that program could spawn multiple processes that each execute the method many times (always keeping in mind what will happen in production). We could see how much memory it consumes, how the connection pool is handled, whether we observe locking between tables, etc.
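A minimal sketch of such a harness (the method under test is a stand-in): first time the method in a loop in a single process, then run the same measurement from several processes to approximate concurrent load:

import time
from multiprocessing import Pool

def method_under_test():
    sum(i * i for i in range(10_000))   # stand-in for the real method

def measure(iterations=100):
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        method_under_test()
        times.append(time.perf_counter() - start)
    return min(times), sum(times) / len(times), max(times)

if __name__ == "__main__":
    print("single process (min/avg/max):", measure())
    with Pool(processes=4) as pool:     # 4 concurrent "users"
        for result in [pool.apply_async(measure) for _ in range(4)]:
            print("under concurrency:   ", result.get())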

Advantages

  • developers commit to performance; they verify it and do not release modules with serious performance problems
  • developers become familiar with their systems' resource usage, discover which parts of their code cause problems, and that will surely help them avoid such problems in the future
  • less risk of performance problems
  • problems are detected earlier, so they will be cheaper to fix

Disadvantages

  • more time to release the first version
  • the need to learn how to use monitoring tools
Of course this is not something to do for every method you develop, but you could think about which ones will be used most, which perform the most resource-demanding tasks, which must process the most data, etc.


Of course this does not replace performance testing; rather, it prepares us better for it. It is not a substitute because the different features are not tested together, real operations are not tested, and the tests do not run on a server similar to production but on the development one, etc.


http://blog.abstracta.com.uy/2013/12/como-se-podria-ejecutar-una-prueba-de.html

What Is Test Automation?

I talk about it a lot, but I don't know that I've ever defined it.  A reader recently wrote in and asked what exactly this was.  I suppose that means I should give a better explanation of it.
Long ago in a galaxy far, far away, testers were computer-savvy non-programmers.  Their job was to use the product before customers did.  In doing so, they could find the bugs, report them to the developers, and get them fixed.  This was a happy world but it couldn't last.  Eventually companies started shipping things called SDKs which were Software Development Kits full of programming primitives for use by other programmers.  There were no buttons to click.  No input boxes to type the number 1 and the letter Q into.  How was a non-programmer supposed to test these?  Also, as companies shipped larger and larger products and these products built upon previous products, the number of buttons that needed pushing and input boxes that needed input grew and grew.  Trying to test these was like running on a treadmill turned up to 11.  The cost of testing grew as did the amount of time developers had to wait for the testers to give the green light to ship a product.  Something had to be done.
The solution:  Test Automation.
Test automation is simply an automatic way of doing what testers were doing before.  Test automation is a series of programs which call APIs or push buttons and then programmatically determine whether the right action took place.
In a simple form, test automation is just unit tests.  Call an API, make sure you get the right return result or that no exception is thrown.  However, the real world requires much more complex testing than that.  A return result is insufficient to determine true success.  A function saying it succeeded just means it isn't aware that it failed.  That's a good first step, but it is sort of the check engine light not being lit in the car.  If there is an awful knocking sound coming from under the hood, it still isn't a good idea to drive.  Likewise, it is important to use more than just the return value to verify success.  Most functions have a purpose.  That may be to sort a list, populate a database, or display a picture on the screen.  Good test automation will independently verify that this purpose was fulfilled.
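To illustrate the difference in a few lines of Python (the function is a toy stand-in): the first assertion trusts only the return value, while the others independently verify that the function's purpose was fulfilled:

def sort_in_place(items):
    items.sort()
    return True   # reports "success" -- but a buggy version might do the same

data = [3, 1, 2]
assert sort_in_place(data)                            # weak: the check engine light
assert data == [1, 2, 3]                              # strong: verify the outcome
assert all(a <= b for a, b in zip(data, data[1:]))    # or verify the sorted property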
Other advanced forms of test automation include measuring performance, stressing the system by executing functionality under load, and what we call "end to end" testing.  While unit tests and API tests treat methods and modules as discrete pieces and test them in isolation, end to end testing tries to simulate the experience of the user.  This means pressing the buttons in Windows Media Player to cause a DVD to play and then verifying that it is playing.  Sometimes this can be the most challenging part of testing.
Here's an example of something we had to automate.  Think about how you might approach this.  Windows Vista offers per-application volume.  It is possible to turn down the volume on your World of Warcraft game while leaving Windows Media Player playing loud.  To do this, right-click on the speaker icon in the lower-right-hand corner of your screen and select "Open Volume Mixer."  Moving the slider for an application down should cause its volume to attenuate (get quieter).  Testing this manually is easy.  Just play a sound, lower the volume, and listen.  Now try automating it.

http://blogs.msdn.com/b/steverowe/archive/2007/12/19/what-is-test-automation.aspx

Thursday, June 13

State Transition Testing Technique for Testing Complex Applications

In our last article, we saw the 'Cause and effect graph' test case writing technique. Today, let's move on to the next dynamic test case writing method – the state transition technique.
Note – The test design techniques mentioned here may seem difficult, but once you get the hang of them they become very easy to implement and reuse, increasing productivity and test coverage.

What is the state transition testing technique?
The state transition technique is a dynamic testing technique, used when the system is defined in terms of a finite number of states and the transitions between the states are governed by the rules of the system.
In other words, this technique is used when features of a system are represented as states which transform into other states. The transformations are determined by the rules of the software. The pictorial representation can be shown as:
State Transition Testing 1
So here we see that an entity transitions from State 1 to State 2 because of some input condition, which leads to an event, results in an action and finally gives the output.
To explain it with an example:
You visit an ATM and withdraw $1000. You get your cash. Now your balance has run out, and you make exactly the same request to withdraw $1000. This time the ATM refuses to give you the money because of insufficient balance. So here the transition that caused the change in state is the earlier withdrawal.

State Transition Testing Example in Software testing:

In a practical scenario, testers are normally given the state transition diagrams and are required to interpret them. These diagrams are provided either by business analysts or a stakeholder, and we use them to determine our test cases.
Let’s consider the below situation:
Software name – Manage_display_changes
Specifications – The software responds to input requests to change the display mode of a time display device.
The display mode can be set to one of the four values:
  • Two corresponding to displaying either the time or date.
  • The other two when altering either the time or the date.
State Transition Testing 9
The different states are as follows:
Change Mode (CM)
Activation of this shall cause the display mode to move between “display time (T)” and “display date (D)”.
Reset (R)
If the display mode is set to T or D, then a “reset” shall cause the display mode to be set to “alter time (AT)” or “alter date (AD)” modes.
Time Set (TS)
Activation of this shall cause the display mode to return to T from AT.
Date Set (DS)
Activation of this shall cause the display mode to return to D from AD.
State transition diagram is shown as:
State Transition Testing 2
Now, let's interpret it:
Here:
1) Various states are:
  • Display Time(S1),
  • Change Time(S3),
  • Display Date(S2) and
  • Change Date (S4).
2) Various Inputs are:
  • Change Mode(CM),
  • Reset (R),
  • Time Set(TS),
  • Date Set(DS).
3) Various Outputs are:
  • Alter Time(AT),
  • Display Time(T),
  • Display Date(D),
  • Alter Date (AD).
4) Changed States are:
  • Display Time(S1),
  • Change Time (S3),
  • Display Date (S2) and
  • Change Date (S4).
Step 1:
Write down all the start states. For this, take one state at a time and see how many arrows come out of it.
  • For state S1, there are two arrows coming out of it. One arrow goes to state S3 and the other goes to state S2.
  • For state S2 – there are 2 arrows. One goes to state S1 and the other to S4.
  • For state S3 – only 1 arrow comes out of it, going to state S1.
  • For state S4 – only 1 arrow comes out of it, going to state S2.
Let’s put this into our table:
State Transition Testing 3
Since states S1 and S2 each have two arrows coming out, we have written them twice.
Step 2:
For each start state, write down its final state.


  • For state S1 – The final states are S2 and S3
  • For State S2 – The final states are S1 and S4
  • For State S3 – Final state is S1
  • For State S4 – Final State is S2
Put this in the table as output / resultant state.
State Transition Testing 4
Step 3:
For each start state and its corresponding finish state, write down the input and output conditions.
- For state S1 to go to state S2, the input is Change Mode (CM) and the output is Display Date (D), shown below:
State Transition Testing 5
In a similar way, write down the Input conditions and its output for all the states as follows:
State Transition Testing 6
Step 4:
Now add the test case ID for each test, as shown below:
State Transition Testing 7
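The same table can be encoded directly in code. Here is a minimal Python sketch (the dictionary layout is mine, not part of the technique) that captures each arrow of the diagram and emits one test per transition:

# Each arrow in the diagram: (start state, input) -> (output, final state)
transitions = {
    ("S1", "CM"): ("D",  "S2"),   # Display Time -> Display Date
    ("S1", "R"):  ("AT", "S3"),   # Display Time -> Change Time
    ("S2", "CM"): ("T",  "S1"),   # Display Date -> Display Time
    ("S2", "R"):  ("AD", "S4"),   # Display Date -> Change Date
    ("S3", "TS"): ("T",  "S1"),   # Change Time  -> Display Time
    ("S4", "DS"): ("D",  "S2"),   # Change Date  -> Display Date
}

for i, ((start, inp), (out, final)) in enumerate(transitions.items(), start=1):
    print(f"TC{i}: start={start}, input={inp}, expected output={out}, final state={final}")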

Now let's convert these into formal test cases:

State Transition Testing 8
In this way, all the remaining test cases can be derived. I assume the other attributes of the test cases, like preconditions, severity, priority, environment, build, etc., are also included.

Conclusion:

Summarizing the steps once again:
  1. Identify the initial states and their final states based on the lines/arrows coming out of each initial state.
  2. For each initial state, find the input condition and the output result.
  3. Mark each set as a separate test case.
State Transition testing is a unique test approach for testing complex applications, which would increase test execution productivity without compromising on test coverage.

http://www.softwaretestinghelp.com/state-transition-testing-technique-for-testing-complex-applications/

The Comfort of Unit Tests

Working on my class project this summer, I decided to bite the bullet and do full unit testing.  I've done unit testing before, but only in small amounts.  In this case, I'm doing something akin to test driven development.  I'm writing the tests very early and for everything.  Not necessarily first, but in very close proximity to when I write the code.  At first it felt painful.  You can't code as fast.  It sucks having to stop and write tests.  Then I started making changes to the design.  This is when it pays to have unit tests.  It was great to make a change, press the "run tests" button, and look for the green bar to appear.  If it didn't, I could easily go fix the issues and continue on.  I didn't have to worry about breaking random things with my changes.  Instead, if the tests passed, I could be confident that everything was still working.  Unit tests are a little like a security blanket.  Just having them makes you feel comfortable.  In the end, I find myself more willing to make changes and to refactor the code.
One of my partners in this project has written some code that I will end up helping out with.  He has no tests for his area.  It's a lot scarier going in there to make changes.  After doing some work, I will have no peace of mind that I haven't subtly changed things.  This also points out another benefit of unit tests.  It makes life a lot easier for the next person.  It's not too hard to keep all of your own code in your head and know what effects changes are likely to have.  It's very difficult to do the same with someone else's code.  I want the ability to make what looks like an obvious change without having to read every piece of someone else's code first.  Creating unit tests makes life a lot easier for whoever has to maintain the code later.
This brings up a use for unit tests that is mentioned in the literature but is not often considered in the real world.  When you are asked to change "scary" code that doesn't have any unit tests, consider spending the first part of your schedule writing those tests.  Not tests for the new code, but tests for the code as it originally exists.  Once you have a comprehensive suite of tests, you can make changes with peace of mind.
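This practice is sometimes called characterization testing. A minimal sketch (the legacy function is a made-up stand-in): first pin down the code's current behavior, then refactor with the green bar as your safety net:

def legacy_discount(price, qty):
    # "Scary" inherited code that we now need to change.
    if qty >= 10:
        return price * qty * 0.9
    return price * qty

# Characterization tests: record what the code does TODAY, before refactoring.
def test_no_discount_below_ten():
    assert legacy_discount(5.0, 9) == 45.0

def test_discount_kicks_in_at_ten():
    assert legacy_discount(5.0, 10) == 45.0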

http://blogs.msdn.com/b/steverowe/archive/2007/08/01/the-comfort-of-unit-tests.aspx

Dreaming In Code

I finally finished Dreaming in Code by Scott Rosenberg.  It was initially hailed as the Soul of a New Machine for a new generation.  As such, it fails.  Its depiction of the process and the characters involved is just not that compelling.  It's not poorly written, it just isn't outstanding.  It is, however, an interesting look into the realm of software process theory.
Scott was given inside access to the team creating Chandler.  Chandler is Mitch Kapor's project to create a new e-mail/calendar application.  Something akin to Outlook but much more flexible.  Scott tells us about the formative stages, the constant changes in direction, the endless meetings.  Some interesting characters like Andy Hertzfeld were part of the team.  As a description of a software project, it is palatable, but not exciting. 
We're given a view of what can only be described as a failure.  Chandler may become a success eventually, but it's taken over 4 years and is still not ready for prime-time.  It is this failure that provides the interesting part of the book.  Many software projects run aground, but most do so behind closed doors.  It is rare to have a chance to observe a failure and analyze what happened.  Perhaps this opportunity will give us some insights into why things failed which can be applied to avoid failures in our own projects.  I've posted elsewhere with my ideas on this.
Scott seems to have decided that a description of a failed software process was only moderately interesting and gives us an overview of much of the modern theory of software project management.  He references the writings of Fred Brooks, Alan Kay, Joel Spolsky, and many others.  These discussions are interspersed throughout the text and make up the bulk of the last third of the book.  In my opinion, the book is worth reading just for this content.  It's a great introduction to the subject and would make a good jumping-off point for more detailed research.
Overall, I recommend reading this book if software theory is something you find interesting.  If you are looking for a history book telling the story of a small team creating something amazing, stick to the original classic.

http://blogs.msdn.com/b/steverowe/archive/2007/07/17/dreaming-in-code.aspx

Monday, June 10

Tips to design test data before executing your test cases

I have mentioned the importance of proper test data in many of my previous articles. The tester should check and update the test data before executing any test case. In this article, I will provide tips on how to prepare the test environment so that no important test case is missed due to improper test data or an incomplete test environment setup.

What do I mean by test data?

If you are writing a test case then you need input data for any kind of test. The tester may provide this input data at the time of executing the test cases, or the application may pick the required input data from predefined data locations. The test data may be any kind of input to the application, any kind of file that is loaded by the application, or entries read from database tables. It may be in any format, like XML test data, system test data, SQL test data or stress test data.
Preparing proper test data is part of the test setup. Generally testers call it testbed preparation. In the testbed, all software and hardware requirements are set using the predefined data values.
If you don't have a systematic approach for building test data while writing and executing test cases then there is a chance of missing some important test cases. The tester can't justify any bug by saying that test data was not available or was incomplete. It's every tester's responsibility to create his/her own test data according to testing needs. Don't rely on the test data created by another tester or on standard production test data, which might not have been updated for months! Always create a fresh set of your own test data according to your test needs.
Sometimes it's not possible to create a completely new set of test data for each and every build. In such cases you can use standard production data, but remember to add/insert your own data sets into this available database. One good way to design test data is to use the existing sample test data or testbed and append your new test case data each time you get the same module for testing. This way you can build a comprehensive data set.

How to keep your data intact for any test environment?

Many times more than one tester is responsible for testing some builds. In this case, more than one tester will have access to common test data, and each tester will try to manipulate that common data according to his/her own needs. The best way to keep your valuable input data collection intact is to keep personal copies of the same data. It may be of any format, like inputs to be provided to the application, or input files such as Word files, Excel files or other photo files.
Check that your data is not corrupted:
Filing a bug without proper troubleshooting is a bad practice. Before executing any test case on existing data, make sure that the data is not corrupted and that the application can read the data source.

How to prepare data considering performance test cases?

Performance tests require a very large data set. Particularly if the application fetches or updates data from DB tables, large data volumes play an important role when testing such an application for performance. Sometimes creating data manually will not detect some subtle bugs that may only be caught by actual data created by the application under test. If you want real-time data, which is impossible to create manually, ask your manager to make it available from the live environment.
I generally ask my manager whether he can make live environment data available for testing. This data will be useful to ensure smooth functioning of the application for all valid inputs.
Take the example of my search engine project's 'statistics testing'. To check the history of user searches and clicks on advertiser campaigns, data spanning several years had to be processed, which was practically impossible to create manually for dates spread over many years. So there is no option other than using a live server data backup for testing. (But first make sure your client allows you to use this data.)

What is the ideal test data?

Test data can be said to be ideal if it identifies all the application errors with the minimum size of data set. Try to prepare test data that will exercise all application functionality, without exceeding the cost and time constraints for preparing the test data and running the tests.

How to prepare test data that will ensure complete test coverage?

Design your test data considering the following categories:
Test data set examples:
1) No data: Run your test cases on blank or default data. Check that proper error messages are generated.
2) Valid data set: Create it to check that the application functions as per requirements and that valid input data is properly saved in the database or files.
3) Invalid data set: Prepare an invalid data set to check application behavior for negative values and alphanumeric string inputs.
4) Illegal data format: Make one data set of illegal data formats. The system should not accept data in an invalid or illegal format. Also check that proper error messages are generated.
5) Boundary condition data set: A data set containing out-of-range data. Identify application boundary cases and prepare a data set that will cover both lower and upper boundary conditions.
6) Data set for performance, load and stress testing: This data set should be large in volume.
In this way, creating separate data sets for each test condition will ensure complete test coverage.
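As a sketch, those categories can be kept together in one structure so that every test condition gets its own data set (the "age" field and its valid range of 18-99 are purely illustrative):

test_data_sets = {
    "no_data":        ["", None],
    "valid":          [18, 45, 99],
    "invalid":        [-1, "abc", "a1b2"],           # negative, alphanumeric
    "illegal_format": ["18.0.1", "1,8"],             # formats to be rejected
    "boundary":       [17, 18, 99, 100],             # on and just outside the limits
    "performance":    list(range(18, 100)) * 1000,   # large-volume set
}

for category, values in test_data_sets.items():
    print(f"{category}: {len(values)} value(s), e.g. {values[:3]}")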
Conclusion:
Preparing proper test data is a core part of the "project test environment setup". The tester cannot pass off responsibility for a bug by saying that complete data was not available for testing. The tester should create his/her own test data in addition to the existing standard production data. Your test data set should be ideal in terms of cost and time. Use the tips provided in this article to categorize test data to ensure complete coverage of functional test cases.
Be creative; use your own skill and judgment to create different data sets instead of relying on standard production data while testing.

http://www.softwaretestinghelp.com/tips-to-design-test-data-before-executing-your-test-cases/

How to write effective Test cases, procedures and definitions

Writing effective test cases is a skill, and it can be acquired through experience and an in-depth study of the application on which the test cases are being written.
Here I will share some tips on how to write test cases, test case procedures and some basic test case definitions.
What is a test case?
“A test case has components that describes an input, action or event and an expected response, to determine if a feature of an application is working correctly.” Definition by Glossary
Each test case falls into one of several levels, in order to avoid duplication of effort.
Level 1: In this level you will write the basic test cases from the available specification and user documentation.
Level 2: This is the practical stage, in which writing test cases depends on the actual functional and system flow of the application.
Level 3: This is the stage in which you group some test cases and write a test procedure. A test procedure is nothing but a group of small test cases, a maximum of 10.
Level 4: Automation of the project. This minimizes human interaction with the system, so QA can focus on testing the newly updated functionality rather than staying busy with regression testing.
So you can observe a systematic growth from no testable items to an automation suite.
Why do we write test cases?
The basic objective of writing test cases is to validate the test coverage of the application. If you are working in a CMMi company then you will strictly follow test case standards. So writing test cases brings a degree of standardization and minimizes the ad-hoc approach to testing.
How to write test cases?
Here is a simple test case format
Fields in test cases:
Test case id:
Unit to test:
What to be verified?
Assumptions:
Test data:
Variables and their values
Steps to be executed:
Expected result:
Actual result:
Pass/Fail:
Comments:

So here is a basic format for a test case statement:
Verify
Using
[tool name, tag name, dialog, etc]
With [conditions]
To [what is returned, shown, demonstrated]
Verify: Used as the first word of the test case statement.
Using: To identify what is being tested. You can use 'entering' or 'selecting' here instead of 'using', depending on the situation.
For example: Verify the login dialog using a valid username and password, with the 'Remember me' option checked, to confirm that the user is taken to the home page.

For any application, you will basically cover all types of test cases, including functional, negative and boundary value test cases.
Keep in mind while writing test cases that all your test cases should be simple and easy to understand. Don't write explanations like essays; be to the point.
Try writing simple test cases as in the above format. Generally I use Excel sheets to write the basic test cases. Use a tool like 'Test Director' when you are going to automate those test cases.

http://www.softwaretestinghelp.com/how-to-write-effective-test-cases-test-cases-procedures-and-definitions/

Thursday, June 6

15 Best Test Management Tools for Software Testers

Test Management encompasses anything and everything that we do as testers.  Our day-to-day activities include:
  1. Creating and maintaining release/project cycle/component information
  2. Creating and maintaining the test artifacts specific to each release/cycle that we have- requirements, test cases, etc.
  3. Establishing traceability and coverage between the test assets
  4. Test execution support – test suite creation, test execution status capture, etc.
  5. Metric collection/report-graph generation for analysis
  6. Bug tracking/defect management
The above are broadly some of the tasks that make up what we call the test management process. This process is critical, detail-oriented and instrumental in making sure that the entire testing effort is successful.
Well, the good news is that help is available. In this article we will briefly introduce the most commonly available tools for the test management process. Here is a comparison of the top test management software.

1) QMetry:

QMetry logo
QMetry is a total test management tool that you can use to create requirements, test cases (test suites) that can be run on multiple platforms, and defects. It integrates seamlessly with many leading bug-tracking and automation environments, making it a good candidate for use in most situations. It is a commercial product, with a 30-day free trial available. Visit the site for more information at: http://www.qmetry.com/

2) TestRail:

Testrail logo
TestRail is a centralized test case management tool – you can use it to create test cases and test suites, track execution and report metrics. Additionally, it integrates with many issue tracking tools, which allows requirements from external systems to be linked to test cases in TestRail; bugs can also be created in the external systems and linked to the corresponding test cases. It comes with an HTTP-based API to integrate your automated test results. One of the most common integrations is with Gemini, an incident/ticket management system (it supports agile too). It is a commercial product with a free trial available at: http://www.gurock.com/testrail/

3) JIRA:

jira logo
JIRA is a tool that makes an appearance anytime there is a discussion on any management process – for all the right reasons. JIRA has 2 add-ons that support the test management process.
  a) Zephyr: All the aspects that you would expect of a typical tool of this type are supported. You can create tests/test suites/test cycles/bugs/reports and so on. You could have an additional add-on, ZAPI, for automation integration. Along with the initial JIRA license you would have to pay for Zephyr to use it ($10 for 10 users a month). There is a free trial available too. Check out information about it at: https://marketplace.atlassian.com/plugins/com.thed.zephyr.je
  b) Go2Group SynapseRT: This tool has all the test management features, but the primary focus is on requirement-based testing. It can be used for projects where it makes more sense to track progress in terms of the completion and/or success/failure of a certain requirement as opposed to test cases. Traceability is of higher priority with this add-on. Along with the initial JIRA license you would have to pay for this add-on too ($10 for 10 users a month). There is a free trial available at: https://marketplace.atlassian.com/plugins/com.go2group.jira.plugin.synapse

4) qTest:

qTest logo
Developed by QASymphony, qTest is one of the cloud-based test management tools, with all the typical key features. With the help of the qTest Connector, it can integrate with JIRA for an entire end-to-end QA solution – but that is not all; it also integrates with other tools like Bugzilla, FogBugz, Rally etc. It is not open source but is very affordable. Check out information and pricing at: http://www.qasymphony.com/qtest.html

5) TestLodge:

TestLodge logo
This is a comprehensive test case management tool that has 4 key aspects – test plans, requirements, test suites/cases and test runs. So, as you can see, it has everything it needs to manage test cases for you. For all other operations, it integrates with the many mainstream incident/issue management tools to provide a comprehensive solution. It is a commercial product; for a free trial visit: http://www.testlodge.com/

6) HP ALM/Quality center:

HP ALM logo
HP QC has been one of the most used test management tools for many years. It has all the features necessary and, in many ways, it is the standard against which other tools are measured. Even though it is one of the high-end tools economically, it remains very popular. Check out http://www8.hp.com/in/en/software-solutions/software.html?compURI=1172141 for a trial and other information.

7) Zephyr:

Zephyr logo
This is the same Zephyr that we discussed earlier as an add-on to JIRA. The reason it deserves an independent discussion is that it can also be used as a standalone product. For a free trial and more info: http://www.getzephyr.com/

8) Testuff:

Testuff logo
Testuff is a SaaS test management tool with many cool features. The typical test case management features are a given. Besides that, it has a cool video upload capability for defects. It integrates with a huge list of bug trackers, including Bugzilla, JIRA, YouTrack and Mantis, among others. It has an API that supports automation tools like QTP, Rational Robot, Selenium, TestComplete etc. Best of all, it is very affordable. Check out the features and pricing information at: http://www.testuff.com/

9) Test Collab:

Test Collab logo
A web-based test case management tool that states its speed as one of its key features, due to its 'Ajax'ified interface (in the exact words of the software makers). It is simple to use and integrates with all major bug/issue/incident trackers. It is customizable and has a good reporting facility. It is a commercial product, and information is available at: http://testcollab.com/

10) Gemini:

countersoft gemini
One of the key components of this tool is its support for 'Testing & QA' along with other aspects like project planning, issue tracking etc. Using this tool you can create test plans, test cases, test runs, traceability, test run reports etc. There are also various integrations and extensions available. It is a commercial product with a free starter pack available at: http://www.countersoft.com/solutions/testing/

11) PractiTest:

practiTest logo
It is end-to-end QA and test management software. You can organize your requirements, create tests, run tests, track bugs etc. using this tool. It integrates very well with three of the leading incident management tools: JIRA, Bugzilla and Redmine. It is not open source but is quite affordable. For more feature and pricing information, check: http://www.practitest.com/

12) TestLink:

TestLink logo
This is one of the very few open source test management tools available in the market. It is a web-based tool with typical features like requirement management, test case creation and maintenance, test runs, bug tracking, reports, integration with common issue trackers etc. To download, visit: http://sourceforge.net/projects/testlink/

13) QAComplete:

smartbear QAComplete logo
QAComplete is one of the most powerful test management tools that we have. It suits agile/traditional and manual/automation projects excellently. You can use it in integration with QTP and TestComplete. For automation projects, you can schedule the test runs and run them remotely on any registered hosts. There is also a detailed release management feature that provides for better analysis. It works with major bug trackers and with source control tools – Subversion, Perforce and CVS. Given all the features, it is a little pricey. There is a free trial available, though. Check out all its features at: http://smartbear.com/products/qa-tools/test-management/

14) Silk Central:

Silk Central logo
This is a test management product by Borland. Once you have Silk Central, there is nothing left to ask for. It is robustness personified, though it is pricey. All the features for coverage, traceability, reporting, test creation and running – it has got them. It integrates with many source control and issue tracking systems. There are additional plug-ins to extend its capabilities to automation testing using QTP, WinRunner etc. It comes with a video capture feature and supports SAP testing. This product is really cool. Try it at: http://www.borland.com/products/silkcentral/

15) IBM Rational Quality Manager:

IBM Rational Quality Manager logo
A test management product that has all the typical features – test planning, test design, test execution, tracking and reporting. It integrates with many of the Rational products for automation, source control and bug tracking activities. It is a commercial product. Check out its features, pricing and other information at: http://www-03.ibm.com/software/products/en/ratiqualmana

Additional tools:

Below are some more tools that are worth mentioning:
16) VersionOne: A commercial product primarily catering to agile projects, it has a test management module, along with planning, reporting and other modules, with all the typical features. Check out: http://www.versionone.com/product/agile-test-management/
17) TestPad: This tool's motto is "Spend more time actually testing". Its primary concept is checklists. In your test plan you can have a series of checklists (tests) that can be as detailed or as short as needed. It is perfect for exploratory testing. It is commercial and you can try it at: https://ontestpad.com/
18) Aptest: A web-based, commercial product that has all the typical features you would expect, along with the typical integrations with issue trackers. http://www.aptest.com/atm2/
19) SpiraTest: This product is a complete QA solution. Instead of keeping your requirements, tests and defects in separate systems, this tool has it all in one place. It integrates with unit and automation testing frameworks, among other things. It is commercial, though not very expensive. https://www.inflectra.com/SpiraTest/Default.aspx
20) Meliora TestLab: This is a simple-to-use product for requirement management, test creation and running, defect management, workflow optimization, and integration with JIRA etc. Commercial again; check it out at: https://www.melioratestlab.com/
21) SmarteQM: A complete life cycle management tool that provides complete end-to-end test process support and integrates with the other SmarteSoft functional test tools. It is web based and commercial. http://www.smartesoft.com/products_smarteQM.php
22) Test Run: A web-based, commercial test management tool that is easy and simple to use. It has all that you need to create test plans, execute them successfully and report on them. It integrates with JIRA and LightHouse. http://runtestrun.com/
23) Test Wave: A test management tool that needs no installation; it is web based, simple and commercial (affordable). It provides a facility to import your existing requirements/test assets from Excel sheets. It also comes with an inbuilt defect tracker. http://www.testwave.co.uk/
24) Enterprise Tester: This test management tool supports both agile and traditional projects. It integrates with JIRA and also works great for automation testing with QTP, Selenium, RFT etc. This is a really cool commercial tool. Check it out at: http://enterprisetester.com/
25) TestLog: A very comprehensive end-to-end test management tool that is easy to install and configure thanks to its XML database. It allows documentation of both automated and manual test cases. It also comes with a web interface for remote access. This product is commercial too. http://www.testlog.com/
26) QaTraq: An open source test process control tool that can be used to create test cases, run them, record results etc. http://sourceforge.net/projects/qatraq/

Points to note:

Well, from the above list two things are apparent:
  1. There are not many open source tools of this type available, although most of the commercial ones are still affordable.
  2. Most of the tools just provide test case management and leave the bug tracking to be integrated via an external tool. (A smart choice, if you ask me. Otherwise, all of us would be stuck reinventing the wheel.)

Conclusion:

Even though we put up a big list, we realize that we could not have included every tool available. Share your experiences with the test management tools – both the ones that made it to the list and those which did not – with us below.

http://www.softwaretestinghelp.com/15-best-test-management-tools-for-software-testers/

Monday, June 3

Types of Software Testing

Software Testing Types:
Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.
White box testing – This testing is based on knowledge of the internal logic of an application's code. Also known as glass box testing. The internal workings of the software and its code must be known for this type of testing. Tests are based on coverage of code statements, branches, paths and conditions.
Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.
Incremental integration testing – A bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per requirements. Black-box type testing geared to the functional requirements of an application.
System testing – Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system.
End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing – Testing to determine if a new software version is performing well enough to accept it for a major testing effort. If the application crashes during initial use, then the system is not stable enough for further testing, and the build or application is sent back to be fixed.
Regression testing – Testing the application as a whole after modifications in any module or functionality. It is difficult to cover the entire system in regression testing, so automation tools are typically used for this type of testing.
Acceptance testing – Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.
Load testing – It is performance testing to check system behavior under load. Testing an application under heavy loads, such as testing a website under a range of loads to determine at what point the system's response time degrades or fails.
Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as putting in numbers beyond storage capacity, running complex database queries, or providing continuous input to the system or database.
Performance testing – A term often used interchangeably with 'stress' and 'load' testing. It checks whether the system meets performance requirements. Different performance and load tools are used to do this.
Usability testing – A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user gets stuck at any point? Basically, system navigation is checked in this testing.

Install/uninstall testing – Testing of full, partial, or upgrade install/uninstall processes on different operating systems, under different hardware and software environments.
Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing – Can the system be penetrated by any hacking approach? Testing how well the system protects against unauthorized internal or external access. It is checked whether the system and database are safe from external attacks.
Compatibility testing – Testing how well the software performs in a particular hardware/software/operating system/network environment, and in different combinations of the above.
Comparison testing – Comparison of the product's strengths and weaknesses with previous versions or other similar products.
Alpha testing – An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development. Minor design changes may still be made as a result of such testing.
Beta testing – Testing typically done by end users or others. It is the final testing before releasing the application for commercial purposes.

http://www.softwaretestinghelp.com/types-of-software-testing/

How To Automate UI Testing

Most software has a user interface.  That means that most test teams spend time testing that interface.  The easiest way to do this is to just click on all the buttons and make sure the right thing happens.  This works, but it doesn't scale.  Eventually you look at the hordes of testers clicking buttons and decide that there has to be a better way.  Enter UI automation.  There are plenty of tools and toolkits to automate UI.  These range from tools like VisualTest to toolkits like Microsoft UI Automation for WPF.  These tools make automating the interface easy...for a while.  After some time you realize that you are spending all of your time updating the test scripts to reflect UI changes and very little time actually testing the product.
The difficulty is that UI is easy to change and the changes often come very late in the product.  This means that the UI is constantly changing.  For every change of the UI, the tests have to be updated.  The truth is that poking the UI buttons--either manually or automatically--doesn't really work.  Both approaches leave you running on a treadmill, doing the same thing over and over and over.  Is there a better way?  I believe that there is.
The better way is to not test the UI.  At least, don't test it until very late.  Instead, test the functionality behind the UI.  In truth, we usually don't test the UI to make sure the button works, we test the UI to make sure the functionality represented by the button works.  For instance, in a word processor, we don't click the bold button to make sure that the button works so much as to check that text is actually turned bold when such an action is required.  It would be better if we could test the ability to make text bold without having to worry about navigating through the UI buttons to make it happen.  The MakeTextBold function is not likely to change its interface very often whereas the button is prone to changing shape, moving around, becoming a menu item, etc.  If we were testing MakeTextBold instead of the bold button, we wouldn't have to patch our test code every time the designers got a new idea.
This is all well and good but it doesn't work for most software products.  The trouble is the way that they are written.  UI toolkits encourage developers to write the code invoked by the button click event right in the button click handler.  The code might look something like this:
void BoldButtonHandler(Context context)
{
    // All of the work happens right here, inside the click handler itself.
    Region selectedText = Framework.GetSelectedRegion();
    selectedText.Font = selectedText.Font.Bold();
}
In reality it would be a lot more complicated, but I think you can get the general idea from this.  The difficulty with this code is that the only way to invoke it is to invoke a button click.  That means finding the right button in the UI and clicking on it.
We can, however, restructure our code.  Instead of doing the work implied by the button click in the button handler, we can make the button handler a very thin layer that just calls another function to do the work.  The advantage is that we decouple the button clicking from the work it implies.  We can then test the work unit without worrying about where the buttons are in the UI.  The new code might look like this:
void BoldButtonHandler(Context context)
{
    // The handler is now a thin layer; the real work lives elsewhere.
    MakeTextBold();
}

void MakeTextBold()
{
    // The same work as before, but now callable without any UI at all.
    Region selectedText = Framework.GetSelectedRegion();
    selectedText.Font = selectedText.Font.Bold();
}
This requires either a lot of rewriting or, better still, starting from the beginning of the project with the right patterns.  One advantage is that testing your application becomes resilient to changes in the UI.  Another advantage is that you can now write unit tests for the UI-based functionality.  Should you desire to expose your application's functionality for scripted automation, this also becomes a lot easier.  It's definitely worth the extra effort to go this route.
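As a sketch of that payoff, here is what a unit test against the decoupled work unit might look like. This assumes NUnit and keeps the article's hypothetical Framework and Region types; the SelectRegion helper, the Editor class that holds MakeTextBold, and the Font.IsBold property are likewise invented for illustration:

using NUnit.Framework;

[TestFixture]
public class BoldFunctionalityTests
{
    [Test]
    public void MakeTextBold_MakesSelectedTextBold()
    {
        // Arrange: select some text (SelectRegion is an assumed helper).
        Framework.SelectRegion(0, 5);

        // Act: call the work unit directly -- no button lookup, no clicking.
        Editor.MakeTextBold();

        // Assert: the selection's font is now bold (IsBold is assumed here).
        Region selectedText = Framework.GetSelectedRegion();
        Assert.IsTrue(selectedText.Font.IsBold);
    }
}

No matter how the bold button moves, changes shape, or becomes a menu item, a test like this keeps running unchanged.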
What about testing the actual UI?  Surely we still need to make sure that the bold button actually makes text bold.  We do.  But we can wait until really late in the process, after the UI is stabilized, to do this.  We know all of the functionality works correctly, so we can wait to make sure the buttons work.  This split approach to testing the UI also provides the benefit that when something breaks in the UI test, you know it is in the UI code and not in the underlying functional code.

http://blogs.msdn.com/b/steverowe/archive/2007/07/27/how-to-automate-ui-testing.aspx

Object-Oriented Programming


Object-oriented programming provides a means of improving the reusability of software components.

Fundamental Concepts

Classes and objects
Computation in an object-oriented system involves the manipulation of objects of some class. A class is really a means of packaging an abstract data type (ADT). As a way of implementing an ADT, a class makes it possible to encapsulate, as a single entity, the elements and access routines of an ADT's implementation. A class contains all the information needed to build individual instances, instances that are called objects. A class is simply the specification for creating objects; an object, by contrast, is the actual entity that is manipulated in the program.

Each object contains sets of data, called member variables or data members, that determine the individual state of that object. In addition, a class can store information shared by all instances of the class in class variables. Member variables and class variables are packaged so that they can only be accessed through the routines provided by the class, which are called member functions.
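To make these terms concrete, here is a minimal C# sketch; the BankAccount class and its members are invented for illustration. balance is a member variable, accountCount is a class variable shared by all instances, and the methods are the member functions through which the data is accessed:

class BankAccount
{
    // Member variable (data member): holds the individual state of each object.
    private decimal balance;

    // Class variable: a single value shared by all instances of the class.
    private static int accountCount = 0;

    public BankAccount() { accountCount++; }

    // Member functions: the only routines through which the data can be accessed.
    public void Deposit(decimal amount) { balance += amount; }
    public decimal GetBalance() { return balance; }
    public static int GetAccountCount() { return accountCount; }
}

class ClassDemo
{
    static void Main()
    {
        // The class is the specification; each object is a separate instance of it.
        BankAccount first = new BankAccount();
        BankAccount second = new BankAccount();
        first.Deposit(100m);
        System.Console.WriteLine(first.GetBalance());            // 100
        System.Console.WriteLine(BankAccount.GetAccountCount()); // 2
    }
}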
Inheritance
The ability to structure a system allows it to be decomposed into components. Building on this decomposition, inheritance is the means by which the objects of a class can access member variables and functions contained in a previously defined class. This makes it possible to create a new class that is an extension or specialization of an existing class. The new class, called the derived class, is said to derive from the base class. Object-oriented languages should also support multiple inheritance, where a class can derive from several classes. The derived class can add new functions to those of the base class, or it can redefine them; in the latter case, the derived class is said to override the member function with the same name in the base class.
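As a small illustrative sketch (the Document and Invoice classes are invented here), a derived class in C# can both add a new member function and override one inherited from its base class:

class Document
{
    public void Save() { System.Console.WriteLine("Saving document"); }

    // Marked virtual so that derived classes may redefine it.
    public virtual void Print() { System.Console.WriteLine("Plain print"); }
}

class Invoice : Document   // Invoice derives from the base class Document
{
    // A new function added by the derived class.
    public void AddLineItem(string item) { System.Console.WriteLine("Added " + item); }

    // Overrides the member function with the same name in the base class.
    public override void Print() { System.Console.WriteLine("Invoice layout print"); }
}

An Invoice object inherits Save() unchanged from Document, gains AddLineItem(), and replaces Print() with its own definition.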

Message Passing
In object orientation, the computation of a system evolves through messages. The objects of a system manipulate other objects by sending them messages requesting that they perform specific actions. These messages invoke the appropriate member functions of the objects' classes. If a desired member function is not found in the object's immediate class, the member functions of that object's base class are searched, and so on.
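A brief C# sketch of this lookup (the classes are invented for illustration): calling a method is sending a message, and a message not handled by the object's immediate class is resolved against its base class:

class Animal
{
    // Defined only on the base class.
    public void Breathe() { System.Console.WriteLine("Breathing"); }
}

class Dog : Animal
{
    public void Bark() { System.Console.WriteLine("Woof"); }
}

class MessageDemo
{
    static void Main()
    {
        Dog dog = new Dog();
        dog.Bark();    // message handled by Dog's own member function
        dog.Breathe(); // not found on Dog, so it is resolved on the base class Animal
    }
}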
Dynamic Binding and Polymorphism
If the system decides at compile time which implementation of an operation it will use, it performs static binding. If it decides at run time, then it is dynamic binding.

Polymorphism refers to the possibility that a single message may refer, at run time, to objects of different classes. Typically, a function is declared polymorphic in a base class. This function is then redefined in classes derived from that base class, so there are functions in the derived classes with the same name as the one in the base class. If an object of the base class is declared in a program, the original definition of the function found in the base class is invoked when the function is called. However, if an object of a derived class is later assigned to the base-class object, then the derived class's definition of the function is invoked when the same function is called.
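A minimal C# sketch of both ideas (Shape and Circle are invented for illustration): the base class declares Area as virtual (polymorphic), the derived class overrides it, and dynamic binding selects the derived implementation at run time through a base-class reference:

using System;

class Shape
{
    // Declared virtual in the base class so derived classes can redefine it.
    public virtual double Area() { return 0.0; }
}

class Circle : Shape
{
    private double radius;
    public Circle(double radius) { this.radius = radius; }

    // Same name as in the base class; this definition wins for Circle objects.
    public override double Area() { return Math.PI * radius * radius; }
}

class PolymorphismDemo
{
    static void Main()
    {
        // An object of a derived class assigned to a base-class reference:
        // dynamic binding invokes Circle's Area() at run time.
        Shape shape = new Circle(2.0);
        Console.WriteLine(shape.Area()); // prints about 12.566, not 0
    }
}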


Structure of an Object:

An object can be thought of as a kind of capsule divided into three parts:

1 - Relationships
2 - Properties
3 - Methods

Each of these components plays a completely independent role:

Relationships allow the object to fit into the organization and are essentially made up of pointers to other objects.

Properties distinguish a given object from the others that belong to the same organization, and they have values that depend on the property in question. An object's properties can be inherited by its descendants in the organization.

Methods are the operations that can be performed on the object. They are normally embodied as programs (code) that the object is able to execute, and the object also makes them available to its descendants through inheritance.


Encapsulation and information hiding:

As we have seen, each object is a complex structure whose interior holds data and programs, all related to one another, as if they were enclosed together in a capsule. This property, encapsulation, is one of the fundamental characteristics of OOP.

Objects are inaccessible: they prevent other objects, users, and even programmers from knowing how the information is laid out or what information is available. This property of objects is called information hiding.

This does not mean, however, that it is impossible to learn what is needed about an object and what it contains. If that were the case, little could be done with it. Rather, requests for information from an object must be made through messages addressed to it, carrying the order to perform the relevant operation. The response to these orders is the requested information, provided that the object considers the sender of the message authorized to obtain it.
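As a loose C# sketch of these two properties (the Thermostat class is invented for illustration), the state is hidden, information is released only in response to a message, and the object itself decides whether a request is honored:

class Thermostat
{
    // Hidden state: no other object can see how this information is stored.
    private double temperatureCelsius = 20.0;

    // Information is released only in response to a message (a method call).
    public double GetTemperature() { return temperatureCelsius; }

    public void SetTemperature(double value)
    {
        // The object decides whether the request is acceptable before acting on it.
        if (value >= 5.0 && value <= 30.0)
            temperatureCelsius = value;
    }
}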



Message example: if the duck object wants to destroy the computer, it must send a message to the hammer object.

The fact that each object is a capsule makes it very easy for a given object to be transported to another point in the organization, or even to a totally different organization that needs it.
If the object has been well built, its methods will keep working in the new environment without problems. This quality makes OOP very well suited to the reuse of programs.


Organization of Objects:

In principle, objects always form a hierarchical organization, in the sense that certain objects are, in some way, superior to others.
There are several types of hierarchies: they are simple when their structure can be represented as a "tree". In other cases they can be more complex.

In any case, whether the structure is simple or complex, three levels of objects can be distinguished within it.

- The root of the hierarchy. This is a unique, special object. It is characterized by sitting at the highest level of the structure, and it usually receives a very generic name that indicates its special category, such as mother object, Root, or Entity.

- Intermediate objects. These descend directly from the root and in turn have descendants of their own. They represent sets or classes of objects, which can be very general or very specialized depending on the application. They normally receive generic names that denote the set of objects they represent, for example Window, Account, File. Collectively they are called classes or types; when they descend from another class or subclass, they are subclasses.

- Terminal objects. These are all the objects that descend from a class or subclass and have no descendants of their own. They are usually called particular cases, instances, or items, because they represent the elements of the set represented by the class or subclass to which they belong.

Source: http://candyluna.galeon.com/aficiones836769.html