WEB TESTING
While testing a web application, you need to consider the following areas:
• Functionality Testing
• Performance Testing
• Usability Testing
• Server Side Interface
• Client Side Compatibility
• Security
Functionality:
When testing the functionality of a web site, the following should be tested:
• Links
i. Internal Links
ii. External Links
iii. Mail Links
iv. Broken Links (a minimal automated link check is sketched after this list)
• Forms
i. Field validation
ii. Error message for wrong input
iii. Optional and Mandatory fields
• Database
* Testing will be done on database integrity.
• Cookies
* Testing will be done on the client side, covering cookies stored in the browser's temporary Internet files.
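As an illustration of the link checks above, here is a minimal sketch in Python. It assumes the third-party requests and beautifulsoup4 packages are installed; the starting URL is a hypothetical placeholder, and this is a rough aid for a tester, not a replacement for a dedicated link-checking tool.

    # Minimal broken-link check sketch (assumes: pip install requests beautifulsoup4).
    # START_URL is a placeholder; point it at the page under test.
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    START_URL = "http://www.example.com/"   # hypothetical page under test

    page = requests.get(START_URL, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")

    for anchor in soup.find_all("a", href=True):
        href = anchor["href"]
        if href.startswith("mailto:"):
            print("mail link (check manually):", href)
            continue
        url = urljoin(START_URL, href)       # resolves relative (internal) links
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                print("BROKEN:", resp.status_code, url)
        except requests.RequestException as exc:
            print("BROKEN:", url, exc)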
Performance:
Performance testing can be applied to understand the web site’s
scalability, or to benchmark the performance in the environment of third
party products such as servers and middleware for potential purchase.
• Connection Speed:
Tested over various network connections such as dial-up, ISDN, etc.
• Load (a minimal concurrent-load sketch follows this list):
i. What is the number of users accessing the site at a time?
ii. Check for peak loads and how the system behaves
iii. Large amounts of data accessed by users
• Stress:
i. Continuous load
ii. Performance of memory, CPU, file handling, etc.
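A minimal concurrent-load sketch in Python, assuming the requests package and a hypothetical target URL; a real load or stress test would normally use a dedicated tool, but this shows the idea of many users hitting the site at once while response times are measured.

    # Load sketch: USERS simulated users each issue one GET request concurrently.
    import time
    import requests
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://www.example.com/"   # hypothetical page under test
    USERS = 50                        # simulated concurrent users

    def one_request(_):
        start = time.time()
        resp = requests.get(URL, timeout=30)
        return resp.status_code, time.time() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(one_request, range(USERS)))

    times = [t for _, t in results]
    errors = sum(1 for code, _ in results if code >= 400)
    print("requests:", len(results), "errors:", errors)
    print("avg %.3f s, max %.3f s" % (sum(times) / len(times), max(times)))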
Usability:
Usability testing is the process by which the human-computer
interaction characteristics of a system are measured, and weaknesses are
identified for correction.
• Ease of learning
• Navigation
• Subjective user satisfaction
• General appearance
Server Side Interface:
In web testing the server side interface should be tested. This is done by verifying that communication between the web server and the application, database and other back-end servers is handled properly. Compatibility of the server with software, hardware, network and database should be tested.
Client Side Compatibility:
Client-side compatibility is also tested on various platforms, using various browsers, etc.
Security:
The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
• Network Scanning
• Vulnerability Scanning
• Password Cracking
• Log Review
• Integrity Checkers
• Virus Detection
http://www.softwaretestinghelp.com/web-testing-example-test-cases/
Tuesday, October 30
Monday, October 29
Web Terminologies: Useful for web application testers - Part II
Web technology Guide
If you are working on web application testing, you should be aware of the different web terminologies. This page will help you learn the basic and advanced web terminologies that will help you test your web projects.
Web terminologies covered in this page are:
What is internet, www, TCP/IP, HTTP protocol, SSL (Secure socket layer), HTTPS, HTML, Web server, Web client, Proxy server, Caching, Cookies, Application server, Thin client, Daemon, Client side scripting, Server side scripting, CGI, Dynamic web pages, Digital certificates and list of HTTP status codes.
• Web client
– Most commonly in the form of Web browser software such as Internet Explorer or Netscape.
– Used to navigate the Web and retrieve Web content from Web servers for viewing.
• Proxy server
– An intermediary server that provides a gateway to the Web (e.g., employee access to the Web most often goes through a proxy).
– Improves performance through caching and filters the Web.
– The proxy server will also log each user interaction (see the sketch below).
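A small sketch of sending a request through a proxy with Python's requests package; the proxy address and target URL are hypothetical placeholders.

    # Route a request through an intermediary proxy server (address is hypothetical).
    import requests

    proxies = {
        "http": "http://proxy.example.local:8080",
        "https": "http://proxy.example.local:8080",
    }
    # The proxy forwards the request, may answer from its cache, and logs the interaction.
    resp = requests.get("http://www.example.com/", proxies=proxies, timeout=10)
    print(resp.status_code, resp.headers.get("Via", "no Via header"))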
• Caching
– Storing copies of recently requested Web content (in the browser or on a proxy server) so that later requests for the same content can be served without fetching it from the origin Web server again.
• Web form
– A portion of a Web page containing blank fields that users can fill in with data (including personal information) and submit for the Web server to process.
• Web server log
– Every time a Web page is requested, the Web server may automatically log the following information:
o the IP address of the visitor
o date and time of the request
o the URL of the requested file
o the URL the visitor came from immediately before (referrer URL)
o the visitor’s Web browser type and operating system
• Cookies
– A small text file provided by a Web server and stored on a user's PC. The text can be sent back to the server every time the browser requests a page from the server. Cookies are used to identify a user as they navigate through a Web site and/or return at a later time. Cookies enable a range of functions, including personalization of content (a small sketch of inspecting cookies follows).
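A minimal sketch of observing the cookies a server sets, using Python's requests package; the URLs are hypothetical placeholders.

    # Observe cookies set by a Web server and sent back on later requests.
    import requests

    session = requests.Session()    # keeps cookies between requests, like a browser
    session.get("http://www.example.com/login", timeout=10)

    for cookie in session.cookies:
        # cookie.expires is None for session cookies; persistent cookies carry an expiry time
        kind = "session" if cookie.expires is None else "persistent"
        print(cookie.name, cookie.value, kind)

    # The stored cookies are sent automatically with the next request to the same site.
    resp = session.get("http://www.example.com/account", timeout=10)
    print(resp.status_code)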
Session vs. persistent cookies
– A session is a unique ID assigned to the client browser by a web server to identify the state of the client, because web servers are stateless.
– A session cookie is stored only while the user is connected to the particular Web server – the cookie is deleted when the user disconnects.
– Persistent cookies are set to expire at some point in the future – many are set to expire a number of years forward.
• Socket
– An endpoint of a two-way communication link between two programs on a network, identified by an IP address and a port number.
• Application Server
– An application server is a server computer in a computer network dedicated to running certain software applications. The term also refers to the software installed on such a computer to facilitate the serving of other applications. Application server products typically bundle middleware to enable applications to intercommunicate with various qualities of service — reliability, security, non-repudiation, and so on. Application servers also provide an API to programmers, so that they don't have to be concerned with the operating system or the huge array of interfaces required of a modern web-based application. Communication occurs through the web in the form of HTML and XML, as a link to various databases, and, quite often, as a link to systems and devices ranging from huge legacy applications to small information devices, such as an atomic clock or a home appliance.
– An application server exposes business logic to client applications through various protocols, possibly including HTTP. The server exposes this business logic through a component API, such as the EJB (Enterprise JavaBean) component model found on J2EE (Java 2 Platform, Enterprise Edition) application servers. Moreover, the application server manages its own resources. Such gate-keeping duties include security, transaction processing, resource pooling, and messaging.
– Examples: JBoss (Red Hat), WebSphere (IBM), Oracle Application Server 10g (Oracle Corporation) and WebLogic (BEA).
• Thin client
– A thin client is a computer (client) in client-server architecture networks which has little or no application logic, so it has to depend primarily on the central server for processing activities. It is designed to be especially small so that the bulk of the data processing occurs on the server.
• Thick client
– It is a client that performs the bulk of any data processing
operations itself, and relies on the server it is associated with primarily for data storage.
• Daemon
– It is a computer program that runs in the background, rather than under the direct control of a user; they are usually instantiated as processes. Typically daemons have names that end with the letter "d"; for example, syslogd is the daemon which handles the system log. Daemons typically do not have any existing parent process, but reside directly under init in the process hierarchy. Daemons usually become daemons by forking a child process and then making the parent process kill itself, thus making init adopt the child. This practice is commonly known as "fork off and die." Systems often start (or "launch") daemons at boot time: they often serve the function of responding to network requests, hardware activity, or other programs by performing some task. Daemons can also configure hardware (like devfsd on some Linux systems), run scheduled tasks (like cron), and perform a variety of other tasks.
• Client-side scripting
– Generally refers to the class of computer programs on the web that are executed client-side, by the user's web browser, instead of server-side (on the web server). This type of computer programming is an important part of the Dynamic HTML (DHTML) concept, enabling web pages to be scripted; that is, to have different and changing content depending on user input, environmental conditions (such as the time of day), or other variables.
– Web authors write client-side scripts in languages such as JavaScript (client-side JavaScript) or VBScript, which are based on several standards:
o HTML scripting
o HTTP
o Document Object Model
– Client-side scripts are often embedded within an HTML document, but they may also be contained in a separate file, which is referenced by the document (or documents) that use it. Upon request, the necessary files are sent to the user's computer by the web server (or servers) on which they reside. The user's web browser executes the script, then displays the document, including any visible output from the script. Client-side scripts may also contain instructions for the browser to follow if the user interacts with the document in a certain way, e.g., clicks a certain button. These instructions can be followed without further communication with the server, though they may require such communication.
• Server-side Scripting
– It is a web server technology in which a user's request is fulfilled by running a script directly on the web server to generate dynamic HTML pages. It is usually used to provide interactive web sites that interface to databases or other data stores. This is different from client-side scripting, where scripts are run by the viewing web browser, usually in JavaScript. The primary advantage of server-side scripting is the ability to highly customize the response based on the user's requirements, access rights, or queries into data stores.
o ASP: Microsoft designed solution allowing various languages (though generally VBscript is used) inside a HTML-like outer page, mainly used on Windows but with limited support on other platforms.
o ColdFusion: Cross platform tag based commercial server side scripting system.
o JSP: A Java-based system for embedding code in HTML pages.
o Lasso: A Datasource neutral interpreted programming language and cross platform server.
o SSI: A fairly basic system which is part of the common Apache web server. Far from a full programming environment, but still handy for simple things like including a common menu.
o PHP: Common open-source solution based on including code in its own language into an HTML page.
o Server-side JavaScript: A language generally used on the client side but also occasionally on the server side.
o SMX: Lisp-like open-source language designed to be embedded into an HTML page.
• Common Gateway Interface (CGI)
– is a standard protocol for interfacing external application software with an information server, commonly a web server. This allows the server to pass requests from a client web browser to the external application. The web server can then return the output from the application to the web browser.
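A minimal CGI sketch in Python: the web server executes the script for each matching request and relays whatever the script writes to standard output back to the browser. The script name and form field are hypothetical, and the standard cgi module used here is deprecated in recent Python versions.

    #!/usr/bin/env python3
    # hello.py - hypothetical CGI script; the server runs it per request and
    # returns its standard output (headers, blank line, then body) to the client.
    import cgi

    form = cgi.FieldStorage()                # query-string / POST data passed in by the server
    name = form.getfirst("name", "world")    # "name" is a hypothetical form field

    print("Content-Type: text/html")         # HTTP headers first
    print()                                  # blank line separates headers from body
    print("<html><body><h1>Hello, %s!</h1></body></html>" % name)

For a quick local check, older Python versions can serve such scripts from a cgi-bin/ directory with: python -m http.server --cgi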
• Dynamic Web pages:
– can be defined as: (1) Web pages containing dynamic content (e.g., images, text, form fields, etc.) that can change/move without the Web page being reloaded, or (2) Web pages that are produced on-the-fly by server-side programs, frequently based on parameters in the URL or from an HTML form. Web pages that adhere to the first definition are often called Dynamic HTML or DHTML pages. Client-side languages like JavaScript are frequently used to produce these types of dynamic web pages. Web pages that adhere to the second definition are often created with the help of server-side languages such as PHP, Perl, ASP/.NET, JSP, and other languages. These server-side languages typically use the Common Gateway Interface (CGI) to produce dynamic web pages.
• Digital Certificates
In cryptography, a public key certificate (or identity certificate) is a certificate which uses a digital signature to bind together a public key with an identity — information such as the name of a person or an organization, their address, and so forth. The certificate can be used to verify that a public key belongs to an individual.
In a typical public key infrastructure (PKI) scheme, the signature will be of a certificate authority (CA). In a web of trust scheme, the signature is of either the user (a self-signed certificate) or other users ("endorsements"). In either case, the signatures on a certificate are attestations by the certificate signer that the identity information and the public key belong together.
Certificates can be used for the large-scale use of public-key cryptography.
Securely exchanging secret keys amongst users becomes impractical to the point
of effective impossibility for anything other than quite small networks. Public key
cryptography provides a way to avoid this problem. In principle, if Alice wants
others to be able to send her secret messages, she need only publish her public
key. Anyone possessing it can then send her secure information. Unfortunately,
David could publish a different public key (for which he knows the related private
key) claiming that it is Alice's public key. In so doing, David could intercept and
read at least some of the messages meant for Alice. But if Alice builds her public
key into a certificate and has it digitally signed by a trusted third party (Trent),
anyone who trusts Trent can merely check the certificate to see whether Trent
thinks the embedded public key is Alice's. In typical Public-key Infrastructures
(PKIs), Trent will be a CA, who is trusted by all participants. In a web of trust,
Trent can be any user, and whether to trust that user's attestation that a
particular public key belongs to Alice will be up to the person wishing to send a
message to Alice.
In large-scale deployments, Alice may not be familiar with Bob's certificate authority (perhaps they each have a different CA — if both use employer CAs, different employers would produce this result), so Bob's certificate may also include his CA's public key signed by a "higher level" CA2, which might be recognized by Alice. This process leads in general to a hierarchy of certificates, and to even more complex trust relationships. Public key infrastructure refers, mostly, to the software that manages certificates in a large-scale setting. In X.509 PKI systems, the hierarchy of certificates is always a top-down tree, with a root certificate at the top, representing a CA that is 'so central' to the scheme that it does not need to be authenticated by some trusted third party.
A certificate may be revoked if it is discovered that its related private key has been compromised, or if the relationship (between an entity and a public key) embedded in the certificate is discovered to be incorrect or has changed; this might occur, for example, if a person changes jobs or names. A revocation will likely be a rare occurrence, but the possibility means that when a certificate is trusted, the user should always check its validity. This can be done by comparing it against a certificate revocation list (CRL) — a list of revoked or cancelled certificates. Ensuring that such a list is up-to-date and accurate is a core function in a centralized PKI, one which requires both staff and budget and one which is therefore sometimes not properly done. To be effective, it must be readily available to any who needs it whenever it is needed and must be updated frequently. The other way to check a certificate validity is to query the certificate authority using the Online Certificate Status Protocol (OCSP) to know the status of a specific certificate.
Both of these methods appear to be on the verge of being supplanted by XKMS. This new standard, however, is yet to see widespread implementation.
A certificate typically includes:
The public key being signed.
A name, which can refer to a person, a computer or an organization.
A validity period.
The location (URL) of a revocation center.
The most common certificate standard is the ITU-T X.509. X.509 is being adapted to the Internet by the IETF PKIX working group.
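A small sketch of inspecting those certificate fields for a live server with Python's standard ssl module; the host name is a hypothetical placeholder.

    # Retrieve a server certificate and print its subject, issuer and validity period.
    import socket
    import ssl

    HOST = "www.example.com"   # hypothetical host under test

    context = ssl.create_default_context()        # uses the system's trusted CA certificates
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()              # parsed X.509 fields as a dict

    print("subject:", cert["subject"])
    print("issuer:", cert["issuer"])
    print("valid from:", cert["notBefore"], "until:", cert["notAfter"])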
Classes
Verisign introduced the concept of three classes of digital certificates:
Class 1 for individuals, intended for email;
Class 2 for organizations, for which proof of identity is required; and
Class 3 for servers and software signing, for which independent verification and checking of identity and authority is done by the issuing certificate authority (CA)
• List of HTTP status codes (a small status-code check is sketched after the list)
1xx Informational
Request received, continuing process.
100: Continue
101: Switching Protocols
2xx Success
The action was successfully received, understood, and accepted.
200: OK
201: Created
202: Accepted
203: Non-Authoritative Information
204: No Content
205: Reset Content
206: Partial Content
3xx Redirection
The client must take additional action to complete the request.
300: Multiple Choices
301: Moved Permanently
302: Moved Temporarily (HTTP/1.0)
302: Found (HTTP/1.1)
see 302 Google Jacking
303: See Other (HTTP/1.1)
304: Not Modified
305: Use Proxy
Many HTTP clients (such as Mozilla and Internet Explorer) don't correctly
handle responses with this status code.
306: (no longer used, but reserved)
307: Temporary Redirect
4xx Client Error
The request contains bad syntax or cannot be fulfilled.
400: Bad Request
401: Unauthorized
Similar to 403/Forbidden, but specifically for use when authentication is possible but has failed or not yet been provided. See basic authentication scheme and digest access authentication.
402: Payment Required
403: Forbidden
404: Not Found
405: Method Not Allowed
406: Not Acceptable
407: Proxy Authentication Required
408: Request Timeout
409: Conflict
410: Gone
411: Length Required
412: Precondition Failed
413: Request Entity Too Large
414: Request-URI Too Long
415: Unsupported Media Type
416: Requested Range Not Satisfiable
417: Expectation Failed
5xx Server Error
The server failed to fulfill an apparently valid request.
500: Internal Server Error
501: Not Implemented
502: Bad Gateway
503: Service Unavailable
504: Gateway Timeout
505: HTTP Version Not Supported
509: Bandwidth Limit Exceeded
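As a quick illustration of checking these codes during testing, a sketch using Python's requests package; the URLs are hypothetical placeholders.

    # Print the HTTP status code and reason phrase for a few URLs.
    import requests

    urls = [
        "http://www.example.com/",              # expect 200 OK
        "http://www.example.com/old-page",      # might answer with a 301/302 redirect
        "http://www.example.com/no-such-page",  # expect 404 Not Found
    ]

    for url in urls:
        resp = requests.get(url, allow_redirects=False, timeout=10)
        print(resp.status_code, resp.reason, url)
        if 300 <= resp.status_code < 400:
            print("  redirects to:", resp.headers.get("Location"))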
Experimenting With Scrum Part 4
This is the last post in my "experimenting with scrum" series. After
our bug smash time, we went into a period of more serious product
development. Recall that we were not implementing all of scrum but
rather just the scrum meetings portion. This works acceptably in the
development scenario, but not optimally. We didn't break each task down
into 1-2 day work items. This meant that often when we met to talk,
there was very little to update. A person might come in for several
days or even a week with the same status: "I worked on the Widget
interface yesterday. I'll still be working on it today." That is not
terribly useful status and makes the meetings less interesting for all
involved. I did find that the cross-pollination I had noticed in my
earlier experiments continued. Often something said in the meeting
would spark one team member to interject with useful ideas for another.
This is conversation that often would not take place. It is easy for a
software developer to get lost in their office (or cube) and shut off
the outside world. While this allows for high-intensity coding, it
doesn't vet the ideas very well. Forcing people out of their offices,
even if for only 15 minutes a day, encourages bouncing ideas off of each
other. This has beneficial effects on all involved.
I've talked to most of my team members about the experience. Most liked it. Some tolerated it. No one had serious complaints about it. I think I'll stick with something like this for a while at least. I'll reflect on my experience and refine what we're doing.
http://blogs.msdn.com/b/steverowe/archive/2006/01/26/experimenting-with-scrum-part-4.aspx
Thursday, October 25
Experimenting With Scrum Part 3
Our first attempt at using scrum came during what we call a bug
smash. That is a time when we focus solely on fixing bugs. No work is
being done on new features. This seemed like a logical time to
implement scrum meetings. I chose to meet once a day for 15 minutes.
We met at around 11:30 each morning. I chose the middle of the day so
that everyone could attend and maintain their same schedules. Each
meeting I would have our bug database open. On my white board I tracked
the number of bugs we had fixed in the previous 24 hours as well as the
number of outstanding bugs. When people arrived, we would go around
the room and everyone would talk about what bug(s) they were actively
working on and what they expected to fix that day. If someone had no
bugs left in their queue, we would reassign them bugs from someone
else's queue.
There were at least three good things that came out of these meetings. First, it gave me a good opportunity to gauge our progress. Each day I saw how we were progressing. Admittedly, I could have done this myself by spending a few minutes in the bug database, but this was a forcing factor. I also was able to hear how people were doing. When they were stuck on a bug, I knew about it quickly. Second, it allowed the team to see where we stood. They knew where we stood as a group with the numbers on my white board. They also knew how their teammates were doing. Hard work is contagious. The third, and mostly unexpected, benefit was the cross-pollination that took place. One team member would say they were working on a bug and someone else would speak up with suggestions for solutions or places they might look for code doing something similar. We ended up being much more efficient as a team because of these short meetings.
For this bug-fixing stage of the product, small, short meetings on a daily basis proved very useful. It is not fully implementing scrum and not even really fully implementing the scrum meeting portion of scrum, but it was useful.
http://blogs.msdn.com/b/steverowe/archive/2006/01/05/experimenting-with-scrum-part-3.aspx
Monday, October 22
Web Terminologies: Useful for web application testers - Part I
Web technology Guide
If you are working on web application testing, you should be aware of the different web terminologies. This page will help you learn the basic and advanced web terminologies that will help you test your web projects.
Web terminologies covered in this page are:
What is internet, www, TCP/IP, HTTP protocol, SSL (Secure socket layer), HTTPS, HTML, Web server, Web client, Proxy server, Caching, Cookies, Application server, Thin client, Daemon, Client side scripting, Server side scripting, CGI, Dynamic web pages, Digital certificates and list of HTTP status codes.
• Internet
– A global network connecting millions of computers.
• World Wide Web (the Web)
– An information sharing model that is built on top of the Internet. It utilizes the HTTP protocol and browsers (such as Internet Explorer) to access Web pages formatted in HTML that are linked via hyperlinks. The Web is only a subset of the Internet (other uses of the Internet include email (via SMTP), Usenet, instant messaging and file transfer (via FTP)).
• URL (Uniform Resource Locator)
– The address of documents and other content on the Web. It consists of the protocol, the domain and the file. The protocol can be HTTP, FTP, Telnet, News, etc., the domain name is the DNS name of the server, and the file can be static HTML, DOC, JPEG, etc. In other words, URLs are strings that uniquely identify resources on the Internet.
• TCP/IP
– The protocol suite used to send data over the Internet. TCP/IP consists of four layers: the Application layer, Transport layer, Network layer and Link layer.
If you are working on web application testing then you should be aware of different web terminologies. This page will help you to learn all basic and advanced web terminologies that will definitely help you to test your web projects.
Web terminologies covered in this page are:
What is internet, www, TCP/IP, HTTP protocol, SSL (Secure socket layer), HTTPS, HTML, Web server, Web client, Proxy server, Caching, Cookies, Application server, Thin client, Daemon, Client side scripting, Server side scripting, CGI, Dynamic web pages, Digital certificates and list of HTTP status codes.
• Internet
– A global network connecting millions of computers.
• World Wide Web (the Web)
– An information sharing model that is built on top of the Internet, utilizes HTTP protocol and browsers (such as Internet Explorer) to access Web pages formatted in HTML that are linked via hyperlinks and the Web is only a subset of the Internet (other uses of the Internet include email (via SMTP), Usenet, instant messaging and file transfer (via FTP)
• URL (Uniform Resource Locator)
– The address of documents and other content on the Web. It is consisting of protocol, domain and the file. Protocol can be either HTTP, FTP, Telnet, News etc., domain name is the DNS name of the server and file can be Static HTML, DOC, Jpeg, etc., . In other words URLs are strings that uniquely identify resources on internet.
• TCP/IP
– TCP/IP protocol suite used to send data over the Internet. TCP/IP consists of only 4 layers - Application layer, Transport layer, Network layer & Link layer
Internet Protocols:
Application Layer - DNS, TLS/SSL, TFTP, FTP, HTTP, IMAP, IRC, NNTP, POP3, SIP, SMTP, SNMP, SSH, TELNET, BitTorrent, RTP, rlogin
Transport Layer - TCP, UDP, DCCP, SCTP, IL, RUDP
Network Layer - IP (IPv4, IPv6), ICMP, IGMP, ARP, RARP, ...
Link Layer - Ethernet, Wi-Fi, Token Ring, PPP, SLIP, FDDI, ATM, DTM, Frame Relay, SMDS
• TCP (Transmission Control Protocol)
– Enables two devices to establish a connection and exchange data.
– In the Internet protocol suite, TCP is the intermediate layer between the Internet Protocol below it, and an application above it. Applications often need reliable pipe-like connections to each other, whereas the Internet Protocol does not provide such streams, but rather only unreliable packets. TCP does the task of the transport layer in the simplified OSI model of computer networks.
– It is one of the core protocols of the Internet protocol suite. Using TCP, applications on networked hosts can create connections to one another, over which they can exchange data or packets. The protocol guarantees reliable and in-order delivery of sender to receiver data. TCP also distinguishes data for multiple, concurrent applications (e.g. Web server and e-mail server) running on the same host.
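A minimal sketch of a TCP exchange using Python's standard socket module, sending a raw HTTP request over the connection; the host name is a hypothetical placeholder.

    # Open a TCP connection, send a raw HTTP request, and read part of the reply.
    import socket

    HOST = "www.example.com"   # hypothetical host

    with socket.create_connection((HOST, 80), timeout=10) as sock:  # TCP handshake happens here
        request = "HEAD / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % HOST
        sock.sendall(request.encode("ascii"))   # TCP delivers the bytes reliably and in order
        reply = sock.recv(4096)                 # first chunk of the response
    print(reply.decode("ascii", errors="replace"))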
• IP
– Specifies the format of data packets and the addressing protocol. The Internet Protocol (IP) is a data-oriented protocol used for communicating data across a packet-switched internetwork. IP is a network layer protocol in the internet protocol suite. Aspects of IP are IP addressing and routing. Addressing refers to how end hosts become assigned IP addresses. IP routing is performed by all hosts, but most importantly by internetwork routers.
• IP address
– A unique number assigned to each connected device, often assigned dynamically to users by an ISP on a session-by-session basis – a dynamic IP address. Increasingly becoming dedicated, particularly with always-on broadband connections – a static IP address.
• Packet
– A portion of a message sent over a TCP/IP network. It contains content and destination.
• HTTP (Hypertext Transfer Protocol)
– Underlying protocol of the World Wide Web. Defines how messages are formatted and transmitted over a TCP/IP network for Web sites. Defines what actions Web servers and Web browsers take in response to various commands.
– HTTP is stateless. The advantage of a stateless protocol is that hosts don't need to retain information about users between requests, but this forces the use of alternative methods for maintaining users' state, for example, when a host would like to customize content for a user who has visited before. The common method for solving this problem involves sending and requesting cookies. Other methods are session control, hidden variables, etc.
– Example: when you enter a URL in your browser, an HTTP command is sent to the Web server telling it to fetch and transmit the requested Web page. The common request methods are listed below (a short sketch exercising some of them follows the list):
o HEAD: Asks for a response identical to the one that would correspond to a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.
o GET: Requests a representation of the specified resource. By far the most common method used on the Web today.
o POST: Submits user data (e.g. from an HTML form) to the identified resource. The data is included in the body of the request.
o PUT: Uploads a representation of the specified resource.
o DELETE: Deletes the specified resource (rarely implemented).
o TRACE: Echoes back the received request, so that a client can see what intermediate servers are adding or changing in the request.
o OPTIONS: Returns the HTTP methods that the server supports. This can be used to check the functionality of a web server.
o CONNECT: For use with a proxy that can change to being an SSL tunnel.
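A short sketch exercising some of these methods with Python's requests package; the URL and form field are hypothetical placeholders, and not every server permits every method.

    # Issue OPTIONS, HEAD, GET and POST requests against the same resource.
    import requests

    URL = "http://www.example.com/"   # hypothetical resource

    opts = requests.options(URL, timeout=10)
    print("Allow:", opts.headers.get("Allow", "not reported"))

    head = requests.head(URL, timeout=10)       # headers only, no response body
    print("HEAD:", head.status_code, head.headers.get("Content-Type"))

    get = requests.get(URL, timeout=10)         # full representation of the resource
    print("GET:", get.status_code, len(get.content), "bytes")

    post = requests.post(URL, data={"field": "value"}, timeout=10)  # hypothetical form data
    print("POST:", post.status_code)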
HTTP pipelining
– appeared in HTTP/1.1. It allows clients to send multiple requests at once, without waiting for an answer. Servers can also send multiple answers without closing their socket. This results in fewer roundtrips and faster load times. This is particularly useful for satellite Internet connections and other connections with high latency as separate requests need not be made for each file. Since it is possible to fit several HTTP requests in the same TCP packet, HTTP pipelining allows fewer TCP packets to be sent over the network, reducing network load. HTTP pipelining requires both the client and the server to support it. Servers are required to support it in order to be HTTP/1.1 compliant, although they are not required to pipeline responses, just to accept pipelined requests.
• HTTP-Tunnel
– technology allows users to perform various Internet tasks despite the restrictions imposed by firewalls. This is made possible by sending data through HTTP (port 80). Additionally, HTTP-Tunnel technology is very secure, making it indispensable for both average and business communications. The HTTP-Tunnel client is an application that runs in your system tray acting as a SOCKS server, managing all data transmissions between the computer and the network.
• HTTP streaming
– It is a mechanism for sending data from a Web server to a Web browser in response to an event. HTTP Streaming is achieved through several common mechanisms. In one such mechanism the web server does not terminate the response to the client after data has been served. This differs from the typical HTTP cycle, in which the response is closed immediately following data transmission. The web server leaves the response open such that if an event is received, it can immediately be sent to the client. Otherwise the data would have to be queued until the client's next request is made to the web server. The act of repeatedly queuing and re-requesting information is known as a polling mechanism. Typical uses for HTTP Streaming include market data distribution (stock tickers), live chat/messaging systems, online betting and gaming, sport results, monitoring consoles and sensor network monitoring.
HTTP referrer
– It signifies the webpage which linked to a new page on the Internet. By checking the referer, the new page can see where the request came from. Referer logging is used to allow websites and web servers to identify where people are visiting them from, for promotional or security purposes. Since the referer can easily be spoofed (faked), however, it is of limited use in this regard except on a casual basis. A dereferer is a means to strip the details of the referring website from a link request so that the target website cannot identify the page which was clicked on to originate a request. Referer is a common misspelling of the word referrer. It is so common, in fact, that it made it into the official specification of HTTP – the communication protocol of the World Wide Web – and has therefore become the standard industry spelling when discussing HTTP referers.
• SSL (Secure Sockets Layer)
– Protocol for establishing a secure connection for transmission; it uses the HTTPS convention.
– SSL provides endpoint authentication and communications privacy over the Internet using cryptography. In typical use, only the server is authenticated (i.e. its identity is ensured) while the client remains unauthenticated; mutual authentication requires public key infrastructure (PKI) deployment to clients. The protocols allow client/server applications to communicate in a way designed to prevent eavesdropping, tampering, and message forgery.
– SSL involves a number of basic phases:
o Peer negotiation for algorithm support
o Symmetric cipher-based traffic encryption
o During the first phase, the client and server negotiate which cryptographic algorithms will be used. Current implementations support the following choices:
o for public-key cryptography: RSA, Diffie-Hellman, DSA or Fortezza;
o for symmetric ciphers: RC2, RC4, IDEA, DES, Triple DES or AES;
o for one-way hash functions: MD5 or SHA.
• HTTPS
– is a URI scheme which is syntactically identical to the http: scheme normally used for accessing resources using HTTP. Using an https: URL indicates that HTTP is to be used, but with a different default port and an additional encryption/authentication layer between HTTP and TCP. This system was invented by Netscape Communications Corporation to provide authentication and encrypted communication and is widely used on the Web for security-sensitive communication, such as payment transactions.
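A small sketch of what an https: URL means in practice for a tester, assuming Python's requests package and hypothetical hosts: the library negotiates SSL/TLS and verifies the server certificate before any HTTP data is exchanged.

    # HTTPS request with certificate verification enabled (the default in requests).
    import requests

    resp = requests.get("https://www.example.com/", timeout=10)   # hypothetical site
    print(resp.status_code, resp.url)

    # If the certificate cannot be verified, requests raises an SSLError rather than
    # silently continuing; a tester can catch it to report a misconfigured server.
    try:
        requests.get("https://badcert.example.com/", timeout=10)  # hypothetical bad host
    except requests.exceptions.SSLError as exc:
        print("certificate problem:", exc)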
• HTML (Hypertext Markup Language)
– The authoring language used to create documents on the World Wide Web.
– Hundreds of tags can be used to format and lay out a Web page’s content and to hyperlink to other Web content.
• Hyperlink
– Links Web content to other content within and between Web sites and web-enabled services.
• Web server
– A computer that is connected to the Internet. Hosts Web content and is configured to share that content.
– A Web server is responsible for accepting HTTP requests from clients, which are known as Web browsers, and serving them Web pages, which are usually HTML documents and linked objects (images, etc.).
• Examples:
o Apache HTTP Server from the Apache Software
Foundation.
o Internet Information Services (IIS) from Microsoft.
o Sun Java System Web Server from Sun Microsystems,
formerly Sun ONE Web Server, iPlanet Web Server,
and Netscape Enterprise Server.
o Zeus Web Server from Zeus Technology
http://cdn.softwaretestinghelp.com/wp-content/qa/uploads/2008/01/web-technology-guide.pdf
Experimenting With Scrum Part 2
It occurs to me that some of those reading this blog will not be
familiar with Scrum. Before I go into any details about what did and
didn't work about my experiments, I'll take some time to give a quick
overview of the process. For more information, check out the book or Ken Schwaber's web page.
In a nutshell, scrum is a workflow methodology for developing software. It is most often associated with practices such as eXtreme Programming but could really be used with any set of programming practices. The basic premise is that work cannot be scheduled far in advance. Instead, it must be handled in discrete chunks and corrections to the course made regularly. The tools for this are threefold: the product backlog, the sprint, and the scrum meeting.
The list of potential features for a product is kept in ranked order in what is called a "product backlog." This list could contain new features, modifications to features, bug lists, etc. Everything that needs to be done goes on this list.
Next is the sprint. This is a defined period of time (the authors suggest 30 days) for work to be done. During this time, no course corrections will be made. If there is a new feature to be added, it will be handled in the next sprint. At the end of a sprint, the features set aside for it should be complete and shippable. The idea is that the product is ready at the end of each sprint. By "ready" it is meant that the features are complete, not that every feature is there. It might not be ready for customer deployment yet. There will not, however, be partially-implemented features and things checked in that don't work.
Finally there are the scrum meetings. These are daily meetings of the team. They should be short. Each person in the room should basically give a quick status. "Here is what I did in the last 24 hours. Here is what I am doing in the next 24 hours. Here are the items that are blocking me." The purpose of this meeting is to provide visibility into the progress toward the sprint. At this meeting, work items may be reassigned to others but new items are not added.
The idea behind Scrum is that software development is not like manufacturing. It is more like original research. We don't know how long something will take. We don't know what roadblocks will be put in our way. We don't even really know what the end product needs to look like because requirements change so often. The response to this, rather than hiring 14 people just to maintain your Gantt charts, is to do away with them. If the environment is always changing, the best response is not to plan better up front but rather to learn to react to those changes. Scrum is one methodology to do this.
As a test development team working on disparate projects, this model doesn't fit perfectly. I started with just the scrum meetings. How that went will be described in my next posts on this subject.
http://blogs.msdn.com/b/steverowe/archive/2006/01/03/experimenting-with-scrum-part-2.aspx
jueves, octubre 18
Experimenting With Scrum
I recently tried using some parts of scrum with my team at Microsoft. We're a development team in test and tend to have a lot of small, independent projects rather than one larger integrated one. To make matters worse, we work with audio and video and there is a lot of specialized knowledge with each project. It is non-trivial to move one person to another task. As such, it is hard to implement scrum as described in the canon. There is no clear feature backlog and there is no obvious way to define the results of a sprint for the group. I always wanted to try scrum but couldn't come up with a good way to do it. Over the past month or two, I tried two approaches. I'll go into more detail about them in future posts but here are the highlights.
We went through two phases of work recently. One was fixing bugs and the other was working on new applications. Each has a different work flow. Fixing bugs is more fungible, short, and discrete. Working on new applications is more specialized, the work items longer, and it is less obviously discrete. For each, I had the team meet once a day for 15 minutes to discuss their status. It worked better in some situations than in others. When the work items were longer, the meetings often seemed redundant from day to day. When the work items were short, it felt more natural to meet daily.
http://blogs.msdn.com/b/steverowe/archive/2005/12/30/experimenting-with-scrum.aspx
lunes, octubre 15
Dependent Test Cases
As most of my blog posts do, this one stems from a conversation I had recently. The conversation revolved around whether all test cases should be independent or if it was acceptable to have one rely upon another. It is my contention that not only are dependent test cases acceptable, but that they are desirable in many circumstances.
There are two basic ways to structure test cases. The most common sort is what I will term "Independent." These test cases are self-contained. Each test case does all required setup, testing, and cleanup. If we were testing the Video Mixing Renderer (VMR), an independent test case would create the playback graph, configure the VMR, stream some video, verify that the right video had been played, and tear down the graph. The next test case would repeat each step, but configure the VMR differently.
The second sort of test case is what I will call a “Dependent” test case. This test case carries out only those actions required to actually test the API. All other work to set up the state of the system is done either by the test harness or by a previous test case. For an example, assume we are testing the DirectShow DVD Navigator. The test harness might create the graph. Test case 1 might start playback. Test case 2 might navigate to the right chapter and title. Test case 3 might then run a test that relies on the content in that location. When all is done, the harness tears down the graph and cleans up. Test case 3 relies upon the harness and the test cases before it. It cannot be run without them.
Some will argue that all test cases should be independent. At first glance, this makes a lot of sense. You can run them in any order. They can be distributed across many machines. You never have to worry about one test case interfering with the next. Why would you ever want to take on the baggage of dependent cases?
There are at least two circumstances where dependent test cases are preferable. They can be used to create scriptable tests and they can be much more efficient.
Most test harnesses allow the user to specify a list of test cases to run. This often takes the form of a text or an XML file. Some of the better harnesses even allow a user to specify this list via the UI. Assuming that the list is executed in the specified order (not true of some harnesses), well-factored test cases can be combined to create new tests. The hard work of programming test cases can be leveraged easily into new tests with a mere text editor or a few clicks in a UI.
Independent test cases are not capable of being used this way. Because they contain setup, test, and cleanup, the order they are run in is irrelevant. This can be an advantage in some circumstances, but it also means that once you are done coding, you are done gaining benefit from the work. You cannot leverage that work into further coverage without returning to the code/compile cycle, which is much more expensive than merely adding test cases to a text file.
Let’s return to the DVD example. If test cases are written to jump to different titles and chapters, to select different buttons, to play for different times, etc., they can be strung together to create a nearly infinite matrix of tests. Just using the test harness, one can define a series of test cases to test different DVDs or to explore various areas of any given DVD. I created a system like this and we were able to create repro cases or regression cases without any programming. This allowed us to quickly respond to issues and spend our energy adding new cases elsewhere. If the DVD tests were written as independent test cases, we would have had to write each repro or regression case in C++, which would take substantially longer. Additionally, because the scripts could be created in the test harness, even testers without C++ skills could write new tests.
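To make the idea concrete, here is a minimal sketch of a list-driven harness for dependent test cases. It is not the author's actual harness: the DvdPlayer class, the case functions, and the script list are all hypothetical stand-ins. The harness owns setup and cleanup, each named case performs only its own step against shared state, and the ordered list of case names plays the role of the text file described above.

```python
# Minimal sketch of a list-driven harness for dependent test cases.
# DvdPlayer and all case names are hypothetical stand-ins for the system under test.

class DvdPlayer:
    """Fake system under test: a tiny DVD playback state machine."""
    def __init__(self):
        self.playing = False
        self.title, self.chapter = 1, 1

    def play(self):
        self.playing = True

    def jump(self, title, chapter):
        self.title, self.chapter = title, chapter

    def stop(self):
        self.playing = False


# Each dependent test case does only its own step against the shared state.
def case_start_playback(player):
    player.play()
    assert player.playing, "playback did not start"

def case_jump_to_title2_chapter5(player):
    player.jump(2, 5)
    assert (player.title, player.chapter) == (2, 5), "navigation failed"

def case_verify_content_at_location(player):
    # Relies on the previous cases having positioned the player.
    assert player.playing and player.title == 2, "wrong playback state"

CASES = {
    "start_playback": case_start_playback,
    "jump_to_title2_chapter5": case_jump_to_title2_chapter5,
    "verify_content_at_location": case_verify_content_at_location,
}

def run_script(case_names):
    """The harness owns setup and cleanup; cases run in the listed order."""
    player = DvdPlayer()              # setup done once per run
    try:
        for name in case_names:       # order matters: later cases depend on earlier ones
            CASES[name](player)
            print(f"PASS {name}")
    finally:
        player.stop()                 # cleanup done once per run

if __name__ == "__main__":
    # This list could just as easily come from a text or XML file.
    run_script(["start_playback",
                "jump_to_title2_chapter5",
                "verify_content_at_location"])
```

New repro or regression runs are then just new orderings of existing case names, which is the leverage the post describes.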
Dependent test cases can also be more efficient. When testing a large system like the Windows Vista operating system, time is of the essence. If you want to release a build every day and to do so early enough for people to test it, you need BVTs (build verification tests) that complete in a timely manner. If the time for setup and cleanup is substantial, doing it for each test case will add up. In this case, doing it only once for each test run saves that time.
Dependent test cases work best when the system under test is a state machine. In that instance, setup becomes more complex and factoring good test cases becomes easier.
Dependent test cases are not the answer to all questions. They probably aren’t even the answer to most questions. A majority of the time, independent test cases are best. Their ability to be scheduled without reliance upon other cases makes them more flexible. However, dependent test cases are an important technique to have in your toolbox. In some circumstances, they can be a substantially better solution.
http://blogs.msdn.com/b/steverowe/archive/2006/02/16/533902.aspx
Example of a test scenario
How do you write a test scenario? Here is an example.
EX: Login page
On the login page, you enter values for user name and password, then click the OK button to log in or the Cancel button to close the login window.
User name: alphanumeric, 4-16 characters long
Password: lowercase letters, 4-8 characters long
OK: goes to the next window
Cancel: closes the window
Prepare Test Scenarios
Test Scenario Template
---------------------------
Test Scenario 1: Verify the user name value
Test Scenario 2: Verify the password value
Test Scenario 3: Verify the OK button operation to log in
Test Scenario 4: Verify the Cancel button operation to close the window
TS1: Verify the user name value
a) Boundary value analysis (on size)
minimum = 4 characters
maximum = 16 characters
more than the maximum or less than the minimum is not allowed
b) Equivalence class partitions (on type)
A-Z, a-z, and 0-9 are valid characters
special symbols or blank fields are invalid
Test Scenario 2: Verify the password value
Same as the user name, but with lowercase letters and 4-8 characters.
Test Scenario 3: Verify the OK button operation to log in
Decision table:
User name | Password | Expected outcome after clicking OK
valid     | valid    | next window
valid     | invalid  | error message
invalid   | valid    | error message
blank     | value    | error message
value     | blank    | error message
Test Scenario 4: Verify the Cancel button operation to close the window
Decision table:
User name | Password | Expected outcome
blank     | blank    | close the window
value     | blank    | close the window
blank     | value    | close the window
value     | value    | close the window
Like the above example, test scenarios are written for all of the functional specifications (a sketch of how these scenarios could be automated follows the source link below).
Source: http://ssoftwaretesting.blogspot.mx/2010/07/example-for-writing-of-test-scenario.html
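As an illustration only, the scenarios above could be automated roughly as follows. The validate_username, validate_password, and login functions are hypothetical stand-ins for the real application (which would normally be exercised through its UI or API); the test data simply mirrors the boundary values and the OK-button decision table from the post.

```python
import string
import unittest

# Hypothetical stand-ins for the real login page logic.
def validate_username(name):
    allowed = string.ascii_letters + string.digits
    return 4 <= len(name) <= 16 and all(c in allowed for c in name)

def validate_password(pwd):
    return 4 <= len(pwd) <= 8 and all(c in string.ascii_lowercase for c in pwd)

def login(user, pwd):
    # Returns "next window" or "error message", matching the decision table.
    return "next window" if validate_username(user) and validate_password(pwd) else "error message"


class LoginScenarios(unittest.TestCase):
    def test_username_boundary_values(self):
        # TS1a: boundary value analysis on size (min 4, max 16).
        self.assertTrue(validate_username("abcd"))      # exactly 4
        self.assertTrue(validate_username("a" * 16))    # exactly 16
        self.assertFalse(validate_username("abc"))      # below minimum
        self.assertFalse(validate_username("a" * 17))   # above maximum

    def test_username_equivalence_classes(self):
        # TS1b: equivalence partitioning on type.
        self.assertTrue(validate_username("User1234"))  # letters and digits are valid
        self.assertFalse(validate_username("user@#12")) # special symbols are invalid
        self.assertFalse(validate_username(""))         # blank is invalid

    def test_ok_button_decision_table(self):
        # TS3: each row of the decision table becomes one check.
        rows = [
            ("validuser", "secret", "next window"),
            ("validuser", "BAD!",   "error message"),
            ("!!",        "secret", "error message"),
            ("",          "secret", "error message"),
            ("validuser", "",       "error message"),
        ]
        for user, pwd, expected in rows:
            self.assertEqual(login(user, pwd), expected)


if __name__ == "__main__":
    unittest.main()
```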
jueves, octubre 11
Becoming a Lead, Pt. 4 - We Not I
Another aspect of becoming a lead/manager that takes some time to wrap one's head around is what your role is now. You are no longer judged based on your own actions, but rather on the collective actions of your team. With all of the time you will likely spend developing people, attending meetings, reviewing work, etc., your direct contribution to the collective output of the team will be low. This is more and more true the greater the number of reports you have.
The role of a lead is fundamentally different from that of an individual contributor. A lead is tasked with maximizing the output of his or her work group. More often than not, this means that the lead takes a back seat when it comes to "real work." It is more important that you spend your time growing, unblocking, and dealing with bureaucracy than it is that you fix a bug or implement some feature.
Writing software takes a lot of concentration. It takes unbroken time to just sit and work. As a lead, your day is usually broken up. It is hard to get that time to concentrate. When you get it, it often comes at the expense of other things. You won't have the same opportunities you once had. Instead, your day will be spent working with others to solve their problems. You will be helping people develop themselves. You will be helping to prioritize the work. You will pay more attention to the project as a whole than to particular features. This is all good work, but it is very different from your previous life as an individual contributor.
This becomes really apparent when it comes time for reviews. At Microsoft we have a review system where we are judged based on the goals we set for ourselves and our ability to achieve those goals. When you are an individual contributor, your review is easy. You just list off everything you did during the previous year. I wrote this. I tested that. I drove this initiative. When you become a lead, your review changes. When you sit down to write your first review as a lead, you'll notice that your individual accomplishments were not what they once were. That's okay though, you are no longer responsible for just "I." You are now responsible for "We." You are judged by, and it is okay to claim credit for, the work of your team. We wrote this. We tested that. One of my reports drove this initiative. The first time you write a review like that, it feels like cheating. After all, you didn't do that work. Your role has changed. That work took place because you facilitated it. You kept people focused and unblocked so they could give 100% of their effort. Welcome to your new job.
http://blogs.msdn.com/b/steverowe/archive/2006/03/15/552008.aspx
lunes, octubre 8
Becoming a Lead, Pt. 3 - Delegating
I've noticed that people new to leadership roles often struggle with the concept of delegating. Learning to delegate is imperative for a leader. When you become a lead, you gain responsibility for more than one person can do. Without delegating, you'll fail to get everything done and likely burn out trying. Delegating should be easy. Just tell someone else to do something. So, why is it that so many people struggle here?
When one becomes a new leader, he or she is likely to have been an individual contributor in that area prior to the change. This means that they are an expert. Many times a new leader is put in charge of less experienced team members. This sets up an interesting conundrum. The leader is more capable of doing the individual work than are their reports. Less experience means less elegant solutions that take more time to implement.
If I, as a leader, am able to solve a problem in 2 days that will take someone else 5 days to solve, the tendency is to just solve the problem myself. In that way, I'll get a better solution, faster. This is true. There is a catch though. It doesn't scale. If I am responsible for the work of 4 people, I can't work hard enough or fast enough to do it all myself. I can accomplish any one task faster than my reports but I can't accomplish *all* tasks faster.
It is important to yield responsibility to those less capable than yourself for two reasons. The first is that you don't have time to do everything. In addition to the individual work heaped upon your team, you likely also have managerial tasks to accomplish. These might include reviews, budgeting, meetings, people development, etc. There is just not enough time in a day to do all of that and accomplish the work of 4 people. It is important to realize that 5 days of a report's time might still get done sooner than 2 days of a lead's time.
Second, if you never give responsibility for the hard tasks to the more junior members of your team, they'll never grow. Why is it that you--the new lead--are able to get the work done so much faster than your reports? It is because you have more experience. Giving people responsibility helps them grow. It may take them 5 days to accomplish the 2-day task this time but next time it will take them 4, then 3. Eventually they'll be able to do it as well as anyone.
The key to succeeding in delegation is to give people the opportunity to grow. This also means giving them the opportunity to fail. The two are different sides of the same coin. Growing the skills of your reports will make you successful as a leader. It will allow the team to take on more ambitious tasks and solve problems in better ways. A leader doing all the hard work him/herself precludes growth and is thus a prescription for failure.
http://blogs.msdn.com/b/steverowe/archive/2006/03/14/551268.aspx
How to report a bug
How to report a bug?
It’s a good practice to take screen shots of execution of every step during software testing. If any test case fails during execution, it needs to be failed in the bug-reporting tool and a bug has to be reported/logged for the same. The tester can choose to first report a bug and then fail the test case in the bug-reporting tool or fail a test case and report a bug. In any case, the Bug ID that is generated for the reported bug should be attached to the test case that is failed.
At the time of reporting a bug, all the mandatory fields from the contents of the bug (such as Project, Summary, Description, Status, Detected By, Assigned To, Date Detected, Test Lead, Detected in Version, Closed in Version, Expected Date of Closure, Actual Date of Closure, Severity, Priority, and Bug ID) are filled in, and a detailed description of the bug is given along with the expected and actual results. The screenshots taken at the time of execution of the test case are attached to the bug for reference by the developer.
After reporting a bug, a unique Bug ID is generated by the bug-reporting tool, which is then associated with the failed test case. This Bug ID helps in associating the bug with the failed test case.
After the bug is reported, it is assigned a status of ‘New’, which goes on changing as the bug fixing process progresses.
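As a rough sketch, not tied to any particular bug-reporting tool, the mandatory fields mentioned above and the link between a Bug ID and the failed test case might be modeled like this. All class and field names here are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import date
from itertools import count
from typing import Optional

_bug_ids = count(1)  # stand-in for the ID generator a real tool would provide

@dataclass
class BugReport:
    # A subset of the mandatory fields named in the post.
    project: str
    summary: str
    description: str
    detected_by: str
    assigned_to: str
    severity: str
    priority: str
    date_detected: date = field(default_factory=date.today)
    status: str = "New"                      # initial status right after reporting
    bug_id: int = field(default_factory=lambda: next(_bug_ids))

@dataclass
class TestCaseRecord:
    name: str
    result: str = "Not Run"
    linked_bug_id: Optional[int] = None      # associates the failure with the bug

    def fail_with_bug(self, bug: BugReport) -> None:
        self.result = "Failed"
        self.linked_bug_id = bug.bug_id

# Usage: report the bug first, then fail the test case against its Bug ID.
bug = BugReport(project="Web Shop", summary="Login rejects a valid password",
                description="Steps, expected and actual results; screenshots attached.",
                detected_by="tester1", assigned_to="dev1",
                severity="Major", priority="High")
tc = TestCaseRecord(name="TC_Login_003")
tc.fail_with_bug(bug)
print(tc.name, tc.result, "Bug ID:", tc.linked_bug_id, "Status:", bug.status)
```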
If more than one tester is testing the software application, it is possible that another tester has already reported a bug for the same defect found in the application. In such a situation, it becomes very important for the tester to find out whether any bug has been reported for a similar defect. If so, the test case has to be blocked against the previously raised bug (in this case, the test case has to be executed once the bug is fixed). If no such bug has been reported previously, the tester can report a new bug and fail the test case against the newly raised bug.
If no bug-reporting tool is used, the test case is written in a tabular manner in a file with four columns: Test Step No, Test Step Description, Expected Result, and Actual Result. The expected and actual results are written for each step, and the test case is failed at the step where it fails.
This file containing the test case, along with the screenshots taken, is sent to the developers for reference. As the tracking process is not automated, it becomes important to keep the information about the bug updated from the time it is raised until it is closed.
Source: http://ssoftwaretesting.blogspot.mx/2010/09/how-to-report-bug.html
jueves, octubre 4
Becoming a Lead, Pt. 2 – Learning To Trust
When I became a lead, one of the first real changes I noticed was the necessity of trusting those who worked for me. When you are no longer doing all of the work directly, you have to report what others tell you. This is very different from reporting on work you’ve done yourself. This can feel really strange and can lead to a desire to double check everything. Resist that temptation.
When you are an individual contributor, you *know* the truth of everything you say. If someone asks you whether a feature works or not, you know the answer. You are the one who ran the tests. You have first-hand knowledge of its state. Similarly, if someone asks you how far along programming for the new widget is, you know. You are the one doing the programming.
This all changes when you become a lead. If your manager asks you the state of a feature, you can only repeat what you were told. If the person telling you didn’t do things right, you’ll be passing on false information. This takes some time to come to grips with. It is a strange thing the first time you have to do this. Up until now, you’ve always known for certain the truth of your statements. Now, you do not and really cannot know.
How then should a new lead deal with this? You have to just trust people. Until they prove that they are not trustworthy, you need to just believe them. If they tell you that FeatureX is working, you have to report that. You don’t have the time to check up on everything, nor should you even if you did. You have someone reporting to you whose job it is to do this work. You must assume competence. If you do not, you’ll harm your relationship with that employee and you’ll run yourself ragged. Whoever hired these people verified that they were competent. Unless something changed, they still are. Feel free to spend some time with your employees trying to understand what they are doing and how they are approaching the problems. Do this to educate yourself, not to question their work. With time, you will learn the abilities of those reporting to you and become much more comfortable with their reports.
http://blogs.msdn.com/b/steverowe/archive/2006/03/09/547373.aspx
Test cases for unit testing
Which test cases should be written for unit testing?
These are the test cases that should always be covered when performing unit testing (see the sketch after this list):
- Positive test cases: correct inputs produce correct outputs.
- Negative test cases: incomplete or missing data is handled appropriately.
- Exception test cases: exceptions are thrown and caught appropriately.
Source: http://ssoftwaretesting.blogspot.mx/2010/07/what-are-three-test-cases-you-should-go.html
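A minimal sketch of the three kinds of cases, using Python's built-in unittest and a hypothetical divide function as the unit under test; the grouping into positive, negative, and exception cases follows the list above.

```python
import unittest

def divide(a, b):
    """Hypothetical unit under test."""
    if b == 0:
        raise ZeroDivisionError("division by zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_positive_case(self):
        # Positive case: correct input, correct output.
        self.assertEqual(divide(10, 2), 5)

    def test_negative_case(self):
        # Negative case: missing/invalid input is rejected in a defined way.
        with self.assertRaises(TypeError):
            divide(10, None)

    def test_exception_case(self):
        # Exception case: the expected exception is raised and caught appropriately.
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```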
lunes, octubre 1
Becoming a Lead, Pt. 1
At Microsoft, there are two classifications of people: Individual Contributors and Managers/Leads. Individual contributors are those who spend most of their time doing the work that makes software spring into existence. These are the programmers, testers, and program managers that create the products. Managers and Leads are those who spend a large part of their time making sure everyone else has time to do real work. Leads are usually those who have a few people reporting to them and Managers are those who have leads or other managers reporting to them. Those of us who are managers pay attention to schedules, product roadmaps, bug trends, people development, and, most importantly, making sure that our people are not blocked.
There comes a time in many careers where a person makes the transition from an individual contributor to a lead. There are some big differences and not all of them are obvious. This series of posts will cover many of those aspects. The things I talk about will be specific to the software industry but should be applicable in any occupation. Before I start, however, let me give some thoughts on the prerequisites for becoming a good lead.
Great leaders are grown, not born. No one comes out of school ready to be a great lead. It takes a solid foundation before you can effectively lead. If you try to lead without that foundation, you’ll end up being more Pointy Haired Boss than Great Leader. To get a solid foundation, spend some time (several years at least) doing whatever it is you expect to be leading people at. As this blog is about the software industry, spend several years programming or testing (or both) before trying to lead testers or developers. Once you become a lead, you will have less direct exposure to the technology. You’ll need a solid background to understand everything those reporting to you are working on.
Not everyone is cut out to be a lead. I’ve seen too many times when a great developer is forced to become a manager. What results is often the loss of a good programmer and the creation of a bad manager. I’m very much with the author of First, Break All the Rules. One of his basic premises is that people have certain talents and if you don’t have the right talent for a particular job, the best you can be is mediocre. For instance, I’m not artistically inclined. I could go to art school and get lots of training, but I’m never going to be a great artist. Likewise, if you are not naturally a leader, you’ll never make a great one. With the right talent, and the right training, a person can become a great leader. Without both, they never will.
How do I tell if someone is a natural leader? I have a thought experiment I like to apply. If I put together 3 people on a project and tell them what the goal is but don’t assign any of them roles, by the end of the project one of them will be contributing more to the overall direction than the others. Someone will be the person designating who works on what. This is not the bossiest person. Without a talent for leadership, the person will be seen as pushy and their leadership will be rejected. The good leader will lead without having to claim the mantle of leadership. Instead, it will be given to him/her by the others on the team. It may not be verbalized, but there will be one person the others look to for help making decisions. That’s the natural leader.
I once had a report who claimed he wanted to be a leader. I gave him responsibility for an area and another person to help him do the work. A few months later, the helper was doing his own thing. He was being given no direction. Needless to say, when this person came to me and asked why I didn't make him a lead, I just had to point to that incident. When given the chance to lead, he didn't take on the role. Leadership is a behavior, not a title. If you think you want to become a manager, do so because you enjoy leading, not because you want the title or the prestige.
http://blogs.msdn.com/b/steverowe/archive/2006/03/08/546210.aspx
Software test automation
Automated testing is almost an absolute necessity in projects that involve creating and developing software; it brings a wide range of benefits that are not available with manual testing.
Today there is a wide variety of tools for automated testing. They provide a clear advantage in terms of time and resource savings without compromising the quality of the test procedure, the accuracy of the final reports, or the efficiency of the testing process; they are effective and economically feasible. Using them makes it easier to work through the errors and failures that occur before the software product is released, or when customer assistance is needed to resolve the defects that are found.
Automated testing has greatly improved the basic process of refining web applications, and its benefits are available to developers. Some of its advantages are listed below:
- Reliable: tests perform exactly the same operations every time they run, avoiding possible human error.
- Repeatable: they show how the software reacts under different conditions when the same operation is tested over and over.
- Programmable: sophisticated tests can be programmed to expose the robustness of the software under test.
- Comprehensive: they make it easier to build a test environment that covers every feature of the software.
- Reusable: tests can be reused across different versions.
- Feasible: more tests can be run in less time with fewer resources.
- Fast: their execution is noticeably faster.
- Flexible: tests should be easy to understand, modify, and extend.
Automated tests are executed using tools that make a software tester's tasks easier.
Some software tests, such as intensive low-level regression tests, can be laborious and time-consuming to run manually. In addition, a manual approach may not be effective at finding certain kinds of defects, whereas automated testing offers an alternative that can find them. Once a test has been automated, it can be run quickly and repeatedly, which matters especially for software products with long maintenance cycles, since even relatively minor changes over the life of an application can break functionality that previously worked correctly. There are two approaches to automated testing:
- Code-driven testing: the public interfaces of classes, modules, or libraries are exercised with a wide variety of input arguments, and the results obtained are validated against the expected ones.
- User-interface testing: a test framework generates user-interface events, such as typing, clicking the mouse, and interacting with the software in other ways, and observes the resulting changes in the user interface, validating that the observable behavior of the program is correct (a sketch of this approach is shown just after this list).
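For the UI-driven approach, a sketch is shown below. It assumes Selenium WebDriver is installed and that a login page with these element IDs exists at the given URL; both the URL and the IDs are illustrative assumptions, not something from the original post. The point is only to show UI events being generated and the resulting page being observed, rather than the code being called directly.

```python
# Sketch of a UI-driven test, assuming Selenium WebDriver and a matching page exist.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_via_ui():
    driver = webdriver.Chrome()                      # drives a real browser
    try:
        driver.get("http://example.test/login")      # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("validuser")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "ok").click()
        # Observe the resulting UI change instead of inspecting internal state.
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()

if __name__ == "__main__":
    test_login_via_ui()
```

Locating elements by ID rather than by screen position is one way to make such tests less brittle, which relates to the record-and-playback drawbacks discussed further below.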
The choice between automating and executing tests manually, which components to automate, which automation tools to use, and other such decisions are critical to the success of the testing effort, and should normally come from a joint decision by the development, quality assurance, and management teams. An example of a poor choice of what to automate would be picking components whose features are unstable or whose development involves continuous change.
In contemporary software development there is a growing trend toward using frameworks of the xUnit family (for example, JUnit and NUnit), which run unit tests to determine whether various sections of the code behave as expected under specific circumstances. Test cases describe the checks to be run against the program to verify that it behaves as expected. Test automation is a key feature of agile software development, where it is known as test-driven development: the unit tests are written before the code that implements the functionality, and only when the code passes the tests is it considered complete. When changes are made, the programmer immediately discovers any defect that breaks the test cases, which lowers the cost of the fix. Two drawbacks of this style of work are:
- Sometimes the programmer's capacity is "wasted" writing the unit tests. The quotation marks are there precisely because ensuring the quality of the product is no waste at all.
- Normally only the basic requirements or the normal flow of the use case are tested, rather than all the alternative flows, since extending the tests beyond the base case raises the cost of the product. Sometimes the alternative flows are tested by a test team that is more or less independent of the development team.
Many test automation tools can record and play back user actions so that they can later be executed any number of times, comparing the results obtained with the expected results. The advantage of this approach is that it requires less software development; the drawback is that relying on recorded actions makes the tests less reliable, because they often depend on the label or position of a UI element, and when that changes the test case must be adapted or it will likely fail. One variant is testing web-based systems, where the test tool drives the browser and interprets the resulting HTML. A further variation is scriptless automation, which does not record and replay actions but instead builds a model of the Application Under Test (AUT) that lets the tester create tests simply by editing parameters and conditions.
Data-driven testing
Once we have a script for an automated test, with a little extra effort we can get much more benefit out of that same test. We can parameterize it so that the value entered in each field is taken from a data pool (a source of test data, such as a table or file), and in that way exercise different scenarios.
Our test data can be a table with the following columns, corresponding to the fields of the customer registration form: First_Name, Last_Name, Country_Name, City_Name, Address, Balance.
We would then have a data pool with those columns, and we can think up different combinations of values to try: null values, very large values, very long strings, negative balances, and so on. The same test can then be run with every combination of data we can think of just by entering the values in a table, and that run can be repeated whenever needed, giving information about the state of the application almost immediately.
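A minimal sketch of that idea follows, using Python's standard csv module. The register_customer function is a hypothetical stand-in for the registration form, and the data pool is kept in memory for brevity; in practice it would be an external table or file with the column names described above.

```python
import csv
import io

# Hypothetical function under test, standing in for the customer registration form.
def register_customer(first, last, country, city, address, balance):
    if not first or not last:
        return "error"
    if float(balance) < 0:
        return "error"
    return "ok"

# The data pool: in practice a table or file; here an in-memory CSV for brevity.
DATA_POOL = """First_Name,Last_Name,Country_Name,City_Name,Address,Balance,Expected
Ana,Lopez,Mexico,Guadalajara,Av. 1,100.0,ok
,Lopez,Mexico,Guadalajara,Av. 1,100.0,error
Ana,Lopez,Mexico,Guadalajara,Av. 1,-50.0,error
"""

def run_data_driven_test():
    reader = csv.DictReader(io.StringIO(DATA_POOL))
    for i, row in enumerate(reader, start=1):
        expected = row.pop("Expected")
        actual = register_customer(row["First_Name"], row["Last_Name"],
                                   row["Country_Name"], row["City_Name"],
                                   row["Address"], row["Balance"])
        status = "PASS" if actual == expected else "FAIL"
        print(f"row {i}: {status} (expected {expected}, got {actual})")

if __name__ == "__main__":
    run_data_driven_test()
```

Adding a new scenario (a null value, a very long string, a negative balance) is then just one more row in the data pool, with no change to the test code.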
If this test had to be executed manually with every interesting test value, it would take longer than running it automatically. In other words, the benefits of this automated test case show up directly in the current test cycle, not only in the next regression cycle. They are short-term benefits, which makes it very worthwhile to start automating.
According to what Cem Kaner says in one of his articles, automating a test takes between 3 and 10 times as long as executing it manually (the article is from 1997, but it still holds, or if anything the cost is now lower). Even if in subsequent test cycles these test cases have some maintenance cost, say also 10 times the cost of a manual execution, we still come out ahead if we run the regression test 11 times.