September 27, 2011

Accessibility Testing Checklist

Accessibility is the attribute of a software application that makes it possible for people with disabilities to use it. Accessibility is very important because a large number of potential users have limited abilities or disabilities. Examples of disabilities include visual impairments (from color blindness to partial sight to complete blindness), deafness (partial or complete hearing loss), mobility impairments (inability to use hands or other parts of the body) and neurological conditions (ADD, epilepsy etc.). Examples of people with limited abilities include people with limited education, older people with medical conditions and children. Thankfully, a number of assistive technologies (e.g. screen readers, speech recognition software and Braille terminals) are supported by most operating systems. As testers, it is our responsibility to ensure that people with limited abilities or disabilities are able to use these assistive technologies with the application that we are testing. So what must we test to ensure that the software application is accessible?

1. Every function and all content are available using only the keyboard (without using the mouse at all). There is no requirement for a minimum speed of individual keystrokes.
2. Each page, each section and each table has a title that indicates its purpose.
3. All text has a contrast ratio of at least 4.5:1 against its background (a calculation sketch follows this list).
4. Each abbreviation or unusual word is explained.
5. Each link is self-explanatory.
6. Each label in the application is self-explanatory.
7. All non-text content, such as images, audio and video, has an equivalent text alternative or transcript.
8. Sufficient time is available to the users to read the content and take action based on it.
9. Each error message has text that explains the error that occurred.
10. Help is available as text to the user from any place in the application.
11. It is possible to know the current location from anywhere within the application.
12. Color is not the only means of communicating any information. There is also an equivalent text indicating this information.
13. Similarly, shape/ size/ location on the page is not the only means of communicating any information.
14. There is no content that flashes more than 3 times per second.
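
For item 3 above, the contrast ratio can be computed from the WCAG 2.0 relative luminance formula. Here is a minimal Python sketch; the function names are mine, while the formula and the 4.5:1 threshold come from the guidelines:

def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as (r, g, b) in 0-255."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between a foreground and a background color."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Example: mid-grey text (#767676) on a white background is just above 4.5:1.
print(round(contrast_ratio((0x76, 0x76, 0x76), (0xFF, 0xFF, 0xFF)), 2))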

This is only a partial checklist for ensuring an accessible application. Refer to the Web Content Accessibility Guidelines for more information.

September 25, 2011

A/B Testing

Many websites use a type of software testing called A/B testing or split testing. The objective of this testing is to measure and positively influence the user experience on the website. It involves testing two distinct content or design options against each other. A/B testing is also performed on non-web assets such as marketing emails. Many large companies use A/B testing. So, what is A/B testing?


A/B testing determines the better of two content or design options by exposing them to real users and measuring the outcome with web analytics tools. It requires an existing benchmark to measure against. For example, let us say that you test an ecommerce website. Looking at the website logs or analytics, you know that only 20% of the users who start a checkout process actually complete it. You suspect that your multi-page checkout process could be a cause of checkout abandonment. Instead of directly changing the checkout process to a single page, you decide to execute an A/B test. For this, you set up two checkout options - the current multi-page checkout and a new single-page checkout. From now on, 50% of the users who start the checkout get the multi-page option and the others get the single-page option. In order not to confuse the users, you record which option was presented to which user and continue to provide the same option on their repeat visits too. You monitor the test for a month. Then you analyze the results and take the necessary action.

Possible test results
1. Multi-page Complete = 10%, Single-page Complete = 30%: This is strongly in favor of the single-page option. So, you decide to deploy this option.
2. Multi-page Complete = 18%, Single-page Complete = 22%: This is a minor difference. Maybe the cause of low checkout completion is not the number of pages. So, you decide to look at other elements and then run a different A/B test.
3. Multi-page Complete = 28%, Single-page Complete = 12%: This is in favor of the existing option. So, the number of pages is not the cause of low checkout completion. You decide to test another element, e.g. the actual text on the checkout pages.
4. Multi-page Complete = 10%, Single-page Complete = 12%: This is marginally in favor of the new option. But, more urgently, you need to find out why both options are performing below the benchmark.

Points to keep in mind
a. The two options should differ in only a very limited number of ways. If there are many differences, it is hard to pinpoint which element caused the improvement.
b. The sample sizes in A/B testing should be statistically significant. For example, results based on an A/B test on 200 users are realistic; a test on just 5 users is not. (A minimal significance check is sketched after this list.)
c. The two options should be tested simultaneously, rather than one after the other. If tested consecutively, it is possible for other factors (e.g. changed user patterns/ demographics) to come into play, which may skew the results.
d. Instead of just two options, multiple options can be tested. In this case, it will be called A/B/N testing (for N options).
e. Each and every element of the web application interface is a candidate for A/B testing. The elements to be considered include home page, actual text, font faces and sizes, colors, images, links, linked pages, placement of elements and so on.
f. A/B testing should be done on an ongoing basis. After improving one element of the user experience, the next should be targeted and tested.
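
As a rough illustration of point (b), here is a minimal Python sketch of a two-proportion z-test that checks whether the difference between the two completion rates is statistically significant. The visitor and conversion counts below are made-up numbers for illustration:

from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# 1000 visitors per option: multi-page converts 180, single-page converts 220.
z, p = two_proportion_z_test(180, 1000, 220, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 suggests a real difference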

I hope that you now understand A/B testing. There is an A/B test currently in progress on the home page of this blog. Can you spot it?

September 24, 2011

Code injection attacks

If you are going to do software security testing of applications, you must be aware of possible code injection attacks. This is especially important when testing web applications because they face a hostile environment: the internet. Code injection means supplying extra code to a running application with the purpose of modifying its behavior. This extra code can be in the form of HTML, JavaScript or SQL, or even unhandled input (e.g. special characters and long strings). Here are the types of code injection attacks.
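
One common type is SQL injection, where attacker-supplied text becomes part of a database query. As a quick illustration, here is a minimal Python sketch using sqlite3; the table and the malicious input are made up, but the pattern is the classic one: concatenated input becomes code, parameterized input stays data.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"  # classic injection payload typed into a login form

# Vulnerable: the input is concatenated straight into the SQL statement,
# so the OR '1'='1' clause makes the query match every row.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())   # returns all users

# Safer: a parameterized query treats the input as data, not as SQL code.
print(conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())  # []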

September 22, 2011

Common Security Terms

I thought I would mention some important computer security terms. It would be good to know and understand these terms, which are commonly used in computer security. See my video, Cyber Security Basic Terms and Concepts, or read on.

1. Authentication
It is the process of confirming the genuineness of an entity or data. For example, when a user logs into a website, the website tries to confirm that the user is genuine. That is, that the user is supplying the correct user name and password for an active account in the site database. Other user data may be involved in authentication, e.g. a security token, a Personal Identification Number (PIN) or a fingerprint/ retina scan.
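
Here is a rough sketch of password-based authentication in Python, assuming passwords are stored as salted hashes rather than plain text; the function names are mine:

import hashlib, hmac, os

def hash_password(password, salt=None):
    """Derive a salted hash of the password for storage."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password, salt, stored_digest):
    """Re-hash the supplied password and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(authenticate("correct horse battery staple", salt, stored))  # True
print(authenticate("wrong password", salt, stored))                # False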

2. Authorization
It is the grant of permissions to perform certain actions. For example, on a website, a general user may view their transactions and modify their profile. But if that user's account is inactive, they may only reactivate the account and not even view their own profile. An admin user may view and modify the settings of the website, but may not view any user's data.
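
A minimal sketch of authorization as a role-to-permission lookup; the roles and permissions are made up to mirror the example above:

PERMISSIONS = {
    "general": {"view_transactions", "modify_profile"},
    "inactive": {"activate_account"},
    "admin": {"view_settings", "modify_settings"},
}

def is_authorized(role, action):
    """Return True only if the role has been granted the requested action."""
    return action in PERMISSIONS.get(role, set())

print(is_authorized("general", "view_transactions"))   # True
print(is_authorized("inactive", "view_transactions"))  # False
print(is_authorized("admin", "view_transactions"))     # False: admins cannot see user data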

3. Cryptography
It is the use of techniques to ensure secure communication. Such techniques include encryption algorithms and digital signatures. For example, a website may require the data sent from the server to the client browser to be encrypted. This data is decrypted by the client and then rendered to the user.
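
A minimal sketch of symmetric encryption in Python, assuming the third-party cryptography package is installed (it is not part of the standard library):

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # shared secret between server and client
cipher = Fernet(key)

token = cipher.encrypt(b"account balance: 1,250.00")  # what travels over the wire
print(token)                        # unreadable without the key
print(cipher.decrypt(token))        # b'account balance: 1,250.00'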

4. Exploit
An exploit can be a wide range of things, e.g. a set of commands, a set of data or even a particular system. An exploit can be used against an insecure information system with the purpose of generating undesirable behavior in the system. Exploits always target a vulnerability (see below) in the system. For example, an attacker places exploit code on a trusted website. A user accesses the site from an insecure client machine, which results in the user's browser executing the exploit code. The attacker then mounts an attack using the user's machine and the user's credentials.

5. Firewall
A firewall is a software-based or hardware-based device that can be configured to allow genuine network traffic through and stop malicious traffic. For example, a firewall may intercept all network packets sent to an application, such as a browser. If the traffic originates from a known dangerous source, the firewall may drop all the packets, preventing harm to the client machine.

6. Identity
It is the set of data that is unique to a person. The digital identity of a person is the identity that is used online. For example, the identity of an employee may comprise attributes such as employee number, date of joining, first name and last name. The employee is then required to use this identity within the organization throughout their employment.

7. Penetration test
It is a test of the security of a network performed while pretending to be an external attacker. It is also called a pentest. The goal of a pentest is to gain access to, and some level of control over, a device within the network. A pentest discovers valuable information, e.g. the vulnerabilities present in the network and the effectiveness of automated network scanning software.

8. Physical security
It is the presence of physical barriers to information resources of an organization. Examples include gated access manned by security guards, access card swipes required to enter the premises and logged access to restricted areas within the premises.

9. Threat
It is a danger to the information resources of an organization. It can be intentional, e.g. an external attacker, or accidental, e.g. an earthquake. There are usually numerous threats to information resources, e.g. natural catastrophes, power outages, hardware/ software failures and malicious individuals/ organizations. The information resources of different organizations may face the same threats.

10. Vulnerability
It is a weakness within the information system of an organization. Examples include a software vulnerability arising from insufficient testing, a hardware vulnerability arising from insecure storage, and a physical site vulnerability arising from a location in an area prone to natural disasters.

If you are interested in many more computer security terms, you can visit the SANS glossary of terms.

September 11, 2011

Automation Criteria - guidelines on how to write test cases that will be automated


Everyone knows that a strong house can be built only on a strong foundation, never on a weak one. This post is a continuation of the earlier post, How to write automatable test cases? Test cases here mean "manual" test cases, the kind that a tester executes against the application under test by hand. Each of the following guidelines also applies to writing valid test cases in general, so no special effort is needed to prepare such test cases for automation.

1. The test case must be correct. It is obvious that the test case must be correct with respect to the workflow and the expected application behavior. This guideline is especially important for automation because, while a manual tester may notice and work around obvious incorrectness in a test case, an automated test script cannot. Spurious failures would be reported every time the automated test script is executed.

2. All the business scenarios must be covered by the test data. This refers to the completeness of the test case. The test case must contain test data for all applicable business scenarios that users would face.

3. The test case should have sufficient detail that another person can execute it without asking questions or needing clarifications. Pre-conditions, test steps, test data, expected results and post-conditions are important components of a test case. The test case should be written with the target consumer in mind. If the automation engineer has good knowledge of the application under test, the test case components may be summarized; if not, they should be spelled out in detail. (A small sketch after these guidelines shows how such a detailed test case can map onto an automated script.)

4. [Important] The test case must be capable of finding bugs in the current release of the application. If a test case has not caught a bug in the last few releases of the application, the likelihood of it doing so now is limited. Extra effort is required to automate test cases. So, why not automate the test cases with a high likelihood of catching bugs?

5. The test case should have been accepted by the team. The test case that would be automated should not be in a raw state e.g. just whipped up by someone. It should have been reviewed/ discussed, stored in the correct file and accepted by the team as a valid test case.

6. The test case should be under version control. Placing the test case in the version control repository tracks the changes made to it subsequently. Changes to the test case should be propagated to the automated test script, whether the latter is under construction or already built. Therefore, there must be a process to update the automated test script whenever the test case is revised and accepted.
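
To illustrate guideline 3, here is a minimal pytest sketch of how a detailed manual test case (pre-condition, test steps, test data, expected results) can map onto an automated script. The tiny transfer_funds() function is a made-up stand-in for the real application under test:

import pytest

def transfer_funds(balance, amount):
    """Stand-in for the application: reject invalid amounts, else debit."""
    if amount <= 0 or amount > balance:
        return {"status": "error", "balance": balance}
    return {"status": "success", "balance": balance - amount}

@pytest.fixture
def account_balance():
    # Pre-condition from the manual test case: the account holds 500.
    return 500

# Test data covering the business scenarios listed in the manual test case.
@pytest.mark.parametrize("amount,expected_status", [
    (100, "success"),   # normal transfer
    (0, "error"),       # zero amount
    (-5, "error"),      # negative amount
    (600, "error"),     # more than the balance
])
def test_funds_transfer(account_balance, amount, expected_status):
    result = transfer_funds(account_balance, amount)
    # Expected result copied from the manual test case.
    assert result["status"] == expected_status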

Correct, complete and up-to-date test cases are important assets for any testing team. Due attention is paid to such test cases. Similar attention should be accorded to the automated test script of such a test case. After all, it's the same test case, just written in a different format. Therefore, the automated test script should be reviewed/ discussed, accepted and placed in the version control repository. The results of each execution of the automated test script should be given similar attention.

September 10, 2011

Do managers have bigger brains?

Within the last couple of days, I have been intrigued by a news item saying that managers have bigger brains. I used Google to find the related articles. The purpose of this post is not to analyze whether or not "managing" results in a bigger brain, or even the implications of this discovery. The focus of this post is to analyze how simply using natural language to describe the result of a study can distort what is being communicated, and then to apply this analysis to requirements analysis in software testing.

Managers have bigger brains: Since this study was conducted by University of New South Wales researchers, I searched the UNSW website and found the original article. My comments were:
1. I found the following text in the article: "UNSW researchers have, for the first time, identified a clear link between managerial experience throughout a person’s working life and the integrity and larger size of an individual’s hippocampus – the area of the brain responsible for learning and memory – at the age of 80." So, they are not talking about the entire human brain but only one of its parts.
2. The article talks about finding a relationship between the size of the hippocampus and the number of employees managed. It does not state the exact relationship.
3. Per the article, the researchers used MRI scans on a sample of 75- to 92-year-olds. Around the middle, the article moves on to the relationship between exercise and learning and other topics presented at the symposium where this study was presented.

This news item also appeared on other websites, such as MSN India.
4. There I found the text "Staffers agree or not, managers do have bigger brains, says a new study." The UNSW article made no mention of staff agreeing that managers have bigger brains. So, did the research take the subjects' staff's agreement into account?
5. This news item says "Researchers, led by the University of New South Wales, have, in fact, for the first time, identified a clear link...". The previous article just mentions "UNSW researchers". So, were there teams from elsewhere involved in the research?

What can we take away from the analysis?
a. It is possible for people to over-generalize or over-specialize a description. So, we should probe further to find out the caveats and exceptional conditions. For example, a requirement may say "The user account shall be locked after 3 unsuccessful attempts". On probing further, we may find that this is true for all users except the administrator user. The system may be rendered inoperable if the administrator gets locked out :) (A small test sketch for this clarified requirement follows this list.)
b. It is possible for people to just provide references to other information, without naming it explicitly. Instead of relying on our assumptions, we should ask for the reference directly. For example, a requirement may say "Other requirements implemented in the previous release shall continue to be supported". We should ask for which release - the last major release, the last release (major or minor), the first release or any other release? If there is a conflict between the requirements of this release and the "previous release", is it okay to have the requirement of this release take priority?
c. We should question the source of the requirement. Has it come directly from the customer or a system analyst, or is it just a suggestion from somebody? In other words, is it a genuine requirement? For example, someone may suggest making reports completely configurable. It may be against the client's objectives for every user to come up with their own report format, leading to confusion. The suggestion should be declined. Of course, suggestions made by management need to be declined tactfully, if infeasible to implement.
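
As a small illustration of point (a), here is a minimal pytest sketch of how the clarified lockout requirement could be tested: ordinary accounts lock after 3 unsuccessful attempts, but the administrator account does not. The FakeAuth class is a made-up stand-in for the real authentication module:

import pytest

class FakeAuth:
    def __init__(self):
        self.failures = {}

    def login(self, user, password):
        ok = (password == "correct") and not self.is_locked(user)
        if not ok:
            self.failures[user] = self.failures.get(user, 0) + 1
        return ok

    def is_locked(self, user):
        # The administrator is exempt from lockout, per the clarified requirement.
        return user != "admin" and self.failures.get(user, 0) >= 3

@pytest.mark.parametrize("user,should_lock", [("jsmith", True), ("admin", False)])
def test_lockout_after_three_failed_attempts(user, should_lock):
    auth = FakeAuth()
    for _ in range(3):
        auth.login(user, "wrong-password")
    assert auth.is_locked(user) == should_lock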

September 03, 2011

Testing Training - How I train professionals

Ever since I started my career in software testing, I have been asked to give training sessions on testing topics. The topics on which I have trained people range from the basics of software testing, writing correct and re-usable test cases, rapid test execution, and bug tracking systems and defect management, to test automation approaches, performance testing, system security testing and test methodologies.

I do have a couple of advantages. First, I spent two years giving software training early in my career. Second, I have always been keenly interested in how people learn (probably because there are 6 teachers in my family, including my mother). I am a life-long student of the psychology of learning.
Let me explain the approach I use during training. This is for both your benefit and mine. You may benefit from these tips through increased recall by the participants after the session, which leads to more application of the training material in projects. I will benefit by looking up my tips to ensure that I continue to follow them and build on them further.

1. Know your topic
This is most important. You should know your topic really well. Not just a little more than the participants, but many times more. Why? In order for learning to take place, the participant has to believe in the superior knowledge of the trainer. For example, let us say that you are training on creating automated test scripts. You have done this before using a functional test automation tool. Someone in your class asks about parameterization and you are stuck. What will happen? What you say from that point onwards would not be credible. Therefore, give training only on topics that you know inside out.

2. Plan your training session
You should know the objectives that the training session should achieve. This information is available from the sponsor of the training. Also, if you know the participants, you can find out their current knowledge level and their expectations from the session. If you don't know the participants in advance, you should spend some time at the beginning of the training session to find out this information. For example, if you are training on system security, the participants need to know basic security concepts like information integrity, information confidentiality and information availability. The training session should cover the material required to take the participants' knowledge from the current state to the desired state.

3. Create your training material
Once you are clear on the objectives of the training, the next step is to design the sub-topics and training material. For example, if you would be training on writing test cases, you should cover the inputs to the test cases (requirements documentation, design documentation, prior test cases, test case formats etc.) and tips on how to write test cases (with all scenarios, pre-requisites, test steps and expected results). The training material can include practical examples of good test cases.
One caveat: The training material should not be too lengthy or cluttered. The main focus should be on the participants understanding the topic well. They can always look up the references if they need more details later.

4. Starting the training session
For learning to take place, the participants should be engaged in the training session. A useful way to do this is to explain the objectives of the training session in terms of what they already know and how the training will help them work in a better way. For example, if you are training on bug tracking systems and the objective is to use the bug tracking system more efficiently, you could mention that you will share tips on using the bug fields correctly and on setting up email notifications.

5. Treat the topic logically
If you plan your training session well, you would have identified the logical sequence of the sub-topics. Answer questions from participants as required but stay on course. For example, if training on performance testing, the logical sequence for you may be creating the automated test scripts, parameterization and correlation, modeling the test, test execution and analyzing the results. If you have moved to test modeling and you get a question on parameterization, answer it quickly and then say you are moving back to test modeling. Also, explain any new concept or term with more than one example as soon as you introduce it.

6. Make the session interactive
People like sessions during which they can ask a question any time. If the question is related to the training session, you should answer it. In fact, you should always pause for questions on the conclusion of a sub-topic. If a question is not directly related, you could park it. For example, if you are training on functional test automation approach and someone asks you which tool is best for automated testing, you could say that it is not directly related and you would take it offline later. You always need to be respectful of the participants' time that they have chosen to spend listening to you.
One trick I use to keep my session interactive is to pretend that a term or word has slipped my mind. This forces people to think and they come back with suggestions energetically.

7. Summarize the training session
After completing the session and answering questions, be sure to summarize the main points in the training session. The material covered must be linked to the objectives of the training session. For example, if you trained others on object repositories, you could say that by now the participants should have awareness of the types of object repositories and they should be able to make an informed decision to use the best type in their project.

8. Finishing up
Spend a little more time sharing the training material, providing further references and providing a contact for questions later on. Also, reiterate what is expected of the participants now that they have attended the training session.

Well, these were but some of the guidelines I use when training others. Your training session should be more productive if you use the above. What do you think is most important in making a training session successful?