Saturday, May 21, 2011

Automate tasks in software testing

When the term automation is mentioned, it is common for people to think of automated tests. But time and effort can also be saved by automating other tasks, such as generating test data and reporting test results automatically. A few factors should be considered when deciding whether or not to automate a task. Last year, we had fruitful discussions on this topic within the STS group.

Let us see some candidate tasks for automation. Testers spend a lot of time on these tasks. If such tasks are fully or partially automated, the saved time can be spent on testing the system.

1. Test identification based on risk or other factors
List all the available test cases along with their related features, components and priorities. The automation can take inputs (e.g. the impacted features) and generate the list of test cases that should be executed on the new application build.
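As a minimal sketch of this idea (the test-case fields and feature names below are illustrative, not from any particular test management tool), the selection can be a simple filter over the test inventory:

```python
# Hypothetical test inventory; in practice this would be queried
# from your test management system.
test_cases = [
    {"id": "TC-001", "feature": "login", "priority": 1},
    {"id": "TC-002", "feature": "search", "priority": 2},
    {"id": "TC-003", "feature": "checkout", "priority": 1},
]

def select_tests(test_cases, impacted_features):
    """Return the test cases covering impacted features, highest priority first."""
    selected = [tc for tc in test_cases if tc["feature"] in impacted_features]
    return sorted(selected, key=lambda tc: tc["priority"])

# A build that impacts login and checkout yields only those tests:
for tc in select_tests(test_cases, {"login", "checkout"}):
    print(tc["id"])
```

The same filter can be extended with risk scores or component dependencies as additional keys on each test case.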

2. Test estimation
Automation can be used to help estimate the test effort and duration. This can be done using your preferred estimation approach, whether by querying historical data for actual efforts and durations or by applying a custom formula to estimate test effort and test duration.
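A custom formula of the kind mentioned above could be as small as the sketch below. The per-test minutes and retest factor are assumed placeholder values; a real implementation would derive them from your own historical data:

```python
def estimate_effort(num_tests, minutes_per_test=15, retest_factor=1.3):
    """Estimate total test effort in person-hours.

    minutes_per_test and retest_factor are illustrative defaults;
    replace them with averages mined from past test cycles.
    """
    return num_tests * minutes_per_test * retest_factor / 60

# e.g. 120 tests at 15 minutes each, with 30% retest overhead
print(estimate_effort(120))
```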

3. Generation of test data
Build a library of business rules for your test data. Build the initial seed data manually. The automation can use the seed data and generate test data based on the chosen business rules. Since test data generation usually consumes much time, the automated generation can be performed ahead of time. Then, the pre-generated test data can be used directly during test execution.
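As a sketch, suppose the seed data is a couple of hand-built lists and the business rule is "order amounts must be positive and within a credit limit". (The record fields, the credit limit and the seed values are all assumptions for illustration.) Pre-generation then looks like:

```python
import random

# Hand-built seed data
seed_customers = ["Alice", "Bob"]
seed_countries = ["US", "DE"]

CREDIT_LIMIT = 10000  # business rule: order amount must stay within this

def generate_orders(n, seed=42):
    """Generate n order records from the seed data, obeying the amount rule.

    A fixed seed keeps the generated data reproducible across runs.
    """
    rng = random.Random(seed)
    orders = []
    for i in range(n):
        orders.append({
            "order_id": i + 1,
            "customer": rng.choice(seed_customers),
            "country": rng.choice(seed_countries),
            "amount": round(rng.uniform(1, CREDIT_LIMIT), 2),
        })
    return orders
```

Because the seed is fixed, the same data set can be generated ahead of time and reproduced later when debugging a failure.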

4. Generation of bug reports
This automation can be built into the test automation framework. Whenever an automated test script confirms an error, it logs into the bug tracking system and reports a bug with the required information, such as the bug title, steps to reproduce, test data used, environment used and so on.
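The shape of such a hook might be as follows. The field names and the `create_issue` method are illustrative, not any specific bug tracker's API; in practice the client would wrap your tracker's REST interface:

```python
def build_bug_report(test_name, steps, test_data, environment):
    """Assemble the fields a bug tracker typically needs.

    Field names here are placeholders, not a real tracker's schema.
    """
    return {
        "title": f"Automated test failed: {test_name}",
        "steps_to_reproduce": steps,
        "test_data": test_data,
        "environment": environment,
    }

def report_bug(payload, tracker_client):
    """tracker_client is any object exposing create_issue(payload),
    e.g. a thin wrapper over your bug tracking system's API."""
    return tracker_client.create_issue(payload)
```

The framework's failure handler would call `build_bug_report` with the failing test's context and pass the payload to the tracker client.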

5. Generation of test reports
This automation can execute queries against the test management system (and the bug tracking system, if different) and generate test reports. It can even distribute them by email or publish them to a website.
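Once the execution results are queried, formatting the report is straightforward. A minimal sketch (the result tuples stand in for whatever your test management system returns):

```python
from collections import Counter

# Stand-in for rows queried from the test management system
results = [
    ("TC-001", "pass"),
    ("TC-002", "fail"),
    ("TC-003", "pass"),
]

def summarize(results):
    """Build a plain-text summary suitable for email or a web page."""
    counts = Counter(status for _, status in results)
    lines = [f"Total tests executed: {len(results)}"]
    for status in ("pass", "fail"):
        lines.append(f"{status}: {counts.get(status, 0)}")
    return "\n".join(lines)

print(summarize(results))
```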

6. Release notes preparation
Release notes contain both static and dynamic data. This automation can execute queries on the test management system to retrieve the dynamic data such as features passed, bug fixes passed and known bugs.
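Since the static parts of the release notes rarely change, a template plus the queried dynamic data is enough. A sketch (the template fields are assumed for illustration):

```python
# Static skeleton of the release notes; only the placeholders change per release.
TEMPLATE = """Release {version}
Features passed: {features}
Known bugs: {known_bugs}"""

def render_release_notes(version, features, known_bugs):
    """Merge dynamic data queried from the test management system
    into the static template."""
    return TEMPLATE.format(
        version=version,
        features=", ".join(features),
        known_bugs=", ".join(known_bugs) or "none",
    )

print(render_release_notes("2.1", ["login", "search"], []))
```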

Before prioritizing the automation development, always analyze the following factors.
a. Degree of automation achievable
b. Skills required to automate
c. Effort required to automate
d. Number of proposed users
e. Effort required to train users
f. Effort required to execute automation
g. Effort required to maintain automation
h. [Important] Manual effort saved

In the future, I see a number of such tasks automated with the help of vendor tools or bespoke in-house automation.
Let me know if you liked this post. I would love to know your thoughts on this topic.

Sunday, May 15, 2011

Software testing myths


Myths abound in the software development industry. Software testing is no exception. People tend to hear such statements and pass them on to others without the full context. Not only that, they even tend to rely on such myths when making their own decisions, especially when in a hurry.

1. A ratio of 1 tester to 5 developers is enough to get good test coverage.
This myth may be true in long-term new development projects with multiple stages of testing (e.g. SIT, alpha, beta and UAT). This ratio may fall short in:
a. Maintenance projects with highly inter-dependent components, requiring testing of all impacted components
b. Projects requiring huge effort to create an independent test environment
c. Projects requiring extensive regression tests (due to, say, business impact, contracts or regulations)

2. Anyone can test software.
Anyone can test software if they know what to do. A professional tester is well-versed in the project requirements, software testing concepts, test design and test execution techniques and is also an assertive communicator.

3. Developers can test software better than testers. After all, they are the ones who developed it.
While developers know the code they develop intimately, they may not be the best people to test it (or test code written by other developers), especially at the integration level or system level. Testers execute their tests in a clean test environment with a fresh mindset, doing both positive and negative testing. Developer testing complements (and does not substitute) testers' testing.

4. Test cases are only derived from requirements.
Requirements are only one input to write test cases. A professional tester uses many other inputs like design documents, standards, checklists, prior test cases, past bugs reported by the customers and prior versions of the application/ similar applications.

5. Testers need only execute a set of test cases.
A static test case suite is unlikely to lead to the discovery of new bugs in the application. Before execution, the test suite should be updated by removing the useless or redundant test cases, completing or correcting the existing test cases and adding new test cases to increase the likelihood of discovering fresh and interesting bugs in the application.

6. Tests already executed need not be repeated in the release.
It depends. Tests need not be repeated if the new builds do not impact the application areas those tests cover. Much more likely, though, some tests need to be re-executed to regain confidence in those areas.

7. Testers test every permutation and combination of inputs a user can provide to the application.
This is not possible except in very trivial cases. Even testing all possible inputs of a form with a few text, date and numeric input controls within the given time may well be impossible. What testers actually do is design the minimum number of input sets that test each input control. The specific inputs to each control are limited by using testing concepts like equivalence partitioning.
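To make the equivalence partitioning idea concrete, here is a small sketch for a hypothetical age field that accepts 18 to 65. Each input falls into one of a handful of classes, and one representative value per class is enough to exercise the control:

```python
def age_partition(age):
    """Classify an input into equivalence classes for an age field
    accepting 18-65. The boundaries are illustrative."""
    if not isinstance(age, int):
        return "invalid-type"
    if age < 18:
        return "below-range"
    if age > 65:
        return "above-range"
    return "valid"

# One representative per class (including the boundaries) covers the field:
representatives = [17, 18, 65, 66, "x"]
for value in representatives:
    print(value, "->", age_partition(value))
```

Instead of trying every age from 0 to 120, five values cover all the behaviorally distinct cases.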

8. Testers can test equally well (if not better) when the time is short.
Nobody can perform a thorough job if rushed. Testers are no exception. If the available time is reduced too much, a tester prioritizes the tests to execute and keeps executing the higher priority tests until time runs out.

Image: Idea go / FreeDigitalPhotos.net

Monday, May 2, 2011

Simple testing

Lately (actually, for a while now), I have been looking for ways to increase my productivity. One of the ways I recently came across is simplification. Why?

1. Better focus, especially on creative or complex tasks
2. Less stress, due to elimination of so many unimportant tasks
3. More time on hand

So, how to simplify our software testing tasks? Here are some examples. Design your own according to your unique situation.
1. Analyze your work and establish what is important in your role. Put it down in a sentence. For example, maintaining your test suite, executing it and reporting defects may be critical to your role; formatting your status report or pointless gossiping with your colleagues is definitely not.
2. Allow yourself time to plan. Plan simple i.e. think about the easiest set of actions that will complete your task.
3. Guard your time. If someone interrupts you, dismiss the interruption as soon as possible. If someone approaches you for help, consider helping once you are done with your own tasks, or say no.
4. Own your work. If a task had a problem and requires re-work, don't blame other people; they would respond with their reasoning and you would end up spending time on that exchange before attending to the problem. Instead, just fix the problem at once and move on.
5. Use software apps to your advantage. See examples.
a. Block your mail calendar for important tasks in advance and in order of importance. This would allow you to see your upcoming tasks for the next day, week or month and you can better prepare for them.
b. Check your email only a limited number of times in the day. If you leave it open, you will get distracted every time a new email arrives, and you will have the urge to stop your current task to read or act on it.

This was a short and simple post and it made me feel better writing it.