Tuesday, May 25, 2010

Why do you need to articulate more?

You need to communicate your ideas, thoughts and progress whether you are a tester, a manager or the CEO of your company.

What is the benefit of articulating your thoughts? When people hear from you, they come to know where you are coming from, what your approach is and what actions you are going to take. Once they understand your position, they tend to support you. Why is this? Because your communication makes them think in a way similar to yours.

A person with the same knowledge as you but who does not articulate their thoughts receives none of these benefits. If you don't believe this, go back to your school days. Think of the popular students. Were they the ones who always stayed silent or were they the ones who had interesting things to say to the teachers and in your friend circle?

But, what happens when you articulate more? You influence more people and in a deeper way.

1. Tester: A tester who speaks of new test ideas in team meetings and writes informative bug reports and test reports is perceived well by his colleagues and manager. The team and the manager are impressed by the tester and increasing responsibilities/ authority come his way.

2. Manager: A manager who clearly conveys the objectives of the project to his team members and updates the management effectively runs a more successful project. The management is eager to assign bigger and more important projects to him.

3. CEO: A CEO who pitches the strengths of his organization well in front of the clients tends to win their confidence (and more business for the company). When the same CEO outlines the company strategy clearly to the employees, he aligns the employees to the company objectives well.

When you articulate more:

a. You force yourself to think about more topics of interest.
b. You study/ research these new topics. You learn.
c. You plan according to the objectives of your communication.
d. You design what you are going to say in structure and words suitable to your recipients.
e. You overcome shyness when you actually communicate.
f. You become mentally agile enough to answer your recipients' questions well.
g. You become faster and more effective in each step of the communication process.

So, seize every opportunity to articulate. And, articulate better than you have ever done before. Get more and more benefits.

Saturday, May 22, 2010

How to review better?

We often get artifacts such as test plans, test cases, test results or bug reports to review. This is especially true for those of us in leading or senior positions. Do you wish to be a good reviewer? If so, try NOT to fall into the following categories, as others have in the past.

1. The Silent Reviewer
You submit your material for review. There is NO response from the reviewer. You decide to follow up a bit later. Still, there is NO response. What the Silent Reviewer is not telling you is the reason they are unable to perform the review. Oftentimes, the Silent Reviewer assumes that you already know everything that you need to provide them to do the review. Diplomatically coaxing the reviewer to share the reason can be helpful. The trouble is that not many authors think of this approach or take the necessary time to ease the review along.
A dangerous version of this type of reviewer is the Silent and Angry Reviewer. They do not perform the review but tend to form a poor opinion about you.

2. The To-and-Fro Reviewer
You submit your material for review and look forward to completing other work. Little do you realize that you are going to spend a lot of time with this particular reviewer. The To-and-Fro Reviewer takes one look at the material and shoots off a message, e.g. "You have not used the correct template" or "You have not provided a specific piece of information". Your reply results in a request for still more information or, maybe, a review comment. You feel confused and exhausted after a long chain of communication with this reviewer. The trouble is that the To-and-Fro Reviewer performs the review at their whims and fancies.

3. The Opinionated Reviewer
You request a review and what do you receive? A personal comment on your lack of knowledge, lack of competence or something basic that you have missed or done incorrectly. You feel bad after hearing from the Opinionated Reviewer because they focus not on the material but on you. You may also find the Opinionated Reviewer transforming into another category after blaming you personally.

4. The Insensitive Copier
The Insensitive Copier wants to look good in the eyes of others. Even when they do a thorough and useful review, you find that they have copied their review comments to your manager, their manager, colleagues and the Head of the unit. Their perception of the quality of your work is now visible at the high levels. Worse, they may ask you to address their review comments immediately. If you do not address the review comments promptly, you risk looking lazy in the eyes of the management. So, you have to re-schedule your other (important) work and tackle the review comments pronto.

5. The Skimpy Reviewer
This reviewer provides you little value. Maybe, it took them less than two minutes to review your work. You get comments like:
a. It seems alright at first glance.
b. The formatting seems to be off in places.
c. I am sure that you have done a thorough job. Don't have any review comments as of now.

6. The "Busy" Reviewer
This reviewer performs the review, even to a high standard, but in their own time. What good are review comments on your test cases when testing is mid-way and there is little or no time to update the test cases? In extreme cases, you receive their review comments after the event is over e.g. the release has been deployed in production. There is little use for the review comments except as learning for the future.

If you do not agree with the above, try to remember a time when you were at the receiving end of one of these types of reviewers. How did you feel after the review? Drained, exhausted, bitter?

If the review is not too important or you are too busy to do it well, you should politely decline it giving the genuine reason. However, if a review is important, you should do it well. Schedule an available time for your review. Make sure that you have all the required material, do the review and pen your review comments using your knowledge and wisdom. Go through your review comments and make sure that they are professional, factual and focus on the material (and not on the author). And please, communicate your review comments promptly giving sufficient time to the author to address them correctly.

Tuesday, May 18, 2010

GUI Test Automation with RoutineBot

Recently, I came across a simple and effective GUI test automation tool named RoutineBot. RoutineBot works by searching for image patterns on the screen and performing actions on them. Other than that, it can emulate keystrokes as well as mouse movements and clicks.

A few salient points about RoutineBot are:
1. It works with both Windows applications and Web applications.
2. It supports Pascal, Basic and JScript scripting languages.
3. It provides warning and error messages to help debugging, returns results of the test and closes the test on timeout.
4. It provides Environment options (e.g. snapshot interval i.e. time during which you should navigate to the image, location of the log file etc.)
5. It currently comes with a free unlimited 30-day trial version.
6. The full version is priced quite economically when compared to other tools with similar features.

Here is my experience when I decided to try out RoutineBot.

The installer is a single file, routinebot.exe, of a small size (2.2 MB). On running it, the installer shows the standard wizard (confirmation to install RoutineBot, the License Agreement, selecting the destination folder, selecting the start menu folder, creating a desktop icon, the settings selected so far and the setup complete screen). The installation was done in a matter of seconds on my machine.

Applications tested

Since the example mentioned on the RoutineBot website used a Windows application (Calculator), I decided to use RoutineBot to automate a few test steps on a Windows application. However, I used the Notepad application instead. The steps that I decided to automate were:
a. Open Notepad.
b. Type some text in it.
c. Save the text in a file.

Opening an application is very simple in RoutineBot. The Execute command did that for me. Next, I needed to enter some text. The command to do so is EnterKeys. Note that EnterKeys can be used not only to type text but also other keystrokes like Ctrl, Alt, Enter, the Function keys and so on.

The next part is where I had to learn the RoutineBot approach of using image patterns. I needed to click the File menu item. Therefore, I changed the tab to Select Test Image. Then, I clicked on the Make snapshot button. This started a countdown of 5 seconds, during which I had to move the mouse pointer near the File menu item. Next, I used the Select and crop button to select just the File menu item text. Then, I saved the selected image (of the File menu item text) as a .bmp file (I could have chosen the .jpg or .gif format instead) using the Save sample button. Next, I selected the action of MouseFocuse (the default action). Now, I switched to the Script tab and placed a MouseClick command after the MouseFocuse command.

Similarly, I automated the click on the Save menu item (under the File main menu item), typed a filename and clicked on the Save button. Every time I added something to the script, I clicked the Run script button to ensure that it worked just as I wanted it to work.
My final script looked like:
EnterKeys('Hello world!');

Firefox browser
Next, I chose to automate the following test steps of a Web application:
a. Open Firefox browser.
b. Navigate to Google search engine.
c. Search for "RoutineBot".
d. Click on the first result (expected as the RoutineBot website's home page).

I tried the Record option but, as expected, it generated too many lines in the script. Therefore, I chose to create my own script by using the Add Action button. The working script looked like:
Execute('D:\Program Files\Mozilla Firefox\firefox.exe');
EnterKeys('www.google.co.in~'); //Note that the ~ stands for the Enter key

Licensing and Price
1. I found the price of RoutineBot reasonable. You can see the pricing information on the vendor's website.
2. Note the vendor's current licensing approach: "One registered copy of RoutineBot may either be used by a single person who uses the software personally on one or more computers, or installed on a single workstation used non-simultaneously by multiple people, but not both."
3. It currently comes with free updates (minor version changes) but paid upgrades (major version changes).

Other tips
1. RoutineBot did not display its complete UI on my screen, which had the resolution set to 800x600. I found that when I increased the screen resolution to 1024x768 or higher, there was no display problem.
2. If you add actions from the buttons (Add Action or Select Action) as well as write commands in the script yourself, you should look out for formatting errors. In any case, RoutineBot points out the exact location where the syntax is incorrect.
3. When you Hide RoutineBot, it not only becomes invisible but also disappears from the list of open applications (meaning, for example, that you cannot use Alt-Tab key combination to open it). On hiding, RoutineBot displays as an icon in the taskbar. You can click this icon to unhide it.
4. When you execute your script, you should ensure (programmatically or otherwise) that the RoutineBot Interpreter toolbar (with Stop, Pause and Hide buttons) does not overlap an image of interest. In other words, the image to be searched should be visible. Otherwise, RoutineBot will not be able to find the image and perform actions on it.
5. If the image pattern changes in the application, then your script may not work correctly. Therefore, for images likely to change, it is better to use smaller images (that stay constant) or to use the button's caption to click it.

This is my overall view of RoutineBot. I was able to learn to build test automation scripts using RoutineBot within a few minutes. Then, I read the documentation that is available on the tool website. After reading the documentation and trying simple examples, I became confident that I had mastered the basics of the tool. Since the trial period of the tool is far from over, I plan to automate the test cases of a web application with it and see for myself how well RoutineBot is able to automate them.

Disclaimer: I have reviewed RoutineBot as a potential user. I do not represent the tool vendor (AKS-Labs). If you want to evaluate RoutineBot for your needs, you should use your own judgment and contact the tool vendor directly.

Monday, May 17, 2010

What have I learnt after blogging for a year?

The timing of this post is almost right. I have been blogging for a little over a year now. Software Testing Space now has over 50 articles. Here are the top lessons that I have learnt while blogging about topics in software testing. You will find these tips handy if you blog or plan to blog in the future.

1. Getting ideas for posts is critical.
In the beginning, I spent a lot of effort listing and evaluating ideas to blog about. Then, I would select a topic that was interesting to me. That was a mistake. Your post is helpful only if the content is useful to your readers. Of late, I have found that a good source of ideas is the questions posed by readers. Of course, the response to a question should provide value and be interesting. You can also write on topics on which there is little, misleading or no information available online.

2. The title of the post is its most important part.
A reader is more likely to read the post if your title is interesting. A short title is better than a longer one. If you can frame the title as a question, even better.

3. Posts that draw on personal experience are more popular.
Posts should be interesting. A post based on personal experience captures a reader's attention more than one that just deals with a topic in a theoretical fashion. If you have no personal experience to draw upon, then you should provide the data supporting what you write. At least, you should explain the logic of your thoughts on the subject.

4. The post itself should not be too short or too long.
The post should cover the subject well. For example, if you are writing about how to create a good test strategy, you should first define what you mean by a test strategy. Then go on to point out the benefits of a good test strategy and finally outline the method by which you can create a powerful test strategy.
If your post is too long, a reader may just skim it. However, this is not always true. I have had readers write to me and sometimes they remember even a point made in passing in one of the earlier posts.

5. Grouping and linking information makes it easy to find it.
Earlier, I just managed with a chronological listing of all the posts. If a reader had to find a related post, they would need to look up the title of each prior post. Grouping your posts by category makes it easier for all. Further, linking to other resources (including previous posts) within the post is even better.

6. Add content regularly.
I have seen blogs that only have a handful (say, 2 or 3) posts or haven't been updated for a year or more. It is sad. If a reader visits your blog again, they expect to find some new useful or interesting content. These days, I aim to create at least 10 to 15 new posts every month.

7. Use the power of humor.
Writing with humor is an art. To be honest, I find writing with humor while being sensitive and respectful to all concerned rather challenging. However, I have written one humorous post and plan to write more in the future.

8. Finally, take help from others.
You should remember that there are many other experts in your field. They may be happy to share their knowledge with you. I found that interviewing other experts is a good way to gain knowledge and broaden your mind. I have started interviewing experts and more interviews should follow soon. I, too, write on other sites. If you would like me to write for you, you are welcome to let me know :}.

Saturday, May 15, 2010

Tips to build your test data

If you want to know what test data is, view my video on Test Data. Now, let us see the different types of test data:

Test data types

a. Application configuration data
Your application likely needs test data to function (or even launch). Examples of application configuration data include the information to connect to database(s), admin user name and password and server information (e.g. email server) in order to send email notifications.

b. Application data
Other than the above, an application may require application specific data. Examples of application specific data are the menus/ links (and their hierarchies) and item detail information (e.g. item name, item details and item price).

c. Customer or user data
The users of your application may create their own data within the application. Examples of customer or user data are user details (name, address and other personal details), user searches and user transactions (items browsed and purchased).

Tips to build test data

The questions that you should consider when creating or sourcing the test data for your tests are:

1. Does your application handle blank test data?
If your application does not find a particular piece of test data, it should not crash. It should either display an informative message (just the required information and no more; otherwise it may be a security risk) or display no data but keep the functionality working. For example, if the information to connect to a database is missing, the application should throw an appropriate error message. If the user has not entered required details, it should prompt the user to enter those details first.
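As an illustration, here is a minimal Python sketch of an informative-but-safe failure for missing connection information. The configuration keys and messages are hypothetical, not from any particular application:

```python
def connect_to_database(config):
    """Fail gracefully when connection information is missing: an
    informative (but not over-informative) message, never a crash."""
    required = ("host", "database", "user", "password")
    missing = [key for key in required if not config.get(key)]
    if missing:
        # Name the missing keys, but never echo values such as passwords
        raise ValueError(
            f"Database configuration incomplete: missing {', '.join(missing)}")
    return f"connected to {config['database']} on {config['host']}"

print(connect_to_database({"host": "db1", "database": "shop",
                           "user": "app", "password": "secret"}))
try:
    connect_to_database({"host": "db1"})
except ValueError as exc:
    print(exc)  # Database configuration incomplete: missing database, user, password
```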

2. Does your application handle invalid test data?
Data input to an application may be invalid in terms of data type (e.g. character data provided instead of the required numeric data), data size (insufficient or excessive size), data range (outside the valid range) or format (e.g. dots instead of spaces). There are several approaches to deal with invalid test data. An application may reject the invalid data, provide valid options for the user to select or attempt a conversion to valid data. In any case, your application should not accept invalid data.
Even if the application handles new invalid data correctly, you should test the existing data for invalid values. See the articles on database testing for details.
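To sketch the reject-or-convert idea in code (a hypothetical example; the field name and the 1..999 rule are illustrative, not from any real application):

```python
def validate_quantity(raw):
    """Validate a quantity field: must be a whole number within 1..999.

    Returns (ok, value_or_message). This sketch attempts conversion to
    the valid type first and rejects the input if that fails."""
    try:
        value = int(str(raw).strip())   # attempt conversion to valid data
    except ValueError:
        return (False, "Quantity must be a whole number")
    if not 1 <= value <= 999:           # enforce the valid data range
        return (False, "Quantity must be between 1 and 999")
    return (True, value)

# Invalid data should never be accepted silently:
print(validate_quantity("12"))    # conversion succeeds
print(validate_quantity("abc"))   # wrong data type, rejected
print(validate_quantity(0))       # outside the range, rejected
```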

3. Does your application handle valid test data across the entire range of such data?
Of course, your application should accept valid data. However, using a single test data value for a range is not sufficient. Your test data should include the boundary values and several values (you should ask yourself how many values are reasonable) in between.
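A simple way to enumerate such values is boundary value analysis. A sketch, assuming an inclusive numeric range:

```python
def boundary_values(low, high):
    """Return boundary test data for an inclusive numeric range: just
    outside, on, and just inside each boundary, plus a middle value."""
    return [low - 1, low, low + 1, (low + high) // 2, high - 1, high, high + 1]

# For an input field accepting 1..100:
print(boundary_values(1, 100))  # [0, 1, 2, 50, 99, 100, 101]
```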

4. Does your application store confidential user data?
If you use a copy of the current production data, you could be exposing confidential user information to unauthorized persons. You could consider using only the application configuration data and application data from production. Another way to prevent exposure of confidential user information is to obfuscate the customer data from production.
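Such obfuscation can be sketched as below. The field names are hypothetical, and a real masking scheme should follow your data privacy policy rather than this toy example:

```python
import hashlib

def obfuscate_user(record):
    """Mask confidential fields of a production user record while keeping
    non-sensitive fields usable for testing. Hashing keeps the masking
    deterministic (the same input always maps to the same mask) without
    exposing the original value."""
    masked = dict(record)
    for field in ("name", "address", "email"):
        if field in masked:
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()[:8]
            masked[field] = f"{field}_{digest}"
    return masked

user = {"id": 42, "name": "Jane Doe", "email": "jane@example.com", "city": "Pune"}
print(obfuscate_user(user))  # id and city unchanged; name and email masked
```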

5. Does the application handle the data volumes present or likely in production?
If your test data includes only a handful of data values and there are thousands of data values in production, your testing will not be realistic. You should match your test data quantity with that in production. Further, for realistic tests, your test data values should not be simple duplicates but vary as in production.
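One way to get both realistic volume and variety is to generate the test data programmatically. A sketch with illustrative fields and value pools:

```python
import random

def generate_users(count, seed=0):
    """Generate `count` varied user records, not simple duplicates.
    The generator is seeded so test runs are reproducible."""
    rng = random.Random(seed)
    first_names = ["Amit", "Jane", "Ravi", "Mary", "Chen"]
    cities = ["Pune", "Delhi", "Mumbai", "Chennai", "Kolkata"]
    return [
        {
            "id": i,
            "name": f"{rng.choice(first_names)} {chr(65 + rng.randrange(26))}.",
            "city": rng.choice(cities),
            "age": rng.randint(18, 80),
        }
        for i in range(count)
    ]

users = generate_users(10000)
print(len(users), len({u["name"] for u in users}))  # thousands of rows, many distinct names
```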

6. Is new data released into production from time to time?
You should test the data being released into production independently as well as with your application. After testing, you should update your test database with the new test data released to production.

Thursday, May 13, 2010

What types of testing should you perform?

There are many kinds of testing that one hears about. You would likely know about many types of testing e.g. white box testing and black box testing, unit testing, integration testing, system testing, system integration testing and so on. The types of testing that would be performed on the application are usually identified in the test planning phase. If you wish to know which kinds of testing are suitable in your project, consider the following questions:

1. Do you have intimate knowledge of the design, logic and code of your application?
If you do, you may want to opt for white box testing. Otherwise (or additionally), you can perform black box testing. You can base your black box testing on the requirements and your knowledge of how your application should function.

2. What types of testing are common in projects?
Most applications are written not as single lengthy source code but rather as a collection of units (e.g. routines, functions or modules). In unit testing, these units are tested in isolation. In integration testing, the interactions between the units are tested.
Further, every application has some functionality. You test the functional requirements in functional testing.
Usability testing is used to determine if the application is easy to learn and use. However, usability testing is more commonly performed on applications with a large user base.
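To illustrate the difference between unit and integration testing, here is a minimal Python sketch. The functions under test are hypothetical:

```python
import unittest

def net_price(gross, discount):
    """A unit: apply a percentage discount to a gross price."""
    return round(gross * (1 - discount / 100), 2)

def invoice_total(items, discount):
    """A second unit that depends on net_price."""
    return round(sum(net_price(g, discount) for g in items), 2)

class UnitTests(unittest.TestCase):
    # Unit testing: each unit is tested in isolation
    def test_net_price(self):
        self.assertEqual(net_price(100, 10), 90.0)

class IntegrationTests(unittest.TestCase):
    # Integration testing: the interaction between the units is tested
    def test_invoice_total_uses_net_price(self):
        self.assertEqual(invoice_total([100, 200], 10), 270.0)
```

You would run such a file with `python -m unittest`, which discovers and executes both test classes.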

3. Does one test run/ test cycle on your application take substantial time?
If so, you may want to perform sanity testing/ smoke testing to determine if the application is stable and functioning well enough to deserve lengthy or detailed tests.

4. Will your application be used by more than one user? Will it hold or display confidential data?
If so, you should perform security testing not only on your application but also on the application infrastructure (e.g. application servers, database and network).

5. Does your application need to satisfy specific non-functional requirements?
You can group the non-functional requirements. Each group may require a special kind of testing. For example, if yours is a web application that is supposed to run on multiple browser versions, you need to do browser compatibility testing. If your application needs to respond within specific times, you need to do performance testing.

6. Will the user need to install the application before using it?
If so, you should perform installation testing (install and uninstall of the application on each supported platform).

7. Is it a new application or an upgrade?
If it is an upgrade, testing the new features of the application is not enough. You need to ensure that the previously working features are still working correctly. You need to do regression testing.

8. Does your application communicate with other applications/ systems?
System testing (testing the application as a whole) is applicable to every application. In case your application interacts with other applications or systems, you should also perform system integration testing.

9. Do you have an independent team (excluding developers and testers) who can test the application?
If so, they can perform alpha testing on (major) releases before completion.

Tuesday, May 11, 2010

Sample database test plan

In my video, Database Testing Explained, I referred to the sample database test plan below. View the video on Database Testing first and then read on. Here are some tips to create a good database test plan:

1. Database testing can get complex. It may be worth your while to create a separate test plan specifically for database testing.
2. Look for database related requirements in your requirements documentation. You should specifically look for requirements related to data migration or database performance. A good source for eliciting database requirements is the database design documents.
3. You should plan for testing both the schema and the data.
4. Limit the scope of your database test. Your obvious focus should be on the important test items from a business point of view. For example, if your application is of a financial nature, data accuracy may be critical. If your application is a heavily used web application, the speed and concurrency of database transactions may be very important.
5. Your test environment should include a copy of the database. You may want to design your tests with a test database of small size. However, you should execute your tests on a test database of realistic size and complexity. Further, changes to the test database should be controlled.
6. The team members designing the database tests should be familiar with SQL and database tools specific to your database technology.
7. I find it productive to jot down the main points to cover in the test plan first. Then, I write the test plan. While writing it, if I remember any point that I would like to cover in the test plan, I just add it to my list. Once I cover all the points in the list, I review the test plan section by section. Then, I review the test plan as a whole and submit it for review to others. Others may come back with comments that I then address in the test plan.
8. It is useful to begin with the common sections of the test plan. However, the test plan should be totally customized for its readers and users. Include and exclude information as appropriate. For example, if your defect management process never changes from project to project, you may want to leave it out of the test plan. If you think that query coding standards are applicable to your project, you may want to include it in the test plan (either in the main plan or as an annexure).

Now, let us create a sample database test plan. Realize that it is only a sample. Do not use it as it is. Add or remove sections as appropriate to your project, company or client. Enter as much detail as you think valuable but no more.

For the purpose of our sample, we will choose a database supporting a POS (point of sale) application. We will call our database MyItemsPriceDatabase.


Introduction

This is the test plan for testing the MyItemsPriceDatabase. MyItemsPriceDatabase is used in our POS application to provide the current prices of the items. There are other databases used by our application, e.g. the inventory database, but these other databases are out of the scope of this test.

The purpose of this test plan is to:
1. Outline the overall test approach
2. Identify the activities required in our database test
3. Define deliverables


Scope

We have identified that the following items are critical to the success of the MyItemsPriceDatabase:
1. The accuracy of uploaded price information (for accuracy of financial calculations)
2. Its speed (in order to provide quick checkouts)
3. Small size (given the restricted local hard disk space on the POS workstation)

Due to limitations of time, we will not test the pricing reports run on the database. Further, since it is a single-user database, we will not test database security.

Test Approach

1. Price upload test
Price upload tests will focus on the accuracy with which the new prices are updated in the database. Tests will be designed to compare all prices in the incoming XML with the final prices stored in the database. Only the new prices should change in the database after the upload process. The tests will also measure the time per single price update and compare it with the last benchmark.
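The comparison described above can be sketched as follows (a Python sketch using sqlite3 as a stand-in; the table name, field names and XML structure are hypothetical):

```python
import sqlite3
import xml.etree.ElementTree as ET

def upload_mismatches(xml_text, conn):
    """Compare item prices in the incoming XML with the prices stored in
    the database; return items whose stored price differs as
    {item_id: (incoming_price, stored_price)}."""
    incoming = {
        item.get("id"): float(item.get("price"))
        for item in ET.fromstring(xml_text).iter("item")
    }
    stored = dict(conn.execute("SELECT item_id, price FROM item_price"))
    return {
        item_id: (price, stored.get(item_id))
        for item_id, price in incoming.items()
        if stored.get(item_id) != price
    }

# Tiny demonstration with an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item_price (item_id TEXT, price REAL)")
conn.executemany("INSERT INTO item_price VALUES (?, ?)",
                 [("A1", 9.99), ("B2", 4.50)])
xml_text = '<prices><item id="A1" price="9.99"/><item id="B2" price="5.00"/></prices>'
print(upload_mismatches(xml_text, conn))  # B2 was not updated: {'B2': (5.0, 4.5)}
```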

2. Speed test
After analyzing the data provided to us from the field, we have identified the following n queries that are used most of the time. We will run the queries individually (10 times each) and compare their mean execution times with the last benchmark. Further, we will also run all the queries concurrently (in sets of 2 and 3, based on the maximum number of concurrent checkouts) to find out any locking issues.
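The timing portion of such a speed test can be sketched like this (sqlite3 stands in for the real database; the query, data and benchmark value are illustrative):

```python
import sqlite3
import statistics
import time

def mean_query_time(conn, query, runs=10):
    """Run a query `runs` times and return the mean execution time in seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(query).fetchall()   # fetch all rows so the query fully executes
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item_price (item_id TEXT, price REAL)")
conn.executemany("INSERT INTO item_price VALUES (?, ?)",
                 [(f"I{i}", i * 0.5) for i in range(1000)])

benchmark = 0.05  # seconds; the last recorded benchmark (illustrative)
mean = mean_query_time(conn, "SELECT * FROM item_price WHERE price > 100")
print(f"mean {mean:.6f}s, within benchmark: {mean <= benchmark}")
```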

3. Size test
Using SQL queries, we will examine the database and the application queries to find out the following:
a. Items which are never used (e.g. tables, views, queries (stored procedures, in-line queries and dynamic queries))
b. Duplicate data in any table
c. Excessive field width in any table
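The duplicate data check (item b above) maps naturally to a GROUP BY ... HAVING query. A sketch against a hypothetical table, using sqlite3 for demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item_price (item_id TEXT, price REAL)")
conn.executemany("INSERT INTO item_price VALUES (?, ?)",
                 [("A1", 9.99), ("A1", 9.99), ("B2", 4.50)])

# Rows that appear more than once are candidates for clean-up:
duplicates = conn.execute("""
    SELECT item_id, price, COUNT(*) AS copies
    FROM item_price
    GROUP BY item_id, price
    HAVING COUNT(*) > 1
""").fetchall()
print(duplicates)  # [('A1', 9.99, 2)]
```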

Test Environment

The xyz tool will be used to design and execute all database tests. The tests will be executed on the local tester workstations (p no.s in all).

Test Activities and Schedule
1. Review requirements xx/xx/xxxx (start) and xx/xx/xxxx (end)
2. Develop test queries
3. Review test queries
4. Execute size test
5. Execute price upload test
6. Execute speed test
7. Report test results (daily)
8. Submit bug reports and re-test (as required)
9. Submit final test report

Roles and Responsibilities

1. Test lead: Responsible for creating this test plan, work assignment and review, review of test queries, review and compilation of test results and review of bug reports
2. Tester: Responsible for reviewing requirements, developing and testing test queries, executing tests, preparing individual test results, submitting bug reports and re-testing


Deliverables

The testers will produce the following deliverables:
1. Test queries
2. Test results (describing the tests run, run time and pass/ fail for each test)
3. Bug reports


The risks to the successful implementation of this test plan and their mitigation are as under:

Approvals

       Name        Role        Signature        Date
1. ____________________________________________________________
2. ____________________________________________________________
3. ____________________________________________________________

Friday, May 7, 2010

What is the difference between severity and priority?

After having seen this question floated in so many forums, I decided to write about it. First, the basics. These terms are used with respect to bugs. Severity and Priority are two attributes of a bug report. If you have seen How to report bugs effectively, you can see simple definitions of these terms. I have explained the difference between severity and priority in detail in my short video, Severity and Priority in Software Testing.

Here are the main differences between severity and priority:
1. Severity: In simple words, severity depends on the harshness of the bug. Priority: In simple words, priority depends on the urgency with which the bug needs to be fixed.

2. Severity: It is an internal characteristic of the particular bug. Examples of high severity bugs include: the application fails to start, the application crashes or the application causes data loss to the user. Priority: It is an external characteristic of the bug (that is, based on someone's judgment). Examples of high priority bugs include: the application does not allow any user to log in, a particular functionality is not working or the client logo is incorrect. As you can see from these examples, a high priority bug can have a high severity, a medium severity or a low severity.

3. Severity: Its value is based more on the needs of the end-users. Priority: Its value is based more on the needs of the business.

4. Severity: Its value takes only the particular bug into account. For example, the bug may be in an obscure area of the application but still have a high severity. Priority: Its value depends on a number of factors (e.g. the likelihood of the bug occurring, the severity of the bug and the priorities of the other open bugs).

5. Severity: Its value is (usually) set by the bug reporter. Priority: Its value is initially set by the bug reporter. However, it can be changed by someone else (e.g. the management or a developer) at their discretion.

6. Severity: Its value is objective and therefore less likely to change. Priority: Its value is subjective (based on judgment). It can change over a period of time as the project situation changes.

7. Severity: A high severity bug may be marked for a fix immediately or later. Priority: A high priority bug is marked for a fix immediately.

8. Severity: The team usually needs only a handful of values (e.g. Showstopper, High, Medium and Low). Priority: In practice, new values may be created (typically by the management) on a fairly regular basis. This may happen if there are too many high priority defects; instead of a single High value, new values may be created such as Fix by the end of the day, Fix in the next build and Fix in the next release.
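To make the independence of the two attributes concrete, here is a minimal sketch in Python. The `BugReport` class and the example values are purely illustrative (not taken from any particular defect tracker); they show how a low severity bug can carry a high priority and vice versa.

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    summary: str
    severity: str   # internal: how harsh the failure itself is
    priority: str   # external: how urgently the business wants it fixed

# A cosmetic defect (low severity) can still demand an urgent fix
# (high priority) because of its business impact.
logo_bug = BugReport(
    summary="Client logo is incorrect on the home page",
    severity="Low",
    priority="High",
)

# A harsh defect in an obscure area may be severe yet not urgent.
obscure_crash = BugReport(
    summary="Crash when importing a legacy archive file",
    severity="High",
    priority="Medium",
)
```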
I hope that you are now clear about the difference between severity and priority and can explain the difference to anyone with ease.

Thursday, May 6, 2010

Excuses given by testers when bugs are reported by clients

Here is a list of excuses given by testers when they fail to report or even find (important) bugs.

I found this bug BUT...
1. It was not approved for submission (by the test lead/ test manager/ fellow testers/ programmer).
2. The bug report was rejected. (never mind the reason for rejection!)
3. This bug is reported but as part of another bug report which is still open.
4. I did not report it because it is intermittent in nature.
5. I reported it verbally due to lack of time.
6. So many bugs are still open. It would not have made sense to report yet another bug.

I did not find this bug BECAUSE...

7. I was not informed that this functionality is complete (and to be tested).
8. This bug is only visible with negative testing and all our test cases are positive test cases.
9. There is no existing test case to find this bug.
10. The test case to find this bug is not in our test plan.
11. This bug can be found only by the client's test cases which we do not have.
12. This functionality was blocked during my test.
13. I have tested this module briefly (I was just assigned this module OR this module was re-assigned to another tester quite early).
14. I have been busy re-testing the numerous bug fixes.
15. They stopped the testing before I had the time to test this.
16. It worked fine the last time I tested it. They must have changed the application after that.
17. It worked fine with the test data that I used.
18. This bug is related to the latest changes in the requirements, about which I was not informed.
19. This bug is specific to the client's environment.
20. If you examine it carefully, it is not really a bug.

Don't worry; we have all used these excuses at one time or another. By the way, did you notice the similarity to the top replies given by programmers when their applications do not work correctly?
When you lead a team of testers, you should watch out for these remarks and make your team culture and test process robust enough to prevent these problems from occurring.

Wednesday, May 5, 2010

Regression testing

First, the basics. The term "regression" is used to describe the decay, weakening or degeneration of software. Software is modified to add enhancements, fix known defects, improve performance, make the code more maintainable or make the code compliant with patterns. But software can regress when it is modified. This can happen in a number of ways:
a. The faulty design is re-used to create an enhancement
b. Bug fixes get rolled back
c. Prior bug fixes do not work anymore due to changes in the environment (e.g. a new browser version)
d. A bug fix in one part of the software creates a new bug in the same area or a related area
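As a contrived illustration of point (d), here is a small sketch in Python. The functions and the scenario are invented purely for the example: a "fix" to one function introduces a new defect in a related feature that depended on the old behaviour.

```python
# Original: formats an amount for display; happens to accept negatives.
def format_price(amount):
    return "$%.2f" % amount

# "Fixed" after a bug report complained about negative prices...
def format_price_fixed(amount):
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return "$%.2f" % amount

# ...but the refund display feature relied on the old behaviour,
# so the fix creates a new bug in a related area: every refund
# display now fails.
def format_refund(amount):
    return format_price_fixed(-amount)
```

A regression test around `format_refund` would have caught this before release, which is exactly the kind of defect regression testing targets.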

View my Regression Testing tutorial or read on... 

Regression Testing

Regression testing determines whether new defects have been introduced into the modified software. As a tester, you are faced with a number of choices while performing regression testing:
1. At what levels should you perform your regression tests?
2. Which test cases do you execute (the existing test cases or new test cases or a combination)?
3. How frequently should you run the regression test?
4. Which part of the regression test should be automated and which part should be manually executed?
5. How do you ensure that your regression tests are effective?
6. How do you ensure that your regression tests are optimized?

You can arrive at the best way to do regression testing by selecting the best answers to the above questions. Therefore, let us discuss these questions in detail.

1. At what levels should you perform your regression tests?
As you know, software testing happens at a number of levels (e.g. unit testing, integration testing, system testing and so on). Since the focus of testing is different at different levels, you are going to have a better likelihood of finding regression defects if you perform regression testing at different levels. For example, you may find defects in unit testing which would not have been captured by your existing system tests. You may find defects in integration testing which would not have been captured by any unit tests.

2. Which test cases do you execute (the existing test cases or new test cases or a combination)?
First of all, you should be aware of the impact of the changes to the software (since the last regression test). Then select a suitable set of test cases that sufficiently exercise the impacted areas of the software.
You may want to use the existing test cases if they are sufficient in their coverage. Further, they should have been kept up to date with the prior changes to the software (e.g. updated based on the defects discovered in the past).
If you know that the existing test cases have skimpy coverage or are out of date, you may want to create new test cases for your regression test.
You should also ensure that there are no duplicate tests in your chosen regression test suite.
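The selection step above can be sketched as a simple mapping from impacted areas to test cases. This is only an illustration; the module names, test case IDs and the traceability map are all hypothetical.

```python
# Hypothetical traceability map: which test cases exercise which module.
tests_by_module = {
    "login":   ["TC-001", "TC-002"],
    "search":  ["TC-002", "TC-010"],
    "billing": ["TC-020"],
}

def select_regression_tests(changed_modules):
    """Pick every test case that exercises an impacted module,
    removing duplicates while keeping a stable order."""
    selected = []
    for module in changed_modules:
        for test_id in tests_by_module.get(module, []):
            if test_id not in selected:  # no duplicate tests in the suite
                selected.append(test_id)
    return selected
```

Note that TC-002 appears under both "login" and "search" but is selected only once, satisfying the no-duplicates rule.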

3. How frequently should you run the regression test?
This depends on the cost you incur in running your regression test and the value you receive out of it. Even if you don't run a regression test every day (note that some teams do), you should at least run one complete regression test before the software is released to production or to your client.

4. Which part of the regression test should be automated and which part should be manually executed?
Tests which are stable, repeated frequently, simple and require no intervention by the tester are good candidates for automation.
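Those four criteria can be expressed as a simple screening function. This is a sketch only; the attribute names and the frequency threshold are my own, not part of any standard.

```python
def is_automation_candidate(test):
    """A test is a good automation candidate when it is stable,
    repeated frequently, simple, and needs no tester intervention."""
    return (test["stable"]
            and test["runs_per_release"] >= 5   # illustrative threshold
            and test["simple"]
            and not test["needs_intervention"])

# A frequently repeated, hands-off smoke test: automate it.
smoke_login = {"stable": True, "runs_per_release": 20,
               "simple": True, "needs_intervention": False}

# An unstable, one-off exploratory session: keep it manual.
exploratory = {"stable": False, "runs_per_release": 1,
               "simple": False, "needs_intervention": True}
```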

5. How do you ensure that your regression tests are effective?
Your regression tests should be able to discover defects. Upstream regression tests should discover a greater number of defects, or more severe defects, or at least discover defects more easily than downstream tests.

6. How do you ensure that your regression tests are optimized?
If you are aware of the scope and timing of the build process, you can align your regression tests with it. This will lead to an optimal number of regression test runs.
Further, you should examine your regression test cases to eliminate duplicate test cases, merge test cases wherever possible and automate tests (based on the criteria above) to minimize the time/ effort it takes to run your regression test.

Want to learn more? View the video that explains Regression Testing.

Monday, May 3, 2010

How to test software without any requirements?

You have been handed an application with no requirements. You are supposed to test it. Can you do it? Sure, you can. Just look at the following test ideas or view the video, How to Test Software without Requirements?
  • Does the application launch?
  • Does the application have a help/ demo file? You can find abundant information in the help/ demo file to help you design your test cases.
  • Does the application accept user input?
  • Does the state of the application change on accepting each user input?
  • Does each control in the application work? Examples of controls include menu items, toolbar buttons, links, text boxes and buttons.
  • Does each control have a consistent look and feel (e.g. style, size, font and alignment)?
  • Is each label or text in the application spelt correctly?
  • Is it possible to copy/ paste data to/ from the application?
  • Does the application show all the displayed data (with or without scrolling)?
  • Is it possible to perform the tasks promised by the name of the application?
  • Does the application close?
  • (If it is a Windows application) Does the application follow the common standards for Windows applications?
  • (If it is Web application) Does the application follow the common requirements for Web applications?
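A few of these ideas translate directly into an automated smoke test. The sketch below drives a stand-in application object; the `DemoApp` class is invented purely so the example is self-contained, and in practice you would drive the real application under test instead.

```python
class DemoApp:
    """Stand-in for the application under test (hypothetical)."""
    def __init__(self):
        self.running = False
        self.text = ""

    def launch(self):
        self.running = True

    def type_text(self, value):   # simulates accepting user input
        self.text += value

    def close(self):
        self.running = False

def smoke_test(app):
    # "Does the application launch?"
    app.launch()
    assert app.running, "application should launch"

    # "Does the state of the application change on accepting user input?"
    before = app.text
    app.type_text("hello")
    assert app.text != before, "state should change on user input"

    # "Does the application close?"
    app.close()
    assert not app.running, "application should close"
    return "smoke test passed"
```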

I am sure that you can come up with more test ideas. And this is an extreme example. In the real world, even if you know nothing about the application for which you are going to design tests, you may have one or more of the following resources to help you besides the requirements specification:
1. A knowledge transfer (either in person or via a document) regarding the application
2. High level business requirements
3. Design documents
4. Business analyst or product manager
5. Project manager or developers
6. Prior versions of the application
7. Older requirements specifications
8. Past bug reports or customer complaints
9. Installation guide and release notes
10. Your domain/ industry knowledge
11. Laws/ statutory requirements that must be satisfied by the application

Open source test management tools

As the name indicates, test management tools are used to create and manage tests. A test management tool commonly contains features to create:
  • Requirements specification
  • Test cases (based on the requirements)
  • Test plan (usually some test cases grouped according to their priority)
  • Reports and metrics based on the test plan execution
  • A built-in defect management system or integration with external defect management systems

If you are looking for free/ open source test management tools, here are two candidates that you should seriously consider:

1. Testopia
It is the test case management extension for the popular defect management tool, Bugzilla.

2. Testlink
It allows creation of test cases and test plans, tracking test results and generating reports. It integrates with other defect management systems like Bugzilla and Mantis.

Saturday, May 1, 2010

Software Testing Humor - Funny things that testers hear!!!

Over time, one comes to learn about the things that testers in various companies hear from their project managers. Enjoy.

1. You need two days to write test cases!? You already have the requirements specification. Just copy and paste from it.

2. You don't have the latest requirements!? I am positive that we communicated the latest changes to all developers.

3. Remember that we are using the Agile methodology. Time is critical. Do not waste it by creating any test cases or bug reports. Just test and discuss the bugs directly with the concerned developer.

4. Note that the build you are testing can change any time. Even if you notice that the build has changed, just continue your testing as if nothing has happened.

5. There is no need to test the xxxx module. I have already had a [senior] developer test it and he said it is working fine.

6. You are new to the project. We will have another [senior] tester repeat your tests and compare both your results.

7. I want to be aware of each bug as it is found. In addition to logging the bug report, call me/ mail me as soon as you find a bug.

8. You have found a bug? Are you sure that you have tested the latest build [implying a very recent untested build]?

9. What bug did you say you have found? Please confirm each bug with the developer before raising/ logging it.

10. Why doesn't your bug report have a test case ID? Note that no bug will be accepted without an existing test case ID.

11. Your bug is a duplicate of bug ID xxxx. Note that NO [implying not even the first] duplicate bugs will be entertained by the developers.

12. Finally, it is 8 p.m. The developers have worked very hard to create this release. Now, they have all gone home. You can do all your testing. Just remember that the release has to be delivered to the client first thing tomorrow morning.

Okay, I admit that these things are not at all funny for the recipient at the time they are said. But, they are hilarious once you think or talk about them later :)