August 31, 2010

Why must you know your product's competitors?

Why does a customer purchase (a license of) your product? More likely than not, the primary reason boils down to one of the following:
1. Getting something new, e.g. increased productivity, efficiency or resources
2. Overcoming a risk, e.g. miscommunication or failure to meet statutory requirements

However, your product may not be the only solution available in the market that satisfies the customer's primary requirement. If your product is not well-known, it has to compete with the leading products in its category. If your product is the market leader, it may have to compete with products catering to specific niches. Even if yours is a one-of-a-kind product, there may be a proven manual system that it has to compete with.

When the customer evaluates or first uses your product, it is no leap of imagination to think that s/he would be actively comparing it with its competitors. If you test software, you can ill afford to ignore your product's competitors. Software testing should include not only testing against your organization's or customer's requirements but also checking how the product fares with respect to its competitors.

Knowing the product's competitors is not the prerogative of product managers alone. As software testers, we pretend to be customers using the product. Therefore, just like customers, we should be aware of the alternative products. Only then would we come to know how our product functions on its own and how it stands with respect to its peers.

August 21, 2010

Find a Bug

Scenario: You are a competent tester who has just joined a company and are testing your first project there. The application you are testing is a financial web application. During test execution, you make the following observations. Rate each observation on a scale of 1 to 10, 1 being definitely not a bug and 10 being certainly a bug.

1. The logo on the home page and other pages of the application is not the client's. For example, if the client were Company A, the logo shown is that of another company, say Company B. The previous version of the application does carry your client's logo.

2. There is a group of links on the pages of the application. When you click any link in this group, it opens a page with the "Not Found" error.

3. You log into the application as a new user. You make a deposit of $1,000 into your account. When you visit the Transactions page, it shows your transaction of $1,000. However, when you visit the Balance page, it shows your available balance as $2,000.

Should you go ahead and report these observations as bug reports pronto?
.
.
.
.
.
.
.
.
.
.

My advice is no. All these observations mean is that you should investigate further. For all you know, there might be perfectly reasonable explanations for them. For example:

1. Your client is getting this application developed for a partner organization, or for their parent organization. Hence the different logo. A change request to update the logo is already being implemented, and you will receive it soon.

2. The content behind this group of links will be supplied by an external entity. During your test, you only needed to check that each link points to the correct address. The external entity will populate these links before the test finish date, after which all the links in the group will work.

3. There is indeed a requirement which states that when a new user makes their first deposit of between $1,000 and $2,000, the client organization doubles the balance by making an equal deposit into the user's account. However, you are not aware of this requirement. (A small sketch of this rule follows below.)
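To make the third explanation concrete, here is a minimal sketch of how a tester might encode the deposit-matching rule as an explicit check once the requirement is confirmed. The Account class, its methods and the thresholds are hypothetical, made up for illustration only; a real test would of course run against the actual application rather than a toy model.

```python
# Hypothetical sketch of the assumed first-deposit matching rule.
# Everything here (class, method names, thresholds) is illustrative,
# not taken from any real requirement document.

class Account:
    """Toy model: the bank matches a new user's first deposit
    if it falls between $1,000 and $2,000 (inclusive)."""

    MATCH_MIN = 1_000
    MATCH_MAX = 2_000

    def __init__(self):
        self.balance = 0
        self.deposit_count = 0

    def deposit(self, amount):
        self.deposit_count += 1
        self.balance += amount
        # Assumed rule: only the very first deposit is matched.
        if self.deposit_count == 1 and self.MATCH_MIN <= amount <= self.MATCH_MAX:
            self.balance += amount


def test_first_deposit_is_matched():
    account = Account()
    account.deposit(1_000)
    # Matches the observation in the scenario: $1,000 in, $2,000 balance.
    assert account.balance == 2_000


def test_second_deposit_is_not_matched():
    account = Account()
    account.deposit(1_000)
    account.deposit(1_000)
    # Only the first deposit is matched, so the balance is $3,000, not $4,000.
    assert account.balance == 3_000


if __name__ == "__main__":
    test_first_deposit_is_matched()
    test_second_deposit_is_not_matched()
    print("Both checks pass under the assumed rule.")
```

The point of such a sketch is not the code itself but the discipline: once the requirement is known, the "surprising" balance becomes an expected, testable outcome rather than a bug.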

The objective of this post was to help you realize how using heuristics alone may lead you down the wrong path. However, I did not state this objective at the beginning of the post because I did not want to bias you.

Testing is an investigative process. We should observe the application carefully, formulate theories when what we see is not in line with our mental model, gather more information from multiple sources, and analyze and test our theories before pronouncing what we see a discrepancy or a bug in the application.

P.S. The idea for this post came after reading "ICICI Bank ATM bug … Try it!" by Amit Jain, a member of the Software Testing Space group.

August 14, 2010

As you wish!

Once upon a time, there was a tester who had just joined a large end-user company. He had a good profile and was promptly inducted into an ongoing project to develop an application for internal use across offices of the company. Things started moving slowly for this person.

Though nobody in his team realized it at the time, this tester was quite capable. He was also talented, hardworking and methodical in his working style. He quickly introduced himself to each of the team members (albeit in his hesitant way). During his initial days in the project, he made it a point to read (and re-read) all the documentation related to the project. Whenever he had questions, he made the effort to approach the relevant team member and clarify his doubts.

Some time passed, and he was assigned some modules of the project to test. Since the tester was quite knowledgeable about the requirements by this time, he could understand the existing test cases well. He was also able to see the shortcomings in these test cases and made refinements to his own copy before executing them. He found defects. In fact, he found a lot of defects. This led to a problem.

The Development Lead on the project was an influential person. He was always "in the know". He was also widely recognized in the company as a competent developer and business domain expert. But somehow, he did not like so many defects being logged by our tester against "his" application and "his" team. He called the tester for a one-on-one meeting and questioned his work. Did the tester really understand the business objectives of the application? Should the tester have been spending more time validating the application? Were his defects really valid at this point in time?

Our tester did not like confrontation. So, when the Development Lead laid down the following new rules, our tester just said "As you wish" and smiled meekly:
1. The tester would now only test the modules specifically assigned to him and not anything else.
2. If the tester thought he had found a bug, he was to approach the Development Lead, take an appointment and present his bug. Only after the Dev Lead had analyzed and agreed to the bug could the tester report it in the bug tracking system.
3. The tester was not to approach any team members directly. If he had questions, he was to take permission from the Dev Lead and only then meet the team members.

The Development Lead did not, however, stop at that. During his status reporting meetings with the management, he took time to explain how he thought of our tester as somewhat of a loose cannon. He said that the tester needed to be monitored closely, but assured the management that he had put the tester on an "improvement plan".

Life had become tougher for our tester. While testing, he could see bugs in many areas of the application, but he had to keep them to himself. When he attended meetings with the team, his mind would be full of ideas. However, many times he hesitated to share those ideas. Other times, when he did gather the courage to speak up, he was quickly interrupted by the Dev Lead: "We will talk about it later". The tester would just smile weakly and say "As you wish". Many other team members too started taking little notice of the things the tester had to say. When the tester found bugs in the modules assigned to him, he would make careful notes and take them to the Dev Lead for discussion. The Dev Lead rejected most of the bugs as "not realistic enough", "too early to fix" or "doubtful". The result was that the tester was allowed to report only the most obvious bugs. But our tester kept plodding on without regrets.

Time passed. A number of releases were made to downstream teams. Finally, the application was deployed to production. The Dev Lead was now also in charge of supporting the application in production. In the first few months, internal customers reported several issues in the application. These were promptly fixed. Six months after the go-live date, the Lead happened to think about the issues reported in the go-live period. He listed the issues and started analyzing them. Just out of curiosity, he picked up our tester's notes submitted the previous year to see whether the tester had put forward any of these real issues. He was amazed. All but one of the issues reported in production against the tested modules had been raised by the tester while the application was still in development. Now, the Lead was deeply ashamed of how he had treated the capable tester.

He promptly called the tester for a meeting. He hurriedly explained his preoccupations at the time and how he now realized the good work done by the tester. The tester was now free to discuss his ideas with others and report bugs as he saw fit. Then he asked the tester if this was okay. All our tester had to say was "As you wish", again with a meek smile.

Let us treat each team member with respect. Who knows how much a team member may be able to contribute to our success?