## Wednesday, June 16, 2010

### Equivalence Partitioning - Why should you NOT use it as is in black box testing?

Equivalence Partitioning is an interesting test design technique. You can learn it from my video, Equivalence Partitioning and Boundary Value Analysis tutorial.

Put simply, EP involves dividing the input test data into partitions, valid and invalid. It promises to reduce the number of input test data values that must be used by suggesting that only one test data value be used from each partition to determine the behavior of the application for each partition.

Let us consider an example: the input is the day of the week. The valid partition is 0..6 (Sunday to Saturday). There are two invalid partitions, <0 and >6.
-------------------------------------------------------------------------------------------
...  -3   -2   -1  |   0    1    2    3    4    5    6   |   7    8    9  ...
     (invalid)     |               (valid)               |   (invalid)
-------------------------------------------------------------------------------------------
Therefore, you decide to execute your test case with three distinct values, say -3, 3 and 9 (a value each from the first invalid partition, the valid partition and the second invalid partition). Correct?

Wrong decision. The technique assumes that the application treats every value within a particular partition identically. Well, maybe NOT. Unless you are privy to the source code and can confirm that each value in each partition is indeed handled exactly the same, you cannot rely on that assumption.

Partitioning is fine. It is the "Equivalence" part of the term "Equivalence Partitioning" that I disagree with. Here is why. Let us say that the programmer wrote a separate "Case" or "Switch" statement for each value from 0 to 6. If you only ever execute your test case for the input value 3, you would never test the application's code for the other values in the valid partition (0, 1, 2, 4, 5 and 6).
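To see how this can bite you, consider a sketch of such an implementation (the function, its names and the planted bug are all hypothetical, purely for illustration): each value in the "equivalent" valid partition has its own branch, and one branch hides a defect that testing with the single representative value 3 can never find.

```python
def day_name(day):
    """Hypothetical implementation with a separate branch per value.

    A bug hides in the branch for 5: a tester who only ever tries
    day=3 from the valid partition will never reach it.
    """
    if day == 0:
        return "Sunday"
    elif day == 1:
        return "Monday"
    elif day == 2:
        return "Tuesday"
    elif day == 3:
        return "Wednesday"
    elif day == 4:
        return "Thursday"
    elif day == 5:
        return "Friiday"  # planted typo bug: visible only when testing day=5
    elif day == 6:
        return "Saturday"
    else:
        raise ValueError("day out of range")

print(day_name(3))  # "Wednesday" -- passes, yet says nothing about day=5
print(day_name(5))  # "Friiday"  -- the defect shows only for this value
```

The values 0..6 are only "equivalent" if the code actually routes them through the same logic, which a black box tester cannot verify.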

At this point, I can anticipate questions. How about the invalid partitions? Should you execute your test case with multiple values in an invalid partition? How far back should you go? Obviously, you cannot use every value in a large partition. However, you may consider the following:
a. Use at least one or two values at the beginning of the partition, e.g. -1 and -2.
b. Use at least one more value deeper in the partition, e.g. -100.
c. Use a value outside the range of the data type (integer in this case), e.g. -12345678.

In short, when you have to test without looking at the source code, test using each input value in a small partition. And, test using spread out sample values in a large partition.
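This selection strategy can be sketched in a few lines (a sketch only; the helper name, the day-of-week boundaries and the 32-bit integer assumption are illustrative):

```python
def pick_test_values(valid_lo=0, valid_hi=6):
    """Sketch: choose test inputs for a small valid partition and
    large invalid partitions, per the guidance above."""
    values = []
    # Small valid partition: test every value exhaustively.
    values.extend(range(valid_lo, valid_hi + 1))
    # Invalid partition below: one or two values near the boundary...
    values.extend([valid_lo - 1, valid_lo - 2])
    # ...one value deeper in the partition...
    values.append(valid_lo - 100)
    # ...and one outside a typical data-type range (assuming 32-bit int).
    values.append(-(2**31) - 1)
    # The same idea for the invalid partition above the valid range.
    values.extend([valid_hi + 1, valid_hi + 2, valid_hi + 100, 2**31])
    return values

print(pick_test_values())
```

The point is that the sample spreads across each large partition rather than trusting a single "representative" value.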

## Monday, June 7, 2010

### Future of software testing

We don't know the future. Nobody does. However, I have been involved with software testing since 1998 and I have seen changes taking place. It used to be simple. We received the requirements and designed simple test cases based on them. Then we received the latest build of the application, executed our test cases and reported discrepancies in bug reports. We did all of this as carefully and as fast as we could.

As I experienced software testing first hand, I began to learn it. There were multiple approaches available to perform testing tasks, each with its own pros and cons. There were multiple types of testing that we could do. Each of my team members had a different set of skills, strengths and weaknesses. It was not simple anymore.

3 or 4 years ago, I started following the online software testing communities. I read a lot of material and comments on hundreds of topics in software testing. I saw thought leaders in software testing repeatedly pointing us test practitioners back to the basics. Over time, I have begun to consider software testing quite complex and challenging. The good news is that software testing still has a long way to go before it truly matures.

Here is the list of my predictions. These are not revolutionary changes that are going to catch you all of a sudden. In fact, you can see some of these changes today. But, if you are aware that these changes could speed up or intensify in the coming future, then you have a better chance to prepare yourself to take advantage of them.

Prediction #1. Companies will demand more value for the testing resources they put in.
The recent downturn has shaken everyone. We have been forced to become savvier with our investments. The same is true for companies. Companies will demand a better testing service from their resources (in-house testing team or a vendor providing testing services). The companies will now demand:
a. Faster turnaround time
b. Greater coverage of specified and implied requirements
c. Testing in more perspectives (functional, performance, security, usability and so on)
d. Increased collaboration with all other teams involved in sales/ product development, software development, deployment and support
e. Lower costs (of test infrastructure (test environment), test tools and testing personnel)
f. More transparency of the test process

Prediction #2. Software testing will become more complex.
Keeping in mind the increased expectations of clients, the increasing complexity of applications and the increasing knowledge of test practitioners, software testing will become more complex. In future, testers will need to find answers to the following questions among others:
a. What are the most important business objectives of the application that I am testing?
b. What technologies does my application employ? How do I test each of those?
c. What test infrastructure will I need to test my application? How can I set those up with the least cost (of setting it up, using it, maintaining it and tearing it down)?
d. What tests would provide the best value against the cost of creating them?
e. How is my application integrated with other systems? How do I test various aspects of each integration?
f. What is the best test methodology that I can use?
g. Which of my communications provide the best value to other stakeholders?
h. How do I utilize my natural strengths in testing? How do I circumvent my natural weaknesses?

Prediction # 3. Crowd-sourcing will continue to become popular.
uTest is becoming more popular by the day. Today, uTest has more than 20,000 testers and a client list that includes Google and Microsoft. The clients of crowd-sourcing companies can buy just the testing services they need, when they need them, and even select the individual testers for the test. No wonder many companies consider crowd-sourced testing services as viable alternatives to large in-house testing teams or inflexible testing services vendors.

Prediction # 4. In order to get hired and stay hired, testers will need to distinguish themselves from the crowd.
Today, there are masses of software testers. Their profiles and resumes look similar. If I were going to hire someone for my team, I would not like to just go for someone with the basic knowledge and skills. I would like to get the details. And, I would probably like to interview someone with substantial achievements. Someone who has "walked the extra mile". Someone who has achieved more than their counterparts at the same level. Be it extraordinary knowledge, uncommon or advanced skills, or solid recognition from testing experts.

Prediction # 5. Social skills and working style will become important.
Other than software testing knowledge and skills, testers will be required to be socially adept. They will be required not only to plan and test well, but also to communicate well. They will be required to establish themselves as part of a team, support the team, speak up when required and influence others when required. In future, just testing won't do for testers. They will also be required to collaborate effectively and strive to maintain long-term relationships with their extended teams. Further, testers will be required to align themselves with the (stable or changing) business objectives and the team.

What do you think? Do you feel that you are ready for these changes? Are you going to take advantage of them? What other changes do you see on the horizon?

## Sunday, June 6, 2010

### The Ultimate Machine: How would you test it?

As many of you know, we now have our Software Testing Space group in LinkedIn. A lot of exciting discussions are taking place there. If you have not joined it, you should consider doing so.

My friend Freddy Gustavsson from our STS group posted an interesting challenge on testing the Ultimate Machine. The Ultimate Machine is attributed to Claude Shannon (1916 to 2001). It consists of a simple box with a single switch on its top surface and a closed lid. If you flip the switch on, the lid opens, a mechanical hand rises from it and the hand flips the switch off. Thereafter, the hand goes back into the box and the lid closes. Freddy's challenge: if he asked me to test this Ultimate Machine, which tests would I perform? Here are the questions that I thought of. I would design tests based on these questions.

Positive tests
a. If the tester flips the switch on, does the hand (or an arm in certain implementations) switch it off?
b. If the tester does not touch the switch, does the machine just sit there and do nothing?

Negative tests
a. What happens when the tester flips the switch on and switches it off before the hand can reach it?
b. Is it possible to flip the switch somewhere between the on and off positions? What does the machine do in such a case?

Non-functional tests
a. Can the tester operate the machine as it is or does it need to be installed? If installed, what does it take to install it?
b. Once the machine is switched on, how long does it take to switch itself off?
c. How many times can the machine be operated before it stops working (due to battery discharge, wearing out of mechanical parts and so on)? Does it slow down after a while or show operational problems (make a hitherto unknown noise, for instance)?
d. Does the machine operate correctly if the ambient conditions are changed?
i. The machine is vibrated e.g. during travel or a minor tremor.
ii. The machine is operated outdoors (e.g. during a dust storm or in rain/ sleet/ snow).
iii. The machine is operated in a strong magnetic field (e.g. certain places on a factory floor or in a hospital).
iv. The machine is operated in different light conditions (e.g. direct sunlight, in the office daytime with blinds open/ blinds closed, at dusk or in total darkness).
v. The machine is operated in different temperatures (e.g. at room temperature, in a freezer or near a furnace).

Usability tests
a. Is it easy for the tester to locate the switch?
b. Is the switch easy to flip (works smoothly, clicks to indicate it has flipped)?
c. Is the switch safe for the tester to operate (no rough surfaces or pointed end)?
d. Is the box pleasing to look at? How much space does it occupy on the tester's desk?

White box tests
We could open the box. Then, we could try to understand the mechanism inside and create more tests based on our understanding. For example, I would like to know whether the time it takes to switch off is configurable and whether the machine is connected to any external system.

I am sure that you can think of more questions to help test the machine. Did you notice that I used the various types of testing to think about the above questions? What approach would you have used to design tests for the Ultimate Machine?

## Saturday, June 5, 2010

### Do developers hate testing?

Having recently read a post on the uTest blog, I decided to think back to my developer days. What challenges did I remember from my experience as a developer? Did I test my code or not?

1. Sometimes, it required a huge effort on my part just to know the correct set of requirements. Sometimes, the requirements would be given by a business person who would be focused on the promises already made to the customers. The product manager/ business analyst would rarely verify the technical feasibility of implementing the requirements before committing to the customers. In such a situation, it became a matter of identifying the design alternatives available and exploring the promising ones in detail. As a developer, I used to take such situations as intellectual challenges. This resulted in me saying Yes, even when a simple No would have saved me a lot of work.

2. At other times, I was handed source code that had been written, changed and enhanced (in various orders) by at least 3 or 4 developers before me. These developers were now either busy with other projects or no longer with the company. I found this source code riddled with problems. There would be sections of the code that were incomplete, worked with some input values but not with others, were partial duplicates of other sections or had logical problems. Getting such code to work was challenging. It involved plowing through the code (reading a section, unit testing it, resolving the problems, unit testing it again, refactoring it to make it more understandable, unit testing it and so on).

3. The other thing I recall is the sheer number of problems I had to face in order to implement a single requirement. Examples of these problems were:
a. The design was either non-existent or did not cover the particular requirement. This meant that my task doubled in scope. I first had to create a good design and then implement it.
b. The base components were not present or had defects. So, I had to decide either to re-write the base components from scratch or debug them before using them.
c. A similar requirement was already implemented in the application. The trouble was that I was either not able to understand that implementation or suspected hidden defects in that implementation.
d. The underlying development or run-time environment had issues (meaning defects or constraints) in certain conditions. This meant that I had to either somehow circumvent those limitations or break down my implementation into multiple parts in such a way that I did not run into these issues.

4. The next thing that I remember is how tight the schedules were. There were two reasons for this. The first reason was that the development effort was routinely under-estimated. Maybe my project manager (who estimated the development effort) wanted to increase my productivity constantly. Or maybe s/he estimated the effort considering the simplest development possible. The second reason was that I wanted my code to be perfect (in some cases, more than perfect). This meant that I would write the code implementing a requirement and run it repeatedly, looking for problems. Once I had the code running without problems, I would refactor it to make it leaner and strictly according to the coding standards (for example, naming the variables according to the specified nomenclature, adding comments every few lines and so on). If time permitted, I would even attempt to enhance the design or add extra functionality (though I stopped adding any extra functionality quite early on). My worries about how my code might not work under specific conditions led me to test it, modify it and test it again, repeatedly. The tight schedule, combined with my worries about the quality of my code, meant that I was under stress.

It is not that, as a developer, I hated testing. Quite the reverse, actually: I always wanted my code to be perfect and I tested constantly. It was the sheer number of problems that I had to solve in a tight schedule that allowed some defects to fall through the cracks. And the defects discovered by the testers were only a small fraction of all the defects that originally existed in the code.

Though I tested my code constantly looking for defects, there was only so much I could do given the tight schedule. In fact, I found that in internal projects (where the delivery deadlines were relaxed), I was able to submit code in which the testers could hardly find any defects.

Software testing involves a lot of work. The tight schedules do not help because it is very common for the development scope to increase during development. If the schedules are more realistic, I am confident that developers would test their code better and there would be a lot less defects in their applications.

## Tuesday, June 1, 2010

### Team Competition - How to compete against other software testing teams and win

There may be occasions when you or your software testing team is pitted (compared from a business point of view) against another individual or another team. Examples include:
1. You are a senior tester and the management has just hired a junior person. Now, some tests have been assigned to the junior tester. The idea is to see if there is a discernible difference in the outputs of the two of you.

2. Your management or client has decided to off-shore (OR bring on-site) some of the testing work. Some work is now being done in parallel in both locations. The idea is to see which team provides more value with respect to the cost it incurs.

3. An experienced person with the business background has been added to the team. The idea is to see if business knowledge or software testing knowledge provides greater value in testing.

Before I go on, let me clarify that healthy competition is a fast way to grow your capabilities. Otherwise, things tend to stagnate. And quite often, the competing teams end up collaborating with and supporting each other at the other end of the tunnel.

But, in the meantime, you have to think about at least maintaining your position. If you do NOT take any additional action, your role may be diminished or change into something else regardless of your preference. How can you maintain (and subsequently grow) your position?

1. Study the other team.
When sports teams prepare for competitions, they actively study the other team. You should do the same. You should find out the following information about the other team:
a. The number of members in the team
b. Their names and location
c. Their general background, experience, knowledge, achievements and struggles
d. Their prior exposure to your company's specific applications, tools and processes
e. Their working style/ culture
f. Any natural advantages they have over you

2. Follow their work and deliveries.
You should actively follow their work and the deliveries they produce. Use all channels (official and informal) to keep yourself updated about their progress.

3. Analyze the other team's strengths and weaknesses.
Your team should now have a fair idea of the other team's strengths and weaknesses. Be ready to update this picture in your mind as you follow their progress regularly.

4. Identify your own strengths and weaknesses.
This is a good time to become more aware of your own team. What things is your team good at? What tasks is it not so good at? In what respects does your team absolutely struggle?

5. Change your approach appropriately.
If you are keenly aware of both the other team and your own, it should be easy to identify ways in which you can out-perform them. Some examples are given below, but you should find your own ways to out-perform the other team. And implement the changes quickly.
a. If the other team only executes the predefined test cases, your team may want to invest time in enhancing the available test cases before executing them, or perform exploratory testing in addition.
b. If the other team focuses on functional bugs, your team may want to additionally focus on performance bugs, usability bugs and look-and-feel bugs.
c. Get work done faster. If their team members watch the clock (come in on time, work and leave on time), your team can shorten its breaks, concentrate harder on the tasks or stay on task longer.
d. If the other team has individual performers, your team may want to harness the power of team work.
e. If their team works in a silo, your team may have a good rapport with the other engineering teams.

6. Inform the decision makers yourself.
You can be sure that the decision makers who initiated this change will be actively following its effects. You should present your achievements to them in a positive light. Do NOT rely on someone else to do this for you; they may tone down your achievements according to their own biases, opinions and agenda.