Wednesday, March 31, 2010

Software Test Automation Estimation (10+ factors which you should consider)

You may have seen my video, Test Estimation with formula example and questions and answers. In this post, we will discuss the factors that you should analyze in order to arrive at a realistic effort estimate for creating test automation.

1. Knowledge transfer

Is your team new to the Application under test or would they need training or time with the application to be comfortable with it?
Is your team new to the chosen test automation tool(s)? Would they need training on the tool before they are productive with it?

2. Test automation environment

How long would it take to set up the test environment with the application and the tool for each team member in your test automation team?

3. The chosen tool's compatibility with your application

Do you need to perform a test tool evaluation before you begin automation?
How compatible is the chosen test automation tool with your application's technologies (e.g. does the tool recognize each type of object in your application, does the tool work fast with your application and so on)?

4. Test automation framework

Would you need to create a test automation framework from scratch?
Would you need to use an existing test automation framework? How simple or complex is it to learn to use this existing test automation framework?

5. Test cases to be automated

Would you have the test cases available for automation? Are these test cases automatable?

6. Size of the automation

Considering the size of the test case, the speed of your application and the speed of the chosen test automation tool, how long would it take to automate each test case?
By how much would the usage of your test automation framework affect the effort of automating each test case?

7. Test data requirements

Would test data be available or would your team need to generate its own test data?
Where would the test data be located?

8. Types of testing required

What kind of unit tests would be performed on your test automation script?
What kind of integration tests would be performed on the integrated automation scripts?
What kind of tests would be performed to check the validity of test data?
Do you need to create automation only for functional tests? Or for other tests as well e.g. performance tests?
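To make the first question concrete, here is a minimal sketch of a unit test for an automation-script helper. The helper function (build_login_url) is invented purely for illustration and is not part of any real tool; the point is that automation code deserves its own checks.

```python
# Hypothetical helper from an automation framework: builds a login URL
# from a base address and a user name (illustration only).
def build_login_url(base, user):
    if not base.startswith("http"):
        raise ValueError("base must be an http(s) URL")
    return base.rstrip("/") + "/login?user=" + user

# Unit tests for the helper itself, written as plain assertions.
def test_joins_base_and_user():
    assert build_login_url("http://app.example/", "tester") == \
        "http://app.example/login?user=tester"

def test_rejects_non_http_base():
    try:
        build_login_url("ftp://app.example", "tester")
    except ValueError:
        return
    raise AssertionError("expected ValueError")

test_joins_base_and_user()
test_rejects_non_http_base()
```

Even a small check like this catches regressions when the automation scripts themselves are changed.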

9. Supporting automation

Would your team need to create automation for non-testing activities (e.g. creating a lot of dummy data within the application or emailing test logs to users)?

10. Version control and test automation builds

How slow or fast is the version control?
How long would it take to integrate the test automation created by each team member? How frequently would the test automation build be created?

11. Reviews

What test automation items would be reviewed e.g. test scripts, test data?
What will be the frequency of these reviews?
How long would it take to complete the reviews and re-work?

12. Maintenance

How frequently would the baselined test cases be updated? How long would it take to update your test automation suite in line with these changes?
How frequently would your team receive an updated application build? (Keeping your framework and your chosen tool's capabilities in mind), how long would it take to update your existing test automation?
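The factors above can be rolled up into a back-of-the-envelope calculation. The sketch below is only illustrative; the formula and every number in it are assumptions that you would replace with your own analysis of the factors above.

```python
# Hedged sketch: combine per-test-case scripting effort with one-time
# costs (knowledge transfer, environment setup, framework work) into a
# total estimate in person-days. All numbers are invented placeholders.
def estimate_effort(test_cases, hours_per_case, framework_factor,
                    one_time_hours, hours_per_day=8.0):
    """framework_factor < 1.0 models a framework that speeds up scripting;
    > 1.0 models one that adds overhead per test case."""
    scripting = test_cases * hours_per_case * framework_factor
    total_hours = scripting + one_time_hours
    return round(total_hours / hours_per_day, 1)

# Example: 120 automatable cases at 3 h each, a framework that saves
# 20% per case, plus 80 h of one-time work (training, environment,
# framework setup).
effort_days = estimate_effort(120, 3.0, 0.8, 80.0)  # 46.0 person-days
```

A spreadsheet works just as well; the value is in listing the one-time costs separately from the per-case costs so that each factor above gets a line of its own.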

Roles and Responsibilities of a Software Test Lead

This post is the second one in continuation to the earlier one, Software Tester Roles and Responsibilities.

I have explained the role of a Test Lead, systematically and with details and examples, in my video, Test Lead Interview Questions and Answers. First, view the video. Then read the common responsibilities of a Software Test Lead below.

1. Be updated on the latest testing techniques, strategies, testing tools/ test frameworks and so on
2. Be aware of the current and upcoming projects in the organization
3. Plan and organize the knowledge transfer to the Software Test Engineers and self
4. Collect the queries related to the requirements and get them resolved by the business (e.g. the client, business analyst, product manager or project manager) assigned to the project
5. Plan, organize and lead the testing kick-off meeting
6. Scope the required tests
7. Design the required test strategy in line with the scope and organization standards
8. Create the software test plan, get it reviewed and approved/ signed-off by the relevant stakeholders
9. Evaluate and identify the required test automation and test management tools
10. Estimate the test effort and team (size, skills, attitude and schedule)
11. Create the test schedule (tasks, dependencies and assigned team members)
12. Identify the training requirements of the Software Testers
13. Identify any test metrics to be gathered
14. Communicate with the client or on site/ offshore team members, as required
15. Review the test cases and test data generated by the Software Test Engineers and get them to address the review comments
16. Track the new/ updated requirements in the project and modify testing artifacts accordingly
17. Determine, procure, control, maintain and optimize the test environment (hardware, software and network)
18. Get information on the latest releases/ builds from the development team/ the client
19. Create and maintain the required test automation framework(s)
20. Administer the project in the test management system
21. Administer the Application under test (e.g. add users for the tests), as required
22. Assign tasks to the Software Testers based on the software test plan
23. Check the status of each assigned task daily and resolve any issues faced by the team members with their tasks
24. Ensure that each team member is optimally occupied with work (i.e. each Software Tester should not be too overloaded or too idle)
25. Re-assign the testing tasks, as required
26. Track the assigned tasks with respect to the software test plan and the project schedule
27. Review the test automation created by the Software Test Engineers and get them to address the review comments
28. Own and maintain the test automation suite of the project
29. Schedule and execute the test automation on the project
30. Review defect reports and assign valid defects to the relevant developer/ development manager
31. Assign returned defect reports and assist the concerned Software Test Engineer, as required
32. Ensure the resolved defects are re-tested
33. Consolidate and report test results to the concerned stakeholders
34. Be approachable and available to the Software Testers, as required by them
35. Update the software test plan, as required
36. Ensure that the test cases are updated by the Software Testers, as required
37. Ensure that the test automation is updated based on the updated test cases
38. Gather the decided test metrics
39. Escalate and obtain resolution of the issues related to the test environment and team
40. Plan, organize and lead team meetings and ensure action is taken based on the team discussions
41. Plan and organize training for the Software Testers
42. Review the status reports of the Software Testers
43. Review the time logged by the Software Test Engineers for various activities
44. Report the status to the stakeholders (e.g. the client, project manager/ test manager and the management)
45. Keep the Software Test Engineers motivated
46. Improve the test process based on the suggestions by others and own judgment
47. Manage own energy level and time

Note: These responsibilities may be tailored depending on the specific organization for which you are working. However, you should be aware of the responsibilities above so that you may perform well as a Test Lead.

Want to learn more? View the video Test Lead Interview Questions And Answers.

Monday, March 29, 2010

How to improve your skills as a software tester?

If you test software, you have likely wondered how to improve your software testing skills. The answer is not limited to reading some articles, following some blogs and joining some forums. You should first be aware of the different actions you can take to develop your software testing skills. Then include these actions in your routine and be diligent about taking them regularly. Spend substantial time on these actions and you may see dramatic improvement in your skills. First, view the video, How to improve skills in software testing.

Building software testing skills

If you have viewed my video, Software tester job role, you would be aware of the major responsibilities explained therein. Your desired actions may be roughly mapped to the major skill categories below.

Testing knowledge: Learn about the latest testing strategies, techniques, approaches and (developments in) testing tools. You can get good information and insights from the popular software testing forums, tool vendors' websites, websites related to testing and software testing blogs.
You should also be aware of the latest development technologies used in your application and know programming (for the purposes of reviews and automation).
Understanding requirements: Learn more about the business of your application. You can gain more knowledge from the business people (e.g. clients, product managers, business analysts, domain experts and end users) and by reading (current and past requirements documentation, white papers, case studies, trade magazines and so on).
Planning: There is a wealth of material on the topics of estimation and project planning on the web. The key is to apply the principles of project planning to your activities consistently.
Test design: Do not be satisfied with just one or two test cases for each low-level requirement. Brainstorm more and more test ideas. Design test cases based on these test ideas.
Pay more attention to the design of test data. Consider interesting individual test data items and test data combinations. Do not limit yourself to just one test data design technique.
Test automation: This is one area with the potential for a lot of improvement. For example, you can learn to automate test cases that are difficult to automate, organize your test automation better, design or refine a test automation framework and automate testing activities that are currently performed manually.
Test execution: Learn as much as you can about your application's production environment. You may get this knowledge from the IT/ Support/ Helpdesk team. Replicate the production environment in your test environment as far as feasible.
Change your pace deliberately when you execute tests. If you speed up, you may be able to get the tests done quicker. If you then slow down, you may observe things that did not strike you before. Read my earlier post, How to find more or better bugs (12 tips to explode your bug count and/or severity)?, for more tips.
Reporting (defect reports, test results and status): Analyze the earlier reports created by you and others. Find ways to express the information more completely, accurately and concisely, better suited to the specific audience of your report.
Team playing: Be more committed to your team.
Take ownership of your responsibilities (stand by what you say or do).
Respect your team members and help them out when you can.
Process improvement: Always challenge the current test process in your mind. Research and suggest better approaches and tools for testing tasks.
Constantly learn from others in your team. Try to apply the knowledge you gain in your project immediately.
Time management: Plan your day according to the priorities of the tasks at hand. Leave some time to refine your work. Act on the most important tasks first.
Self management: Last, but not least, always be in good health and high spirits. My father always tells me so. Believe me, it shows in your results.
Also, care about your fellow testers genuinely. Contribute to the software testing community whenever you get a chance.

Have more tips to improve testing skills? Share them in a comment.

Performance testing using HP LoadRunner software

Dhananjay Kumar sent me an email saying that he wanted to explore HP LoadRunner. He said that he wanted information on HP LoadRunner configuration and on analyzing the HP LoadRunner results.

If you want to learn HP LoadRunner version 12 (current version at this time), please view my full set of LoadRunner Tutorials. You should view these videos in the order given.
There is an excellent white paper titled, "HP LoadRunner software—tips and tricks for configuration, scripting and execution". You can read this white paper that contains numerous tips as well as code samples. There are tips to create a high level performance test plan and tips for scripting, building scenarios, data handling and test execution.

Now let us turn our attention to the analysis of the performance test results. As I have mentioned in my earlier post, What knowledge do you need to have in order to do good or best performance testing?, after you run your test, you should first check whether the performance test itself was valid. Initially, you may not be able to tell whether the errors you see are due to the limitations of your application/ infrastructure or due to defects in your test environment. Therefore, you should start with a simple and small test (e.g. a small number of virtual users, few transactions per second, a short test duration and so on). Examine any errors raised by your test. Try to establish the reason for the most common errors and change your script(s), test or tool/ infrastructure configuration accordingly. However, make only one change at a time to your script(s) or test. The benefit is that if a change turns out to be incorrect, you can roll it back and try something else.

When you are able to run a simple and small test successfully (with minimal errors), it is time to develop your test. Again, do this incrementally. Run the test once, analyze the results and address any problems with the test environment before you make another change to it. Continue to develop your test until you have your full test available with you.

A test running with no errors or minimal errors may still be incorrect. Look for the supporting data, for example:
1. Does the test show that it has generated the Virtual Users specified by you?
2. Do metrics like transactions per second climb up in line with the ramp-up specified by you?
3. If the test puts a substantial load on your application, does your application slow down during the test?
4. Do your load generators become busier during the test?
5. If the test is supposed to create test data in the application, do you see this data in the application after the test is complete?

Even with a good handle on the errors and supporting data from multiple sources, you should not run your test just once. Run the same test multiple times and see if the results match with each other (more or less). If the results do not match, you should find out the reason for the mismatch and address it in your test environment.
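One way to compare repeated runs is to check that each run's mean response time stays within a tolerance of the overall mean across runs. A minimal sketch, where the 10% tolerance and the sample numbers are arbitrary assumptions:

```python
from statistics import mean

def runs_consistent(run_means, tolerance=0.10):
    """True if every run's mean response time is within `tolerance`
    (as a fraction) of the overall mean of all runs."""
    overall = mean(run_means)
    return all(abs(m - overall) / overall <= tolerance for m in run_means)

# Mean response times (seconds) from three runs of the same test:
consistent = runs_consistent([1.92, 2.05, 2.01])   # all within 10%
inconsistent = runs_consistent([1.9, 2.0, 3.4])    # one outlier run
```

If a run falls outside the tolerance, investigate that run's environment (other load on the servers, load generator saturation and so on) before trusting any of the results.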

Finally, if your test runs multiple times as you expect, it is time to check your test results against your application's performance requirements.

As you can now see, the performance test process (test preparation, test development, test execution and test analysis) is iterative.

Wednesday, March 24, 2010

How to find more or better bugs (12 tips to explode your bug count and/ or severity)?

Well, we know that we do not find all the bugs in the application under test (given that the application at hand is not simple). However, we do want to discover and report the most and the best bugs that we can. You need more ideas if you want to find more or better bugs than you do at present. View the video, How to become Software Testing Expert in your team or company. Then read on.

Tip 1. Review the application's requirements often. You may notice that no or partial test cases exist for certain requirements. You may find bugs when you test for these requirements. Keep abreast with the change requests/ changes to requirements. Be quick and you may find bugs immediately after a requirement change has been first implemented.

Tip 2. It is possible that you have positive test cases only. If so, create negative test cases for each requirement and execute them.

Tip 3. Execute each test case with a variety of interesting test data (generated, for example, by the boundary-value analysis technique or the pair-wise testing technique).
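As a sketch of generating such test data (the ranges and parameter values are invented; note that a real all-pairs tool produces a smaller set than the full cartesian product shown here):

```python
from itertools import product

def boundary_values(lo, hi):
    """Classic boundary-value picks for an inclusive [lo, hi] range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def all_combinations(params):
    # NOTE: true pair-wise (all-pairs) generation needs a covering-array
    # algorithm; this naive illustration enumerates the full cartesian
    # product, which covers every pair but is not minimal.
    names = list(params)
    return [dict(zip(names, combo)) for combo in product(*params.values())]

# Hypothetical inputs: an age field valid for 18..65, and two
# configuration parameters to combine.
ages = boundary_values(18, 65)   # [17, 18, 19, 64, 65, 66]
combos = all_combinations({"browser": ["Firefox", "IE"],
                           "os": ["XP", "Vista"]})
```

For more than a handful of parameters, switch to a dedicated pair-wise tool so the combination count stays manageable.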

Tip 4. Test the interface of your application with external systems carefully. This is where a number of bugs may exist.

Tip 5. Another place to look for bugs is the application settings (since this is one area that may be rarely tested in fear that the application may stop working correctly or stop working altogether).

Tip 6. Repeat your tests using other supported client configurations (other CPUs, RAM sizes, operating systems, screen resolutions, browsers and so on).

Tip 7. Look at the previous bug reports against the same application or similar applications. See if you can test your application using the ideas or information contained in the previous bug reports.

Tip 8. Do not ignore the cosmetic bugs. If they would inconvenience a user, they should be reported and fixed.

Tip 9. Create a list of items that you would like to test if you had the time. When you test your application, you may find yourself taking mental notes of things that you would like to test later. Jot these things in your list. Come back to the application another time and test the things in your list against all areas of the application.

Tip 10. In case you are working as part of a testing/ QA team, do not restrict yourself to the areas that you are supposed to test. You may find bugs in the other areas. This would also have the nice side-effect of increasing your overall knowledge of the application.

Tip 11. Instead of rushing through your tests at top speed, slow down (I know it is difficult). You would give yourself time to think more clearly. You would start to observe things about your application that you did not observe before.

Tip 12. Finally, take pride in the bugs reported by you. Feel free to mention an interesting bug that you found to a team member.

A word of warning. Do not get carried away if you suddenly start finding a lot more/ better bugs. You should still take the time to contemplate each bug, reproduce it, match it to the requirement and carefully create a good bug report.

Enjoy bug-hunting!

Tuesday, March 23, 2010

Automated Testing: How to write automatable test cases?

Test cases are used in software testing extensively. Plainly speaking, a test case consists of one or more steps. The test case may have expected results given for one or more steps. Test cases commonly have other information such as an ID, a description, some pre-conditions and test data.

If you want to automate test cases without re-working them or making a lot of assumptions, you should ensure that the test cases at hand are specific. Here are some guidelines below. If you have been handed test cases authored by someone else, you can use these guidelines to determine if you need to modify these test cases before you automate them. If you are going to write test cases that would be automated later, you should follow these guidelines to make your test cases automation-friendly. Some of these guidelines are also applicable to writing test cases that are not marked for automation. I have used examples from Microsoft Word for your convenience.
Each guideline below is followed by an example of what to avoid and what the test case should mention instead.
1. The test case should specify each pre-condition. Instead of giving no pre-condition, it should mention: "Close MS Word if it is already open."
2. The test case should not leave an action to the judgment of the automation tester. Instead of just saying, "Open MS Word.", it should mention: "Open MS Word from Start > Programs > Microsoft Office > Microsoft Office Word nnnn."
3. The test case should specify the test data in each applicable step. Instead of saying, "Open any existing MS Word file.", it should mention: "Open an existing MS Word file (C:\Abc.doc)."
4. The test case should have all the required steps. Instead of mentioning "It should be possible to save the changes made by the user." as an expected result, it should have two separate steps, 1) for saving changes to a new file and 2) for saving changes to an existing file.
5. The test case should not hide any details. Instead of mentioning "Type a word with incorrect spelling. The word should be underlined with a squiggly line.", it should mention: "Type a word with incorrect spelling. The word should be underlined with a red squiggly line."
6. A step should not force one choice. Instead of saying "Enter a word using the keyboard or Insert > Symbol feature.", it should have two steps, 1) for entering a word using the keyboard and 2) for entering a word using the Insert > Symbol feature.
7. The test case should specify (and not imply) any clean-up steps. Instead of missing the clean-up steps, it should mention the steps to remove the added word from the Dictionary or reset MS Word to its original settings (if the test case requires a word to be added to the MS Word Dictionary).
8. The test case should describe the expected results as completely as required. Instead of saying "Close MS Word without saving your changes. There should be a dialog box asking you to save your changes.", it should mention: "Close MS Word without saving your changes. There should be a dialog box titled "Microsoft Office Word" with the text, "Do you want to save the changes to ...?" and three buttons labeled Yes, No and Cancel."
The above guidelines should ease your struggle when you examine test cases in order to automate them. Have you found any other problem with test cases when you sought to automate them? Please describe it in a comment.
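To see how guideline-compliant test cases might feed an automation driver, here is a sketch that captures a test case as plain data. The field names and the is_automatable check are illustrative assumptions, not part of any real tool.

```python
# A test case written to the guidelines above, captured as data a test
# driver could execute: every step is explicit, with its own data and
# expected result (file paths and menu paths are illustrative).
test_case = {
    "id": "TC-001",
    "preconditions": ["Close MS Word if it is already open."],
    "steps": [
        {"action": "Open MS Word from Start > Programs > "
                   "Microsoft Office > Microsoft Office Word.",
         "data": None,
         "expected": "MS Word opens with a blank document."},
        {"action": "Open an existing MS Word file.",
         "data": r"C:\Abc.doc",
         "expected": "The file opens."},
    ],
    "cleanup": ["Close MS Word without saving changes."],
}

def is_automatable(tc):
    """Minimal check: every step states an expected result."""
    return all(step["expected"] for step in tc["steps"])
```

A quick pass of such a check over a batch of test cases flags the ones that need re-work before automation begins.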

Friday, March 19, 2010

How to do database migration testing/ ETL testing effectively and quickly?

My earlier post, How to do real database testing (10 tips to perform serious database tests)?, turned out to be quite popular. You should know about database migration testing too. First, view my video on Database Migration Testing/ ETL testing (the volume is a bit low so, if needed, please turn Subtitles on by clicking Cc in the YouTube player). Then read on.

Database migration testing is needed when you move data from the old database(s) to a new database. The old database is called the legacy database or the source database and the new database is called the target database or the destination database. Database migration may be done manually but it is more common to use an automated ETL (Extract-Transform-Load) process to move the data. In addition to mapping the old data structure to the new one, the ETL tool may incorporate certain business-rules to increase the quality of data moved to the target database.

Now, the question arises regarding the scope of your database migration testing. Here are the things that you may want to test.
1. All the live (not expired) entities, e.g. customer records and order records, are loaded into the target database. Each entity should be loaded just once, i.e. there should be no duplication of entities.
2. Every attribute (present in the source database) of every entity (present in the source database) is loaded into the target database.
3. All data related to a particular entity is loaded in each relevant table in the target database.
4. Each required business rule is implemented correctly in the ETL tool.
5. The data migration process performs reasonably fast and without any major bottleneck.

Next, let us see the challenges that you may face in database migration testing.
1. The data in the source database(s) changes during the test.
2. Some source data is corrupt.
3. The mappings between the tables/ fields of the source database(s) and target database are changed by the database development/ migration team.
4. A part of the data is rejected by the target database.
5. Due to the slow database migration process or the large size of the source data, it takes a long time for the data to be migrated.

The test approach for database migration testing consists of the following activities:

I. Design the validation tests
In order to test database migration, you need to use SQL queries (created either by hand or using a tool e.g. a query creator). You need to create the validation queries to run against both the source as well as the target databases. Your validation queries should cover the scope defined by you. It is common to arrange the validation queries in a hierarchy e.g. you want to test if all the Orders records have migrated before you test for all OrderDetails records. Put logging statements within your queries for the purpose of effective analysis and bug reporting later.
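A minimal sketch of such a validation query run against both databases, using in-memory SQLite to stand in for the source and target (the Orders table and its contents are invented for illustration):

```python
import sqlite3

# The same validation query is run against source and target.
COUNT_ORDERS = "SELECT COUNT(*) FROM Orders"

def row_count(conn, query):
    return conn.execute(query).fetchone()[0]

# In-memory stand-ins for the source and target databases.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE Orders (id INTEGER PRIMARY KEY)")
source.executemany("INSERT INTO Orders VALUES (?)", [(1,), (2,), (3,)])
target.executemany("INSERT INTO Orders VALUES (?)", [(1,), (2,), (3,)])

# Validation 1: row counts match between source and target.
counts_match = row_count(source, COUNT_ORDERS) == row_count(target, COUNT_ORDERS)

# Validation 2: no entity was loaded twice into the target.
duplicate_ids = target.execute(
    "SELECT id FROM Orders GROUP BY id HAVING COUNT(*) > 1").fetchall()
```

In a real test the two connections would point at the actual source and target databases, and the queries would follow the hierarchy described above (e.g. Orders before OrderDetails).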

II. Set up the test environment
The test environment should contain a copy of the source database, the ETL tool (if applicable) and a clean copy of the target database. You should isolate the test environment so that it does not change externally.

III. Run your validation tests
Depending on your test design, you need not wait for the database migration process to finish before you start your tests.

IV. Report the bugs
You should report the following data for each failed test:
    a. Name of the entity that failed the test
    b. Number of rows or columns that failed the test
    c. If applicable, the database error details (error number and error description)
    d. The validation query
    e. The user account under which you ran your validation test
    f. The date and time the test was run
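These fields can be collected into one record per failed test, which keeps bug reports consistent. A sketch, where the field names and the default user are illustrative assumptions:

```python
from datetime import datetime, timezone

def failure_report(entity, failed_rows, query, db_error=None,
                   user="etl_tester"):
    """Assemble the per-failure data listed above into one record
    (field names are illustrative)."""
    return {
        "entity": entity,
        "failed_rows": failed_rows,
        "db_error": db_error,            # e.g. (number, description)
        "validation_query": query,
        "run_as_user": user,
        "run_at": datetime.now(timezone.utc).isoformat(),
    }

report = failure_report("Orders", 12, "SELECT COUNT(*) FROM Orders")
```

Writing such records to a log file as the validation tests run gives you the bug report data for free.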

Keep the tips below in mind to refine your test approach:

1. You should take a backup of the current copies of the source and target databases. This would help you in case you need to re-start your test. This would also help you in reproducing any bugs.
2. If some source data is corrupt (e.g. unreadable or incomplete), you should find out if the ETL tool takes any action on such data. If so, your validation tests should confirm these actions. The ETL tool should not simply accept the corrupt data as such.
3. If the mappings between the tables/ fields of the source and target databases are changed frequently, you should first test the stable mappings.
4. In order to find out the point of failure quickly, you should create modular validation tests. If your tests are modular, it may be possible for you to execute some of your tests before the data migration process finishes. Running some tests while the data migration process is still running would save you time.
5. If the database migration process is manual, you have to run your validation queries externally. However, if the process uses an ETL tool, you have the choice to integrate your validation queries within the ETL tool.

I hope that you are now comfortable with the concept of database migration testing, whether the data is migrated between binary files and an RDBMS or between RDBMSs (e.g. Oracle, SQL Server, Informix or Sybase).

Wednesday, March 17, 2010

What is the best place to store test data for your automated tests?

You need test data to execute your tests. The test data is used to provide inputs to the application under test and/ or verify the test results. Test data may be used to check if the application works normally with expected inputs, handles incorrect inputs gracefully and (optionally) if the application works with multiple test data values. You can source the test data from an existing data store, create the test data by hand or automate the creation of the test data. First, view my Test Data tutorial. Then read below.

Now, the question arises: where should you store the test data once you have generated it? There are numerous possible data stores. Go through the comparison below.

The comparison covers four aspects: ease of setup, maintainability, re-use and cost.

Text files
- Ease of setup: Good. However, it is not simple to secure the text files and it is not possible to store images in them.
- Maintainability: It is easy to make mistakes while creating or editing the test data.

Spreadsheets
- Ease of setup: Good, since you are likely quite comfortable with spreadsheets.
- Maintainability: You may end up with test data in multiple sheets of multiple spreadsheets.
- Cost: Requires at least a spreadsheet viewer to read the spreadsheet.

Relational database
- Ease of setup: You first need to design the table structure to store your test data.
- Maintainability: Helped by permanent storage and the availability of tools to view and edit the test data.
- Re-use: Depends on whether the test data design is generic enough.
- Cost: May be high, owing to the cost of the RDBMS.

Test data management tool
- Ease of setup: Excellent, due to the features provided by the tool.
- Maintainability: Average to Excellent, depending on the features of the tool.
- Re-use: Possible, if porting to another test data management tool.
- Cost: Poor to Excellent, depending on the cost of the tool license.

XML files
- Ease of setup: You need to know XML, but it is good for defining hierarchies.
- Maintainability: Debugging test data may be challenging.

Application configuration files
- Ease of setup: Good; you can take the help of developers to set up your test data.
- Maintainability: Affected by the presence of other data related to application settings.
- Re-use: Possible, if porting to other applications under test.
- Cost: Poor to Excellent, depending on the cost of the development license.
Now, you know the different types of data stores for your automated tests.
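As a tiny example of the simplest store above, here is a sketch of reading test data rows from a delimited text file for a data-driven test. The columns and values are invented, and the file is inlined via io.StringIO to keep the example self-contained:

```python
import csv
import io

# Contents of a hypothetical test data file (normally on disk as e.g.
# login_data.csv); each row drives one execution of the test case.
TEST_DATA = """username,password,expect_login
alice,correct-pw,yes
alice,wrong-pw,no
"""

def load_test_data(fileobj):
    """Read delimited test data into a list of dicts, one per row."""
    return list(csv.DictReader(fileobj))

rows = load_test_data(io.StringIO(TEST_DATA))
# A test driver would now loop over `rows`, feeding each row's values
# to the application and checking the expected outcome.
```

The same loop works unchanged whichever store you pick; only load_test_data needs to change to read from a spreadsheet, database or XML file instead.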

Tuesday, March 16, 2010

What knowledge do you need to have in order to do good or best performance testing?

Performance testing deals with design and execution of tests that examine the performance of the application under test. The application's performance is measured using a number of metrics such as application's response times, application throughput and concurrent users. Many software testers struggle when they begin performance testing/ load testing. This is because performance testing requires familiarity with a number of special concepts as well as proficiency in certain special skills. However, the good news is that you can learn the required concepts, develop the required skills and deliver results in your performance tests successfully. The purpose of this post is not to define the key terms used in performance testing but to introduce them to you. You can search these terms on the web and build your knowledge. Definitely see this video on Load Testing and Performance Testing Questions and Answers.

1. Performance testing tools (commercial, open source and custom) There are numerous performance testing tools available publicly. Some tools are commercial and the others are open source. The full-featured tools provide the functionality to create test scripts, add test data, set up tests, execute the tests and display the results. If the performance testing tool has not been chosen yet, you should evaluate the tools according to your project requirements as per the evaluation process described here.
If you have the time and technical skills, you may even create your own simple tool to help in performance testing.
2. Profilers
Profilers are tools that measure data related to the application's calls or the resources (e.g. memory) used by the application when the application is running.
3. Virtual Users
If your application supports multiple users concurrently (at the same time), you should test your application's performance using multiple users. The users modeled by the tool are called the Virtual Users.
4. Key business transactions Your application may allow the user a large number of work flows or business transactions. All the work flows/ business transactions may not be important in performance testing. Therefore, it is common to test using only the important or key business transactions. Refer or solicit your application's performance requirements for guidance in this regard.
5. Workload Workload is the load on the multi-user applications in terms of virtual users performing different business transactions. For example, in the case of a social networking application, 50 virtual users may be searching contacts, 40 of them may be messaging and 10 may be editing their profiles, all within a period of 30 minutes.
6. Isolation of test environment In order to get results with confidence, it is critical that your test environment is used only for the purpose of the performance test. This means that no other systems or users should be loading the test environment at the time of the performance test. Otherwise, you may have trouble replicating (or even understanding) your test results.
7. Modeling (script and test)
You should script (model) the key business transactions as realistically as possible using the performance test tool. You should also design the tests with realistic test settings (e.g. virtual user ramp-up, user network bandwidth, user browser and so on) and the modeled workload.
8. Test data
It is common for the test scripts to be executed numerous times during one performance test. Therefore, you may need a large amount of test data that is not exhausted during the test. Once a performance test finishes, the application may be full of dummy test data entered by the test. You may need a means of cleaning up this data (for example, by re-installing the build).
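A minimal sketch of generating a test data pool large enough not to be exhausted during a long run, writing dummy users to a CSV file. The file name and columns are illustrative:

```python
import csv
import random
import string

def generate_users(path, count):
    """Write `count` unique dummy users so a long test cannot exhaust them."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["user_id", "username", "email"])
        for i in range(count):
            # Random prefix plus the index guarantees uniqueness.
            name = "".join(random.choices(string.ascii_lowercase, k=8)) + str(i)
            writer.writerow([i, name, name + "@example.com"])

generate_users("perf_test_users.csv", 10_000)
```

Most performance tools can then parameterize the scripted transactions from such a data file.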
9. Server configurations
You should be aware of and control the server configuration (CPU, memory, hard disk, network bandwidth and so on). This is because the application performance depends on the server resources.
10. Network configurations
You should know about the protocols used by your application. You should also know about load balancing (in case multiple servers are used by your application).
11. Client configurations
You should know the common client configurations (in terms of CPU, memory, network bandwidth, operating system, browser and so on) used by your users. This would help you model realistic tests.
12. Load generators
Depending on the load that you need to generate during the test, you may need one or more load generator machines. One tool may need fewer resources (CPU usage, memory usage etc.) per virtual user and another tool may need more resources per virtual user. You should have sufficient load generation capacity without maxing out your load generator(s) in any way.
13. Performance counters
During the test, you should monitor the chosen performance counters (e.g. % Processor time, Average disk queue length) on your load generators as well as the application's servers. You should choose the performance counters so that you may come to know about the depletion (or near depletion) of any important resource.
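Most performance tools collect counters for you; as an illustration of the underlying idea, the sketch below polls any counter-reading callable at a fixed interval. The dummy counter names mirror those mentioned above; in a real test the callable would be backed by the operating system's counters (e.g. via a library such as psutil, or perfmon on Windows):

```python
import time

def sample_counters(read_counters, interval_s, samples):
    """Poll a counter-reading callable at a fixed interval.

    `read_counters` is any zero-argument callable returning a dict of
    counter name -> value.
    """
    history = []
    for _ in range(samples):
        history.append((time.time(), read_counters()))
        time.sleep(interval_s)
    return history

# Demo with a dummy counter source standing in for the real OS counters.
dummy = lambda: {"% Processor time": 12.5, "Avg. disk queue length": 0.3}
history = sample_counters(dummy, interval_s=0.01, samples=3)
print(len(history))
```

The timestamped history can later be correlated with errors and response times from the same test run.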
14. Response time
You should be aware that the application response time includes the time it takes for the request to travel from the client to the server, the time it takes the server to create the response and the time it takes for the response to travel back to the client.
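This decomposition is worth keeping in mind when measuring: from the client you can only observe the total of the three parts. A minimal sketch using Python's standard library, demonstrated against a local file URL so the example runs without a server; in a real test the URL would point at the application under test:

```python
import pathlib
import tempfile
import time
import urllib.request

def measure_response_time(url):
    """Measure one client-observed response time.

    This total includes request transit time, server processing time and
    response transit time; separating those components needs server-side
    logs or a network capture.
    """
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()  # include the time to receive the full response body
    return time.perf_counter() - start

# Demo against a temporary local file standing in for a real endpoint.
with tempfile.NamedTemporaryFile("w", suffix=".html", delete=False) as f:
    f.write("<html>hello</html>")
elapsed = measure_response_time(pathlib.Path(f.name).as_uri())
print(f"{elapsed:.6f} s")
```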
15. Monitoring
During a performance test, you should be monitoring the test progress, any errors thrown as well as your chosen performance counters on your servers and load generators.
16. Results Analysis
After the completion of a performance test, you should spend time analyzing the test results. You should check whether the test created the required virtual users, generated the required load and ran to completion. You should check the errors thrown during the test. If you see any unusual results, you should form a conjecture to explain them and look at the data carefully to either accept or reject your conjecture.
It takes practice to be adept at analyzing performance tests well.
17. Reporting
You should be comfortable with reporting performance test results. It is common for the performance test reports to contain present/ past metrics as well as charts.
If you make some effort, it is not difficult to educate yourself with the knowledge required for performance testing.

Sunday, March 14, 2010

Multilingual Testing: How to test multilingual applications?

In the year 2008, I wrote about several tips to test multi-lingual applications. In order to perform this testing in the correct way, you need to be aware of and address the following issues:

1. Unrealistic test environment
2. Lack of correct translations
3. Navigating the application with the GUI in an unknown language
4. Not testing all labels, controls or the data
5. Not considering the cultural issues in unknown languages

You can read this article here.

If you have ever tested a multi-lingual application, did you face any other problem? Do you have any tips to offer?

Friday, March 12, 2010

Testing Reports: How to test reports with a checklist of 30+ items?

A report is an output produced by the application under test. Reports come in numerous varieties. Some reports mainly have numbers and/ or charts and some reports mainly have text. Some reports are short and some run into pages. Whatever type of report you test, you should find the following list of things to test in a report handy. See my video, How to test software reports?, that explains everything with examples or read on...

Tests for data
1. Does the report validate any input data (e.g. date range or username) provided to it?
2. Is the input data entered by the user shown correctly in the report?
3. Is the report created based on the input data given by the user?
4. Does the report calculate values (e.g. subtotals by a unit such as a reporting period, totals, averages, minimum, maximum) correctly?
5. Is each data item on the report formatted correctly?
6. Does the report group data on the basis of a unit correctly?
7. Does the report show, and use in its calculations, the correct statutory details (e.g. tax rates)?
8. Is each chart consistent with the data in the report?
9. If required, does the report show the correct reference number?
10. If required, does the report show the correct reference number(s) of the previous/ related report(s)?
11. Does the report show the correct text (e.g. executive summary, key drivers, approach used to create the report, current issues and action plan)?
12. If required, does the report show the correct supporting data?
13. Does the report include the correct date and time that it was created on?
14. Does the report show the correct names of the author, reviewer and approver?
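Several of the data checks above (subtotals by a unit, grand totals) can be automated by recomputing the values from the report's detail rows. A minimal sketch, assuming the report data has already been extracted into (period, amount) pairs; the function name and structure are illustrative:

```python
def check_report_totals(rows, report_subtotals, report_total):
    """Recompute subtotals per reporting period and compare with the report.

    `rows` is a list of (period, amount) pairs from the report's detail
    section; returns a list of mismatch descriptions (empty if all match).
    """
    computed = {}
    for period, amount in rows:
        computed[period] = computed.get(period, 0) + amount
    errors = []
    for period, expected in report_subtotals.items():
        if computed.get(period, 0) != expected:
            errors.append(f"subtotal mismatch for {period}")
    if sum(amount for _, amount in rows) != report_total:
        errors.append("grand total mismatch")
    return errors

# Demo: detail rows with subtotals by reporting period ("Q1", "Q2").
rows = [("Q1", 10), ("Q1", 5), ("Q2", 7)]
print(check_report_totals(rows, {"Q1": 15, "Q2": 7}, 22))  # []
```

The same pattern extends to averages, minima and maxima by recomputing each aggregate independently of the report.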
Tests for presentation
15. Is the report layout as agreed?
16. Does the report show the correct heading (name, purpose and the target audience of the report)?
17. If applicable, does the report show the correct table of contents?
18. Does the report draw any tables correctly? Do the tables clearly show the data? Is the data aligned within the tables? Are the tables aligned with each other?
19. Is the data correctly broken up into sections or pages?
20. Does similar data and text appear in the same font (faces, sizes, colors etc.)? Are these fonts as agreed?
21. Is the header correct? Does the correct header appear on every page?
22. Is the footer correct? Does the correct footer appear on every page?
23. If it is a multi-page report, do the correct page numbers appear on every page?
24. Are the graphics used (e.g. logo and background) correct?
25. Are the links on the report correct?
26. If required, does the report include notes for interpreting the data, resources for further information or next steps?
27. Is the report similar in look and feel to the other reports produced by the application?
Tests for other factors
28. Is the time taken to generate the report reasonable?
29. Is it possible to distribute the report (e.g. by email, shared drive, feed etc.) as required? Does the distributed report open on all supported devices?
30. Is it possible to export the report to all supported formats?
31. Does the report print completely, with the same data and the same presentation?
Please comment if you find this checklist useful in your project.

Thursday, March 11, 2010

Why your bugs are rejected (20+ reasons why developers can reject your bug reports)?

Scenario: You test your application. You discover some bugs. You are happy. You craft your bug reports and log them in the bug tracking system. The next day, you find that a number of the bugs submitted by you have been rejected by the developers. What may be the reasons? First, view my Bug Reporting tutorial. Then read below.

You may face bug rejections even when you are "seemingly" careful in your testing and bug reporting activities. Of course, you want your tests to be taken seriously. Do you know the various reasons why your bugs may be rejected? Reflect on the reasons given below and consider them when reporting bugs.

Bug rejection reasons related to requirements
1. There is no documented requirement that mentions the result expected in your bug report.
2. You are not aware of all the (minor) details (and caveats) of a requirement.
3. The requirement that you thought was not implemented is indeed implemented in another way. But you are not aware of this decision.
4. You are not aware about the changes to a particular requirement.
5. Your bug report relates to some work that has not started yet. Or, the work has started but has not been submitted for test yet.
6. Your bug report refers to some dummy test data in the application. This bug would not exist when the real/ realistic data is set up in the application.
7. The bug is in an area of the application not within the scope of development e.g. external systems, external content etc.
8. It has been like this since the beginning or for a long time OR the users have always accepted this result.

Bug rejection reasons related to test environment
9. The settings (e.g. language, region and fonts) on your test system are incorrect.
10. The application settings (e.g. settings in the administration section) used by you are incorrect.
11. Some required files are missing on your test system.
12. The required programs are missing on your test system. Or, you have the incorrect versions of the required programs.
13. The application does not support the platform (e.g. operating system and its versions or browser and its versions) tested by you.
14. You have tested against an old build. The bug is not present in the current build submitted for test.

Bug rejection reasons related to test data
15. The test data used by you is unrealistic.

Bug rejection reasons related to the test steps/ test case
16. The steps described in your bug report are unrealistic.

Bug rejection reasons related to the bug report itself
17. The Title (Summary) and Description of the bug report are mismatched.
18. It is not possible to understand your bug report (due to incomplete information provided or other reasons).
19. It is a duplicate bug report. For example, it is a complete duplicate of another bug report, OR it describes the same problem occurring in another area of the application as given in a valid bug report, OR it describes a problem that is a part of the problem described in another valid bug report.

Bug rejection reasons related to management decisions
20. A long time after you submitted the bug, it is decided that it is not feasible to put in the effort required to fix it.
21. Due to strategic changes in the plan, all bug reports related to specific application functionality are rejected. Your bug report is one of those that are rejected.

Other reasons
22. The bug report describes a problem that is due to another bug. When this other bug would be fixed, the problem mentioned in your bug report would not exist.
23. The bug is too minor.

Once your bug is rejected, you would do well to remain calm. Read your bug report carefully and try to reproduce the bug. Then, analyze the reason for the bug rejection, establish whether it is a valid bug or not and take appropriate action. If it really is an invalid bug, treat this event as a learning experience. If you think that it is a valid bug, you should confidently take further action in support of your bug report. Either way, you would become more mature as a tester. However, remain motivated and enthusiastic even as you mature. Motivation and enthusiasm play a big part in your testing success.
Have you ever had a bug rejected? What was the reason? Share it in your comment.

Wednesday, March 10, 2010

Software Test Estimation: How to estimate testing efforts (6 approaches to get test effort estimate)?

Test effort estimation is a skill required of a Test Lead or a Test Manager. However, test estimation is not a skill that one can learn quickly. It requires understanding of several key concepts and practice. In this post, I will explain what test effort estimation is, point you to your existing knowledge of estimation and provide you the key concepts that you can use in your estimation. First, view my video on Test Estimation techniques with formula example and Questions and Answers. Then read below.

First of all, we should understand what we mean by software test effort estimation. Test effort estimation is answering two basic questions about testing:
I. What will be done?
II. How much effort would it take?

There are other questions (e.g. Who will do what? When will they do it? How will they do it?) but these relate to planning and scheduling, not to effort estimation.

Even if you have not estimated test effort before (having relied on the effort estimates given by the client or your project manager), keep in mind that you do effort estimation on a regular basis. Let me explain. Do you recognize the following situations?

1. You are appearing in an examination. The duration of the examination is 3 hours and you have to answer n questions. You average the time available for answering one question while leaving out certain time for revision at the end. You look at the questions. Some questions are easy for you but some are not. You reserve less time than average for answering the simple questions and more time than average for the difficult ones.

2. You have to attend a job interview. The interview is at 10 a.m. You estimate the time it would take you to reach the interview venue, say 1 hour. You add some time e.g. 30 minutes for delays like traffic snarls. You estimate some time, say 30 minutes for collecting your documents and some time, say 30 minutes for dressing up. This means that you would need to wake up no later than 7:30 a.m. that morning to reach your interview venue in time.

3. It is the beginning of another day at work. Your manager has given you 20 test cases to execute today. In addition, you need to complete the annual self-appraisal form. You estimate that it would take you 1 hour to complete your appraisal form. Out of 8 hours of your work day, you have 7 hours remaining. You reckon that you need to execute a test case every 21 minutes (7 hours X 60 minutes / 20 test cases).
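The arithmetic in situation 3 can be captured in a couple of lines; the function name is just for illustration:

```python
def minutes_per_test_case(workday_hours, other_tasks_hours, test_case_count):
    """Average time available per test case, as in situation 3 above."""
    available_minutes = (workday_hours - other_tasks_hours) * 60
    return available_minutes / test_case_count

# 8-hour day, 1 hour for the appraisal form, 20 test cases to execute.
print(minutes_per_test_case(8, 1, 20))  # 21.0
```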

If the above situations look common to you, it means that you already do effort estimation even if you do not consciously recognize it as such.

How to estimate testing efforts?

Next, let us see the factors that you need to consider before you do test effort estimation:

a. Size of the system
It would take longer to test a larger system. In some projects, it is possible to know about the size of the system in terms of Function Points, Use Case Points or Lines of Code. You should take the size of the system into account when estimating the test effort.

b. Types of testing required
Sometimes, it is important to perform multiple types of testing on the system. For example, other than functional testing, it may be necessary to perform load testing, installation testing, help files testing and so on. You should create the effort estimates for each type of testing separately.

c. Scripted or exploratory testing
It may be feasible to only execute test cases or do exploratory testing or do both. If you intend to do scripted testing and do not have test cases available, you should estimate the time it would take to create the test cases and maintain them. Scripted testing requires test data to be created. If the test data is not available, you should estimate the effort it would take to create and maintain test data.

d. "Non-testing" activities
Apart from creating and executing tests, there are other activities that a tester performs. Examples include creating test logs/ reports, logging defects and entering time in the project management tool.

e. Test cycles
By a test cycle, I mean a complete round of testing (build verification testing followed by attempted execution of all test cases followed by all defects logged in the defect tracking system). In practice, one test cycle is not sufficient. You should estimate the number of test cycles it would take to promote the system to the client or production.

Now, let us understand the various approaches that you can use for test effort estimation. You may choose any of these approaches for your estimation. However, in my opinion, a combination of multiple approaches works best (by best, I mean that the effort estimates are close to the real actual efforts). In any case, you should be aware about the following approaches:

1. Use historical data from other projects
This approach is useful when you have effort data available from earlier projects which are very similar to the current project. For example, this approach is useful in the case of long-running projects where the test effort data from previous releases is readily available.

2. Your organization's approach
Your organization may have their custom approach to estimate test effort in projects.

3. Delphi method
This is useful when you have a number of experts knowledgeable in the testing to be done. The experts estimate separately and then their estimates are consolidated.

4. Use your own expert judgment
This approach is useful to arrive at a rough test effort estimate quickly.

5. Software size based approach
If the size of the system is available and the formula to convert software size to test effort is available, this approach may be used.

6. Activities based approach
This approach is useful if you can list the activities required. This approach may be used Top-Down (listing the high level activities and breaking them down to lower level activities) or Bottom-Up (listing the individual activities and combining them into higher level activities). Using this approach in the Top-Down manner is better since you can control the level of detail in your effort estimate. Remember to consider activities for each type of testing, any test cases or test data that need to be created, the "non-testing" activities and the multiple test cycles.
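As an illustration of the Top-Down variant, the sketch below breaks a few high-level activities into tasks and rolls the hours back up. The activities and the hour figures are purely invented for the example:

```python
# A minimal top-down, activities-based estimate (all figures illustrative).
estimate = {
    "functional testing": {
        "create test cases": 40,
        "create test data": 16,
        "execute test cycle 1": 60,
        "execute test cycle 2": 40,
    },
    "load testing": {
        "script transactions": 24,
        "execute and analyze": 16,
    },
    "non-testing activities": {
        "test logs and reports": 10,
        "logging defects": 8,
    },
}

# Roll the task-level hours up to activity and project totals.
for activity, tasks in estimate.items():
    print(f"{activity}: {sum(tasks.values())} hours")
print("total:", sum(sum(tasks.values()) for tasks in estimate.values()), "hours")
```

Keeping the breakdown in a structure like this makes it easy to adjust the level of detail and to compare the estimate against actuals later.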

As I mentioned before, you may choose any approach to do your test effort estimation. However, using at least two approaches is better. This way, you can compare the test effort estimates and address any obvious problems. Whatever hybrid approach you choose, you should document the assumptions made by you in the estimate.

Once you have arrived at the test effort estimate for your project and have convinced the stakeholders that it is a reasonable estimate, it does not stop there. You should track the actual progress in your project constantly to see if it is in line with your test effort estimate. You may find that some of your assumptions were not correct. You should revise your assumptions and your approach in line with your observations.

Continue to use your refined test effort estimation approach across test cycles and releases. In time, you should have a good estimation approach available with you.

Refer to the Q and A related to software test effort estimation. These guide you during your test effort estimation and in your discussions with project stakeholders. In addition, these make for great interview questions.

Wednesday, March 3, 2010

How to do real database testing (10 tips to perform serious database tests)?

Many (but not all) applications under test use one or more databases. The purposes of using a database include long-term storage of data in an accessible and organized form. Many people have only a vague idea about database testing. If you are serious about learning database testing, view the videos, Database Testing and SQL Tutorial for Beginners. Then read on...

Firstly, we need to understand what database testing is. As you would know, a database has two main parts - the data structures (the schema) that store the data AND the data itself. Let us discuss them one by one.

Database testing

The data is stored in the database in tables. However, tables may not be the only objects in the database. A database may have other objects like views, stored procedures and functions. These other objects help the users access the data in required forms. Database testing involves finding out the answers to the following questions:

Questions related to database structure
1. Is the data organized well logically?
2. Does the database perform well?
3. Do the database objects like views, triggers, stored procedures, functions and jobs work correctly?
4. Does the database implement constraints to allow only correct data to be stored in it?
5. Is the data secure from unauthorized access?

Questions related to data
1. Is the data complete?
2. Is all data factually correct i.e. in sync with its source, for example the data entered by a user via the application UI?
3. Is there any unnecessary data present?

Now that we understand database testing, it is important to know about the 5 common challenges seen before or during database testing:

1. Large scope of testing
It is important to identify the test items in database testing. Otherwise, you may not have a clear understanding of what you would test and what you would not test. You could run out of time well before finishing the database test.
Once you have the list of test items, you should estimate the effort required to design the tests and execute the tests for each test item. Depending on their design and data size, some database tests may take a long time to execute. Look at the test estimates in light of the available time. If you do not have enough time, you should select only the important test items for your database test.

2. Incorrect/ scaled-down test databases
You may be given a copy of the development database to test. This database may only have little data (the data required to run the application and some sample data to show in the application UI). Testing the development or test or staging databases may not be sufficient. You should also be testing a copy of the production database.

3. Changes in database schema and data
This is a particularly nasty challenge. You may find that after you design a test (or even after you execute a test), the database structure (the schema) has been changed. This means that you should be aware of the changes made to the database during testing. Once the database structure changes, you should analyze the impact of the changes and modify any impacted tests.
Further, if your test database is being used by other users, you would not be sure about your test results. Therefore, you should ensure that the test database is used for testing purpose only.
You may also see this problem if you run multiple tests at the same time. You should run one test at a time at least for the performance tests. You do not want your database performing multiple tasks and under-reporting performance.

4. Messy testing
Database testing may get complex. You do not want to be executing tests partially or repeating tests unnecessarily. You should create a test plan and proceed accordingly while carefully noting your progress.

5. Lack of skills
The lack of the required skills may really slow things down. In order to perform database testing effectively, you should be comfortable with SQL queries and the required database management tools.

Next, let us discuss the approach for database testing. You should keep the scope of your test as well as the challenges in mind while designing your particular test design and test execution approach. Note the following 10 tips:

1. List all database-specific requirements. You should gather the requirements from all sources, particularly technical requirements. It is quite possible that some requirements are at a high level. Break-down those requirements into the small testable requirements.

2. Create test scenarios for each requirement as suggested below.

3. In order to check the logical database design, ensure that each entity in the application (e.g. actors, system configuration) is represented in the database. An application entity may be represented in one or more tables in the database. The database should contain only those tables that are required to represent the application entities and no more.

4. In order to check the database performance, you may focus on its throughput and response times. For example, if the database is supposed to insert 1000 customer records per minute, you may design a query that inserts 1000 customer records and print/ store the time taken to do so. If the database is supposed to execute a stored procedure in under 5 seconds, you may design a query to execute the stored procedure with sample test data multiple times and note each time.
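As a minimal sketch of such a timing test, the snippet below times 1000 inserts using Python's built-in sqlite3 module. A real performance test would run against a production-like database server with realistic data volumes, not an in-memory SQLite database:

```python
import sqlite3
import time

# In-memory database with an illustrative customer table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

# Time 1000 customer inserts, as in the example requirement above.
start = time.perf_counter()
conn.executemany(
    "INSERT INTO customer (name) VALUES (?)",
    [(f"customer_{i}",) for i in range(1000)],
)
conn.commit()
elapsed = time.perf_counter() - start
print(f"Inserted 1000 rows in {elapsed:.3f} s")
```

Storing the elapsed times across several runs lets you compare them against the stated throughput or response-time requirement.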

5. If you wish to test the database objects e.g. stored procedures, you should remember that a stored procedure may be thought of as a simple program that (optionally) accepts certain input(s) and produces some output. You should design test data to exercise the stored procedure in interesting ways and predict the output of the stored procedure for every test data set.

6. In order to check database constraints, you should design invalid test data sets and then try to insert/ update them in the database. An example of an invalid data set is an order for a customer that does not exist. Another example is a customer test data set with an invalid ZIP code.
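The invalid-data idea (an order for a customer that does not exist) can be sketched with a foreign key constraint, again using sqlite3 so the example is self-contained. Note that SQLite enforces foreign keys only after the pragma shown:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only with this pragma
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE orders ("
    "id INTEGER PRIMARY KEY, "
    "customer_id INTEGER NOT NULL REFERENCES customer(id))"
)
conn.execute("INSERT INTO customer (id) VALUES (1)")

# A valid order should be accepted.
conn.execute("INSERT INTO orders (customer_id) VALUES (1)")

# An order for customer 999, who does not exist, should be rejected
# by the foreign key constraint.
try:
    conn.execute("INSERT INTO orders (customer_id) VALUES (999)")
    constraint_fired = False
except sqlite3.IntegrityError:
    constraint_fired = True
print("constraint fired:", constraint_fired)
```

The same pattern applies to CHECK constraints (e.g. an invalid ZIP code): insert the invalid data set and expect the database to reject it.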

7. In order to check the database security, you should design tests that mimic unauthorized access. For example, log in to the database as a user with restricted access and check if you can view/ modify/ delete restricted database objects or view/ update restricted data. It is important to back up your database before executing any database security tests. Otherwise, you may render your database unusable.
You should also check to see that any confidential data in the database e.g. credit card numbers is either encrypted or obfuscated (masked).

8. In order to test data integrity, you should design valid test data sets for each application entity. Insert/ update a valid test data set (for example, a customer) and check that the data has been stored in the correct table(s) in the correct columns. Each item in the test data set should have been inserted/ updated in the database. Further, the test data set should be inserted only once and there should be no other change to the rest of the data.
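A minimal sketch of such an integrity check, using an in-memory SQLite table with illustrative columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, zip TEXT)")

test_customer = (1, "Alice", "90210")
before = conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
conn.execute("INSERT INTO customer VALUES (?, ?, ?)", test_customer)

# Check the data landed in the correct table and columns, exactly once.
rows = conn.execute("SELECT id, name, zip FROM customer WHERE id = 1").fetchall()
assert rows == [test_customer], "data not stored as expected"
after = conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
assert after == before + 1, "unexpected change in row count"
print("integrity check passed")
```

In a real application the insert would happen through the application UI or API, and the verification query would run against the application's actual tables.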

9. Since your test design would require creating SQL queries, try to keep your queries as simple as possible to prevent defects in them. It is a good idea for someone other than the author to review the queries. You should also dynamically test each query. One way to test your query is to modify it so that it just shows the resultset and does not perform the actual operation (e.g. insert, delete). Another way to test your query is to run it for a couple of iterations and verify the results.

10. If you are going to have a large number of tests, you should pay special attention to organizing them. You should also consider at least partial automation of frequently run tests.

Now you should know what database testing is all about, the problems that you are likely to face while doing database testing and how to design a good database test approach for the scope decided by you.