
May 27, 2023

Load Testing


Load Testing is a specialized type of performance testing used to assess the performance (e.g. page load times and processing times) and stability (e.g. error-free operation with a specific number of users) of your system under expected and peak load conditions. It involves subjecting the system to a number of virtual (simulated) users or transactions to measure its latency, response time, throughput, resource usage, and scalability.

Examples of Load Testing

  • You perform Load Testing on an e-commerce website to determine how it handles concurrent user requests during peak shopping seasons (for instance Thanksgiving in the USA or Diwali in India), ensuring that the system remains responsive and can handle the expected workload.
  • In a video streaming platform, you conduct Load Testing to evaluate the platform's ability to deliver high-quality videos to multiple users simultaneously, without buffering or degradation of resolution or audio quality.
  • For a banking system, you can use Load Testing to validate the system's response time and stability when processing a large number of financial transactions in a short period, such as during salary transfers or tax filing seasons.

Tips for Load Testing

  • First, identify the most frequently executed or performance-critical business processes in your software.
  • Identify realistic load scenarios that reflect the expected user behavior and transaction volume, considering factors such as peak usage periods, geographical distribution, and user actions.
  • Popular tools for Load Testing include Apache JMeter, LoadRunner, Gatling, and Locust, which provide features for simulating concurrent user traffic, generating realistic load scenarios, and analyzing performance metrics.
  • When you run a load scenario, monitor key performance indicators (KPIs) like network and software latency, response time, throughput, CPU and memory utilization, and database performance, to identify performance bottlenecks (areas for improvement).
  • Gradually increase the load in your load scenario to simulate realistic workload and observe how the system handles the increasing demand, helping to uncover scalability limitations.
  • Conduct Load Testing on a production-like environment that closely resembles the actual production setup to obtain accurate performance insights and avoid discrepancies due to different hardware or configurations.

FAQ (interview questions and answers)

  1. What is the purpose of Load Testing?
    To assess how a system performs under expected and peak load conditions and to check whether it can handle the anticipated user traffic, transactions, or data volume while maintaining acceptable performance levels.
  2. Is it necessary to perform Load Testing for every software application?
    No, but it is useful for applications that are expected to experience high usage or have critical performance requirements, such as e-commerce platforms, banking systems, and applications with a large user base.
  3. Can Load Testing uncover scalability issues in a system?
    Yes, Load Testing is an effective way to identify scalability issues in a system. By gradually increasing the load and measuring the system's performance, Load Testing can reveal bottlenecks (such as resource limitations or architectural flaws) that may impact the system's ability to handle increased user demand.
If you have any doubts or queries, please comment below.


September 15, 2015

HP LoadRunner Version 12.02 Tutorials

LoadRunner Introduction tutorial: This video introduces HP LoadRunner and the LoadRunner performance testing process. It also explains how to download and install LoadRunner.

LoadRunner business process and sample web application tutorial: First, we gather the requirements. This includes identifying the business processes to be tested. In this video, we understand what a business process is and see a sample business process. Then we execute that business process in the sample web application provided with LoadRunner, the HP Web Tours application. In performance testing, when we run the business process, we should choose the most common or at least realistic test data. Our requirement gathering should include finding this test data.

LoadRunner VuGen Vuser script tutorial: In this video, I have explained how to create the automated performance testing script in LoadRunner Virtual User Generator (VuGen). In VuGen, we can create a single protocol script and record the business process. We can edit the generated Vuser script to enhance it. The VuGen user interface has several useful features like the Solution Explorer, Step Navigator, Editor, Output pane, Errors pane, Bookmarks pane and Thumbnail Explorer.

LoadRunner Tutorial 4 - Vuser Script Replay: In this video, I have explained how to replay the recorded Vuser script. The results of the replay (or playback) are shown in the replay summary tab in VuGen, in the test results window and in the replay log. We can save the replay log as a text file. LoadRunner VuGen provides run-time settings, which are individual settings for each Vuser script. These run-time settings include run logic, pacing, log, think time, browser emulation and speed simulation. We can also play back a Vuser script from the command prompt instead of VuGen.

LoadRunner Tutorial 5 – Parameterization (data-driven Vuser scripts): In this video, I have explained one of the concepts to make the Vuser script ready for load testing: parameterization. It means that instead of using a fixed value in our Vuser script, we can use a parameter. In the load test, different Vusers would then use different values, which makes the Vuser script more realistic. Also, this separates the data from the script code, making the Vuser script data-driven.
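
To make this concrete, here is a hedged sketch of what parameterization typically looks like in a web Vuser script. It is based on the HP Web Tours login step; the parameter name UserName is an assumption (it would be defined in VuGen's parameter list, e.g. bound to a data file), and the URL is illustrative.

// Before parameterization: the recorded, fixed value
web_submit_data("login",
    "Action=http://localhost:1080/WebTours/login.pl",
    "Method=POST",
    ITEMDATA,
    "Name=username", "Value=jojo", ENDITEM,
    LAST);

// After parameterization: VuGen substitutes a value from the
// parameter list for each Vuser/iteration at run time
web_submit_data("login",
    "Action=http://localhost:1080/WebTours/login.pl",
    "Method=POST",
    ITEMDATA,
    "Name=username", "Value={UserName}", ENDITEM,
    LAST);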

LoadRunner Tutorial 6 – text check and image check: In this video, we see one more concept to make the Vuser script ready for load testing: content checks. In order to verify that the server returns the correct responses, we can use text checks or image checks. By default, text checks and image checks are disabled during playback because they consume more memory; we need to enable them in the run-time settings. We can implement a text check with the web_reg_find function and an image check with the web_image_check function.
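
To illustrate, here is a minimal sketch of both checks, assuming the HP Web Tours home page. The text, the image file name and the URL are illustrative, and the checks only run if they are enabled in the run-time settings as noted above.

// Text check: register it before the step whose response it verifies
web_reg_find("Text=Web Tours", LAST);
web_url("HomePage",
    "URL=http://localhost:1080/WebTours/",
    LAST);
// Image check: verifies an image on the page just retrieved
// (the file name is illustrative)
web_image_check("LogoCheck", "Src=webtours.png", LAST);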

LoadRunner Tutorial 7 – Transactions: In this video, we see one more important concept to make the Vuser script ready for load testing: transactions. A transaction is a part of a Vuser script that is used to measure the time it takes to complete one or more actions of a business process. We insert an lr_start_transaction marker before the first step and an lr_end_transaction marker after the last step of our transaction. In HP LoadRunner VuGen, we can mark any number of transactions in a Vuser script, but the name of each transaction has to be unique. The LoadRunner Controller measures the time it takes to perform each transaction.
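
For example, a login step could be timed like this (a minimal sketch; the step inside the transaction is illustrative):

lr_start_transaction("Login");          // timer starts here
web_submit_data("login",
    "Action=http://localhost:1080/WebTours/login.pl",
    "Method=POST",
    ITEMDATA,
    "Name=username", "Value={UserName}", ENDITEM,
    LAST);
lr_end_transaction("Login", LR_AUTO);   // timer stops; LR_AUTO sets Pass/Fail automatically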

LoadRunner Tutorial 8 - Controller - Manual Scenario: In this video, we start with the LoadRunner component called Controller. Using Controller, we can set up the performance test, run it and monitor the performance test as it is running. The settings of a performance test are called a scenario. We can set up a Manual Scenario. Here we need to specify the number of virtual users. We add the Vuser scripts in the Scenario Groups pane and schedule the scenario in the Scenario Schedule pane. Our scenario settings should be as close as possible to what happens in the real production environment.

LoadRunner Tutorial 9 - Controller user interface and manual scenario continued: In this video, we understand the user interface of the Run tab in LoadRunner Controller. It has the panes for scenario groups, scenario status, available graphs, graph display and graph legends. In the Vusers dialog box, we can stop any Vuser, add Vusers, run Vusers, see the tasks of any Vuser and see the Vuser log. We can see the errors, warnings and other messages in the output window.

LoadRunner Tutorial 10 - Controller goal oriented scenario explained: In this video, we understand the goal-oriented scenario in Controller. In a goal-oriented scenario, we specify the goal, for example transaction response time or number of hits per second. LoadRunner creates a scenario based on the goal automatically. There are five types of goals: Vusers, transactions per second, transaction response time and, for web applications, pages per minute or hits per second.

LoadRunner Tutorial 11 - Analysis and SLA: In this video, we learn about the Analysis component of LoadRunner. Analysis collects and consolidates the test logs from all the load generators. Using Analysis, we can see the performance test results. In Analysis, we can analyze the performance test results by comparing the measurements to our performance requirements. Or we can review multiple graphs or even merge several graphs into one graph. We also learn about the SLA (Service Level Agreement). An SLA consists of one or more goals specified for the load test scenario. Analysis compares the goals with the test results and determines whether each SLA status is Passed or Failed.

LoadRunner Tutorial 12 - Analysis detailed explanation: In this video, we dive deeper into the Analysis component of LoadRunner. Analysis shows the summary report with sections of statistics summary, worst performing transactions, scenario behavior over time and transaction summary. We can view graphs. In addition, we can merge graphs together. In order to see the factors related to a transaction’s performance, we can use auto-correlate. In Analysis, both HTML report and Microsoft Word report are available.

October 02, 2013

JMeter Web Performance Testing Training Videos


You can get trained in web performance testing with JMeter by using our training videos. These high definition videos are 13 hours long altogether. They cover many aspects of performance testing - performance testing process, using JMeter effectively, setup and execution of performance tests, analysis of results and much more.

August 19, 2013

JMeter Web Performance Testing Training Course


I have launched this course in web performance testing using the leading free open source tool, Apache JMeter. As you may know, I have a number of years' experience in delivering performance testing and load testing projects successfully. I created this course after repeatedly finding that even experienced software testing professionals have many gaps in their knowledge of performance testing and load testing. This course has 13+ hours of online training videos with lots of example performance test plans. The topics taught in this course include performance testing concepts, JMeter installation and using the UI effectively, using the Proxy Server, building requests, Thread Groups, Logic Controllers, Samplers, Listeners and statistical results, data parameterization, test script and test plan modeling, server technologies, profilers and many more. In order to get started, please see my demo videos:

JMeter short tutorial

JMeter detailed tutorial

You can see more details of these training videos at JMeter Web Performance Testing Training Videos.

August 06, 2013

Performance Test Reports - JMeter Listeners Tutorial

I run a training course on Apache JMeter called Web Performance Testing with JMeter. One of the important concepts in JMeter is Listeners, and I want to share it with you. In JMeter vocabulary, reports are called Listeners. Listeners are used to collect the performance test results and display them to the performance tester. Now, let us learn about Listeners in JMeter and how to use them. You can see Listeners working in my short JMeter video.

Listeners are used to collect and display performance test results. But there are many types of Listeners. One of my favorites is View Results in Table. It is a simple Listener that shows data about each response. Here is what it looks like.


In the above Listener, Sample Time (ms) indicates the Response Time, Status indicates whether the request was successful (green means yes, orange means no), Bytes indicates the size of the response and Latency indicates the Latency Time. Of course, you can copy the test results to Microsoft Excel and format them further. But there is an easier option: you can have JMeter save the test results to a CSV file or an XML file. Further, you can specify which fields you want to save by clicking the Configure button. JMeter can also summarize the test results automatically for you. For this, you can use the Summary Report. Here is what it looks like. It summarizes the results for each request.


JMeter has Listeners that automatically show the test results graphically. There are several like Spline Visualizer and Distribution Graph but my favorite is Graph Results. Here is what it looks like. It shows the Average and Median Response Time and the Throughput.


Some other points to note are:
1. All Listeners have access to the same test results. Only their display is different.
2. As with other test elements, Listeners work according to their scope. This means that if you put a Listener under the Test Plan, it will capture and show all the test results. If you put the Listener under a request, it will capture and show the test results of that request only.
3. In a particular scope, Listeners are last in execution order because they need all the results as input.
4. Listeners use substantial CPU and RAM resources on the computer. In order to avoid the Listeners becoming a bottleneck, you should decide the best Listener for your purpose and put just that Listener in your Test Plan.

Overall, JMeter Listeners are basic but configurable and very simple to use. You can save effort on test results reporting by using these available Listeners.

March 16, 2013

Apache JMeter: How to build and run a web test?

JMeter is a popular open source load testing tool written in the Java programming language. Among other things, JMeter can be used to load test web applications. It is very simple to deploy JMeter on a Windows computer. JMeter is full of features to build, configure and run a realistic web test. It can also be used to perform functional testing on a web application. With this introduction, let us see how we can quickly build and execute a web test in Apache JMeter.

If you are new to performance testing or JMeter, please see my video, JMeter Load Testing Beginner tutorial. I have explained performance testing in the first 40 minutes. In the next 40 minutes, I have demonstrated the important basic JMeter features.

Also, you can see a complete JMeter load test explained in my video, Learn JMeter Load Testing in 18 minutes.

The latest release of JMeter can be downloaded here. At this time, the latest version is 2.9. JMeter requires Java 6 or later (meaning that the client computer needs to have Java version 1.6.x or later). You can check the Java version on your computer by running the command java -version in a command prompt window. If it shows the correct Java version, you can proceed to set the JAVA_HOME environment variable under Control Panel > System > Advanced System Settings > Environment Variables > User variables. The JMeter binary comes as a zip file that you need to extract to a folder on your computer. Under this folder, there will be a folder called bin. You can launch JMeter by executing the jmeter.bat file in the bin folder.


Building the web test plan
We will now build a simple web test. This test will involve navigating to the Software Testing Space blog and then my profile page. Add a Thread Group by right-clicking on the Test Plan node and then clicking Add > Threads (Users) > Thread Group. Leave all values at their defaults.
Now, add an HTTP Request by right-clicking the Thread Group and clicking Add > Sampler > HTTP Request. Change the Name value to HomePage. Set the Server Name or IP value to inderpsingh.blogspot.com. Then scroll to the bottom and check Retrieve All Embedded Resources from HTML files. Add another HTTP Request. Change the Name value to ProfilePage. Set the Server Name or IP value to blogger.com. Set the Path value to profile/05923580987480854491.


Running the web test plan
Before running the web test, you need to arrange to get the results. For this, add a Listener. A simple Listener is View Results in Table. Add it by right-clicking on the Thread Group and then clicking Add > Listener > View Results in Table. Now click the green Start button or, in the menu, click Run > Start. If you have not saved your web test plan, you can do so now. You should see the results. Results containing green icons indicate success; those containing orange/ red icons indicate warnings or errors. If your results contain green icons, it means that you successfully ran the web test plan with 1 thread (meaning an independent virtual user) running each HTTP Request once.
Now, run the same web test plan with 2 users running 3 iterations of 2 HTTP requests. Important: Please note that this blog is not a web site to be stressed. This is only an example. Therefore, please be reasonable and use very small numbers. Change the Thread Group values. Put Number of Threads (Users) as 2. Put Loop Count as 3.
Click on View Results in Table. Then click the Clear button in the toolbar to clear the previous results. Run the web test plan by clicking the Start button. When the test plan is running, you should see 2/2 in the upper right corner. This means that 2 threads/ users out of a total of 2 are active. Once the requests are sent and responses received, you should get the new results.

As you can see, it is quite easy to build and run simple web tests in JMeter. I hope that this post was able to raise your interest in using JMeter. You can see a more complex and complete JMeter load test plan in my video, JMeter Load Testing.

October 28, 2011

Performance Test Scripts Sections

Performance test scripts model the virtual user's expected interaction with the system. A performance test script is usually created within the performance testing tool. The default performance test script generated by the tool needs to be refactored, parameterized, correlated and unit tested before it can be used in a performance test. Each performance test script contains various sections. It is important to know about these in order to create robust scripts that work correctly. A minimal script sketch follows the list of sections below.

1. Environment: This section of a performance test script contains the general information related to the entire script. Examples of data in the environment section are repository description of the scripts, protocol used, browser used and time units (e.g. ms) used.

2. Include: This section gives references to other pre-existing scripts that contain functions, constants and variables used in the current performance test script. An example of an include script is a file containing all the browser response status codes (e.g. 200, 404 and 500).

3. Variables: These are used when it is not possible to know the data value in advance. For example, a performance test script modeled to work with any username/ password would use variables to read these values at run-time from a data source (e.g. a CSV file) and subsequently use these variables to perform user actions. Another example is using a variable to store the cookie value, which cannot be predicted in advance. 
The scope of a variable can differ. A variable can be local to a script and a virtual user. Or it can be local to a particular virtual user across all scripts executed by this virtual user. Or the variable can be global in scope across all scripts and all virtual users in the load test.

4. Constants: These are defined once in the performance test script and may be used multiple times in the script. They provide configuration control. A change in a constant value is automatically reflected wherever the constant is used in the entire script.

5. Timers: These are special variables that track the time elapsed between sending a request to the system and the loading of the response received from the system. Timer values are aggregated to determine the response time of an entire user transaction or a part thereof.

6. Code: This is the main section of the performance test script. It contains script instructions that model a user performing a transaction in the system. It also contains the validation checks on the responses given by the system. The code is written in the scripting language generated by the performance testing tool or any scripting language that is supported by the performance testing tool.

7. Waits: These are commonly used to model the pauses given by users between operations in the system. The performance testing tool does nothing during the wait period. Note that if all Wait statements were removed, it would put an unrealistic load on the system due to the non-stop issuing of requests by the user.

8. Comments: These are useful to explain the sections in a performance test script. Comments are especially important in scripts representing lengthy user transactions.
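
To tie these sections together, here is a minimal, hypothetical sketch in the LoadRunner C scripting style. The URL and the Destination parameter are assumptions for illustration; the Environment and Include information would live in the script's global files rather than in the Action itself.

Action()
{
    /* Comment: one search step of a flight-booking business process */

    lr_think_time(5);                            /* Wait: models the user's pause (seconds) */

    lr_start_transaction("SearchFlight");        /* Timer: start measuring */
    web_url("Search",
        "URL=http://localhost:1080/WebTours/search.pl?dest={Destination}",  /* Variable resolved at run time */
        LAST);
    lr_end_transaction("SearchFlight", LR_AUTO); /* Timer: stop; values are aggregated into response times */

    return 0;
}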

April 01, 2010

Stress Testing

Prem Phulara commented on my earlier post, What knowledge do you need to have in order to do good or best performance testing?, asking me to explain stress testing.

This post is dedicated to Prem. In this post, we will discuss stress testing.

Stress testing is a special type of performance testing. It is used to stress (or in other words, put a high load on) the system to determine its highest operational capacity. Stress tests establish the highest load that the system is able to withstand while still being operational (and not throwing too many errors). The additional benefit of stress testing is that it exposes the resources that run out the fastest when the load on the system approaches its maximum sustainable value. It is common to design a series of load tests with increasing loads to find out the maximum sustainable load.

Here are the tips to perform stress testing correctly but quickly:

1. Before you begin work on stress testing your system, you should be aware of the goals of your stress test. For example, your goal could be to determine the maximum throughput of your application for a given business transaction/ workflow on the given infrastructure or your goal could be to determine the maximum number of concurrent users with a given workload and the given infrastructure.

2. It is crucial that you identify the correct performance testing tool for your stress test. Otherwise, you would be forced to make compromises in your tests. See How to evaluate automated software test tools quickly?

3. You would need to model test script(s) according to the business transactions specified or implied in your goals.

4. If your scripts are correct, a large number of business transactions would execute during your tests. Therefore, ensure that you have a large set of test data that is not exhausted during the test. You may consider sourcing any existing test data from your application's database. Another way of quickly generating some test data is to use Microsoft Excel to extend a series. There are also free test data generators available on the web.

5. As with other tests, you should have an isolated (as far as possible) environment for conducting your tests. Unless you stress test a system on a stand-alone machine, your test environment may consist of the client machine(s), the load generator machine(s), the server(s) (web server(s), application server(s), database server(s) and so on) and the connectivity between the above. You would need your performance testing tool installed on at least one machine. You would set up your test, run it and analyze the test results from this machine. Depending on your choice of the performance testing tool, other load generator machine(s) may or may not require the full performance testing tool installed on them.

6. When you set up your first load test, you should model it realistically. For example, you should add the required scripts to the test, set up the specifications of the Virtual Users (e.g. initial VU, ramp-up, constant load, ramp-down and distribution of VU assigned to each script etc.) and give the run-time settings (e.g. test duration, time lag between iterations, network bandwidth distribution of the VU, their browser distribution etc.). You would really be limited by the features provided by the chosen performance testing tool.
You should specify a low load in your first load test. This is when you could check if your script(s) work correctly (e.g. the script reads the test data, it accesses the system and interacts with it, it is able to send data to the system which is accepted by the system and it is able to send/ receive data to/ from other script(s) as required). If you have multiple scripts in your test, you could check if the scripts run in the correct sequence or in parallel, as specified. When your test is running, you should check the performance of each machine in your test environment e.g. check that the server(s) and load generator(s) have become busier but not to their limit. On Microsoft Windows machines, you may use the PerfMon (Start > Run > PerfMon) tool to add your desired counters and monitor them during the test. See other tips in the post, Performance testing using HP LoadRunner software.

7. After establishing that your load test is valid, you should increase the load systematically. Do not go "big bang" and deploy the load that you think is the maximum sustainable load. For example, if you think that your system may support up to 1000 concurrent Virtual Users, start with a load test with 100 VU and keep adding 100 VU every 5 minutes. You may find that your system supports 800 concurrent VU but crashes with a load of 900 concurrent VU. After resetting your test environment, you should then start with a load test with 800 VU and keep adding 10 VU every 30 seconds. Then you may find that your system supports 850 VU but crashes with a load of 860 concurrent VU. Repeat the last test at least another couple of times to confirm your test result. This coarse-to-fine search is sketched in code after these tips.

8. If you recall, I mentioned that the additional benefit of a stress test is to identify bottlenecks or resources that run out the fastest when the load is increased. You should analyze the test results of several of your last tests to find out about these bottlenecks. Maybe it is the network which is clogged, or your web server runs out of available memory. Confirm your finding by repeating the last few tests. Your system's capacity may be improved by increasing the resource that runs out first. Ensure that you include the details of the important bottlenecks observed during your stress test in your test report.
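
The coarse-to-fine search in tip 7 can be summarized in code. The sketch below is purely conceptual: system_survives is a hypothetical stand-in for running a real load test at a given VU level and judging whether the system stayed operational, and the 850 VU limit is simply the number from the example above.

#include <stdio.h>

/* Stub for illustration only: pretend the system stays operational up to 850 VU */
static int system_survives(int vu) { return vu <= 850; }

/* Coarse steps first, then fine steps, to home in on the maximum sustainable load */
static int find_max_load(int start, int coarse_step, int fine_step)
{
    int load = start;
    while (system_survives(load + coarse_step))
        load += coarse_step;            /* e.g. add 100 VU every 5 minutes */
    while (system_survives(load + fine_step))
        load += fine_step;              /* e.g. add 10 VU every 30 seconds */
    return load;                        /* confirm by repeating the final test */
}

int main(void)
{
    printf("Maximum sustainable load: %d VU\n", find_max_load(100, 100, 10));  /* prints 850 */
    return 0;
}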

March 29, 2010

Performance testing using HP LoadRunner software

Dhananjay Kumar sent me an email saying that he wanted to explore HP LoadRunner. He said that he wanted information on HP LoadRunner configuration and on analyzing the HP LoadRunner results.

If you want to learn HP LoadRunner version 12 (current version at this time), please view my full set of LoadRunner Tutorials. You should view these videos in the order given.
 
There is an excellent white paper titled, "HP LoadRunner software—tips and tricks for configuration, scripting and execution". You can read this white paper that contains numerous tips as well as code samples. There are tips to create a high level performance test plan and tips for scripting, building scenarios, data handling and test execution.

Now let us turn our attention to the analysis of the performance test results. As I have mentioned in my earlier post, What knowledge do you need to have in order to do good or best performance testing?, after you run your test, you should first check if the performance test itself was valid. Initially, you may not be able to tell whether the errors you see are thrown due to the limitations of your application/ infrastructure or due to defects in your test environment. Therefore, you should start out with a simple and small test (e.g. by using a small number of virtual users, few transactions per second, a short test duration and so on). Examine any errors raised by your test. Try to establish the reason for the most common errors and change your script(s), test or tool/ infrastructure configuration accordingly. However, you should make only one change at a time to your script(s) or test. The benefit of this is that if your change were incorrect, you would be able to roll back your change and try something else.

When you are able to run a simple and small test successfully (with minimal errors), it is time to develop your test. Again, do this incrementally. Run the test once, analyze the results and address any problems with the test environment before you make another change to it. Continue to develop your test until you have your full test available with you.

A test running with no errors or minimal errors may still be incorrect. Look for the supporting data, for example:
1. Does the test show that it has generated the Virtual Users specified by you?
2. Do metrics like transactions per second climb up in line with the ramp-up specified by you?
3. If the test puts a substantial load on your application, does your application slow down during the test?
4. Do your load generators become busier during the test?
5. If the test is supposed to create test data in the application, do you see this data in the application after the test is complete?

Even with a good handle on the errors and supporting data from multiple sources, you should not run your test just once. Run the same test multiple times and see if the results match each other (more or less). If the results do not match, you should find out the reason for the mismatch and address it in your test environment.

Finally, if your test runs multiple times as you expect, it is time to check your test results against your application's performance requirements.

As you can now see, the performance test process (test preparation, test development, test execution and test analysis) is iterative.

March 17, 2010

What is the best place to store test data for your automated tests?

You need test data to execute your tests. The test data is used to provide inputs to the application under test and/ or verify the test results. Test data may be used to check if the application works normally with expected inputs, handles incorrect inputs gracefully and (optionally) if the application works with multiple test data values. You can source the test data from an existing data store, create the test data by hand or automate the creation of the test data. First, view my Test Data tutorial. Then read below.

Now, the question arises: where should you store the test data once you have generated it? There are numerous possible data stores. Go through the comparison below to see how the common data stores fare on ease of setup, maintainability, re-use and cost.

Text files
  • Ease of setup: Good (but it is not simple to secure the text files and it is not possible to store images in them)
  • Maintainability: Poor (it is easy to make mistakes while creating or editing the test data)
  • Re-use: Good
  • Cost: Excellent

Spreadsheets
  • Ease of setup: Good (since you are likely quite comfortable with spreadsheets)
  • Maintainability: Average (since you may end up with test data in multiple sheets of multiple spreadsheets)
  • Re-use: Good
  • Cost: Average (requires at least the spreadsheet viewer to read the spreadsheet)

RDBMS
  • Ease of setup: Poor (since you first need to design the table structure to store your test data)
  • Maintainability: Excellent (due to permanent storage and the availability of tools to view and edit the test data)
  • Re-use: Excellent (if the test data design is generic enough)
  • Cost: Poor (owing to the possible high cost of the RDBMS)

Test data management tool
  • Ease of setup: Excellent (due to the features provided by the tool)
  • Maintainability: Average to Excellent (depending on the features of the tool)
  • Re-use: Poor (possibly, if porting to another test data management tool)
  • Cost: Poor to Excellent (depending on the cost of the tool license)

XML
  • Ease of setup: Average (you need to know XML but it is good for defining hierarchies)
  • Maintainability: Average (debugging test data may be challenging)
  • Re-use: Good
  • Cost: Excellent

Application configuration files
  • Ease of setup: Good (you can take the help of developers to set up your test data)
  • Maintainability: Poor (due to the presence of other data that is related to application settings)
  • Re-use: Poor (if porting to other applications under test)
  • Cost: Poor to Excellent (depending on the cost of the development license)
Now, you know the different types of data stores for your automated tests.

March 16, 2010

What knowledge do you need to have in order to do good or best performance testing?

Performance testing deals with design and execution of tests that examine the performance of the application under test. The application's performance is measured using a number of metrics such as application's response times, application throughput and concurrent users. Many software testers struggle when they begin performance testing/ load testing. This is because performance testing requires familiarity with a number of special concepts as well as proficiency in certain special skills. However, the good news is that you can learn the required concepts, develop the required skills and deliver results in your performance tests successfully. The purpose of this post is not to define the key terms used in performance testing but to introduce them to you. You can search these terms on the web and build your knowledge. Definitely see this video on Load Testing and Performance Testing Questions and Answers.

1. Performance testing tools (commercial, open source and custom)
There are numerous performance testing tools available publicly. Some tools are commercial and others are open source. The full-featured tools provide the functionality to create test scripts, add test data, set up tests, execute the tests and display the results. If the performance testing tool has not been chosen yet, you should evaluate the tools according to your project requirements as per the evaluation process described here. If you have the time and technical skills, you may even create your own simple tool to help in performance testing.
2. Profilers
Profilers are tools that measure data related to the application's calls or the resources (e.g. memory) used by the application when the application is running.
3. Virtual Users
If your application supports multiple users concurrently (at the same time), you should test your application's performance using multiple users. The users modeled by the tool are called the Virtual Users.
4. Key business transactions
Your application may allow the user a large number of work flows or business transactions. Not all of them may be important in performance testing. Therefore, it is common to test using only the important or key business transactions. Refer to your application's performance requirements for guidance in this regard.
5. Workload
The workload is the load on a multi-user application in terms of virtual users performing different business transactions. For example, in the case of a social networking application, 50 virtual users may be searching contacts, 40 may be messaging and 10 may be editing their profiles, all within a period of 30 minutes.
6. Isolation of the test environment
In order to get results with confidence, it is critical that your test environment is used only for the purpose of the performance test. This means that no other systems or users should be loading the test environment at the time of the performance test. Otherwise, you may have trouble replicating (or even understanding) your test results.
7. Modeling (script and test)
You should script the key business transactions as realistically as possible using the performance test tool. You should also design the tests with realistic test settings (e.g. virtual user ramp-up, user network bandwidth, user browser and so on) and model the workload.
8. Test data
It is common for the test scripts to be executed numerous times during one performance test. Therefore, you may need a large amount of test data that is not exhausted during the test. Once a performance test finishes, the application may be full of dummy test data entered by the test. You may need a means of cleaning up this data (for example, by re-installing the build).
9. Server configurations
You should be aware of and control the server configuration (CPU, memory, hard disk, network bandwidth and so on). This is because the application's performance depends on the server resources.
10. Network configurations
You should know about the protocols used by your application. You should also know about load balancing (in case multiple servers are used by your application).
11. Client configurations
You should know the common client configurations (in terms of CPU, memory, network bandwidth, operating system, browser and so on) used by your users. This would help you model realistic tests.
12. Load generators
Depending on the load that you need to generate during the test, you may need one or more load generator machines. One tool may need fewer resources (CPU usage, memory usage etc.) per virtual user and another tool may need more. You should have sufficient load generation capacity without maxing out your load generator(s) in any way.
13. Performance counters
During the test, you should monitor the chosen performance counters (e.g. % Processor Time, Average Disk Queue Length) on your load generators as well as the application's servers. You should choose the performance counters so that you come to know about the depletion (or near depletion) of any important resource.
14. Response time
You should be aware that the application response time includes the time it takes for the request to travel from the client to the server, the time it takes the server to create the response and the time it takes for the response to travel back to the client.
15. Monitoring
During a performance test, you should be monitoring the test progress, any errors thrown, and your chosen performance counters on your servers and load generators.
16. Results analysis
After the completion of a performance test, you should spend time analyzing the test results. You should check if the test created the required virtual users, generated the required load and ran to completion. You should check the errors thrown during the test. If you see any unusual results, you should form a conjecture to explain those results and look at the data carefully to either accept or reject your assumption. It takes practice to become adept at analyzing performance tests.
17. Reporting
You should be comfortable with reporting performance test results. It is common for performance test reports to contain present and past metrics as well as charts.
If you make some effort, it is not difficult to educate yourself with the knowledge required for performance testing.

March 03, 2010

How to do real database testing (10 tips to perform serious database tests)?

Many (but not all) applications under test use one or more databases. The purposes of using a database include long-term storage of data in an accessible and organized form. Many people have only a vague idea about database testing. If you are serious about learning database testing, view the videos, Database Testing and SQL Tutorial for Beginners. Then read on...

Firstly, we need to understand: what is database testing? As you would know, a database has two main parts - the data structures (the schema) that store the data and the data itself. Let us discuss them one by one.

Database testing

The data is stored in the database in tables. However, tables may not be the only objects in the database. A database may have other objects like views, stored procedures and functions. These other objects help the users access the data in required forms. The data itself is stored in the tables. Database testing involves finding out the answers to the following questions:

Questions related to database structure
1. Is the data organized well logically?
2. Does the database perform well?
3. Do the database objects like views, triggers, stored procedures, functions and jobs work correctly?
4. Does the database implement constraints to allow only correct data to be stored in it?
5. Is the data secure from unauthorized access?

Questions related to data
1. Is the data complete?
2. Is all data factually correct i.e. in sync with its source, for example the data entered by a user via the application UI?
3. Is there any unnecessary data present?

Now that we understand database testing, it is important to know about the 5 common challenges seen before or during database testing:

1. Large scope of testing
It is important to identify the test items in database testing. Otherwise, you may not have a clear understanding of what you would test and what you would not test. You could run out of time well before finishing the database test.
Once you have the list of test items, you should estimate the effort required to design the tests and execute the tests for each test item. Depending on their design and data size, some database tests may take a long time to execute. Look at the test estimates in light of the available time. If you do not have enough time, you should select only the important test items for your database test.

2. Incorrect/ scaled-down test databases
You may be given a copy of the development database to test. This database may have only a little data (the data required to run the application and some sample data to show in the application UI). Testing the development, test or staging databases may not be sufficient. You should also test a copy of the production database.

3. Changes in database schema and data
This is a particularly nasty challenge. You may find that after you design a test (or even after you execute a test), the database structure (the schema) has been changed. This means that you should be aware of the changes made to the database during testing. Once the database structure changes, you should analyze the impact of the changes and modify any impacted tests.
Further, if your test database is being used by other users, you would not be sure about your test results. Therefore, you should ensure that the test database is used for testing purposes only.
You may also see this problem if you run multiple tests at the same time. You should run one test at a time at least for the performance tests. You do not want your database performing multiple tasks and under-reporting performance.

4. Messy testing
Database testing may get complex. You do not want to be executing tests partially or repeating tests unnecessarily. You should create a test plan and proceed accordingly while carefully noting your progress.

5. Lack of skills
The lack of the required skills may really slow things down. In order to perform database testing effectively, you should be comfortable with SQL queries and the required database management tools.

Next, let us discuss the approach for database testing. You should keep the scope of your test as well as the challenges in mind while designing your particular test design and test execution approach. Note the following 10 tips:

1. List all database-specific requirements. You should gather the requirements from all sources, particularly technical requirements. It is quite possible that some requirements are at a high level. Break down those requirements into small, testable requirements.

2. Create test scenarios for each requirement as suggested below.

3. In order to check the logical database design, ensure that each entity in the application (e.g. actors, system configuration) is represented in the database. An application entity may be represented in one or more tables in the database. The database should contain only those tables that are required to represent the application entities and no more.

4. In order to check the database performance, you may focus on its throughput and response times. For example, if the database is supposed to insert 1000 customer records per minute, you may design a query that inserts 1000 customer records and print/ store the time taken to do so. If the database is supposed to execute a stored procedure in under 5 seconds, you may design a query to execute the stored procedure with sample test data multiple times and note each time.

5. If you wish to test the database objects e.g. stored procedures, you should remember that a stored procedure may be thought of as a simple program that (optionally) accepts certain input(s) and produces some output. You should design test data to exercise the stored procedure in interesting ways and predict the output of the stored procedure for every test data set.

6. In order to check database constraints, you should design invalid test data sets and then try to insert/ update them in the database. An example of an invalid data set is an order for a customer that does not exist. Another example is a customer test data set with an invalid ZIP code.

7. In order to check the database security, you should design tests that mimic unauthorized access. For example, log in to the database as a user with restricted access and check if you can view/ modify/ delete restricted database objects or view and update restricted data. It is important to back up your database before executing any database security tests. Otherwise, you may render your database unusable.
You should also check to see that any confidential data in the database e.g. credit card numbers is either encrypted or obfuscated (masked).

8. In order to test data integrity, you should design valid test data sets for each application entity. Insert/ update a valid test data set (for example, a customer) and check that the data has been stored in the correct table(s) and correct columns. Each item in the test data set should have been inserted/ updated in the database. Further, the test data set should be inserted only once and there should be no other change to the existing data.

9. Since your test design would require creating SQL queries, try to keep your queries as simple as possible to prevent defects in them. It is a good idea for someone other than the author to review the queries. You should also dynamically test each query. One way to test your query is to modify it so that it just shows the result set and does not perform the actual operation (e.g. insert, delete). Another way to test your query is to run it for a couple of iterations and verify the results.

10. If you are going to have a large number of tests, you should pay special attention to organizing them. You should also consider at least partial automation of frequently run tests.

Now you should know what database testing is all about, the problems that you are likely to face while doing database testing and how to design a good database test approach for the scope decided by you.

February 12, 2010

How to select the best automated software testing tool quickly

Well, you know how it is. You have been testing an application for some time now. One fine day, your manager walks over to you. He tells you that the management (or your client) is interested in getting the functional tests automated. Since you are very familiar with the application, he wants you to suggest the most suitable automated software testing tool for the purpose. You may know about your application’s technology and about a number of automated testing tools. However, you may not be clear about how to do justice to the evaluation exercise without burning time on this extra work. View the video, How to Select Automated Testing Tools. Then read on.

Evaluation of automated software testing tools can be a challenging and effort-prone task. Over the years, I have successfully analyzed, compared and evaluated a number of automated functional testing tools for applications written in various technologies. Below, I will describe the process for evaluation that I think is balanced in terms of effort and results.
1. Define your requirements

The primary requirement is that the automated testing tool should work (in other words, be compatible) with the application under test. There may be other requirements such as:
a. Ease of use
b. Available documentation
c. High quality customer support
d. Cost effectiveness (i.e. affordable price of license)

See the more comprehensive list of requirements towards the end of this post. Take your pick and/ or add requirements that are specific to your project, client or organization.

The requirements are rarely equally important. This means that some requirements are more important than others. After you choose your requirements, assign a weightage to each requirement using a scale, say 1 to 10 (1 being the least important and 10 being the most important).

2. List the tools

There are many lists of automated functional testing tools on the web. One example is here. It is important to cast your net wide. Otherwise, you may miss a testing tool that beautifully suits your purposes. Later, someone (your manager, your colleague or your client) may ask why you did not consider another “obvious” tool.

3. Create a scorecard

Creating a scorecard need not be complex. Just list each of the requirements on the X-axis and the tools from your list on the Y-axis. See the sample. Add more columns and rows as required.

Look up the documentation on each tool's website. You could also browse the popular software testing/ QA forums e.g. www.sqaforums.com to get a feel of the kind of problems test automators generally face with a particular tool. Fill in the score (along with optional notes) in each cell. Use a consistent scale, say 1 to 5 (1 being the lowest and 5 being the highest), in each cell. You should have the raw data available at this time.

4. Analyze the scorecard

Next, you create the total scores for each tool. In order to get the total score, take the following steps for each row.
a. Multiply each score by the respective requirement weightage.
b. Add up all the products from step a.

See the example scorecard with dummy scores. Note that the score for Tool1 is 4*8 + 5*5 + 2*10 = 77.
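
If you want to compute the totals outside the spreadsheet, the weighted sum is simple to code. Here is a minimal C sketch using the dummy scores and weightages from the example above:

#include <stdio.h>

/* Weighted total for one tool's row: sum of (score x requirement weightage) */
static double weighted_score(const double scores[], const double weights[], int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += scores[i] * weights[i];
    return total;
}

int main(void)
{
    double scores[]  = {4, 5, 2};   /* Tool1's scores from the example */
    double weights[] = {8, 5, 10};  /* requirement weightages */
    printf("Tool1 total: %.0f\n", weighted_score(scores, weights, 3));  /* prints 77 */
    return 0;
}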

5. Try the short-listed tools

After you analyze your scorecard, you should see which tools satisfy your requirements more than the others do. Short-list the top tools. You should select at least the top two tools (to give yourself some choice), e.g. Tool2 and Tool1 in the above table, and at most the top three tools (to spare yourself too much effort).

It is the norm for tool vendors to offer trial or evaluation copies of the automated testing tools. Download and install the evaluation copy of each of the short-listed tools. Use each tool with your application. Create and run a few tests. Explore each of your requirements (given in your scorecard). Soon, you should have a good handle on:
a. the suitability of each short-listed tool with your application
b. your comfort level while using the tool

6. Present your results

You should create an automation demonstration with each of the short-listed tools. At least, you should create the demo for the top tool so far. Try to cover concepts related to as many requirements (from your scorecard) as you can. For example, if keyword driven testing were a requirement, it would be useful to have a keyword driven test within your demo.

With your scorecard data and your demo, you should be confident in presenting the results of your evaluation to any stakeholders.

Please feel free to use any information in this post for your purpose. Kindly let me know your thoughts on this evaluation process.

Possible requirements from the automated software test tool

• System requirements for tool installation
• System requirements for test creation
• Platforms supported for test creation (including platforms supported by using add-ins)
• Popularity (wide-spread use)
• Ease of use
• Object recognition
• Object management
• Integrated development environment
• Test debugging
• Integration with version control software
• Data driven testing
• Keyword driven testing
• System requirements for test execution
• Platforms supported for test execution
• Error recovery
• Reporting
• Documentation and training
• Customer support
• Load testing capability
• Test management capability
• Cost effectiveness (i.e. affordable price of tool and add-ins licenses)

January 11, 2010

Open Source Software Test Tools

On several forums, I have noticed many testers looking for open source testing tools. Here are two great lists:

1. Open source functional test tools
2. Open source performance test tools