Monday, March 29, 2010

Performance testing using HP LoadRunner software

Dhananjay Kumar sent me an email saying that he wanted to explore HP LoadRunner. He said that he wanted information on HP LoadRunner configuration and on analyzing the HP LoadRunner results.

If you want to learn HP LoadRunner version 12 (current version at this time), please view my full set of LoadRunner Tutorials. You should view these videos in the order given.

There is an excellent white paper titled "HP LoadRunner software—tips and tricks for configuration, scripting and execution". It contains numerous tips as well as code samples, including tips on creating a high-level performance test plan and tips for scripting, building scenarios, data handling and test execution.

Now let us turn our attention to the analysis of the performance test results. As I mentioned in my earlier post, What knowledge do you need to have in order to do good or best performance testing?, after you run your test, you should first check whether the performance test itself was valid. Initially, you may not be able to tell whether the errors you see are caused by limitations of your application/infrastructure or by defects in your test environment. Therefore, start with a simple and small test (for example, a small number of virtual users, few transactions per second and a short test duration). Examine any errors raised by your test. Try to establish the reason for the most common errors and change your script(s), test or tool/infrastructure configuration accordingly. However, make only one change at a time to your script(s) or test. The benefit is that if a change turns out to be incorrect, you can roll it back and try something else.
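
To illustrate this triage step, here is a small Python sketch (not a LoadRunner feature; the error records and their format are assumptions made for illustration) that summarizes which errors dominate a run, so that you can chase the most common cause first:

```python
from collections import Counter

def summarize_errors(errors, total_transactions):
    """Return the overall error rate and the most common error messages.

    errors is a list of (vuser_id, error_message) pairs; in practice these
    would be extracted from the tool's error log or error summary.
    """
    counts = Counter(msg for _, msg in errors)
    rate = len(errors) / total_transactions
    return rate, counts.most_common(3)

# Hypothetical error records from a small shakeout run of 200 transactions.
errors = [
    (3, "Connection timed out"),
    (7, "Connection timed out"),
    (3, "HTTP 500 Internal Server Error"),
    (9, "Connection timed out"),
]

rate, top = summarize_errors(errors, total_transactions=200)
print(f"Error rate: {rate:.1%}")          # 2.0%
for msg, n in top:
    print(f"{n:>3}  {msg}")
```

If one message accounts for most of the errors, fix that single cause, re-run, and only then move on, in keeping with the one-change-at-a-time rule above.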

When you can run a simple and small test successfully (with minimal errors), it is time to develop your test further. Again, do this incrementally. Run the test once, analyze the results and address any problems with the test environment before you make another change. Continue in this way until your full test is ready.

A test running with no errors or minimal errors may still be incorrect. Look for the supporting data, for example:
1. Does the test report that it generated the number of virtual users you specified?
2. Do metrics like transactions per second climb in line with the ramp-up you specified?
3. If the test puts a substantial load on your application, does your application slow down during the test?
4. Do your load generators become busier during the test?
5. If the test is supposed to create test data in the application, do you see this data in the application after the test completes?
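
Check 2 above can be sketched in a few lines of Python. This is an illustration only (the expected and measured throughput values are made up, and the 15% tolerance is an assumed choice, not a tool default):

```python
def check_ramp_up(expected_tps, measured_tps, tolerance=0.15):
    """Flag intervals where measured throughput deviates from the planned
    ramp-up by more than the tolerance (as a fraction of the expected value)."""
    deviations = []
    for i, (exp, got) in enumerate(zip(expected_tps, measured_tps)):
        if exp and abs(got - exp) / exp > tolerance:
            deviations.append((i, exp, got))
    return deviations

# Planned ramp-up of 5 TPS per interval versus (made-up) measured values.
expected = [5, 10, 15, 20, 25]
measured = [5, 9, 14, 12, 24]   # interval 3 falls well short of the plan
print(check_ramp_up(expected, measured))   # [(3, 20, 12)]
```

A flagged interval does not by itself tell you whether the application, the load generators or the test design is at fault; it tells you where to look.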

Even with a good handle on the errors and supporting data from multiple sources, you should not run your test just once. Run the same test multiple times and see whether the results are (more or less) consistent with each other. If they are not, find out the reason for the mismatch and address it in your test environment.
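
One simple way to quantify "more or less consistent" is the coefficient of variation (standard deviation divided by mean) of the average response times across runs. A minimal Python sketch, assuming a 10% threshold (an arbitrary choice for illustration, not a standard):

```python
from statistics import mean, stdev

def runs_consistent(run_means, max_cv=0.10):
    """Treat repeated runs as consistent when the coefficient of variation
    of their average response times stays under max_cv."""
    cv = stdev(run_means) / mean(run_means)
    return cv, cv <= max_cv

# Average response times (seconds) of the same test run three times.
cv, ok = runs_consistent([1.21, 1.18, 1.25])
print(f"CV = {cv:.1%}, consistent = {ok}")
```

The same idea applies to other headline numbers such as throughput or error rate; pick the metrics that matter for your requirements.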

Finally, if repeated runs of your test behave as you expect, it is time to check your test results against your application's performance requirements.
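
A minimal sketch of this final comparison in Python, assuming a hypothetical requirement of the form "the Nth percentile of response times must not exceed a threshold" (the nearest-rank percentile method is used here for simplicity):

```python
def meets_requirement(response_times, percentile, threshold):
    """Check whether the given percentile of response times (seconds)
    falls at or below the required threshold."""
    ordered = sorted(response_times)
    # Nearest-rank percentile: the smallest value such that at least
    # percentile% of the samples are at or below it.
    k = max(0, -(-len(ordered) * percentile // 100) - 1)
    return ordered[int(k)] <= threshold

# Assumed requirement for illustration: 90% of logins complete within 3 s.
times = [1.2, 1.4, 1.9, 2.2, 2.5, 2.7, 2.9, 3.1, 3.4, 4.0]
print(meets_requirement(times, 90, 3.0))
```

In practice you would take the response times from your analysis tool's transaction summary rather than a hard-coded list, and check each transaction against its own requirement.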

As you can now see, the performance test process (test preparation, test development, test execution and test analysis) is iterative.