The following are the automated software testing metrics that I find useful. This list is a work in progress, so use your own judgment before applying these metrics to analyze your progress.
Automation Development
- Number (or %) of test cases feasible to automate out of all selected test cases - You can even replace test cases by steps or expected results for a more granular analysis.
- Number (or %) of test cases automated out of all test cases feasible to automate - As above, you can replace test cases by steps or expected results.
- Average effort spent to automate one test case - You can create a trend of this average effort over the duration of the automation exercise.
- % of defects discovered in unit testing/reviews/integration, out of all defects discovered in the automated test scripts
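To make these development metrics concrete, here is a minimal Python sketch of one way they could be computed. The data structure, field names (e.g. effort_hours) and sample values are purely hypothetical placeholders for whatever your test-management data actually records.

```python
# Minimal sketch: computing automation-development metrics from a test-case
# inventory. All class names, field names and sample data are hypothetical.
from dataclasses import dataclass
from statistics import mean


@dataclass
class TestCase:
    name: str
    selected: bool        # selected for the automation exercise
    feasible: bool        # feasible to automate
    automated: bool       # actually automated
    effort_hours: float   # effort spent automating it (0 if not automated)


def _pct(part, whole):
    # Percentage of one subset over another, guarding against empty lists.
    return 100.0 * len(part) / len(whole) if whole else 0.0


def development_metrics(cases):
    selected = [c for c in cases if c.selected]
    feasible = [c for c in selected if c.feasible]
    automated = [c for c in feasible if c.automated]
    return {
        "% feasible of selected": _pct(feasible, selected),
        "% automated of feasible": _pct(automated, feasible),
        "avg effort per automated case (h)":
            mean(c.effort_hours for c in automated) if automated else 0.0,
    }


def script_defect_distribution(defects_by_phase):
    # % of automated-script defects found per phase (unit testing/review/integration).
    total = sum(defects_by_phase.values())
    return {phase: 100.0 * n / total for phase, n in defects_by_phase.items()} if total else {}


if __name__ == "__main__":
    cases = [
        TestCase("login", True, True, True, 4.0),
        TestCase("image text check", True, False, False, 0.0),
        TestCase("report export", True, True, True, 6.0),
    ]
    print(development_metrics(cases))
    print(script_defect_distribution({"unit testing": 3, "review": 5, "integration": 2}))
```

The same per-case effort values can be grouped by week or sprint to produce the effort trend mentioned above.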
Automation Execution
- Number (or %) of automated test scripts executed out of all automated test scripts
- Number (or %) of automated test scripts that passed out of all executed scripts
- Average time to execute an automated test script - Alternatively, you can map test cases to automated test scripts and use the average time to execute one test case.
- Average time to analyze automated testing results per script
- Defects discovered by automated test execution - As is common, you can divide this by severity/priority/component and so on.
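Likewise, here is a minimal sketch of the execution metrics, assuming one run record per automated script. Field names such as run_minutes, analysis_minutes and the severity labels are again hypothetical; adapt them to your own reporting data.

```python
# Minimal sketch: computing automation-execution metrics from per-script run
# records. All class names, field names and sample data are hypothetical.
from collections import Counter
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class ScriptRun:
    script: str
    executed: bool
    passed: bool
    run_minutes: float        # time to execute the script
    analysis_minutes: float   # time to analyze its results
    defect_severities: list = field(default_factory=list)  # severities of defects it found


def execution_metrics(runs):
    executed = [r for r in runs if r.executed]
    passed = [r for r in executed if r.passed]
    return {
        "% scripts executed": 100.0 * len(executed) / len(runs) if runs else 0.0,
        "% executed scripts passed": 100.0 * len(passed) / len(executed) if executed else 0.0,
        "avg execution time (min)": mean(r.run_minutes for r in executed) if executed else 0.0,
        "avg analysis time (min)": mean(r.analysis_minutes for r in executed) if executed else 0.0,
        "defects by severity": dict(Counter(s for r in executed for s in r.defect_severities)),
    }


if __name__ == "__main__":
    runs = [
        ScriptRun("login", True, True, 2.5, 1.0),
        ScriptRun("checkout", True, False, 4.0, 6.0, ["High"]),
        ScriptRun("report export", False, False, 0.0, 0.0),
    ]
    print(execution_metrics(runs))
```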
Hi,
Aren't these points more of a measurement, compared to a metric stating, for instance:
Execution must have 95% automation executed.
Jesper
http://www.eurostarconferences.com/blog-posts/2010/9/14/will-we-finish-on-time---jesper-ottosen.aspx
Jesper,
As you have mentioned in your post, measurements and metrics go hand in hand. One has to take the measurements (collect the data about process performance) to produce the metrics. The reason I call the above items metrics is that they indicate the performance of automation development/execution activities. These metrics can help make decisions.
For example, you may not expect that only 50% of the selected test cases would be automatable. However, this situation is possible if half of the test cases expect text verification within an image and the automated testing tool does not provide character recognition natively. In such a case, in order to increase the automatability of the selected test cases, the team may consider other approaches like:
a. Using another tool for image checks or writing their own tool for this verification
b. Changing the test approach, e.g. testing the data that is consumed to create the image of the text
c. Any other suitable approach
I consider the example that you gave, “Execution must have 95% automation executed,” more of a project goal than a project metric.
Regards.
Very informative post. It's really helpful for me and for beginners too. Check out this link too; it also has a nice post related to this one that explains testing metrics well...
http://mindstick.com/Articles/6382aedf-6ef2-4644-85e9-7090aeedab1d/?Testing%20Metrics
Thanks
This was informative - Thanks