Wednesday, January 19, 2011

Test Automation Metrics

The following are the automated software testing metrics that I find useful. My list is a work in progress, so use your own judgment before applying these metrics to analyze your progress.

Automation Development
  1. Number (or %) of test cases feasible to automate out of all selected test cases - You can even replace test cases with steps or expected results for a more granular analysis.
  2. Number (or %) of test cases automated out of all test cases feasible to automate - As above, you can replace test cases with steps or expected results.
  3. Average effort spent to automate one test case - You can create a trend of this average effort over the duration of the automation exercise.
  4. % of defects discovered in unit testing/ reviews/ integration out of all defects discovered in the automated test scripts
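A minimal sketch of how the development metrics above could be computed; all counts here are hypothetical, purely for illustration:

```python
# Hypothetical counts from one automation exercise (illustrative only)
selected = 200          # test cases selected for automation
feasible = 150          # test cases found feasible to automate
automated = 120         # test cases actually automated so far
effort_hours = 360.0    # total effort spent automating them

# Metric 1: % feasible to automate out of all selected test cases
pct_feasible = 100.0 * feasible / selected

# Metric 2: % automated out of all feasible test cases
pct_automated = 100.0 * automated / feasible

# Metric 3: average effort spent to automate one test case
avg_effort = effort_hours / automated

print(f"Feasible to automate: {pct_feasible:.1f}%")      # 75.0%
print(f"Automated of feasible: {pct_automated:.1f}%")    # 80.0%
print(f"Avg effort per test case: {avg_effort:.1f} h")   # 3.0 h
```

Tracking `avg_effort` at regular intervals gives the trend mentioned in metric 3.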
Automation Execution
  1. Number (or %) of automated test scripts executed out of all automated test scripts
  2. Number (or %) of automated test scripts that passed out of all executed scripts
  3. Average time to execute an automated test script - Alternately, you can map test cases to automated test scripts and use the Average time to execute one test case.
  4. Average time to analyze automated testing results per script
  5. Defects discovered by automated test execution - As is common, you can break this down by severity/ priority/ component and so on.
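The execution metrics can be sketched the same way; again, every number below is a made-up example, not real project data:

```python
# Hypothetical execution results (illustrative only)
scripts = 120              # total automated test scripts
executed = 110             # scripts executed in this cycle
passed = 99                # scripts that passed
total_exec_minutes = 330.0 # total execution time for the cycle
defects = {"high": 3, "medium": 7, "low": 5}  # defects found, by severity

# Metric 1: % of automated scripts executed
pct_executed = 100.0 * executed / scripts

# Metric 2: % of executed scripts that passed
pct_passed = 100.0 * passed / executed

# Metric 3: average time to execute one automated script
avg_exec_time = total_exec_minutes / executed

# Metric 5: defects discovered, divided by severity
total_defects = sum(defects.values())

print(f"Executed: {pct_executed:.1f}%")            # 91.7%
print(f"Passed: {pct_passed:.1f}%")                # 90.0%
print(f"Avg exec time: {avg_exec_time:.1f} min")   # 3.0 min
print(f"Defects found: {total_defects}")           # 15
```

Metric 4 (average analysis time per script) would follow the same pattern once analysis effort is recorded per cycle.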


  1. Hi,

    Aren't these points more of a measurement, compared to a metric stating, for instance:
    Execution must have 95% automation executed.


  2. Jesper,

    As you have mentioned in your post, measurements and metrics go hand in hand. One has to take the measurements (collect the data about process performance) to produce the metrics. The reason I call the items above metrics is that they indicate the performance of automation development/ execution activities. These metrics can help make decisions.
    For example, you may not expect that only 50% of the selected test cases are automatable. However, this situation is possible if half of the test cases expect text verification within an image and the automated testing tool does not natively support character recognition. In such a case, in order to increase the automatability of the selected test cases, the team may consider other approaches like:
    a. Using another tool for image checks or writing their own tool for this verification
    b. Changing the test approach e.g. testing the data that is consumed to create the image of the text
    c. Any other suitable approach
    I consider the example that you gave, “Execution must have 95% automation executed,” more of a project goal than a project metric.


  3. Very informative post. It's really helpful for me and for beginners too.


  4. This was informative - Thanks