Based on my understanding, the following are the automated software testing metrics that I find useful. My list is a work in progress, so use your own judgment before relying on these metrics to analyze your progress.
- Number (or %) of test cases feasible to automate out of all selected test cases - You can even replace test cases with steps or expected results for a more granular analysis.
- Number (or %) of test cases automated out of all test cases feasible to automate - As above, you can replace test cases with steps or expected results.
- Average effort spent to automate one test case - You can create a trend of this average effort over the duration of the automation exercise.
- % of defects discovered in unit testing/ reviews/ integration out of all defects discovered in the automated test scripts
- Number (or %) of automated test scripts executed out of all automated test scripts
- Number (or %) of automated test scripts that passed out of all executed scripts
- Average time to execute an automated test script - Alternatively, you can map test cases to automated test scripts and use the average time to execute one test case.
- Average time to analyze automated testing results per script
- Defects discovered by automated test execution - As is common, you can break this down by severity/ priority/ component and so on.
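Most of these metrics reduce to simple ratios and averages over counts you already track. The sketch below shows one way to compute a few of them; the function and field names are my own assumptions for illustration, not a standard API.

```python
def percent(part, whole):
    """Percentage of part out of whole; 0.0 when whole is zero."""
    return 0.0 if whole == 0 else 100.0 * part / whole


def automation_metrics(selected, feasible, automated, executed, passed,
                       total_automation_hours):
    """Compute the coverage and effort metrics listed above from raw counts.

    selected  - test cases selected for the automation exercise
    feasible  - of those, test cases feasible to automate
    automated - test cases actually automated
    executed  - automated scripts executed in this cycle
    passed    - executed scripts that passed
    total_automation_hours - total effort spent automating
    """
    return {
        "pct_feasible": percent(feasible, selected),
        "pct_automated": percent(automated, feasible),
        "avg_effort_hours_per_case": (
            total_automation_hours / automated if automated else 0.0),
        "pct_executed": percent(executed, automated),
        "pct_passed": percent(passed, executed),
    }


# Example run with made-up counts:
metrics = automation_metrics(selected=200, feasible=150, automated=120,
                             executed=110, passed=99,
                             total_automation_hours=360.0)
print(metrics)
```

Tracking these numbers per cycle makes it easy to plot the trends mentioned above, such as how the average effort per automated test case changes over the duration of the exercise.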