Sunday, September 11, 2011

Automation Criteria - guidelines on how to write test cases that will be automated

Everyone knows that a strong house can be built only on a strong foundation, never on a weak one. This post continues the earlier post, How to write automatable test cases? Test cases here mean "manual" test cases, the kind that a tester executes by hand against the application under test. Each of the following guidelines also applies to writing valid test cases in general, so no special effort is needed to prepare test cases specifically for automation.

1. The test case must be correct. It is obvious that the test case must be correct with respect to the workflow and the expected application behavior. This guideline is especially important for automation because, while a manual tester can spot and work around an obvious error in a test case, an automated test script cannot. It would report a false failure every time it is executed.

2. All the business scenarios must be covered by the test data. This refers to the completeness of the test case. The test case must contain test data for every applicable business scenario that users would face.

3. The test case should be detailed enough that another person can execute it without asking questions or seeking clarifications. Pre-conditions, test steps, test data, expected results and post-conditions are the important components of a test case. The test case should be written with the target consumer in mind: if the automation engineer knows the application under test well, these components may be summarized; if not, they should be spelled out in detail.

4. [Important] The test case must be capable of finding bugs in the current release of the application. If a test case has not caught a bug in the last few releases, it is unlikely to do so now. Automating a test case takes extra effort, so why not spend that effort on the test cases most likely to catch bugs?

5. The test case should have been accepted by the team. A test case to be automated should not be in a raw state, e.g. something just whipped up by someone. It should have been reviewed and discussed, stored in the correct location and accepted by the team as a valid test case.

6. The test case should be under version control. Placing the test case in the version control repository makes subsequent changes to it visible. Changes to the test case should be propagated to the automated test script, whether the latter is under construction or already built; therefore, there must be a process to update the automated test script whenever the test case is revised and accepted.
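The components named in guideline 3 can be sketched as a structured record, along with a small completeness check. This is an illustrative sketch only: the field names and the sample login scenario are assumptions, not something prescribed by the post.

```python
# A manual test case modelled as a plain Python dict. The "login" scenario
# and all field values here are hypothetical, for illustration only.
login_test_case = {
    "id": "TC-001",
    "preconditions": ["User account 'alice' exists", "Application is reachable"],
    "steps": [
        "Open the login page",
        "Enter the username and password from the test data",
        "Click the Login button",
    ],
    "test_data": {"username": "alice", "password": "s3cret"},
    "expected_results": ["User lands on the dashboard page"],
    "postconditions": ["User session is active"],
}

# The five components guideline 3 says every test case should carry.
REQUIRED_COMPONENTS = {
    "preconditions", "steps", "test_data", "expected_results", "postconditions",
}

def missing_components(test_case):
    """Return the set of required components absent from a test case."""
    return REQUIRED_COMPONENTS - test_case.keys()

print(missing_components(login_test_case))  # set() -> nothing is missing
```

A check like this could run during review (guideline 5) so that incomplete test cases are caught before anyone tries to automate them.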

Correct, complete and up-to-date test cases are important assets for any testing team, and due attention is paid to them. Similar attention should be accorded to the automated test script of such a test case. After all, it's the same test case, just written in a different format. Therefore, the automated test script should also be reviewed, discussed, accepted and placed under version control, and the results of each of its executions should be given the same attention.
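To illustrate the point that the automated script is the same test case in a different format, here is a hedged sketch of a hypothetical manual login test case mapped onto an automated test function. The fake_login function is a stand-in for the real application under test; it and the credentials are assumptions for illustration only.

```python
def fake_login(username, password):
    # Stand-in for the application under test; the real script would
    # drive the actual application instead.
    return "dashboard" if (username, password) == ("alice", "s3cret") else "login"

def test_login_lands_on_dashboard():
    # Pre-condition: a valid account exists (modelled inside fake_login).
    # Test step + test data: attempt to log in with the documented data.
    page = fake_login("alice", "s3cret")
    # Expected result: the user lands on the dashboard page.
    assert page == "dashboard"

test_login_lands_on_dashboard()
print("test passed")
```

Because the script mirrors the manual test case step for step, a reviewer can diff the two directly, which is what makes propagating test case revisions into the script (guideline 6) practical.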


  1. I can't find anything special for automation in this. These are simple guidelines about how to write test cases. There must be something special and different about test cases for automated testing compared with manual testing. For example, their organisation should be different, the material required in the test cases should be different, they should be short so that they are easier to automate, etc.

    1. When you say that these are simple guidelines about writing test cases, you are reiterating what is already mentioned in the first paragraph of this post.
      Why should the base test cases differ between manual testing and automated testing? When you suggest that their organization or material should be different, do you imply that there should be two sets of test cases initially - one for manual testing and a base set marked for automation? Why spend extra effort creating a new set instead of spending it on making sure the existing test cases are really good per the guidelines? Keeping the two sets in sync would be another drain on the available time.
      Thanks for your comment.