March 31, 2014

What is a Traceability Matrix?

In software testing, an important document is the Traceability Matrix (TM), also called the Requirements Traceability Matrix (RTM). This is the document that connects the requirements to the test cases. The mapping of requirements to test cases is many-to-many: one requirement may be tested by one or more test cases and, conversely, one test case may address one or more requirements.

If you don't understand the RTM, view the video, Requirements Traceability Matrix, which explains the RTM with an example.
Next, let us see some useful points about the Requirements Traceability Matrix.
  1. A well-designed TM has the Req Ids and Test Case Ids. However, it should not have any text from the requirements or test cases because it is just a mapping. The TM could also contain module/ component/ sub-system Ids against each Req Id (see point no. 9).

  2. A TM can be as simple as Req Ids on one axis and Test Case Ids on the other axis. For example, a TM implemented in MS Excel could have Req Ids in a single column (vertically) and Test Case Ids in multiple columns (horizontally). A symbol could mark which requirement maps to which test case.

  3. The TM should be created as early as possible in the project. It becomes tedious to create if there are already numerous requirements and test cases.

  4. The TM should be updated for every requirement change: when a new requirement is added, an existing requirement is changed or an existing requirement is deleted.

  5. The TM should be updated when a new test case is written. This update could be the final step of completing the test case. If an existing test case is updated or enhanced, the TM should be reviewed for accuracy. The TM should be updated if any test case is retired.

  6. One should be careful with workflow changes because they can impact multiple requirements and therefore multiple test cases.

  7. It is simpler to update the TM if the requirements and test cases are modular and contain no repetitions.

  8. The TM is only a document, and it can become corrupted, especially if multiple people write to it in an uncontrolled way. Therefore, the TM should be stored in a revision control system with locking and backup/ restore features.

  9. If the TM contains module/ component/ sub-system Ids, it becomes simpler to identify the impacted modules whenever a requirement changes.

  10. Some project management and test management software can generate the TM automatically from the requirements and test cases stored in the system. It is even possible to run queries against the TM because all its information lives in a database. A simple sketch of such a mapping and query is given below.
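To make points 2 and 10 concrete, below is a minimal sketch in Python of a TM held as a many-to-many mapping. The requirement and test case IDs are hypothetical; the printed grid follows point 2 and the coverage query follows point 10.

```python
# A traceability matrix as a many-to-many mapping (hypothetical IDs).
# Each requirement maps to the set of test cases that cover it; one
# test case may appear under several requirements.
tm = {
    "REQ-01": {"TC-01", "TC-02"},
    "REQ-02": {"TC-02", "TC-03"},
    "REQ-03": set(),  # not yet covered by any test case
}

test_cases = sorted({tc for tcs in tm.values() for tc in tcs})
col_w = max(len(tc) for tc in test_cases) + 2

# Print the grid: Req Ids vertically, Test Case Ids horizontally,
# with an 'X' marking each mapped pair (as in point 2).
print(" " * 8 + "".join(tc.ljust(col_w) for tc in test_cases))
for req in sorted(tm):
    row = "".join(("X" if tc in tm[req] else "-").ljust(col_w) for tc in test_cases)
    print(req.ljust(8) + row)

# A simple query of the kind mentioned in point 10: which requirements
# have no covering test case?
uncovered = [req for req, tcs in tm.items() if not tcs]
print("Uncovered requirements:", uncovered)
```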

Happy testing!

March 19, 2014

Example Test Strategy | Test Plan


A test strategy is the plan (which may exist at any level: project, program, department or organization) that describes how the test objectives will be met effectively with the available resources. A test strategy makes it easier to focus effort on the most important test activities at the time. Moreover, a test strategy gives the project stakeholders clarity on the test approach. First, view my Test Strategy video. Then read on.
Many readers have asked me for an example software testing strategy document. I requested Varsha, who is a senior member of the Software Testing Space community, to create an example test strategy for a hypothetical agile project. First, view the video, Example Agile Test Strategy, Agile Test Plan. Then read on.
Below is the resulting sample test strategy document. The sections contain detailed information, with additional guidelines given in italics. I hope that this sample test strategy document helps you create a really effective test strategy for your own project. - Inder P Singh

Example Test Strategy

Introduction to Agile
Agile is an iterative and incremental (evolutionary) approach to software development, performed in a highly collaborative manner by self-organizing teams within a control framework. Small teams develop high-quality, adaptive software using the principles of continuous design improvement and testing, based on rapid feedback and change. Agile is people-centric: development and testing are performed in an integrated way, self-organizing teams encourage role interchangeability, the customer plays a critical role, and the project life cycle is guided by product features.

How Agile is different from the Waterfall model
1. Greater collaboration
2. Shorter work cycle and constant feedback
3. Need to embrace change
4. Greater flexibility
5. Greater discipline
6. The goal should be quality and not just speed
7. Greater stakeholder accountability
8. Greater range of skills
9. Go faster and do more
10. Courage
11. Confidence in design

Purpose of this document
The purpose of this Test Strategy is to create a shared understanding of the overall targets, approach, tools and timing of test activities. Our objective is to achieve higher quality and shorter lead times with minimum overhead, frequent deliveries, close teamwork within the team and with the customer, continuous integration, short feedback loops and frequent changes of the design. The test strategy guides us past the common obstacles with a clear view of how to evaluate the system. Testing starts with exploring the requirements and what the customer really wants, by elaborating on the user stories from different perspectives. Testing becomes a continuous and integrated process in which all parties in the project are involved.

Guiding standards
  • Shared Responsibility: Everyone in the team is responsible for quality.
  • Data Management: Production data must be analyzed before being used for testing.
  • Test Management: Test cases, code, documents and data must be treated with the same importance as the production system.
  • Test Automation: Attempt to automate all types of testing (unit, functional, regression, performance, security) as far as feasible.

Requirements strategy
1. Always implement highest priority work items first (Each new work item is prioritized by Product Owner and added to the stack).
2. Work items may be reprioritized at any time or work items may be removed at any time.
3. A module in greater detail should have higher priority than a module in lesser detail.

Quality and Test Objectives
Each objective below gives the feature, its description, the measure and target, and the priority.

Accuracy (Must Have)
Features and functions work as proposed (i.e. as per the requirements). Measure and target: 100% completion of the agreed features, with open defects limited to:
  • Severity 1 defects = 0
  • Severity 2 defects = 0
  • Severity 3 defects < 5
  • Severity 4 defects < 10

Integrity (Must Have)
Ability to prevent unauthorized access, prevent information loss, protect from virus infection and protect the privacy of the data entered. Measure and target:
  • All access is via HTTPS (over a secured connection).
  • User passwords and session tokens are encrypted.

Maintainability (Must Have)
Ease of adding features, correcting defects or releasing changes to the system. Measure and target:
  • Code Duplication < 5%
  • Code Complexity < 8
  • Unit Test Coverage > 80%
  • Method Length < 20 lines

Availability (Should Have)
Percentage of planned up-time that the system is required to operate. Measure and target: the system is available 99.99% of the time, as measured through system logs.

Interoperability (Must Have)
Ease with which the system can exchange information with other systems. Measure and target: the user interface renders and functions properly on the following browser versions (and later):
  1. IE 9.0
  2. Firefox 18.0
  3. Safari 5.0
  4. Chrome 11.0

Performance (Should Have)
Responsiveness of the system under a given load and the ability to scale to meet growing demand. Measure and target:
  1. Apdex Score > 0.9 (see the sketch after this table)
  2. Response Time < 200 ms
  3. Throughput > 100 per minute

Test Scope (both business processes and the technical solution)
In Scope
Identify what is included in testing for this particular project. Consider what is new and what has been changed or corrected for this product release.
  • (Automated) Unit testing
  • Code analysis (static and dynamic)
  • Integration testing
  • (Automated) Feature and functional testing
  • Data conversion testing
  • System testing
  • (Automated) Security testing
  • Environment testing
  • (Automated) Performance and Availability testing
  • (Automated) Regression testing
  • Acceptance testing
Out of Scope
Identify what is excluded from testing for this particular project.

Testing Types
For each testing type below, the definition is followed by example test tools. Remove the tools that will not be used.

  • Unit testing: Testing that verifies the implementation of software elements in isolation. Tools: xUnit test tools (NUnit, JUnit), mocking tools.
  • Code analysis (static and dynamic): Walkthrough and code analysis. Tools: static analysis - Java: Checkstyle, FindBugs, Jtest, AgileJ StructureViews; .NET: FxCop, StyleCop, CodeRush. Dynamic analysis - Avalanche, DynInst, BoundsChecker.
  • Integration testing: Testing in which software elements, hardware elements, or both are combined and tested until the entire system has been integrated. Tools: VectorCAST C/C++.
  • Functional and feature testing: Testing an integrated hardware and software system to verify that the system meets the required functionality:
    • 100% requirements coverage
    • 100% coverage of the main flows
    • 100% of the highest risks covered
    • Operational scenarios tested
    • Operational manuals tested
    • All failures are reported
    Tools: UFT, Selenium WebDriver, Watir, Canoo WebTest, SoapUI Pro.
  • System testing: Testing the whole system with end-to-end flows. Tools: Selenium, QTP, TestComplete.
  • Security testing: Verify secure access, transmission and password/ session security. Tools: BFBTester, CROSS, Flawfinder, Wireshark, WebScarab, Wapiti, x5s, Exploit-Me, Websecurify, N-Stalker.
  • Environment testing: Testing on each supported platform/ browser. Tools: GASP, QEMU, KVM, Xen, PsTools.
  • Performance and availability testing: Load, scalability and endurance tests. Tools: LoadRunner, JMeter, AgileLoad, WAPT, LoadUI.
  • Data conversion testing: Performed to verify the correctness of automated or manual conversions and/or loads of data, in preparation for implementing the new system. Tools: DTM, QuerySurge, PICT, Slacker.
  • Regression testing: Testing all the prior features and re-testing previously closed bugs. Tools: QTP, Selenium WebDriver.
  • Acceptance testing: Testing based on acceptance criteria, to enable the customer to determine whether or not to accept the system. Tools: Selenium, Watir, iMacros, Agile Acceptance Test Tool.
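Several rows above list Selenium WebDriver. As an illustration, here is a minimal sketch of an automated functional check in Python with Selenium (version 4 style locators); the URL, element IDs and expected title are hypothetical placeholders.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver setup
try:
    driver.get("https://example.com/login")              # hypothetical page
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Verify that the expected post-login page was reached.
    assert "Dashboard" in driver.title, "login did not reach the dashboard"
finally:
    driver.quit()
```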

Test Design strategy
1. Specification based / Black box techniques (Equivalence classes, Boundary value analysis, Decision tables, State Transitions and Use case testing); boundary value analysis is sketched after this list.
2. Structure based / white box techniques (Statement coverage, Decision coverage, Condition coverage and Multi condition coverage)
3. Experience based techniques (Error guessing and Exploratory testing)
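As an illustration of the specification based techniques in item 1, here is a minimal boundary value analysis sketch in Python; the validation rule under test (an order quantity from 1 to 100) is a hypothetical example.

```python
import unittest

def is_valid_quantity(qty):
    # Hypothetical rule under test: a valid order quantity is 1..100.
    return 1 <= qty <= 100

class BoundaryValueTests(unittest.TestCase):
    def test_boundaries(self):
        # Exercise values at and just beyond each boundary of [1, 100].
        cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
        for qty, expected in cases.items():
            self.assertEqual(is_valid_quantity(qty), expected, f"qty={qty}")

if __name__ == "__main__":
    unittest.main()
```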

Test Environments strategy
Development
Description: This environment is local and specific to each developer/tester machine. It is based on the version/branch of the source code being developed. Integration points are typically impersonated.
Data setup: Data and configuration are populated through setup scripts.
Usage: Unit, functional and acceptance tests; test tools, e.g. xUnit test tools (NUnit, JUnit) and mocking tools; source code management for version control.

Integration
Description: This environment supports continuous integration of code changes and execution of unit, functional and acceptance tests. Additionally, static code analysis is completed in this environment.
Data setup: Data and configuration are populated through setup scripts.
Usage: Unit, functional and acceptance tests; static code analysis; continuous integration tools, e.g. CruiseControl.

Staging
Description: This environment supports exploratory testing.
Data setup: Populated with post-analysis obfuscated production data.
Usage: Exploratory testing.

Production
Description: Live environment.
Data setup: New instances will contain standard project reference data. Existing instances will have their current data migrated into the environment.
Usage: Production verification testing.

Test Execution strategy
We will keep in mind the following points:
  1. Agile testing must be iterative.
  2. Testers cannot rely on having a complete specification.
  3. Testers should be flexible.
  4. Testers need to be independent and independently empowered in order to be effective.
  5. Be generalizing specialists.
  6. Be prepared to work closely with developers.
  7. Focus on value-added activities.
  8. Focus on what to test, not how to test.
  9. Testers should be embedded in the agile team.
  10. Be flexible enough to contribute in any way they can.
  11. Have a wide range of skills, with one or more specialties.
  12. Keep the feedback cycles short.
  13. Focus on sufficient and straightforward situations.
  14. Focus on exploratory testing.
  15. Specify the meaning of "Done", i.e. when the activities/tasks performed during system development can be considered complete.
  16. Define when to continue or stop testing before delivering the system to the customer. Specify which evaluation criteria are to be used (e.g. time, coverage and quality) and how they will be used.
Additionally, use this section to describe the steps for executing tests in preparation for deployment/release/upgrade of the software. Key execution steps could include (a minimal scripted example follows the list):
1. Steps to build the system
2. Steps to execute automated tests
3. Steps to populate environment with reference data
4. Steps to generate test report/code metrics
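The execution steps can be scripted so that they run the same way every time. Below is a minimal sketch in Python, assuming a pytest-based suite; the build and data setup script names are hypothetical.

```python
import subprocess
import sys

# Each step mirrors the list above: build, populate reference data,
# then run the automated tests (which also emit the test report).
steps = [
    ["python", "build.py"],                    # hypothetical build script
    ["python", "load_reference_data.py"],      # hypothetical data setup
    ["pytest", "--junitxml=test-report.xml"],  # run tests, write a report
]

for cmd in steps:
    print("Running:", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"Step failed: {' '.join(cmd)}")
```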


Test Data Management strategy
Use this section to describe the approach for identifying and managing test data. Consider the following guidelines:
1. System and user acceptance tests – a subset of production data should be used to initialize the test environment (a sketch follows this list).
2. Performance and availability tests – full-size production files should be used to test the performance and volume aspects of the system.
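A minimal sketch of guideline 1, in Python: sample a fraction of production rows and obfuscate the personal fields before loading them into the test environment. The file and column names are hypothetical.

```python
import csv
import hashlib
import random

def obfuscate(value):
    # One-way hash: unusable as real data, but consistent for equal inputs.
    return hashlib.sha256(value.encode()).hexdigest()[:12]

with open("production_customers.csv", newline="") as src, \
        open("test_customers.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if random.random() < 0.10:                  # keep roughly 10% of rows
            row["email"] = obfuscate(row["email"])  # hypothetical column
            row["name"] = obfuscate(row["name"])    # hypothetical column
            writer.writerow(row)
```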


Test Automation strategy
Adopt a planned approach to developing test automation, and work to increase the quality of the test automation code. Select the test cases for automation based on the following factors (a simple scoring sketch follows the list):
  • Risk
  • How long the tests take to run manually
  • The cost of automating the test
  • How easy the test cases are to automate
  • How many times the test is expected to run during the project
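Here is a minimal scoring sketch in Python that ranks candidates on those factors; the weighting and the sample test cases are hypothetical, not a prescribed formula.

```python
def automation_score(risk, manual_minutes, cost_hours, expected_runs):
    # Hypothetical weighting: benefit grows with risk and with the manual
    # effort saved over the project; the cost of automating is subtracted.
    benefit = risk * 2 + (manual_minutes * expected_runs) / 60.0
    return benefit - cost_hours

# (name, risk 1-5, manual minutes per run, automation cost in hours, runs)
candidates = [
    ("TC-login", 5, 10, 4, 50),
    ("TC-report-layout", 2, 30, 16, 5),
]
for name, *factors in candidates:
    print(name, round(automation_score(*factors), 1))
```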
Test Management
The test plan, test scenarios, test cases and bug reports should be stored in the same system, for example Bugzilla or Jira. Any agile tool can be used in which the user stories, test plan, test scenarios, test cases and bug reports are stored in the same place.

Risks and Assumptions
Risks and assumptions raised in the daily stand-up meeting (in front of all team members, including the Scrum Master) should be logged and addressed immediately.

Defect Management strategy
Ideally, defects are raised and recorded only when they are not going to be fixed immediately. In this case, the conditions under which they occur and the severity need to be accurately recorded so that the defect can be easily reproduced and then fixed.

Defect Classification
  • Critical: Defect causes a critical loss of business functionality or a complete loss of service.
  • Major: Defect causes a major impact to business functionality and no interim workaround is available.
  • Minor: Defect causes a minor impact to business functionality and an interim workaround is available.
  • Trivial: Defect is cosmetic only and usability is not impacted.

Defect Lifecycle
  • Identify defect: Ensure that the defect can be reproduced. Raise it in the defect tracking system.
  • Prioritize defect: Based on its severity, the defect is prioritized in the team backlog.
  • Analyze defect: Analyze the defect against the acceptance criteria and the implementation details.
  • Resolve defect: Implement changes and/or remediate the failing tests.
  • Verify resolution: Execute tests to verify that the defect is resolved and no regression is seen.
  • Close defect: Close the defect in the defect tracking system.

Specify the shared defect tracking system.
Note: This example test strategy has been contributed by Varsha Tomar. Varsha has 9 years' experience in both manual and automated software testing. Currently, she works with Vinculum Solutions as a Senior Test Lead. Her interests include software testing, test automation, training, testing methodologies and exploring testing tools.

Please put any questions that you have in the comments.