REGRESSION TESTING is a type of software testing that aims to ensure that changes (enhancements or defect fixes) to the software have not adversely affected it. The tests that are rerun can be both functional and non-functional.
- regression testing: A type of change-related testing to detect whether defects have been introduced or uncovered in unchanged areas of the software.
Any code change carries the likelihood of impacting functionality that is not directly associated with that code, so it is essential to conduct regression testing to make sure that fixing one thing has not broken another. During regression testing, new test cases are not created; previously created test cases are re-executed. Regression [noun] literally means the act of going back to a previous place or state; return or reversion.
The need for Regression Testing could arise due to any of the changes below:
- Defect fix
- New feature
- Change in an existing feature
- Code refactoring
- Change in technical design / architecture
- Change in configuration / environment (hardware, software, network)
Regression testing can be performed during any level of testing (Unit, Integration, System, or Acceptance) but it is most relevant during System Testing.
During each level, Regression testing is performed after Confirmation Testing.
Regression test automation is especially worthwhile in iterative and incremental development life cycles such as Agile, where new features, changes to existing features, defect fixes and code refactoring (all within a short cycle of time) result in frequent changes to the software.
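As a minimal sketch of what such automation can look like, the snippet below registers previously created test cases in a suite and re-runs all of them after every change. It is plain Python with hypothetical test names and a hypothetical function under test; a real project would typically use a framework such as pytest or JUnit instead of a hand-rolled runner.

```python
# Minimal regression harness: every registered test case is re-executed
# after each change, so a fix in one area cannot silently break another.

REGRESSION_SUITE = []

def regression_test(fn):
    """Register a test function in the regression suite."""
    REGRESSION_SUITE.append(fn)
    return fn

# --- Hypothetical system under test -------------------------------------
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# --- Previously created test cases (re-executed, not re-written) --------
@regression_test
def test_basic_discount():
    assert apply_discount(100.0, 10) == 90.0

@regression_test
def test_zero_discount():
    assert apply_discount(50.0, 0) == 50.0

def run_regression_suite():
    """Re-run every registered test; return lists of passed/failed names."""
    passed, failed = [], []
    for test in REGRESSION_SUITE:
        try:
            test()
            passed.append(test.__name__)
        except AssertionError:
            failed.append(test.__name__)
    return passed, failed

passed, failed = run_regression_suite()
print(f"{len(passed)} passed, {len(failed)} failed")
```

In an Agile setting this kind of suite would run automatically in continuous integration on every commit, which is exactly why the up-front automation effort pays off.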
Depending on the situation, the scope of a regression test can be full or partial:
Full Regression Test
The entire regression test suite (or the entire set of test cases for the product) is run to ensure that the change has not affected ANY part of the software. Though ideal, this is costly in terms of time & effort and may not always be feasible.
Partial Regression Test
When running the entire regression test suite is not feasible, certain test cases are selected and run while others are left out. Selection is normally based on:
- General prioritization: Prioritize test cases based on business impact, critical features, frequently used functionalities, complex implementation and buggy areas of the software.
- Version-specific prioritization: Prioritize test cases based on what changes have been made in the version of the software and the likely areas in the software that might have been impacted due to those changes. This requires a sound change impact analysis.
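The two selection approaches above can be sketched as filters over the full suite. In this illustrative Python sketch, each test case carries a business priority (for general prioritization) and a set of covered modules (for version-specific prioritization); the test names, priorities and module names are all hypothetical, and the set of changed modules would come from change impact analysis.

```python
# Each test case carries metadata used for selection: a business priority
# (1 = most critical) and the modules it covers. All values are illustrative.

TEST_CASES = [
    {"name": "test_login",          "priority": 1, "modules": {"auth"}},
    {"name": "test_checkout_total", "priority": 1, "modules": {"cart", "billing"}},
    {"name": "test_profile_avatar", "priority": 3, "modules": {"profile"}},
    {"name": "test_invoice_pdf",    "priority": 2, "modules": {"billing"}},
]

def general_prioritization(cases, max_priority=2):
    """Select business-critical tests regardless of what changed."""
    return [c["name"] for c in cases if c["priority"] <= max_priority]

def version_specific_prioritization(cases, changed_modules):
    """Select tests covering modules touched in this version
    (changed_modules is the output of a change impact analysis)."""
    return [c["name"] for c in cases if c["modules"] & changed_modules]

print(general_prioritization(TEST_CASES))
# ['test_login', 'test_checkout_total', 'test_invoice_pdf']
print(version_specific_prioritization(TEST_CASES, {"billing"}))
# ['test_checkout_total', 'test_invoice_pdf']
```

In practice the two schemes are often combined: run the version-specific selection first, then top it up with the general high-priority tests that time allows.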
Last Updated on September 7, 2020 by STF