Tuesday 3 June 2014

Regression Testing



Regression testing, also known as validation testing, provides a consistent, repeatable validation of each change to an application under development or being modified. Each time a defect is fixed, the potential exists to inadvertently introduce new errors, problems, and defects, and an element of uncertainty is introduced about the application's ability to repeat everything that went right up to the point of failure.

Regression testing is the selective retesting of an application or system that has been modified, to ensure that no previously working components, functions, or features fail as a result of the repairs. Regression testing is conducted in parallel with other tests and can be viewed as a quality control tool to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the change. It is important to understand that regression testing does not verify that a specific defect has been fixed; it verifies that the rest of the application, up to the point of repair, was not adversely affected by the fix.
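As a minimal sketch of the idea, the pytest example below keeps a small suite of previously passing checks and reruns it after every fix. The calculate_invoice_total function and its expected values are hypothetical stand-ins for any previously working feature of your own application.

    # regression_test.py -- a minimal regression suite sketch (pytest).
    # The function under test and the expected values are hypothetical;
    # substitute the previously working features of your own application.
    import pytest

    def calculate_invoice_total(items, tax_rate):
        # Stand-in for production code that a recent fix may have touched.
        subtotal = sum(price * qty for price, qty in items)
        return round(subtotal * (1 + tax_rate), 2)

    # Each case below represents behavior that worked before the fix.
    # If any of them fail after a change, the fix has caused a regression.
    @pytest.mark.parametrize("items,tax_rate,expected", [
        ([(10.00, 2)], 0.05, 21.00),           # simple two-item order
        ([], 0.05, 0.00),                      # empty order still totals zero
        ([(19.99, 1), (5.01, 3)], 0.00, 35.02) # zero tax leaves subtotal intact
    ])
    def test_previously_working_totals(items, tax_rate, expected):
        assert calculate_invoice_total(items, tax_rate) == expected

Rerunning the same suite after every repair is what makes the validation consistent and repeatable: the tests themselves never change unless the requirements do.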

Sample Entry and Exit Criteria for Regression Testing

Entry Criteria

  • The defect is repeatable and has been properly documented
  • A change control or defect tracking record was opened to identify and track the regression testing effort
  • A regression test specific to the defect has been created, reviewed, and accepted

Exit Criteria

  • Results of the test show no negative impact to the application

Production Verification Testing



Production verification testing is a final opportunity to determine if the software is ready for release. Its purpose is to simulate the production cutover as closely as possible and, for a period of time, to simulate real business activity. As a sort of full dress rehearsal, it should identify anomalies or unexpected changes to existing processes introduced by the new application. For mission-critical applications, the importance of this testing cannot be overstated.

The application should be completely removed from the test environment and then completely reinstalled exactly as it will be in the production implementation. Then mock production runs will verify that the existing business process flows, interfaces, and batch processes continue to run correctly. Unlike parallel testing in which the old and new systems are run side-by-side, mock processing may not provide accurate data handling results due to limitations of the testing database or the source data.
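A minimal smoke-check sketch of the reinstall-and-verify step might look like the following. The service endpoints and batch job names are hypothetical placeholders for whatever interfaces and batch processes the real production cutover would exercise.

    # mock_production_run.py -- sketch of a post-reinstall verification pass.
    # Endpoint URLs and job commands below are hypothetical examples; replace
    # them with the interfaces and batch processes of the real application.
    import subprocess
    import urllib.request

    INTERFACE_ENDPOINTS = [
        "http://test-env.example.com/api/health",
        "http://test-env.example.com/api/orders/ping",
    ]

    BATCH_JOBS = [
        ["python", "run_nightly_batch.py", "--dry-run"],
    ]

    def verify_interfaces():
        # Each interface must answer exactly as it would in production.
        for url in INTERFACE_ENDPOINTS:
            with urllib.request.urlopen(url, timeout=10) as resp:
                assert resp.status == 200, f"{url} returned {resp.status}"

    def verify_batch_jobs():
        # Batch processes must complete cleanly against the test database.
        for cmd in BATCH_JOBS:
            result = subprocess.run(cmd, capture_output=True)
            assert result.returncode == 0, f"{cmd} failed: {result.stderr}"

    if __name__ == "__main__":
        verify_interfaces()
        verify_batch_jobs()
        print("Mock production run completed with no anomalies detected.")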

Sample Entry and Exit Criteria for Production Verification Testing

Entry Criteria

  • UAT has been completed and approved by all necessary parties
  • Known defects have been documented
  • Migration package documentation has been completed, reviewed, and approved by the production systems manager

Exit Criteria

  • Package migration is complete
  • Installation testing has been performed and documented and the results have been signed off
  • Mock testing has been documented, reviewed, and approved
  • All tests show that the application will not adversely affect the production environment
  • A System Change Record with approvals has been prepared

User Acceptance Testing



User Acceptance Testing is also called Beta testing, application testing, and end-user testing. Whatever you choose to call it, it’s where testing moves from the hands of the IT department into those of the business users. Software vendors often make extensive use of Beta testing, some more formally than others, because they can get users to do it for free.

By the time UAT is ready to start, the IT staff have resolved, in one way or another, all the defects they identified. Despite their best efforts, though, they probably have not found every flaw in the application. A general rule of thumb is that no matter how bulletproof an application seems when it goes into UAT, a user somewhere can still find a sequence of commands that will produce an error.

To be of real use, UAT cannot consist of random users playing with the application. A mix of business users with varying degrees of experience and subject-matter expertise needs to actively participate in a controlled environment. Representatives from the group work with Testing Coordinators to design and conduct tests that reflect the activities and conditions seen in normal business usage, and business users also participate in evaluating the results. This ensures that the application is tested in real-world situations and that the tests cover the full range of business usage. The goal of UAT is to simulate realistic business activity and processes in the test environment.

A phase of UAT called “Unstructured Testing” will be conducted whether or not it’s in the Test Plan. Also known as guerrilla testing, this is when business users bash away at the keyboard to find the weakest parts of the application; in effect, they try to break it. Although it’s a free-form test, it’s important that participating users understand they must be able to reproduce the steps that led to any error they find. Otherwise the discovery is of no use.
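One lightweight way to make guerrilla sessions reproducible is to record every step as it is attempted, so the exact sequence can be replayed when something breaks. The sketch below is purely illustrative and assumes a scriptable test harness; the wrapped actions stand in for whatever commands the testers actually issue.

    # session_log.py -- sketch of an action log for unstructured testing.
    # The wrapped actions here are hypothetical; real sessions would log
    # real UI or API commands.
    import datetime

    LOG_FILE = "uat_session.log"

    def log_action(action, detail=""):
        # Append a timestamped record of each step the tester performs.
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        with open(LOG_FILE, "a") as fh:
            fh.write(f"{stamp}\t{action}\t{detail}\n")

    def run_step(action, func, *args):
        # Log first, then act, so the failing step is always captured.
        log_action(action, repr(args))
        try:
            return func(*args)
        except Exception as exc:
            log_action("ERROR", f"{action} raised {exc!r}")
            raise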

A common occurrence in UAT is that once the business users start working with the application, they find that it doesn’t do exactly what they want it to do, or that it does something that, although correct, is not quite optimal. Investigation finds that the root cause lies in the Business Requirements, so the users will ask for a change. UAT is when change control must be most rigorously enforced, but change control is beyond the scope of this paper. Suffice it to say that scope creep is especially dangerous in this late phase and must be avoided.



Sample Entry and Exit Criteria for User Acceptance Testing

Entry Criteria

  • Integration testing signoff was obtained
  • Business requirements have been met or renegotiated with the Business Sponsor or representative
  • UAT test scripts are ready for execution
  • The testing environment is established
  • Security requirements have been documented and necessary user access obtained

Exit Criteria

  • UAT has been completed and approved by the user community in a transition meeting
  • Change control is managing requested modifications and enhancements
  • Business sponsor agrees that known defects do not impact a production release—no remaining defects are rated 3, 2, or 1

Types of Testing



• Performance Testing – Performance tests are used to evaluate and understand the application’s scalability when, for example, more users are added or the volume of data increases. This is particularly important for identifying bottlenecks in high-usage applications. The basic approach is to collect timings of the critical business processes while the test system is under a very low load (a ‘quiet box’ condition) and then collect the same timings with progressively higher loads until the maximum required load is reached. For a data retrieval application, reviewing the performance pattern may show that a change needs to be made in a stored SQL procedure or that an index should be added to the database design. (A combined sketch of this approach, together with stress and load measurement, follows this list.)

• Stress Testing – Stress testing is performance testing at higher than normal simulated loads. Stressing runs the system or application beyond the limits of its specified requirements to determine the load under which it fails and how it fails. A gradual performance slowdown leading to a non-catastrophic system halt is the desired result, but if the system is going to crash and burn suddenly, it’s important to know the point at which that will happen. Catastrophic failure in production means beepers going off, people coming in after hours, system restarts, frayed tempers, and possible financial losses. This test is arguably the most important test for mission-critical systems.

• Load Testing – Load tests are the opposite of stress tests. They test the capability of the application to function properly under expected normal production conditions and measure the response times for critical transactions or processes to determine whether they are within limits specified in the business requirements and design documents, or whether they meet Service Level Agreements. For database applications, load testing must be executed on a current production-size database. If some database tables are forecast to grow much larger in the foreseeable future, then serious consideration should be given to testing against a database of the projected size.
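Taken together, the three approaches can be sketched as a single load ramp: collect baseline timings under a near-idle ‘quiet box’ load, step the simulated user count upward, check response times against the SLA, and keep going until failures reveal the stress point. The transaction function, user levels, and two-second SLA below are all hypothetical placeholders.

    # load_ramp.py -- sketch combining the three ideas above: baseline
    # timings at low load, a progressive ramp, an SLA check (load test),
    # and a hunt for the point where failures begin (stress test).
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    SLA_SECONDS = 2.0  # assumed Service Level Agreement response-time limit

    def critical_transaction():
        # Stand-in for a real business process (a query, a posting, etc.).
        time.sleep(0.05)

    def timed_run(_):
        # Time one execution, capturing any failure instead of raising.
        start = time.perf_counter()
        try:
            critical_transaction()
            return time.perf_counter() - start, None
        except Exception as exc:
            return time.perf_counter() - start, exc

    def ramp(levels=(1, 5, 10, 25, 50), requests_per_level=100):
        # Start near the 'quiet box' condition and raise the load stepwise.
        for users in levels:
            with ThreadPoolExecutor(max_workers=users) as pool:
                results = list(pool.map(timed_run, range(requests_per_level)))
            errors = sum(1 for _, err in results if err is not None)
            timings = sorted(t for t, err in results if err is None)
            if not timings:
                print(f"{users:>3} users: every request failed")
                break
            p95 = timings[max(0, int(len(timings) * 0.95) - 1)]
            print(f"{users:>3} users: median={statistics.median(timings):.3f}s "
                  f"p95={p95:.3f}s errors={errors}")
            if p95 > SLA_SECONDS:
                print("p95 response time exceeds the SLA: load limit found")
            if errors:
                print("failures observed: approaching the stress point")
                break

    if __name__ == "__main__":
        ramp()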

Performance, stress, and load testing are all major undertakings and will require substantial input from the business sponsors and IT staff in setting up a test environment and designing test cases that can be accurately executed. Because of this, these tests are sometimes delayed and made part of the User Acceptance Testing phase. Load tests especially must be documented in detail so that the tests are repeatable in case they need to be executed several times to ensure that new releases or changes in database size do not push response times beyond prescribed requirements and Service Level Agreements.

Compatibility Testing




Compatibility tests ensure that the application works with differently configured systems, based on the configurations the users have or may have. When testing a web interface, this means testing for compatibility with different browsers and connection speeds.
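For a web interface, one common way to organize such checks is to run the same test across several browsers. The sketch below uses pytest with Selenium WebDriver as an assumed toolchain; the URL and expected page title are hypothetical placeholders.

    # compat_test.py -- sketch of a browser compatibility check using
    # pytest and Selenium WebDriver (an assumed toolchain; any similar
    # driver framework would do).
    import pytest
    from selenium import webdriver

    BROWSERS = {
        "chrome": webdriver.Chrome,
        "firefox": webdriver.Firefox,
    }

    @pytest.fixture(params=BROWSERS.keys())
    def driver(request):
        # Launch each configured browser in turn for the same test.
        drv = BROWSERS[request.param]()
        yield drv
        drv.quit()

    def test_home_page_renders(driver):
        # The same assertion runs once per browser in BROWSERS.
        driver.get("http://test-env.example.com/")
        assert "Example Application" in driver.title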