Quality assurance testing lifecycle

QA Testing
Software testing verifies that the software meets its requirements and that it is complete and ready for delivery.
Objectives of QA Testing
  1. Assure the quality of client deliverables. 
  2. Design, assemble, and execute a full testing life cycle. 
  3. Confirm the full functional capabilities of the final product. 
  4. Confirm stability and performance (response time, etc.) of the final product. 
  5. Confirm that deliverables meet client expectations/requirements. 
  6. Report and document code and design defects, and verify their resolution.
Preparing for QA Testing
Prior to conducting formal software testing, QA develops testing documentation (including test plans, test specifications, and test procedures) and reviews the documentation for completeness and adherence to standards. QA confirms that:
  1. The test cases are testing the software requirements in accordance with test plans. 
  2. The test cases are verifiable. 
  3. The correct or "advertised" version of the software is being tested (by QA monitoring of the Configuration Management activity).
QA then conducts the testing in accordance with the test procedures, documents and reports defects, and reviews the test reports.
The Key to Productive QA Testing
It is crucial to recognize that all testing is conducted by comparing the final product to the product's stated requirements; therefore, the product requirements must state all functionality of the software and must be updated as changes are made. Any functionality that does not meet the requirements will be recorded as a defect until a resolution is delivered.
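One way to make this comparison explicit is a lightweight requirements-to-test traceability check. Here is a minimal Python sketch; the requirement IDs and test names are hypothetical examples, not from any real project:

# Minimal requirements-to-test traceability sketch.
# Requirement IDs and test names are hypothetical.

requirements = {
    "REQ-001": "User can log in with a valid username and password",
    "REQ-002": "Search returns results within 2 seconds",
}

# Map each test case to the requirement(s) it verifies.
test_coverage = {
    "test_login_valid_credentials": ["REQ-001"],
    "test_search_response_time": ["REQ-002"],
}

def uncovered_requirements():
    """Return requirement IDs with no test case mapped to them."""
    covered = {req for reqs in test_coverage.values() for req in reqs}
    return sorted(set(requirements) - covered)

if __name__ == "__main__":
    missing = uncovered_requirements()
    print("Uncovered requirements:", missing or "none")

Any requirement that comes back uncovered is a gap in the test plan before a single test is run.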
Twelve Types of QA Testing
1. Unit testing (conducted by Development)
Unit test case design begins after a technical review approves the high-level design. The unit test cases shall be designed to verify the correctness of the program logic. White box testing is used to test the modules and the procedures that support them. The white box testing technique ignores the function of the program under test and focuses only on its code and the structure of that code. To accomplish this, a statement and condition coverage technique shall be used. Test case designers shall generate cases that not only cause each decision to take on a true and a false outcome at least once, but also cause each condition within those decisions to take on each possible value at least once (illustrated in the sketch after the list below). In other words:
  1. Each decision statement in the program shall take on a true value and a false value at least once during testing. 
  2. Each condition shall take on each possible outcome at least once during testing. 
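A minimal illustration of these two coverage rules in Python; the function under test and its thresholds are hypothetical:

# Decision/condition coverage sketch for a hypothetical function.

def free_shipping(total, is_member):
    # One decision with two conditions: total >= 50, or is_member.
    if total >= 50 or is_member:
        return True
    return False

def test_decision_and_condition_coverage():
    # Decision true, first condition true:
    assert free_shipping(60, False) is True
    # Decision true, second condition true (first false):
    assert free_shipping(10, True) is True
    # Decision false, both conditions false:
    assert free_shipping(10, False) is False

if __name__ == "__main__":
    test_decision_and_condition_coverage()
    print("decision/condition coverage cases passed")

The three cases together drive the decision to both outcomes and each condition to both of its possible values.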
2. Configuration Management
  1. The configuration management team prepares the testing environment.
3. Build Verification
When a build has met completion criteria and is ready to be tested, the QA team runs an initial battery of basic tests to verify the build.
  1. If the build is not testable at all, the QA team will reject the build. 
  2. If some portions of the website are testable and others are not yet available, the project manager, technical lead, and QA team will revise the build schedule and deliverable dates. 
  3. If all portions of the build pass verification, the QA team will proceed with testing (a smoke-test sketch follows this list).
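A minimal build verification (smoke test) sketch in Python; the staging URL and page paths are hypothetical:

# Build verification (smoke test) sketch: hit a few key pages and
# confirm they respond. The base URL and paths are assumed examples.
import urllib.request
import urllib.error

BASE_URL = "http://staging.example.com"   # assumed staging host
SMOKE_PATHS = ["/", "/login", "/search"]  # assumed critical pages

def verify_build():
    failures = []
    for path in SMOKE_PATHS:
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
                if resp.status != 200:
                    failures.append((path, resp.status))
        except (urllib.error.URLError, OSError) as exc:
            failures.append((path, str(exc)))
    return failures

if __name__ == "__main__":
    failed = verify_build()
    if failed:
        print("REJECT build; failures:", failed)  # build not testable
    else:
        print("Build verified; proceed with testing")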
4. Integration Testing
  1. Integration testing proves that all areas of the system interface with each other correctly and that there are no gaps in the data flow. The final integration test proves that the system works as an integrated unit when all the fixes are complete.
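As a simple illustration, an integration test exercises the interface between two components and asserts that no data is lost in the hand-off. A minimal Python sketch, assuming two hypothetical modules (orders and inventory):

# Integration test sketch: verify that two hypothetical modules
# interface correctly and that there are no gaps in the data flow.

def create_order(items):
    """Hypothetical order module: returns an order record."""
    return {"items": list(items), "status": "created"}

def reserve_stock(order):
    """Hypothetical inventory module: consumes the order record."""
    # Interface contract: every ordered item must be reserved.
    return {item: "reserved" for item in order["items"]}

def test_order_to_inventory_integration():
    order = create_order(["sku-1", "sku-2"])
    reservations = reserve_stock(order)
    # No gaps in the data flow: every ordered item was reserved.
    assert set(reservations) == set(order["items"])
    assert all(v == "reserved" for v in reservations.values())

if __name__ == "__main__":
    test_order_to_inventory_integration()
    print("integration test passed")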
5. Functional Testing
  1. Functional testing assures that each element of the application meets the functional requirements of the business as outlined in the requirements document/functional brief, system design specification, and other functional documents produced during the course of the project (such as records of change requests, feedback, and resolution of issues).
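For example, a functional test checks one element of the application against one stated requirement. A minimal Python sketch, assuming a hypothetical rule from a functional brief (passwords must be at least 8 characters and contain a digit):

# Functional test sketch: check an application element against a
# functional requirement. The rule and validator are hypothetical.

def is_valid_password(password):
    """Hypothetical rule from the functional brief:
    at least 8 characters, with at least one digit."""
    return len(password) >= 8 and any(c.isdigit() for c in password)

def test_password_rule_meets_requirement():
    assert is_valid_password("abcdefg1")          # meets both clauses
    assert not is_valid_password("short1")        # too short
    assert not is_valid_password("nodigitshere")  # no digit

if __name__ == "__main__":
    test_password_rule_meets_requirement()
    print("functional requirement verified")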
6. Non-functional Testing (Performance Testing)
  1. Non-functional testing proves that the documented performance standards or requirements are met. Examples of testable standards include response time and compatibility with specified browsers and operating systems.
  2. If the system hardware specifications state that the system can handle a specific amount of traffic or data volume, then the system will be tested for those levels as well. 
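A minimal sketch of a response-time check in Python; the endpoint and the 2-second threshold are hypothetical stand-ins for whatever the documented standard says:

# Performance test sketch: confirm a documented response-time standard.
import time
import urllib.request

URL = "http://staging.example.com/search?q=test"  # assumed endpoint
MAX_SECONDS = 2.0                                 # assumed documented standard

def measure_response_time(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()  # include the full body download in the measurement
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = measure_response_time(URL)
    print(f"response time: {elapsed:.2f}s")
    assert elapsed <= MAX_SECONDS, "documented response-time standard not met"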
7. Defect Fix Validation
  1. If any known defects or issues existed during development, QA tests specifically in those areas to validate the fixes.
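In practice each fix gets a test written directly from the defect report. A minimal Python sketch; the defect ID, function, and scenario are hypothetical:

# Defect fix validation sketch: a test derived from a defect report.

def apply_discount(price, percent):
    """Hypothetical fixed function; the original defect (QA-123)
    was a wrong result when percent was 0."""
    return price * (1 - percent / 100)

def test_qa_123_zero_percent_discount():
    # Reproduces the exact scenario from defect QA-123 and asserts the fix.
    assert apply_discount(100.0, 0) == 100.0

if __name__ == "__main__":
    test_qa_123_zero_percent_discount()
    print("defect QA-123 fix validated")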
8. Ad Hoc Testing
  1. This type of testing is conducted to simulate actual user scenarios. QA engineers step through a set of intended actions and behave as a real user would under slow response, for example clicking ahead before the page has finished loading (see the sketch below).
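A minimal Python sketch of one such scenario: an impatient user double-clicking "submit" before the first request completes. The handler and its idempotency-token scheme are hypothetical:

# Ad hoc testing sketch: simulate a double-click on a submit button.
import threading

submitted_orders = []
seen_tokens = set()
lock = threading.Lock()

def handle_submit(order_id, token):
    """Hypothetical handler that should ignore duplicate submissions
    carrying the same form token."""
    with lock:
        if token in seen_tokens:
            return  # duplicate click: ignore
        seen_tokens.add(token)
        submitted_orders.append(order_id)

def test_double_click_creates_one_order():
    token = "form-token-1"
    t1 = threading.Thread(target=handle_submit, args=("order-1", token))
    t2 = threading.Thread(target=handle_submit, args=("order-1", token))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    assert submitted_orders == ["order-1"]  # only one order created

if __name__ == "__main__":
    test_double_click_creates_one_order()
    print("double-click scenario handled correctly")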
9. Regression Testing
  1. Regression testing is performed after the release of each phase to ensure that there is no impact on previously released software. Regression testing cannot be conducted on the initial build because the test cases are taken from defects found in previous builds.
  2. Regression testing ensures that there is a continual increase in the functionality and stability of the software. 
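A minimal Python sketch of a regression suite whose cases come from defects found in previous builds; the defect IDs, inputs, and function under test are hypothetical:

# Regression test sketch: re-run cases derived from earlier defects.

def parse_quantity(text):
    """Hypothetical function under test."""
    text = text.strip()
    return int(text) if text else 0

# Each tuple: (defect id from a previous build, input, expected output).
REGRESSION_CASES = [
    ("QA-101", "5", 5),      # originally failed on plain digits
    ("QA-117", " 12 ", 12),  # originally failed on surrounding spaces
    ("QA-130", "", 0),       # originally crashed on empty input
]

def run_regression_suite():
    for defect_id, given, expected in REGRESSION_CASES:
        actual = parse_quantity(given)
        assert actual == expected, f"{defect_id} regressed: got {actual!r}"

if __name__ == "__main__":
    run_regression_suite()
    print(f"{len(REGRESSION_CASES)} regression cases passed")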
10. Error Management
  1. During the QA testing workflow, all defects will be reported using the error management workflow.
  2. Regular meetings will take place between QA, system development, interface development and project management to discuss defects, priority of defects, and fixes.
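A minimal sketch of the kind of defect record those meetings revolve around; the fields, priority levels, and status flow are hypothetical:

# Error management sketch: a minimal defect record with a status history.
from dataclasses import dataclass, field

@dataclass
class Defect:
    defect_id: str
    summary: str
    priority: str = "medium"  # e.g. low / medium / high / critical
    status: str = "open"      # open -> fixed -> validated -> closed
    history: list = field(default_factory=list)

    def transition(self, new_status, note=""):
        self.history.append((self.status, new_status, note))
        self.status = new_status

if __name__ == "__main__":
    d = Defect("QA-142", "Search page times out on empty query", "high")
    d.transition("fixed", "null check added")
    d.transition("validated", "fix verified by QA")
    print(d.defect_id, d.status, d.history)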
11. QA Reporting
  1. QA states the results of testing, reports outstanding defects/known issues, and makes a recommendation for release into production.
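A minimal sketch of how such a recommendation might be derived from the numbers; the release policy here is an assumed, illustrative one:

# QA reporting sketch: turn test results and outstanding defects
# into a release recommendation.

def release_recommendation(passed, failed, open_critical_defects):
    """Recommend release only if all tests passed and no critical
    defects remain open (an assumed, illustrative policy)."""
    if failed == 0 and open_critical_defects == 0:
        return "Recommend release into production"
    return "Do not recommend release"

if __name__ == "__main__":
    print(release_recommendation(passed=240, failed=0, open_critical_defects=0))
    print(release_recommendation(passed=238, failed=2, open_critical_defects=1))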
12. Release into production
  1. If the project team decides that the build is acceptable for production, the configuration management team will migrate the build into production.

