System Testing & Quality Assurance
Testing
Testing is vital to the success of the system. System testing rests on the logical assumption that if all parts of the system are correct, the goal will be successfully achieved.
Another reason for system testing is its usefulness as a user-oriented vehicle before implementation.
Techniques used for system testing
- Online response. Online systems must have a response time that will not cause hardship to the user. One way to test this is to input transactions on as many CRT screens as would normally be used during peak hours and to time the response to each online function, establishing a true performance level (a timing sketch follows this list).
- Volume. In this test, we create as many records as would normally be produced, to verify that the hardware and software will function correctly. The user is usually asked to provide test data for volume testing.
- Stress testing. The purpose of stress testing is to prove that the candidate system does not malfunction under peak loads. Unlike volume testing, where time is not a factor, we subject the system to a high volume of data for a short time period. This simulates an online environment where a high volume of activities occurs in spurts.
- Recovery and security. A forced system failure is induced to test a backup recovery procedure for file integrity. Inaccurate data are entered to see how the system responds in terms of error detection and protection. Related to file integrity is a test to demonstrate that data and programs are secure from unauthorized access.
- Usability, documentation, and procedure. The usability test verifies the user-friendly nature of the system. This relates to normal operating and error-handling procedures, for example. One aspect of user-friendliness is accurate and complete documentation: the user is asked to run the system using only the documentation and procedures as a guide, to determine whether it can be run smoothly.
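The first three techniques lend themselves to simple instrumentation. The sketch below is a minimal illustration, not a prescribed harness: `submit_transaction` is a hypothetical stand-in for one online function of the candidate system, and the record count and worker count are invented for the example.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def submit_transaction(txn):
    """Hypothetical stand-in for one online function of the candidate system."""
    time.sleep(0.01)  # simulate processing; replace with a real call
    return "ok"

def time_transactions(transactions):
    """Online-response test: time each transaction as the user would see it."""
    timings = []
    for txn in transactions:
        start = time.perf_counter()
        submit_transaction(txn)
        timings.append(time.perf_counter() - start)
    return timings

def stress_test(transactions, workers=50):
    """Stress test: push the same records through in one short burst."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(submit_transaction, transactions))
    return time.perf_counter() - start

txns = [{"id": i} for i in range(200)]  # stand-in for a realistic record volume
timings = time_transactions(txns)
print(f"mean response: {statistics.mean(timings):.4f}s, worst: {max(timings):.4f}s")
print(f"burst of {len(txns)} transactions finished in {stress_test(txns):.2f}s")
```

Comparing the sequential timings against the burst figure gives a rough picture of how response degrades under peak load.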
The Nature of Test Data
The proper choice of test data is as important as the test itself. If test data as input are not valid or representative of the data to be provided by the user, then the reliability of the output is suspect. Test data may be artificial or live.
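Artificial data can be generated mechanically before any live data exist. A minimal sketch, assuming the system reads CSV input; the field names and value ranges are invented for illustration.

```python
import csv
import random
import string

def artificial_record(i):
    """One artificial record: valid in form, but not drawn from live data."""
    return {
        "account_no": f"{i:06d}",
        "name": "".join(random.choices(string.ascii_uppercase, k=8)),
        "balance": round(random.uniform(0, 10_000), 2),
    }

def write_test_file(path, n_records):
    """Write n artificial records to a CSV file for program or volume testing."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["account_no", "name", "balance"])
        writer.writeheader()
        for i in range(n_records):
            writer.writerow(artificial_record(i))

write_test_file("test_data.csv", 1_000)
```

Live data, by contrast, exercise the values and irregularities users actually produce, which artificial generators tend to miss.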
Activity Network for System Testing
A test plan includes the following activities:
- Prepare test plan.
- Specify conditions for user acceptance testing.
- Prepare test data for program testing.
- Prepare test data for transaction path testing.
- Plan user training.
- Compile/assemble programs.
- Prepare job performance aids.
- Prepare operational documents.
System Testing
The purpose of system testing is to identify and correct errors in the candidate system.
In system testing, performance and acceptance standards are developed. Substandard performance or service interruptions that result in system failure are checked during the test. The following performance criteria are used for system testing:
- Turnaround time is the elapsed time between receipt of the input and availability of the output. In online systems, high-priority processing is handled during peak hours, while low-priority processing is done later in the day or during the night shift. The objective is to identify and evaluate all the factors that might have a bearing on the turnaround time for handling all applications.
- Backup relates to procedures to be used when the system is down. Backup plans might call for the use of another computer. The software for the candidate system must be tested for compatibility with a backup computer.
In case of a partial system breakdown, provisions must be made for dynamic reconfiguration of the system. For example, in an online environment, when the printer breaks down, a provisional plan might call for automatically “dumping” the output to tape until service is restored.
- File protection pertains to storing files in a separate area for protection against fire, flood, or other natural disaster. Plans should also be established for reconstructing files damaged through a hardware malfunction (a checksum-based integrity sketch follows this list).
- The human factor applies to the personnel of the candidate system. During system testing, lighting, air conditioning, noise, and other environmental factors are evaluated along with people’s desks, chairs, CRTs, and so on. Hardware should be designed for human comfort; this is referred to as ergonomics, and it is becoming an extremely important issue in system development.
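For the backup and file-protection points above, one common integrity check is to record a checksum of each master file when the backup copy is made and recompute it after a forced failure and restore. A minimal sketch; the file name and contents are placeholders.

```python
import hashlib

def file_checksum(path, chunk_size=65536):
    """SHA-256 of a file, read in chunks so large master files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in master file for the example.
with open("master_file.dat", "wb") as f:
    f.write(b"account records ...")

at_backup_time = file_checksum("master_file.dat")
# ... force a failure, restore the file from backup, then verify:
after_recovery = file_checksum("master_file.dat")
assert at_backup_time == after_recovery, "file integrity lost during recovery"
```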
Types of System Tests
System testing consists of the following steps:
- Program(s) testing.
- String testing.
- System testing.
- System documentation.
- User acceptance testing.
Quality Assurance
Quality assurance defines the objectives of the project and reviews the overall activities so that errors are corrected early in the development process. Steps are taken in each phase to ensure that there are no errors in the final software.
Quality Assurance Goals in the Systems Life Cycle
The software life cycle includes various stages of development, and each stage has its own quality assurance goals. The goals and their relevance to the quality assurance of the system are summarized next.
Quality Factors Specification
The goal of this stage is to define the factors that contribute to the quality of the candidate system. Several factors determine the quality of a system (a simple scoring sketch follows this list):
- Correctness – the extent to which a program meets system specifications and user objectives.
- Reliability – the degree to which the system performs its intended functions over time.
- Efficiency – the amount of computer resources required by a program to perform a function.
- Usability – the effort required to learn and operate a system.
- Maintainability – the ease with which program errors are located and corrected.
- Testability – the effort required to test a program to ensure its correct performance.
- Portability – the ease of transporting a program from one hardware configuration to another.
- Accuracy – the required precision in input editing, computations, and output.
- Error tolerance – error detection and correction versus error avoidance.
- Expandability – the ease of adding to or expanding the existing database.
- Access control and audit – control of access to the system and the extent to which that access can be audited.
- Communicativeness – how descriptive or useful the inputs and outputs of the system are.
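One way to make these factors operational in a review is a simple scoring checklist. The sketch below is illustrative only; the 1–5 scale and the equal weighting of factors are assumptions, not prescribed values.

```python
QUALITY_FACTORS = [
    "correctness", "reliability", "efficiency", "usability",
    "maintainability", "testability", "portability", "accuracy",
    "error tolerance", "expandability", "access control and audit",
    "communicativeness",
]

def quality_score(ratings):
    """Average the reviewer's 1-5 ratings, flagging any factor left unrated."""
    missing = [f for f in QUALITY_FACTORS if f not in ratings]
    if missing:
        raise ValueError(f"unrated factors: {missing}")
    return sum(ratings[f] for f in QUALITY_FACTORS) / len(QUALITY_FACTORS)

ratings = {f: 4 for f in QUALITY_FACTORS}  # illustrative ratings
ratings["usability"] = 3
print(f"overall quality score: {quality_score(ratings):.2f} / 5")
```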
Levels of Quality Assurance
Quality assurance specialists use three levels of quality assurance.
- Testing a system to eliminate errors.
- Validation to check the quality of software in both simulated and live environments.
- Certification that the program or software package is correct and conforms to standards.
Implementation
Implementation is the process of converting a new or revised system design into an operational one. Conversion is one aspect of implementation; the others are the post-implementation review and software maintenance.
There are three types of implementation:
- Implementation of a computer system to replace a manual system.
- Implementation of a new computer system to replace an existing one.
- Implementation of a modified application to replace an existing one.
Conversion
Conversion means changing from one system to another. The objective is to put the tested system into operation while holding costs, risks, and personnel irritation to a minimum. It involves:
(a) Creating computer-compatible files (a minimal file-conversion sketch follows this list),
(b) Training the operating staff, and
(c) Installing terminals and hardware.
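A minimal sketch of step (a), assuming the legacy records live in a fixed-width text file and the new system expects CSV; the field positions and file names are invented for illustration.

```python
import csv

# Hypothetical fixed-width layout of the legacy master file:
# cols 0-5 account number, 6-25 customer name, 26-35 balance.
def parse_legacy_line(line):
    return {
        "account_no": line[0:6].strip(),
        "name": line[6:26].strip(),
        "balance": line[26:36].strip(),
    }

def convert_file(legacy_path, new_path):
    """Rewrite the legacy fixed-width file as CSV for the new system."""
    with open(legacy_path) as src, open(new_path, "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=["account_no", "name", "balance"])
        writer.writeheader()
        for line in src:
            if line.strip():  # skip blank lines
                writer.writerow(parse_legacy_line(line))

# Tiny stand-in legacy file so the example runs end to end.
with open("legacy_master.txt", "w") as f:
    f.write("000123" + "JOHN DOE".ljust(20) + "1500.00".rjust(10) + "\n")

convert_file("legacy_master.txt", "new_master.csv")
```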
Activity Network for Conversion
The activities of conversion are as follows.
- Conversion begins with a review of the project plan, the system test documentation, and the implementation plan. The parties involved are the user, the project team, programmers, and operators.
- The conversion portion of the implementation plan is finalized and approved.
- Files are converted.
- Parallel processing between the existing and the new systems is initiated (a run-comparison sketch appears below).
- Results of computer runs and operations for the new system are logged on a special form.
- Assuming no problems, parallel processing is discontinued. Implementation results are documented for reference.
- Conversion is completed. Plans for the post-implementation review are prepared.
Following the review, the new system is officially operational.
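During the parallel run, one simple check is to compare the two systems’ output files record by record and log every mismatch on the special form. A minimal sketch; the file names and record format are illustrative.

```python
def compare_parallel_runs(old_path, new_path):
    """Report line-by-line differences between old- and new-system output."""
    with open(old_path) as old, open(new_path) as new:
        old_lines, new_lines = old.readlines(), new.readlines()
    diffs = [
        (i, o.rstrip(), n.rstrip())
        for i, (o, n) in enumerate(zip(old_lines, new_lines), start=1)
        if o != n
    ]
    if len(old_lines) != len(new_lines):
        diffs.append(("record count", len(old_lines), len(new_lines)))
    return diffs

# Stand-in output files from one day's parallel run.
with open("old_system_run.txt", "w") as f:
    f.write("A,100\nB,200\n")
with open("new_system_run.txt", "w") as f:
    f.write("A,100\nB,205\n")

for diff in compare_parallel_runs("old_system_run.txt", "new_system_run.txt"):
    print(diff)  # each mismatch would be logged on the conversion form
```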
Post-Implementation Review
Its objective is to evaluate the system in terms of how well performance meets stated objectives. The study begins with the review team, which gathers requests for evaluation. The team prepares a review plan around the type of evaluation to be done and the time frame for its completion. An overall plan covers the following areas.
- Administrative plan – Review area objectives, operating costs, actual operating performance, and benefits.
- Personnel requirements plan – Review performance objectives and training performance to date.
- Hardware plan – Review of performance specifications.
- Documentation review plan – Review the system development effort.
Maintenance
Maintenance is actually the implementation of the post-implementation review plan.
It can be classified as follows.
- Corrective maintenance means repairing processing or performance failures or making changes because of previously uncorrected problems or false assumptions.
- Adaptive maintenance means changing the program function.
- Perfective maintenance means enhancing the performance or modifying the program(s) to respond to the user’s additional or changing needs.
Out of these three types of maintenance, more time and money are spent on perfective than on corrective and adaptive maintenance together.