Thursday, December 4, 2008

COMPUTERISED SYSTEM VALIDATION - Part 9

Prospective Validation - Development Testing Phase
The system developer should conduct formal development testing prior to the deployment phase. Development testing includes:
• Unit/Module testing.
• Integration testing.
• Interface testing.
• System testing.
For small computerised systems it may be appropriate to consolidate or omit some of the above test activities; the justification for consolidating or omitting any test activity should be documented in the validation plan. Testing is carried out according to pre-approved test plans and test cases. Due account should be taken of any test requirements identified by the validation plan, supplier audit and design review, and testing should be conducted against approved specifications. Progression from one test phase to another should not occur without satisfactory resolution of any adverse test results.
Testing should be conducted in controlled test environments that are documented and verified in order to confirm that the hardware and software used in testing are appropriate representations of the final operating environment. Confirmation that the test environment is indeed controlled is usually achieved by conducting an installation qualification (IQ) similar to that conducted as part of the deployment phase. Simulation and test equipment should be calibrated/tested, documented and verified prior to use. Test data-sets should be reviewed and approved prior to use.
It is important to recognise that computerised systems and software not specifically written in-house should also be tested prior to use; development testing in these circumstances is the responsibility of the supplier (system developer). Products released by a supplier pending final evaluation (beta testing) should not be used. Initial and subsequent product releases should only be used once they have been proven over a period of time by a user community to be fit for purpose. New releases should be evaluated through pilot and/or additional system testing, on the premise that they are not yet fully market tested.
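Purely as an illustration of the first two levels listed above, the sketch below (Python with pytest; the dose-calculation functions are invented for the example and are not drawn from any real system) contrasts a unit/module test with an integration test:

    # Minimal sketch, assuming pytest. The two "modules" are defined inline
    # purely for illustration; a real system would import its own modules.

    def calculate_dose(weight_kg, mg_per_kg):
        """Hypothetical module under test."""
        return weight_kg * mg_per_kg

    def record_dose(dose, log):
        """Hypothetical second module that the first must work with."""
        log.append(f"dose={dose}")

    def test_calculate_dose_unit():
        # Unit/module testing: one module exercised in isolation.
        assert calculate_dose(weight_kg=70, mg_per_kg=2) == 140

    def test_dose_recording_integration():
        # Integration testing: two modules exercised working together.
        log = []
        record_dose(calculate_dose(weight_kg=70, mg_per_kg=2), log)
        assert log == ["dose=140"]

Interface and system testing follow the same pattern at progressively larger scope, against the interface and system specifications respectively.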
Test Plan
Test plans are used to define and justify the extent of, and approach to, testing. Group or individual test cases are identified, together with any interdependencies. Note: the person who develops the system should not be the person who develops the tests, nor the person who executes them. It is important that testing is objective, challenging and impartial. The requirements traceability matrix (RTM) or an equivalent mechanism should be used to map tests to design specifications.
The test approach should:
• Verify the normal operation.
• Challenge the normal operation across the design range.
• Challenge boundary limits.
• Challenge failure modes.
• Include power failure tests.
Test plans may be embedded in validation plans, combined with test cases, or
exist as separate documents. Test plans should be reviewed and approved before the defined testing begins.
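The kinds of challenge listed for the test approach can be illustrated with a minimal sketch (Python with pytest; the temperature-entry function and its 2-8 degC design range are assumptions made up for the example, not requirements of any system):

    import pytest

    def accept_temperature(value_c):
        """Hypothetical function with a 2-8 degC design range."""
        if not 2.0 <= value_c <= 8.0:
            raise ValueError("temperature out of range")
        return value_c

    def test_normal_operation():
        # Verify normal operation.
        assert accept_temperature(5.0) == 5.0

    @pytest.mark.parametrize("value", [2.0, 3.5, 6.5, 8.0])
    def test_across_design_range_and_boundaries(value):
        # Challenge operation across the design range and at the boundary limits.
        assert accept_temperature(value) == value

    @pytest.mark.parametrize("value", [1.9, 8.1])
    def test_failure_modes(value):
        # Challenge failure modes: out-of-range values must be rejected.
        with pytest.raises(ValueError):
            accept_temperature(value)

Power failure tests, by contrast, are exercised against the running system and its environment rather than in a script of this kind.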
Test Case
Test cases provide the methods by which tests are conducted and the acceptance criteria against which the test results are assessed. Test cases should not introduce new requirements or specifications. Each test case should include:
• Objective of test.
• Cross-reference to the part of the system specification that is being tested.
• Any prerequisites such as calibration or test data or other tests that should be completed beforehand.
• A reproducible test procedure.
• Details of data to be recorded during test or test evidence to be appended (e.g. screen prints).
• Acceptance criteria.
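The elements listed above amount to a structured record for each test case. A minimal sketch of such a record (Python; the field names are illustrative only, not a prescribed format):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TestCase:
        case_id: str                 # e.g. "TC-014" (illustrative identifier)
        objective: str               # objective of the test
        spec_reference: str          # cross-reference to the specification section being tested
        prerequisites: List[str]     # calibration, test data or tests to be completed beforehand
        procedure: List[str]         # reproducible, step-by-step test procedure
        data_to_record: List[str]    # data to record or evidence to append (e.g. screen prints)
        acceptance_criteria: str     # criteria against which the test results are assessed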
The detail of necessary testing will require some judgement. Test cases should provide 100% requirements coverage of the URS/Functional Specification. This can be demonstrated by showing within the RTM that every URS/Functional Specification section has an associated test case. Full branch/decision/condition coverage testing within the functionality supporting these requirements is not usually practical. Testing should nevertheless aim to challenge the system and demonstrate that it is fit for purpose. It would be expected that all product-quality-related data inputs/outputs and product-quality-related decision points are tested.
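The coverage argument is mechanical once the RTM exists: every URS/Functional Specification section must be linked to at least one test case. A minimal sketch of such a check (the section and test case identifiers are invented for the example):

    # Keys are URS/Functional Specification sections; values are linked test cases.
    rtm = {
        "URS-4.1": ["TC-001"],
        "URS-4.2": ["TC-002", "TC-003"],
        "URS-4.3": [],    # no test case linked yet
    }

    uncovered = [section for section, cases in rtm.items() if not cases]
    if uncovered:
        print("Sections without an associated test case:", uncovered)
    else:
        print("100% requirements coverage: every section has at least one test case.")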
Test Results
All raw data and derived results obtained should be indelibly documented, reviewed for integrity and against acceptance criteria, and subsequently approved. The use of ticks, crosses, 'OK' or other abbreviations to indicate acceptance is not generally accepted by regulatory authorities and should be avoided. The test results should include:
• Actual test results and observations.
• A statement (pass or fail) as to whether or not the acceptance criteria have been met.
• Dated signature for each person performing the test.
• Dated signature for the person approving test results.
• References to the incident log for test failures or observations.
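Taken together, the items above form a result record for each executed test case. A minimal sketch (Python; the field names are illustrative only, and in practice these are controlled records, not code):

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TestResult:
        case_id: str                        # test case executed
        observations: str                   # actual test results and observations
        passed: bool                        # whether the acceptance criteria have been met
        tester_signature: str               # person performing the test
        tester_date: str                    # e.g. "2008-12-04"
        approver_signature: Optional[str]   # person approving the test results
        approver_date: Optional[str]        # date of approval
        incident_refs: List[str]            # incident log references for failures or observations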
Inconsequential issues raised with test cases (e.g. typographical errors not affecting the integrity of testing) can be annotated on the test case as cosmetic changes, signed and dated. Such issues are not logged as test failures, but the nature of the annotation should be registered in the incident log so that its resolution can be tracked.
Test Failures
The person approving the test cases should consider the consequences of a failure for the validity of testing already completed. Further testing should be performed or repeated where necessary. All test failures should be recorded in the incident log.
If the analysis of a test failure results in amendment to the test case or associated specification, then the remedial action should be performed under change control. Minor deviations from the acceptance criteria, where there is no risk to GxP, may be accepted with the approval of the User and QA/Validation. Such concessions should be recorded in the incident log and
justified in the test report.
Resolution of test failures associated with low risk functions, if indicated by the requirement categorisation in the URS, may be deferred. GxP requirements are not considered low risk.
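The deferral rule described above reduces to a simple decision: resolution may only be deferred for low-risk functions, and GxP-related requirements are never treated as low risk. A minimal sketch (the risk categories are assumptions for illustration):

    def may_defer_resolution(urs_risk_category, gxp_requirement):
        """Return True only if resolution of the test failure may be deferred."""
        if gxp_requirement:
            return False                       # GxP requirements are not considered low risk
        return urs_risk_category == "low"      # deferral only for low-risk functions

    assert may_defer_resolution("low", gxp_requirement=False) is True
    assert may_defer_resolution("low", gxp_requirement=True) is False
    assert may_defer_resolution("high", gxp_requirement=False) is False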
Test Report
Test reports should be prepared in order to:
• Collate evidence of testing (i.e. raw data).
• Conclude each phase of testing.
• Authorise any subsequent phase of testing.
The results of testing are analysed and summarised in a test report that should state:
• System identification (program, version, configuration).
• Identification of test cases and supporting raw data.
• The actions taken to resolve incident log issues, with justification.
• Whether the results meet the acceptance criteria.
The test report should not exclude any test data. The test report may be combined with the test results.
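Building on the illustrative TestResult record sketched under Test Results, the summary portion of a test report can be assembled mechanically, leaving the analysis, justification and conclusions to the author. A minimal sketch:

    def summarise(system_id, results):
        """Summarise illustrative TestResult records for a test report."""
        lines = [f"Test report for {system_id}"]
        failed = []
        for r in results:
            verdict = "PASS" if r.passed else "FAIL"
            refs = ", ".join(r.incident_refs) or "none"
            lines.append(f"{r.case_id}: {verdict} (incident log refs: {refs})")
            if not r.passed:
                failed.append(r.case_id)
        lines.append("Acceptance criteria met for all test cases." if not failed
                     else "Acceptance criteria not met for: " + ", ".join(failed))
        return "\n".join(lines)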
