3.5.7 Area 7 – Quality assurance, control and assessment
Step 7.1 – Implement a comprehensive QA programme
A comprehensive QA or quality management programme is needed to ensure the accuracy, reliability and reproducibility of test results. Essential elements of a QA system include:
- SOPs, training and competency assessment (Area 9);
- instrument verification and maintenance (Area 3);
- method validation or verification (Area 2);
- lot-to-lot testing (Area 4);
- internal QC;
- external quality assessment (EQA); and
- quality indicator monitoring and continuous quality improvement.
A comprehensive discussion of the essential elements of a QA system can be found in the Practical Manual on TB laboratory strengthening, 2022 update (36). This section describes QC, EQA and quality indicator monitoring.
Step 7.2 – Establish and monitor QCs
QC monitors activities related to the analytical phase of testing, with the goal of detecting errors due to test failure, environmental conditions or operator performance before results are reported. Internal QC typically involves examining control materials or known substances at the same time and in the same manner as patient specimens, to monitor the accuracy and precision of the analytical process. If QC results are not acceptable (e.g. positive results are obtained on negative controls), patient results must not be reported.
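To illustrate this gating rule, the minimal sketch below (in Python, with an assumed record layout, expected control values and function names that are not part of any instrument's software) releases patient results only when the run's controls give their expected results.

```python
# Minimal sketch of an internal QC gate: patient results are released only
# when the run's control materials give their expected results.
# Record layout, expected values and function names are illustrative assumptions.

EXPECTED_CONTROLS = {
    "negative_control": "MTBC not detected",
    "positive_control": "MTBC detected",
}

def controls_acceptable(control_results: dict) -> bool:
    """Return True only if every control gave its expected result."""
    return all(control_results.get(name) == expected
               for name, expected in EXPECTED_CONTROLS.items())

def release_run(control_results: dict, patient_results: list) -> list:
    """Release patient results only when internal QC passes; otherwise hold the run."""
    if not controls_acceptable(control_results):
        # QC failure: patient results must not be reported; investigate and repeat.
        raise RuntimeError("QC failure: hold patient results and investigate")
    return patient_results
```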
Because of the complexity of the targeted NGS workflow and the need for multiple reagent kits and processes, it is particularly important to conduct quality checks after each of the main steps in the process. The following should be assessed (a simple sketch of automated threshold checks follows the list):
- specimens – the source, quantity and quality of the sample (e.g. sputum specimen);
- DNA extraction – the quality and quantity of the extracted DNA;
- library preparation – the quality and quantity of the generated library;
- sequencing – the quality of the run and base calling;
- sequence assembly and analysis – the proportion of coverage, depth of coverage and quality scores of the mapping; and
- variant calling – the variant call quality score, strand bias and allele frequencies.
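The sketch below shows how such checkpoint metrics could be screened automatically against thresholds. The metric names and cut-off values are illustrative assumptions, not recommended acceptance criteria.

```python
# Illustrative screening of targeted NGS QC metrics against thresholds.
# All threshold values are placeholders, not recommended acceptance criteria.

RUN_THRESHOLDS = {
    "breadth_of_coverage": 0.95,   # minimum fraction of targets covered
    "mean_depth": 100,             # minimum mean depth of coverage
    "mean_mapping_quality": 30,    # minimum mean mapping quality score
}

VARIANT_THRESHOLDS = {
    "min_call_quality": 30,        # minimum variant call quality score
    "max_strand_bias": 0.9,        # maximum proportion of reads on one strand
    "min_allele_frequency": 0.05,  # minimum allele frequency to report
}

def failed_run_metrics(run_metrics: dict) -> list:
    """Return the run-level metrics that fall below their minimum threshold."""
    return [name for name, minimum in RUN_THRESHOLDS.items()
            if run_metrics.get(name, 0) < minimum]

def variant_qc_flags(variant: dict) -> list:
    """Return QC flags for a single variant call."""
    flags = []
    if variant.get("call_quality", 0) < VARIANT_THRESHOLDS["min_call_quality"]:
        flags.append("low_call_quality")
    if variant.get("strand_bias", 0) > VARIANT_THRESHOLDS["max_strand_bias"]:
        flags.append("strand_bias")
    if variant.get("allele_frequency", 0) < VARIANT_THRESHOLDS["min_allele_frequency"]:
        flags.append("low_allele_frequency")
    return flags
```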
Step 7.3 – Develop an EQA programme
An EQA programme includes quality and performance indicator monitoring, proficiency testing, re-checking or making comparisons between laboratories, regular on-site supportive supervision and timely feedback, corrective actions and follow-up. On-site supervision should be prioritized at poorly performing sites identified through proficiency testing, monthly monitoring of performance indicators or site assessments. Failure to enrol in a comprehensive EQA programme is a missed opportunity to identify and correct problems that affect the quality of testing.
The governance structure of an EQA programme at the national and supervisory levels is likely to vary by country. In many countries, implementation of national policies and procedures is coordinated at the central level by the MoH, NTP or NTRL. In some settings, particularly in large countries, these activities may be decentralized to the regional level. Commonly, the central level provides policies, guidance and tools for standardized QA activities, whereas the regional and district levels operationalize and supervise the QA activities and monitor adherence to the procedures. In turn, data collected at the testing sites are reviewed regionally and centrally, and are used to inform and update policies and procedures.
Proficiency testing
For many laboratory tests, the EQA programme includes proficiency testing to determine the quality of the results generated at the testing site. Proficiency testing compares the results generated at the testing site with reference results to determine concordance. The purpose of such testing is to identify sites with serious testing deficiencies, target support to the most poorly performing sites and evaluate the proficiency of users following training.
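A minimal sketch of how panel results might be scored is shown below; the panel identifiers, result strings and simple percentage scoring rule are assumptions, and real programmes may use different scoring schemes.

```python
# Minimal sketch of scoring a proficiency-testing panel: the site's results
# are compared item by item with the reference results.
# Panel identifiers, result strings and the scoring rule are illustrative.

def panel_concordance(reference: dict, site: dict) -> float:
    """Percentage of panel items on which the site result matches the reference."""
    matches = sum(1 for item, expected in reference.items()
                  if site.get(item) == expected)
    return 100.0 * matches / len(reference)

reference_panel = {"PT-01": "RIF resistant",
                   "PT-02": "RIF susceptible",
                   "PT-03": "MTBC not detected"}
site_results = {"PT-01": "RIF resistant",
                "PT-02": "RIF resistant",      # discordant with the reference
                "PT-03": "MTBC not detected"}

print(panel_concordance(reference_panel, site_results))  # ~66.7, would prompt follow-up
```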
Re-checking of samples
Comparisons between laboratories can also be used as an external assessment of quality. This usually involves the retesting of samples at a higher level laboratory. Many TB laboratories are familiar with this approach because blinded re-checking is a routine method of EQA for AFB smear microscopy.
On-site supervisory visits
On-site supervisory visits are especially critical during the early stages of implementing a new test because they provide motivation and support to staff. Supervisory visits are opportunities to provide refresher training, mentoring, troubleshooting advice and technical updates. On-site assessments should be documented using standardized checklists, to ensure consistency and completeness of information, enable monitoring of trends, and allow follow-up on recommendations and corrective actions. An on-site supervisory programme requires substantial planning and resources (both financial and human).
Step 7.4 – Monitor and analyse quality indicators
Routine monitoring of quality indicators, also known as performance indicators, is a critical element of assuring the quality of any diagnostic test. In addition to the general laboratory quality indicators recommended in the 2022 update to the practical manual on TB laboratory strengthening (36), quality indicators specific to the new diagnostic should be adapted from international guidelines or developed from scratch. Quality indicators for NGS-based DST have been developed and are described in the WHO implementation manual (30). The indicators should be collected using a standardized format and analysed monthly or quarterly, disaggregated by test.
Programmes should establish a baseline for all indicators. Targets should be set for all indicators monitored, and any unexplained change in quality indicators (e.g. an increase in error rates or a change in MTBC positivity) should be documented and investigated. A standard set of quality indicators should be used for all sites conducting a particular test, to allow for comparison between sites.
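As an illustration of how routinely collected counts could be turned into monitored indicators and compared with a baseline, consider the sketch below; the indicator names, baseline values and the simple deviation rule are assumptions rather than recommended criteria.

```python
# Illustrative monthly indicator calculation with flagging against a baseline.
# Indicator names, baseline values and the deviation rule are assumptions.

BASELINE = {"error_rate": 0.03, "mtbc_positivity": 0.15}
MAX_RELATIVE_CHANGE = 0.5   # flag an indicator that moves >50% from baseline

def monthly_indicators(tests: int, errors: int, mtbc_positive: int) -> dict:
    """Compute two example quality indicators from monthly aggregate counts."""
    return {"error_rate": errors / tests,
            "mtbc_positivity": mtbc_positive / tests}

def flag_unexplained_changes(indicators: dict) -> list:
    """Return the indicators whose value deviates markedly from the baseline."""
    return [name for name, value in indicators.items()
            if abs(value - BASELINE[name]) / BASELINE[name] > MAX_RELATIVE_CHANGE]

month = monthly_indicators(tests=240, errors=18, mtbc_positive=35)
print(flag_unexplained_changes(month))  # ['error_rate'] -> document and investigate
```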
The continuous quality improvement process is a cyclical, continuous, data-driven approach to improving the quality of diagnostic testing. The process relies on a cycle of monitoring quality indicators, planning interventions to correct or improve performance, and implementing the interventions. Quality indicators should be reviewed by the laboratory manager and must always be linked to corrective actions if any unexpected results or trends are observed. Critical to the process is documentation of corrective actions, and subsequent improvement and normalization of laboratory indicators following the corrective actions.