3.5 Steps and processes for implementing a new diagnostic test

As an initial step in implementing a new diagnostic test, countries should review WHO policies, guidance and reports, as well as any available implementation guide from WHO, the GLI, the Foundation for Innovative New Diagnostics (FIND) and implementing partners. Particular attention should be paid to WHO policies and recommendations for the use of the test, the test's limitations and the interpretation of test results.

The key steps in implementing a new test are listed in Box 3.1. Critical early steps include defining the intended use of the new test, developing a costed implementation plan, building the infrastructure (instruments and facilities) and developing the human resources needed for the new test. The following sections organize the key steps into 10 main areas:

  • Area 1 - Policies, budgeting and planning (Section 3.5.1)
  • Area 2 - Regulatory issues (Section 3.5.2)
  • Area 3 - Equipment (Section 3.5.3)
  • Area 4 - Supply chain (Section 3.5.4)
  • Area 5 - Procedures (Section 3.5.5)
  • Area 6 - Digital data (Section 3.5.6)
  • Area 7 - Quality assurance, control and assessment (Section 3.5.7)
  • Area 8 - Recording and reporting (Section 3.5.8)
  • Area 9 - Human resource training and competency assessment (Section 3.5.9)
  • Area 10 - Monitoring and evaluation (Section 3.5.10).

The rest of this section discusses the steps in each of these areas.

Area 1 – Policies, budgeting and planning

1.1 Establish a technical working group (TWG) and define roles and responsibilities

1.2 Review WHO policies and available technical and implementation guides

1.3 Define immediate and future purposes of the test

1.4 Update national diagnostic algorithm and guidelines

1.5 Perform a situational analysis, including biosafety

1.6 Develop a costed operational plan for phased implementation

Step 1.1 - Establish a TWG and define roles and responsibilities

A TWG comprising representatives from all key stakeholders should be established, to guide the implementation process of the new diagnostic tests and technologies. The TWG's establishment should be led by the MoH, NTP and NTRL. The TWG should be mandated to advise the MoH, NTP and NTRL on test implementation; develop action plans; oversee the test's implementation; and assess the impact and success of the test's introduction. Representatives from the following key stakeholders may be invited to participate:

  • MoH, NTP, NTRL(s) and regional laboratories;
  • research institutes or other organizations with experience using the new diagnostic test;
  • implementing partners, including those outside of TB;
  • peripheral laboratories and clinical facilities that will participate in the testing;
  • regulatory bodies;
  • data management or information technology (IT) experts;
  • specimen transport systems logisticians for centralized or regional testing (TB and non-TB);
  • community representatives; and
  • clinical staff.

A suitably qualified individual should lead the team; for example, a national TB laboratory officer or laboratory focal person from the NTP or NTRL. An integral component of the planning process should be defining the roles and responsibilities of members of the implementation team, and those of external partners and donors.

Step 1.2 - Review WHO policies and available technical and implementation guides

The TWG members should familiarize themselves with the contents of the relevant WHO policies, guidance and reports, as well as any available implementation guides from WHO, GLI, FIND and implementing partners. Particular attention should be paid to WHO policies and recommendations on using the test to aid in the diagnosis of TB or detection of drug resistance, the test's limitations and interpretation of test results.

Step 1.3 - Define immediate and future purposes of the test

Programmes must clearly define the purpose, scope and intended use of the new diagnostic test because that will affect many aspects of the implementation plan. For example, the laboratory system or network needed to provide timely results for patient-care decisions is quite different from that needed to conduct a once-a-year drug-resistance survey.

Step 1.4 - Update national diagnostic algorithm and guidelines

The TWG should lead a review of existing national diagnostic algorithms, taking into consideration the needs of TB patients, clinical needs, country epidemiology, existing testing algorithms, sample referral systems and other operational considerations, and make recommendations to the MoH and NTP. Section 4 describes model algorithms for the use of WHO-recommended tests in detail.

The TWG should also lead a review of guidelines for the use of the new diagnostic test results in patient-care decisions. Clinical guidelines should provide clear guidance to clinicians, nurses and health care professionals on the intended use of the new diagnostic test; outline target patient populations; explain how to order the test; and explain how to interpret, use and communicate test results.

Step 1.5 - Perform a situational analysis, including biosafety

To inform plans for implementing the new diagnostic test, a situational analysis of the existing laboratory network and capacities should be conducted. For most tests, key elements to be assessed include regulatory requirements; laboratory and network infrastructure; existing sample transportation system; staff skills, expertise and experience; IT capabilities; diagnostics connectivity; availability and adequacy of SOPs; supply chain; financial resources; and QA systems. The assessment should also determine needs for revision of training, recording and reporting forms, and tools for monitoring and evaluation. Of particular relevance is the specimen referral system. A checklist for evaluating a specimen referral system can be found in the relevant GLI publication (35).⁴⁷

For the prospective testing site, detailed assessments of the laboratory's readiness with respect to physical facilities, staffing and infrastructure will be needed. Because laboratory-acquired TB infection is a well-recognized risk for laboratory workers, undertaking a risk assessment for conducting the new test in the prospective site is critical, to ensure that the required biosafety measures are in place before the new test is implemented (40).⁴⁸

Step 1.6 - Develop a costed operational plan for phased implementation

The final step in this area is to develop a detailed, costed, prioritized action plan for phased implementation, with targets and timeline. Often, implementation of a new test must overcome potential obstacles such as cost of instruments, ancillary equipment and consumables; requirements for improving or establishing the necessary laboratory and network infrastructure (e.g. a specimen transport system); the need for specialized, skilled and well-trained staff; the need for expert technical assistance; maintenance of confidentiality of patient information; and establishment of a QA system.

Successful implementation of the plan will require financial and human resource commitments from the MoH or NTP, with possible support of implementing partners. A budget should be developed to address activities in collaboration with key partners. Budget considerations are summarized in Annex 1.

Area 2 – Regulatory issues

2.1 Determine importation requirements

2.2 Conduct country validation and verification studies as required

2.3 Complete national regulatory processes

Step 2.1 - Determine importation requirements

National authorities should be consulted to determine relevant processes to be followed for importation. Countries should work closely with manufacturers and authorized service providers of equipment and consumables, to determine importation and registration requirements, and to initiate country verifications, if required.

Step 2.2 - Conduct country validation and verification studies as required

Validation includes conducting large-scale evaluation studies to measure performance of the test if there is any possibility that country-specific factors (e.g. prevalence of different mutations or microorganism strains) may cause performance to deviate substantially from the manufacturer's results or other evaluation studies. Validation is also required before commencing testing of clinical specimens in cases where laboratories perform non-standard or modified methods, use tests outside their intended scope (e.g. specimens for which the test has not been validated) or use methods developed in-house. These studies, in addition to testing a well-characterized panel of known positive and negative samples, may include prospectively testing the current gold standard and the new test in parallel on clinical specimens (41).⁴⁹

Verification includes small-scale method evaluation studies in cases where commercial tests are used according to the manufacturer's intended use. This usually involves testing a well-characterized panel of known positive and negative samples (in a blinded fashion) in line with requirements for national or international accreditation schemes (41).

Validation studies are an essential part of the WHO review process and development of recommendations for the use of a new test. Once large-scale validation studies have been published and a test's target performance characteristics have been established, laboratories that are implementing the method do not need to repeat such large-scale studies. Instead, implementing laboratories should conduct small-scale verification studies (42)⁵⁰ to demonstrate that the laboratory can achieve the same performance characteristics that were obtained during the validation studies when using the test as described in those validation studies, and that the method is suitable for its intended use in the population of patients being tested. Countries must make their own determination on the need for verification, based on national guidelines and accreditation requirements.
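The arithmetic behind a small-scale verification study is simple, as shown in the minimal sketch below; the panel composition and figures are invented for illustration, and national or accreditation requirements may specify different panel sizes and acceptance criteria.

  # Minimal sketch (Python): summarizing a verification panel by comparing new-test
  # results against the known (reference) status of each sample. Panel data are invented.

  def verification_summary(panel):
      """panel: list of (reference_positive, new_test_positive) boolean pairs."""
      tp = sum(1 for ref, new in panel if ref and new)
      fn = sum(1 for ref, new in panel if ref and not new)
      tn = sum(1 for ref, new in panel if not ref and not new)
      fp = sum(1 for ref, new in panel if not ref and new)
      return {
          "sensitivity": tp / (tp + fn) if (tp + fn) else None,
          "specificity": tn / (tn + fp) if (tn + fp) else None,
          "agreement": (tp + tn) / len(panel) if panel else None,
      }

  # Example: 20 known positives (19 detected) and 10 known negatives (all correct).
  panel = [(True, True)] * 19 + [(True, False)] + [(False, False)] * 10
  print(verification_summary(panel))
  # {'sensitivity': 0.95, 'specificity': 1.0, 'agreement': 0.9666666666666667}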

Step 2.3 - Complete national regulatory processes

Countries should work closely with the relevant government authorities, manufacturers and authorized service providers to meet the requirements of the national regulatory authority. Sufficient time must be allowed to submit the application and provide any required supplementary evidence.

Area 3 - Equipment

3.1 Select, procure, install and set up equipment

3.2 Instrument verification and maintenance

3.3 Assess site readiness and ensure a safe and functional testing site

Step 3.1 - Select, procure, install and set up equipment

An essential step in the implementation process is selecting appropriate instruments that fit the needs of the clinical or microbiological laboratory and can be used to perform the new diagnostic test. The most suitable instrument for a country will depend on the intended use of the diagnostic test. In general, it is important to choose an instrument that is broadly available and has good manufacturer supply distribution and support.

To bring cost-efficiency to testing services, a priority should be to consider the integration of TB testing on existing platforms, in locations where integrated testing is feasible (43).⁵¹ In settings where TB diagnostic services are standalone and there is a high workload for TB testing, dedicated instruments may be preferred.

Whichever instrument is selected, expert setup will generally be required, with the manufacturer's engineers or authorized service providers performing the installation. Some of the moderate complexity automated NAATs may require infrastructure to be modified to accommodate the instrumentation. Potential setup complexities include power supply and backup options, electrical and network connections, environmental conditions for the laboratory (e.g. maximum temperature), biosafety and ventilation requirements, computing hardware and software, a maintenance plan (e.g. weekly, monthly or pre-run checks), equipment warranty and necessary training.

Step 3.2 - Instrument verification and maintenance

All instruments must be documented as being "fit for purpose" through verification with known positive and negative materials before starting to test clinical specimens. Instrument verification is conducted at installation, after service or calibration, or after moving instruments.

Many tests rely on precision instruments that require regular preventive maintenance, and ad hoc servicing and maintenance. The end-user should perform regular preventive maintenance, to ensure good performance of the instrument. Suppliers or authorized service providers should perform on-request maintenance, as necessary. Countries should take advantage of any available extended warranties or service contracts to ensure continued functioning of the instruments.

Step 3.3 - Assess site readiness and ensure a safe and functional testing site

The NTP or NTRL usually determines which sites will conduct diagnostic testing, based on factors such as TB epidemiology, geographical considerations, testing workload, availability of qualified staff, efficiency of referral networks and patient access to services. Each testing site should be evaluated for readiness using a standardized checklist before testing of clinical specimens at the site begins. In addition, existing testing sites should be assessed regularly for safety and operational functionality.

A functional testing site requires testing instruments to be properly positioned in a clean, secure and suitable location. Most instruments will require an uninterrupted supply of power, and appropriate working and storage conditions (e.g. humidity and temperature controlled). A safe environment requires WHO biosafety recommendations for conducting the diagnostic test to be followed in appropriate containment facilities with adequate ventilation; it also requires appropriate personal protective equipment to be used, and biologic waste to be disposed of safely and in accordance with regulations. Failure to provide a functional and safe work environment can affect the quality and reliability of testing.

Area 4 - Supply chain

4.1 Review forecasting, ordering and distribution procedures

4.2 Develop procedures to monitor reagent quality and shelf life

Step 4.1 - Review forecasting, ordering and distribution procedures

Uninterrupted availability of reagents and disposables at the testing site is essential to ensure that technical capacity is built in the early stages of implementation (avoiding long delays between training and availability of reagents and disposables), and to ensure consistent service during routine use. The following measures will be required to ensure uninterrupted supply of reagents and disposables:

  • ensuring that qualified laboratory staff have input into defining the specifications for reagents, consumables and equipment;
  • streamlining importation and in-country distribution procedures to ensure sufficient shelf life of reagents and consumables once they reach testing sites;
  • careful monitoring of consumption rates, tracking of reagent-specific shelf lives and forecasting to avoid expirations or stock-outs (see the sketch after this list);
  • careful planning to ensure that sites have received training and that equipment has been installed ahead of shipment of reagents;
  • ongoing monitoring of all procurement and supply chain steps, to ensure that delays are minimized and that sites receive correct reagents as per the planned schedule; and
  • regular reassessment of purchasing and distribution strategies, to ensure that they are responsive to needs and the current situation.
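The consumption monitoring and forecasting mentioned above can be made concrete with a short calculation, sketched below; the figures, the two-month safety buffer and the three-month procurement lead time are assumptions for illustration, not recommended values.

  # Minimal sketch: estimating months of stock remaining from recent consumption and
  # flagging when an order should be placed. All figures are illustrative.

  def months_of_stock(tests_on_hand, monthly_consumption):
      """Months of stock remaining at the current consumption rate."""
      return tests_on_hand / monthly_consumption if monthly_consumption else float("inf")

  def reorder_needed(tests_on_hand, monthly_consumption, lead_time_months, buffer_months=2):
      """Flag when remaining stock will not cover the procurement lead time plus a safety buffer."""
      return months_of_stock(tests_on_hand, monthly_consumption) <= lead_time_months + buffer_months

  # Example: 600 cartridges on hand, 180 used per month, 3-month procurement lead time.
  print(round(months_of_stock(600, 180), 1))            # 3.3 months of stock remaining
  print(reorder_needed(600, 180, lead_time_months=3))   # True -> place an order now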

Step 4.2 - Develop procedures to monitor reagent quality and shelf life

The shelf life of reagents and their required storage conditions must be considered when designing a procurement and distribution system. Laboratory managers should routinely monitor reagent quality and shelf life to ensure that high-quality test results are generated. Also, the laboratory must establish SOPs for handling the reagents and chemicals used, to ensure both quality and safety.

New-lot testing, also known as lot-to-lot verification, should be performed on new batches of reagents or test kits. Such testing usually involves testing a sample of the new materials and comparing the results to an existing lot of materials with known performance. Preferably, new-lot testing of commercially available test kits is performed at the central (e.g. NTRL) or regional level, thereby ensuring that kits with test failures are not distributed. At the testing site, new-lot testing is needed for reagents prepared at that site; it may also be needed to monitor conditions during transport and storage of test kits within the country. For quality control (QC), WHO recommends using positive and negative controls when testing new batches of reagents.
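The comparison at the heart of new-lot testing can be expressed as a simple concordance check, sketched below; the result labels and the acceptance rule (full concordance on a small panel) are assumptions for illustration, and national QA requirements or manufacturer instructions may define different criteria.

  # Minimal sketch: lot-to-lot verification comparing new-lot results against an existing
  # lot with known performance, tested on the same well-characterized samples.

  def new_lot_acceptable(reference_lot_results, new_lot_results):
      """Each argument is a list of qualitative results obtained on the same panel, in order."""
      if len(reference_lot_results) != len(new_lot_results):
          raise ValueError("Panels must contain the same samples in the same order")
      discordant = [i for i, (ref, new) in enumerate(zip(reference_lot_results, new_lot_results))
                    if ref != new]
      return len(discordant) == 0, discordant

  ref = ["MTB detected", "MTB detected", "MTB not detected", "MTB not detected"]
  new = ["MTB detected", "MTB detected", "MTB not detected", "MTB not detected"]
  print(new_lot_acceptable(ref, new))  # (True, []) -> new lot may be released for use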

Area 5 - Procedures

5.1 Develop SOPs

5.2 Update clinical procedures and strengthen the clinical-laboratory interface

Step 5.1 - Develop SOPs

Based on the intended use or uses of the diagnostic test, procedures must be defined, selected, developed or customized for:

  • identifying patients for whom the test should be performed;
  • collecting, processing, storing and transporting specimens to the testing laboratory;
  • laboratory testing;
  • data analysis, security and confidentiality (see Area 6);
  • process controls (internal QC) and external quality assessment (see Area 7);
  • recording and reporting of results (see Area 8); and 
  • waste management.

A well-defined, comprehensive set of SOPs that addresses all aspects of the laboratory testing process - from sample collection to reporting of results - will be essential, in part because errors at any step can have a significant impact on the quality of testing. Some SOPs will rely on the manufacturer's protocols included with commercial kits, whereas others will need to be developed. SOPs must be made readily available to staff and must be updated regularly.

Step 5.2 - Update clinical procedures and strengthen the clinical-laboratory interface

A comprehensive plan to implement a new diagnostic test must address all relevant parts of the diagnostic cascade, not just what happens in the laboratory. In addition to laboratory-related SOPs, clear clinical protocols and guidance will be needed for selecting patients to be tested, ordering tests, interpreting test results, reporting and making patient-care decisions. Before the introduction of a new diagnostic test or any changes in an existing test, all clinical staff involved in diagnosis and management of patients must be informed about the planned changes, and relevant training must be conducted. Information must also be shared with clinical staff at all referral sites through staff training opportunities and through use of standardized educational materials developed by the NTP.

The rate of ordering of the new test must be monitored, to ensure that the test is being used by the clinical staff at all sites offering the test. Clinical staff at sites with a low or unexpectedly high testing rate may need additional training and sensitization.

Area 6 - Digital data

6.1 Develop the use of digital data and diagnostics connectivity

6.2 Develop procedures for data backup, security and confidentiality

Step 6.1 - Develop the use of digital data and diagnostics connectivity

Many of the latest testing platforms offer the opportunity to use digital data. The implementation plan should consider software and hardware requirements, to take advantage of digital data. "Diagnostics connectivity" refers to the ability to connect diagnostic test devices that produce results in a digital format, in such a way as to transmit data reliably to a variety of users (44).⁵² Key features of the systems are the ability to monitor performance remotely, conduct QA and manage inventory. With remote monitoring, designated individuals can use any internet-enabled computer to access the software, providing an overview of the facilities, devices and commodities in the network. Software can track consumption and inventory to avoid stock-outs and expiring supplies. It can also identify commodity lots or specific instruments with poor performance or abnormal error rates for QA purposes, and provide a pre-emptive service to avoid instrument failure. This approach is a highly cost-effective way to ensure that a diagnostic device network functions properly; it is also useful for reporting and connecting with treatment sites.
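To illustrate what "transmitting data reliably" involves, the sketch below shows the kind of structured result message a connectivity solution might send from a testing device to a central server; every field name, value and the endpoint URL are hypothetical, since each connectivity product defines its own schema and transport.

  # Minimal sketch: a structured test-result message posted to a (hypothetical) central
  # connectivity server. Field names, values and the URL are illustrative assumptions.

  import json
  import urllib.request

  result_message = {
      "site_id": "LAB-014",
      "instrument_serial": "SN-123456",
      "test": "MTB/RIF",
      "specimen_id": "2024-000871",
      "result": "MTB detected, RIF resistance not detected",
      "error_code": None,
      "cartridge_lot": "LOT-77A",
      "tested_at": "2024-05-03T10:42:00Z",
  }

  def send_result(message, url="https://connectivity.example.org/api/results"):
      """POST the result as JSON to the central server (endpoint is hypothetical)."""
      request = urllib.request.Request(
          url,
          data=json.dumps(message).encode("utf-8"),
          headers={"Content-Type": "application/json"},
          method="POST",
      )
      with urllib.request.urlopen(request) as response:
          return response.status

  # send_result(result_message)  # uncomment once a real server endpoint is configured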

Data, results and information updates can also be transmitted automatically to:

  • clinicians and patients, which allows for faster patient follow-up;
  • laboratory information management systems or electronic registers, reducing staff time and the chance of transcription errors, and greatly facilitating monitoring and evaluation processes; and
  • the NTP, to assist with surveillance of disease trends or resistance patterns and rates, and to enhance the capacity of the NTP to generate the data needed for performance indicators of the End TB Strategy.

Step 6.2 - Develop procedures for data backup, security and confidentiality

With any electronic data system, there is a risk of losing testing data. An SOP for regularly backing up data (e.g. to an external drive) is essential, as is an SOP for data retrieval. Also needed are policies and procedures to ensure the security of laboratory data and confidentiality of patient data, in line with national and international regulations. Antivirus software should be installed and kept up to date. Access restrictions should be in place to safeguard confidentiality, protect personal information and prevent data breaches by unauthorized users. Data access and governance policies should be developed and enforced.
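One way of operationalizing such a backup SOP is sketched below: a dated copy of a results database is written to an external drive together with a checksum that can be used during retrieval checks. The file paths are hypothetical and would need to match the laboratory's own setup.

  # Minimal sketch: dated backup of a laboratory results database to an external drive,
  # with a SHA-256 checksum stored alongside for later integrity verification.

  import hashlib
  import shutil
  from datetime import date
  from pathlib import Path

  def back_up(database=Path("/var/lab/results.db"), backup_dir=Path("/mnt/backup_drive")):
      backup_dir.mkdir(parents=True, exist_ok=True)
      target = backup_dir / f"results_{date.today():%Y%m%d}.db"
      shutil.copy2(database, target)                        # copy, preserving timestamps
      checksum = hashlib.sha256(target.read_bytes()).hexdigest()
      target.with_suffix(".sha256").write_text(checksum)    # store checksum for retrieval checks
      return target, checksum

  # back_up()  # typically scheduled daily via the operating system's scheduler (e.g. cron)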

Area 7 - Quality assurance, control and assessment

7.1 Implement a comprehensive QA programme

7.2 Establish and monitor QCs

7.3 Develop an external quality assessment programme

7.4 Monitor and analyse quality indicators

Step 7.1 - Implement a comprehensive QA programme

A comprehensive QA or quality management programme is needed to ensure the accuracy, reliability and reproducibility of test results. Essential elements of a QA system include:

  • SOPs, training and competency assessment (Area 9);
  • instrument verification and maintenance (Area 3);
  • method validation or verification (Area 2);
  • lot-to-lot testing (Area 4);
  • internal QC;
  • external quality assessment (EQA); and
  • quality indicator monitoring and continuous quality improvement.

A comprehensive discussion of the essential elements of a QA system can be found in the GLI practical guide to TB laboratory strengthening (41).⁵³ This section describes QC, EQA and quality indicator monitoring.

Step 7.2 - Establish and monitor QCs

QC monitors activities related to the analytical phase of testing; the aim is to detect errors due to test failure, environmental conditions or operator performance before results are reported. Internal QC typically involves examining control materials or known substances at the same time and in the same manner as patient specimens, to monitor the accuracy and precision of the analytical process. If QC results are not acceptable (e.g. positive results are obtained on negative controls), patient results must not be reported.
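A minimal sketch of such a run-acceptance check is shown below; the control names and expected qualitative results are assumptions chosen for illustration, and in practice they would follow the manufacturer's instructions and the laboratory's SOPs.

  # Minimal sketch: release a run only when internal controls behave as expected.

  def run_acceptable(control_results):
      """control_results: dict mapping control name to the observed qualitative result."""
      expected = {"positive_control": "MTB detected", "negative_control": "MTB not detected"}
      failures = [name for name, want in expected.items() if control_results.get(name) != want]
      return len(failures) == 0, failures

  ok, failures = run_acceptable({"positive_control": "MTB detected",
                                 "negative_control": "MTB detected"})  # possible contamination
  if not ok:
      print("Do NOT report patient results; investigate:", failures)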

Step 7.3 - Develop an EQA programme

An EQA programme includes quality and performance indicator monitoring, proficiency testing, re-checking or making comparisons between laboratories, regular on-site supportive supervision and timely feedback, corrective actions and follow-up. On-site supervision should be prioritized at poorly performing sites identified through proficiency testing, monthly monitoring of performance indicators or site assessments. Failure to enrol in a comprehensive EQA programme is a missed opportunity to identify and correct problems that affect the quality of testing.

The governance structure of an EQA programme at the national and supervisory levels is likely to vary by country. In many countries, implementation of national policies and procedures is coordinated at the central level by the MoH, NTP or NTRL. In some settings, particularly in large countries, these activities may be decentralized to the regional level. Commonly, the central level provides policies, guidance and tools for standardized QA activities, whereas the regional and district levels operationalize and supervise the QA activities and monitor adherence to the procedures. In turn, data collected at the testing sites are reviewed regionally and centrally, and are used to inform and update policies and procedures.

Proficiency testing

For many laboratory tests, the EQA programme includes proficiency testing to determine the quality of the results generated at the testing site. Proficiency testing compares testing site results with a reference result to determine comparability. The purpose of such testing is to identify sites with serious testing deficiencies, target support to the most poorly performing sites and evaluate the proficiency of users following training.

Re-checking of samples

Comparisons between laboratories can also be used as an external assessment of quality. This usually involves the retesting of samples at a higher level laboratory. Many TB laboratories are familiar with this approach because blinded re-checking is a routine method of EQA for AFB smear microscopy.

On-site supervisory visits

On-site supervisory visits are especially critical during the early stages of implementing a new test because they provide motivation and support to staff. Supervisory visits are opportunities to provide refresher training, mentoring, troubleshooting advice and technical updates. On-site assessments should be documented using standardized checklists, to ensure consistency and completeness of information, enable monitoring of trends, and allow follow-up on recommendations and corrective actions. An on-site supervisory programme requires substantial planning and resources (both financial and human).

Step 7.4 - Monitor and analyse quality indicators

Routine monitoring of quality indicators, also known as performance indicators, is a critical element of assuring the quality of any diagnostic test. In addition to the general laboratory quality indicators recommended in the relevant GLI guide (41),⁵⁴ quality indicators specific to the new diagnostic test should be adapted from international guidelines or newly developed. The indicators should be collected using a standardized format and analysed on a monthly or quarterly basis, disaggregated by test.

Programmes should establish a baseline for all indicators. Targets should be set for all indicators monitored, and any unexplained change in quality indicators (e.g. an increase in error rates or change in MTBC positivity) should be documented and investigated. A standard set of quality indicators should be used for all sites conducting a particular test, to allow for comparison between sites.
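The sketch below illustrates one way of flagging sites whose monthly error rate exceeds an agreed threshold so that the change can be documented and investigated; the site data and the 5% threshold are illustrative assumptions, not recommended targets.

  # Minimal sketch: compare each site's monthly error rate against a threshold and flag
  # sites needing investigation. Figures and threshold are illustrative only.

  monthly_results = {
      "Site A": {"errors": 4, "tests": 210},
      "Site B": {"errors": 19, "tests": 190},
  }

  def flag_high_error_sites(results, threshold=0.05):
      flagged = {}
      for site, counts in results.items():
          rate = counts["errors"] / counts["tests"] if counts["tests"] else 0.0
          if rate > threshold:
              flagged[site] = round(rate, 3)
      return flagged

  print(flag_high_error_sites(monthly_results))
  # {'Site B': 0.1} -> investigate, take corrective action and document the follow-up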

The continuous quality improvement process is a cyclical, continuous, data-driven approach to improving the quality of diagnostic testing. The process relies on a cycle of monitoring quality indicators, planning interventions to correct or improve performance, and implementing the interventions. Quality indicators should be reviewed by the laboratory manager and must always be linked to corrective actions if any unexpected results or trends are observed. Critical to the process is documentation of corrective actions, and subsequent improvement and normalization of laboratory indicators following the corrective actions.

Area 8 - Recording and reporting

8.1 Review and revise request for examination and reporting forms

8.2 Review and revise laboratory and clinical registers

Step 8.1 - Review and revise request for examination and reporting forms

Depending on the format of the country's current test requisition form (i.e. specimen examination request form), it may be necessary to make revisions to accommodate a new diagnostic test. Countries should determine whether an update of the examination forms is needed, considering the cost and time required for such a revision. If a system is not already in place, countries should establish a numbering system to identify repeat samples from the same patient, to monitor the proportion and performance of repeat tests.
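As one illustration of how a patient-linked numbering scheme supports this monitoring, the sketch below calculates the proportion of repeat tests from register entries; the register records and field names are hypothetical.

  # Minimal sketch: using patient-linked identifiers in the register to measure the
  # proportion of repeat tests. Entries and field names are invented for illustration.

  register = [
      {"patient_id": "P-0001", "specimen_id": "2024-000870"},
      {"patient_id": "P-0002", "specimen_id": "2024-000871"},
      {"patient_id": "P-0001", "specimen_id": "2024-000902"},  # repeat test, same patient
  ]

  def repeat_test_proportion(entries):
      seen, repeats = set(), 0
      for entry in entries:
          if entry["patient_id"] in seen:
              repeats += 1
          else:
              seen.add(entry["patient_id"])
      return repeats / len(entries) if entries else 0.0

  print(round(repeat_test_proportion(register), 2))  # 0.33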

Given that patient data (e.g. treatment status) are critical for the correct interpretation of test results, programmes should ensure that the test request form captures such information. In many countries, request forms already contain fields for such data; however, on occasion, data may not be entered in some of these fields or may be entered inconsistently. Refresher training for clinical and laboratory staff should be conducted, to ensure that forms are completed correctly and consistently.

The forms used for reporting test results must balance completeness with clarity, conveying the test information along with the information that a clinician needs to interpret the results and act promptly on them. An easy-to-read format is important because there is likely to be a wide range of expertise among the clinicians interpreting test results.

Step 8.2 - Review and revise laboratory and clinical registers

Current laboratory and clinical registers that are based on the WHO reporting framework (11)⁵⁵ may need to be modified to record the results of the diagnostic test being implemented. Forms for laboratory records may also need to be modified. Countries should implement a standardized approach for recording test results in laboratory and clinical registers, and should use the approach consistently across all testing and clinical sites. Countries with electronic laboratory information management systems may need to include the new tests in the software package.

Area 9 - Human resource training and competency assessment

9.1 Develop and implement a training curriculum and strategy

9.2 Assess and document the competence of staff

Step 9.1 - Develop and implement a training curriculum and strategy

Training and competency assessment are critical for generating quality-assured test results, and should be offered for the different levels of personnel (e.g. managers, senior technologists, technicians and laboratory assistants). Implementing a diagnostic test requires training beyond the steps required to carry out the test, and the manufacturer-supplied on-site training following installation is often too short to cover QA activities. The testing site manager must ensure that test users are trained in the operation and maintenance of the test instrument, correct performance of the test and associated QA activities.

Clinician training or sensitization must be done in parallel with training of laboratory staff, to ensure that all clinicians involved in the screening and care of TB patients understand the benefits and limitations of the new test and are sensitized to the new testing algorithm, test requisition process, specimen requirements, specimen referral procedures and interpretation of results.

Step 9.2 - Assess and document the competence of staff

Competency assessments should be performed using a standardized template after training and periodically (e.g. annually) thereafter. They should include assessment of the knowledge and skills for performing each of the tasks involved in a diagnostic test. Assessments should be conducted by an experienced test user or trainer, and should include observation of the person being assessed as the person independently conducts each of the required tasks. Proficiency testing panels may be used for competency assessments. The results of competency testing should be recorded in personnel files.

Area 10 - Monitoring and evaluation

10.1 Monitor implementation of the diagnostic test

10.2 Monitor and evaluate impact of the diagnostic test

Step 10.1 - Monitor implementation of the diagnostic test

During the initial planning phase, countries should establish a set of key indicators and milestones that can be used to monitor the implementation process. Once the testing services have been launched, use of the services should be tracked.

Step 10.2 - Monitor and evaluate impact of the diagnostic test

A framework for monitoring and evaluation of the impact of a diagnostic test is essential to inform decision-making. Often, the objective of new or improved TB diagnostic tests is to improve the laboratory confirmation of TB or the detection of drug resistance. Indicators to assess the impact of the test against these objectives should be developed. For each such indicator, programmes should define the purpose, target, data elements, data sources and method of calculation, as well as the process indicators and corresponding data elements that contribute to the main indicator.
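As a hedged example of one such indicator, the sketch below compares the proportion of notified pulmonary TB patients who were bacteriologically confirmed before and after introduction of the new test; the figures are invented for illustration and are not real programme data.

  # Minimal sketch: one possible impact indicator, calculated before and after the new
  # test was introduced. All figures are illustrative.

  def confirmed_proportion(bacteriologically_confirmed, total_notified):
      return bacteriologically_confirmed / total_notified if total_notified else 0.0

  before = confirmed_proportion(540, 1000)   # baseline year
  after = confirmed_proportion(680, 1000)    # first year after implementation
  print(f"Before: {before:.0%}, after: {after:.0%}, change: {after - before:+.0%}")
  # Before: 54%, after: 68%, change: +14%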

In-depth analyses of the process indicators may be useful as follow-up investigations, to elucidate the test's contribution to the patient's outcome and to identify opportunities for interventions towards increasing impact.

As part of demonstrating a test's impact, and to assist with planning and policy-making, programmes should also consider evaluating the cost-effectiveness and end-user perspective of a test 1 year after its implementation, followed by regular evaluation over the next 3-5 years. The end-user perspective should cover the acceptability and feasibility of the test for the principal user groups; that is, health workers (e.g. clinicians, nurses and community health workers), laboratory technicians and patients.

⁴⁷ See http://www.stoptb.org/wg/gli/gat.asp

⁴⁸ See https://www.who.int/publications/i/item/9789241504638

⁴⁹ See http://stoptb.org/wg/gli/gat.asp

⁵⁰ See https://www.iso.org/standard/56115.html

⁵¹ See https://www.who.int/publications/i/item/WHO-HTM-TB-2017.06

⁵² See http://www.stoptb.org/WG/gli/assets/documents/gli_connectivity_guide.pdf

⁵³ See http://stoptb.org/wg/gli/gat.asp

⁵⁴ See http://stoptb.org/wg/gli/gat.asp

⁵⁵ See https://apps.who.int/iris/bitstream/handle/10665/79199/9789241505345_eng.pdf?sequence=1
