
The contribution of the clinical laboratory to the new medicine paradigm

COMPLIANCE WITH ETHICAL STANDARDS 
Funding: There was no institutional or private funding for this article.
Conflict of interest: The authors declare that they have no conflict of interest.
Authors’ contributions: The authors contributed equally to the preparation of the manuscript.
Availability of data and materials: The data underlying this manuscript are available in the article.
Ethical approval: N/A.

Our interaction with the clinical analysis laboratory likely began before birth, with blood tests monitoring the evolution of our mother’s pregnancy. However, we know little about what happens to our blood (or other specimens) after it has been drawn and collected in a test tube. We wait for the results, unaware and unconcerned about the path our biological fluids have traveled in the interim. Yet, through almost 200 years of history, the clinical laboratory has become essential to our wellbeing, and, with the advent of precision medicine, the results it produces are contributing not just to diagnosis but also to prevention and prediction.
Whether measuring blood glucose or composing complex genomic maps, the clinical laboratory plays a prominent role in diagnosis. Several scientific studies have shown that test results from clinical labs drive about 70% of clinical decisions yet comprise less than 5% of hospital costs (1).
The lab’s role is inexorably destined to grow, driven by key technological, social, and demographic trends:

– developed countries are reckoning with an aging population that accesses medical services more frequently (the rate of use of emergency medical services by older adults is more than four times that of younger patients (2));

– the population with chronic conditions (those that require periodic monitoring through blood tests) is predicted to rise to 65% by 2030 (3);

– populations in developing countries increasingly have access to care and can benefit from key health services such as immunization, family planning, antiretroviral treatment for HIV, and malaria prevention (4), all of which require laboratory diagnostics;

– in the last two decades, progress in research and technology has brought to market an impressive number of new laboratory tests.

These realities will lead to robust growth in the IVD segment, much greater than what is forecast in strictly financial terms (around 4% CAGR until 2021 (5)).

The evolution of clinical labs

Exciting innovations have occurred in clinical labs in the last 50 years, including the discovery of new diagnostic tests and the introduction of increasingly sophisticated, ergonomic analyzers that enable higher levels of accuracy. Another innovation that has contributed to the advancement of this segment is automation, i.e., the “robots” (automated devices) that have replaced humans in the routine but critical tasks of manipulating and moving samples through the lab’s workflow. In their early stages, lab automation systems were limited to certain repetitive tasks (such as decapping) and to testing performed within specialty disciplines (e.g., immunology, molecular biology).
With increased test volumes and lab consolidation, larger automation systems have entered the market. These employ sample-transportation tracks that integrate, on a single processing line, all routine operations (centrifuging, tube decapping, routing, storage, etc.) with analyzers from different specialties, thus moving beyond the idea of automation by specialty. These solutions are called Total Laboratory Automation (TLA) systems (figure 1); the adjective “Total” perfectly describes the inclusion of all laboratory specialties and the continuum of a process that eliminates human contact with specimen tubes, ensuring a safe and efficient sample journey inside the lab. TLA systems boost lab efficiency by reducing turnaround time, limiting human errors, and enabling more effective deployment of staff. They also ensure full traceability of patient samples, from check-in to delivery of results, by adding valuable meta-information to the results (e.g., time spent in the testing process, time of the analysis, sample integrity, etc.).
However, the positive attributes of automation can still be compromised by the quality of the samples arriving at the laboratory; quality checks on analyzers cannot eliminate the issues caused by, for example, an incorrectly labeled tube loaded onto the TLA system, because total quality is the result of correct performance throughout all steps in the process. For this reason, it is important to extend the benefits of automation and sample control outside the lab to monitor the entire sample journey, beginning at the point of collection. This is the most error-prone phase of laboratory medicine, where two out of three errors originate, according to the literature (6). Errors that occur at sample collection are linked to the incorrect matching of three key identifiers: patient ID, the type of tube to be used, and the type of test associated with the request. As in all clinical specialties, the “right diagnosis for the wrong patient” is a risk with frightening consequences; ensuring the association of “the right patient to the right test, with the right tube” is at the very heart of any safe and reliable sample journey.
Devices are available to automate the procedures used during the critical collection phase. These devices ensure biometric patient identification, correct tube selection, and flawless labeling of the correct sample containers in the presence of the patient. Some can also track ancillary events and information related to specimen generation, such as the time of collection, the conditions of collection, who performed the draw, and transportation details. The availability of this metadata together with the test results provides an unprecedented capability to understand sample conditions and to trace the root cause of adverse events, should they occur.
The integration of data produced by these remote systems with data generated by the lab automation enables the full traceability of a sample’s history. The sequence of all steps of the sample’s journey (from prescription to result) is referred to as the Total Testing Process (TTP) (7). Gaining full control of the TTP is a reliable and objective contribution to precise outcomes, the next challenge for laboratory medicine, especially now that laboratory services extend in multiple directions toward an outpatient approach and sample collection becomes ubiquitous.
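As a minimal sketch of what such end-to-end traceability implies in practice (an illustration only; the field names and structure below are hypothetical and not taken from the article), a sample’s journey through the TTP could be represented as a structured record that binds the patient, the tube, and the requested tests to a time-stamped list of events:

# Illustrative only: hypothetical field names, not a specific lab-system API.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class SampleEvent:
    """One step of the sample's journey (collection, check-in, centrifugation, analysis, storage...)."""
    step: str                  # e.g. "collection", "check-in", "analysis"
    timestamp: datetime        # when the step occurred
    operator_or_device: str    # who or what performed the step
    details: Dict[str, str] = field(default_factory=dict)  # e.g. {"integrity": "ok"}

@dataclass
class SampleRecord:
    """Traceability record binding the right patient to the right test, with the right tube."""
    sample_id: str
    patient_id: str            # verified at collection, e.g. biometrically
    tube_type: str             # e.g. "EDTA", "serum separator"
    requested_tests: List[str]
    events: List[SampleEvent] = field(default_factory=list)

    def add_event(self, event: SampleEvent) -> None:
        self.events.append(event)

    def turnaround_hours(self) -> float:
        """Hours elapsed between the first and last recorded steps."""
        if not self.events:
            return 0.0
        times = [e.timestamp for e in self.events]
        return (max(times) - min(times)).total_seconds() / 3600.0

Binding identification, tube type, and requested tests in one record is what allows the meta-information described above (time in process, sample integrity, and so on) to travel with the result.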
This principle of TTP governance can be extended to anatomic pathology laboratories and biobanks, two areas fueling pharma research, where the reliability of the sample, correct patient identification, and the availability of metadata are non-negotiable conditions.

The unparalleled knowledge hidden in labs

The pathology and clinical labs, together with biobanks (the safe houses for tissue samples, tumor cells, DNA, and blood), are the hospital services that own and manage the richest sources of medical data for large living populations, both healthy and affected by disease, not to mention historical data. No other entity in healthcare owns such multidimensional data, which can add value not just in diagnosis but also in predictive and personalized medicine. However, only a limited amount of this data is used to generate reports for clinicians. Much information remains idle in large databases and is not used beyond its primary purpose. A single sample, such as a blood culture, might produce tens or even hundreds of data points in varying formats (images, spectra, categorical, and numerical). The time has come to consider these data beyond urgent diagnosis and to leverage such an immense archive of health indicators to fuel precision medicine. It is expected that AI and machine-learning technologies will one day better support prognostic decisions. However, AI is in its early stages and must be provided with vast amounts of trustworthy data.
Labs that have TTP control can serve as bridges for this transition by providing reliable patient data and actionable analysis; they are ready to elevate their role in this advanced scenario, evolving from simple producers of test results (a retrospective role) to masters of data analysis, effectively transforming all the information related to the execution of a test into analytical tools for proactive care.

Reliability of data fueling precision medicine

The “full traceability” concept is a quality paradigm for medical research and pharmacology as well, especially with the massive use of health indicators in precision medicine, which develops targeted treatments and diagnostic strategies starting from the study of a patient’s genes and environment. The molecular and genomic properties of the patient are typically taken directly from the analysis of representative organic samples; once combined with other healthcare data (e.g., health habits), they offer increased precision in understanding the mechanisms of disease and drug response.
Precision medicine requires the use and integration of an unprecedented amount of patient and population data. This is the so-called “big-data” revolution, and, as L. Pani has stated in this space recently, “This scenario represents a new reality, for which vast areas of knowledge of the past have become obsolete if not frankly inadequate” (8). Pharmacology now has access to a vast amount of complex health-related data, and innovative analysis models are required to detect patterns and relationships inside the data and turn it into knowledge (9). Again, questions and concerns arise regarding data reliability. Since pharmacologists study how drugs work at the molecular level, they need high-quality molecular analysis data that can come only from high-quality samples. Since the quality of a sample depends heavily on the way it is collected, handled, managed, and stored, the garbage-in, garbage-out (GIGO) paradigm has never been so true or critical: the production process matters as much as the result. Notwithstanding the wonders of technology and data science, data quantity cannot overcome the challenges of poor data quality. Since the future of new medicine depends greatly on the development of molecular biomarkers, “the quality of the starting materials — the biospecimens used for analysis — is of primary importance; data soundness and traceability are mandatory requirements” (10).
The genetic revolution is contributing to establishing in pharmacology the principle of the “right drug to the right patient, in the right dose”. The experience of clinical laboratories shows that tools already exist to certify that the right sample has gone through a safe and traceable process; however, an intact chain of custody for the physical element (i.e., the sample) is not sufficient. The truly innovative game-changer is the availability of the metadata around the sample, i.e., when, where, and how the data originated. Only the combination of both elements will guarantee data provenance and veracity, thus making the entire set of information fully trustworthy. With the rapid evolution of mobile health technologies and the spread of ubiquitous data collection devices, data provenance and veracity will be at the heart of reliable diagnostic information.
Advanced smartphones, wearable devices, and other passive sensors are embedded in everyday life and can offer insights into disease evolution by remotely monitoring drug effects and patient outcomes. Likewise, decentralized clinical trials can be executed through telemedicine, mobile health, or local/peripheral healthcare providers using virtual recruitment, investigational products shipped directly to participants, or smartphone-based outcome assessment (11). All these tools and approaches must be subject to the principle of precise, proven data provenance. With diagnostic data collection migrating increasingly out of the hands of healthcare professionals, accurate patient identification becomes critical. Regulatory approaches are needed for these fast-growing applications, and, although digital diagnostics represents a more distant frontier (requiring much wider and more rigorous clinical testing), patient-generated health data poses an immediate concern regarding unambiguous patient identification. Here, technology already offers tools for biometric reading (e.g., eye and voice recognition) of a subject’s identity; in clinical laboratories, the patient can already be identified by fingerprint and have his/her ID permanently associated with the specimen, all tracked inside the hospital information system (figure 2).
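To make the combination of sample custody and metadata more concrete, the sketch below shows one simple way provenance could be made tamper-evident: chaining each metadata entry to the hash of the previous one so that any later alteration is detectable. This is a generic illustration under assumed field names, not a mechanism described in the article or mandated by any standard:

# Sketch only: a generic hash chain, with hypothetical event fields.
import hashlib
import json
from typing import Dict, List

def entry_hash(entry: Dict[str, str], prev_hash: str) -> str:
    """Hash an entry together with the previous hash, so later edits break the chain."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(entries: List[Dict[str, str]]) -> List[str]:
    """Return the chained hashes for a specimen's provenance trail."""
    hashes, prev = [], ""
    for entry in entries:
        prev = entry_hash(entry, prev)
        hashes.append(prev)
    return hashes

def verify_chain(entries: List[Dict[str, str]], hashes: List[str]) -> bool:
    """True only if no entry has been altered since the chain was recorded."""
    return build_chain(entries) == hashes

# Hypothetical provenance trail for one specimen: when, where, and how the data originated.
trail = [
    {"event": "collection", "patient_id": "P-0042", "collector": "nurse-17", "time": "2020-01-15T08:02"},
    {"event": "transport", "courier": "van-3", "temperature_C": "4", "time": "2020-01-15T09:10"},
    {"event": "analysis", "instrument": "analyzer-A", "time": "2020-01-15T10:45"},
]
chain = build_chain(trail)
assert verify_chain(trail, chain)       # the untouched trail verifies
trail[1]["temperature_C"] = "25"        # a retrospective alteration...
assert not verify_chain(trail, chain)   # ...is immediately detectable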

Open data: the next frontier?

Once data is safely collected, validated, and fully tracked, the next boost for precision medicine will come from the adoption of open-standard data types, i.e., data saved in compliance with open, royalty-free, vendor-neutral specifications that make it shareable among the various healthcare stakeholders. Currently, most of our health records are stored in various proprietary formats (if not in paper folders), which limits their accessibility and useful life. With the exponential growth of medical data, an urgent need for integrated informatics platforms is emerging. Many international initiatives have been launched to define and build open, vendor-neutral guidelines, standards, and even platforms for electronic health records, with IT companies working together with research centers and medical device manufacturers. In biobanking, for example, an international sharing model to help harmonize the various national biobanks is under construction, led by the Biobanking and Biomolecular Resources Research Infrastructure (BBMRI). One of the most consolidated initiatives in the area of e-health technology is the openEHR Foundation, established in 2003. The foundation works with researchers, healthcare providers, patients, and vendors to propose actionable standards for clinical data storage and usage (i.e., models and formats for sharing data and information) with the goal of achieving data interoperability.
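As an illustration of what an open, vendor-neutral representation of a laboratory result can look like, the sketch below uses the Observation resource of HL7 FHIR, a widely adopted open specification that is not named in the article but pursues the same interoperability goal; the patient reference and values are hypothetical:

import json

# A minimal, illustrative laboratory result expressed as an HL7 FHIR "Observation" resource.
# Codes come from open terminologies (LOINC for the test, UCUM for the unit),
# so any compliant system can interpret the same record.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "laboratory",
        }]
    }],
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "2345-7",
            "display": "Glucose [Mass/volume] in Serum or Plasma",
        }]
    },
    "subject": {"reference": "Patient/P-0042"},   # hypothetical patient resource
    "effectiveDateTime": "2020-01-15T10:45:00Z",  # when the measurement applies
    "valueQuantity": {
        "value": 98,
        "unit": "mg/dL",
        "system": "http://unitsofmeasure.org",
        "code": "mg/dL",
    },
}

print(json.dumps(observation, indent=2))

Because every field is defined by the open specification rather than by a vendor, the same record can be stored, exchanged, and reused without format conversion.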
Although interoperability might have a philosophical flavor, there are some examples in medicine where we can already see it at work. For example, in digital imaging, the DICOM standard has harmonized the way in which data related to diagnostic images (X-ray, CT, MR, PET, etc.) is captured, classified, compressed, shared, and stored. It is not by chance that digital imaging is one of the few specialties in which AI and computer-aided diagnosis have concrete, leading-edge applications (12).
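A small sketch of that harmonization in practice, assuming the open-source pydicom library and a hypothetical local file path: because DICOM standardizes attribute names, the same few lines read the metadata of an image regardless of which vendor’s scanner produced it.

import pydicom  # open-source reader for DICOM files

# Hypothetical path to a DICOM object (X-ray, CT, MR, PET, ...)
ds = pydicom.dcmread("example_study/image_0001.dcm")

# Standardized attributes carry the same names across vendors and modalities.
print(ds.PatientID)          # patient identifier
print(ds.Modality)           # e.g. "CT", "MR"
print(ds.StudyDate)          # acquisition date
print(ds.pixel_array.shape)  # the image itself, as a NumPy array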
Of course, the introduction and wide adoption of open data standards will increase the sharing of information, although many new ethical and legal questions must be considered, especially in the healthcare area. Using standardized sets of information will increase the possibilities for collaboration and interaction among stakeholders, thus paving the way to process sharing. Recording all the context information and procedural details associated with a research project improves our ability to reproduce results, which is the basis of scientific research. It is imperative that the entire biomedical community address the need for standardized processes to accelerate the delivery of accurate, reproducible, clinically relevant diagnostics for precision medicine.

Conclusions

Pharmacology, standing at the intersection of many disciplines employing big data, is being overwhelmed by the tremendous quantity of new information arriving from multiple fields to support the promise of precision medicine. As in every revolution, promises and risks converge, and the “garbage-in, garbage-out” paradigm underpins the most challenging risk: the reliability of the original data. The magnitude of available data will only amplify the reality that “flawed samples equal flawed medical decision making.” Drug development requires innovation, and this in turn necessitates patient specimens of high and certified quality. As we have seen, quality is not limited to the way a test or analysis has been carried out but encompasses the entire sample collection and production process, starting with unquestionable patient identification. In clinical trials, in which patient specimens are used for correlative scientific studies or discovery research, corrupted data can lead to the downstream consequence of irreproducible study results (a huge amount of published biomedical data cannot be reproduced (13)).
Not only does diagnostics keep evolving and releasing new kinds of data, but the new technologies that produce this data (such as image-based analysis, artificial/augmented intelligence, and molecular diagnostic testing) are also becoming more widely used. Professionals working in pathology, laboratory medicine, and pharma research need to stay up to date with these changes and become more conscious and vigilant regarding the veracity of the data they use.
Technology is delivering real, trusted tools — from flawless identification devices to interoperability standards — to monitor and optimize the processes used to generate health data. Physicians and clinicians must learn more about these tools and their implications for the chain of custody for samples, data, and processes.

References

1. Cadogan et al., 2015; Lippi et al., 2016; Plebani, 2015; Rohr et al., 2016; Sarata and Johnson, 2014.
2. Available from: https://www.ncbi.nlm.nih.gov/books/NBK215400/.
3. Mathers CD, Loncar D. 2005. Available from: https://www.who.int/management/programme/ncd/Chronic-disease-an-economic-perspective.pdf.
4. Available from: https://www.who.int/news-room/detail/13-12-2017-world-bank-and-who-half-the-world-lacks-access-to-essential-health-services-100-million-still-pushed-into-extreme-poverty-because-of-health-expenses.
5. IQVIA. Global outlook on the in vitro diagnostic industry. IQVIA; 2018.
6. Carraro P, Plebani M. Errors in a stat laboratory: types and frequencies 10 years later. Clin Chem. 2007;53(7).
7. Barr JT, Schumacher GE. Total testing process applied to therapeutic drug monitoring: impact on patients’ outcomes and economics. Clin Chem. 1998;44(2).
8. Pani L. Radical singularities and the future of pharmacology. 2019. Available from: http://www.pharmadvances.com/radical-singularities-and-the-future-of-pharmacology/.
9. EMA. Personalised medicines: report from a workshop held 14 March 2017. Available from: https://www.ema.europa.eu/en/documents/report/report-patients-consumers-working-party-pcwp-healthcare-professionals-working-party-hcpwp-joint_en.pdf.
10. Compton C. Garbage in, garbage out. The Pathologist. 2018 Mar.
11. Sim I. Mobile devices and health. N Engl J Med. 2019;381:956-68.
12. Kinkorová J. Biobanks in the era of personalized medicine: objectives, challenges, and innovation. EPMA J. 2016;7:4.
13. NIH. Artificial intelligence enhances MRI scans. Available from: https://www.nih.gov/news-events/nih-research-matters/artificial-intelligence-enhances-mri-scans.

 
