Why machine learning’s potential in the pharmaceutical sector lies in its transparency
Despite rapid innovation in machine learning, proper protocols must be in place to ensure compliance and reliability.
Machine learning (ML) has the potential to revolutionize drug development and patient care, from accelerating clinical research to supporting proactive clinical decision-making. Because ML is relatively new, complex, and often misunderstood, it is frequently regarded as a mysterious phenomenon: a “black box” that spits out conclusions without revealing how the data produced them. In reality, ML is neither magic nor abstract, but a highly logical, data-driven technology. While this air of mystery is part of what makes ML so fascinating and powerful, it can also be its Achilles’ heel, raising suspicion among pharmaceutical users accustomed to operating in a highly regulated environment where solid evidence is paramount.
In an effort to increase confidence and standardize the safe and ethical development of ML technologies, the FDA, Health Canada, and the Medicines and Healthcare products Regulatory Agency (MHRA) have recently introduced a set of guiding principles known as Good Machine Learning Practice (GMLP). While this is a promising first step toward ensuring that ML innovation progresses and adoption improves, we need more than a general set of recommendations on what should be done. We need to lift the curtain on the inner workings of these algorithms to demonstrate that the guidelines were followed at every stage of development.
Holding digital and physical diagnostics to the same standard
In light of the rapid development of these technologies, it is more important than ever to ensure that ML algorithms are developed in a safe and ethical manner, and that there is a clear understanding of the desired benefits and potential risks throughout the product life cycle. For example, data security and diversity are among the many factors that influence trust in ML. This includes how personal data is captured, stored, and used in a compliant manner, as well as whether the data fed into an algorithm is representative of the intended patient population. If clinicians are not convinced that the technology is safe or can adequately meet the needs of their patients, they are very unlikely to trust it and use it in their practice.
Just as pharmaceutical companies must provide rigorous evidence of a drug’s efficacy for the intended patient population, so ML developers must be held to a similar standard. There is a need to thoroughly track and document how an algorithm is constructed, its impact, and its purpose-built use cases. Only then can we build confidence that these tools are safe and accurate.
Building trust through standardization
To date, regulation of ML development rests largely on the good-faith assumption that developers will follow “good science” and build their algorithms using ethical and secure processes. The introduction of GMLP represents an important first step in overseeing this growing sector. It provides solid recommendations, ranging from developing validation datasets independent of training datasets to using models that can be monitored in “real-world” settings, both of which are critical factors for the accuracy of an algorithm. However, developers are not required to adhere to these guiding principles. They are advisory in nature only and are intended to provide a framework for future development with the aim of increasing user confidence and improving product performance. That said, good intentions alone are not enough when the outcome may impact a patient’s care or the future of medical treatment. We need more evidence and careful monitoring of a model to make it trustworthy in the eyes of pharmaceutical sponsors and clinicians.
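To make the independence recommendation concrete: in clinical data, keeping validation data independent of training data means splitting at the patient level, so that no patient contributes records to both sets. The sketch below (illustrative only; the function and record names are hypothetical, not part of GMLP or any specific product) shows one simple way to enforce this by hashing the patient ID rather than splitting individual records at random.

```python
import hashlib

def assign_split(patient_id: str, val_fraction: float = 0.2) -> str:
    """Deterministically assign a patient to 'train' or 'validation'.

    Hashing the patient ID (rather than individual records) guarantees
    that every record from one patient lands in the same split, keeping
    the validation set independent of the training set.
    """
    digest = hashlib.sha256(patient_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 1000  # stable bucket in [0, 999]
    return "validation" if bucket < val_fraction * 1000 else "train"

# Hypothetical visit records: two visits for patient P001 must never
# be separated across the two splits.
records = [
    {"patient": "P001", "visit": 1},
    {"patient": "P001", "visit": 2},
    {"patient": "P002", "visit": 1},
    {"patient": "P003", "visit": 1},
]
splits = {r["patient"]: assign_split(r["patient"]) for r in records}
```

Because the assignment depends only on the patient ID, it is reproducible across reruns, which also helps when documenting how a dataset was partitioned for an audit.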
Creating value through traceability
An effective way to establish trust in a model’s architecture is to implement actionable regulatory standards that ensure traceability. This requires clearly defining which aspects of ML should be transparent. Rather than disclosing a model’s proprietary code, the system around it can be more indicative of its quality and accuracy. The workflow of this system, including how data is collected, how an algorithm is trained, and what generates a specific output, allows us to understand how each component fits together based on its purpose, design, and performance. Since an ML model continues to evolve and learn over time, close and continuous monitoring of its performance, and refinement where necessary, is a crucial part of its safe and ethical growth and development.
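One lightweight way to operationalize this kind of traceability is an append-only audit log that records each lifecycle event with a fingerprint of the exact data involved. The sketch below is a minimal illustration under assumed requirements; the `TraceRecord` structure, its field names, and the `fingerprint` helper are hypothetical, not drawn from any regulatory text or real product.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 fingerprint of the exact bytes used."""
    return hashlib.sha256(data).hexdigest()

@dataclass
class TraceRecord:
    """One audit-trail entry for a model lifecycle event."""
    model_version: str
    event: str            # e.g. "trained", "validated", "deployed"
    dataset_sha256: str   # fingerprint of the dataset for this event
    details: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[TraceRecord] = []
audit_log.append(TraceRecord(
    model_version="1.3.0",
    event="validated",
    dataset_sha256=fingerprint(b"...validation dataset bytes..."),
    details={"auc": 0.91, "population": "adults 18-65"},
))

# The log serializes to JSON, so it can be archived alongside the model.
print(json.dumps([asdict(r) for r in audit_log], indent=2))
```

Because each entry ties a model version to a dataset fingerprint and a timestamp, a reviewer can later verify which data informed which release, without the developer ever exposing proprietary model code.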
While we don’t want to slow the pace of this rapid innovation, we also want to make sure the innovation is meaningful, safe, and delivers what it promises. Encouraging developers to implement traceability protocols and document the development of an algorithm will not only provide peace of mind to its healthcare end users, but will also improve industry-wide understanding of best practices for robust ML, sustaining progress in this nascent field.
Michelle Marlborough, Product Manager, AiCure, LLC