Regulations for the development of deep technology applications in healthcare urgently needed to prevent abuse of vulnerable patients
Dinesh Visva Gunasekeran

Correspondence to Dr Dinesh Visva Gunasekeran, National University of Singapore, Singapore 119077; dineshvg@hotmail.sg


The article ‘First compute no harm’ contributed by Coiera et al1 adds to the exciting discussion about the immense potential of deep technology such as machine learning to positively transform healthcare.2 It highlights the growing chasm between today’s innovation and the scope of existing regulations, a problem most aggravated in low-income and middle-income countries.3 Development has begun, but the establishment of holistic regulatory frameworks is lagging behind. This begets additional concerns about improper conduct of testing,4 manipulation that is difficult to detect without stringent data reporting,5 and the ethical implications of care recommendations based on probabilistic analyses.6

‘Machine learning’ was initially defined by Arthur Samuel as a ‘field of study that gives computers the ability to learn without being explicitly programmed’, and later refined by Tom Mitchell (1997)7 as a program that learns ‘from experience (E) with respect to some task (T) and some performance measure (P), if its performance on T, as measured by P, improves with experience E’. Machine learning has given rise to many stellar applications in the processing of biological information for tasks (T) that are functional in nature.8 When the task is to open or close fingers, how an algorithm decides which nerves to stimulate to achieve muscle activation may not be a concern. However, there are limitations to the interpretation of these probabilistic analyses,6 particularly when they are applied to non-functional tasks (T) such as making health management decisions. This is because such recommendations are based on associations,6 which are vulnerable to methodological flaws that are not easily detected.1 9
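To make Mitchell’s framing concrete, the following is a minimal sketch of my own (the synthetic data set and the scikit-learn library are assumptions for illustration, not part of the cited works): the task (T) is binary classification, the experience (E) is a growing pool of labelled examples, and the performance measure (P) is accuracy on held-out data, which typically improves as E grows.

```python
# Minimal sketch of Mitchell's (T, E, P) framing on a synthetic data set:
#   task T        = binary classification of feature vectors
#   experience E  = labelled training examples
#   performance P = accuracy on a held-out test set
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for n in (50, 200, 800):  # increasing experience E
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"E = {n:4d} examples -> P (test accuracy) = {model.score(X_test, y_test):.3f}")
```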

Consider an artificial intelligence (AI) algorithm that makes a management decision between two drugs, A and B, with similar efficacy and side-effect profiles. Drug A’s parent company has invested in the AI company that developed the algorithm. Through closed-door meetings and sponsorship of professional development,10 that investment may lead the AI company’s developers to bias the algorithm in favour of drug A at any stage of development, such as data processing or algorithmic training.5 10 Such influence is difficult to detect, even with statistical review techniques such as testing for randomness and data set distributions.5
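How easily such influence escapes distribution checks can be shown with a purely hypothetical sketch (the features, the 5% label shift and the use of a Kolmogorov-Smirnov test are my own assumptions for illustration): because only the labels are touched, the released data remain statistically indistinguishable from an independent reference sample, yet the share of records favouring drug A rises.

```python
# Hypothetical illustration: a quiet shift of labels towards drug A leaves the feature
# distributions untouched, so a two-sample Kolmogorov-Smirnov test against an
# independent reference sample cannot flag the manipulation.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def sample_patients(n):
    X = rng.normal(size=(n, 3))               # made-up patient features
    y = X[:, 0] + rng.normal(size=n) > 0      # True = drug A genuinely indicated
    return X, y

X_released, y_released = sample_patients(5000)  # training data released by the developer
X_reference, _ = sample_patients(5000)          # independent sample held by an auditor

flip = (rng.random(5000) < 0.05) & ~y_released  # quietly relabel ~5% of cases towards drug A
y_released[flip] = True

for j in range(3):  # the features were never altered, so the test has nothing to find
    print(f"feature {j}: KS p-value = {ks_2samp(X_released[:, j], X_reference[:, j]).pvalue:.2f}")
print(f"drug A recommended in {y_released.mean():.1%} of released records (unbiased rate ~50%)")
```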

Deep technology companies have deployed their point-of-care AI solutions to provide health advice to the lay public.1 The capabilities of these technologies are not yet sufficient for decision-making in healthcare.9 Nevertheless, providers have begun operations in developing countries, where products are marketed as solutions for anyone without access to healthcare. This is not a new phenomenon: we have seen human experimentation in vulnerable populations in several historic drug trials.10 Use of a deep technology algorithm to direct decision-making in healthcare could constitute a screening measure or intervention if, for example, users are informed that they do not need to seek medical attention for their symptoms. It is impossible to monitor these unofficial pilot trials, to ensure that proper informed consent is obtained from patients4 and that patients understand the risks of taking health recommendations from algorithms built on imperfect data.9

Healthcare is a tightly regulated industry, and rightly so, with lives and limbs at stake.4 Checks and balances ensure safety when new interventions are tested. Furthermore, the act of making a diagnosis or prescribing treatment is permitted only after many tests of competence. This privilege is not taken lightly and carries with it liability for negligence. Currently, there are no safeguards in place11 to support vulnerable patients who are mismanaged by deep technology.

This is merely the beginning of the ethical concerns surrounding deep technology software solutions, which are infinitely scalable and difficult to monitor. These issues could perhaps be addressed by mandating detailed statistical reporting,5 in which developers must also maintain transparent stepwise reports of their algorithm development cycles. Stepwise algorithms should be disclosed to regulatory authorities and independently validated in anonymised data sets from the population for which the software is intended.9 Results that cannot be independently reproduced would serve as markers for targeted audits.
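One possible form for that reproducibility check is sketched below purely for illustration; the data structure, function names and tolerance are my own assumptions rather than a proposed standard. A regulator would re-evaluate each disclosed algorithm on an anonymised local data set and flag any claimed result that cannot be reproduced within the tolerance.

```python
# Illustrative sketch of the audit step: a regulator re-evaluates a disclosed algorithm
# on an anonymised local data set and flags claims that cannot be reproduced.
# Names, structures and the tolerance are assumptions made for illustration only.
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass
class DisclosedResult:
    algorithm_id: str
    claimed_accuracy: float  # performance reported by the developer

def reproduce_and_flag(result: DisclosedResult,
                       predict: Callable[[Sequence[float]], int],
                       local_records: Sequence[Tuple[Sequence[float], int]],
                       tolerance: float = 0.05) -> bool:
    """Return True if the claimed performance cannot be reproduced on local data."""
    correct = sum(predict(features) == label for features, label in local_records)
    reproduced = correct / len(local_records)
    flagged = (result.claimed_accuracy - reproduced) > tolerance
    if flagged:
        print(f"{result.algorithm_id}: claimed {result.claimed_accuracy:.2f}, "
              f"reproduced {reproduced:.2f} -> refer for targeted audit")
    return flagged
```

In practice, the anonymised records would be drawn from the population for which the software is intended, and flagged discrepancies would feed the targeted audits described above.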

Deploying safeguards to facilitate adoption is in the interest of all stakeholders, as it will allow AI to develop and enhance healthcare.2 For instance, applications of unsupervised learning may be particularly useful for classifying clinical phenotypes or unearthing hidden associations, as sketched below. Such findings can then be followed up with basic science investigations to develop new drugs, or theories with which to assess complex issues such as social determinants of health. However, our patient advocacy11 needs to extend beyond the patient sitting across the table to the many unseen patient-subjects before them, to ensure that the vulnerable do not suffer in order to bring innovations to fruition.10
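As a minimal illustration of that kind of unsupervised analysis (the measurements, cluster count and use of k-means are assumptions made purely for the example), clustering of routine clinical values can surface candidate phenotypes for follow-up investigation.

```python
# Synthetic illustration of unsupervised phenotype discovery: k-means clustering of
# routine measurements (HbA1c, BMI, systolic BP) surfaces two candidate groups that
# could then be examined in follow-up basic science studies. All values are made up.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
phenotype_a = rng.normal([6.0, 24.0, 120.0], [0.4, 2.0, 8.0], size=(150, 3))
phenotype_b = rng.normal([8.5, 31.0, 145.0], [0.6, 3.0, 10.0], size=(150, 3))
measurements = np.vstack([phenotype_a, phenotype_b])

scaled = StandardScaler().fit_transform(measurements)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

for cluster in (0, 1):
    centre = measurements[labels == cluster].mean(axis=0)
    print(f"cluster {cluster}: HbA1c {centre[0]:.1f}, BMI {centre[1]:.1f}, SBP {centre[2]:.0f} mm Hg")
```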

    Acknowledgments

Emmanuel Maroye is acknowledged for his expert contributions to the conceptualisation of this response to the article ‘First compute no harm’.


    Footnotes

    • Contributors The authors both contributed to the concepts in this editorial following a panel discussion at the NUS domain expert presentation on Fairness, Accountability and Transparency in Artificial Intelligence.

    • Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

• Competing interests DVG reports advisory roles with university-affiliated technology developers/start-ups in Singapore, as well as with the Collaborative Ocular Tuberculosis Study (COTS) group. COTS is an international initiative that uses big data to better understand elusive ocular tuberculosis (TB), an early opportunity to address asymptomatic carriage of TB infection.

    • Provenance and peer review Not commissioned; externally peer reviewed.

    • Collaborators Emmanuel Maroye.
