Article Text
Abstract
Background Opioids are strong pain medications that can be essential for acute pain. However, opioids are also commonly used for chronic conditions and obtained illicitly, contexts in which there are well-recognised concerns about the balance of their benefits and harms. Technologies using artificial intelligence (AI) are being developed to examine and optimise the use of opioids. Yet, this research has not been synthesised to determine the types of AI models being developed and the applications of these models.
Methods We aimed to synthesise studies exploring the use of AI in people taking opioids. We searched three databases: the Cochrane Database of Systematic Reviews, Embase and Medline on 4 January 2021. Studies were included if they were published after 2010, conducted in a real-life community setting involving humans and used AI to understand opioid use. Data on the types and applications of AI models were extracted and descriptively analysed.
Results Eighty-one articles were included in our review, representing over 5.3 million participants and 14.6 million social media posts. Most (93%) studies were conducted in the USA. The types of AI technologies included natural language processing (46%) and a range of machine learning algorithms, the most common being random forest algorithms (36%). AI was predominantly applied for the surveillance and monitoring of opioids (46%), followed by risk prediction (42%), pain management (10%) and patient support (2%). Few of the AI models were ready for adoption, with most (62%) being in preliminary stages.
Conclusions Many AI models are being developed and applied to understand opioid use. However, there is a need for these AI technologies to be externally validated and robustly evaluated to determine whether they can improve the use and safety of opioids.
- Addiction Medicine
- Pain Management
- Substance-Related Disorders
Summary box
Across the landscape of opioid research, natural language processing models (46%) and ensemble algorithms, particularly random forest algorithms (36%), were the most common types of artificial intelligence (AI) technologies studied.
There were four domains to which AI was applied to assess the use of opioids, including surveillance and monitoring (46%), risk prediction (42%), pain management (10%) and patient support (2%).
The AI technologies were at various stages of development, validation and deployment, with most (62%) models in preliminary stages, 11% requiring external validation and few (6%) openly available to access.
Introduction
Opioids are pain medicines related to opium that are deemed essential by the WHO.1 There are over 200 different types of opioids that can be prescribed, purchased over the counter (eg, at pharmacies), purchased online or obtained illicitly.2–4 There are also various conditions that opioids can be used for, including, but not limited to, cancer pain, postoperative pain, chronic non-cancer pain, opioid dependence and withdrawal.5–9 Despite being essential, opioids can cause a number of adverse events from minor (eg, constipation, nausea) to severe (eg, addiction, depression and sleep problems),10 as well as death.
In the USA, 128 lives were lost every day to opioid overdoses in 2018.11 Such opioid-related deaths, widely described as the US opioid epidemic, were linked to the increased prescribing of opioids, opioid misuse and the transition to illicit substances. In the UK, six deaths per day due to a drug overdose involved an opioid in 2018.12 The prescribing of opioids in primary care more than doubled in England between 1998 and 2016.13 Thus, interventions are being developed to examine and improve the use of opioids.
Several approaches have been trialled to improve the use of opioids with partial success. These have included: educational resources14; non-pharmacological therapies, for example, cognitive behavioural therapy, hypnosis, relaxation techniques, mindfulness, acupuncture, and exercise15; the monitoring of prescribing data16; and toolkits to support the review and safe reduction of opioids.17 However, technological advances could help streamline such approaches to improve the use of opioids.
The use of artificial intelligence (AI) technologies in preventative health and medicines optimisation is gaining traction. For example, AI is being used to predict sudden death in heart failure patients and support the selection of appropriate treatments.18 A review on the use of AI interventions to aid in opioid use disorders found 29 unique interventions.19 However, this review only examined the grey literature, which is not quality checked by publishers or subject to peer review.20 Grey literature can include documents such as reports and online content that can provide insight into emerging research. However, the types of AI reported in the peer-reviewed literature and across other aspects of opioid research have not been synthesised.
The UK published a National AI strategy in September 202121 and is encouraging the use of AI to drive digital transformation across the National Health Service (NHS).22 NHS organisations (eg, NHSX) are supporting the acceleration of AI technologies through financial awards,23 and the UK’s overprescribing review has recommended the commission of digital tools to tackle and reduce overprescribing.24 However, there are few reviews that examine the use of AI to inform such policies. Therefore, the aim of this review was to explore the use of AI technologies across the landscape of opioid research.
Methods
We designed a narrative review to understand how AI technologies have been used, applied and implemented in research on opioid use.
Search strategy
An information specialist designed and ran the search strategy in three databases: the Cochrane Database of Systematic Reviews, Embase and Medline. Search terms relating to ‘opioids’ and ‘artificial intelligence’ were included (see online supplemental table S1 in appendix 1 for the complete list of terms). The search was initially performed on 27 January 2020 and updated on 4 January 2021. We also searched Google Scholar on 18 November 2020.
Eligibility criteria
Studies were included if they were published after 2010, had been conducted in a real-life setting in human beings and tested a form of AI to optimise or understand opioid use. AI was defined as ‘computer systems that can perform tasks normally requiring human intelligence’,25 which could include any form of machine learning, deep learning, neural networks and natural language processing. Studies were not restricted by outcomes or settings, and conference abstracts were included to capture all emerging research. Studies were excluded if they were not published in English, did not specifically relate to both opioids and AI, were conducted outside of a real-life setting, for example, in research settings exploring genes, receptor subtypes and modulators, and were not original research. Editorials and commentary were excluded.
Study selection
One review author (SG) screened titles and abstracts against the prespecified eligibility criteria, followed by the full-text articles. Where both the conference abstract and full article were available, the full article was included.
Data extraction
One review author (SG) extracted data from included studies into a predeveloped spreadsheet. This included: year of study and author names; country; study design; study population and data source; sample size; technology investigated; area of application; outcomes; and stage of development.
Data analysis
The findings were descriptively synthesised by identifying areas of commonality in terms of the AI technology used and the area of application. The types of AI technology were classified based on the method described by Brownlee in the Tour of Machine Learning Algorithms.26 The stages of development were defined based on reported findings in the studies and categorised into seven groups: preliminary research; model development required; model development planned; external validation required; prototype for scale-up developed; local implementation; and openly available.
Results
We screened 389 titles and abstracts and 118 full texts for eligibility (figure 1). There were 81 studies that met our eligibility criteria and were included in the review (table 1). Of the 81 studies, 18 were conference abstracts, which are summarised in online supplemental table S2 in appendix 2.
The included studies represented over 5.3 million participants and 14.6 million social media posts. The majority (93%, n=75) of studies were conducted in the USA with the remainder being performed in Bangladesh (n=1), Bulgaria (n=1), Germany (n=1), India (n=1), Israel (n=1) and Italy (n=1). Of the published articles (n=63), most studies used observational designs, including cohort (54%), retrospective observational (22%), case–control (8%), prognostic (6%) and cross-sectional (5%). One study was a case series, one was a pilot study using mixed methods, and there was one retrospective infoveillance study (table 1). The main sources of data for testing AI models were medical records and claims databases (54%) and social media (20%) (figure 2).
The areas where AI technology was being applied and tested broadly fell into four distinct categories: surveillance and monitoring of activity or consequences such as misuse, respiratory depression and HIV outbreaks (46%, n=37); risk prediction of outcomes such as opioid use disorder, dependence or overdose (42%, n=34); pain management (10%, n=8); and patient support technology (2%, n=2) (online supplemental appendix 3). For studies that focused on risk prediction (n=34), 18% specifically investigated prolonged opioid use following surgery (table 2).
Ensemble algorithms (59%, n=48), particularly random forest algorithms (36%, n=29) and natural language processing models (46%, n=37) were the most common types of AI technology researched (table 3). In terms of efficacy measures, several studies (43%, n=35) used the area under the receiver-operating characteristic (AUROC) curve. Other efficacy measures used included the macro averaged F1 score, Brier score, positive predictive value and negative predictive value.
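To make the efficacy measures above concrete, the following is an illustrative sketch, not a model from any included study: a random forest classifier is fitted to entirely synthetic data (the features — age, opioid dose and a benzodiazepine co-prescription flag — are hypothetical) and the AUROC, F1 score, Brier score, positive predictive value and negative predictive value are computed from its predictions.

```python
# Illustrative sketch only: synthetic data, hypothetical features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (roc_auc_score, f1_score,
                             brier_score_loss, confusion_matrix)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: age, daily opioid dose, benzodiazepine co-prescription flag.
X = np.column_stack([
    rng.normal(50, 15, n),       # age (years)
    rng.gamma(2.0, 30.0, n),     # opioid dose (arbitrary units)
    rng.integers(0, 2, n),       # benzodiazepine co-prescription (0/1)
])
# Synthetic outcome loosely tied to dose and co-prescription.
p = 1 / (1 + np.exp(-(0.01 * X[:, 1] + 1.0 * X[:, 2] - 2.0)))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]   # predicted risk of the adverse outcome
pred = prob >= 0.5                     # binary prediction at a 0.5 threshold
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()

auroc = roc_auc_score(y_te, prob)      # discrimination
f1 = f1_score(y_te, pred)              # balance of precision and recall
brier = brier_score_loss(y_te, prob)   # calibration of predicted probabilities
ppv = tp / (tp + fp)                   # positive predictive value
npv = tn / (tn + fn)                   # negative predictive value
```

Note that AUROC and Brier score are computed from the predicted probabilities, whereas F1, PPV and NPV depend on the classification threshold chosen; this is one reason the variation in efficacy measures across studies makes direct comparison difficult.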
The AI models included in the review were at various stages of development, validation and deployment. The majority (62%, n=50) were at the preliminary stage, 11% (n=9) required external validation and few models (6%, n=5) were openly available to access (figure 3).
Discussion
We identified 81 studies that tested AI in people using opioids. Most research was from the USA, with no studies reported in the UK. While the opioids themselves presented similar risks, the populations studied were very different. There were various types of AI models being used, including a range of machine learning algorithms and models using natural language processing. The majority of included studies examined the use of AI in risk prediction and surveillance and monitoring, two areas that could have significant patient safety benefits to prevent long-term opioid use and opioid dependence.
Most of the studies reviewed focused on developing AI technology to support the identification of factors that could predict the increased risk of developing an adverse opioid-related outcome. The need for preoperative identification of people at risk of sustained opioid use following surgery has been recommended by experts.27 Thus, this is a potential gap that AI technology could fill if studies were robustly designed.
A commonly reported predictive factor across nearly all studies on risk prediction was the concomitant use of sedative and anxiolytic medication such as benzodiazepines. An evidence review by Public Health England identified an increased risk of long-term opioid use in people who had previously used or were currently using benzodiazepines.28 A systematic review into factors associated with high-dose opioids reported that people co-prescribed benzodiazepines are a high-priority group for targeted interventions and coordinated strategies.29
Progress in AI development for detecting illegal use also has the potential to significantly reduce the number of opioid-related deaths related to misuse. However, developing effective interventions in this area also requires localised real-time intelligence. AI technology focused on detecting illegal use, combined with real-time intelligence, could support agencies currently working in this area to target activity at the time it is detected.
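The kind of natural-language-processing surveillance discussed here can be sketched in miniature. The pipeline below is purely illustrative, not drawn from any included study: the example posts and labels are invented, and a real system would need far larger labelled corpora and careful validation. It fits a bag-of-words logistic regression to flag posts that may describe non-medical opioid use.

```python
# Illustrative sketch only: invented posts and labels, not real surveillance data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labelled posts (1 = possible non-medical use, 0 = other).
train_posts = [
    "crushed my oxy pills to get high",
    "bought percs off the street again",
    "doctor prescribed codeine after my surgery",
    "took my tramadol as directed for back pain",
    "snorting hydrocodone at the party",
    "pharmacy refilled my morphine prescription",
]
train_labels = [1, 1, 0, 0, 1, 0]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

# Score a new post; in a deployed system, flagged posts would feed
# localised real-time intelligence rather than a single prediction.
flags = model.predict(["got some oxy to get high tonight"])
```

A bag-of-words model is a deliberately minimal stand-in; the studies in this review used a range of NLP approaches, and slang, misspellings and context make real social-media surveillance considerably harder than this sketch suggests.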
Guidelines and educational resources have been developed that support clinicians with pain management and highlight the problems associated with long-term opioid treatment.14 30 To date, these have had limited impact on improving opioid prescribing, with prescribing of high doses of opioids in some areas showing an increasing trend.16 However, AI technology to improve pain management used in combination with educational resources and guidelines could be a way to prevent unnecessary escalation and long-term use of opioid treatment.
Beaulieu and colleagues19 have reviewed the grey literature on opioid use disorder and AI interventions, identifying 29 unique interventions. Our narrative review expanded on this research to assess the AI technology across all areas of opioid research and includes a wider scope of literature. Similar to our review, Beaulieu and colleagues19 found a lack of scientific evaluation, which was also highlighted by Hossain and colleagues31 in their conference abstract on AI in opioid research and practice. The literature on AI models for diagnosing ischaemic stroke has also illustrated the variation in measuring efficacy and the need for standardisation.32 33
Strengths and limitations
Conference abstracts were included in our review to capture emerging research. However, we did not follow up on further development and validation of models beyond the findings reported in the publications. Postpublication, several of the models may have undergone further development and external validation and may now be available publicly. Thus, we were limited by the reporting of information in included studies and the infancy of AI research. In particular, it was difficult to evaluate the performance of AI models as various efficacy measures were used.
It was challenging to find a comprehensive way to classify the various types of AI technologies being researched. Various systems have been described that include classification by complexity,34 learning style and algorithm similarity.26 The method described by Brownlee that classified AI technology according to similarity was chosen for the review as it clearly described which algorithms fell into each category.26 Using this system, natural language processing models and random forest algorithms were found to be the most commonly researched AI technologies.
Our review only included studies published in English, which could exclude published research conducted in non-English speaking countries. Finally, we conducted a narrative review that did not involve a quality assessment of the included studies. However, nearly all the studies included in this review were observational. Thus, high-quality research is required to test the efficacy and effectiveness of AI technologies in people using or receiving opioids.
Implications
Public health initiatives have focused on identifying and addressing people who are taking opioids long-term or becoming dependent on opioids,28 35 yet this process has not been systematically embedded in clinical practice. Preventative interventions involving AI early in the care pathway could improve the systematic nature of such initiatives and reduce the number of people taking long-term opioids and at risk of dependence. AI technology could also support intelligence on illegal and illicitly available opioids, identifying their prevalence and local hot spots at which to target interventions. However, before the adoption of AI models in clinical practice, future research should be conducted to standardise methods and determine whether AI models are superior to current initiatives in clinical practice.
To conduct AI research, large datasets are required. Hence, we found that electronic health records and social media posts were most often used. The NHS holds relevant records and data, on tens of millions of patients, from a huge and ethnically diverse population.36 Limitations around curation, management and secure access to these data could be a barrier to the adoption of these AI advances in the UK unless developments are made.37 Access to good quality data is also recognised as a current barrier and vital enabler in the UK National AI strategy.21 The strategy makes recommendations to review datasets and their availability to support the development of AI models. However, this access to data needs to be expedited to enable AI advances to have any practical use in the UK soon.
The increased use of AI technology is a key recommendation in the NHS Long-Term Plan,22 thus funding should be allocated to conduct randomised controlled trials and prospective studies in UK healthcare settings. To enable validation and implementation of AI technology into care pathways, collaboration between many stakeholders is required, including developers, healthcare organisations, clinicians and patients. National guidance on the development, testing and implementation of AI technologies would standardise such processes and help organisations to ensure that patients are protected.
Conclusions
Various AI technologies have been applied to several areas of opioid use, yet this research is still in its infancy. The effectiveness of AI technologies in reducing opioid use and harms cannot be determined until robust randomised and prospective studies are conducted. Therefore, there is a clear need for these AI models to be validated and robustly evaluated. To facilitate the spread and adoption of innovation in this area, collaboration of organisations, developers, funders, researchers, prescribers and patients is required.
Ethics statements
Patient consent for publication
Ethics approval
Not applicable.
Acknowledgments
We acknowledge Anne Gray, Knowledge Specialist at Oxford Academic Health Science Network, for carrying out the literature search.
References
Footnotes
Twitter @SeemaGadhia, @Richards_G_C
Contributors JR developed idea with SG. AG ran database searches. SG screened studies, extracted data, created the tables and figures and drafted the first manuscript. GCR extracted data on study designs and analysed data. All authors read and contributed to writing the manuscript.
Funding No funding was obtained for this study.
Competing interests GCR was financially supported by the National Institute of Health Research School for Primary Care Research, the Naji Foundation and the Rotary Foundation to study for a doctor of philosophy (DPhil/PhD) at the University of Oxford (2017–2020). GCR is an associate editor for BMJ Evidence Based Medicine.
Provenance and peer review Not commissioned; externally peer reviewed.
© Author(s) (or their employer(s)) 2023. No commercial re-use. See rights and permissions. Published by BMJ.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.