Development of a practical training method for a healthcare artificial intelligence (AI) chatbot
Julia Lin1, Todd Joseph2, Joanna Jean Parga-Belinkie3, Abigail Mandel2, Ryan Schumacher1, Karl Neumann2, Laura Scalise4, Jessica Gaulton5, Lori Christ6, Kirstin Leitner4, Roy Rosin1

  1 Center for Health Care Innovation, Penn Medicine, Philadelphia, Pennsylvania, USA
  2 Memora Health, San Francisco, California, USA
  3 Division of Neonatology, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, USA
  4 HUP Obstetrics and Gynecology, Penn Medicine, Philadelphia, Pennsylvania, USA
  5 Department of Neonatology, Jefferson Health – Abington, Abington, Pennsylvania, USA
  6 HUP Neonatology and Newborn, Penn Medicine, Philadelphia, Pennsylvania, USA

Correspondence to Julia Lin, Center for Health Care Innovation, Penn Medicine, Philadelphia, PA 19104-4385, USA; linjulialee@gmail.com

Summary box

What are the new findings?

  • Penn Medicine and Memora Health developed a practical training method to accelerate the development of an AI chatbot for postpartum care at the Hospital of the University of Pennsylvania.

  • Crowdsourced input from Amazon Mechanical Turk led to higher AI accuracy in much less time than the test patient recruitment methods previously used for AI training.

  • Our iterative SEER process showed measurable improvement in chatbot accuracy with targeted training sets.

How might it impact on healthcare in the future?

  • We introduce this training process to proactively train healthcare AI chatbots to be more accurate and safer for patients, accelerating the implementation of AI in healthcare settings.

Introduction

The rise of healthcare chatbots using artificial intelligence (AI) to understand unconstrained natural language input and reply with appropriate answers presents an emerging field of research, but few published studies on this topic include structured evaluation of efficacy or safety.1 In the past few months, healthcare has seen COVID-19 accelerate the adoption of digital health solutions to enable more timely care.2–4 Before AI chatbots can be deployed in healthcare applications, they need to be appropriately ‘trained’ on clinically relevant data.5 We will discuss the context that led to the development of a practical training method for a healthcare AI chatbot that efficiently improves chatbot accuracy and patient safety.
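
To illustrate what 'training' on clinically relevant data can involve in practice, the minimal Python sketch below fits a toy intent classifier on a handful of labelled postpartum messages. The utterances, intent labels and scikit-learn pipeline are illustrative assumptions for exposition only, not the method used to train the chatbot described in this article.

  # Minimal sketch: train a toy intent classifier for a postpartum-support chatbot.
  # All data and model choices here are hypothetical examples.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  # Hypothetical labelled training data: patient messages paired with intents.
  utterances = [
      "My incision is red and swollen",
      "How often should the baby feed at night?",
      "I feel sad and can't stop crying",
      "Is it normal to bleed two weeks after delivery?",
  ]
  intents = ["wound_concern", "infant_feeding", "mood_concern", "postpartum_bleeding"]

  # TF-IDF features plus a simple linear classifier stand in for whatever
  # natural-language-understanding model a production chatbot would use.
  model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
  model.fit(utterances, intents)

  # A new, unseen message is mapped to the closest trained intent.
  print(model.predict(["my c-section scar looks infected"])[0])

The same loop of collecting labelled patient-style messages, retraining, and re-evaluating accuracy is the general pattern that any clinical chatbot training effort follows, whatever the underlying model.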

The Healing at Home programme at the Hospital of the University of Pennsylvania (HUP) coordinates prioritised discharge and digital access to care for mothers and newborns.6 The American College of Obstetricians and Gynecologists recommends more immediate contact between obstetricians and patients to support postpartum care as an ongoing process, especially during the ‘fourth trimester’ after discharge.7–9 The literature offers many examples of texting interventions improving access to perinatal care.10–15 Healing at Home developed a postpartum support chatbot named ‘Penny’ in a partnership between a multidisciplinary clinical team from HUP, …
