Abstract
Background Since the emergence of COVID-19 in December 2019, multidisciplinary research teams have wrestled with how best to control the pandemic in light of its considerable physical, psychological and economic damage. Mass testing has been advocated as a potential remedy; however, mass testing using physical tests is a costly and hard-to-scale solution.
Methods This study demonstrates the feasibility of an alternative form of COVID-19 detection, harnessing digital technology through the use of audio biomarkers and deep learning. Specifically, we show that a deep neural network-based model can be trained to detect symptomatic and asymptomatic COVID-19 cases using breath and cough audio recordings.
Results Our model, a custom convolutional neural network, demonstrates strong empirical performance on a data set consisting of 355 crowdsourced participants, achieving an area under the receiver operating characteristic curve (AUC-ROC) of 0.846 on the task of COVID-19 classification.
Conclusion This study offers a proof of concept for diagnosing COVID-19 using cough and breath audio signals and motivates a comprehensive follow-up research study on a wider data sample, given the evident advantages of a low-cost, highly scalable digital COVID-19 diagnostic tool.
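As a rough illustration of the pipeline described in the Methods and Results, the sketch below shows a small convolutional classifier trained on log-mel spectrograms of cough and breath recordings and evaluated with AUC-ROC. It is not the authors' CIdeR implementation; the sample rate, feature settings, layer sizes and training hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed settings, not the CIdeR architecture): a small CNN over
# log-mel spectrograms of cough/breath audio, with AUC-ROC evaluation.
import torch
import torch.nn as nn
import torchaudio
from sklearn.metrics import roc_auc_score

SAMPLE_RATE = 16_000  # assumed recording sample rate
N_MELS = 64           # assumed number of mel bands

# Turn a raw waveform into a log-mel spectrogram "image" for the CNN.
mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=N_MELS)
to_db = torchaudio.transforms.AmplitudeToDB()

def featurise(waveform: torch.Tensor) -> torch.Tensor:
    """waveform: (1, num_samples) -> log-mel spectrogram (1, n_mels, time_frames)."""
    return to_db(mel(waveform))

# A small binary CNN classifier (layer sizes are illustrative only).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # single logit; sigmoid gives P(COVID-19 positive)
)

criterion = nn.BCEWithLogitsLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(batch_specs: torch.Tensor, labels: torch.Tensor) -> float:
    """batch_specs: (B, 1, n_mels, T); labels: (B,) with 0 = negative, 1 = positive."""
    optimiser.zero_grad()
    logits = model(batch_specs).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimiser.step()
    return loss.item()

@torch.no_grad()
def evaluate_auc(specs: torch.Tensor, labels: torch.Tensor) -> float:
    """Area under the ROC curve, the metric reported in the Results."""
    scores = torch.sigmoid(model(specs).squeeze(1))
    return roc_auc_score(labels.numpy(), scores.numpy())
```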
Keywords
- COVID-19
- virus diseases
- diagnosis
Data availability statement
Data are available upon reasonable request.
This article is made freely available for use in accordance with BMJ’s website terms and conditions for the duration of the covid-19 pandemic or until otherwise determined by BMJ. You may use, download and print the article for any lawful, non-commercial purpose (including text and data mining) provided that all copyright notices and trade marks are retained.
https://bmj.com/coronavirus/usage
Footnotes
HC and AG are joint first authors.
HC and AG contributed equally.
Contributors HC and AG designed and evaluated the CIdeR. BS conceived and supervised the project. LJ contributed to the literature search, manuscript preparation and editing. PT advised the first authors throughout the project and assisted with the write-up. AB provided advice on the baseline model and helped edit the manuscript.
Funding EPSRC Centre for Doctoral Training in High Performance Embedded and Distributed Systems (HiPEDS) (EP/L016796/1). DFG under Agent-based Unsupervised Deep Interactive 0-shot-learning Networks Optimising Machines’ Ontological Understanding of Sound (AUDI0NOMOUS), Reinhart Koselleck Project (442218748). Imperial College London Teaching Scholarship. UK Research and Innovation Centre for Doctoral Training in Safe and Trusted Artificial Intelligence (www.safeandtrustedai.org) (EP/S023356/1).
Disclaimer The University of Cambridge does not bear any responsibility for the analysis or interpretation of the data used herein, which reflects solely the views of the authors of this communication.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.