Original research
Pilot study for the comparison of machine-learning augmented audio-uroflowmetry with standard uroflowmetry in healthy men
  1. Edwin Jonathan Aslim (1),
  2. Balamurali B T (2),
  3. Yun Shu Lynn Ng (1),
  4. Tricia Li Chuen Kuo (1,3),
  5. Kheng Sit Lim (1),
  6. Jacob Shihan Chen (2),
  7. Jer-Ming Chen (4),
  8. Lay Guat Ng (1)

Affiliations:
  1. Department of Urology, Singapore General Hospital, Singapore
  2. Information Systems Technology and Design, Singapore University of Technology and Design, Singapore
  3. Department of Urology, Sengkang General Hospital, Singapore
  4. Science, Mathematics and Technology, Singapore University of Technology and Design, Singapore

Correspondence to Dr Edwin Jonathan Aslim, Department of Urology, Singapore General Hospital, Singapore 169608, Singapore; edwin.jonathan.aslim@singhealth.com.sg

Abstract

Background Routine assessments of lower urinary tract symptoms (LUTS) include standard uroflowmetry (UF), which is labour- and equipment-intensive to perform, and stressful and unnatural for patients. An ideal test should be accurate, repeatable, affordable and portable.

Objective To evaluate the accuracy of a machine-learning (ML) augmented audio-uroflowmetry (AF) algorithm in predicting urinary flows.

Subjects and methods This pilot study enrolled 25 healthy men without LUTS, who were asked to void into a gravimetric uroflowmeter. A smartphone recorded the voiding sounds simultaneously. Paired uroflow and audio parameters were used to train an ensemble ML model to predict urinary flows from voiding sounds. Pearson’s correlation coefficient was used to compare UF with AF values. Statistical significance was defined as p<0.05.
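
The abstract outlines the pipeline only at a high level: voiding audio is cut into frames, paired frame by frame with gravimetric flow readings, and used to train an ensemble regressor, with Pearson's r for evaluation. The following Python sketch illustrates that idea; the spectral features, the random-forest ensemble, the sampling rate and the synthetic paired session are all assumptions for illustration, not the authors' published configuration.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor

FRAME_S = 0.1                # analysis frame length (from the Results section)
SR = 44100                   # assumed smartphone sampling rate
HOP = int(FRAME_S * SR)      # samples per frame

def frame_features(signal):
    """Cut a voiding recording into 0.1 s frames and compute two simple
    spectral features per frame (the paper's actual feature set and
    ensemble configuration are not published)."""
    n_frames = len(signal) // HOP
    frames = signal[: n_frames * HOP].reshape(n_frames, HOP)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(HOP, d=1 / SR)
    energy = spec.sum(axis=1)
    centroid = (spec * freqs).sum(axis=1) / np.maximum(energy, 1e-12)
    return np.column_stack([np.log1p(energy), centroid])

# Synthetic stand-in for one paired session: voiding noise whose loudness
# tracks the true flow rate recorded by the gravimetric uroflowmeter.
rng = np.random.default_rng(0)
true_flow = np.clip(25 * np.sin(np.linspace(0, np.pi, 300)), 0, None)  # mL/s
audio = np.repeat(true_flow, HOP) * rng.standard_normal(300 * HOP)

X, y = frame_features(audio), true_flow
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
r, p = pearsonr(model.predict(X), y)   # illustrative fit on training data only
print(f"frame-level Pearson r = {r:.2f} (p = {p:.2e})")
```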

Results A total of 52 voiding sessions were captured, of which n=35 were used for training and n=17 for testing the algorithm. Each voiding session was divided into 0.1 s frames, resulting in >300 analysable datapoints per session. Pearson's coefficients showed strong correlations for flow times (r=0.96, p<0.0001), voided volumes (r=0.83, p<0.0001) and average flow rates (r=0.70, p=0.0019), and a moderate correlation for maximal flow rates (r=0.69, p=0.0022). AF-predicted flow patterns showed good agreement with UF tracings. The main limitations were the small participant sample size and the use of a single smartphone model. A sketch of how the compared parameters can be derived from a frame-level flow curve follows.
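
The four parameters compared above (flow time, voided volume, average and maximal flow rate) can each be computed from a predicted flow curve sampled at 0.1 s intervals. A minimal sketch follows; the 0.5 mL/s onset/offset threshold is an assumption, as the paper does not state one.

```python
import numpy as np

def uroflow_summary(flow, frame_s=0.1, onset_threshold=0.5):
    """Derive standard uroflowmetry summary parameters from a
    frame-level flow curve (mL/s, one value per 0.1 s frame)."""
    active = np.flatnonzero(flow > onset_threshold)      # frames with flow
    flow_time = (active[-1] - active[0] + 1) * frame_s   # s
    voided_volume = float(flow.sum()) * frame_s          # mL, rectangle rule
    q_avg = voided_volume / flow_time                    # mL/s
    q_max = float(flow.max())                            # mL/s
    return flow_time, voided_volume, q_avg, q_max

# Example on a synthetic 30 s flow curve:
flow = np.clip(25 * np.sin(np.linspace(0, np.pi, 300)), 0, None)
print(uroflow_summary(flow))
```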

Conclusions ML-augmented AF can predict uroflow parameters with good accuracy and can be a viable alternative to standard UF. Further work is needed to develop this platform for use in real-life conditions and across genders.

  • uroflowmetry
  • audio
  • machine learning
  • artificial intelligence
  • lower urinary tract symptoms

Footnotes

  • Contributors EJA: data collection and study planning, manuscript writing. BBT: data analysis and statistics, manuscript writing. YSLN: data collection, study planning. TLCK: study planning, manuscript supervision. KSL: manuscript supervision. JSC: data analysis, app development. J-MC: principal investigator (technical), study and manuscript supervision. LGN: principal investigator (clinical), study and manuscript supervision.

  • Funding This study was funded by SingHealth Surgery ACP-SUTD Technology and Design Multidisciplinary Development Programme (grant TDMD-2016-1).

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Ethics approval This study was approved by the institutional review board (CIRB 2017/2241) and supported by the SingHealth Surgery ACP-SUTD Technology and Design Multidisciplinary Development Programme (grant No. TDMD-2016-1).

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement The data are not publicly available as they contain information that could compromise the privacy of research participants.