An objective comparison of detection and segmentation algorithms for artefacts in clinical endoscopy
Ali, Sharib; Zhou, Felix; Braden, Barbara; Bailey, Adam; Yang, Suhui; Cheng, Guanju; Zhang, Pengyi; Li, Xiaoqiong; Kayser, Maxime; Soberanis-Mukul, Roger D.; Albarqouni, Shadi; Wang, Xiaokang; Wang, Chunqing; Watanabe, Seiryo; Oksuz, Ilkay; Ning, Qingtian; Yang, Shufan; Khan, Mohammad Azam; Gao, Xiaohong W.; Realdon, Stefano; Loshchenov, Maxim; Schnabel, Julia A.; East, James E.; Wagnieres, Georges; Loschenov, Victor B.; Grisan, Enrico; Daul, Christian; Blondel, Walter; Rittscher, Jens
Abstract
We present a comprehensive analysis of the submissions to the first edition of the Endoscopy Artefact Detection (EAD) challenge. Using crowd-sourcing, this initiative is a step towards understanding the limitations of existing state-of-the-art computer vision methods applied to endoscopy and towards promoting the development of new approaches suitable for clinical translation. Endoscopy is a routine imaging technique for the detection, diagnosis and treatment of diseases in hollow organs such as the esophagus, stomach, colon, uterus and bladder. However, the nature of these organs prevents imaged tissues from being free of imaging artefacts such as bubbles, pixel saturation, organ specularity and debris, all of which pose substantial challenges for any quantitative analysis. Consequently, the potential for improved clinical outcomes through quantitative assessment of abnormal mucosal surfaces observed in endoscopy videos is presently not fully realized. The EAD challenge promotes awareness of, and addresses, this key bottleneck by investigating methods that can accurately classify, localize and segment artefacts in endoscopy frames as critical prerequisite tasks. Using a diverse, curated, multi-institutional, multi-modality, multi-organ dataset of video frames, the accuracy and performance of 23 algorithms were objectively ranked for artefact detection and segmentation. The ability of the methods to generalize to unseen datasets was also evaluated. The best performing methods (top 15%) propose deep learning strategies to reconcile variability in artefact appearance with respect to size, modality, occurrence and organ type. However, no single method outperformed the others across all tasks. Detailed analyses reveal the shortcomings of current training strategies and highlight the need for new, optimal metrics that accurately quantify the clinical applicability of methods.
| Journal Article Type | Article |
| --- | --- |
| Acceptance Date | Jan 9, 2020 |
| Online Publication Date | Feb 17, 2020 |
| Publication Date | Dec 2020 |
| Deposit Date | Feb 26, 2021 |
| Publicly Available Date | Feb 26, 2021 |
| Journal | Scientific Reports |
| Publisher | Nature Publishing Group |
| Peer Reviewed | Peer Reviewed |
| Volume | 10 |
| Issue | 1 |
| Article Number | 2748 |
| DOI | https://doi.org/10.1038/s41598-020-59413-5 |
| Keywords | Oesophagogastroscopy, Translational research |
| Public URL | http://researchrepository.napier.ac.uk/Output/2744676 |
Files
An Objective Comparison Of Detection And Segmentation Algorithms For Artefacts In Clinical Endoscopy
(2.4 MB)
PDF
Publisher Licence URL
http://creativecommons.org/licenses/by/4.0/
Copyright Statement
This article is licensed under a Creative Commons Attribution 4.0 International License.