Classification of Dental Radiographs Using Deep Learning

Objectives: To retrospectively assess radiographic data and to prospectively classify radiographs (namely, panoramic, bitewing, periapical, and cephalometric images), we compared three deep learning architectures for their classification performance. Methods: Our dataset consisted of 31,288 panoramic, 43,598 periapical, 14,326 bitewing, and 1176 cephalometric radiographs from two centers (Berlin, Germany; Lucknow, India). For a subset L of 32,381 images, image classifications were available and had been manually validated by an expert. The remaining subset U was iteratively annotated using active learning: a ResNet-34 was trained on L, least-confidence informative sampling was performed on U, and the most uncertain image classifications from U were reviewed by a human expert and used for re-training. We then employed a baseline convolutional neural network (CNN), a residual network (another ResNet-34, pretrained on ImageNet), and a capsule network (CapsNet) for classification. Early stopping was used to prevent overfitting. Model performance was evaluated using stratified k-fold cross-validation. Gradient-weighted Class Activation Mapping (Grad-CAM) was used to visualize the weighted activation maps. Results: All three models showed high accuracy (>98%), with ResNet achieving significantly higher accuracy, F1-score, precision, and sensitivity than the baseline CNN and CapsNet (p < 0.05). Specificity was not significantly different. ResNet achieved the best performance, with small variance and the fastest convergence. Misclassification was most common between bitewings and periapicals. Model activation was most notable in the inter-arch space for bitewings, interdentally for periapicals, on bony structures of the maxilla and mandible for panoramics, and on the viscerocranium for cephalometrics. Conclusions: Regardless of the model, high classification accuracies were achieved. Image features considered for classification were consistent with expert reasoning.
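The least-confidence sampling step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the current model's softmax outputs over the unlabeled pool U are available as a NumPy array, scores each image by 1 minus its top class probability, and returns the k most uncertain images for expert review.

```python
import numpy as np

def least_confidence_sampling(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k most uncertain samples under least-confidence.

    probs: array of shape (n_samples, n_classes) holding softmax outputs
           of the current model on the unlabeled pool U.
    Uncertainty is 1 - max class probability; higher means less confident.
    """
    uncertainty = 1.0 - probs.max(axis=1)
    # argsort ascending, then take the k largest uncertainties (descending)
    return np.argsort(uncertainty)[-k:][::-1]

# Toy pool of 4 "images" over 4 classes
# (panoramic, bitewing, periapical, cephalometric)
probs = np.array([
    [0.97, 0.01, 0.01, 0.01],  # confident: keep the model's label
    [0.40, 0.35, 0.15, 0.10],  # most uncertain: route to expert
    [0.55, 0.30, 0.10, 0.05],  # also uncertain
    [0.90, 0.05, 0.03, 0.02],
])
print(least_confidence_sampling(probs, k=2))  # -> [1 2]
```

In the study's loop, the expert-reviewed labels for the selected images would be moved from U into L and the ResNet-34 re-trained before the next sampling round.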


Bibliographic Details
Main Authors: Jose E. Cejudo, Akhilanand Chaurasia, Ben Feldberg, Joachim Krois, Falk Schwendicke
Format: Article
Language: English
Published: MDPI AG, 2021-04-01
Series: Journal of Clinical Medicine
ISSN: 2077-0383
DOI: 10.3390/jcm10071496
Subjects: artificial intelligence; classification; deep learning; dental; machine learning; radiographs; teeth
Online Access: https://www.mdpi.com/2077-0383/10/7/1496
Author Affiliations:
Jose E. Cejudo, Ben Feldberg, Joachim Krois, Falk Schwendicke: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, 14197 Berlin, Germany
Akhilanand Chaurasia: ITU/WHO Focus Group AI on Health, Topic Group Dentistry, 1211 Geneva, Switzerland