Digital epidemiology – Calibrating readers to score dental images remotely

Bibliographic Details
Main Authors: Dye, B.A. (Author), Ellwood, R.P. (Author), Goodwin, M. (Author), Pretty, I.A. (Author)
Format: Article
Language: English
Published: Elsevier Ltd 2018
Subjects:
Online Access: View Fulltext in Publisher
LEADER 03675nam a2200601Ia 4500
001 10.1016-j.jdent.2018.04.020
008 220706s2018 CNT 000 0 und d
020 |a 0300-5712 (ISSN) 
245 1 0 |a Digital epidemiology – Calibrating readers to score dental images remotely 
260 0 |b Elsevier Ltd  |c 2018 
856 |z View Fulltext in Publisher  |u https://doi.org/10.1016/j.jdent.2018.04.020 
520 3 |a Background: There is growing interest in using digital photographs in dental epidemiology. However, the reporting of training procedures and of the metric-based performance outcomes used to promote data quality prior to the actual scoring of digital images has not been optimal. Methods: A training study was undertaken to assess training methodology and to select a group of scorers to assess images for dental fluorosis captured during the 2013–2014 National Health and Nutrition Examination Survey (NHANES). Ten examiners and 2 reference examiners assessed dental fluorosis using the Dean's Index (DI) and the Thylstrup-Fejerskov (TF) Index. Trainees were evaluated using 128 digital images of upper anterior central incisors at three different periods and with approximately 40 participants during two other periods. Scoring of all digital images was done using a secured, web-based system. Results: When assessing for nominal fluorosis (apparent vs. non-apparent), the unweighted kappa for DI ranged from 0.68 to 0.77, and when using an ordinal scale, the linear-weighted kappa for DI ranged from 0.43 to 0.69 during the final evaluation. When assessing for nominal fluorosis using TF, the unweighted kappa ranged from 0.67 to 0.89, and when using an ordinal scale, the linear-weighted kappa for TF ranged from 0.61 to 0.77 during the final evaluation. No examiner improvement was observed when a clinical assessment feature was added during training to assess dental fluorosis using TF; results using DI were less clear. Conclusion: Providing examiners with theoretical material and scoring criteria prior to training may be minimally sufficient to calibrate examiners to score digital photographs. There may be some benefit in providing in-person training to discuss criteria and review previously scored images. 
Previous experience as a clinical examiner seems to provide a slight advantage in scoring photographs for DI, but minimizing the number of scorers does improve inter-examiner concordance for both DI and TF. © 2018 
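The linear-weighted kappa that the abstract reports for ordinal DI and TF scores can be sketched as a generic Cohen's weighted-kappa calculation. This is an illustration of the statistic only, not the authors' analysis code, and the example examiner scores are invented:

```python
def linear_weighted_kappa(r1, r2, n_cat):
    """Cohen's kappa with linear weights for two raters' ordinal scores (0..n_cat-1)."""
    n = len(r1)
    # Marginal score distributions for each rater.
    p = [r1.count(k) / n for k in range(n_cat)]
    q = [r2.count(k) / n for k in range(n_cat)]
    # Observed and chance-expected weighted disagreement; the linear
    # weight is |i - j| (any constant scaling cancels in the ratio).
    obs = sum(abs(a - b) for a, b in zip(r1, r2)) / n
    exp = sum(p[i] * q[j] * abs(i - j)
              for i in range(n_cat) for j in range(n_cat))
    return 1 - obs / exp

# Hypothetical Dean's Index scores (0-4) from two examiners:
a = [0, 1, 2, 2, 3, 4, 1, 0]
b = [0, 1, 2, 3, 3, 4, 2, 0]
print(round(linear_weighted_kappa(a, b, 5), 2))  # → 0.84
```

Unlike the unweighted kappa used for the nominal (apparent vs. non-apparent) comparison, the linear weights penalize a one-category disagreement less than a larger one, which suits ordinal indices such as DI and TF.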
650 0 4 |a adolescent 
650 0 4 |a Adolescent 
650 0 4 |a calibration 
650 0 4 |a Calibration 
650 0 4 |a child 
650 0 4 |a Child 
650 0 4 |a Cross-Sectional Studies 
650 0 4 |a cross-sectional study 
650 0 4 |a Data Accuracy 
650 0 4 |a Data quality 
650 0 4 |a Dental Enamel 
650 0 4 |a dental fluorosis 
650 0 4 |a Dental fluorosis 
650 0 4 |a Dental public health 
650 0 4 |a diagnostic imaging 
650 0 4 |a enamel 
650 0 4 |a Epidemiology 
650 0 4 |a Fluorosis, Dental 
650 0 4 |a human 
650 0 4 |a Humans 
650 0 4 |a incisor 
650 0 4 |a Incisor 
650 0 4 |a Inter-examiner reliability 
650 0 4 |a Kappa 
650 0 4 |a measurement accuracy 
650 0 4 |a medical photography 
650 0 4 |a NHANES 
650 0 4 |a nutrition 
650 0 4 |a Nutrition Surveys 
650 0 4 |a observer variation 
650 0 4 |a Observer Variation 
650 0 4 |a Photography, Dental 
650 0 4 |a procedures 
650 0 4 |a reproducibility 
650 0 4 |a Reproducibility of Results 
650 0 4 |a Training 
700 1 |a Dye, B.A.  |e author 
700 1 |a Ellwood, R.P.  |e author 
700 1 |a Goodwin, M.  |e author 
700 1 |a Pretty, I.A.  |e author 
773 |t Journal of Dentistry