32iQ: Panoramic X-Ray Assistant
Problem & Motivation
32iQ is an analytics and computer vision platform that can be used as a diagnostic aid on panoramic x-ray images. In dentistry, radiological errors frequently contribute to misdiagnosis, with perceptual errors accounting for nearly 70% of these misdiagnoses. Our mission is to enable healthcare professionals to triage dental ailments sooner and more efficiently, thereby improving outcomes for patients.
Data Source & Data Science Approach
Our dataset was gathered from the Tufts Dental Database, which included 1,000 de-identified panoramic radiographs. These images were randomly selected from a patient database at the Tufts University School of Dental Medicine and annotated by both an expert and a student. Labels included 5 main classes of abnormality, each with more granular sub-classifications. All images were 1615 × 840 pixel .jpg files, and each sample included the panoramic radiograph, a teeth mask overlaid on the radiograph, and the maxillomandibular region of interest.
We developed an image classification model to power our platform. Our baseline model performed multi-class classification, using the expert labels as "ground truth," with a stacked CNN. Our final model leveraged a pretrained AlexNet to which we applied hyperparameter tuning; its output is a binary prediction along with a confidence score. In parallel, we interviewed and worked with subject matter experts to design an intuitive front-end application that lets dental professionals leverage our model predictions.
Evaluation
Our final model achieved a test accuracy of 92.07% and a test ROC-AUC of 95.48%, in line with other published approaches. Our approach to AI-supported diagnosis leverages widely available, non-invasive panoramic x-rays, whereas other tools on the market today require intraoral images to achieve higher accuracy. Our front-end application was also lauded as more user-friendly and easier to onboard users onto.
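Both metrics can be computed with scikit-learn. The sketch below uses tiny, hypothetical labels and scores purely for illustration (not our actual test set); accuracy is threshold-dependent, while ROC-AUC scores the raw confidences independent of any threshold.

```python
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical labels and model outputs, for illustration only.
y_true = [0, 0, 1, 1]             # ground truth: 0 = normal, 1 = abnormal
y_score = [0.1, 0.4, 0.35, 0.8]   # model confidence for the abnormal class
y_pred = [1 if s >= 0.5 else 0 for s in y_score]  # threshold at 0.5

acc = accuracy_score(y_true, y_pred)   # fraction of correct predictions
auc = roc_auc_score(y_true, y_score)   # threshold-independent ranking quality
```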
Key Learnings & Impact
Throughout model development, we found that variations in contrast, as well as samples with no teeth or large numbers of implants, made it difficult for our model to learn the relevant features. We therefore cleaned the dataset to remove such images, while augmenting the dataset size 5x using flips and rotations. Model serving and deployment took longer than expected, but it was quite rewarding when the pieces came together.
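A 5x expansion from flips and rotations can be sketched with NumPy; the exact transforms in our pipeline aren't shown here, so the particular set below (original, two flips, two 90-degree rotations) is an assumption for illustration.

```python
import numpy as np

def augment_5x(image: np.ndarray) -> list:
    """Expand one image into five: original, two flips, two rotations.

    The 90-degree rotations transpose height and width; since images are
    resized to the model's input size before training, this is acceptable.
    """
    return [
        image,
        np.fliplr(image),      # horizontal flip
        np.flipud(image),      # vertical flip
        np.rot90(image, k=1),  # 90 degrees counter-clockwise
        np.rot90(image, k=3),  # 90 degrees clockwise
    ]
```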
Acknowledgements
A big thank you to all the subject matter experts in the field of dentistry who made this project possible, especially faculty at the University of Colorado School of Dental Medicine and the Tufts University School of Dental Medicine.