Automatic Colorectal Cancer Screening Using Deep Learning in Spatial Light Interference Microscopy Data
The surgical pathology workflow currently adopted by clinics uses staining to reveal tissue architecture within thin sections. A trained pathologist then conducts a visual examination of these slices and, since this investigation rests on an empirical assessment, a certain amount of subjectivity is unavoidable. Furthermore, the reliance on external contrast agents such as hematoxylin and eosin (H&E), albeit a well-established approach, makes it difficult to standardize color balance, staining strength, and imaging conditions, which in turn hinders automated computational analysis. In response to these challenges, we applied spatial light interference microscopy (SLIM), a label-free method that generates contrast based on intrinsic tissue refractive index signatures. This approach reduces human bias and makes imaging data comparable across instruments and clinics. We applied a Mask R-CNN deep learning algorithm to the SLIM data to achieve an automated colorectal cancer screening procedure, i.e., classifying normal vs. cancerous specimens. On a tissue microarray comprising specimens from 132 patients, our method achieved 91% accuracy for gland detection, 99.71% accuracy in gland-level classification, and 97% accuracy in core-level classification. A SLIM tissue scanner accompanied by an application-specific deep learning algorithm may become a valuable clinical tool, enabling faster and more accurate assessments by pathologists.
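To make the pipeline concrete, the sketch below shows how a Mask R-CNN could be adapted to gland detection and classification on SLIM phase maps. This is a minimal illustration using torchvision's standard Mask R-CNN implementation, not the authors' exact configuration: the class labeling, image size, and channel-replication step are assumptions for the sake of a runnable example.

```python
# Minimal sketch: adapting a COCO-pretrained Mask R-CNN to gland-level
# classes on SLIM phase images. Class count, input size, and the
# channel-replication trick are illustrative assumptions.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3  # background, normal gland, cancerous gland (assumed labeling)

def build_model(num_classes: int = NUM_CLASSES) -> torch.nn.Module:
    """Replace the pretrained heads with heads sized for our classes."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the box classification head.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Swap the mask prediction head.
    in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
    return model

model = build_model()
model.eval()
with torch.no_grad():
    # A SLIM phase map is single-channel; replicate it to 3 channels
    # so it matches the pretrained RGB backbone.
    phase = torch.rand(1, 512, 512)   # placeholder for a normalized phase map
    image = phase.expand(3, -1, -1)
    predictions = model([image])      # list of dicts: boxes, labels, scores, masks
```

In this framing, per-gland labels from the detector support gland-level classification directly, and a core-level decision can then be derived by aggregating the gland predictions within each tissue core.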