(284e) Automated Detection of Apoptotic Bodies in Label-Free Time-Lapse High-Throughput Microscopy Using Deep Convolutional Neural Networks | AIChE

Authors 

Wu, K. L. - Presenter, National Taiwan University
Varadarajan, N., University of Houston
Roysam, B., University of Houston
Menon, P., University of Houston
Reichel, K., University of Houston
Deo, S., University of Houston
Martinez-Paniagua, M., University of Houston
Immunotherapy has altered the cancer treatment landscape and enabled clinical responses lasting for decades. Most treatment modalities remove tumor cells by inducing programmed cell death, or apoptosis, which involves the release of apoptotic bodies (ApoBDs) from dying cells. ApoBDs can augment the durability of the immune response, as they are efficiently phagocytosed and catalyze adaptive immunity in the host. A robust approach to identifying the induction time of apoptosis in time-lapse assays is therefore necessary for a deeper mechanistic understanding of immunotherapy and for further advances in therapeutics. The multiplexing limitations of high-throughput microscopy and the biochemical perturbations caused by fluorescent labels in conventional methods motivated us to investigate label-free detection. However, identifying ApoBDs in label-free images with traditional pattern-recognition algorithms is challenging because of their small size and complex, variable morphology. We present a label-free, phase-contrast-microscopy-based high-throughput method to detect the emergence of ApoBDs in human melanoma cells using deep convolutional neural network (CNN)-based image analysis with minimal human annotation effort. Reliable analysis of the visual appearance of ApoBDs was made possible by a ResNet50 classifier, which achieved 90% accuracy when detecting ApoBDs in single frames and a frame-based error of < 1 frame (5 min) when estimating the ApoBD formation time in image sequences. To reduce the manual labor of annotating training data for the CNN, we developed a strategy that combines blob detection and gradient-based localization to generate annotations automatically. A Mask R-CNN trained on an image dataset augmented with saliency heat maps reached an intersection-over-union (IoU) accuracy of 75% when segmenting individual ApoBDs.
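One way to turn per-frame classifications into a formation-time estimate is to take the first frame that opens a sustained run of positive detections, which suppresses isolated false positives. The sketch below is a hypothetical illustration of that idea, not the authors' implementation; the `min_run` parameter and the 5-minute frame interval are assumptions for the example.

```python
# Hypothetical sketch (not the authors' code): estimate ApoBD formation
# time from binary per-frame classifier outputs (1 = ApoBD detected),
# sampled every 5 minutes, by finding the first sustained run of positives.

def formation_frame(preds, min_run=3):
    """Index of the first frame opening a run of >= min_run consecutive
    positive predictions, or None if no such run exists."""
    run_start, run_len = None, 0
    for i, p in enumerate(preds):
        if p:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len >= min_run:
                return run_start
        else:
            run_len = 0
    return None

# Frame 1 is a transient false positive; frame 4 opens the sustained run.
preds = [0, 1, 0, 0, 1, 1, 1, 1, 0]
frame = formation_frame(preds)   # -> 4
minutes = frame * 5              # 5-minute frame interval -> 20
```

A larger `min_run` trades detection latency for robustness to spurious single-frame detections.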
Our results provide a robust tool for ApoBD detection and a resource-efficient way to generate strong annotations for image-based assays.
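The intersection-over-union score used to evaluate the segmentations can be computed directly on binary masks. The following is a minimal sketch of the standard metric, not the authors' evaluation code:

```python
import numpy as np

# Standard IoU on binary masks: |A & B| / |A | B|.
def iou(mask_a, mask_b):
    """IoU of two boolean masks; returns 0.0 when both masks are empty."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return np.logical_and(a, b).sum() / union

pred  = np.array([[1, 1, 0],
                  [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
iou(pred, truth)   # intersection = 2 px, union = 4 px -> 0.5
```

Per-object IoU (as in instance segmentation with Mask R-CNN) applies this same computation after matching each predicted mask to its best-overlapping ground-truth instance.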