(Proper credits should be given to the speakers if the slides are reproduced or published)


Title: Unsupervised Learning of Localized Texture Patterns of Pulmonary Emphysema on Computed Tomography

Speaker: Andrew Laine, Columbia University

PDF Slides.

Computed tomography (CT) imaging enables in vivo assessment of the lung parenchyma and of several lung diseases. CT scans are key to the diagnosis of chronic obstructive pulmonary disease (COPD), the fourth leading cause of death worldwide, which overlaps with pulmonary emphysema. We propose an original unsupervised approach to learning emphysema-specific radiological texture patterns. We designed dedicated spatial and texture features and a two-stage learning strategy that combines clustering and graph partitioning. Learning was performed on a cohort of 2,922 high-resolution CT scans that included a high prevalence of smokers and COPD subjects. Our experiments led to the discovery of 10 highly reproducible, spatially informed lung texture patterns and 6 quantitative emphysema subtypes (QES). The discovered QES were independently associated with distinct risks of symptoms, physiologic changes, exacerbations, and mortality. Genome-wide association studies identified loci associated with four subtypes, one with compelling functional evidence. We then developed a deep-learning network, using unsupervised domain adaptation with adversarial training, to label the QES on cardiac CT scans, which cover about two-thirds of the lungs. Our method accounts for acquisition differences in CT image quality and enabled us to study the progression of QES in a cohort of 17,039 longitudinal cardiac CT scans. The discovered QES provide a novel emphysema subphenotyping that may facilitate future studies of emphysema development, understanding of the stages of COPD, and the design of personalized therapies.
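To make the two-stage strategy concrete, here is a minimal sketch, assuming k-means over-clustering of patch-level features followed by spectral partitioning of a cluster-similarity graph; the feature matrix, cluster counts, and Gaussian affinity are illustrative assumptions, not the speaker's actual pipeline.

```python
# Two-stage unsupervised sketch: k-means on patch features, then graph
# partitioning of the resulting cluster-similarity graph. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 40))   # stand-in for spatial + texture features

# Stage 1: over-cluster the patches into many fine-grained texture clusters.
fine = KMeans(n_clusters=100, n_init=10, random_state=0).fit(features)

# Build an affinity between fine clusters: a Gaussian kernel on centroid
# distances (one plausible similarity choice among many).
d = np.linalg.norm(
    fine.cluster_centers_[:, None, :] - fine.cluster_centers_[None, :, :], axis=-1
)
affinity = np.exp(-(d ** 2) / (2 * np.median(d) ** 2))

# Stage 2: partition the cluster graph into a small number of final patterns.
coarse = SpectralClustering(n_clusters=10, affinity="precomputed", random_state=0)
pattern_of_cluster = coarse.fit_predict(affinity)

# Map each patch to its final texture pattern.
patch_pattern = pattern_of_cluster[fine.labels_]
```

Over-clustering first and then merging via the graph lets patches with similar texture but different fine-cluster assignments end up in the same final pattern.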



Title: Explaining Deep Learning Using Radiologist Defined Semantic Features

Speaker: Dmitry Goldgof, University of South Florida

PDF Slides.

Quantitative features are generated from a tumor phenotype by various data characterization and feature extraction approaches. These features describe a nodule, e.g., nodule size, pixel intensity, histogram-based information, and texture information from wavelets or a convolution kernel. Semantic features, on the other hand, are generated by an experienced radiologist and consist of common characteristics of a tumor, e.g., its location, fissure or pleural wall attachment, presence of fibrosis or emphysema, a concave cut on the nodule surface, etc. Semantic features have also shown promise in predicting malignancy. Deep features are generally extracted from the last layers before the classification layer of a convolutional neural network (CNN). These networks have a strong capability to learn specific patterns and textures from different types of images. However, features extracted by a CNN cannot be easily explained (a black box), since they are simply activations at particular positions of neurons in the hidden layers. In this talk, we propose a new approach to the explainability of deep features via semantic and quantitative features. Specifically, we discuss how traditional quantitative features and semantic features can be used to relate to and explain deep features. We also show how twenty-six deep features from the VGG-S neural network and twelve deep features from our trained CNN could be explained by semantic or traditional quantitative features. The proposed approach, which can be applied to enhance the explainability of various medical image applications, shows promise toward transparent, understandable, and explainable decision-making.
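As one way to picture relating the feature families, the hedged sketch below regresses each deep feature on the interpretable features and flags those with high cross-validated R²; the data shapes and the 0.5 cutoff are hypothetical stand-ins, not values from the talk.

```python
# Relate deep features to quantitative/semantic ones: regress each deep
# feature on the interpretable features; a well-fit regression suggests the
# deep feature is "explained" by them. Illustrative data and threshold.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
quant = rng.normal(size=(200, 15))   # e.g., size, intensity, texture statistics
deep = rng.normal(size=(200, 26))    # deep features from a CNN's last layers

r2_scores = []
for j in range(deep.shape[1]):
    r2 = cross_val_score(LinearRegression(), quant, deep[:, j],
                         cv=5, scoring="r2").mean()
    r2_scores.append(r2)

# Deep features whose cross-validated R^2 clears a (hypothetical) cutoff
# are considered explainable by the interpretable features.
explained = [j for j, r2 in enumerate(r2_scores) if r2 > 0.5]
print(explained)
```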




Title: Lessons learned from Machine Learning in Google applied to Medical and Biological Imaging

Speaker: Ming Jack Po, Google

In March of 2016, the AlphaGo computer program beat world champion (and human) Lee Sedol at the board game Go. The program's success reflected the significant progress that machine learning research has made in recent years. However, AlphaGo was just one example of what can be achieved with machine learning. This talk will provide an overview of some of the techniques that are being used in machine learning today, as well as some recent and ongoing work by Google's research teams to advance the applications of machine learning, particularly its role in biomedical research. The talk will also discuss some of the unique challenges around applications in healthcare.




Title: Automated Visual Evaluation (AVE) for Cervical Cancer Screening and its Challenges

Speaker: Zhiyun (Jaylene) Xue, National Library of Medicine

PDF Slides.

Cervical cancer is one of the leading gynecologic diseases affecting the lives of many women worldwide, especially in low- and medium-resource regions (LMRR). Regular screening, early diagnosis, and treatment play a critical role in the prevention of cervical cancer. Visual inspection with acetic acid (VIA) is an inexpensive screening approach commonly used in LMRR, but it has been shown to be inaccurate. In this talk, we present our work on utilizing deep learning techniques to automatically evaluate the visual appearance of the acetowhitened uterine cervix using mobile devices, with the goal of assisting, or perhaps ultimately replacing, VIA. We face well-known challenges that affect the use of deep learning for medical imaging applications, such as having a small amount of labeled data or highly unbalanced datasets. There are also additional practical challenges that need to be addressed for using AVE in the real world, such as the control of image quality and its influence on the AVE algorithm, the robustness of the algorithm across multiple devices, and the adjustment of AVE for variability in cervix appearance across different geographical regions. The talk will describe our findings and the challenges that guide next steps for research in the field.
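As a small illustration of one standard remedy for the class-imbalance challenge mentioned above, the sketch below applies inverse-frequency class weights to a cross-entropy loss in PyTorch; the class counts are hypothetical and this is not the speakers' AVE pipeline.

```python
# Inverse-frequency class weighting: errors on the minority class contribute
# more to the loss. A generic sketch with hypothetical counts.
import torch
import torch.nn as nn

class_counts = torch.tensor([900.0, 100.0])   # hypothetical negative vs. positive
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)                    # stand-in model outputs for a batch
labels = torch.randint(0, 2, (8,))
loss = criterion(logits, labels)              # minority-class errors weigh more
```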




Title: Integrating Imaging, Omics, and Clinical Data towards Improved Outcome Management in Lung Cancer

Speaker: Mu Zhou, SenseBrain AI Technology

PDF Slides.

Growing amounts of medical imaging and molecular data offer new opportunities to better understand cancer biology. In this talk, I will highlight ongoing progress in modeling multi-scale biomedical data, linking imaging, omics, and clinical data to advance our understanding of lung cancer. First, I will present recent work on linking CT images and RNA profiles in non-small cell lung cancer (NSCLC). We propose an image-to-genomics map to identify non-invasive biomarkers with prognostic implications by leveraging public gene expression cohorts. Second, I will discuss how we can develop efficient image-based deep learning classifiers to predict survival outcomes in lung cancer across multiple clinical centers. I will also address ongoing clinical data challenges and opportunities in related areas.
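For the survival-prediction part, one common formulation trains a network to output a risk score under a Cox partial-likelihood loss; the sketch below is that generic formulation (Breslow handling of ties), not necessarily the method used in the talk, and the data are toy stand-ins.

```python
# Negative Cox partial log-likelihood for deep survival models (Breslow).
# Sorting by descending follow-up time makes the cumulative logsumexp
# equal the log of the sum of exp(risk) over each subject's risk set.
import torch

def cox_ph_loss(risk, time, event):
    """risk: (n,) scores; time: (n,) follow-up; event: (n,) 1 if observed.
    Assumes at least one observed event in the batch."""
    order = torch.argsort(time, descending=True)
    risk, event = risk[order], event[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)
    return -((risk - log_risk_set) * event).sum() / event.sum()

scores = torch.randn(16, requires_grad=True)   # stand-in for CNN risk scores
times = torch.rand(16) * 60                    # toy follow-up in months
events = (torch.rand(16) > 0.5).float()        # 1 = event observed
loss = cox_ph_loss(scores, times, events)
loss.backward()
```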




Title: Deep neural ensembles for improved pulmonary abnormality detection in chest radiographs

Speaker: Sivaramakrishnan Rajaraman, National Library of Medicine

PDF Slides.

Cardiopulmonary diseases account for a significant proportion of deaths and disabilities across the world. Chest X-rays are a common diagnostic imaging modality for confirming intra-thoracic cardiopulmonary abnormalities. However, there remains an acute shortage of expert radiologists, particularly in under-resourced settings, which results in interpretation delays and has a global health impact. These issues can be mitigated by an artificial intelligence (AI)-powered computer-aided diagnostic (CADx) system. Such a system could help supplement decision-making and improve throughput while preserving, and possibly improving, the standard of care. The majority of such AI-based diagnostic tools at present use data-driven deep learning (DL) models that perform automated feature extraction and classification. Convolutional neural networks (CNNs), a class of DL models, have gained significant research prominence in image classification, detection, and localization tasks. The literature shows that they deliver promising results that scale impressively with the number of training samples and computational resources. However, these techniques can be adversely impacted by their sensitivity to high variance or fluctuations in the training data. Ensemble learning helps mitigate these issues by combining predictions and blending intelligence from multiple learning algorithms. The complex non-linear functions constructed within ensembles help improve robustness and generalization, and empirical results have demonstrated that ensemble predictions are superior to those of conventional stand-alone CNN models. In this talk, I will describe example work at the NLM that uses model ensembles to improve pulmonary abnormality detection in chest radiographs.
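A minimal sketch of the simplest form of prediction-level ensembling, unweighted averaging of class probabilities across independently trained models, is shown below; the dummy models stand in for trained CNNs with a Keras-style predict(), and weighted averaging or stacking are common variants.

```python
# Prediction-level ensembling: average softmax outputs across models.
import numpy as np

class DummyModel:
    """Stand-in for a trained CNN exposing a Keras-style predict()."""
    def __init__(self, seed):
        self.rng = np.random.default_rng(seed)
    def predict(self, x):
        logits = self.rng.normal(size=(len(x), 2))
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)   # per-class probabilities

def ensemble_predict(models, x):
    """Unweighted blend: mean of each model's class probabilities."""
    probs = np.stack([m.predict(x) for m in models])  # (n_models, n, classes)
    return probs.mean(axis=0)

models = [DummyModel(s) for s in range(3)]
x = np.zeros((4, 224, 224, 1))                        # toy batch of radiographs
print(ensemble_predict(models, x).argmax(axis=1))     # ensembled class labels
```

Averaging reduces the variance of any single model's predictions, which is exactly the sensitivity to training-data fluctuations described above.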




Title: Intel AI Technologies for Efficient Medical Image Analysis

Speaker: Anthony Reina, Intel Corporation

PDF Slides.

Anthony will walk through how to optimize deep learning models using the Intel® Distribution of OpenVINO™ toolkit. OpenVINO is an open-source software solution that allows developers to optimize their deep learning pipelines and get maximum performance on Intel hardware. He’ll give an overview of several FDA-cleared deep learning models deployed in hospitals today, give tips on how to design high-throughput, low-latency models, and cover different use cases applied to MRI and X-ray medical images.
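As a taste of the deployment workflow, here is a hedged sketch of running an already-converted model with OpenVINO's pre-2022 IECore Python API; the .xml/.bin paths are placeholders for a model previously exported to IR format by the Model Optimizer.

```python
# Load an OpenVINO IR model and run one inference on the CPU.
# Sketch only: "model.xml"/"model.bin" are placeholder paths.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
input_name = next(iter(net.input_info))                 # first input layer name
exec_net = ie.load_network(network=net, device_name="CPU")

# Zero image shaped to the network's expected input, as a stand-in.
shape = net.input_info[input_name].input_data.shape
image = np.zeros(shape, dtype=np.float32)
result = exec_net.infer(inputs={input_name: image})     # dict of output blobs
```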