Hi,

Ten years ago, when I first started working on histology applications using machine learning, hand-crafted features and a simple classifier were the way to go. These features could be used to differentiate tissue types or detect cancer, and sometimes they had a bit of predictive power for more complex tasks like predicting outcomes.

These features were easy to understand because they typically replicated properties that pathologists already looked for in characterizing tumor tissue. But they were also limited for this same reason.

Deep learning has now taken over the field, with the vast majority of pathology image analysis papers employing convolutional neural networks.

So how can we interpret these black box models? It’s definitely not as simple as with traditional features, but there are solutions.

This week I’d like to show you three different opportunities for getting explainability from your models.

1) Combine deep learning with hand-crafted features

Deep learning doesn’t need to be used for the whole model. In [1], Diao et al. used deep learning to classify tissue types and individual cell types. From that, they generated many human-interpretable features characterizing cell- and tissue-level properties.

Applying a linear classifier to these features, they were able to predict five different molecular properties with varying degrees of accuracy on five types of cancer.

This combination of human-interpretable features and a simple classifier provided insight into which features can distinguish cancer types.
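As a rough sketch of this style of pipeline (not the authors' implementation), the snippet below fits a logistic regression on a few hypothetical cell- and tissue-level features and prints the signed coefficients, which show how each interpretable feature pushes the prediction.

```python
# A minimal sketch: a linear classifier on hand-crafted, human-interpretable
# features derived (hypothetically) from deep-learning cell/tissue maps.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical per-slide features, e.g. lymphocyte density, tumor-stroma ratio.
feature_names = ["lymphocyte_density", "tumor_stroma_ratio", "mean_nucleus_area"]
X = np.random.rand(200, len(feature_names))   # placeholder feature matrix
y = np.random.randint(0, 2, size=200)         # placeholder molecular label

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Signed coefficients indicate which interpretable features drive the prediction.
for name, coef in zip(feature_names, model.named_steps["logisticregression"].coef_[0]):
    print(f"{name}: {coef:+.3f}")
```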


2) Visualize patch-level model outputs

Another approach to gaining model explainability is to focus on the model's outputs rather than on the features it captures.

Large whole slide images are typically broken into smaller image patches for model training. While a model may be trained to predict some property for the whole slide, it can also produce scores for individual patches. This is often displayed as a heatmap over the slide.
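To illustrate the patch-scoring idea, here is a minimal sketch in which a toy `score_patch` function stands in for a trained patch-level model; real pipelines read whole slide images with libraries such as OpenSlide and batch the patches through a CNN.

```python
# A minimal sketch: tile a slide into patches, score each patch, and collect
# the scores into a grid that can be overlaid on the slide as a heatmap.
import numpy as np

def score_patch(patch):
    # Placeholder for a trained model's patch-level prediction in [0, 1].
    return float(patch.mean()) / 255.0

def patch_heatmap(slide, patch_size=256):
    h, w = slide.shape[:2]
    rows, cols = h // patch_size, w // patch_size
    heatmap = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            patch = slide[i*patch_size:(i+1)*patch_size,
                          j*patch_size:(j+1)*patch_size]
            heatmap[i, j] = score_patch(patch)
    return heatmap  # overlay on a slide thumbnail to visualize high-scoring regions

slide = np.random.randint(0, 256, size=(2048, 2048, 3), dtype=np.uint8)  # toy slide
print(patch_heatmap(slide).shape)
```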

In [2], Courtiol et al. identified the 10 patches most and least associated with survival. They also identified patches associated with a good prognosis and, for comparison, their most similar patches that were not predictive. From this, they could determine the tissue characteristics associated with a good or poor prognosis.
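The ranking step itself is simple once per-patch scores exist; a hypothetical sketch:

```python
# Given per-patch risk scores, pull out the patches most and least associated
# with the outcome so they can be reviewed with a pathologist.
import numpy as np

scores = np.random.rand(5000)     # hypothetical per-patch risk scores
order = np.argsort(scores)

lowest_risk_idx = order[:10]      # 10 patches least associated with poor outcome
highest_risk_idx = order[-10:]    # 10 patches most associated with poor outcome

print("highest risk patches:", highest_risk_idx)
print("lowest risk patches: ", lowest_risk_idx)
```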

 

3) Apply a generative model


The above approach identifies important tissue regions. But, for some classification tasks, the distinction between classes is more subtle.

In [3], Schutte et al. applied StyleGAN, a type of generative adversarial network, to increase or decrease the tumor probability in an image.
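As a loose sketch of the general latent-manipulation idea (not the authors' exact method), assuming a pretrained GAN generator `G` and a tumor classifier `C`, one can push a latent code along the classifier's gradient and compare the images generated along the way.

```python
# A hypothetical sketch: nudge a GAN latent code so the synthesized image looks
# more (or, with a negative step, less) tumor-like according to a classifier.
import torch

def traverse(G, C, z, steps=5, step_size=0.5):
    """Return images generated while pushing z toward higher tumor probability."""
    images = []
    z = z.clone().requires_grad_(True)
    for _ in range(steps):
        img = G(z)                       # synthesize an image from the latent code
        prob = C(img)                    # classifier's tumor score for that image
        grad = torch.autograd.grad(prob.sum(), z)[0]
        z = (z + step_size * grad).detach().requires_grad_(True)  # step toward "more tumor"
        images.append(img.detach())
    return images
```

Comparing the images along this path highlights which visual changes the classifier treats as more or less tumor-like.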

GANs have found a variety of uses in pathology, but I found this one particularly intriguing.

[3] Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images (synopsis)

There are a number of other explainability techniques out there, like pixel attribution methods (e.g., GradCAM in the image above). Be sure to consider your use case as you select a method.
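For reference, Grad-CAM itself fits in a few lines of PyTorch; the sketch below assumes a CNN `model` and one of its convolutional layers `target_layer`, and libraries such as Captum or pytorch-grad-cam offer more robust implementations.

```python
# A minimal Grad-CAM sketch: weight the target layer's activation maps by the
# spatially averaged gradients of the class score, then upsample to image size.
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    activations, gradients = [], []

    def fwd_hook(module, inp, out):
        activations.append(out)

    def bwd_hook(module, grad_in, grad_out):
        gradients.append(grad_out[0])

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    model.zero_grad()
    score = model(image)[0, class_idx]   # image shaped (1, C, H, W)
    score.backward()
    h1.remove(); h2.remove()

    acts, grads = activations[0], gradients[0]                 # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)             # channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))    # weighted sum of maps
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()                # normalized heatmap
```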

Explainability is important for debugging models, but also for ensuring model fairness and identifying potential biases. Interpretable models can even help make new scientific discoveries!

Do you know someone who would be interested in these insights?

Please forward this email along. Go here to sign up.

Hope that you’re finding Pathology ML Insights informative. Look out for another edition in two weeks.

Heather
 
Fight cancer & climate change with AI
Distilling the latest research to help R&D teams implement better models and create an impact
Pixel Scientia Labs, LLC, 9650 Strickland Rd., Suite 103-242, Raleigh, NC 27615, United States
