Hi,

Gathering annotations on histopathology images to train a machine learning model is both time-consuming and costly. Some models can get by with slide- or patch-level annotations, while others require more detailed point- or pixel-level labels.


Annotations for mitosis detection typically take one of two forms: point annotations that mark only the centroid, or pixel-level annotations that delineate the entire mitotic figure for training a segmentation model.

Benchmark datasets exist with each annotation type, but the pixel-level ones tend to be smaller because of the increased labeling effort required.

Today I want to look at some ways in which both types of annotations can be used, followed by an innovative method for generating mitosis annotations and a better feature representation.

Weak and Strong Mitosis Annotations

Sebai et al. brought centroid and pixel annotations together in a single model that can be trained on both types of datasets [1].

They created two parallel networks: one using the weak labels (mitosis centroid) and the other using the strong (segmentation) labels. The two networks share weights. Both networks perform segmentation but use different loss functions.

For training examples with only a centroid label, they applied a concentric loss that ignores pixels in a ring around the centroid, since it's unknown which of those pixels belong to the mitosis. The loss is therefore applied only to pixels closer than a predetermined inner distance to the centroid and to those farther than a larger outer distance. For samples with segmentation annotations, they used a standard pixel-wise loss.


Different label types for mitoses. The strong segmentation branch uses pixel-level labels (b), while examples with weak labels follow the concentric segmentation branch (c) [1]
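As a rough sketch (not the authors' code), the concentric loss can be expressed as a per-pixel weight map that zeros out the uncertain ring before applying an ordinary pixel-wise loss. The function names and the radii `r_inner`/`r_outer` are illustrative assumptions, not names from the paper:

```python
import numpy as np

def concentric_loss_mask(shape, centroid, r_inner, r_outer):
    """Per-pixel labels and weights for a centroid-only annotation.

    Pixels within r_inner of the centroid are treated as mitosis (label 1),
    pixels beyond r_outer as background (label 0), and the ring in between
    gets weight 0 -- it is ignored because its true class is unknown.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = centroid
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    labels = (dist <= r_inner).astype(np.float32)
    weights = ((dist <= r_inner) | (dist > r_outer)).astype(np.float32)
    return labels, weights

def weighted_bce(pred, labels, weights, eps=1e-7):
    """Binary cross-entropy averaged only over pixels with weight 1."""
    p = np.clip(pred, eps, 1 - eps)
    ce = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    return (ce * weights).sum() / weights.sum()
```

For the strong branch, the same `weighted_bce` with all weights set to 1 reduces to the usual pixel-wise loss, which is what lets the two branches share weights while differing only in their loss masks.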

Improving Small Mitoses with Bounding Boxes

Kausar et al. focused on detection instead of segmentation but sought to improve performance for small mitoses [2].

They trained a Faster R-CNN detector using a multi-scale region proposal network with custom anchor sizes. But they needed an accurate bounding box around each mitosis -- something that's not provided in centroid-only datasets. So they trained a segmentation model on a smaller dataset with pixel-level annotations and applied it to the centroid-annotated datasets to derive bounding boxes.
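The conversion from a predicted segmentation mask to a box label is straightforward. A hypothetical sketch of that step (assuming the segmentation model has already produced a binary mask for a crop around each annotated centroid):

```python
import numpy as np

def mask_to_bbox(mask):
    """Tight bounding box (y0, x0, y1, x1) around the foreground of a
    binary mask, using half-open pixel coordinates.

    Returns None if the mask is empty. Applied to a segmentation
    prediction for a crop centered on an annotated centroid, the
    resulting box can serve as a detection training label.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1
```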

Their model improved mitosis detection performance slightly for larger mitoses and significantly for smaller ones!


Detection performance for Kausar et al.’s MS-RCNN vs. Faster-RCNN for different sized mitoses [2]


Mitosis Annotations from PHH3

I’m most intrigued by this paper by Mercan et al., which hypothesizes that learning the mapping between H&E and PHH3 or vice versa would capture features relevant for mitosis detection [3].

PHH3 is an immunohistochemical marker that highlights cells undergoing mitosis, making the detection task easier.

They used a Generative Adversarial Network (GAN) to map H&E to PHH3 and vice versa, then trained a CNN for mitosis detection on the synthetic images.

After testing several scenarios, they found that the best one used the H&E-to-PHH3 GAN: the features learned by this GAN were the most beneficial for detecting mitoses in H&E.


Mercan et al.’s GAN models to find mitoses in H&E using PHH3 to improve the feature representation [3]

Wrap Up

The lessons learned from the papers above extend beyond mitosis detection. Weak and strong annotations can be used to train a single model. And discriminative models can be augmented by alternative forms of data.

Think outside the box for ways to combine different types of annotations into a single model. And consider bringing in other modalities -- particularly other imaging ones -- to strengthen the representation learned by deep models or to provide an alternative means for annotating.


I'm trying something new this month...

The pandemic has made it more challenging to network and meet others working in the field, so I'm starting monthly office hours for graduate students.

I'm happy to chat about our projects, your research, industry trends, career opportunities, or other topics.

The first session will be on Thursday, July 15, from 12 to 1 pm EDT. Zoom link is available here.

Dates for future office hours will also be posted at the above link.
 
Fight cancer & climate change with AI
Distilling the latest research to help R&D teams implement better models and create an impact
Pixel Scientia Labs, LLC, 9650 Strickland Rd., Suite 103-242, Raleigh, NC 27615, United States
