
Research: Bias


Detecting Melanoma Fairly: Skin Tone Detection and Debiasing for Skin Lesion Classification


How can we make AI less biased with respect to skin tone?

Medical AI has a critical problem: bias that can impact diagnostic accuracy. Research has shown that AI trained primarily on lighter skin tones can struggle to detect potential melanomas on darker skin – a significant challenge in healthcare technology.

Researchers Peter J. Bevan and Amir Atapour-Abarghouei have developed a solution to address this issue. By adding a gradient reversal layer to their machine learning network, they created a model that learns to diagnose skin lesions without being influenced by skin tone.

Their approach trains the network so that its learned features cannot be used to predict a patient's skin type, while improving its ability to distinguish melanoma from benign lesions. The result is more consistent diagnostic performance across different skin tones.
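The gradient-reversal idea can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the encoder architecture, dimensions, and class names below are invented for the example. The key piece is a layer that acts as the identity on the forward pass but negates gradients on the backward pass, so the adversarial skin-tone head pushes the shared encoder to *discard* skin-tone information:

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the feature extractor.
        return -ctx.lambda_ * grad_output, None


class DebiasedClassifier(nn.Module):
    """Shared encoder with two heads: lesion diagnosis and an adversarial skin-tone head."""

    def __init__(self, feat_dim=128, n_tones=6):
        super().__init__()
        # Toy encoder for illustration; a real model would use a CNN backbone.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.diagnosis_head = nn.Linear(feat_dim, 2)   # melanoma vs. benign
        self.tone_head = nn.Linear(feat_dim, n_tones)  # adversarial skin-type head

    def forward(self, x, lambda_=1.0):
        z = self.encoder(x)
        # The tone head receives reversed gradients, so minimizing its loss
        # drives the encoder to remove skin-tone information from z.
        return self.diagnosis_head(z), self.tone_head(GradientReversal.apply(z, lambda_))
```

During training, both heads are optimized with ordinary cross-entropy losses; the reversal alone turns the tone head into an adversary.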

This research is an important step towards more equitable healthcare technology. By reducing racial bias in medical imaging, we can work towards ensuring more reliable diagnostic care for all patients.

Podcast: Impact AI


Foundation Model Series: Empowering Drug Discovery with Rick Schneider from Helical


AI is transforming drug discovery by making biological data more accessible and actionable, bridging the gap between complex sequencing data and real-world therapeutic breakthroughs. As Rick Schneider puts it, it's all about leveraging powerful models to “build use cases that matter and bring value.”

In this episode of Impact AI, we hear from the CEO and Co-founder of Helical to find out how bio-foundation models are transforming pharmaceutical research. Rick shares how Helical’s AI platform enables drug discovery by leveraging biological sequencing data without requiring companies to build their own models from scratch. He also reveals the challenges of working with high-dimensional biological data, the power of model specialization for specific therapeutic areas, and the growing role of open-source AI in healthcare innovation.

Whether you're in biotech, AI, or simply curious about the future of medicine, this episode offers invaluable insights into how AI is shaping the next generation of drug discovery. Tune in today!

Research: Bias


Confounders mediate AI prediction of demographics in medical imaging


Medical imaging is more complex than we might think. Different imaging modalities encode demographic information in varying degrees of visibility:

- In dermoscopy, skin tone is an explicit feature
- Chest x-rays often reveal sex-specific characteristics
- Other modalities may contain more subtle demographic signals

The critical concern is that machine learning models might inadvertently leverage these demographic cues instead of focusing on the relevant biological information, potentially introducing unintended bias.

Grant Duffy et al. conducted a revealing study on cardiac ultrasound images, investigating the predictability of demographic characteristics.

They found that age and sex could be predicted quite accurately from the images, but race much less so.

The Bias Hypothesis

The researchers proposed an intriguing mechanism: race prediction might not stem from direct racial markers, but from "shortcutting" through correlated demographic features like sex and age.

To test this theory, they engineered datasets in which race was deliberately confounded with age or sex. The critical observation: as the bias in the data distribution increased, the model's apparent ability to predict race approached its performance on the confounding characteristic (in this case, age or sex).
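The shortcutting mechanism can be illustrated with a toy simulation (my own sketch, not the study's experiment). Suppose a model can only detect sex, with some fixed accuracy, and race labels agree with sex with probability `confounding`. Then the model's "race accuracy" rises toward its sex accuracy as the confounding strengthens, with no racial information ever being used:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20_000  # simulated patients


def race_accuracy_via_sex_shortcut(confounding, sex_model_acc=0.95):
    """Accuracy of 'predicting' race using only a sex classifier, when the
    binary race label agrees with sex with probability `confounding`."""
    sex = rng.integers(0, 2, N)
    race = np.where(rng.random(N) < confounding, sex, 1 - sex)
    sex_pred = np.where(rng.random(N) < sex_model_acc, sex, 1 - sex)
    return float((sex_pred == race).mean())


for c in (0.5, 0.7, 0.9, 1.0):
    acc = race_accuracy_via_sex_shortcut(c)
    print(f"race/sex confounding {c:.1f} -> race accuracy {acc:.3f}")
```

At confounding 0.5 (no correlation) the shortcut is at chance; as confounding approaches 1.0, race accuracy approaches the sex model's own accuracy, mirroring the trend the authors observed.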

This suggests that apparent race prediction might be an artifact of underlying data distribution patterns rather than a genuine ability to discern racial characteristics.

Implications

The study highlights the importance of:
- Carefully examining training data for hidden biases
- Understanding how machine learning models might exploit subtle correlations
- Developing more robust methods to prevent demographic information from influencing predictive models

Insights: Image Resolution


Resolution Agnosticism in Digital Pathology AI: Balancing Detail and Context


A question from my recent webinar on foundation models for pathology: Are there studies on how resolution agnostic the models are? Would I get better results by working on higher zoom levels with a pre-trained model?

When it comes to AI models in digital pathology, the question of resolution agnosticism is crucial.

Here's what recent research tells us:

1. 𝐌𝐮𝐥𝐭𝐢-𝐑𝐞𝐬𝐨𝐥𝐮𝐭𝐢𝐨𝐧 𝐀𝐝𝐯𝐚𝐧𝐭𝐚𝐠𝐞
Studies show that foundation models trained on diverse magnification levels often outperform single-resolution models. Multi-scale training exposes the model to both fine cellular detail and broader tissue architecture.

2. 𝐓𝐚𝐬𝐤-𝐃𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧
The ideal resolution depends on your specific task. Some features are only visible at high magnifications, while others require wider context.

3. 𝐓𝐫𝐚𝐝𝐞-𝐨𝐟𝐟𝐬 𝐭𝐨 𝐂𝐨𝐧𝐬𝐢𝐝𝐞𝐫
Higher resolutions offer more detail but increase computational demands and may miss larger structural patterns. Lower resolutions provide broader context but might miss crucial cellular details.
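The detail-versus-context trade-off can be made concrete with a toy NumPy sketch. This is an illustration only: real pipelines would read pyramid levels from a whole-slide image with a library such as OpenSlide (`read_region`), whereas here a plain array stands in for one slide level, and the function name is invented. Each extracted patch has the same pixel size, but higher downsampling factors cover a wider field of view at coarser detail:

```python
import numpy as np


def multiscale_patches(image, center, patch=256, scales=(1, 4, 16)):
    """Extract same-size patches around `center` at several downsampling factors.
    Larger factors trade cellular detail for wider tissue context."""
    cy, cx = center
    out = {}
    for s in scales:
        half = patch * s // 2
        region = image[cy - half:cy + half, cx - half:cx + half]
        out[s] = region[::s, ::s]  # naive stride-based downsampling
    return out


wsi = np.random.rand(8192, 8192)  # stand-in for one level of a whole-slide image
patches = multiscale_patches(wsi, center=(4096, 4096))
for s, p in patches.items():
    print(f"scale 1/{s}: patch {p.shape}, field of view {p.shape[0] * s}px")
```

All three patches are 256x256 pixels, yet their fields of view span 256, 1024, and 4096 source pixels, which is exactly the detail/context trade-off a task-dependent choice of magnification navigates.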

Enjoy this newsletter? Here are more things you might find helpful:


Office Hours -- Are you a student with questions about machine learning for pathology or remote sensing? Do you need career advice? Once a month, I'm available to chat about your research, industry trends, career opportunities, or other topics.
Register for the next session

My postal address: Pixel Scientia Labs, LLC, PO Box 98412, Raleigh, NC 27624, United States

