
Hi,


Last week, I hosted a webinar diving into the latest computer vision developments from CVPR 2025. I want to ensure you have access to the insights, even if you couldn't attend the live event.


What You'll Learn:
The session covers the rapid evolution from task-specific models to AI agents, including:


  • How foundation models are expanding into specialized domains

  • Why multimodal systems use language as the "glue" between modalities

  • The emergence of multi-agent systems that mimic human expert workflows

  • Real examples from cutting-edge research papers

Watch the full recording here


The computer vision landscape is shifting faster than many realize. This 30-minute session will bring you up to speed on where the field is heading and what it means for practical applications.


Worth your time if you're working with visual data in any capacity.


Heather

Research: Continual Learning


Towards exploring continual learning for toxicologic pathology in pharmaceutical drug discovery


Drug safety testing requires analyzing tissue changes in animal models over months of study progression. Yet most AI models suffer from "catastrophic forgetting" - losing their ability to recognize previously learned features when trained on new data.

Arijit Patra et al. address a critical challenge in pharmaceutical drug discovery: how to build AI systems that continuously learn from new toxicologic pathology data without forgetting what they've already learned.

𝗧𝗵𝗲 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲:
In Investigational New Drug studies, tissue imaging data arrives sequentially over extended periods as new tissue types are analyzed and different animal models are incorporated. Current machine learning approaches either require retraining from scratch each time new data arrives or suffer dramatic performance drops on previously learned tasks. Retaining old data for retraining is further complicated by privacy regulations and storage constraints in pharmaceutical settings.

𝗧𝗵𝗲 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵:
The researchers developed a novel "storage-free" continual learning method that combines two key innovations:
- 𝗟𝗮𝘁𝗲𝗻𝘁 𝗿𝗲𝗽𝗹𝗮𝘆: Instead of storing actual tissue images from previous studies, the system learns statistical representations using Gaussian Mixture Models. This generates synthetic samples that capture past data distributions without privacy concerns.
- 𝗔𝘁𝘁𝗲𝗻𝘁𝗶𝗼𝗻-𝗴𝘂𝗶𝗱𝗲𝗱 𝗿𝗲𝗴𝘂𝗹𝗮𝗿𝗶𝘇𝗮𝘁𝗶𝗼𝗻: The model preserves "attention embeddings" that capture the most important spatial features from previous tissue analysis tasks, ensuring critical diagnostic information isn't lost.
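
To make the replay idea more concrete, here is a minimal sketch of how storage-free latent replay with class-conditional Gaussian Mixture Models and an attention-preservation penalty could be wired together. This is an illustration under assumptions, not the authors' implementation: the encoder, the attention_maps helper, the number of mixture components, and the loss weights are all placeholders.

    # Minimal sketch: storage-free latent replay via class-conditional GMMs plus
    # an attention-preservation penalty. Illustrative only -- the encoder,
    # attention_maps helper, component count, and loss weights are assumptions.
    import torch
    import torch.nn.functional as F
    from sklearn.mixture import GaussianMixture

    def fit_class_gmms(encoder, dataloader, n_components=4):
        """After a task finishes, summarize each class's latent distribution
        with a small GMM instead of retaining the raw tissue images."""
        encoder.eval()
        feats, labels = [], []
        with torch.no_grad():
            for images, y in dataloader:
                feats.append(encoder(images).flatten(1).cpu())
                labels.append(y.cpu())
        feats = torch.cat(feats).numpy()
        labels = torch.cat(labels).numpy()
        return {int(c): GaussianMixture(n_components).fit(feats[labels == c])
                for c in set(labels.tolist())}

    def sample_replay(gmms, per_class=8):
        """Draw synthetic latent vectors (with labels) from the stored GMMs."""
        xs, ys = [], []
        for c, gmm in gmms.items():
            x, _ = gmm.sample(per_class)
            xs.append(torch.as_tensor(x, dtype=torch.float32))
            ys.append(torch.full((per_class,), c, dtype=torch.long))
        return torch.cat(xs), torch.cat(ys)

    def continual_step(encoder, head, attention_maps, old_attention_maps,
                       images, labels, gmms, replay_w=1.0, attn_w=0.1):
        """One step on a new task: current-task loss + loss on replayed latents
        + a penalty that keeps attention embeddings close to the old model's."""
        latents = encoder(images).flatten(1)
        loss = F.cross_entropy(head(latents), labels)
        if gmms:  # latent replay for previously learned classes
            z, y = sample_replay(gmms)
            loss = loss + replay_w * F.cross_entropy(
                head(z.to(latents.device)), y.to(latents.device))
        if old_attention_maps is not None:  # attention-guided regularization
            with torch.no_grad():
                target = old_attention_maps(images)
            loss = loss + attn_w * F.mse_loss(attention_maps(images), target)
        return loss

The key design point is that only GMM parameters and attention embeddings persist between studies, so no study-level images need to be stored.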

𝗞𝗲𝘆 𝗥𝗲𝘀𝘂𝗹𝘁𝘀:
Testing on a dataset of 320 whole slide images from nine preclinical rat studies showed significant improvements:
- Traditional fine-tuning led to 40% accuracy loss on previously learned tissues
- The new approach reduced this to just 3.6% loss while maintaining performance on new tissue types
- The method outperformed existing continual learning techniques by substantial margins
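
For context on where numbers like the 40% and 3.6% figures come from, continual-learning evaluations typically report "forgetting" as the gap between a task's best accuracy earlier in training and its accuracy after all subsequent tasks, averaged over tasks. The snippet below shows that standard metric; the paper's exact evaluation protocol may differ.

    # Standard "average forgetting" metric used in continual-learning evaluations.
    # acc[i][j] = accuracy on task j measured after training on task i.
    def average_forgetting(acc):
        T = len(acc)
        drops = [max(acc[i][j] for i in range(j, T - 1)) - acc[T - 1][j]
                 for j in range(T - 1)]
        return sum(drops) / len(drops)

    # Toy example with two tissue tasks: accuracy on task 0 falls from 0.90 to
    # 0.50 after learning task 1, the kind of drop naive fine-tuning shows.
    print(average_forgetting([[0.90, 0.00],
                              [0.50, 0.88]]))  # -> ~0.4 (a 40-point drop)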

𝗜𝗺𝗽𝗮𝗰𝘁 𝗳𝗼𝗿 𝗗𝗿𝘂𝗴 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁:
This addresses a real bottleneck in pharmaceutical workflows where approximately 70% of toxicity-related drug failures occur in preclinical phases. By enabling more robust and adaptable AI analysis of tissue pathology, this could accelerate safety assessments and reduce development timelines.

This work represents an important step toward deploying reliable AI in regulatory toxicologic pathology workflows, where models must continuously adapt to new data streams while maintaining accuracy on established tissue recognition tasks.

Insights: Bias


Uncovering Hidden Biases: The Critical Role of Explainability in Medical Imaging AI


During my recent webinar on bias and batch effects in medical imaging, a thought-provoking question emerged that highlights one of our field's most pressing challenges: How can we identify biases we haven't even thought to look for?

The discussion turned to model explainability as a powerful tool in our quality assurance arsenal. While there are many approaches to explainability, one interesting method involves asking our models to visualize what makes an image "more like" or "less like" a particular diagnosis.

This visualization approach offers a window into our models' decision-making processes. When we examine these visualizations carefully, we can detect whether our algorithms are focusing on clinically meaningful features or inadvertently exploiting artifacts, batch effects, or other spurious correlations.
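
As one simple, concrete instance of this kind of visualization, a gradient-based saliency map shows which pixels push a prediction toward ("more like") or away from ("less like") a chosen diagnosis. This is only one of many explainability techniques and is not necessarily the exact method discussed in the webinar; the model and input handling below are placeholders.

    # Gradient-based saliency sketch: which pixels make an image look "more like"
    # or "less like" a chosen diagnosis class. Illustrative only; `model` and the
    # input pipeline are placeholder assumptions.
    import torch

    def diagnosis_saliency(model, image, class_idx):
        """Return a signed per-pixel map of d(class score) / d(input pixel).

        Positive values mark pixels pushing the prediction toward the diagnosis;
        negative values mark pixels pushing it away."""
        model.eval()
        x = image.clone().unsqueeze(0).requires_grad_(True)  # (1, C, H, W)
        score = model(x)[0, class_idx]
        score.backward()
        return x.grad[0].sum(dim=0)  # collapse channels -> (H, W) signed map

    # Reviewing these maps helps check whether the model attends to tissue
    # morphology or to artifacts such as slide edges or staining batch effects.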

The value here is twofold: First, explainability techniques can reveal unexpected biases before they impact patient care. Second, and perhaps more intriguingly, they might occasionally highlight legitimate biological patterns that human experts haven't yet formalized—potentially advancing our medical understanding.

As we move toward wider clinical adoption of AI in healthcare, explainability isn't a luxury—it's a necessity. Black-box models that perform well in validation but can't explain their reasoning pose significant risks in high-stakes medical environments.

What explainability techniques have you found most valuable in your medical AI work? Have they ever revealed surprising insights about your models or data?

Enjoy this newsletter? Here are more things you might find helpful:



1 Hour Strategy Session -- What if you could talk to an expert quickly? Are you facing a specific machine learning challenge? Do you have a pressing question? Schedule a 1 Hour Strategy Session now. Ask me anything about whatever challenges you’re facing. I’ll give you no-nonsense advice that you can put into action immediately.
Schedule now


