
Hi,


Don't miss your chance to learn how to avoid the common pitfalls that can derail your computer vision projects. Join me tomorrow, April 2, for a FREE webinar at 11 AM EDT!


Discover How to Overcome:

  • Inconsistent annotations that skew your model's performance.

  • The lack of baseline models that makes it hard to measure progress.

  • Data leakage that undermines your model's reliability.


Takeaways:

  • Practical strategies to enhance model reliability and performance.

  • Insights from real-world examples and case studies.

  • Opportunities to ask questions and engage with experts.


Last-minute registration is still open! Click the link below to secure your spot and start building more robust computer vision projects today!


Register now


Let's elevate your skills together!


Heather

Research: 3D Pathology


AI-driven 3D Spatial Transcriptomics


The explosion of multimodal foundation models, combined with the increasing availability of larger biomedical datasets, now makes it possible to connect different representations of tissue.

Cristina Almagro-Perez et al. created a model to map tissue architecture between spatial transcriptomics and 3D non-destructive imaging such as microCT. With this model, they can predict a 3D spatial transcriptomics view from microCT.

VORTEX was trained in two steps: 1) with disease-specific pairs of 2D (or 3D) tissue images and 2D spatial transcriptomics, and 2) with pairs of 2D (or 3D) tissue images and 2D spatial transcriptomics from the volume of interest.

This approach was validated on prostate, breast, and colorectal cancer datasets.


Demo

Blog

Blog: Pathology Foundation Models


One year of UNI and CONCH


In the past 18 months, foundation models have become a key component of many advances in diagnostics and precision medicine.

In this article, Faisal Mahmood reflects on two models developed last year by his team at Harvard, UNI and CONCH.

Together, they have already accumulated >760 citations and have been applied to >1.2 million slides for clinical diagnosis, patient stratification, and biomarker discovery.

What comes next?

"We anticipate that moving forward, more diverse, higher quality multi-institutional pretraining cohorts incorporating underrepresented diseases, tissue subtypes, and staining protocols, rather than the sheer volume of training data, will further refine model performance and improve robustness. At the same time, expanding integration of high-resolution imaging with other data modalities (e.g., genomics, imaging biomarkers, text) from the broader biomedical context through pathology image foundation models, will likely unlock an entirely new generation of disease-specific insights and impactful applications in precision medicine. We eagerly look forward to the continued collaboration and breakthroughs that will shape the field in the years to come."

Insights: Foundation Models for Segmentation


Leveraging Foundation Models for Downstream Segmentation Tasks in Digital Pathology


A question from my recent webinar on foundation models for pathology: How can you use foundation models for downstream segmentation tasks?

Foundation models are revolutionizing the field of digital pathology, offering powerful pre-trained encoders for various downstream tasks.

Here's how to effectively use them for segmentation:

1. Encoder-Decoder Architecture
Use a pre-trained foundation model as the encoder and implement a learnable decoder, such as a U-Net, for pixel-level segmentation. Fine-tune the decoder and, optionally, parts of the encoder for your specific task.

2. Specialized Foundation Models
The Segment Anything Model (SAM) and its pathology-specific variants show promise. SAM-Path enhances SAM's ability to perform semantic segmentation in digital pathology without human input prompts.

3. Adaptation Strategies
Recent research proposes weakly supervised self-training to adapt SAM to new distributions. This approach improves robustness across various segmentation tasks.
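The encoder-decoder pattern in step 1 can be sketched in a few lines of PyTorch. This is a minimal illustration, not a production pipeline: a small convolutional network stands in for the foundation model encoder (in practice you would load real pre-trained weights, e.g. a ViT backbone), and the decoder here is a simple upsampling head rather than a full U-Net.

```python
import torch
import torch.nn as nn

class FrozenEncoder(nn.Module):
    """Stand-in for a pre-trained foundation model encoder.
    In practice, load real pre-trained weights and freeze them as shown below."""
    def __init__(self, out_channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, out_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)  # 4x-downsampled feature map

class SegmentationHead(nn.Module):
    """Learnable decoder: upsamples encoder features back to pixel-level logits."""
    def __init__(self, in_channels=64, num_classes=2):
        super().__init__()
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(in_channels, 32, kernel_size=2, stride=2),
            nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, kernel_size=2, stride=2),
        )

    def forward(self, feats):
        return self.decode(feats)

encoder = FrozenEncoder()
for p in encoder.parameters():      # freeze the foundation model;
    p.requires_grad = False         # only the decoder is trained

head = SegmentationHead(num_classes=2)

x = torch.randn(1, 3, 128, 128)     # one RGB tissue patch
with torch.no_grad():
    feats = encoder(x)
logits = head(feats)                # per-pixel class logits, same spatial size as input
```

During training, only `head.parameters()` would be passed to the optimizer; unfreezing the last encoder blocks is a common next step when labeled data permits.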

Enjoy this newsletter? Here are more things you might find helpful:


Team Workshop: Mastering Distribution Shift in Computer Vision - Ready to transform your computer vision models into robust systems that thrive in real-world conditions? Join me for an exclusive 90-minute workshop designed to empower your team to identify, understand, and address distribution shift, one of the most critical challenges in building AI systems.

Schedule now

My postal address: Pixel Scientia Labs, LLC, PO Box 98412, Raleigh, NC 27624, United States

