Hi,
I'm heading to CVPR 2025 in Nashville this week!
Looking through the program, I'm thrilled to see the incredible breadth of innovation happening in computer vision. From foundation models pushing new boundaries to exciting multimodal capabilities, this year's talks are packed with cutting-edge research.
What has me particularly excited:
- Multiple sessions on medical computer vision, including foundation models for medical imaging and drug discovery applications
- Several workshops focused on Earth observation and remote sensing - always fascinating to see CV advancing our understanding of the planet
- The convergence of vision, language, and action in embodied AI systems
After presenting at 3 conferences last year, I'm taking a more relaxed approach this time - which means more time for meaningful conversations!
I'm actively planning consulting engagements for fall 2025. If you’re looking to:
- Build more robust and generalizable CV models
- Streamline model development
- Boost investor confidence in your CV/ML initiatives
...let's connect in person at CVPR! It's the perfect opportunity to discuss how to accelerate your computer vision projects.
If you'll be there, just hit reply and let me know - would love to grab coffee and hear about what you're building.
Heather
Podcast: Carbon Sense - Tracking Power Plants from Space Using Satellites and AI with Dr. Heather Couture
I recently had the pleasure of being a guest on Sean Crowell's Carbon Sense Podcast, where I discussed the groundbreaking work being done by Climate TRACE to estimate CO₂ emissions from power plants using satellite data and machine learning.
During our conversation, we explored how these advanced remote sensing and computer vision techniques are creating a new dataset that complements traditional greenhouse gas (GHG) estimation methods. Some of the key topics we covered included:
- How emissions are calculated: By combining power generation capacity, capacity factors, and emission factors (see the sketch after this list).
- The power of AI: Leveraging machine learning to detect when power plants are operating and at what intensity.
- Addressing uncertainty: Understanding the variables that affect the accuracy of these emissions estimates.
- Integration with other data: How these methods work alongside direct satellite-based GHG measurements for a more comprehensive view.
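As a rough sketch of that bottom-up calculation in Python (the function name, units, and example numbers are illustrative of the general formula, not Climate TRACE's actual model):

    # Bottom-up emissions estimate for a single power plant (illustrative).
    HOURS_PER_YEAR = 8760

    def annual_co2_tonnes(capacity_mw, capacity_factor, emission_factor_t_per_mwh):
        """Estimate annual CO2 emissions in tonnes.

        capacity_mw: nameplate generating capacity (MW)
        capacity_factor: fraction of the year the plant effectively runs (0-1);
            this is the quantity ML models infer from satellite imagery
        emission_factor_t_per_mwh: tonnes of CO2 emitted per MWh generated
        """
        generation_mwh = capacity_mw * capacity_factor * HOURS_PER_YEAR
        return generation_mwh * emission_factor_t_per_mwh

    # Example: a 500 MW coal plant at ~60% capacity factor and ~1.0 t CO2/MWh
    print(f"{annual_co2_tonnes(500, 0.60, 1.0):,.0f} tonnes CO2/year")  # 2,628,000

The satellite-and-ML piece is what estimates the capacity factor; the other two inputs come from plant databases and fuel-specific emission factors.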
It was fantastic to connect with Sean and share insights about Climate TRACE’s mission to increase transparency and accountability in global emissions tracking.
If you’re interested in climate science, remote sensing, or AI, I highly recommend checking out both parts of this podcast series, including Part 1 with Aaron Davitt for an introduction to Climate TRACE.
Listen to the full episode
Listen to Part 1 about Climate TRACE
Explore the data yourself at https://climatetrace.org
Research: Tissue Thickness Variations - Impact of Tissue Thickness on Computational Quantification of Features in Whole Slide Images for Diagnostic Pathology
A difference of just a few micrometers - thinner than a human hair - can dramatically alter how AI algorithms "see" cancer cells.
Background: As computational pathology algorithms integrate into diagnostic workflows, we're confronting an unexpected challenge: variables that pathologists intuitively adjust for during visual assessment can significantly confound automated feature extraction. While pathologists readily compensate for variations in section thickness, staining intensity, and tissue artifacts during interpretation, machine learning models may interpret these pre-analytical variables as biologically meaningful signals.
Recent research from Manav Shah et al. examined a deceptively simple question: does the thickness of tissue sections affect what AI algorithms can detect? Using 144 thyroid tissue samples cut at thicknesses ranging from 0.5 to 10 micrometers, the researchers systematically isolated tissue section thickness as the primary variable while controlling for other factors.
The findings reveal significant impacts:
- Visual quality degradation: Thinner sections appeared more transparent with distinct cellular features, while thicker sections became darker with increased artifacts
- Contrast variations: WSI contrast increased when moving from thin to thick sections
- Algorithm sensitivity: Computational tools using features like Haralick texture analysis showed substantial performance variations across thickness ranges (see the texture sketch after this list)
- Nuclei changes: Nuclei size, intensity, and texture changed across the different section thicknesses
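To make the texture-feature sensitivity concrete, here is a minimal sketch using scikit-image's gray-level co-occurrence matrix (GLCM) properties, close relatives of the Haralick features named above. It is illustrative only, not the paper's pipeline; the synthetic "thin" and "thick" patches simply mimic the pale-vs-dark shift described in the findings:

    # Illustrative GLCM texture features of the kind thickness can shift.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def texture_features(gray_patch):
        """GLCM texture properties of an 8-bit grayscale patch."""
        glcm = graycomatrix(gray_patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}

    rng = np.random.default_rng(0)
    # Pale, low-contrast patch (thin section) vs. darker, high-contrast patch (thick)
    thin = rng.normal(200, 10, (64, 64)).clip(0, 255).astype(np.uint8)
    thick = rng.normal(120, 35, (64, 64)).clip(0, 255).astype(np.uint8)
    print(texture_features(thin))
    print(texture_features(thick))  # same "tissue", different feature values

A downstream classifier built on such features inherits these shifts unless thickness is standardized or corrected for.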
Why this matters: Most pathology labs don't standardize tissue section thickness, typically cutting anywhere from 3-6 micrometers based on technician preference and tissue type. This seemingly minor variation could be introducing systematic biases into AI diagnostic tools without anyone realizing it. As computational pathology moves toward clinical deployment, such "invisible" variables could affect diagnostic accuracy and reproducibility across different laboratories.
Implications for practice: This research suggests that successful AI implementation in pathology may require more standardization than previously recognized. Labs adopting AI tools might need to establish consistent protocols not just for staining and scanning, but for fundamental preparation steps like section thickness. This could impact laboratory workflows, training protocols, and quality assurance procedures.
The work highlights a broader challenge in medical AI: algorithms often detect patterns we didn't know existed, but they're also sensitive to variations we didn't think mattered.
How might these findings influence your applications?
Research: Efficient Foundation Model Fine-tuning - Fine-tune Smarter, Not Harder: Parameter-Efficient Fine-Tuning for Geospatial Foundation Models
Training a massive AI model can cost millions of dollars. But what if you only needed to update 2% of its parameters to achieve the same performance?
Geospatial foundation models like Prithvi and Clay have shown impressive capabilities for Earth observation tasks, from flood detection to crop monitoring. However, these models are growing larger - some exceeding 600 million parameters - making traditional fine-tuning increasingly expensive and resource-intensive for organizations wanting to adapt them for specific use cases.
Francesc Marti Escofet et al. evaluated Parameter-Efficient Fine-Tuning (PEFT) techniques for geospatial applications. Instead of updating all model weights, PEFT methods like Low-Rank Adaptation (LoRA) modify only a small subset of parameters while preserving most of the pre-trained knowledge (a minimal LoRA sketch follows the findings below).
Key findings from testing across five Earth observation datasets:
- LoRA matches or exceeds full fine-tuning performance while reducing memory requirements
- Better geographic generalization: PEFT models maintain performance when applied to new geographic regions not seen during training
- Significant resource savings: Only a small number of additional parameters needed compared to full model retraining
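For intuition, here is a minimal LoRA layer in PyTorch. This is a sketch of the general technique, not the authors' TerraTorch integration; the layer sizes and hyperparameters are illustrative:

    # Minimal LoRA sketch: freeze the pre-trained weight, train a low-rank update.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus trainable low-rank update: W x + (B A x) * scale."""
        def __init__(self, base, rank=8, alpha=16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # pre-trained weights stay fixed
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    # For a 1024x1024 layer at rank 8: ~16k trainable parameters instead of ~1M.
    layer = LoRALinear(nn.Linear(1024, 1024), rank=8)
    print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 16384

Because B starts at zero, the wrapped layer initially behaves exactly like the pre-trained one, and only the tiny A and B matrices need to be stored per downstream task.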
Practical implications: This research addresses a real barrier to AI adoption in Earth observation. Smaller organizations, research institutions, and developing nations can more easily adapt state-of-the-art models for local applications without requiring massive computational infrastructure. The techniques have been integrated into the open-source TerraTorch toolkit, making them accessible to the broader community.
The work demonstrates that efficiency and performance aren't mutually exclusive in geospatial AI - a finding that could accelerate the deployment of foundation models for environmental monitoring, disaster response, and sustainable development applications worldwide.
How might parameter-efficient approaches change AI adoption in your field?
Code
Podcast: Impact AI - Advancing Breast Cancer Screening with Nico Karssemeijer from ScreenPoint Medical
What role can artificial intelligence play in detecting breast cancer earlier, when it's most treatable? In this episode of Impact AI, we hear from Nico Karssemeijer, Chief Science Officer of ScreenPoint Medical, about how his team is using AI to transform breast cancer screening. Drawing on more than four decades of experience in medical imaging, Nico shares how ScreenPoint’s AI tools assist radiologists by analyzing mammograms, highlighting suspicious areas, and even learning from years of patient data. The conversation explores what it takes to build trustworthy medical AI, how to overcome challenges with data diversity and device bias, and why clinical validation matters. To find out how AI is being integrated into real-world healthcare to improve outcomes (and what goes into building a successful AI-powered medical company), tune in today!
Insights: Bias - Tackling Site-Specific Bias in Pathology Foundation Models
A question from my recent webinar on foundation models for pathology: What should I do if the foundation model strongly encodes site-specific features?
A common challenge in computational pathology is dealing with foundation models that encode site-specific features.
This is problematic because:
1. Biased predictions: Models may learn to associate certain features with specific sites rather than true biological characteristics, leading to inaccurate diagnoses or prognoses.
2. Limited generalizability: Models trained on data from one site may perform poorly when applied to images from different hospitals or labs.
3. Overoptimistic performance estimates: When models are trained and validated on data from the same sites, their reported accuracy may be artificially inflated, not reflecting real-world performance.
4. Potential for demographic bias: Site-specific features can correlate with patient demographics, potentially leading to unfair or biased outcomes across different populations.
Here's what you need to know about foundation models and batch effects:
1. Persistent problem: Recent research confirms that even advanced foundation models can inadvertently encode lab-specific characteristics.
2. Awareness is key: Recognizing this issue is the crucial first step in addressing it.
3. Evaluate your data: Assess whether site-specific encoding affects your particular dataset and use case. This can be done with t-SNE or UMAP plots, or by training a classifier on the embeddings to predict the site (see the probe sketch after this list).
4. Adversarial techniques: Domain adversarial networks can be effective in mitigating site-specific bias (a gradient-reversal sketch also follows this list).
5. Multimodal approaches: Consider using multimodal foundation models that integrate various data types, potentially reducing reliance on site-specific visual features.
6. Data diversity: Training on diverse datasets from multiple sites can help reduce site-specific encoding.
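Here is the probe from point 3 as a minimal sketch, assuming you already have an array of frozen foundation-model embeddings and a site label per tile (all names and dimensions are illustrative):

    # Site-leakage probe: can a linear classifier predict the site from embeddings?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def site_leakage_score(embeddings, site_labels):
        """Cross-validated accuracy of predicting site from embeddings;
        accuracy far above chance (1 / n_sites) signals site-specific encoding."""
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, embeddings, site_labels, cv=5).mean()

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 512))    # 300 tiles, 512-dim embeddings
    y = rng.integers(0, 3, size=300)   # which of 3 labs each tile came from
    print(f"site prediction accuracy: {site_leakage_score(X, y):.2f}")  # ~0.33 here

On random data the probe sits at chance; on real embeddings with strong batch effects it can approach 1.0, which is your cue to intervene.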
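And for point 4, the gradient-reversal trick at the heart of domain-adversarial training, again as a hedged PyTorch sketch rather than a definitive implementation (site_head, features, and the losses are hypothetical names):

    # Gradient reversal: identity forward pass, negated gradient backward,
    # so the encoder learns to *remove* whatever the site head can detect.
    import torch

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    def grad_reverse(x, lam=1.0):
        return GradReverse.apply(x, lam)

    # In the training loop:
    #   site_logits = site_head(grad_reverse(features, lam))
    #   loss = task_loss + site_loss   # minimizing site_loss through the
    #                                  # reversed gradient scrubs site cues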
Question for the pathology AI community: What strategies have you found effective in mitigating site-specific biases in foundation models?
Enjoy this newsletter? Here are more things you might find helpful:
Team Workshop: Harnessing the Power of Foundation Models for Pathology - Ready to unlock new possibilities for your pathology AI product development? Join me for an exclusive 90-minute workshop designed to catapult your team’s model development.
Schedule now
Did someone forward this email to you, and you want to sign up for more? Subscribe to future emails
This email was sent to _t.e.s.t_@example.com. Want to change to a different address? Update subscription
Want to get off this list? Unsubscribe
My postal address: Pixel Scientia Labs, LLC, PO Box 98412, Raleigh, NC 27624, United States