Research: EO Time Series Foundation Models Dargana: fine-tuning EarthPT for dynamic tree canopy mapping from space
Accurate tree canopy mapping has traditionally required massive datasets and computing resources, but what if we could achieve excellent results with just a fraction of the data?
Michael J. Smith et al. developed Dargana, a specialized variant of the EarthPT time-series foundation model that efficiently maps tree canopies at 10m resolution, distinguishing between conifer and broadleaved trees. It achieves this using less than 3% of EarthPT's pre-training data volume and only 5% of its pre-training compute resources.
Using Cornwall, UK as their test case, Dargana achieved a pixel-level ROC-AUC of 0.98 and a PR-AUC of 0.83 on unseen satellite imagery. The model can identify fine structures such as hedgerows and small coppices that fall below the minimum mapping unit of the training data, and it can track changes in forest cover over time.
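For readers less familiar with the two metrics quoted above, here is a minimal sketch of how they are computed, using a tiny hypothetical set of six pixel scores (the data and numbers are illustrative, not from the paper; PR-AUC is computed here as average precision):

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC-AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive pixel scores higher than a random negative."""
    y_true = np.asarray(y_true, dtype=bool)
    pos, neg = scores[y_true], scores[~y_true]
    # Compare every positive score to every negative; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def average_precision(y_true, scores):
    """PR-AUC summarized as average precision: mean precision measured
    at the rank of each true positive."""
    order = np.argsort(-scores)
    y_sorted = np.asarray(y_true)[order]
    hits = np.cumsum(y_sorted)
    precision_at_pos = hits[y_sorted == 1] / (np.flatnonzero(y_sorted) + 1)
    return precision_at_pos.mean()

# Hypothetical canopy scores for six pixels (1 = tree canopy).
y = np.array([1, 1, 1, 0, 0, 0])
s = np.array([0.9, 0.8, 0.4, 0.5, 0.2, 0.1])
print(round(roc_auc(y, s), 3), round(average_precision(y, s), 3))  # 0.889 0.917
```

A ROC-AUC of 0.98 therefore means a canopy pixel outranks a non-canopy pixel 98% of the time; PR-AUC is the stricter number when canopy pixels are rare.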
This approach demonstrates how foundation models can be efficiently specialized for monitoring natural resources. By leveraging representations learned during pre-training on multi-spectral optical and SAR observations, Dargana can continuously update canopy classification as new satellite data becomes available.
The implications for environmental monitoring are substantial, potentially enabling more efficient tracking of forest establishment, loss, and health at scales from local to continental, all without the enormous computing and data requirements typically associated with such detailed mapping.
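The specialization pattern described above, reusing representations learned in pre-training and training only a lightweight task head, can be sketched on synthetic data. Everything below is illustrative (the embeddings, dimensions, and learning rate are made up), not Dargana's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical frozen embeddings from a pretrained time-series encoder:
# one 16-d vector per pixel, with canopy pixels shifted along one axis.
X = rng.normal(size=(200, 16))
y = (rng.random(200) < 0.5).astype(float)
X[y == 1, 0] += 2.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train only a small logistic head; the "encoder" stays frozen, which is
# what makes specialization cheap relative to pre-training.
w, b, lr = np.zeros(16), 0.0, 0.1
for _ in range(300):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y) / len(y))
    b -= lr * (p - y).mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(acc)  # separable synthetic data -> high training accuracy
```

Because only the head is trained, new satellite observations can be re-scored (and the head cheaply refreshed) as they arrive, matching the continuous-update workflow the summary describes.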
Podcast: Impact AI Radiology Tools for Precision Medicine with Ángel Alberich-Bayarri from Quibim
How can we harness medical imaging and artificial intelligence to shift healthcare from reactive to predictive? In this episode, I sit down with Ángel Alberich-Bayarri to discuss how artificial intelligence is revolutionizing radiology and precision medicine. Ángel is the CEO of Quibim, a company recognized globally for its AI-powered tools that turn radiological scans into predictive biomarkers, enabling more precise diagnoses and personalized treatments.
In our conversation, we hear how his early work in radiology and engineering led to the founding of Quibim and how the company’s AI-based technology transforms medical images into predictive biomarkers. We unpack the challenges of data heterogeneity, how Quibim tackles image
harmonization using self-supervised learning, and why accounting for regulations is critical when building healthcare AI products. Ángel also shares his perspective on the value of model explainability, the concept of digital twins, and the future of preventive imaging. Join us to discover how AI is disrupting clinical decision-making and preventive healthcare with Ángel Alberich-Bayarri.
Research: Bias Ethical and Bias Considerations in Artificial Intelligence/Machine Learning
Ever wonder why your smartphone's facial recognition works better for some people than others? Or why certain job applicants might be systematically filtered out by AI screening tools? These aren't just technical glitches—they're manifestations of bias in AI systems.
Matthew G. Hanna et al. reviewed the current state of ethical and bias considerations for AI/ML in pathology.
The field of AI ethics has emerged precisely because ML systems are fundamentally human creations that reflect and sometimes amplify our biases. These systems learn patterns from historical data, which often contains implicit prejudices across race, gender, age, and other factors.
What makes this challenge particularly complex is that bias can enter at multiple
stages: in the data collection process, through sampling methods that under-represent certain groups; in feature selection, where the variables we choose to measure may inherently favor certain outcomes; and in the algorithmic design itself, where mathematical formulations can inadvertently create unfair results.
The good news is that researchers and practitioners are developing robust frameworks for identifying and mitigating these biases—from technical approaches like adversarial debiasing to organizational practices such as diverse development teams and regular algorithmic audits.
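One of the simplest audit practices mentioned above can be sketched in a few lines: compare a model's positive-prediction rate across subgroups. The decisions and group labels below are synthetic, and the 0.8 threshold is only a common illustrative rule of thumb, not a universal standard:

```python
import numpy as np

def selection_rates(y_pred, group):
    """Positive-prediction rate per group -- a basic audit statistic."""
    return {g: y_pred[group == g].mean() for g in np.unique(group)}

# Hypothetical screening decisions (1 = advance to interview) for two groups.
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates = selection_rates(y_pred, group)
disparity = max(rates.values()) - min(rates.values())
ratio = min(rates.values()) / max(rates.values())
print(rates, round(disparity, 2), ratio < 0.8)  # large gap -> flag for review
```

A large gap does not prove the model is unfair, but it is exactly the kind of signal a regular algorithmic audit should surface for human review.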
As AI becomes increasingly embedded in critical systems that impact healthcare, employment, criminal justice, and financial access, addressing these ethical challenges isn't just a technical nicety—it's essential for building systems that work fairly for everyone.
Insights: Batch Effects The Hidden Challenge of Site Prediction in Medical AI Models
Are your medical imaging AI models truly generalizable, or are they quietly exploiting hidden correlations? This critical question emerged during my recent webinar on bias and batch effects in medical imaging.
One attendee posed a fascinating question: If your model generalizes well in a leave-sites-out k-fold validation paradigm but can still predict which site an image came from, should you be concerned?
The short answer: Site prediction itself isn't necessarily problematic, but it requires vigilance.
The subtle danger lies in what site prediction might enable your model to predict. Even when a model performs consistently across demographics like race, age, sex, and other clinical variables, there may be unknown confounders
lurking in site-specific patterns that your model leverages.
This is where validation becomes both art and science. While we can't test for every possible confounder, thorough validation across all variables we can identify, combined with continuous monitoring after deployment, represents our best defense against hidden biases.
The reality: Sometimes "good enough now" with comprehensive checks is the best we can do until research reveals new variables we haven't considered.
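The site-prediction check discussed above can be run as a simple probe: fit a classifier to predict site from the model's image features and see whether it beats chance. The sketch below uses synthetic embeddings with a deliberate site-specific shift and a nearest-centroid probe; all data, dimensions, and the 0.6 threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical image embeddings from two sites, with a site-specific
# intensity shift (a batch effect) added on top of noise.
site = np.repeat(["site1", "site2"], 50)
features = rng.normal(size=(100, 8))
features[site == "site2"] += 0.8  # the hidden site signature

def site_probe_accuracy(X, site, n_splits=5):
    """Fit a nearest-centroid 'site probe' with simple k-fold CV.
    Accuracy well above chance means site is recoverable from features."""
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, n_splits)
    labels = np.unique(site)
    accs = []
    for k in range(n_splits):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_splits) if j != k])
        centroids = np.stack([X[train][site[train] == g].mean(axis=0) for g in labels])
        d = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
        accs.append((labels[d.argmin(axis=1)] == site[test]).mean())
    return float(np.mean(accs))

acc = site_probe_accuracy(features, site)
print(acc > 0.6)  # site is predictable -> check what else rides along with it
```

A probe accuracy near chance is reassuring; accuracy well above it means site information is present in the features, which is the cue to re-examine known and unknown confounders before trusting cross-site generalization.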
Enjoy this newsletter? Here are more things you might find helpful:
1 Hour Strategy Session -- What if you could talk to an expert quickly? Are you facing a specific machine learning challenge? Do you have a pressing question? Schedule a 1 Hour Strategy Session now. Ask me anything about whatever challenges you’re facing. I’ll give you no-nonsense advice that you can put into action immediately. Schedule now
Did someone forward this email to you, and you want to sign up for more? Subscribe to future emails
This email was sent to _t.e.s.t_@example.com. Want to change to a different address? Update subscription
Want to get off this list? Unsubscribe
My postal address: Pixel Scientia Labs, LLC, PO Box 98412, Raleigh, NC 27624, United States