
Podcast: Digital Pathology Podcast
Episode: 148: Statistics of Generative and Non-Generative AI – 7-Part Livestream 4/7
Category: Science & Medicine
Duration: 00:36:24
Publish Date: 2025-08-11 12:00:00
Description:


You might be using AI models in pathology without even knowing if they’re giving you reliable results.

Let that sink in for a second, because today we’re fixing that.

In this episode, I walk you through the real statistics that power—and sometimes fail—AI in digital pathology. It's episode 4 of our AI series, and we’re demystifying the metrics behind both generative and non-generative AI. Why does this matter? Because accuracy isn't enough. And not every model metric tells you the whole story.

If you’ve ever been impressed by a model’s "99% accuracy," you need to hear why that might actually be a red flag. I share personal stories (yes, including my early days in Germany when I didn’t even know what a "training set" was), and we break down confusing metrics like perplexity, SSIM, FID, and BLEU scores—so you can truly understand what your models are doing and how to evaluate them correctly.
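The "99% accuracy" trap is easy to demonstrate. The sketch below uses hypothetical counts (not from the episode): on a dataset where 99% of slides are benign, a model that never flags disease still scores 99% accuracy while catching zero malignant cases.

```python
# Hypothetical, illustrative counts: 990 benign (0) and 10 malignant (1) slides.
y_true = [0] * 990 + [1] * 10
# A useless "model" that predicts benign for every slide.
y_pred = [0] * 1000

# Tally the confusion-matrix cells.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = (tp + tn) / len(y_true)  # 0.99 -- looks impressive
sensitivity = tp / (tp + fn)        # 0.0  -- misses every malignant case
print(accuracy, sensitivity)
```

This is why a headline accuracy figure on an imbalanced dataset tells you almost nothing on its own; class-aware metrics like sensitivity expose the failure immediately.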

Together, we’ll uncover how model evaluation works for:

  • Predictive Analytics (non-generative AI)
  • Generative AI (text/image generating models)
  • Regression vs. Classification use cases
  • Why confusion-matrix metrics like sensitivity and specificity still matter, and when they don't
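For the non-generative (classification) side, all of the confusion-matrix metrics fall out of four counts. A minimal sketch, with assumed example counts that are not from the episode:

```python
def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Derive common evaluation metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # recall: malignant cases correctly caught
        "specificity": tn / (tn + fp),  # benign cases correctly cleared
        "precision":   tp / (tp + fp),  # flagged cases that are truly malignant
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical example: 80 true positives, 5 false positives,
# 900 true negatives, 20 false negatives.
m = confusion_metrics(tp=80, fp=5, tn=900, fn=20)
print(m)
```

Note how this model reaches roughly 97.5% accuracy while its sensitivity is only 0.8, i.e. it misses one in five malignant cases, which is exactly why you read these metrics together rather than trusting any single number.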

Whether you're a pathologist, a scientist, or someone leading a digital transformation team—you need this knowledge to avoid misleading data, flawed models, and missed opportunities.
