<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Sat, 11 Apr 2026 02:23:45 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Vanishing Gradients - Episodes Tagged with “Generative AI”</title>
    <link>https://vanishinggradients.fireside.fm/tags/generative%20ai</link>
    <pubDate>Wed, 13 Aug 2025 01:00:00 +1000</pubDate>
    <description>A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.
It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>a data podcast with hugo bowne-anderson</itunes:subtitle>
    <itunes:author>Hugo Bowne-Anderson</itunes:author>
    <itunes:summary>A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.
It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/1/140c3904-8258-4c39-a698-a112b7077bd7/cover.jpg?v=1"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>data science, machine learning, AI</itunes:keywords>
    <itunes:owner>
      <itunes:name>Hugo Bowne-Anderson</itunes:name>
      <itunes:email>hugobowne@hey.com</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
<item>
  <title>Episode 55: From Frittatas to Production LLMs: Breakfast at SciPy</title>
  <link>https://vanishinggradients.fireside.fm/55</link>
  <guid isPermaLink="false">c9edf577-79bc-4743-9b23-847d48a991ea</guid>
  <pubDate>Wed, 13 Aug 2025 01:00:00 +1000</pubDate>
  <author>Hugo Bowne-Anderson</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/c9edf577-79bc-4743-9b23-847d48a991ea.mp3" length="54930830" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Hugo Bowne-Anderson</itunes:author>
  <itunes:subtitle>Traditional software expects 100% passing tests. In LLM-powered systems, that’s not just unrealistic — it’s a feature, not a bug. Eric Ma leads research data science in Moderna’s data science and AI group, and over breakfast at SciPy we explored why AI products break the old rules, what skills different personas bring (and miss), and how to keep systems alive after the launch hype fades.  </itunes:subtitle>
  <itunes:duration>38:08</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/1/140c3904-8258-4c39-a698-a112b7077bd7/cover.jpg?v=1"/>
  <description>Traditional software expects 100% passing tests. In LLM-powered systems, that’s not just unrealistic — it’s a feature, not a bug. Eric Ma leads research data science in Moderna’s data science and AI group, and over breakfast at SciPy we explored why AI products break the old rules, what skills different personas bring (and miss), and how to keep systems alive after the launch hype fades.  
You’ll hear the clink of coffee cups, the murmur of SciPy in the background, and the occasional bite of frittata as we talk (hopefully also a feature, not a bug!).
We talk through:  
• The three personas — and the blind spots each has when shipping AI systems  
• Why “perfect” tests can be a sign you’re testing the wrong thing  
• Development vs. production observability loops — and why you need both  
• How curiosity about failing data separates good builders from great ones  
• Ways large organizations can create space for experimentation without losing delivery focus  
If you want to build AI products that thrive in the messy real world, this episode will help you embrace the chaos — and make it work for you.
LINKS
Eric's Website (https://ericmjl.github.io/)
More about the workshops Eric and Hugo taught at SciPy (https://hugobowne.substack.com/p/stress-testing-llms-evaluation-frameworks)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
🎓 Learn more:
Hugo's course: Building LLM Applications for Data Scientists and Software Engineers (https://maven.com/s/course/d56067f338) ($600 off early bird discount for November cohort available until August 16)
</description>
  <itunes:keywords>LLM, generative AI, data science, machine learning, SciPy</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Traditional software expects 100% passing tests. In LLM-powered systems, that’s not just unrealistic — it’s a feature, not a bug. Eric Ma leads research data science in Moderna’s data science and AI group, and over breakfast at SciPy we explored why AI products break the old rules, what skills different personas bring (and miss), and how to keep systems alive after the launch hype fades.  </p>

<p>You’ll hear the clink of coffee cups, the murmur of SciPy in the background, and the occasional bite of frittata as we talk (hopefully also a feature, not a bug!).</p>

<p>We talk through:<br><br>
• The three personas — and the blind spots each has when shipping AI systems<br><br>
• Why “perfect” tests can be a sign you’re testing the wrong thing<br><br>
• Development vs. production observability loops — and why you need both<br><br>
• How curiosity about failing data separates good builders from great ones<br><br>
• Ways large organizations can create space for experimentation without losing delivery focus  </p>

<p>If you want to build AI products that thrive in the messy real world, this episode will help you embrace the chaos — and make it work for you.</p>

<p><strong>LINKS</strong></p>

<ul>
<li><a href="https://ericmjl.github.io/" rel="nofollow">Eric&#39;s Website</a></li>
<li><a href="https://hugobowne.substack.com/p/stress-testing-llms-evaluation-frameworks" rel="nofollow">More about the workshops Eric and Hugo taught at SciPy</a></li>
<li><a href="https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk" rel="nofollow">Upcoming Events on Luma</a></li>
</ul>

<p>🎓 Learn more:</p>

<ul>
<li><strong>Hugo&#39;s course:</strong> <a href="https://maven.com/s/course/d56067f338" rel="nofollow">Building LLM Applications for Data Scientists and Software Engineers</a> ($600 off early bird discount for November cohort available until August 16)</li>
</ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Traditional software expects 100% passing tests. In LLM-powered systems, that’s not just unrealistic — it’s a feature, not a bug. Eric Ma leads research data science in Moderna’s data science and AI group, and over breakfast at SciPy we explored why AI products break the old rules, what skills different personas bring (and miss), and how to keep systems alive after the launch hype fades.  </p>

<p>You’ll hear the clink of coffee cups, the murmur of SciPy in the background, and the occasional bite of frittata as we talk (hopefully also a feature, not a bug!).</p>

<p>We talk through:<br><br>
• The three personas — and the blind spots each has when shipping AI systems<br><br>
• Why “perfect” tests can be a sign you’re testing the wrong thing<br><br>
• Development vs. production observability loops — and why you need both<br><br>
• How curiosity about failing data separates good builders from great ones<br><br>
• Ways large organizations can create space for experimentation without losing delivery focus  </p>

<p>If you want to build AI products that thrive in the messy real world, this episode will help you embrace the chaos — and make it work for you.</p>

<p><strong>LINKS</strong></p>

<ul>
<li><a href="https://ericmjl.github.io/" rel="nofollow">Eric&#39;s Website</a></li>
<li><a href="https://hugobowne.substack.com/p/stress-testing-llms-evaluation-frameworks" rel="nofollow">More about the workshops Eric and Hugo taught at SciPy</a></li>
<li><a href="https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk" rel="nofollow">Upcoming Events on Luma</a></li>
</ul>

<p>🎓 Learn more:</p>

<ul>
<li><strong>Hugo&#39;s course:</strong> <a href="https://maven.com/s/course/d56067f338" rel="nofollow">Building LLM Applications for Data Scientists and Software Engineers</a> ($600 off early bird discount for November cohort available until August 16)</li>
</ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
