<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Sat, 11 Apr 2026 02:23:04 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Vanishing Gradients - Episodes Tagged with “LLM”</title>
    <link>https://vanishinggradients.fireside.fm/tags/llm</link>
    <pubDate>Wed, 13 Aug 2025 01:00:00 +1000</pubDate>
    <description>A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.
It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>a data podcast with hugo bowne-anderson</itunes:subtitle>
    <itunes:author>Hugo Bowne-Anderson</itunes:author>
    <itunes:summary>A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.
It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/1/140c3904-8258-4c39-a698-a112b7077bd7/cover.jpg?v=1"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>data science, machine learning, AI</itunes:keywords>
    <itunes:owner>
      <itunes:name>Hugo Bowne-Anderson</itunes:name>
      <itunes:email>hugobowne@hey.com</itunes:email>
    </itunes:owner>
<itunes:category text="Technology"/>
<item>
  <title>Episode 55: From Frittatas to Production LLMs: Breakfast at SciPy</title>
  <link>https://vanishinggradients.fireside.fm/55</link>
  <guid isPermaLink="false">c9edf577-79bc-4743-9b23-847d48a991ea</guid>
  <pubDate>Wed, 13 Aug 2025 01:00:00 +1000</pubDate>
  <author>Hugo Bowne-Anderson</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/c9edf577-79bc-4743-9b23-847d48a991ea.mp3" length="54930830" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Hugo Bowne-Anderson</itunes:author>
  <itunes:subtitle>Traditional software expects 100% passing tests. In LLM-powered systems, that’s not just unrealistic — it’s a feature, not a bug. Eric Ma leads research data science in Moderna’s data science and AI group, and over breakfast at SciPy we explored why AI products break the old rules, what skills different personas bring (and miss), and how to keep systems alive after the launch hype fades.  </itunes:subtitle>
  <itunes:duration>38:08</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/1/140c3904-8258-4c39-a698-a112b7077bd7/cover.jpg?v=1"/>
  <description>Traditional software expects 100% passing tests. In LLM-powered systems, that’s not just unrealistic — it’s a feature, not a bug. Eric Ma leads research data science in Moderna’s data science and AI group, and over breakfast at SciPy we explored why AI products break the old rules, what skills different personas bring (and miss), and how to keep systems alive after the launch hype fades.  
You’ll hear the clink of coffee cups, the murmur of SciPy in the background, and the occasional bite of frittata as we talk (hopefully also a feature, not a bug!).
We talk through:  
• The three personas — and the blind spots each has when shipping AI systems  
• Why “perfect” tests can be a sign you’re testing the wrong thing  
• Development vs. production observability loops — and why you need both  
• How curiosity about failing data separates good builders from great ones  
• Ways large organizations can create space for experimentation without losing delivery focus  
If you want to build AI products that thrive in the messy real world, this episode will help you embrace the chaos — and make it work for you.
LINKS
Eric's Website (https://ericmjl.github.io/)
More about the workshops Eric and Hugo taught at SciPy (https://hugobowne.substack.com/p/stress-testing-llms-evaluation-frameworks)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
🎓 Learn more:
Hugo's course: Building LLM Applications for Data Scientists and Software Engineers (https://maven.com/s/course/d56067f338) — https://maven.com/s/course/d56067f338 ($600 off early-bird discount for the November cohort, available until August 16)
</description>
  <itunes:keywords>LLM, generative AI, data science, machine learning, SciPy</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Traditional software expects 100% passing tests. In LLM-powered systems, that’s not just unrealistic — it’s a feature, not a bug. Eric Ma leads research data science in Moderna’s data science and AI group, and over breakfast at SciPy we explored why AI products break the old rules, what skills different personas bring (and miss), and how to keep systems alive after the launch hype fades.  </p>

<p>You’ll hear the clink of coffee cups, the murmur of SciPy in the background, and the occasional bite of frittata as we talk (hopefully also a feature, not a bug!).</p>

<p>We talk through:<br><br>
• The three personas — and the blind spots each has when shipping AI systems<br><br>
• Why “perfect” tests can be a sign you’re testing the wrong thing<br><br>
• Development vs. production observability loops — and why you need both<br><br>
• How curiosity about failing data separates good builders from great ones<br><br>
• Ways large organizations can create space for experimentation without losing delivery focus  </p>

<p>If you want to build AI products that thrive in the messy real world, this episode will help you embrace the chaos — and make it work for you.</p>

<p><strong>LINKS</strong></p>

<ul>
<li><a href="https://ericmjl.github.io/" rel="nofollow">Eric&#39;s Website</a></li>
<li><a href="https://hugobowne.substack.com/p/stress-testing-llms-evaluation-frameworks" rel="nofollow">More about the workshops Eric and Hugo taught at SciPy</a></li>
<li><a href="https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk" rel="nofollow">Upcoming Events on Luma</a></li>
</ul>

<p>🎓 Learn more:</p>

<ul>
<li><strong>Hugo&#39;s course:</strong> <a href="https://maven.com/s/course/d56067f338" rel="nofollow">Building LLM Applications for Data Scientists and Software Engineers</a> — <a href="https://maven.com/s/course/d56067f338" rel="nofollow">https://maven.com/s/course/d56067f338</a> ($600 off early-bird discount for the November cohort, available until August 16)</li>
</ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Traditional software expects 100% passing tests. In LLM-powered systems, that’s not just unrealistic — it’s a feature, not a bug. Eric Ma leads research data science in Moderna’s data science and AI group, and over breakfast at SciPy we explored why AI products break the old rules, what skills different personas bring (and miss), and how to keep systems alive after the launch hype fades.  </p>

<p>You’ll hear the clink of coffee cups, the murmur of SciPy in the background, and the occasional bite of frittata as we talk (hopefully also a feature, not a bug!).</p>

<p>We talk through:<br><br>
• The three personas — and the blind spots each has when shipping AI systems<br><br>
• Why “perfect” tests can be a sign you’re testing the wrong thing<br><br>
• Development vs. production observability loops — and why you need both<br><br>
• How curiosity about failing data separates good builders from great ones<br><br>
• Ways large organizations can create space for experimentation without losing delivery focus  </p>

<p>If you want to build AI products that thrive in the messy real world, this episode will help you embrace the chaos — and make it work for you.</p>

<p><strong>LINKS</strong></p>

<ul>
<li><a href="https://ericmjl.github.io/" rel="nofollow">Eric&#39;s Website</a></li>
<li><a href="https://hugobowne.substack.com/p/stress-testing-llms-evaluation-frameworks" rel="nofollow">More about the workshops Eric and Hugo taught at SciPy</a></li>
<li><a href="https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk" rel="nofollow">Upcoming Events on Luma</a></li>
</ul>

<p>🎓 Learn more:</p>

<ul>
<li><strong>Hugo&#39;s course:</strong> <a href="https://maven.com/s/course/d56067f338" rel="nofollow">Building LLM Applications for Data Scientists and Software Engineers</a> — <a href="https://maven.com/s/course/d56067f338" rel="nofollow">https://maven.com/s/course/d56067f338</a> ($600 off early-bird discount for the November cohort, available until August 16)</li>
</ul>]]>
  </itunes:summary>
</item>
<item>
  <title>Episode 54: Scaling AI: From Colab to Clusters — A Practitioner’s Guide to Distributed Training and Inference</title>
  <link>https://vanishinggradients.fireside.fm/54</link>
  <guid isPermaLink="false">151b5251-bd41-4528-87bf-763165b8ccc7</guid>
  <pubDate>Sat, 19 Jul 2025 02:00:00 +1000</pubDate>
  <author>Hugo Bowne-Anderson</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/151b5251-bd41-4528-87bf-763165b8ccc7.mp3" length="59469240" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:season>1</itunes:season>
  <itunes:author>Hugo Bowne-Anderson</itunes:author>
  <itunes:subtitle>Colab is cozy. But production won’t fit on a single GPU. Zach Mueller leads Accelerate at Hugging Face and spends his days helping people go from solo scripts to scalable systems. In this episode, he joins me to demystify distributed training and inference — not just for research labs, but for any ML engineer trying to ship real software.</itunes:subtitle>
  <itunes:duration>41:17</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/1/140c3904-8258-4c39-a698-a112b7077bd7/cover.jpg?v=1"/>
  <description>Colab is cozy. But production won’t fit on a single GPU.
Zach Mueller leads Accelerate at Hugging Face and spends his days helping people go from solo scripts to scalable systems. In this episode, he joins me to demystify distributed training and inference — not just for research labs, but for any ML engineer trying to ship real software.
We talk through:
    • From Colab to clusters: why scaling isn’t just about training massive models, but serving agents, handling load, and speeding up iteration
    • Zero-to-two GPUs: how to get started without Kubernetes, Slurm, or a PhD in networking
    • Scaling tradeoffs: when to care about interconnects, which infra bottlenecks actually matter, and how to avoid chasing performance ghosts
    • The GPU middle class: strategies for training and serving on a shoestring, with just a few cards or modest credits
    • Local experiments, global impact: why learning distributed systems—even just a little—can set you apart as an engineer
If you’ve ever stared at a Hugging Face training script and wondered how to run it on something more than your laptop: this one’s for you.
LINKS
Zach on LinkedIn (https://www.linkedin.com/in/zachary-mueller-135257118/)
Hugo's blog post on Stop Building AI Agents (https://www.linkedin.com/posts/hugo-bowne-anderson-045939a5_yesterday-i-posted-about-stop-building-ai-activity-7346942036752613376-b8-t/)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Hugo's recent newsletter about upcoming events and more! (https://hugobowne.substack.com/p/stop-building-agents)
🎓 Learn more:
Hugo's course: Building LLM Applications for Data Scientists and Software Engineers (https://maven.com/s/course/d56067f338) — https://maven.com/s/course/d56067f338
Zach's course (45% off for VG listeners!): Scratch to Scale: Large-Scale Training in the Modern World (https://maven.com/walk-with-code/scratch-to-scale?promoCode=hugo39) -- https://maven.com/walk-with-code/scratch-to-scale?promoCode=hugo39
📺 Watch the video version on YouTube: YouTube link (https://youtube.com/live/76NAtzWZ25s?feature=share) 
</description>
  <itunes:keywords>AI, LLM, compute, GenAI</itunes:keywords>
  <content:encoded>
    <![CDATA[<p><strong>Colab is cozy. But production won’t fit on a single GPU.</strong><br>
Zach Mueller leads Accelerate at Hugging Face and spends his days helping people go from solo scripts to scalable systems. In this episode, he joins me to demystify distributed training and inference — not just for research labs, but for any ML engineer trying to ship real software.</p>

<p>We talk through:<br>
    • From Colab to clusters: why scaling isn’t just about training massive models, but serving agents, handling load, and speeding up iteration<br>
    • Zero-to-two GPUs: how to get started without Kubernetes, Slurm, or a PhD in networking<br>
    • Scaling tradeoffs: when to care about interconnects, which infra bottlenecks actually matter, and how to avoid chasing performance ghosts<br>
    • The GPU middle class: strategies for training and serving on a shoestring, with just a few cards or modest credits<br>
    • Local experiments, global impact: why learning distributed systems—even just a little—can set you apart as an engineer</p>

<p>If you’ve ever stared at a Hugging Face training script and wondered how to run it on something more than your laptop: this one’s for you.</p>

<p><strong>LINKS</strong></p>

<ul>
<li><a href="https://www.linkedin.com/in/zachary-mueller-135257118/" rel="nofollow">Zach on LinkedIn</a></li>
<li><a href="https://www.linkedin.com/posts/hugo-bowne-anderson-045939a5_yesterday-i-posted-about-stop-building-ai-activity-7346942036752613376-b8-t/" rel="nofollow">Hugo&#39;s blog post on Stop Building AI Agents</a></li>
<li><a href="https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk" rel="nofollow">Upcoming Events on Luma</a></li>
<li><a href="https://hugobowne.substack.com/p/stop-building-agents" rel="nofollow">Hugo&#39;s recent newsletter about upcoming events and more!</a></li>
</ul>

<p>🎓 Learn more:</p>

<ul>
<li><strong>Hugo&#39;s course:</strong> <a href="https://maven.com/s/course/d56067f338" rel="nofollow">Building LLM Applications for Data Scientists and Software Engineers</a> — <a href="https://maven.com/s/course/d56067f338" rel="nofollow">https://maven.com/s/course/d56067f338</a></li>
<li><strong>Zach&#39;s course (45% off for VG listeners!):</strong> <a href="https://maven.com/walk-with-code/scratch-to-scale?promoCode=hugo39" rel="nofollow">Scratch to Scale: Large-Scale Training in the Modern World</a> -- <a href="https://maven.com/walk-with-code/scratch-to-scale?promoCode=hugo39" rel="nofollow">https://maven.com/walk-with-code/scratch-to-scale?promoCode=hugo39</a></li>
</ul>

<p>📺 <strong>Watch the video version on YouTube:</strong> <a href="https://youtube.com/live/76NAtzWZ25s?feature=share" rel="nofollow">YouTube link</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p><strong>Colab is cozy. But production won’t fit on a single GPU.</strong><br>
Zach Mueller leads Accelerate at Hugging Face and spends his days helping people go from solo scripts to scalable systems. In this episode, he joins me to demystify distributed training and inference — not just for research labs, but for any ML engineer trying to ship real software.</p>

<p>We talk through:<br>
    • From Colab to clusters: why scaling isn’t just about training massive models, but serving agents, handling load, and speeding up iteration<br>
    • Zero-to-two GPUs: how to get started without Kubernetes, Slurm, or a PhD in networking<br>
    • Scaling tradeoffs: when to care about interconnects, which infra bottlenecks actually matter, and how to avoid chasing performance ghosts<br>
    • The GPU middle class: strategies for training and serving on a shoestring, with just a few cards or modest credits<br>
    • Local experiments, global impact: why learning distributed systems—even just a little—can set you apart as an engineer</p>

<p>If you’ve ever stared at a Hugging Face training script and wondered how to run it on something more than your laptop: this one’s for you.</p>

<p><strong>LINKS</strong></p>

<ul>
<li><a href="https://www.linkedin.com/in/zachary-mueller-135257118/" rel="nofollow">Zach on LinkedIn</a></li>
<li><a href="https://www.linkedin.com/posts/hugo-bowne-anderson-045939a5_yesterday-i-posted-about-stop-building-ai-activity-7346942036752613376-b8-t/" rel="nofollow">Hugo&#39;s blog post on Stop Building AI Agents</a></li>
<li><a href="https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk" rel="nofollow">Upcoming Events on Luma</a></li>
<li><a href="https://hugobowne.substack.com/p/stop-building-agents" rel="nofollow">Hugo&#39;s recent newsletter about upcoming events and more!</a></li>
</ul>

<p>🎓 Learn more:</p>

<ul>
<li><strong>Hugo&#39;s course:</strong> <a href="https://maven.com/s/course/d56067f338" rel="nofollow">Building LLM Applications for Data Scientists and Software Engineers</a> — <a href="https://maven.com/s/course/d56067f338" rel="nofollow">https://maven.com/s/course/d56067f338</a></li>
<li><strong>Zach&#39;s course (45% off for VG listeners!):</strong> <a href="https://maven.com/walk-with-code/scratch-to-scale?promoCode=hugo39" rel="nofollow">Scratch to Scale: Large-Scale Training in the Modern World</a> -- <a href="https://maven.com/walk-with-code/scratch-to-scale?promoCode=hugo39" rel="nofollow">https://maven.com/walk-with-code/scratch-to-scale?promoCode=hugo39</a></li>
</ul>

<p>📺 <strong>Watch the video version on YouTube:</strong> <a href="https://youtube.com/live/76NAtzWZ25s?feature=share" rel="nofollow">YouTube link</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
