<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Mon, 06 Apr 2026 09:14:14 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Vanishing Gradients - Episodes Tagged with “MLOps”</title>
    <link>https://vanishinggradients.fireside.fm/tags/mlops</link>
    <pubDate>Thu, 16 Oct 2025 14:00:00 +1100</pubDate>
    <description>A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.
It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>a data podcast with hugo bowne-anderson</itunes:subtitle>
    <itunes:author>Hugo Bowne-Anderson</itunes:author>
    <itunes:summary>A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.
It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/1/140c3904-8258-4c39-a698-a112b7077bd7/cover.jpg?v=1"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>data science, machine learning, AI</itunes:keywords>
    <itunes:owner>
      <itunes:name>Hugo Bowne-Anderson</itunes:name>
      <itunes:email>hugobowne@hey.com</itunes:email>
    </itunes:owner>
<itunes:category text="Technology"/>
<item>
  <title>Episode 61: The AI Agent Reliability Cliff: What Happens When Tools Fail in Production</title>
  <link>https://vanishinggradients.fireside.fm/61</link>
  <guid isPermaLink="false">66d8da7e-5291-4273-8a87-c956fdf2f784</guid>
  <pubDate>Thu, 16 Oct 2025 14:00:00 +1100</pubDate>
  <author>Hugo Bowne-Anderson</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/66d8da7e-5291-4273-8a87-c956fdf2f784.mp3" length="55333020" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Hugo Bowne-Anderson</itunes:author>
  <itunes:subtitle>Most AI teams find their multi-agent systems devolving into chaos, but ML engineer Alex Strick van Linschoten argues they are ignoring production reality. In this episode, he draws on insights from the LLMOps Database (750+ real-world deployments at the time of recording; now nearly 1,000!) to systematically measure and engineer constraints, turning unreliable prototypes into robust, enterprise-ready AI.</itunes:subtitle>
  <itunes:duration>28:04</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/1/140c3904-8258-4c39-a698-a112b7077bd7/cover.jpg?v=1"/>
  <description>Most AI teams find their multi-agent systems devolving into chaos, but ML engineer Alex Strick van Linschoten argues they are ignoring production reality. In this episode, he draws on insights from the LLMOps Database (750+ real-world deployments at the time of recording; now nearly 1,000!) to systematically measure and engineer constraints, turning unreliable prototypes into robust, enterprise-ready AI.
Drawing from his work at ZenML, Alex details why success requires scaling down and enforcing MLOps discipline to navigate the unpredictable "Agent Reliability Cliff". He outlines the essential architectural shifts, evaluation hygiene techniques, and practical steps needed to move beyond guesswork and build scalable, trustworthy AI products.
We talk through:
- Why "shoving a thousand agents" into an app is the fastest route to unmanageable chaos
- The essential MLOps hygiene (tracing and continuous evals) that most teams skip
- The optimal (and very low) limit for the number of tools an agent can reliably use
- How to use human-in-the-loop strategies to manage the risk of autonomous failure in high-sensitivity domains
- The principle of using simple Python/RegEx before resorting to costly LLM judges
LINKS
The LLMOps Database: 925 entries as of today... submit a use case to help it get to 1K! (https://www.zenml.io/llmops-database)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Watch the podcast video on YouTube (https://youtu.be/-YQjKH3wRvc)
🎓 Learn more:
Join the final cohort of our Building AI Applications course starting March 10, 2026 (25% off for listeners): https://maven.com/hugo-stefan/building-ai-apps-ds-and-swe-from-first-principles?promoCode=vgfs
</description>
  <itunes:keywords>ai, agents, mlops, machine learning</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Most AI teams find their multi-agent systems devolving into chaos, but ML engineer Alex Strick van Linschoten argues they are ignoring production reality. In this episode, he draws on insights from the LLMOps Database (750+ real-world deployments at the time of recording; now nearly 1,000!) to systematically measure and engineer constraints, turning unreliable prototypes into robust, enterprise-ready AI.</p>

<p>Drawing from his work at ZenML, Alex details why success requires scaling down and enforcing MLOps discipline to navigate the unpredictable &quot;Agent Reliability Cliff&quot;. He outlines the essential architectural shifts, evaluation hygiene techniques, and practical steps needed to move beyond guesswork and build scalable, trustworthy AI products.</p>

<p>We talk through:</p>

<ul>
<li>Why &quot;shoving a thousand agents&quot; into an app is the fastest route to unmanageable chaos</li>
<li>The essential MLOps hygiene (tracing and continuous evals) that most teams skip</li>
<li>The optimal (and very low) limit for the number of tools an agent can reliably use</li>
<li>How to use human-in-the-loop strategies to manage the risk of autonomous failure in high-sensitivity domains</li>
<li>The principle of using simple Python/RegEx before resorting to costly LLM judges</li>
</ul>

<p><strong>LINKS</strong></p>

<ul>
<li><a href="https://www.zenml.io/llmops-database" rel="nofollow">The LLMOps Database: 925 entries as of today... submit a use case to help it get to 1K!</a></li>
<li><a href="https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk" rel="nofollow">Upcoming Events on Luma</a></li>
<li><a href="https://youtu.be/-YQjKH3wRvc" rel="nofollow">Watch the podcast video on YouTube</a></li>
</ul>

<p>🎓 Learn more:</p>

<p><a href="https://maven.com/hugo-stefan/building-ai-apps-ds-and-swe-from-first-principles?promoCode=vgfs" rel="nofollow">Join the final cohort of our Building AI Applications course starting March 10, 2026 (25% off for listeners)</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Most AI teams find their multi-agent systems devolving into chaos, but ML engineer Alex Strick van Linschoten argues they are ignoring production reality. In this episode, he draws on insights from the LLMOps Database (750+ real-world deployments at the time of recording; now nearly 1,000!) to systematically measure and engineer constraints, turning unreliable prototypes into robust, enterprise-ready AI.</p>

<p>Drawing from his work at ZenML, Alex details why success requires scaling down and enforcing MLOps discipline to navigate the unpredictable &quot;Agent Reliability Cliff&quot;. He outlines the essential architectural shifts, evaluation hygiene techniques, and practical steps needed to move beyond guesswork and build scalable, trustworthy AI products.</p>

<p>We talk through:</p>

<ul>
<li>Why &quot;shoving a thousand agents&quot; into an app is the fastest route to unmanageable chaos</li>
<li>The essential MLOps hygiene (tracing and continuous evals) that most teams skip</li>
<li>The optimal (and very low) limit for the number of tools an agent can reliably use</li>
<li>How to use human-in-the-loop strategies to manage the risk of autonomous failure in high-sensitivity domains</li>
<li>The principle of using simple Python/RegEx before resorting to costly LLM judges</li>
</ul>

<p><strong>LINKS</strong></p>

<ul>
<li><a href="https://www.zenml.io/llmops-database" rel="nofollow">The LLMOps Database: 925 entries as of today... submit a use case to help it get to 1K!</a></li>
<li><a href="https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk" rel="nofollow">Upcoming Events on Luma</a></li>
<li><a href="https://youtu.be/-YQjKH3wRvc" rel="nofollow">Watch the podcast video on YouTube</a></li>
</ul>

<p>🎓 Learn more:</p>

<p><a href="https://maven.com/hugo-stefan/building-ai-apps-ds-and-swe-from-first-principles?promoCode=vgfs" rel="nofollow">Join the final cohort of our Building AI Applications course starting March 10, 2026 (25% off for listeners)</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
