Vanishing Gradients

Episode 50: A Field Guide to Rapidly Improving AI Products -- With Hamel Husain

June 17th, 2025

If we want AI systems that actually work, we need to get much better at evaluating them, not just at building more pipelines, agents, and frameworks.

In this episode, Hugo talks with Hamel Husain (ex-Airbnb, GitHub, DataRobot) about how teams can improve AI products by focusing on error analysis, data inspection, and systematic iteration. The conversation is based on Hamel's blog post A Field Guide to Rapidly Improving AI Products, which he joined Hugo's class to discuss.

They cover:
🔍 Why most teams struggle to measure whether their systems are actually improving

📊 How error analysis helps you prioritize what to fix (and when to write evals; see the sketch after this list)

🧮 Why evaluation isn’t just a metric — but a full development process

⚠️ Common mistakes when debugging LLM and agent systems

🛠️ How to think about the tradeoff between adding more evals and fixing obvious issues

👥 Why enabling domain experts — not just engineers — can accelerate iteration
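
To ground the error-analysis bullet above, here is a minimal sketch in Python (not from the episode; the `Trace` class and the failure-mode labels are hypothetical). It assumes you have already read a sample of logged interactions and hand-labeled each failure; tallying those labels shows which problems are frequent enough to deserve a fix, and an eval.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Trace:
    """One logged interaction plus a reviewer's verdict."""
    query: str
    response: str
    failure_mode: str | None  # None means the reviewer marked it as fine

# Hypothetical hand-reviewed traces; in practice these come from
# reading real production logs and annotating each one.
traces = [
    Trace("What's your refund policy?", "...", "hallucinated policy"),
    Trace("Cancel my order", "...", "missed tool call"),
    Trace("Track package 123", "...", None),
    Trace("Do you ship to Canada?", "...", "hallucinated policy"),
]

# Count failure modes: the most frequent categories are the ones
# worth fixing first, and the first candidates for written evals.
counts = Counter(t.failure_mode for t in traces if t.failure_mode)
for mode, n in counts.most_common():
    print(f"{mode}: {n} ({n / len(traces):.0%} of reviewed traces)")
```

Run on this toy data, "hallucinated policy" surfaces as the dominant failure mode, which is the signal for where to invest first.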

If you’ve ever built an AI system and found yourself unsure how to make it better, this conversation is for you.

LINKS

📺 Watch the video version on YouTube