Most AI teams find "evals" frustrating, but ML Engineer Hamel Husain argues they’re just using the wrong playbook. In this episode, he lays out a data-centric approach to systematically measure and improve AI, turning unreliable prototypes into robust, production-ready systems.
Drawing from his experience getting countless teams unstuck, Hamel explains why the solution requires a "revenge of the data scientists." He details the essential mindset shifts, error analysis techniques, and practical steps needed to move beyond guesswork and build AI products you can actually trust.
We talk through:
- The 10(+1) critical mistakes that cause teams to waste time on evals
- Why "hallucination scores" are a waste of time (and what to measure instead)
- The manual review process that finds major issues in hours, not weeks
- A step-by-step method for building trustworthy LLM judges
- How to use domain experts without getting stuck in endless review committees
- Guest Bryan Bischof's "Failure as a Funnel" approach for debugging complex AI agents
If you're tired of ambiguous "vibe checks" and want a clear process that delivers real improvement, this episode provides the definitive roadmap.
LINKS
- Hamel's website and blog
- Hugo speaks with Philip Carter (Honeycomb) about aligning your LLM-as-a-judge with your domain expertise
- Hamel Husain on Lenny's podcast, which includes a live demo of error analysis
- The episode of VG in which Hamel and Hugo talk about Hamel's "data consulting in Vegas" era
- Upcoming Events on Luma
- Watch the podcast video on YouTube
- Hamel's AI evals course, which he teaches with Shreya Shankar (UC Berkeley): it starts Oct 6, and this link gives 35% off: https://maven.com/parlance-labs/evals?promoCode=GOHUGORGOHOME
🎓 Learn more: