{"version":"https://jsonfeed.org/version/1","title":"Vanishing Gradients","home_page_url":"https://vanishinggradients.fireside.fm","feed_url":"https://vanishinggradients.fireside.fm/json","description":"A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.\r\n\r\nIt's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.","_fireside":{"subtitle":"a data podcast with hugo bowne-anderson","pubdate":"2025-01-17T08:00:00.000+11:00","explicit":false,"copyright":"2025 by Hugo Bowne-Anderson","owner":"Hugo Bowne-Anderson","image":"https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/1/140c3904-8258-4c39-a698-a112b7077bd7/cover.jpg?v=1"},"items":[{"id":"ff9906ad-8576-40c7-9e0f-26dff301e52c","title":"Episode 43: Tales from 400+ LLM Deployments: Building Reliable AI Agents in Production","url":"https://vanishinggradients.fireside.fm/43","content_text":"Hugo speaks with Alex Strick van Linschoten, Machine Learning Engineer at ZenML and creator of a comprehensive LLMOps database documenting over 400 deployments. 
Alex's extensive research into real-world LLM implementations gives him unique insight into what actually works—and what doesn't—when deploying AI agents in production.\n\nIn this episode, we dive into:\n\n\nThe current state of AI agents in production, from successes to common failure modes\nPractical lessons learned from analyzing hundreds of real-world LLM deployments\nHow companies like Anthropic, Klarna, and Dropbox are using patterns like ReAct, RAG, and microservices to build reliable systems\nThe evolution of LLM capabilities, from expanding context windows to multimodal applications\nWhy most companies still prefer structured workflows over fully autonomous agents\n\n\nWe also explore real-world case studies of production hurdles, including cascading failures, API misfires, and hallucination challenges. Alex shares concrete strategies for integrating LLMs into your pipelines while maintaining reliability and control.\n\nWhether you're scaling agents or building LLM-powered systems, this episode offers practical insights for navigating the complex landscape of LLMOps in 2025.\n\nLINKS\n\n\nThe podcast livestream on YouTube\nThe LLMOps database\nAll blog posts about the database\nAnthropic's Building effective agents essay\nAlex on LinkedIn\nHugo on twitter\nVanishing Gradients on twitter\nVanishing Gradients on YouTube\nVanishing Gradients on Lu.ma\n","content_html":"\u003cp\u003eHugo speaks with Alex Strick van Linschoten, Machine Learning Engineer at ZenML and creator of a comprehensive LLMOps database documenting over 400 deployments. 
Alex\u0026#39;s extensive research into real-world LLM implementations gives him unique insight into what actually works—and what doesn\u0026#39;t—when deploying AI agents in production.\u003c/p\u003e\n\n\u003cp\u003eIn this episode, we dive into:\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eThe current state of AI agents in production, from successes to common failure modes\u003c/li\u003e\n\u003cli\u003ePractical lessons learned from analyzing hundreds of real-world LLM deployments\u003c/li\u003e\n\u003cli\u003eHow companies like Anthropic, Klarna, and Dropbox are using patterns like ReAct, RAG, and microservices to build reliable systems\u003c/li\u003e\n\u003cli\u003eThe evolution of LLM capabilities, from expanding context windows to multimodal applications\u003c/li\u003e\n\u003cli\u003eWhy most companies still prefer structured workflows over fully autonomous agents\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eWe also explore real-world case studies of production hurdles, including cascading failures, API misfires, and hallucination challenges. 
Alex shares concrete strategies for integrating LLMs into your pipelines while maintaining reliability and control.\u003c/p\u003e\n\n\u003cp\u003eWhether you\u0026#39;re scaling agents or building LLM-powered systems, this episode offers practical insights for navigating the complex landscape of LLMOps in 2025.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/-8Gr9fVVX9g?feature=share\" rel=\"nofollow\"\u003eThe podcast livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.zenml.io/llmops-database\" rel=\"nofollow\"\u003eThe LLMOps database\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.zenml.io/category/llmops\" rel=\"nofollow\"\u003eAll blog posts about the database\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.anthropic.com/research/building-effective-agents\" rel=\"nofollow\"\u003eAnthropic\u0026#39;s Building effective agents essay\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.linkedin.com/in/strickvl/\" rel=\"nofollow\"\u003eAlex on LinkedIn\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/hugobowne\" rel=\"nofollow\"\u003eHugo on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA\" rel=\"nofollow\"\u003eVanishing Gradients on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk\" rel=\"nofollow\"\u003eVanishing Gradients on Lu.ma\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Alex 
Strick van Linschoten, Machine Learning Engineer at ZenML and creator of a comprehensive LLMOps database documenting over 400 deployments. Alex's extensive research into real-world LLM implementations gives him unique insight into what actually works—and what doesn't—when deploying AI agents in production.","date_published":"2025-01-17T08:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/ff9906ad-8576-40c7-9e0f-26dff301e52c.mp3","mime_type":"audio/mpeg","size_in_bytes":58615769,"duration_in_seconds":3663}]},{"id":"6af2e172-b72b-418b-baa6-369299f37b8b","title":"Episode 42: Learning, Teaching, and Building in the Age of AI","url":"https://vanishinggradients.fireside.fm/42","content_text":"In this episode of Vanishing Gradients, the tables turn as Hugo sits down with Alex Andorra, host of Learning Bayesian Statistics. Hugo shares his journey from mathematics to AI, reflecting on how Bayesian inference shapes his approach to data science, teaching, and building AI-powered applications.\n\nThey dive into the realities of deploying LLM applications, overcoming “proof-of-concept purgatory,” and why first principles and iteration are critical for success in AI. Whether you’re an educator, software engineer, or data scientist, this episode offers valuable insights into the intersection of AI, product development, and real-world deployment.\n\nLINKS\n\n\nThe podcast on YouTube\nThe original podcast episode\nAlex Andorra on LinkedIn\nHugo on LinkedIn\nHugo on twitter\nVanishing Gradients on twitter\nHugo's \"Building LLM Applications for Data Scientists and Software Engineers\" course\n","content_html":"\u003cp\u003eIn this episode of Vanishing Gradients, the tables turn as Hugo sits down with Alex Andorra, host of Learning Bayesian Statistics. 
Hugo shares his journey from mathematics to AI, reflecting on how Bayesian inference shapes his approach to data science, teaching, and building AI-powered applications.\u003c/p\u003e\n\n\u003cp\u003eThey dive into the realities of deploying LLM applications, overcoming “proof-of-concept purgatory,” and why first principles and iteration are critical for success in AI. Whether you’re an educator, software engineer, or data scientist, this episode offers valuable insights into the intersection of AI, product development, and real-world deployment.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/watch?v=BRIYytbqtP0\" rel=\"nofollow\"\u003eThe podcast on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://learnbayesstats.com/episode/122-learning-and-teaching-in-the-age-of-ai-hugo-bowne-anderson\" rel=\"nofollow\"\u003eThe original podcast episode\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.linkedin.com/in/alex-andorra/\" rel=\"nofollow\"\u003eAlex Andorra on LinkedIn\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.linkedin.com/in/hugo-bowne-anderson-045939a5/\" rel=\"nofollow\"\u003eHugo on LinkedIn\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/hugobowne\" rel=\"nofollow\"\u003eHugo on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://maven.com/s/course/d56067f338\" rel=\"nofollow\"\u003eHugo\u0026#39;s \u0026quot;Building LLM Applications for Data Scientists and Software Engineers\u0026quot; course\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"The tables turn as Hugo sits down with Alex Andorra, host of Learning Bayesian Statistics. 
Hugo shares his journey from mathematics to AI, reflecting on how Bayesian inference shapes his approach to data science, teaching, and building AI-powered applications.","date_published":"2025-01-04T14:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/6af2e172-b72b-418b-baa6-369299f37b8b.mp3","mime_type":"audio/mpeg","size_in_bytes":76860106,"duration_in_seconds":4803}]},{"id":"695d8cc9-b111-4f1d-9871-82962ae023f4","title":"Episode 41: Beyond Prompt Engineering: Can AI Learn to Set Its Own Goals?","url":"https://vanishinggradients.fireside.fm/41","content_text":"Hugo Bowne-Anderson hosts a panel discussion from the MLOps World and Generative AI Summit in Austin, exploring the long-term growth of AI by distinguishing real problem-solving from trend-based solutions. If you're navigating the evolving landscape of generative AI, productionizing models, or questioning the hype, this episode dives into the tough questions shaping the field.\n\nThe panel features: \n\n\nBen Taylor (Jepson) – CEO and Founder at VEOX Inc., with experience in AI exploration, genetic programming, and deep learning.\nJoe Reis – Co-founder of Ternary Data and author of Fundamentals of Data Engineering.\nJuan Sequeda – Principal Scientist and Head of AI Lab at Data.World, known for his expertise in knowledge graphs and the semantic web.\n\n\nThe discussion unpacks essential topics such as: \n\n\nThe shift from prompt engineering to goal engineering—letting AI iterate toward well-defined objectives.\nWhether generative AI is having an electricity moment or more of a blockchain trajectory.\nThe combinatorial power of AI to explore new solutions, drawing parallels to AlphaZero redefining strategy games.\nThe POC-to-production gap and why AI projects stall.\nFailure modes, hallucinations, and governance risks—and how to mitigate them.\nThe disconnect between executive optimism and employee workload.\n\n\nHugo also mentions his 
upcoming workshop on escaping Proof-of-Concept Purgatory, which has evolved into a Maven course \"Building LLM Applications for Data Scientists and Software Engineers\" launching in January. Vanishing Gradients listeners can get 25% off the course (use the code VG25), with $1,000 in Modal compute credits included.\n\nA huge thanks to Dave Scharbach and the Toronto Machine Learning Society for organizing the conference and to the audience for their thoughtful questions.\n\nAs we head into the new year, this conversation offers a reality check amidst the growing AI agent hype. \n\nLINKS\n\n\nHugo on twitter\nHugo on LinkedIn\nVanishing Gradients on twitter\n\"Building LLM Applications for Data Scientists and Software Engineers\" course.\n","content_html":"\u003cp\u003eHugo Bowne-Anderson hosts a panel discussion from the MLOps World and Generative AI Summit in Austin, exploring the long-term growth of AI by distinguishing real problem-solving from trend-based solutions. If you\u0026#39;re navigating the evolving landscape of generative AI, productionizing models, or questioning the hype, this episode dives into the tough questions shaping the field.\u003c/p\u003e\n\n\u003cp\u003eThe panel features: \u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.linkedin.com/in/jepsontaylor/\" rel=\"nofollow\"\u003e\u003cstrong\u003eBen Taylor (Jepson)\u003c/strong\u003e\u003c/a\u003e – CEO and Founder at VEOX Inc., with experience in AI exploration, genetic programming, and deep learning.\u003cbr\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.linkedin.com/in/josephreis/\" rel=\"nofollow\"\u003e\u003cstrong\u003eJoe Reis\u003c/strong\u003e\u003c/a\u003e – Co-founder of Ternary Data and author of \u003cem\u003eFundamentals of Data Engineering\u003c/em\u003e.\u003cbr\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.linkedin.com/in/juansequeda/\" rel=\"nofollow\"\u003e\u003cstrong\u003eJuan Sequeda\u003c/strong\u003e\u003c/a\u003e 
– Principal Scientist and Head of AI Lab at Data.World, known for his expertise in knowledge graphs and the semantic web.\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eThe discussion unpacks essential topics such as: \u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eThe shift from \u003cstrong\u003eprompt engineering\u003c/strong\u003e to \u003cstrong\u003egoal engineering\u003c/strong\u003e—letting AI iterate toward well-defined objectives.\u003cbr\u003e\u003c/li\u003e\n\u003cli\u003eWhether generative AI is having an \u003cstrong\u003eelectricity moment\u003c/strong\u003e or more of a \u003cstrong\u003eblockchain trajectory\u003c/strong\u003e.\u003cbr\u003e\u003c/li\u003e\n\u003cli\u003eThe \u003cstrong\u003ecombinatorial power of AI\u003c/strong\u003e to explore new solutions, drawing parallels to AlphaZero redefining strategy games.\u003cbr\u003e\u003c/li\u003e\n\u003cli\u003eThe \u003cstrong\u003ePOC-to-production gap\u003c/strong\u003e and why AI projects stall.\u003cbr\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eFailure modes, hallucinations, and governance risks\u003c/strong\u003e—and how to mitigate them.\u003cbr\u003e\u003c/li\u003e\n\u003cli\u003eThe disconnect between executive optimism and employee workload.\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eHugo also mentions his upcoming workshop on \u003cstrong\u003eescaping Proof-of-Concept Purgatory\u003c/strong\u003e, \u003ca href=\"https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?utm_campaign=8123d0\u0026utm_medium=partner\u0026utm_source=instructor\" rel=\"nofollow\"\u003ewhich has evolved into a \u003cstrong\u003eMaven course \u0026quot;Building LLM Applications for Data Scientists and Software Engineers\u0026quot; launching in January\u003c/strong\u003e\u003c/a\u003e. 
Vanishing Gradients listeners can get 25% off the course (use the code VG25), with $1,000 in Modal compute credits included.\u003c/p\u003e\n\n\u003cp\u003eA huge thanks to \u003cstrong\u003eDave Scharbach and the Toronto Machine Learning Society\u003c/strong\u003e for organizing the conference and to the audience for their thoughtful questions.\u003c/p\u003e\n\n\u003cp\u003eAs we head into the new year, this conversation offers a reality check amidst the growing AI agent hype. \u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/hugobowne\" rel=\"nofollow\"\u003eHugo on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.linkedin.com/in/hugo-bowne-anderson-045939a5/\" rel=\"nofollow\"\u003eHugo on LinkedIn\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?utm_campaign=8123d0\u0026utm_medium=partner\u0026utm_source=instructor\" rel=\"nofollow\"\u003e\u0026quot;Building LLM Applications for Data Scientists and Software Engineers\u0026quot; course\u003c/a\u003e.\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo Bowne-Anderson hosts a panel discussion from the MLOps World and Generative AI Summit in Austin, exploring the long-term growth of AI by distinguishing real problem-solving from trend-based solutions. 
If you're navigating the evolving landscape of generative AI, productionizing models, or questioning the hype, this episode dives into the tough questions shaping the field.","date_published":"2024-12-31T10:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/695d8cc9-b111-4f1d-9871-82962ae023f4.mp3","mime_type":"audio/mpeg","size_in_bytes":42114740,"duration_in_seconds":2631}]},{"id":"b1b66484-5fd0-4bcb-91cb-8bf7201a5ded","title":"Episode 40: What Every LLM Developer Needs to Know About GPUs","url":"https://vanishinggradients.fireside.fm/40","content_text":"Hugo speaks with Charles Frye, Developer Advocate at Modal and someone who really knows GPUs inside and out. If you’re a data scientist, machine learning engineer, AI researcher, or just someone trying to make sense of hardware for LLMs and AI workflows, this episode is for you. \n\nCharles and Hugo dive into the practical side of GPUs—from running inference on large models, to fine-tuning and even training from scratch. They unpack the real pain points developers face, like figuring out: \n\n\nHow much VRAM you actually need.\nWhy memory—not compute—ends up being the bottleneck.\nHow to make quick, back-of-the-envelope calculations to size up hardware for your tasks.\nAnd where things like fine-tuning, quantization, and retrieval-augmented generation (RAG) fit into the mix.\n\n\nOne thing Hugo really appreciates is that Charles and the Modal team recently put together the GPU Glossary—a resource that breaks down GPU internals in a way that’s actually useful for developers. We reference it a few times throughout the episode, so check it out in the show notes below. \n\n🔧 Charles also does a demo during the episode—some of it is visual, but we talk through the key points so you’ll still get value from the audio. 
If you’d like to see the demo in action, check out the livestream linked below.\n\nThis is the \"Building LLM Applications for Data Scientists and Software Engineers\" course that Hugo is teaching with Stefan Krawczyk (ex-StitchFix) in January. Charles is giving a guest lecture on hardware for LLMs, and Modal is giving all students $1K worth of compute credits (use the code VG25 for $200 off).\n\nLINKS\n\n\nThe livestream on YouTube\nThe GPU Glossary by the Modal team\nWhat We’ve Learned From A Year of Building with LLMs by Charles and friends\nCharles on twitter\nHugo on twitter\nVanishing Gradients on twitter\n","content_html":"\u003cp\u003eHugo speaks with \u003cstrong\u003eCharles Frye\u003c/strong\u003e, Developer Advocate at Modal and someone who really knows GPUs inside and out. If you’re a data scientist, machine learning engineer, AI researcher, or just someone trying to make sense of \u003cstrong\u003ehardware for LLMs and AI workflows\u003c/strong\u003e, this episode is for you. 
\u003c/p\u003e\n\n\u003cp\u003eCharles and Hugo dive into the \u003cstrong\u003epractical side of GPUs\u003c/strong\u003e—from \u003cstrong\u003erunning inference\u003c/strong\u003e on large models, to \u003cstrong\u003efine-tuning\u003c/strong\u003e and even \u003cstrong\u003etraining from scratch.\u003c/strong\u003e They unpack the \u003cstrong\u003ereal pain points\u003c/strong\u003e developers face, like figuring out: \u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eHow much VRAM you actually need.\u003cbr\u003e\u003c/li\u003e\n\u003cli\u003eWhy memory—not compute—ends up being the bottleneck.\u003cbr\u003e\u003c/li\u003e\n\u003cli\u003eHow to make quick, \u003cstrong\u003eback-of-the-envelope calculations\u003c/strong\u003e to size up hardware for your tasks.\u003cbr\u003e\u003c/li\u003e\n\u003cli\u003eAnd where things like \u003cstrong\u003efine-tuning, quantization, and retrieval-augmented generation (RAG)\u003c/strong\u003e fit into the mix.\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eOne thing Hugo really appreciates is that Charles and the Modal team recently put together the \u003cstrong\u003eGPU Glossary\u003c/strong\u003e—a resource that breaks down GPU internals in a way that’s actually useful for developers. We reference it a few times throughout the episode, so check it out in the show notes below. \u003c/p\u003e\n\n\u003cp\u003e🔧 \u003cstrong\u003eCharles also does a demo during the episode\u003c/strong\u003e—some of it is visual, but we talk through the key points so you’ll still get value from the audio. If you’d like to see the demo in action, check out the livestream linked below.\u003c/p\u003e\n\n\u003cp\u003e\u003ca href=\"https://maven.com/s/course/d56067f338\" rel=\"nofollow\"\u003eThis is the \u0026quot;Building LLM Applications for Data Scientists and Software Engineers\u0026quot; course that Hugo is teaching with Stefan Krawczyk (ex-StitchFix) in January\u003c/a\u003e. 
Charles is giving a guest lecture on hardware for LLMs, and Modal is giving all students $1K worth of compute credits (use the code VG25 for $200 off).\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/live/INryb8Hjk3c?si=0cbb0-Nxem1P987d\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://modal.com/gpu-glossary\" rel=\"nofollow\"\u003eThe GPU Glossary\u003c/a\u003e by the Modal team\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://applied-llms.org/\" rel=\"nofollow\"\u003eWhat We’ve Learned From A Year of Building with LLMs\u003c/a\u003e by Charles and friends\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/charles_irl\" rel=\"nofollow\"\u003eCharles on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/hugobowne\" rel=\"nofollow\"\u003eHugo on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on twitter\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with **Charles Frye**, Developer Advocate at Modal and someone who really knows GPUs inside and out. If you’re a data scientist, machine learning engineer, AI researcher, or just someone trying to make sense of **hardware for LLMs and AI workflows**, this episode is for you. 
\r\n\r\nCharles and Hugo dive into the **practical side of GPUs**—from **running inference** on large models, to **fine-tuning** and even **training from scratch.** ","date_published":"2024-12-24T15:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/b1b66484-5fd0-4bcb-91cb-8bf7201a5ded.mp3","mime_type":"audio/mpeg","size_in_bytes":99441605,"duration_in_seconds":6214}]},{"id":"bf5453c0-4aa2-4abb-b323-20334f787512","title":"Episode 39: From Models to Products: Bridging Research and Practice in Generative AI at Google Labs","url":"https://vanishinggradients.fireside.fm/39","content_text":"Hugo speaks with Ravin Kumar, Senior Research Data Scientist at Google Labs. Ravin’s career has taken him from building rockets at SpaceX to driving data science and technology at Sweetgreen, and now to advancing generative AI research and applications at Google Labs and DeepMind. His multidisciplinary experience gives him a rare perspective on building AI systems that combine technical rigor with practical utility.\n\nIn this episode, we dive into:\n • Ravin’s fascinating career path, including the skills and mindsets needed to work effectively with AI and machine learning models at different stages of the pipeline.\n • How to build generative AI systems that are scalable, reliable, and aligned with user needs.\n • Real-world applications of generative AI, such as using open-weight models like Gemma to help a bakery streamline operations—an example of delivering tangible business value through AI.\n • The critical role of UX in AI adoption, and how Ravin approaches designing tools like Notebook LM with the user journey in mind.\n\nWe also include a live demo where Ravin uses Notebook LM to analyze my website, extract insights, and even generate a podcast-style conversation about me. 
While some of the demo is visual, much can be appreciated through audio, and we’ve added a link to the video in the show notes for those who want to see it in action. We’ve also included the generated segment at the end of the episode for you to enjoy.\n\nLINKS\n\n\nThe livestream on YouTube\nGoogle Labs\nRavin's GenAI Handbook\nBreadboard: A library for prototyping generative AI applications\n\n\nAs mentioned in the episode, Hugo is teaching a four-week course, Building LLM Applications for Data Scientists and SWEs, co-led with Stefan Krawczyk (Dagworks, ex-StitchFix). The course focuses on building scalable, production-grade generative AI systems, with hands-on sessions, $1,000+ in cloud credits, live Q\u0026amp;As, and guest lectures from industry experts.\n\nListeners of Vanishing Gradients can get 25% off the course using this special link or by applying the code VG25 at checkout.","content_html":"\u003cp\u003eHugo speaks with Ravin Kumar, Senior Research Data Scientist at Google Labs. Ravin’s career has taken him from building rockets at SpaceX to driving data science and technology at Sweetgreen, and now to advancing generative AI research and applications at Google Labs and DeepMind. 
His multidisciplinary experience gives him a rare perspective on building AI systems that combine technical rigor with practical utility.\u003c/p\u003e\n\n\u003cp\u003eIn this episode, we dive into:\u003cbr\u003e\n • Ravin’s fascinating career path, including the skills and mindsets needed to work effectively with AI and machine learning models at different stages of the pipeline.\u003cbr\u003e\n • How to build generative AI systems that are scalable, reliable, and aligned with user needs.\u003cbr\u003e\n • Real-world applications of generative AI, such as using open-weight models like Gemma to help a bakery streamline operations—an example of delivering tangible business value through AI.\u003cbr\u003e\n • The critical role of UX in AI adoption, and how Ravin approaches designing tools like Notebook LM with the user journey in mind.\u003c/p\u003e\n\n\u003cp\u003eWe also include a live demo where Ravin uses Notebook LM to analyze my website, extract insights, and even generate a podcast-style conversation about me. While some of the demo is visual, much can be appreciated through audio, and we’ve added a link to the video in the show notes for those who want to see it in action. 
We’ve also included the generated segment at the end of the episode for you to enjoy.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/live/ffS6NWqoo_k\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://labs.google/\" rel=\"nofollow\"\u003eGoogle Labs\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://ravinkumar.com/GenAiGuidebook/book_intro.html\" rel=\"nofollow\"\u003eRavin\u0026#39;s GenAI Handbook\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://breadboard-ai.github.io/breadboard/\" rel=\"nofollow\"\u003eBreadboard: A library for prototyping generative AI applications\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eAs mentioned in the episode, Hugo is teaching a four-week course, \u003cstrong\u003eBuilding LLM Applications for Data Scientists and SWEs\u003c/strong\u003e, co-led with Stefan Krawczyk (Dagworks, ex-StitchFix). The course focuses on building scalable, production-grade generative AI systems, with hands-on sessions, $1,000+ in cloud credits, live Q\u0026amp;As, and guest lectures from industry experts.\u003c/p\u003e\n\n\u003cp\u003eListeners of Vanishing Gradients can get 25% off the course using \u003ca href=\"https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?promoCode=VG25\" rel=\"nofollow\"\u003ethis special link\u003c/a\u003e or by applying the code VG25 at checkout.\u003c/p\u003e","summary":"From building rockets at SpaceX to advancing generative AI at Google Labs, Ravin Kumar has carved a unique path through the world of technology. In this episode, we explore how to build scalable, reliable AI systems, the skills needed to work across the AI/ML pipeline, and the real-world impact of tools like open-weight models such as Gemma. 
Ravin also shares insights into designing AI tools like Notebook LM with the user journey at the forefront.","date_published":"2024-11-26T03:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/bf5453c0-4aa2-4abb-b323-20334f787512.mp3","mime_type":"audio/mpeg","size_in_bytes":99346310,"duration_in_seconds":6208}]},{"id":"c1a5c8d1-777a-41b7-a123-6b06861dbc35","title":"Episode 38: The Art of Freelance AI Consulting and Products: Data, Dollars, and Deliverables","url":"https://vanishinggradients.fireside.fm/38","content_text":"Hugo speaks with Jason Liu, an independent AI consultant with experience at Meta and Stitch Fix. At Stitch Fix, Jason developed impactful AI systems, like a $50 million product similarity search and the widely adopted Flight recommendation framework. Now, he helps startups and enterprises design and deploy production-level AI applications, with a focus on retrieval-augmented generation (RAG) and scalable solutions.\n\nThis episode is a bit of an experiment. Instead of our usual technical deep dives, we’re focusing on the world of AI consulting and freelancing. We explore Jason’s consulting playbook, covering how he structures contracts to maximize value, strategies for moving from hourly billing to securing larger deals, and the mindset shift needed to align incentives with clients. 
We’ll also discuss the challenges of moving from deterministic software to probabilistic AI systems and even do a live role-playing session where Jason coaches me on client engagement and pricing pitfalls.\n\nLINKS\n\n\nThe livestream on YouTube\nJason's Upcoming course: AI Consultant Accelerator: From Expert to High-Demand Business\nHugo's upcoming course: Building LLM Applications for Data Scientists and Software Engineers\nJason's website\nJason's indie consulting newsletter\nYour AI Product Needs Evals by Hamel Husain\nWhat We’ve Learned From A Year of Building with LLMs\nDear Future AI Consultant by Jason\nAlex Hormozi's books\nThe Burnout Society by Byung-Chul Han\nJason on Twitter\nVanishing Gradients on Twitter\nHugo on Twitter\nVanishing Gradients' lu.ma calendar\nVanishing Gradients on YouTube\n","content_html":"\u003cp\u003eHugo speaks with Jason Liu, an independent AI consultant with experience at Meta and Stitch Fix. At Stitch Fix, Jason developed impactful AI systems, like a $50 million product similarity search and the widely adopted Flight recommendation framework. Now, he helps startups and enterprises design and deploy production-level AI applications, with a focus on retrieval-augmented generation (RAG) and scalable solutions.\u003c/p\u003e\n\n\u003cp\u003eThis episode is a bit of an experiment. Instead of our usual technical deep dives, we’re focusing on the world of AI consulting and freelancing. We explore Jason’s consulting playbook, covering how he structures contracts to maximize value, strategies for moving from hourly billing to securing larger deals, and the mindset shift needed to align incentives with clients. 
We’ll also discuss the challenges of moving from deterministic software to probabilistic AI systems and even do a live role-playing session where Jason coaches me on client engagement and pricing pitfalls.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/9CFs06UDbGI?feature=share\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://maven.com/indie-consulting/ai-consultant-accelerator?utm_campaign=9532cc\u0026utm_medium=partner\u0026utm_source=instructor\" rel=\"nofollow\"\u003eJason\u0026#39;s Upcoming course: AI Consultant Accelerator: From Expert to High-Demand Business\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://maven.com/s/course/d56067f338\" rel=\"nofollow\"\u003eHugo\u0026#39;s upcoming course: Building LLM Applications for Data Scientists and Software Engineers\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://jxnl.co/\" rel=\"nofollow\"\u003eJason\u0026#39;s website\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://indieconsulting.podia.com/\" rel=\"nofollow\"\u003eJason\u0026#39;s indie consulting newsletter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://hamel.dev/blog/posts/evals/\" rel=\"nofollow\"\u003eYour AI Product Needs Evals by Hamel Husain\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://applied-llms.org/\" rel=\"nofollow\"\u003eWhat We’ve Learned From A Year of Building with LLMs\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://jxnl.co/writing/#dear-future-ai-consultant\" rel=\"nofollow\"\u003eDear Future AI Consultant by Jason\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.acquisition.com/books\" rel=\"nofollow\"\u003eAlex Hormozi\u0026#39;s books\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca 
href=\"https://www.sup.org/books/theory-and-philosophy/burnout-society\" rel=\"nofollow\"\u003eThe Burnout Society by Byung-Chul Han\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/jxnlco\" rel=\"nofollow\"\u003eJason on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk\" rel=\"nofollow\"\u003eVanishing Gradients\u0026#39; lu.ma calendar\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/@vanishinggradients\" rel=\"nofollow\"\u003eVanishing Gradients on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Jason Liu, an independent AI consultant with experience at Meta and Stitch Fix. At Stitch Fix, Jason developed impactful AI systems, like a $50 million product similarity search and the widely adopted Flight recommendation framework. 
Now, he helps startups and enterprises design and deploy production-level AI applications, with a focus on retrieval-augmented generation (RAG) and scalable solutions.","date_published":"2024-11-05T10:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/c1a5c8d1-777a-41b7-a123-6b06861dbc35.mp3","mime_type":"audio/mpeg","size_in_bytes":80443270,"duration_in_seconds":5027}]},{"id":"eadec2c4-f8f9-45b0-ae7e-5867f7201801","title":"Episode 37: Prompt Engineering, Security in Generative AI, and the Future of AI Research Part 2","url":"https://vanishinggradients.fireside.fm/37","content_text":"Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.\n\nThis is Part 2 of a special two-part episode, prompted—no pun intended—by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. 
The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more.\n\nIn this episode, we cover:\n\n\nThe Prompt Report: A comprehensive survey on prompting techniques, agents, and generative AI, including advanced evaluation methods for assessing these techniques.\nSecurity Risks and Prompt Hacking: A detailed exploration of the security concerns surrounding prompt engineering, including Sander’s thoughts on its potential applications in cybersecurity and military contexts.\nAI’s Impact Across Fields: A discussion on how generative AI is reshaping various domains, including the social sciences and security.\nMultimodal AI: Updates on how large language models (LLMs) are expanding to interact with images, code, and music.\nCase Study - Detecting Suicide Risk: A careful examination of how prompting techniques are being used in important areas like detecting suicide risk, showcasing the critical potential of AI in addressing sensitive, real-world challenges.\n\n\nThe episode concludes with a reflection on the evolving landscape of LLMs and multimodal AI, and what might be on the horizon.\n\nIf you haven’t yet, make sure to check out Part 1, where we discuss the history of NLP, prompt engineering techniques, and Sander’s development of the Learn Prompting initiative.\n\nLINKS\n\n\nThe livestream on YouTube\nThe Prompt Report: A Systematic Survey of Prompting Techniques\nLearn Prompting: Your Guide to Communicating with AI\nVanishing Gradients on Twitter\nHugo on Twitter\nVanishing Gradients' lu.ma calendar\nVanishing Gradients on YouTube\n","content_html":"\u003cp\u003eHugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in 
prompt engineering and its applications in the social sciences.\u003c/p\u003e\n\n\u003cp\u003eThis is Part 2 of a special two-part episode, prompted—no pun intended—by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more.\u003c/p\u003e\n\n\u003cp\u003eIn this episode, we cover:\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003cp\u003e\u003cstrong\u003eThe Prompt Report:\u003c/strong\u003e A comprehensive survey on prompting techniques, agents, and generative AI, including advanced evaluation methods for assessing these techniques.\u003c/p\u003e\u003c/li\u003e\n\u003cli\u003e\u003cp\u003e\u003cstrong\u003eSecurity Risks and Prompt Hacking:\u003c/strong\u003e A detailed exploration of the security concerns surrounding prompt engineering, including Sander’s thoughts on its potential applications in cybersecurity and military contexts.\u003c/p\u003e\u003c/li\u003e\n\u003cli\u003e\u003cp\u003e\u003cstrong\u003eAI’s Impact Across Fields:\u003c/strong\u003e A discussion on how generative AI is reshaping various domains, including the social sciences and security.\u003c/p\u003e\u003c/li\u003e\n\u003cli\u003e\u003cp\u003e\u003cstrong\u003eMultimodal AI:\u003c/strong\u003e Updates on how large language models (LLMs) are expanding to interact with images, code, and music.\u003c/p\u003e\u003c/li\u003e\n\u003cli\u003e\u003cp\u003e\u003cstrong\u003eCase Study - Detecting Suicide Risk:\u003c/strong\u003e A careful examination of how prompting techniques are being used in important areas like detecting suicide risk, showcasing the critical potential of AI in addressing sensitive, real-world challenges.\u003c/p\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eThe episode concludes with a reflection on the evolving landscape of \u003cstrong\u003eLLMs\u003c/strong\u003e and multimodal AI, and 
what might be on the horizon.\u003c/p\u003e\n\n\u003cp\u003eIf you haven’t yet, make sure to check out \u003cstrong\u003ePart 1\u003c/strong\u003e, where we discuss the history of NLP, prompt engineering techniques, and Sander’s development of the Learn Prompting initiative.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/FreXovgG-9A?feature=share\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://arxiv.org/abs/2406.06608\" rel=\"nofollow\"\u003eThe Prompt Report: A Systematic Survey of Prompting Techniques\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://learnprompting.org/\" rel=\"nofollow\"\u003eLearn Prompting: Your Guide to Communicating with AI\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk\" rel=\"nofollow\"\u003eVanishing Gradients\u0026#39; lu.ma calendar\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/@vanishinggradients\" rel=\"nofollow\"\u003eVanishing Gradients on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social 
sciences.","date_published":"2024-10-08T17:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/eadec2c4-f8f9-45b0-ae7e-5867f7201801.mp3","mime_type":"audio/mpeg","size_in_bytes":48585166,"duration_in_seconds":3036}]},{"id":"acd8aaec-1788-459d-a4e9-10feae67a19a","title":"Episode 36: Prompt Engineering, Security in Generative AI, and the Future of AI Research Part 1","url":"https://vanishinggradients.fireside.fm/36","content_text":"Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.\n\nThis is Part 1 of a special two-part episode, prompted—no pun intended—by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more.\n\nIn this first part, \n\n\nwe’ll explore the critical role of prompt engineering, \n\u0026amp; diving into adversarial techniques like prompt hacking and \nthe challenges of evaluating these techniques. \nwe’ll examine the impact of few-shot learning and \nthe groundbreaking taxonomy of prompting techniques from the Prompt Report.\n\n\nAlong the way, \n\n\nwe’ll uncover the rich history of natural language processing (NLP) and AI, showing how modern prompting techniques evolved from early rule-based systems and statistical methods. 
\nwe’ll also hear how Sander’s experimentation with GPT-3 for diplomatic tasks led him to develop Learn Prompting, and \nhow Dennis highlights the accessibility of AI through prompting, which allows non-technical users to interact with AI without needing to code.\n\n\nFinally, we’ll explore the future of multimodal AI, where LLMs interact with images, code, and even music creation. Make sure to tune in to Part 2, where we dive deeper into security risks, prompt hacking, and more.\n\nLINKS\n\n\nThe livestream on YouTube\nThe Prompt Report: A Systematic Survey of Prompting Techniques\nLearn Prompting: Your Guide to Communicating with AI\nVanishing Gradients on Twitter\nHugo on Twitter\nVanishing Gradients' lu.ma calendar\nVanishing Gradients on YouTube\n","content_html":"\u003cp\u003eHugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.\u003c/p\u003e\n\n\u003cp\u003eThis is Part 1 of a special two-part episode, prompted—no pun intended—by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more.\u003c/p\u003e\n\n\u003cp\u003eIn this first part, \u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003ewe’ll explore the critical role of prompt engineering, \u003c/li\u003e\n\u003cli\u003e\u0026amp; diving into adversarial techniques like prompt hacking and \u003c/li\u003e\n\u003cli\u003ethe challenges of evaluating these techniques. 
\u003c/li\u003e\n\u003cli\u003ewe’ll examine the impact of few-shot learning and \u003c/li\u003e\n\u003cli\u003ethe groundbreaking taxonomy of prompting techniques from the Prompt Report.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eAlong the way, \u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003ewe’ll uncover the rich history of natural language processing (NLP) and AI, showing how modern prompting techniques evolved from early rule-based systems and statistical methods. \u003c/li\u003e\n\u003cli\u003ewe’ll also hear how Sander’s experimentation with GPT-3 for diplomatic tasks led him to develop Learn Prompting, and \u003c/li\u003e\n\u003cli\u003ehow Dennis highlights the accessibility of AI through prompting, which allows non-technical users to interact with AI without needing to code.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eFinally, we’ll explore the future of multimodal AI, where LLMs interact with images, code, and even music creation. Make sure to tune in to Part 2, where we dive deeper into security risks, prompt hacking, and more.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/FreXovgG-9A?feature=share\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://arxiv.org/abs/2406.06608\" rel=\"nofollow\"\u003eThe Prompt Report: A Systematic Survey of Prompting Techniques\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://learnprompting.org/\" rel=\"nofollow\"\u003eLearn Prompting: Your Guide to Communicating with AI\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca 
href=\"https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk\" rel=\"nofollow\"\u003eVanishing Gradients\u0026#39; lu.ma calendar\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/@vanishinggradients\" rel=\"nofollow\"\u003eVanishing Gradients on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.","date_published":"2024-09-30T18:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/acd8aaec-1788-459d-a4e9-10feae67a19a.mp3","mime_type":"audio/mpeg","size_in_bytes":61232193,"duration_in_seconds":3826}]},{"id":"feeeecc8-a170-48c7-ae4c-8dd64484c64c","title":"Episode 35: Open Science at NASA -- Measuring Impact and the Future of AI","url":"https://vanishinggradients.fireside.fm/35","content_text":"Hugo speaks with Dr. Chelle Gentemann, Open Science Program Scientist for NASA’s Office of the Chief Science Data Officer, about NASA’s ambitious efforts to integrate AI across the research lifecycle. In this episode, we’ll dive deeper into how AI is transforming NASA’s approach to science, making data more accessible and advancing open science practices. 
We explore\n\n\nMeasuring the Impact of Open Science: How NASA is developing new metrics to evaluate the effectiveness of open science, moving beyond traditional publication-based assessments.\nThe Process of Scientific Discovery: Insights into the collaborative nature of research and how breakthroughs are achieved at NASA.\nAI Applications in NASA’s Science: From rats in space to exploring the origins of the universe, we cover how AI is being applied across NASA’s divisions to improve data accessibility and analysis.\nAddressing Challenges in Open Science: The complexities of implementing open science within government agencies and research environments.\nReforming Incentive Systems: How NASA is reconsidering traditional metrics like publications and citations, and starting to recognize contributions such as software development and data sharing.\nThe Future of Open Science: How open science is shaping the future of research, fostering interdisciplinary collaboration, and increasing accessibility.\n\n\nThis conversation offers valuable insights for researchers, data scientists, and those interested in the practical applications of AI and open science. Join us as we discuss how NASA is working to make science more collaborative, reproducible, and impactful.\n\nLINKS\n\n\nThe livestream on YouTube\nNASA's Open Science 101 course \u0026lt;-- do it to learn and also to get NASA Swag!\nScience Cast\nNASA and IBM Openly Release Geospatial AI Foundation Model for NASA Earth Observation Data\nJake VanderPlas' daily conundrum tweet from 2013\nReplit, \"an AI-powered software development \u0026amp; deployment platform for building, sharing, and shipping software fast.\"\n","content_html":"\u003cp\u003eHugo speaks with Dr. Chelle Gentemann, Open Science Program Scientist for NASA’s Office of the Chief Science Data Officer, about NASA’s ambitious efforts to integrate AI across the research lifecycle. 
In this episode, we’ll dive deeper into how AI is transforming NASA’s approach to science, making data more accessible and advancing open science practices. We explore\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eMeasuring the Impact of Open Science:\u003c/strong\u003e How NASA is developing new metrics to evaluate the effectiveness of open science, moving beyond traditional publication-based assessments.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eThe Process of Scientific Discovery:\u003c/strong\u003e Insights into the collaborative nature of research and how breakthroughs are achieved at NASA.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eAI Applications in NASA’s Science:\u003c/strong\u003e From rats in space to exploring the origins of the universe, we cover how AI is being applied across NASA’s divisions to improve data accessibility and analysis.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eAddressing Challenges in Open Science:\u003c/strong\u003e The complexities of implementing open science within government agencies and research environments.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eReforming Incentive Systems:\u003c/strong\u003e How NASA is reconsidering traditional metrics like publications and citations, and starting to recognize contributions such as software development and data sharing.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eThe Future of Open Science:\u003c/strong\u003e How open science is shaping the future of research, fostering interdisciplinary collaboration, and increasing accessibility.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eThis conversation offers valuable insights for researchers, data scientists, and those interested in the practical applications of AI and open science. 
Join us as we discuss how NASA is working to make science more collaborative, reproducible, and impactful.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/VJDg3ZbkNOE?feature=share\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://openscience101.org/\" rel=\"nofollow\"\u003eNASA\u0026#39;s Open Science 101 course \u0026lt;-- do it to learn and also to get NASA Swag!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://sciencecast.org/\" rel=\"nofollow\"\u003eScience Cast\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.earthdata.nasa.gov/news/impact-ibm-hls-foundation-model\" rel=\"nofollow\"\u003eNASA and IBM Openly Release Geospatial AI Foundation Model for NASA Earth Observation Data\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/jakevdp/status/408678764705378304\" rel=\"nofollow\"\u003eJake VanderPlas\u0026#39; daily conundrum tweet from 2013\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://replit.com/\" rel=\"nofollow\"\u003eReplit, \u0026quot;an AI-powered software development \u0026amp; deployment platform for building, sharing, and shipping software fast.\u0026quot;\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Dr. Chelle Gentemann, Open Science Program Scientist for NASA’s Office of the Chief Science Data Officer, about NASA’s ambitious efforts to integrate AI across the research lifecycle. 
In this episode, we’ll dive deeper into how AI is transforming NASA’s approach to science, making data more accessible and advancing open science practices.","date_published":"2024-09-19T17:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/feeeecc8-a170-48c7-ae4c-8dd64484c64c.mp3","mime_type":"audio/mpeg","size_in_bytes":55905303,"duration_in_seconds":3493}]},{"id":"8c18d59e-9b79-4682-8e3c-ba682daf1c1c","title":"Episode 34: The AI Revolution Will Not Be Monopolized","url":"https://vanishinggradients.fireside.fm/34","content_text":"Hugo speaks with Ines Montani and Matthew Honnibal, the creators of spaCy and founders of Explosion AI. Collectively, they've had a huge impact on the fields of industrial natural language processing (NLP), ML, and AI through their widely-used open-source library spaCy and their innovative annotation tool Prodigy. These tools have become essential for many data scientists and NLP practitioners in industry and academia alike.\n\nIn this wide-ranging discussion, we dive into:\n\n• The evolution of applied NLP and its role in industry\n• The balance between large language models and smaller, specialized models\n• Human-in-the-loop distillation for creating faster, more data-private AI systems\n• The challenges and opportunities in NLP, including modularity, transparency, and privacy\n• The future of AI and software development\n• The potential impact of AI regulation on innovation and competition\n\nWe also touch on their recent transition back to a smaller, more independent-minded company structure and the lessons learned from their journey in the AI startup world.\n\nInes and Matt offer invaluable insights for data scientists, machine learning practitioners, and anyone interested in the practical applications of AI. 
They share their thoughts on how to approach NLP projects, the importance of data quality, and the role of open-source in advancing the field.\n\nWhether you're a seasoned NLP practitioner or just getting started with AI, this episode offers a wealth of knowledge from two of the field's most respected figures. Join us for a discussion that explores the current landscape of AI development, with insights that bridge the gap between cutting-edge research and real-world applications.\n\nLINKS\n\n\nThe livestream on YouTube\nHow S\u0026amp;P Global is making markets more transparent with NLP, spaCy and Prodigy\nA practical guide to human-in-the-loop distillation\nLaws of Tech: Commoditize Your Complement\nspaCy: Industrial-Strength Natural Language Processing\nLLMs with spaCy\nExplosion, building developer tools for AI, Machine Learning and Natural Language Processing\nBack to our roots: Company update and future plans, by Matt and Ines\nMatt's detailed blog post: back to our roots\nInes on twitter\nMatt on twitter\nVanishing Gradients on Twitter\nHugo on Twitter\n\n\nCheck out and subscribe to our lu.ma calendar for upcoming livestreams!","content_html":"\u003cp\u003eHugo speaks with Ines Montani and Matthew Honnibal, the creators of spaCy and founders of Explosion AI. Collectively, they\u0026#39;ve had a huge impact on the fields of industrial natural language processing (NLP), ML, and AI through their widely-used open-source library spaCy and their innovative annotation tool Prodigy. 
These tools have become essential for many data scientists and NLP practitioners in industry and academia alike.\u003c/p\u003e\n\n\u003cp\u003eIn this wide-ranging discussion, we dive into:\u003c/p\u003e\n\n\u003cp\u003e• The evolution of applied NLP and its role in industry\u003cbr\u003e\n• The balance between large language models and smaller, specialized models\u003cbr\u003e\n• Human-in-the-loop distillation for creating faster, more data-private AI systems\u003cbr\u003e\n• The challenges and opportunities in NLP, including modularity, transparency, and privacy\u003cbr\u003e\n• The future of AI and software development\u003cbr\u003e\n• The potential impact of AI regulation on innovation and competition\u003c/p\u003e\n\n\u003cp\u003eWe also touch on their recent transition back to a smaller, more independent-minded company structure and the lessons learned from their journey in the AI startup world.\u003c/p\u003e\n\n\u003cp\u003eInes and Matt offer invaluable insights for data scientists, machine learning practitioners, and anyone interested in the practical applications of AI. They share their thoughts on how to approach NLP projects, the importance of data quality, and the role of open-source in advancing the field.\u003c/p\u003e\n\n\u003cp\u003eWhether you\u0026#39;re a seasoned NLP practitioner or just getting started with AI, this episode offers a wealth of knowledge from two of the field\u0026#39;s most respected figures. 
Join us for a discussion that explores the current landscape of AI development, with insights that bridge the gap between cutting-edge research and real-world applications.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/-6o5-3cP0ik?feature=share\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://explosion.ai/blog/sp-global-commodities\" rel=\"nofollow\"\u003eHow S\u0026amp;P Global is making markets more transparent with NLP, spaCy and Prodigy\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://explosion.ai/blog/human-in-the-loop-distillation\" rel=\"nofollow\"\u003eA practical guide to human-in-the-loop distillation\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://gwern.net/complement\" rel=\"nofollow\"\u003eLaws of Tech: Commoditize Your Complement\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://spacy.io/\" rel=\"nofollow\"\u003espaCy: Industrial-Strength Natural Language Processing\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://spacy.io/usage/large-language-models\" rel=\"nofollow\"\u003eLLMs with spaCy\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://explosion.ai/\" rel=\"nofollow\"\u003eExplosion, building developer tools for AI, Machine Learning and Natural Language Processing\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://explosion.ai/blog/back-to-our-roots-company-update\" rel=\"nofollow\"\u003eBack to our roots: Company update and future plans, by Matt and Ines\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://honnibal.dev/blog/back-to-our-roots\" rel=\"nofollow\"\u003eMatt\u0026#39;s detailed blog post: back to our roots\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/_inesmontani\" rel=\"nofollow\"\u003eInes on 
twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/honnibal\" rel=\"nofollow\"\u003eMatt on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eCheck out and subcribe to our \u003ca href=\"https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk\" rel=\"nofollow\"\u003elu.ma calendar\u003c/a\u003e for upcoming livestreams!\u003c/p\u003e","summary":"Hugo speaks with Ines Montani and Matthew Honnibal, the creators of spaCy and founders of Explosion AI. Collectively, they've had a huge impact on the fields of industrial natural language processing (NLP), ML, and AI through their widely-used open-source library spaCy and their innovative annotation tool Prodigy.","date_published":"2024-08-22T17:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/8c18d59e-9b79-4682-8e3c-ba682daf1c1c.mp3","mime_type":"audio/mpeg","size_in_bytes":98751972,"duration_in_seconds":6171}]},{"id":"9cae0a8b-259a-4b01-a0f4-e5958297542b","title":"Episode 33: What We Learned Teaching LLMs to 1,000s of Data Scientists","url":"https://vanishinggradients.fireside.fm/33","content_text":"Hugo speaks with Dan Becker and Hamel Husain, two veterans in the world of data science, machine learning, and AI education. Collectively, they’ve worked at Google, DataRobot, Airbnb, Github (where Hamel built out the precursor to copilot and more) and they both currently work as independent LLM and Generative AI consultants.\n\nDan and Hamel recently taught a course on fine-tuning large language models that evolved into a full-fledged conference, attracting over 2,000 participants. 
This experience gave them unique insights into the current state and future of AI education and application.\n\nIn this episode, we dive into:\n\n\nThe evolution of their course from fine-tuning to a comprehensive AI conference\nThe unexpected challenges and insights gained from teaching LLMs to data scientists\nThe current state of AI tooling and accessibility compared to a decade ago\nThe role of playful experimentation in driving innovation in the field\nThoughts on the economic impact and ROI of generative AI in various industries\nThe importance of proper evaluation in machine learning projects\nFuture predictions for AI education and application in the next five years\n\n\nWe also touch on the challenges of using AI tools effectively, the potential for AI in physical world applications, and the need for a more nuanced understanding of AI capabilities in the workplace.\n\nDuring our conversation, Dan mentions an exciting project he's been working on, which we couldn't showcase live due to technical difficulties. However, I've included a link to a video demonstration in the show notes that you won't want to miss. 
In this demo, Dan showcases his innovative AI-powered 3D modeling tool that allows users to create 3D printable objects simply by describing them in natural language.\n\nLINKS\n\n\nThe livestream on YouTube\nEducational resources from Dan and Hamel's LLM course\nUpwork Study Finds Employee Workloads Rising Despite Increased C-Suite Investment in Artificial Intelligence\nEpisode 29: Lessons from a Year of Building with LLMs (Part 1)\nEpisode 30: Lessons from a Year of Building with LLMs (Part 2)\nDan's demo: Creating Physical Products with Generative AI\nBuild Great AI, Dan's boutique consulting firm helping clients be successful with large language models\nParlance Labs, Hamel's practical consulting that improves your AI\nHamel on Twitter\nDan on Twitter\nVanishing Gradients on Twitter\nHugo on Twitter\n","content_html":"\u003cp\u003eHugo speaks with Dan Becker and Hamel Husain, two veterans in the world of data science, machine learning, and AI education. Collectively, they’ve worked at Google, DataRobot, Airbnb, Github (where Hamel built out the precursor to copilot and more) and they both currently work as independent LLM and Generative AI consultants.\u003c/p\u003e\n\n\u003cp\u003eDan and Hamel recently taught a course on fine-tuning large language models that evolved into a full-fledged conference, attracting over 2,000 participants. 
This experience gave them unique insights into the current state and future of AI education and application.\u003c/p\u003e\n\n\u003cp\u003eIn this episode, we dive into:\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eThe evolution of their course from fine-tuning to a comprehensive AI conference\u003c/li\u003e\n\u003cli\u003eThe unexpected challenges and insights gained from teaching LLMs to data scientists\u003c/li\u003e\n\u003cli\u003eThe current state of AI tooling and accessibility compared to a decade ago\u003c/li\u003e\n\u003cli\u003eThe role of playful experimentation in driving innovation in the field\u003c/li\u003e\n\u003cli\u003eThoughts on the economic impact and ROI of generative AI in various industries\u003c/li\u003e\n\u003cli\u003eThe importance of proper evaluation in machine learning projects\u003c/li\u003e\n\u003cli\u003eFuture predictions for AI education and application in the next five years\u003c/li\u003e\n\u003cli\u003eWe also touch on the challenges of using AI tools effectively, the potential for AI in physical world applications, and the need for a more nuanced understanding of AI capabilities in the workplace.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eDuring our conversation, Dan mentions an exciting project he\u0026#39;s been working on, which we couldn\u0026#39;t showcase live due to technical difficulties. However, I\u0026#39;ve included a link to a video demonstration in the show notes that you won\u0026#39;t want to miss. 
In this demo, Dan showcases his innovative AI-powered 3D modeling tool that allows users to create 3D printable objects simply by describing them in natural language.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/hDmnwtjktsc?feature=share\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://parlance-labs.com/education/\" rel=\"nofollow\"\u003eEducational resources from Dan and Hamel\u0026#39;s LLM course\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://investors.upwork.com/news-releases/news-release-details/upwork-study-finds-employee-workloads-rising-despite-increased-c\" rel=\"nofollow\"\u003eUpwork Study Finds Employee Workloads Rising Despite Increased C-Suite Investment in Artificial Intelligence\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://vanishinggradients.fireside.fm/29\" rel=\"nofollow\"\u003eEpisode 29: Lessons from a Year of Building with LLMs (Part 1)\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://vanishinggradients.fireside.fm/30\" rel=\"nofollow\"\u003eEpisode 30: Lessons from a Year of Building with LLMs (Part 2)\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://youtu.be/U5J5RUOuMkI?si=_7cYLYOU1iwweQeO\" rel=\"nofollow\"\u003eDan\u0026#39;s demo: Creating Physical Products with Generative AI\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://buildgreat.ai/\" rel=\"nofollow\"\u003eBuild Great AI, Dan\u0026#39;s boutique consulting firm helping clients be successful with large language models\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://parlance-labs.com/\" rel=\"nofollow\"\u003eParlance Labs, Hamel\u0026#39;s practical consulting that improves your AI\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/HamelHusain\" 
rel=\"nofollow\"\u003eHamel on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/dan_s_becker\" rel=\"nofollow\"\u003eDan on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Dan Becker and Hamel Husain, two veterans in the world of data science, machine learning, and AI education. Collectively, they’ve worked at Google, DataRobot, Airbnb, Github (where Hamel built out the precursor to copilot and more). And they both currently work as independent LLM and Generative AI consultants.\r\n\r\nDan and Hamel recently taught a course on fine-tuning large language models that evolved into a full-fledged conference, attracting over 2,000 participants. \r\n\r\nIn this episode, we dive deep into their experience and the unique insights it gave them into the current state and future of AI education and application.","date_published":"2024-08-12T18:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/9cae0a8b-259a-4b01-a0f4-e5958297542b.mp3","mime_type":"audio/mpeg","size_in_bytes":81774888,"duration_in_seconds":5110}]},{"id":"3aa4ba58-30aa-4a85-a139-e9057629171c","title":"Episode 32: Building Reliable and Robust ML/AI Pipelines","url":"https://vanishinggradients.fireside.fm/32","content_text":"Hugo speaks with Shreya Shankar, a researcher at UC Berkeley focusing on data management systems with a human-centered approach. Shreya's work is at the cutting edge of human-computer interaction (HCI) and AI, particularly in the realm of large language models (LLMs). 
Her impressive background includes being the first ML engineer at Viaduct, doing research engineering at Google Brain, and software engineering at Facebook.\n\nIn this episode, we dive deep into the world of LLMs and the critical challenges of building reliable AI pipelines. We'll explore:\n\n\nThe fascinating journey from classic machine learning to the current LLM revolution\nWhy Shreya believes most ML problems are actually data management issues\nThe concept of \"data flywheels\" for LLM applications and how to implement them\nThe intriguing world of evaluating AI systems - who validates the validators?\nShreya's work on SPADE and EvalGen, innovative tools for synthesizing data quality assertions and aligning LLM evaluations with human preferences\nThe importance of human-in-the-loop processes in AI development\nThe future of low-code and no-code tools in the AI landscape\n\n\nWe'll also touch on the potential pitfalls of over-relying on LLMs, the concept of \"Habsburg AI,\" and how to avoid disappearing up our own proverbial arseholes in the world of recursive AI processes.\n\nWhether you're a seasoned AI practitioner, a curious data scientist, or someone interested in the human side of AI development, this conversation offers valuable insights into building more robust, reliable, and human-centered AI systems.\n\nLINKS\n\n\nThe livestream on YouTube\nShreya's website\nShreya on Twitter\nData Flywheels for LLM Applications\nSPADE: Synthesizing Data Quality Assertions for Large Language Model Pipelines\nWhat We’ve Learned From A Year of Building with LLMs\nWho Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences\nOperationalizing Machine Learning: An Interview Study\nVanishing Gradients on Twitter\nHugo on Twitter\n\n\nIn the podcast, Hugo also mentioned that this was the 5th time he and Shreya chatted publicly. 
Which is wild!\n\nIf you want to dive deep into Shreya's work and related topics through their chats, you can check them all out here:\n\n\nOuterbounds' Fireside Chat: Operationalizing ML -- Patterns and Pain Points from MLOps Practitioners\nThe Past, Present, and Future of Generative AI\nLLMs, OpenAI Dev Day, and the Existential Crisis for Machine Learning Engineering\nLessons from a Year of Building with LLMs\n\n\nCheck out and subscribe to our lu.ma calendar for upcoming livestreams!","content_html":"\u003cp\u003eHugo speaks with Shreya Shankar, a researcher at UC Berkeley focusing on data management systems with a human-centered approach. Shreya\u0026#39;s work is at the cutting edge of human-computer interaction (HCI) and AI, particularly in the realm of large language models (LLMs). Her impressive background includes being the first ML engineer at Viaduct, doing research engineering at Google Brain, and software engineering at Facebook.\u003c/p\u003e\n\n\u003cp\u003eIn this episode, we dive deep into the world of LLMs and the critical challenges of building reliable AI pipelines. 
We\u0026#39;ll explore:\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eThe fascinating journey from classic machine learning to the current LLM revolution\u003c/li\u003e\n\u003cli\u003eWhy Shreya believes most ML problems are actually data management issues\u003c/li\u003e\n\u003cli\u003eThe concept of \u0026quot;data flywheels\u0026quot; for LLM applications and how to implement them\u003c/li\u003e\n\u003cli\u003eThe intriguing world of evaluating AI systems - who validates the validators?\u003c/li\u003e\n\u003cli\u003eShreya\u0026#39;s work on SPADE and EvalGen, innovative tools for synthesizing data quality assertions and aligning LLM evaluations with human preferences\u003c/li\u003e\n\u003cli\u003eThe importance of human-in-the-loop processes in AI development\u003c/li\u003e\n\u003cli\u003eThe future of low-code and no-code tools in the AI landscape\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eWe\u0026#39;ll also touch on the potential pitfalls of over-relying on LLMs, the concept of \u0026quot;Habsburg AI,\u0026quot; and how to avoid disappearing up our own proverbial arseholes in the world of recursive AI processes.\u003c/p\u003e\n\n\u003cp\u003eWhether you\u0026#39;re a seasoned AI practitioner, a curious data scientist, or someone interested in the human side of AI development, this conversation offers valuable insights into building more robust, reliable, and human-centered AI systems.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/hKV6xSJZkB0?feature=share\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.sh-reya.com/\" rel=\"nofollow\"\u003eShreya\u0026#39;s website\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/sh_reya\" rel=\"nofollow\"\u003eShreya on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca 
href=\"https://www.sh-reya.com/blog/ai-engineering-flywheel/\" rel=\"nofollow\"\u003eData Flywheels for LLM Applications\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://arxiv.org/abs/2401.03038\" rel=\"nofollow\"\u003eSPADE: Synthesizing Data Quality Assertions for Large Language Model Pipelines\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://applied-llms.org/\" rel=\"nofollow\"\u003eWhat We’ve Learned From A Year of Building with LLMs\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://arxiv.org/abs/2404.12272\" rel=\"nofollow\"\u003eWho Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://arxiv.org/abs/2209.09125\" rel=\"nofollow\"\u003eOperationalizing Machine Learning: An Interview Study\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eIn the podcast, Hugo also mentioned that this was the 5th time he and Shreya chatted publicly. 
Which is wild!\u003c/p\u003e\n\n\u003cp\u003eIf you want to dive deep into Shreya\u0026#39;s work and related topics through their chats, you can check them all out here:\u003c/p\u003e\n\n\u003col\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/watch?v=7zB6ESFto_U\" rel=\"nofollow\"\u003eOuterbounds\u0026#39; Fireside Chat: Operationalizing ML -- Patterns and Pain Points from MLOps Practitioners\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://youtu.be/q0A9CdGWXqc?si=XmaUnQmZiXL2eagS\" rel=\"nofollow\"\u003eThe Past, Present, and Future of Generative AI\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/live/MTJHvgJtynU?si=Ncjqn5YuFBemvOJ0\" rel=\"nofollow\"\u003eLLMs, OpenAI Dev Day, and the Existential Crisis for Machine Learning Engineering\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/c0gcsprsFig?feature=share\" rel=\"nofollow\"\u003eLessons from a Year of Building with LLMs\u003c/a\u003e\u003c/li\u003e\n\u003c/ol\u003e\n\n\u003cp\u003eCheck out and subscribe to our \u003ca href=\"https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk\" rel=\"nofollow\"\u003elu.ma calendar\u003c/a\u003e for upcoming livestreams!\u003c/p\u003e","summary":"Hugo speaks with Shreya Shankar, a researcher at UC Berkeley focusing on data management systems with a human-centered approach. Shreya's work is at the cutting edge of human-computer interaction (HCI) and AI, particularly in the realm of large language models (LLMs). 
Her impressive background includes being the first ML engineer at Viaduct, doing research engineering at Google Brain, and software engineering at Facebook.","date_published":"2024-07-27T13:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/3aa4ba58-30aa-4a85-a139-e9057629171c.mp3","mime_type":"audio/mpeg","size_in_bytes":72173111,"duration_in_seconds":4510}]},{"id":"455d1587-7ba6-4850-920e-360d8cbe33d3","title":"Episode 31: Rethinking Data Science, Machine Learning, and AI","url":"https://vanishinggradients.fireside.fm/31","content_text":"Hugo speaks with Vincent Warmerdam, a senior data professional and machine learning engineer at :probabl, the exclusive brand operator of scikit-learn. Vincent is known for challenging common assumptions and exploring innovative approaches in data science and machine learning.\n\nIn this episode, they dive deep into rethinking established methods in data science, machine learning, and AI. We explore Vincent's principled approach to the field, including:\n\n\nThe critical importance of exposing yourself to real-world problems before applying ML solutions\nFraming problems correctly and understanding the data generating process\nThe power of visualization and human intuition in data analysis\nQuestioning whether algorithms truly meet the actual problem at hand\nThe value of simple, interpretable models and when to consider more complex approaches\nThe importance of UI and user experience in data science tools\nStrategies for preventing algorithmic failures by rethinking evaluation metrics and data quality\nThe potential and limitations of LLMs in the current data science landscape\nThe benefits of open-source collaboration and knowledge sharing in the community\n\n\nThroughout the conversation, Vincent illustrates these principles with vivid, real-world examples from his extensive experience in the field. 
They also discuss Vincent's thoughts on the future of data science and his call to action for more knowledge sharing in the community through blogging and open dialogue.\n\nLINKS\n\n\nThe livestream on YouTube\nVincent's blog\nCalmCode\nscikit-lego\nVincent's book Data Science Fiction (WIP)\nThe Deon Checklist, an ethics checklist for data scientists\nOf oaths and checklists, by DJ Patil, Hilary Mason and Mike Loukides\nVincent's Getting Started with NLP and spaCy Course on Talk Python\nVincent on twitter\n:probabl. on twitter\nVincent's PyData Amsterdam Keynote \"Natural Intelligence is All You Need [tm]\"\nVincent's PyData Amsterdam 2019 talk: The profession of solving (the wrong problem) \nVanishing Gradients on Twitter\nHugo on Twitter\n\n\nCheck out and subscribe to our lu.ma calendar for upcoming livestreams!","content_html":"\u003cp\u003eHugo speaks with Vincent Warmerdam, a senior data professional and machine learning engineer at :probabl, the exclusive brand operator of scikit-learn. Vincent is known for challenging common assumptions and exploring innovative approaches in data science and machine learning.\u003c/p\u003e\n\n\u003cp\u003eIn this episode, they dive deep into rethinking established methods in data science, machine learning, and AI. 
We explore Vincent\u0026#39;s principled approach to the field, including:\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eThe critical importance of exposing yourself to real-world problems before applying ML solutions\u003c/li\u003e\n\u003cli\u003eFraming problems correctly and understanding the data generating process\u003c/li\u003e\n\u003cli\u003eThe power of visualization and human intuition in data analysis\u003c/li\u003e\n\u003cli\u003eQuestioning whether algorithms truly meet the actual problem at hand\u003c/li\u003e\n\u003cli\u003eThe value of simple, interpretable models and when to consider more complex approaches\u003c/li\u003e\n\u003cli\u003eThe importance of UI and user experience in data science tools\u003c/li\u003e\n\u003cli\u003eStrategies for preventing algorithmic failures by rethinking evaluation metrics and data quality\u003c/li\u003e\n\u003cli\u003eThe potential and limitations of LLMs in the current data science landscape\u003c/li\u003e\n\u003cli\u003eThe benefits of open-source collaboration and knowledge sharing in the community\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eThroughout the conversation, Vincent illustrates these principles with vivid, real-world examples from his extensive experience in the field. 
They also discuss Vincent\u0026#39;s thoughts on the future of data science and his call to action for more knowledge sharing in the community through blogging and open dialogue.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/-CD66CI1pEo?feature=share\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://koaning.io/\" rel=\"nofollow\"\u003eVincent\u0026#39;s blog\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://calmcode.io/\" rel=\"nofollow\"\u003eCalmCode\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://koaning.github.io/scikit-lego/\" rel=\"nofollow\"\u003escikit-lego\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://calmcode.io/book\" rel=\"nofollow\"\u003eVincent\u0026#39;s book Data Science Fiction (WIP)\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://deon.drivendata.org/\" rel=\"nofollow\"\u003eThe Deon Checklist, an ethics checklist for data scientists\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.oreilly.com/radar/of-oaths-and-checklists/\" rel=\"nofollow\"\u003eOf oaths and checklists, by DJ Patil, Hilary Mason and Mike Loukides\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://training.talkpython.fm/courses/getting-started-with-spacy\" rel=\"nofollow\"\u003eVincent\u0026#39;s Getting Started with NLP and spaCy Course on Talk Python\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/fishnets88\" rel=\"nofollow\"\u003eVincent on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/probabl_ai\" rel=\"nofollow\"\u003e:probabl. 
on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/watch?v=C9p7suS-NGk\" rel=\"nofollow\"\u003eVincent\u0026#39;s PyData Amsterdam Keynote \u0026quot;Natural Intelligence is All You Need [tm]\u0026quot;\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/watch?v=kYMfE9u-lMo\" rel=\"nofollow\"\u003eVincent\u0026#39;s PyData Amsterdam 2019 talk: The profession of solving (the wrong problem) \u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eCheck out and subscribe to our \u003ca href=\"https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk\" rel=\"nofollow\"\u003elu.ma calendar\u003c/a\u003e for upcoming livestreams!\u003c/p\u003e","summary":"Hugo speaks with Vincent Warmerdam, a senior data professional and machine learning engineer at :probabl, the exclusive brand operator of scikit-learn. 
Vincent is known for challenging common assumptions and exploring innovative approaches in data science and machine learning.","date_published":"2024-07-09T19:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/455d1587-7ba6-4850-920e-360d8cbe33d3.mp3","mime_type":"audio/mpeg","size_in_bytes":92236825,"duration_in_seconds":5764}]},{"id":"5412d7de-a99a-48c1-a1b4-f37f9bb29254","title":"Episode 30: Lessons from a Year of Building with LLMs (Part 2)","url":"https://vanishinggradients.fireside.fm/30","content_text":"Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley.\n\nThese five guests, along with Jason Liu who couldn't join us, have spent the past year building real-world applications with Large Language Models (LLMs). They've distilled their experiences into a report of 42 lessons across operational, strategic, and tactical dimensions, and they're here to share their insights.\n\nWe’ve split this roundtable into 2 episodes and, in this second episode, we'll explore:\n\n\nAn inside look at building end-to-end systems with LLMs;\nThe experimentation mindset: Why it's the key to successful AI products;\nBuilding trust in AI: Strategies for getting stakeholders on board;\nThe art of data examination: Why looking at your data is more crucial than ever;\nEvaluation strategies that separate the pros from the amateurs.\n\n\nAlthough we're focusing on LLMs, many of these insights apply broadly to data science, machine learning, and product development, more generally.\n\nLINKS\n\n\nThe livestream on YouTube\nThe Report: What We’ve Learned From A Year of Building with LLMs\nAbout the Guests/Authors \u0026lt;-- connect with them all on LinkedIn, follow them on Twitter, subscribe to their newsletters! 
(Seriously, though, the amount of collective wisdom here is 🤑)\nYour AI product needs evals by Hamel Husain\nPrompting Fundamentals and How to Apply them Effectively by Eugene Yan\nFuck You, Show Me The Prompt by Hamel Husain\nVanishing Gradients on YouTube\nVanishing Gradients on Twitter\nVanishing Gradients on Lu.ma\n","content_html":"\u003cp\u003eHugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley.\u003c/p\u003e\n\n\u003cp\u003eThese five guests, along with Jason Liu who couldn\u0026#39;t join us, have spent the past year building real-world applications with Large Language Models (LLMs). They\u0026#39;ve distilled their experiences \u003ca href=\"https://applied-llms.org/\" rel=\"nofollow\"\u003einto a report of 42 lessons across operational, strategic, and tactical dimensions\u003c/a\u003e, and they\u0026#39;re here to share their insights.\u003c/p\u003e\n\n\u003cp\u003eWe’ve split this roundtable into 2 episodes and, in this second episode, we\u0026#39;ll explore:\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eAn inside look at building end-to-end systems with LLMs;\u003c/li\u003e\n\u003cli\u003eThe experimentation mindset: Why it\u0026#39;s the key to successful AI products;\u003c/li\u003e\n\u003cli\u003eBuilding trust in AI: Strategies for getting stakeholders on board;\u003c/li\u003e\n\u003cli\u003eThe art of data examination: Why looking at your data is more crucial than ever;\u003c/li\u003e\n\u003cli\u003eEvaluation strategies that separate the pros from the amateurs.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eAlthough we\u0026#39;re focusing on LLMs, many of these insights apply broadly to data science, machine learning, and product development, more generally.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca 
href=\"https://www.youtube.com/live/c0gcsprsFig\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://applied-llms.org/\" rel=\"nofollow\"\u003eThe Report: What We’ve Learned From A Year of Building with LLMs\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://applied-llms.org/about.html\" rel=\"nofollow\"\u003eAbout the Guests/Authors\u003c/a\u003e \u0026lt;-- connect with them all on LinkedIn, follow them on Twitter, subscribe to their newsletters! (Seriously, though, the amount of collective wisdom here is 🤑)\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://hamel.dev/blog/posts/evals/\" rel=\"nofollow\"\u003eYour AI product needs evals by Hamel Husain\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://eugeneyan.com/writing/prompting/\" rel=\"nofollow\"\u003ePrompting Fundamentals and How to Apply them Effectively by Eugene Yan\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://hamel.dev/blog/posts/prompt/\" rel=\"nofollow\"\u003eFuck You, Show Me The Prompt by Hamel Husain\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA\" rel=\"nofollow\"\u003eVanishing Gradients on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk\" rel=\"nofollow\"\u003eVanishing Gradients on Lu.ma\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley (Part 
2).","date_published":"2024-06-26T15:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/5412d7de-a99a-48c1-a1b4-f37f9bb29254.mp3","mime_type":"audio/mpeg","size_in_bytes":72382927,"duration_in_seconds":4523}]},{"id":"7a5a4f5a-0040-451c-82f5-fd61cf1515f4","title":"Episode 29: Lessons from a Year of Building with LLMs (Part 1)","url":"https://vanishinggradients.fireside.fm/29","content_text":"Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley.\n\nThese five guests, along with Jason Liu who couldn't join us, have spent the past year building real-world applications with Large Language Models (LLMs). They've distilled their experiences into a report of 42 lessons across operational, strategic, and tactical dimensions, and they're here to share their insights.\n\nWe’ve split this roundtable into 2 episodes and, in this first episode, we'll explore:\n\n\nThe critical role of evaluation and monitoring in LLM applications and why they're non-negotiable, including \"evals\" - short for evaluations, which are automated tests for assessing LLM performance and output quality;\nWhy data literacy is your secret weapon in the AI landscape;\nThe fine-tuning dilemma: when to do it and when to skip it;\nReal-world lessons from building LLM applications that textbooks won't teach you;\nThe evolving role of data scientists and AI engineers in the age of AI.\n\n\nAlthough we're focusing on LLMs, many of these insights apply broadly to data science, machine learning, and product development, more generally.\n\nLINKS\n\n\nThe livestream on YouTube\nThe Report: What We’ve Learned From A Year of Building with LLMs\nAbout the Guests/Authors \u0026lt;-- connect with them all on LinkedIn, follow them on Twitter, subscribe to their newsletters! 
(Seriously, though, the amount of collective wisdom here is 🤑)\nYour AI product needs evals by Hamel Husain\nPrompting Fundamentals and How to Apply them Effectively by Eugene Yan\nFuck You, Show Me The Prompt by Hamel Husain\nVanishing Gradients on YouTube\nVanishing Gradients on Twitter\nVanishing Gradients on Lu.ma\n","content_html":"\u003cp\u003eHugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley.\u003c/p\u003e\n\n\u003cp\u003eThese five guests, along with Jason Liu who couldn\u0026#39;t join us, have spent the past year building real-world applications with Large Language Models (LLMs). They\u0026#39;ve distilled their experiences \u003ca href=\"https://applied-llms.org/\" rel=\"nofollow\"\u003einto a report of 42 lessons across operational, strategic, and tactical dimensions\u003c/a\u003e, and they\u0026#39;re here to share their insights.\u003c/p\u003e\n\n\u003cp\u003eWe’ve split this roundtable into 2 episodes and, in this first episode, we\u0026#39;ll explore:\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eThe critical role of evaluation and monitoring in LLM applications and why they\u0026#39;re non-negotiable, including \u0026quot;evals\u0026quot; - short for evaluations, which are automated tests for assessing LLM performance and output quality;\u003c/li\u003e\n\u003cli\u003eWhy data literacy is your secret weapon in the AI landscape;\u003c/li\u003e\n\u003cli\u003eThe fine-tuning dilemma: when to do it and when to skip it;\u003c/li\u003e\n\u003cli\u003eReal-world lessons from building LLM applications that textbooks won\u0026#39;t teach you;\u003c/li\u003e\n\u003cli\u003eThe evolving role of data scientists and AI engineers in the age of AI.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eAlthough we\u0026#39;re focusing on LLMs, many of these insights apply broadly to data science, machine 
learning, and product development, more generally.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/live/c0gcsprsFig\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://applied-llms.org/\" rel=\"nofollow\"\u003eThe Report: What We’ve Learned From A Year of Building with LLMs\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://applied-llms.org/about.html\" rel=\"nofollow\"\u003eAbout the Guests/Authors\u003c/a\u003e \u0026lt;-- connect with them all on LinkedIn, follow them on Twitter, subscribe to their newsletters! (Seriously, though, the amount of collective wisdom here is 🤑)\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://hamel.dev/blog/posts/evals/\" rel=\"nofollow\"\u003eYour AI product needs evals by Hamel Husain\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://eugeneyan.com/writing/prompting/\" rel=\"nofollow\"\u003ePrompting Fundamentals and How to Apply them Effectively by Eugene Yan\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://hamel.dev/blog/posts/prompt/\" rel=\"nofollow\"\u003eFuck You, Show Me The Prompt by Hamel Husain\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA\" rel=\"nofollow\"\u003eVanishing Gradients on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk\" rel=\"nofollow\"\u003eVanishing Gradients on Lu.ma\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya 
Shankar from UC Berkeley (Part 1).","date_published":"2024-06-26T14:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/7a5a4f5a-0040-451c-82f5-fd61cf1515f4.mp3","mime_type":"audio/mpeg","size_in_bytes":86750692,"duration_in_seconds":5421}]},{"id":"b268a89e-4fc9-4f9f-a2a5-c7636b3fbd70","title":"Episode 28: Beyond Supervised Learning: The Rise of In-Context Learning with LLMs","url":"https://vanishinggradients.fireside.fm/28","content_text":"Hugo speaks with Alan Nichol, co-founder and CTO of Rasa, where they build software to enable developers to create enterprise-grade conversational AI and chatbot systems across industries like telcos, healthcare, fintech, and government.\n\nWhat's super cool is that Alan and the Rasa team have been doing this type of thing for over a decade, giving them a wealth of wisdom on how to effectively incorporate LLMs into chatbots - and how not to. For example, if you want a chatbot that takes specific and important actions like transferring money, do you want to fully entrust the conversation to one big LLM like ChatGPT, or secure what the LLMs can do inside key business logic?\n\nIn this episode, they also dive into the history of conversational AI and explore how the advent of LLMs is reshaping the field. Alan shares his perspective on how supervised learning has failed us in some ways and discusses what he sees as the most overrated and underrated aspects of LLMs.\n\nAlan offers advice for those looking to work with LLMs and conversational AI, emphasizing the importance of not sleeping on proven techniques and looking beyond the latest hype. 
In a live demo, he showcases Rasa's CALM (Conversational AI with Language Models), which allows developers to define business logic declaratively and separate it from the LLM, enabling reliable execution of conversational flows.\n\nLINKS\n\n\nThe livestream on YouTube\nAlan's Rasa CALM Demo: Building Conversational AI with LLMs \nAlan on twitter.com\nRasa\nCALM, an LLM-native approach to building reliable conversational AI\nTask-Oriented Dialogue with In-Context Learning\n'We don’t know how to build conversational software yet' by Alan Nichol\nVanishing Gradients on Twitter\nHugo on Twitter\n\n\nUpcoming Livestreams\n\n\nLessons from a Year of Building with LLMs\nVALIDATING THE VALIDATORS with Shreya Shankar\n","content_html":"\u003cp\u003eHugo speaks with Alan Nichol, co-founder and CTO of Rasa, where they build software to enable developers to create enterprise-grade conversational AI and chatbot systems across industries like telcos, healthcare, fintech, and government.\u003c/p\u003e\n\n\u003cp\u003eWhat\u0026#39;s super cool is that Alan and the Rasa team have been doing this type of thing for over a decade, giving them a wealth of wisdom on how to effectively incorporate LLMs into chatbots - and how not to. For example, if you want a chatbot that takes specific and important actions like transferring money, do you want to fully entrust the conversation to one big LLM like ChatGPT, or secure what the LLMs can do inside key business logic?\u003c/p\u003e\n\n\u003cp\u003eIn this episode, they also dive into the history of conversational AI and explore how the advent of LLMs is reshaping the field. Alan shares his perspective on how supervised learning has failed us in some ways and discusses what he sees as the most overrated and underrated aspects of LLMs.\u003c/p\u003e\n\n\u003cp\u003eAlan offers advice for those looking to work with LLMs and conversational AI, emphasizing the importance of not sleeping on proven techniques and looking beyond the latest hype. 
In a live demo, he showcases Rasa\u0026#39;s CALM (Conversational AI with Language Models), which allows developers to define business logic declaratively and separate it from the LLM, enabling reliable execution of conversational flows.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/live/kMFBYC2pB30?si=yV5sGq1iuC47LBSi\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://youtu.be/4UnxaJ-GcT0?si=6uLY3GD5DkOmWiBW\" rel=\"nofollow\"\u003eAlan\u0026#39;s Rasa CALM Demo: Building Conversational AI with LLMs \u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://x.com/alanmnichol\" rel=\"nofollow\"\u003eAlan on twitter.com\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://rasa.com/\" rel=\"nofollow\"\u003eRasa\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://rasa.com/docs/rasa-pro/calm/\" rel=\"nofollow\"\u003eCALM, an LLM-native approach to building reliable conversational AI\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://arxiv.org/abs/2402.12234\" rel=\"nofollow\"\u003eTask-Oriented Dialogue with In-Context Learning\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://medium.com/rasa-blog/we-don-t-know-how-to-build-conversational-software-yet-a18301db0e4b\" rel=\"nofollow\"\u003e\u0026#39;We don’t know how to build conversational software yet\u0026#39; by Alan Nichol\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003e\u003cstrong\u003eUpcoming Livestreams\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca 
href=\"https://lu.ma/e8huz3s6?utm_source=vgan\" rel=\"nofollow\"\u003eLessons from a Year of Building with LLMs\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/zz3qic45?utm_source=vgan\" rel=\"nofollow\"\u003eVALIDATING THE VALIDATORS with Shreya Shankar\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Alan Nichol, co-founder and CTO of Rasa, where they build software to enable developers to create enterprise-grade conversational AI and chatbot systems across industries like telcos, healthcare, fintech, and government.","date_published":"2024-06-10T08:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/b268a89e-4fc9-4f9f-a2a5-c7636b3fbd70.mp3","mime_type":"audio/mpeg","size_in_bytes":63014789,"duration_in_seconds":3938}]},{"id":"d42a2479-a220-4f72-bf48-946c4a393efa","title":"Episode 27: How to Build Terrible AI Systems","url":"https://vanishinggradients.fireside.fm/27","content_text":"Hugo speaks with Jason Liu, an independent consultant who uses his expertise in recommendation systems to help fast-growing startups build out their RAG applications. 
He was previously at Meta and Stitch Fix, is the creator of Instructor and Flight, and is an ML and data science educator.\n\nThey talk about how Jason approaches consulting companies across many industries, including construction and sales, in building production LLM apps, his playbook for getting ML and AI up and running to build and maintain such apps, and the future of tooling to do so.\n\nThey take an inverted thinking approach, envisaging all the failure modes that would result in building terrible AI systems, and then figure out how to avoid such pitfalls.\n\nLINKS\n\n\nThe livestream on YouTube\nJason's website\nPydantic is all you need, Jason's Keynote at AI Engineer Summit, 2023\nHow to build a terrible RAG system by Jason\nTo express interest in Jason's Systematically improving RAG Applications course\nVanishing Gradients on Twitter\nHugo on Twitter\n\n\nUpcoming Livestreams\n\n\nGood Riddance to Supervised Learning with Alan Nichol (CTO and co-founder, Rasa)\nLessons from a Year of Building with LLMs\n","content_html":"\u003cp\u003eHugo speaks with Jason Liu, an independent consultant who uses his expertise in recommendation systems to help fast-growing startups build out their RAG applications. 
He was previously at Meta and Stitch Fix, is the creator of Instructor and Flight, and is an ML and data science educator.\u003c/p\u003e\n\n\u003cp\u003eThey talk about how Jason approaches consulting companies across many industries, including construction and sales, in building production LLM apps, his playbook for getting ML and AI up and running to build and maintain such apps, and the future of tooling to do so.\u003c/p\u003e\n\n\u003cp\u003eThey take an inverted thinking approach, envisaging all the failure modes that would result in building terrible AI systems, and then figure out how to avoid such pitfalls.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/USTG6sQlB6s?feature=share\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://jxnl.co/\" rel=\"nofollow\"\u003eJason\u0026#39;s website\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://youtu.be/yj-wSRJwrrc?si=JIGhN0mx0i50dUR9\" rel=\"nofollow\"\u003ePydantic is all you need, Jason\u0026#39;s Keynote at AI Engineer Summit, 2023\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://jxnl.co/writing/2024/01/07/inverted-thinking-rag/\" rel=\"nofollow\"\u003eHow to build a terrible RAG system by Jason\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://q7gjsgfstrp.typeform.com/ragcourse?typeform-source=vg\" rel=\"nofollow\"\u003eTo express interest in Jason\u0026#39;s \u003cem\u003eSystematically improving RAG Applications\u003c/em\u003e course\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on 
Twitter\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003e\u003cstrong\u003eUpcoming Livestreams\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/gphzzyyn?utm_source=vgj\" rel=\"nofollow\"\u003eGood Riddance to Supervised Learning with Alan Nichol (CTO and co-founder, Rasa)\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/e8huz3s6?utm_source=vgj\" rel=\"nofollow\"\u003eLessons from a Year of Building with LLMs\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Jason Liu, an independent consultant who uses his expertise in recommendation systems to help fast-growing startups build out their RAG applications. He was previously at Meta and Stitch Fix, is the creator of Instructor and Flight, and is an ML and data science educator.","date_published":"2024-05-31T10:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/d42a2479-a220-4f72-bf48-946c4a393efa.mp3","mime_type":"audio/mpeg","size_in_bytes":88718026,"duration_in_seconds":5544}]},{"id":"d56cd02b-11cb-4be9-a2a7-31f783ef9c1a","title":"Episode 26: Developing and Training LLMs From Scratch","url":"https://vanishinggradients.fireside.fm/26","content_text":"Hugo speaks with Sebastian Raschka, a machine learning \u0026amp; AI researcher, programmer, and author. As Staff Research Engineer at Lightning AI, he focuses on the intersection of AI research, software development, and large language models (LLMs).\n\nHow do you build LLMs? How can you use them, both in prototype and production settings? 
What are the building blocks you need to know about?\n\n​In this episode, we’ll tell you everything you need to know about LLMs, but were too afraid to ask: from covering the entire LLM lifecycle, what type of skills you need to work with them, what type of resources and hardware, prompt engineering vs fine-tuning vs RAG, how to build an LLM from scratch, and much more.\n\nThe idea here is not that you’ll need to use an LLM you’ve built from scratch, but that we’ll learn a lot about LLMs and how to use them in the process.\n\nNear the end we also did some live coding to fine-tune GPT-2 in order to create a spam classifier! \n\nLINKS\n\n\nThe livestream on YouTube\nSebastian's website\nMachine Learning Q and AI: 30 Essential Questions and Answers on Machine Learning and AI by Sebastian\nBuild a Large Language Model (From Scratch) by Sebastian\nPyTorch Lightning\nLightning Fabric\nLitGPT\nSebastian's notebook for finetuning GPT-2 for spam classification!\nThe end of fine-tuning: Jeremy Howard on the Latent Space Podcast\nOur next livestream: How to Build Terrible AI Systems with Jason Liu\nVanishing Gradients on Twitter\nHugo on Twitter\n","content_html":"\u003cp\u003eHugo speaks with Sebastian Raschka, a machine learning \u0026amp; AI researcher, programmer, and author. As Staff Research Engineer at Lightning AI, he focuses on the intersection of AI research, software development, and large language models (LLMs).\u003c/p\u003e\n\n\u003cp\u003eHow do you build LLMs? How can you use them, both in prototype and production settings? 
What are the building blocks you need to know about?\u003c/p\u003e\n\n\u003cp\u003e​In this episode, we’ll tell you everything you need to know about LLMs, but were too afraid to ask: from covering the entire LLM lifecycle, what type of skills you need to work with them, what type of resources and hardware, prompt engineering vs fine-tuning vs RAG, how to build an LLM from scratch, and much more.\u003c/p\u003e\n\n\u003cp\u003eThe idea here is not that you’ll need to use an LLM you’ve built from scratch, but that we’ll learn a lot about LLMs and how to use them in the process.\u003c/p\u003e\n\n\u003cp\u003eNear the end we also did some live coding to fine-tune GPT-2 in order to create a spam classifier! \u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/qL4JY6Y5pmA\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://sebastianraschka.com/\" rel=\"nofollow\"\u003eSebastian\u0026#39;s website\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://nostarch.com/machine-learning-q-and-ai\" rel=\"nofollow\"\u003eMachine Learning Q and AI: 30 Essential Questions and Answers on Machine Learning and AI by Sebastian\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.manning.com/books/build-a-large-language-model-from-scratch\" rel=\"nofollow\"\u003eBuild a Large Language Model (From Scratch) by Sebastian\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lightning.ai/docs/pytorch/stable/\" rel=\"nofollow\"\u003ePyTorch Lightning\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lightning.ai/docs/fabric/stable/\" rel=\"nofollow\"\u003eLightning Fabric\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/Lightning-AI/litgpt\" rel=\"nofollow\"\u003eLitGPT\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca 
href=\"https://github.com/rasbt/LLMs-from-scratch/blob/main/ch06/01_main-chapter-code/ch06.ipynb\" rel=\"nofollow\"\u003eSebastian\u0026#39;s notebook for finetuning GPT-2 for spam classification!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.latent.space/p/fastai\" rel=\"nofollow\"\u003eThe end of fine-tuning: Jeremy Howard on the Latent Space Podcast\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/terrible-ai-systems?utm_source=vg\" rel=\"nofollow\"\u003eOur next livestream: How to Build Terrible AI Systems with Jason Liu\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Sebastian Raschka, a machine learning \u0026 AI researcher, programmer, and author. They’ll tell you everything you need to know about LLMs, but were too afraid to ask: from covering the entire LLM lifecycle, what type of skills you need to work with them, what type of resources and hardware, prompt engineering vs fine-tuning vs RAG, how to build an LLM from scratch, and much more.","date_published":"2024-05-15T13:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/d56cd02b-11cb-4be9-a2a7-31f783ef9c1a.mp3","mime_type":"audio/mpeg","size_in_bytes":53564523,"duration_in_seconds":6695}]},{"id":"2e66472b-34f3-4068-b6f9-4942dc757325","title":"Episode 25: Fully Reproducible ML \u0026 AI Workflows","url":"https://vanishinggradients.fireside.fm/25","content_text":"Hugo speaks with Omoju Miller, a machine learning guru and founder and CEO of Fimio, where she is building 21st century dev tooling. 
In the past, she was Technical Advisor to the CEO at GitHub, spent time co-leading non-profit investment in Computer Science Education for Google, and served as a volunteer advisor to the Obama administration’s White House Presidential Innovation Fellows.\n\nWe need open tools, open data, provenance, and the ability to build fully reproducible, transparent machine learning workflows. With the advent of closed-source, vendor-based APIs and compute becoming a form of gate-keeping, developer tools are at risk of becoming commoditized and developers becoming consumers.\n\nWe’ll talk about ideas for escaping these burgeoning walled gardens. We’ll dive into\n\n\nWhat fully reproducible ML workflows would look like, including git for the workflow build process,\nThe need for loosely coupled and composable tools that embrace a UNIX-like philosophy,\nWhat a much more scientific toolchain would look like,\nWhat a future open source commons for Generative AI could look like,\nWhat an open compute ecosystem could look like,\nHow to create LLMs and tooling so everyone can use them to build production-ready apps,\n\n\nAnd much more!\n\nLINKS\n\n\nThe livestream on YouTube\nOmoju on Twitter\nHugo on Twitter\nVanishing Gradients on Twitter\nLu.ma Calendar that includes details of Hugo's European Tour for Outerbounds\nBlog post that includes details of Hugo's European Tour for Outerbounds\n","content_html":"\u003cp\u003eHugo speaks with Omoju Miller, a machine learning guru and founder and CEO of Fimio, where she is building 21st century dev tooling. In the past, she was Technical Advisor to the CEO at GitHub, spent time co-leading non-profit investment in Computer Science Education for Google, and served as a volunteer advisor to the Obama administration’s White House Presidential Innovation Fellows.\u003c/p\u003e\n\n\u003cp\u003eWe need open tools, open data, provenance, and the ability to build fully reproducible, transparent machine learning workflows. 
With the advent of closed-source, vendor-based APIs and compute becoming a form of gate-keeping, developer tools are at risk of becoming commoditized and developers becoming consumers.\u003c/p\u003e\n\n\u003cp\u003eWe’ll talk about ideas for escaping these burgeoning walled gardens. We’ll dive into\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eWhat fully reproducible ML workflows would look like, including git for the workflow build process,\u003c/li\u003e\n\u003cli\u003eThe need for loosely coupled and composable tools that embrace a UNIX-like philosophy,\u003c/li\u003e\n\u003cli\u003eWhat a much more scientific toolchain would look like,\u003c/li\u003e\n\u003cli\u003eWhat a future open source commons for Generative AI could look like,\u003c/li\u003e\n\u003cli\u003eWhat an open compute ecosystem could look like,\u003c/li\u003e\n\u003cli\u003eHow to create LLMs and tooling so everyone can use them to build production-ready apps,\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eAnd much more!\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/live/n81PWNsHSMk?si=pgX2hH5xADATdJMu\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/omojumiller\" rel=\"nofollow\"\u003eOmoju on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/Outerbounds\" rel=\"nofollow\"\u003eLu.ma Calendar that includes details of Hugo\u0026#39;s European Tour for Outerbounds\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://outerbounds.com/blog/ob-on-the-road-2024-h1/\" 
rel=\"nofollow\"\u003eBlog post that includes details of Hugo\u0026#39;s European Tour for Outerbounds\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Omoju Miller, a machine learning guru and founder and CEO of Fimio, where she is building 21st century dev tooling.","date_published":"2024-03-18T23:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/2e66472b-34f3-4068-b6f9-4942dc757325.mp3","mime_type":"audio/mpeg","size_in_bytes":77423933,"duration_in_seconds":4838}]},{"id":"c6ebf900-c625-493a-b4c5-27a7f31da24f","title":"Episode 24: LLM and GenAI Accessibility","url":"https://vanishinggradients.fireside.fm/24","content_text":"Hugo speaks with Johno Whitaker, a Data Scientist/AI Researcher doing R\u0026amp;D with answer.ai. His current focus is on generative AI, flitting between different modalities. He also likes teaching and making courses, having worked with both Hugging Face and fast.ai in these capacities.\n\nJohno recently reminded Hugo how hard everything was 10 years ago: “Want to install TensorFlow? Good luck. Need data? Perhaps try ImageNet. But now you can use big models from Hugging Face with hi-res satellite data and do all of this in a Colab notebook. Or think ecology and vision models… or medicine and multimodal models!”\n\nWe talk about where we’ve come from regarding tooling and accessibility for foundation models, ML, and AI, where we are, and where we’re going. 
We’ll delve into\n\n\nWhat the Generative AI mindset is, in terms of using atomic building blocks, and how it evolved from both the data science and ML mindsets;\nHow fast.ai democratized access to deep learning, what successes they had, and what was learned;\nThe moving parts now required to make GenAI and ML as accessible as possible;\nThe importance of focusing on UX and the application in the world of generative AI and foundation models;\nThe skillset and toolkit needed to be an LLM and AI guru;\nWhat they’re up to at answer.ai to democratize LLMs and foundation models.\n\n\nLINKS\n\n\nThe livestream on YouTube\nZindi, the largest professional network for data scientists in Africa\nA new old kind of R\u0026amp;D lab: Announcing Answer.AI\nWhy and how I’m shifting focus to LLMs by Johno Whitaker\nApplying AI to Immune Cell Networks by Rachel Thomas\nReplicate -- a cool place to explore GenAI models, among other things\nHands-On Generative AI with Transformers and Diffusion Models\nJohno on Twitter\nHugo on Twitter\nVanishing Gradients on Twitter\nSciPy 2024 CFP\nEscaping Generative AI Walled Gardens with Omoju Miller, a Vanishing Gradients Livestream\n","content_html":"\u003cp\u003eHugo speaks with Johno Whitaker, a Data Scientist/AI Researcher doing R\u0026amp;D with answer.ai. His current focus is on generative AI, flitting between different modalities. He also likes teaching and making courses, having worked with both Hugging Face and fast.ai in these capacities.\u003c/p\u003e\n\n\u003cp\u003eJohno recently reminded Hugo how hard everything was 10 years ago: “Want to install TensorFlow? Good luck. Need data? Perhaps try ImageNet. But now you can use big models from Hugging Face with hi-res satellite data and do all of this in a Colab notebook. 
Or think ecology and vision models… or medicine and multimodal models!”\u003c/p\u003e\n\n\u003cp\u003eWe talk about where we’ve come from regarding tooling and accessibility for foundation models, ML, and AI, where we are, and where we’re going. We’ll delve into\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eWhat the Generative AI mindset is, in terms of using atomic building blocks, and how it evolved from both the data science and ML mindsets;\u003c/li\u003e\n\u003cli\u003eHow fast.ai democratized access to deep learning, what successes they had, and what was learned;\u003c/li\u003e\n\u003cli\u003eThe moving parts now required to make GenAI and ML as accessible as possible;\u003c/li\u003e\n\u003cli\u003eThe importance of focusing on UX and the application in the world of generative AI and foundation models;\u003c/li\u003e\n\u003cli\u003eThe skillset and toolkit needed to be an LLM and AI guru;\u003c/li\u003e\n\u003cli\u003eWhat they’re up to at answer.ai to democratize LLMs and foundation models.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/hxZX6fBi-W8?feature=share\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://zindi.africa/\" rel=\"nofollow\"\u003eZindi, the largest professional network for data scientists in Africa\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"http://www.answer.ai/posts/2023-12-12-launch.html\" rel=\"nofollow\"\u003eA new old kind of R\u0026amp;D lab: Announcing Answer.AI\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://johnowhitaker.dev/dsc/2023-07-01-why-and-how-im-shifting-focus-to-llms.html\" rel=\"nofollow\"\u003eWhy and how I’m shifting focus to LLMs by Johno Whitaker\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.fast.ai/posts/2024-01-23-cytokines/\" rel=\"nofollow\"\u003eApplying AI 
to Immune Cell Networks by Rachel Thomas\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://replicate.com/explore\" rel=\"nofollow\"\u003eReplicate -- a cool place to explore GenAI models, among other things\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.oreilly.com/library/view/hands-on-generative-ai/9781098149239/\" rel=\"nofollow\"\u003eHands-On Generative AI with Transformers and Diffusion Models\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/johnowhitaker\" rel=\"nofollow\"\u003eJohno on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/hugobowne\" rel=\"nofollow\"\u003eHugo on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/vanishingdata\" rel=\"nofollow\"\u003eVanishing Gradients on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.scipy2024.scipy.org/#CFP\" rel=\"nofollow\"\u003eSciPy 2024 CFP\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/xonnjqe4\" rel=\"nofollow\"\u003eEscaping Generative AI Walled Gardens with Omoju Miller, a Vanishing Gradients Livestream\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Johno Whitaker, a Data Scientist/AI Researcher doing R\u0026D with answer.ai, about where we’ve come from regarding tooling and accessibility for foundation models, ML, and AI, where we are, and where we’re going.","date_published":"2024-02-27T17:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/c6ebf900-c625-493a-b4c5-27a7f31da24f.mp3","mime_type":"audio/mpeg","size_in_bytes":86459792,"duration_in_seconds":5403}]},{"id":"96dc5719-497e-4bdb-82e0-a336cf46ec5d","title":"Episode 23: Statistical and Algorithmic Thinking in the AI Age","url":"https://vanishinggradients.fireside.fm/23","content_text":"Hugo speaks with Allen Downey, a curriculum designer at Brilliant, 
Professor Emeritus at Olin College, and the author of Think Python, Think Bayes, Think Stats, and other computer science and data science books. In 2019-20 he was a Visiting Professor at Harvard University. He previously taught at Wellesley College and Colby College and was a Visiting Scientist at Google. He is also the author of the upcoming book Probably Overthinking It!\n\nThey discuss Allen's new book and the key statistical and data skills we all need to navigate an increasingly data-driven and algorithmic world. The goal was to dive deep into the statistical paradoxes and fallacies that get in the way of using data to make informed decisions. \n\nFor example, when it was reported in 2021 that “in the United Kingdom, 70-plus percent of the people who die now from COVID are fully vaccinated,” this was correct but the implication was entirely wrong. Their conversation jumps into many such concrete examples to get to the bottom of using data for more than “lies, damned lies, and statistics.” They cover\n\n\nInformation and misinformation around pandemics and the base rate fallacy;\nThe tools we need to comprehend the small probabilities of high-risk events such as stock market crashes, earthquakes, and more;\nThe many definitions of algorithmic fairness, why they can't all be met at once, and what we can do about it;\nPublic health, the need for robust causal inference, and variations on Berkson’s paradox, such as the low-birthweight paradox: an influential paper found that the mortality rate for children of smokers is lower for low-birthweight babies;\nWhy none of us are normal in any sense of the word, both in physical and psychological measurements;\nThe Inspection paradox, which shows up in the criminal justice system and distorts our perception of prison sentences and the risk of repeat offenders.\n\n\nLINKS\n\n\nThe livestream on YouTube\nAllen Downey on GitHub\nAllen's new book Probably Overthinking It!\nAllen on Twitter\nPrediction-Based Decisions 
and Fairness: A Catalogue of Choices, Assumptions, and Definitions by Mitchell et al.\n","content_html":"\u003cp\u003eHugo speaks with Allen Downey, a curriculum designer at Brilliant, Professor Emeritus at Olin College, and the author of Think Python, Think Bayes, Think Stats, and other computer science and data science books. In 2019-20 he was a Visiting Professor at Harvard University. He previously taught at Wellesley College and Colby College and was a Visiting Scientist at Google. He is also the author of the upcoming book Probably Overthinking It!\u003c/p\u003e\n\n\u003cp\u003eThey discuss Allen\u0026#39;s new book and the key statistical and data skills we all need to navigate an increasingly data-driven and algorithmic world. The goal was to dive deep into the statistical paradoxes and fallacies that get in the way of using data to make informed decisions. \u003c/p\u003e\n\n\u003cp\u003eFor example, when it was reported in 2021 that “in the United Kingdom, 70-plus percent of the people who die now from COVID are fully vaccinated,” this was correct but the implication was entirely wrong. 
Their conversation jumps into many such concrete examples to get to the bottom of using data for more than “lies, damned lies, and statistics.” They cover\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eInformation and misinformation around pandemics and the base rate fallacy;\u003c/li\u003e\n\u003cli\u003eThe tools we need to comprehend the small probabilities of high-risk events such as stock market crashes, earthquakes, and more;\u003c/li\u003e\n\u003cli\u003eThe many definitions of algorithmic fairness, why they can\u0026#39;t all be met at once, and what we can do about it;\u003c/li\u003e\n\u003cli\u003ePublic health, the need for robust causal inference, and variations on Berkson’s paradox, such as the low-birthweight paradox: an influential paper found that the mortality rate for children of smokers is lower for low-birthweight babies;\u003c/li\u003e\n\u003cli\u003eWhy none of us are normal in any sense of the word, both in physical and psychological measurements;\u003c/li\u003e\n\u003cli\u003eThe Inspection paradox, which shows up in the criminal justice system and distorts our perception of prison sentences and the risk of repeat offenders.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/G8LulD72kzs?feature=share\" rel=\"nofollow\"\u003eThe livestream on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/AllenDowney\" rel=\"nofollow\"\u003eAllen Downey on GitHub\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://greenteapress.com/wp/probably-overthinking-it/\" rel=\"nofollow\"\u003eAllen\u0026#39;s new book Probably Overthinking It!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/AllenDowney\" rel=\"nofollow\"\u003eAllen on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://arxiv.org/abs/1811.07867\" 
rel=\"nofollow\"\u003ePrediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions by Mitchell et al.\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Allen Downey, curriculum designer at Brilliant, Professor Emeritus at Olin College, and author, about the key statistical and data skills we all need to navigate an increasingly data-driven and algorithmic world. The goal will be to dive deep into the statistical paradoxes and fallacies that get in the way of using data to make informed decisions. ","date_published":"2023-12-21T09:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/96dc5719-497e-4bdb-82e0-a336cf46ec5d.mp3","mime_type":"audio/mpeg","size_in_bytes":77400109,"duration_in_seconds":4837}]},{"id":"1565738b-1090-4efe-bb2c-2a4244eff19c","title":"Episode 22: LLMs, OpenAI, and the Existential Crisis for Machine Learning Engineering","url":"https://vanishinggradients.fireside.fm/22","content_text":"Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs) join Hugo Bowne-Anderson to talk about how LLMs and OpenAI are changing the worlds of data science, machine learning, and machine learning engineering.\n\nJeremy Howard is co-founder of fast.ai, an ex-Chief Scientist at Kaggle, and creator of the ULMFiT approach on which all modern language models are based. Shreya Shankar is at UC Berkeley, ex-Google Brain, Facebook, and Viaduct. Hamel Husain has his own generative AI and LLM consultancy Parlance Labs and was previously at Outerbounds, GitHub, and Airbnb.\n\nThey talk about\n\n\nHow LLMs shift the nature of the work we do in DS and ML,\nHow they change the tools we use,\nThe ways in which they could displace the role of traditional ML (e.g. 
will we stop using xgboost any time soon?),\nHow to navigate all the new tools and techniques,\nThe trade-offs between open and closed models,\nReactions to the recent OpenAI Dev Day and the increasing existential crisis for ML.\n\n\nLINKS\n\n\nThe panel on YouTube\nHugo and Jeremy's upcoming livestream on what the hell happened recently at OpenAI, among many other things\nVanishing Gradients on YouTube\nVanishing Gradients on twitter\n","content_html":"\u003cp\u003eJeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs) join Hugo Bowne-Anderson to talk about how LLMs and OpenAI are changing the worlds of data science, machine learning, and machine learning engineering.\u003c/p\u003e\n\n\u003cp\u003e\u003ca href=\"https://twitter.com/jeremyphoward\" rel=\"nofollow\"\u003eJeremy Howard\u003c/a\u003e is co-founder of fast.ai, an ex-Chief Scientist at Kaggle, and creator of the ULMFiT approach on which all modern language models are based. \u003ca href=\"https://twitter.com/sh_reya\" rel=\"nofollow\"\u003eShreya Shankar\u003c/a\u003e is at UC Berkeley, ex-Google Brain, Facebook, and Viaduct. \u003ca href=\"https://twitter.com/HamelHusain\" rel=\"nofollow\"\u003eHamel Husain\u003c/a\u003e has his own generative AI and LLM consultancy \u003ca href=\"https://parlance-labs.com/\" rel=\"nofollow\"\u003eParlance Labs\u003c/a\u003e and was previously at Outerbounds, GitHub, and Airbnb.\u003c/p\u003e\n\n\u003cp\u003eThey talk about\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eHow LLMs shift the nature of the work we do in DS and ML,\u003c/li\u003e\n\u003cli\u003eHow they change the tools we use,\u003c/li\u003e\n\u003cli\u003eThe ways in which they could displace the role of traditional ML (e.g. 
will we stop using xgboost any time soon?),\u003c/li\u003e\n\u003cli\u003eHow to navigate all the new tools and techniques,\u003c/li\u003e\n\u003cli\u003eThe trade-offs between open and closed models,\u003c/li\u003e\n\u003cli\u003eReactions to the recent OpenAI Dev Day and the increasing existential crisis for ML.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/MTJHvgJtynU?feature=share\" rel=\"nofollow\"\u003eThe panel on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/byxyzfrr?utm_source=vg\" rel=\"nofollow\"\u003eHugo and Jeremy\u0026#39;s upcoming livestream on what the hell happened recently at OpenAI, among many other things\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA\" rel=\"nofollow\"\u003eVanishing Gradients on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/VanishingData\" rel=\"nofollow\"\u003eVanishing Gradients on twitter\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs) join Hugo Bowne-Anderson to talk about how LLMs and OpenAI are changing the worlds of data science, machine learning, and machine learning engineering.","date_published":"2023-11-28T08:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/1565738b-1090-4efe-bb2c-2a4244eff19c.mp3","mime_type":"audio/mpeg","size_in_bytes":76924471,"duration_in_seconds":4807}]},{"id":"e329eaa4-5768-44d0-878a-a96f3f2b53f0","title":"Episode 21: Deploying LLMs in Production: Lessons Learned","url":"https://vanishinggradients.fireside.fm/21","content_text":"Hugo speaks with Hamel Husain, a machine learning engineer who loves building machine learning infrastructure and tools 👷. 
Hamel leads and contributes to many popular open-source machine learning projects. He also has extensive experience (20+ years) as a machine learning engineer across various industries, including large tech companies like Airbnb and GitHub. At GitHub, he led CodeSearchNet, a large language model for semantic search that was a precursor to CoPilot. Hamel is the founder of Parlance-Labs, a research and consultancy focused on LLMs.\n\nThey talk about generative AI, large language models, the business value they can generate, and how to get started. \n\nThey delve into\n\n\nWhere Hamel is seeing the most business interest in LLMs (spoiler: the answer isn’t only tech);\nCommon misconceptions about LLMs;\nThe skills you need to work with LLMs and GenAI models;\nTools and techniques, such as fine-tuning, RAGs, LoRA, hardware, and more!\nVendor APIs vs OSS models.\n\n\nLINKS\n\n\nOur upcoming livestream LLMs, OpenAI Dev Day, and the Existential Crisis for Machine Learning Engineering with Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs): Sign up for free!\nOur recent livestream Data and DevOps Tools for Evaluating and Productionizing LLMs with Hamel and Emil Sedgh, Lead AI engineer at Rechat -- in it, we showcase an actual industrial use case that Hamel and Emil are working on with Rechat, a real estate CRM, taking you through LLM workflows and tools.\nExtended Guide: Instruction-tune Llama 2 by Philipp Schmid\nThe livestream recording of this episode!\nHamel on twitter\n","content_html":"\u003cp\u003eHugo speaks with Hamel Husain, a machine learning engineer who loves building machine learning infrastructure and tools 👷. Hamel leads and contributes to many popular open-source machine learning projects. He also has extensive experience (20+ years) as a machine learning engineer across various industries, including large tech companies like Airbnb and GitHub. 
At GitHub, he led \u003ca href=\"https://github.com/github/CodeSearchNet\" rel=\"nofollow\"\u003eCodeSearchNet\u003c/a\u003e, a large language model for semantic search that was a precursor to CoPilot. Hamel is the founder of \u003ca href=\"https://parlance-labs.com/\" rel=\"nofollow\"\u003eParlance-Labs\u003c/a\u003e, a research and consultancy focused on LLMs.\u003c/p\u003e\n\n\u003cp\u003eThey talk about generative AI, large language models, the business value they can generate, and how to get started. \u003c/p\u003e\n\n\u003cp\u003eThey delve into\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eWhere Hamel is seeing the most business interest in LLMs (spoiler: the answer isn’t only tech);\u003c/li\u003e\n\u003cli\u003eCommon misconceptions about LLMs;\u003c/li\u003e\n\u003cli\u003eThe skills you need to work with LLMs and GenAI models;\u003c/li\u003e\n\u003cli\u003eTools and techniques, such as fine-tuning, RAGs, LoRA, hardware, and more!\u003c/li\u003e\n\u003cli\u003eVendor APIs vs OSS models.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://lu.ma/m81oepqe/utm_source=vghh\" rel=\"nofollow\"\u003eOur upcoming livestream LLMs, OpenAI Dev Day, and the Existential Crisis for Machine Learning Engineering with Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs): Sign up for free!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003eOur recent livestream \u003ca href=\"https://youtube.com/live/B_DMMlDuJB0\" rel=\"nofollow\"\u003eData and DevOps Tools for Evaluating and Productionizing LLMs\u003c/a\u003e with Hamel and Emil Sedgh, Lead AI engineer at Rechat -- in it, we showcase an actual industrial use case that Hamel and Emil are working on with Rechat, a real estate CRM, taking you through LLM workflows and tools.\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.philschmid.de/instruction-tune-llama-2\" 
rel=\"nofollow\"\u003eExtended Guide: Instruction-tune Llama 2\u003c/a\u003e by Philipp Schmid\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://youtube.com/live/l7jJhL9geZQ?feature=share\" rel=\"nofollow\"\u003eThe livestream recording of this episode!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/HamelHusain\" rel=\"nofollow\"\u003eHamel on twitter\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Hamel Husain (ex-GitHub, Airbnb), a machine learning engineer who loves building machine learning infrastructure and tools, about generative AI, large language models, the business value they can generate, and how to get started. ","date_published":"2023-11-14T16:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/e329eaa4-5768-44d0-878a-a96f3f2b53f0.mp3","mime_type":"audio/mpeg","size_in_bytes":65466947,"duration_in_seconds":4091}]},{"id":"3c0c5565-056f-45f4-a785-ec46800bb2cd","title":"Episode 20: Data Science: Past, Present, and Future","url":"https://vanishinggradients.fireside.fm/20","content_text":"Hugo speaks with Chris Wiggins (Columbia, NYTimes) and Matthew Jones (Princeton) about their recent book How Data Happened, and the Columbia course it expands upon, data: past, present, and future.\n\nChris is an associate professor of applied mathematics at Columbia University and the New York Times’ chief data scientist, and Matthew is a professor of history at Princeton University and former Guggenheim Fellow.\n\nFrom facial recognition to automated decision systems that inform who gets loans and who receives bail, we all now move through a world determined by data-empowered algorithms. These technologies didn’t just appear: they are part of a history that goes back centuries, from the census enshrined in the US Constitution to the birth of eugenics in Victorian Britain to the development of Google search.\n\nDJ Patil, former U.S. 
Chief Data Scientist, said of the book \"This is the first comprehensive look at the history of data and how power has played a critical role in shaping the history. It’s a must read for any data scientist about how we got here and what we need to do to ensure that data works for everyone.\"\n\nIf you’re a data scientist, machine learning engineer, or work with data in any way, it’s increasingly important to know more about the history and future of the work that you do and understand how your work impacts society and the world.\n\nAmong other things, they'll delve into\n\n\nthe history of human use of data;\nhow data are used to reveal insight and support decisions;\nhow data and data-powered algorithms shape, constrain, and manipulate our commercial, civic, and personal transactions and experiences; and\nhow exploration and analysis of data have become part of our logic and rhetoric of communication and persuasion.\n\n\nYou can also sign up for our next livestreamed podcast recording here! \n\nLINKS\n\n\nHow Data Happened, the book!\ndata: past, present, and future, the course\nRace After Technology, by Ruha Benjamin\nThe problem with metrics is a big problem for AI by Rachel Thomas\nVanishing Gradients on YouTube\n","content_html":"\u003cp\u003eHugo speaks with Chris Wiggins (Columbia, NYTimes) and Matthew Jones (Princeton) about their recent book How Data Happened, and the Columbia course it expands upon, data: past, present, and future.\u003c/p\u003e\n\n\u003cp\u003eChris is an associate professor of applied mathematics at Columbia University and the New York Times’ chief data scientist, and Matthew is a professor of history at Princeton University and former Guggenheim Fellow.\u003c/p\u003e\n\n\u003cp\u003eFrom facial recognition to automated decision systems that inform who gets loans and who receives bail, we all now move through a world determined by data-empowered algorithms. 
These technologies didn’t just appear: they are part of a history that goes back centuries, from the census enshrined in the US Constitution to the birth of eugenics in Victorian Britain to the development of Google search.\u003c/p\u003e\n\n\u003cp\u003eDJ Patil, former U.S. Chief Data Scientist, said of the book \u0026quot;This is the first comprehensive look at the history of data and how power has played a critical role in shaping the history. It’s a must read for any data scientist about how we got here and what we need to do to ensure that data works for everyone.\u0026quot;\u003c/p\u003e\n\n\u003cp\u003eIf you’re a data scientist, machine learning engineer, or work with data in any way, it’s increasingly important to know more about the history and future of the work that you do and understand how your work impacts society and the world.\u003c/p\u003e\n\n\u003cp\u003eAmong other things, they\u0026#39;ll delve into\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003ethe history of human use of data;\u003c/li\u003e\n\u003cli\u003ehow data are used to reveal insight and support decisions;\u003c/li\u003e\n\u003cli\u003ehow data and data-powered algorithms shape, constrain, and manipulate our commercial, civic, and personal transactions and experiences; and\u003c/li\u003e\n\u003cli\u003ehow exploration and analysis of data have become part of our logic and rhetoric of communication and persuasion.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eYou can also sign up for our next livestreamed podcast recording \u003ca href=\"https://www.eventbrite.com/e/data-science-past-present-and-future-tickets-695643357007?aff=kjvg\" rel=\"nofollow\"\u003ehere\u003c/a\u003e! 
\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://wwnorton.com/books/how-data-happened\" rel=\"nofollow\"\u003eHow Data Happened, the book!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://data-ppf.github.io/\" rel=\"nofollow\"\u003edata: past, present, and future, the course\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.ruhabenjamin.com/race-after-technology\" rel=\"nofollow\"\u003eRace After Technology, by Ruha Benjamin\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.ruhabenjamin.com/race-after-technology\" rel=\"nofollow\"\u003eThe problem with metrics is a big problem for AI by Rachel Thomas\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA\" rel=\"nofollow\"\u003eVanishing Gradients on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Chris Wiggins (Columbia, NYTimes) and Matthew Jones (Princeton) about their recent book How Data Happened, and the Columbia course it expands upon, data: past, present, and future.\r\n","date_published":"2023-10-05T15:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/3c0c5565-056f-45f4-a785-ec46800bb2cd.mp3","mime_type":"audio/mpeg","size_in_bytes":83201801,"duration_in_seconds":5199}]},{"id":"87376a4e-df73-494f-88ad-09d0313b95c6","title":"Episode 19: Privacy and Security in Data Science and Machine Learning","url":"https://vanishinggradients.fireside.fm/19","content_text":"Hugo speaks with Katharine Jarmul about privacy and security in data science and machine learning. Katharine is a Principal Data Scientist at Thoughtworks Germany focusing on privacy, ethics, and security for data science workflows. 
Previously, she has held numerous roles at large companies and startups in the US and Germany, implementing data processing and machine learning systems with a focus on reliability, testability, privacy, and security.\n\nIn this episode, Hugo and Katharine talk about\n\n\nWhat data privacy and security are, what they aren’t and the differences between them (hopefully dispelling common misconceptions along the way!);\nWhy you should care about them (hint: the answers will involve regulatory, ethical, risk, and organizational concerns);\nData governance, anonymization techniques, and privacy in data pipelines;\nPrivacy attacks!\nThe state of the art in privacy-aware machine learning and data science, including federated learning;\nWhat you need to know about the current state of regulation, including GDPR and CCPA…\n\n\nAnd much more, all the while grounding our conversation in real-world examples from data science, machine learning, business, and life!\n\nYou can also sign up for our next livestreamed podcast recording here! \n\nLINKS\n\n\nWin a copy of Practical Data Privacy, Katharine's new book!\nKatharine on twitter\nVanishing Gradients on YouTube\nProbably Private, a newsletter for privacy and data science enthusiasts\nProbably Private on YouTube\n","content_html":"\u003cp\u003eHugo speaks with Katharine Jarmul about privacy and security in data science and machine learning. Katharine is a Principal Data Scientist at Thoughtworks Germany focusing on privacy, ethics, and security for data science workflows. 
Previously, she has held numerous roles at large companies and startups in the US and Germany, implementing data processing and machine learning systems with a focus on reliability, testability, privacy, and security.\u003c/p\u003e\n\n\u003cp\u003eIn this episode, Hugo and Katharine talk about\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eWhat data privacy and security are, what they aren’t and the differences between them (hopefully dispelling common misconceptions along the way!);\u003c/li\u003e\n\u003cli\u003eWhy you should care about them (hint: the answers will involve regulatory, ethical, risk, and organizational concerns);\u003c/li\u003e\n\u003cli\u003eData governance, anonymization techniques, and privacy in data pipelines;\u003c/li\u003e\n\u003cli\u003ePrivacy attacks!\u003c/li\u003e\n\u003cli\u003eThe state of the art in privacy-aware machine learning and data science, including federated learning;\u003c/li\u003e\n\u003cli\u003eWhat you need to know about the current state of regulation, including GDPR and CCPA…\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eAnd much more, all the while grounding our conversation in real-world examples from data science, machine learning, business, and life!\u003c/p\u003e\n\n\u003cp\u003eYou can also sign up for our next livestreamed podcast recording \u003ca href=\"https://lu.ma/4b5xalpz\" rel=\"nofollow\"\u003ehere\u003c/a\u003e! 
\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://forms.gle/wkF92vyvjfZLM6qt8\" rel=\"nofollow\"\u003eWin a copy of Practical Data Privacy, Katharine\u0026#39;s new book!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/kjam\" rel=\"nofollow\"\u003eKatharine on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA\" rel=\"nofollow\"\u003eVanishing Gradients on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://probablyprivate.com/\" rel=\"nofollow\"\u003eProbably Private, a newsletter for privacy and data science enthusiasts\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/@ProbablyPrivate\" rel=\"nofollow\"\u003eProbably Private on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Katharine Jarmul about privacy and security in data science and machine learning. Katharine is a Principal Data Scientist at Thoughtworks Germany focusing on privacy, ethics, and security for data science workflows.","date_published":"2023-08-15T03:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/87376a4e-df73-494f-88ad-09d0313b95c6.mp3","mime_type":"audio/mpeg","size_in_bytes":79998085,"duration_in_seconds":4999}]},{"id":"83afeb64-21ec-4828-bf96-75a08c710391","title":"Episode 18: Research Data Science in Biotech","url":"https://vanishinggradients.fireside.fm/18","content_text":"Hugo speaks with Eric Ma about Research Data Science in Biotech. Eric leads the Research team in the Data Science and Artificial Intelligence group at Moderna Therapeutics. 
Prior to that, he was part of a special ops data science team at the Novartis Institutes for Biomedical Research's Informatics department.\n\nIn this episode, Hugo and Eric talk about\n\n\n What tools and techniques they use for drug discovery (such as mRNA vaccines and medicines);\n The importance of machine learning, deep learning, and Bayesian inference;\n How to think more generally about such high-dimensional, multi-objective optimization problems;\n The importance of open-source software and Python;\n Institutional and cultural questions, including hiring and the trade-offs between being an individual contributor and a manager;\n How they’re approaching accelerating discovery science to the speed of thought using computation, data science, statistics, and ML.\n\n\nAnd as always, much, much more!\n\nLINKS\n\n\nEric's website\nEric on twitter\nVanishing Gradients on YouTube\nCell Biology by the Numbers by Ron Milo and Rob Phillips\nEric's JAX tutorials at PyCon and SciPy\nEric's blog post on Hiring data scientists at Moderna!\n","content_html":"\u003cp\u003eHugo speaks with Eric Ma about Research Data Science in Biotech. Eric leads the Research team in the Data Science and Artificial Intelligence group at Moderna Therapeutics. 
Prior to that, he was part of a special ops data science team at the Novartis Institutes for Biomedical Research\u0026#39;s Informatics department.\u003c/p\u003e\n\n\u003cp\u003eIn this episode, Hugo and Eric talk about\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e What tools and techniques they use for drug discovery (such as mRNA vaccines and medicines);\u003c/li\u003e\n\u003cli\u003e The importance of machine learning, deep learning, and Bayesian inference;\u003c/li\u003e\n\u003cli\u003e How to think more generally about such high-dimensional, multi-objective optimization problems;\u003c/li\u003e\n\u003cli\u003e The importance of open-source software and Python;\u003c/li\u003e\n\u003cli\u003e Institutional and cultural questions, including hiring and the trade-offs between being an individual contributor and a manager;\u003c/li\u003e\n\u003cli\u003e How they’re approaching accelerating discovery science to the speed of thought using computation, data science, statistics, and ML.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eAnd as always, much, much more!\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://ericmjl.github.io/\" rel=\"nofollow\"\u003eEric\u0026#39;s website\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/ericmjl\" rel=\"nofollow\"\u003eEric on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA\" rel=\"nofollow\"\u003eVanishing Gradients on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"http://book.bionumbers.org/\" rel=\"nofollow\"\u003eCell Biology by the Numbers by Ron Milo and Rob Phillips\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003eEric\u0026#39;s JAX tutorials at \u003ca href=\"https://youtu.be/ztthQJQFe20\" rel=\"nofollow\"\u003ePyCon\u003c/a\u003e and \u003ca href=\"https://youtu.be/DmR36wtel4Y\" 
rel=\"nofollow\"\u003eSciPy\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003eEric\u0026#39;s blog post on \u003ca href=\"https://ericmjl.github.io/blog/2021/8/26/hiring-data-scientists-at-moderna-2021/\" rel=\"nofollow\"\u003eHiring data scientists at Moderna!\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Machine learning, deep learning, Bayesian inference for drug discovery, OSS, and accelerating discovery science to the speed of thought!","date_published":"2023-05-25T08:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/83afeb64-21ec-4828-bf96-75a08c710391.mp3","mime_type":"audio/mpeg","size_in_bytes":69807439,"duration_in_seconds":4362}]},{"id":"289285e2-f5aa-4900-a051-7b364f9d0bb6","title":"Episode 17: End-to-End Data Science","url":"https://vanishinggradients.fireside.fm/17","content_text":"Hugo speaks with Tanya Cashorali, a data scientist and consultant that helps businesses get the most out of data, about what end-to-end data science looks like across many industries, such as retail, defense, biotech, and sports, including\n\n\nscoping out projects,\nfiguring out the correct questions to ask,\nhow projects can change,\ndelivering on the promise,\nthe importance of rapid prototyping,\nwhat it means to put models in production, and\nhow to measure success.\n\n\nAnd much more, all the while grounding their conversation in real-world examples from data science, business, and life.\n\nIn a world where most organizations think they need AI and yet 10-15% of data science actually involves model building, it’s time to get real about how data science and machine learning actually deliver value!\n\nLINKS\n\n\nTanya on Twitter\nVanishing Gradients on YouTube\nSaving millions with a Shiny app | Data Science Hangout with Tanya Cashorali\nOur next livestream: Research Data Science in Biotech with Eric Ma\n","content_html":"\u003cp\u003eHugo speaks with Tanya Cashorali, a data scientist and consultant 
who helps businesses get the most out of data, about what end-to-end data science looks like across many industries, such as retail, defense, biotech, and sports, including\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003escoping out projects,\u003c/li\u003e\n\u003cli\u003efiguring out the correct questions to ask,\u003c/li\u003e\n\u003cli\u003ehow projects can change,\u003c/li\u003e\n\u003cli\u003edelivering on the promise,\u003c/li\u003e\n\u003cli\u003ethe importance of rapid prototyping,\u003c/li\u003e\n\u003cli\u003ewhat it means to put models in production, and\u003c/li\u003e\n\u003cli\u003ehow to measure success.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eAnd much more, all the while grounding their conversation in real-world examples from data science, business, and life.\u003c/p\u003e\n\n\u003cp\u003eIn a world where most organizations think they need AI and yet 10-15% of data science actually involves model building, it’s time to get real about how data science and machine learning actually deliver value!\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLINKS\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/tanyacash21\" rel=\"nofollow\"\u003eTanya on Twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA\" rel=\"nofollow\"\u003eVanishing Gradients on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://youtu.be/qdAroyFRFCg\" rel=\"nofollow\"\u003eSaving millions with a Shiny app | Data Science Hangout with Tanya Cashorali\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.eventbrite.com/e/research-data-science-in-biotech-tickets-550400882857?aff=fs\" rel=\"nofollow\"\u003eOur next livestream: Research Data Science in Biotech with Eric Ma\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"It’s time to get real about how data science and machine learning actually deliver value! 
Hugo speaks with Tanya Cashorali, a data scientist and consultant that helps businesses get the most out of data, about what end-to-end data science looks like across many industries, such as retail, defense, biotech, and sports.","date_published":"2023-02-17T17:30:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/289285e2-f5aa-4900-a051-7b364f9d0bb6.mp3","mime_type":"audio/mpeg","size_in_bytes":73030076,"duration_in_seconds":4564}]},{"id":"9eb29a37-c694-45a8-bae5-38e5b3fd5849","title":"Episode 16: Data Science and Decision Making Under Uncertainty","url":"https://vanishinggradients.fireside.fm/16","content_text":"Hugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about decision making under uncertainty and how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world. \n\nThis is part 2 of a two part conversation in which we delve into decision making under uncertainty. Feel free to check out part 1 here but this episode should also stand alone.\n\nWhy am I speaking to JD about all of this? 
Because not only is he a wild conversationalist with a real knack for explaining hard to grok concepts with illustrative examples and useful stories, but he has worked for many years in re-insurance, that’s right, not insurance but re-insurance – these are the people who insure the insurers so if anyone can actually tell us about risk and uncertainty in decision making, it’s him!\n\nIn part 1, we discussed risk, uncertainty, probabilistic thinking, and simulation, all with a view towards improving decision making.\n\nIn this, part 2, we discuss the ins and outs of decision making under uncertainty, including\n\n\nHow data science can be more tightly coupled with the decision function in organisations;\nSome common mistakes and failure modes of making decisions under uncertainty;\nHeuristics for principled decision-making in data science;\nThe intersection of model building, storytelling, and cognitive biases to keep in mind;\n\n\nAs JD says, and I paraphrase, “You may think you train your models, but your models are really training you.”\n\nLinks\n\n\nVanishing Gradients' new YouTube channel!\nJD on twitter\nExecutive Data Science, episode 5 of Vanishing Gradients, in which Jim Savage and Hugo talk through decision making and why you should always be integrating your loss function over your posterior\nFooled by Randomness by Nassim Taleb\nSuperforecasting: The Art and Science of Prediction Philip E. Tetlock and Dan Gardner\nThinking in Bets by Annie Duke\nThe Signal and the Noise: Why So Many Predictions Fail by Nate Silver\nThinking, Fast and Slow by Daniel Kahneman\n","content_html":"\u003cp\u003eHugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about decision making under uncertainty and how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world. 
\u003c/p\u003e\n\n\u003cp\u003eThis is part 2 of a two part conversation in which we delve into decision making under uncertainty. Feel free to check out part 1 \u003ca href=\"https://vanishinggradients.fireside.fm/15\" rel=\"nofollow\"\u003ehere\u003c/a\u003e but this episode should also stand alone.\u003c/p\u003e\n\n\u003cp\u003eWhy am I speaking to JD about all of this? Because not only is he a wild conversationalist with a real knack for explaining hard to grok concepts with illustrative examples and useful stories, but he has worked for many years in re-insurance, that’s right, not insurance but re-insurance – these are the people who insure the insurers so if anyone can actually tell us about risk and uncertainty in decision making, it’s him!\u003c/p\u003e\n\n\u003cp\u003eIn part 1, we discussed risk, uncertainty, probabilistic thinking, and simulation, all with a view towards improving decision making.\u003c/p\u003e\n\n\u003cp\u003eIn this, part 2, we discuss the ins and outs of decision making under uncertainty, including\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eHow data science can be more tightly coupled with the decision function in organisations;\u003c/li\u003e\n\u003cli\u003eSome common mistakes and failure modes of making decisions under uncertainty;\u003c/li\u003e\n\u003cli\u003eHeuristics for principled decision-making in data science;\u003c/li\u003e\n\u003cli\u003eThe intersection of model building, storytelling, and cognitive biases to keep in mind;\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eAs JD says, and I paraphrase, “You may think you train your models, but your models are really training you.”\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA\" rel=\"nofollow\"\u003eVanishing Gradients\u0026#39; new YouTube channel!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca 
href=\"https://twitter.com/CMastication\" rel=\"nofollow\"\u003eJD on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://vanishinggradients.fireside.fm/5\" rel=\"nofollow\"\u003eExecutive Data Science, episode 5 of Vanishing Gradients, in which Jim Savage and Hugo talk through decision making and why you should always be integrating your loss function over your posterior\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/Fooled_by_Randomness\" rel=\"nofollow\"\u003eFooled by Randomness by Nassim Taleb\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction\" rel=\"nofollow\"\u003eSuperforecasting: The Art and Science of Prediction Philip E. Tetlock and Dan Gardner\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.penguin.com.au/books/thinking-in-bets-9780735216372\" rel=\"nofollow\"\u003eThinking in Bets by Annie Duke\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/The_Signal_and_the_Noise\" rel=\"nofollow\"\u003eThe Signal and the Noise: Why So Many Predictions Fail by Nate Silver\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow\" rel=\"nofollow\"\u003eThinking, Fast and Slow by Daniel Kahneman\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about data science, ML, and the nitty gritty of decision making under uncertainty, including how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world. 
","date_published":"2022-12-15T08:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/9eb29a37-c694-45a8-bae5-38e5b3fd5849.mp3","mime_type":"audio/mpeg","size_in_bytes":59947028,"duration_in_seconds":4995}]},{"id":"c2e27880-6d10-4b0b-afd7-e349d219662a","title":"Episode 15: Uncertainty, Risk, and Simulation in Data Science","url":"https://vanishinggradients.fireside.fm/15","content_text":"Hugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about decision making under uncertainty and how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world. \n\nThis is part 1 of a two part conversation. In this, part 1, we discuss risk, uncertainty, probabilistic thinking, and simulation, all with a view towards improving decision making and we draw on examples from our personal lives, the pandemic, our jobs, the reinsurance space, and the corporate world. In part 2, we’ll get into the nitty gritty of decision making under uncertainty.\n\nAs JD says, and I paraphrase, “You may think you train your models, but your models are really training you.”\n\nLinks\n\n\nVanishing Gradients' new YouTube channel!\nJD on twitter\nExecutive Data Science, episode 5 of Vanishing Gradients, in which Jim Savage and Hugo talk through decision making and why you should always be integrating your loss function over your posterior\nFooled by Randomness by Nassim Taleb\nSuperforecasting: The Art and Science of Prediction Philip E. 
Tetlock and Dan Gardner\nThinking in Bets by Annie Duke\nThe Signal and the Noise: Why So Many Predictions Fail by Nate Silver\nThinking, Fast and Slow by Daniel Kahneman\n\n","content_html":"\u003cp\u003eHugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about decision making under uncertainty and how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world. \u003c/p\u003e\n\n\u003cp\u003eThis is part 1 of a two part conversation. In this, part 1, we discuss risk, uncertainty, probabilistic thinking, and simulation, all with a view towards improving decision making and we draw on examples from our personal lives, the pandemic, our jobs, the reinsurance space, and the corporate world. In part 2, we’ll get into the nitty gritty of decision making under uncertainty.\u003c/p\u003e\n\n\u003cp\u003eAs JD says, and I paraphrase, “You may think you train your models, but your models are really training you.”\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA\" rel=\"nofollow\"\u003eVanishing Gradients\u0026#39; new YouTube channel!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/CMastication\" rel=\"nofollow\"\u003eJD on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://vanishinggradients.fireside.fm/5\" rel=\"nofollow\"\u003eExecutive Data Science, episode 5 of Vanishing Gradients, in which Jim Savage and Hugo talk through decision making and why you should always be integrating your loss function over your posterior\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/Fooled_by_Randomness\" rel=\"nofollow\"\u003eFooled by Randomness by Nassim 
Taleb\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction\" rel=\"nofollow\"\u003eSuperforecasting: The Art and Science of Prediction Philip E. Tetlock and Dan Gardner\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.penguin.com.au/books/thinking-in-bets-9780735216372\" rel=\"nofollow\"\u003eThinking in Bets by Annie Duke\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/The_Signal_and_the_Noise\" rel=\"nofollow\"\u003eThe Signal and the Noise: Why So Many Predictions Fail by Nate Silver\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow\" rel=\"nofollow\"\u003eThinking, Fast and Slow by Daniel Kahneman\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about decision making under uncertainty and how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world.","date_published":"2022-12-08T05:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/c2e27880-6d10-4b0b-afd7-e349d219662a.mp3","mime_type":"audio/mpeg","size_in_bytes":38526097,"duration_in_seconds":3210}]},{"id":"c02c6e9f-2a38-4f03-a8f5-4b19ed8966c3","title":"Episode 14: Decision Science, MLOps, and Machine Learning Everywhere","url":"https://vanishinggradients.fireside.fm/14","content_text":"Hugo Bowne-Anderson, host of Vanishing Gradients, reads 3 audio essays about decision science, MLOps, and what happens when machine learning models are everywhere.\n\nLinks\n\n\nOur upcoming Vanishing Gradients live recording of Data Science and Decision Making Under Uncertainty with Hugo and JD 
Long!\nDecision-Making in a Time of Crisis by Hugo Bowne-Anderson\nMLOps and DevOps: Why Data Makes It Different by Ville Tuulos and Hugo Bowne-Anderson\nThe above essay syndicated on VentureBeat\nWhen models are everywhere by Hugo Bowne-Anderson and Mike Loukides\n","content_html":"\u003cp\u003eHugo Bowne-Anderson, host of Vanishing Gradients, reads 3 audio essays about decision science, MLOps, and what happens when machine learning models are everywhere.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.eventbrite.com/e/data-science-and-decision-making-under-uncertainty-tickets-467379864757?aff=vg\" rel=\"nofollow\"\u003eOur upcoming Vanishing Gradients live recording of Data Science and Decision Making Under Uncertainty with Hugo and JD Long!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.oreilly.com/radar/decision-making-in-a-time-of-crisis/\" rel=\"nofollow\"\u003eDecision-Making in a Time of Crisis\u003c/a\u003e by Hugo Bowne-Anderson\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.oreilly.com/radar/mlops-and-devops-why-data-makes-it-different/\" rel=\"nofollow\"\u003eMLOps and DevOps: Why Data Makes It Different\u003c/a\u003e by Ville Tuulos and Hugo Bowne-Anderson\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://venturebeat.com/business/mlops-vs-devops-why-data-makes-it-different/\" rel=\"nofollow\"\u003eThe above essay syndicated on VentureBeat\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.oreilly.com/radar/when-models-are-everywhere/\" rel=\"nofollow\"\u003eWhen models are everywhere\u003c/a\u003e by Hugo Bowne-Anderson and Mike Loukides\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo reads 3 audio essays about decision science, MLOps, and what happens when machine learning models are 
everywhere","date_published":"2022-11-21T10:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/c02c6e9f-2a38-4f03-a8f5-4b19ed8966c3.mp3","mime_type":"audio/mpeg","size_in_bytes":66269255,"duration_in_seconds":4141}]},{"id":"0d9dafd4-c27b-4e49-9431-58c70de4d82d","title":"Episode 13: The Data Science Skills Gap, Economics, and Public Health","url":"https://vanishinggradients.fireside.fm/13","content_text":"Hugo speaks with Norma Padron about data science education and continuous learning for people working in healthcare, broadly construed, along with how we can think about the democratization of data science skills more generally.\n\nNorma is CEO of EmpiricaLab, where her team’s mission is to bridge work and training and empower healthcare teams to focus on what they care about the most: patient care. In a word, EmpiricaLab is a platform focused on peer learning and last-mile training for healthcare teams.\n\nAs you’ll discover, Norma’s background is fascinating: with a Ph.D. in health policy and management from Yale University, a master's degree in economics from Duke University (among other things), and then working with multiple early stage digital health companies to accelerate their growth and scale, this is a wide ranging conversation about how and where learning actually occurs, particularly with respect to data science; we talk about how the worlds of economics and econometrics, including causal inference, can be used to make data science a more robust and less fragile field, and why these disciplines are essential to both public and health policy. It was really invigorating to talk about the data skills gaps that exist in organizations and how Norma’s team at EmpiricaLab is thinking about solving it in the health space using a 3 tiered solution of content creation, a social layer, and an information discovery platform. 
\n\nAll of this in service of a key question we’re facing in this field: how do you get the right data skills, tools, and workflows, in the hands of the people who need them, when the space is evolving so quickly?\n\nLinks\n\n\nNorma's website\nEmpiricaLab\nNorma on twitter\n","content_html":"\u003cp\u003eHugo speaks with Norma Padron about data science education and continuous learning for people working in healthcare, broadly construed, along with how we can think about the democratization of data science skills more generally.\u003c/p\u003e\n\n\u003cp\u003eNorma is CEO of EmpiricaLab, where her team’s mission is to bridge work and training and empower healthcare teams to focus on what they care about the most: patient care. In a word, EmpiricaLab is a platform focused on peer learning and last-mile training for healthcare teams.\u003c/p\u003e\n\n\u003cp\u003eAs you’ll discover, Norma’s background is fascinating: with a Ph.D. in health policy and management from Yale University, a master\u0026#39;s degree in economics from Duke University (among other things), and then working with multiple early stage digital health companies to accelerate their growth and scale, this is a wide ranging conversation about how and where learning actually occurs, particularly with respect to data science; we talk about how the worlds of economics and econometrics, including causal inference, can be used to make data science a more robust and less fragile field, and why these disciplines are essential to both public and health policy. It was really invigorating to talk about the data skills gaps that exist in organizations and how Norma’s team at EmpiricaLab is thinking about solving it in the health space using a 3 tiered solution of content creation, a social layer, and an information discovery platform. 
\u003c/p\u003e\n\n\u003cp\u003eAll of this in service of a key question we’re facing in this field: how do you get the right data skills, tools, and workflows, in the hands of the people who need them, when the space is evolving so quickly?\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.normapadron.com/\" rel=\"nofollow\"\u003eNorma\u0026#39;s website\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.empiricalab.com/\" rel=\"nofollow\"\u003eEmpiricaLab\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/NormaPadron__\" rel=\"nofollow\"\u003eNorma on twitter\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Norma Padron, CEO of EmpiricaLab, about data science education and continuous learning for people working in healthcare, broadly construed, along with how we can think about the democratization of data science skills more generally.","date_published":"2022-10-12T09:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/0d9dafd4-c27b-4e49-9431-58c70de4d82d.mp3","mime_type":"audio/mpeg","size_in_bytes":59542966,"duration_in_seconds":4961}]},{"id":"edfe9061-d42f-4c7d-b0af-e769252ae94e","title":"Episode 12: Data Science for Social Media: Twitter and Reddit","url":"https://vanishinggradients.fireside.fm/12","content_text":"Hugo speaks with Katie Bauer about her time working in data science at both Twitter and Reddit. At the time of recording, Katie was a data science manager at Twitter and prior to that, a founding member of the data team at Reddit. 
She’s now Head of Data Science at Gloss Genius so congrats on the new job, Katie!\n\nIn this conversation, we dive into what types of challenges social media companies face that data science is equipped to solve: in doing so, we traverse \n\n\nthe differences and similarities in companies such as Twitter and Reddit, \nthe major differences in being an early member of a data team and joining an established data function at a larger organization, \nthe supreme importance of robust measurement and telemetry in data science, along with \nthe mixed incentives for career data scientists, such as building flashy new things instead of maintaining existing infrastructure.\n\n\nI’ve always found conversations with Katie to be a treasure trove of insights into data science and machine learning practice, along with key learnings about data science management. \n\nIn a word, Katie helps me to understand our space better. In this conversation, she told me that one important function data science can serve in any organization is creating a shared context for lots of different people in the org. We dive deep into what this actually means, how it can play out, traversing the world of dashboards, metric stores, feature stores, machine learning products, the need for top-down support, and much, much more.","content_html":"\u003cp\u003eHugo speaks with \u003ca href=\"https://twitter.com/imightbemary\" rel=\"nofollow\"\u003eKatie Bauer\u003c/a\u003e about her time working in data science at both Twitter and Reddit. At the time of recording, Katie was a data science manager at Twitter and prior to that, a founding member of the data team at Reddit. 
She’s now Head of Data Science at Gloss Genius so congrats on the new job, Katie!\u003c/p\u003e\n\n\u003cp\u003eIn this conversation, we dive into what types of challenges social media companies face that data science is equipped to solve: in doing so, we traverse \u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003ethe differences and similarities in companies such as Twitter and Reddit, \u003c/li\u003e\n\u003cli\u003ethe major differences in being an early member of a data team and joining an established data function at a larger organization, \u003c/li\u003e\n\u003cli\u003ethe supreme importance of robust measurement and telemetry in data science, along with \u003c/li\u003e\n\u003cli\u003ethe mixed incentives for career data scientists, such as building flashy new things instead of maintaining existing infrastructure.\u003c/li\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eI’ve always found conversations with Katie to be a treasure trove of insights into data science and machine learning practice, along with key learnings about data science management. \u003c/p\u003e\n\n\u003cp\u003eIn a word, Katie helps me to understand our space better. In this conversation, she told me that one important function data science can serve in any organization is creating a shared context for lots of different people in the org. We dive deep into what this actually means, how it can play out, traversing the world of dashboards, metric stores, feature stores, machine learning products, the need for top-down support, and much, much more.\u003c/p\u003e","summary":"Hugo speaks with Katie Bauer about her time working in data science at both Twitter and Reddit. At the time of recording, Katie was a data science manager at Twitter and prior to that, a founding member of the data team at Reddit. 
","date_published":"2022-09-30T10:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/edfe9061-d42f-4c7d-b0af-e769252ae94e.mp3","mime_type":"audio/mpeg","size_in_bytes":89041208,"duration_in_seconds":5565}]},{"id":"697e817a-b886-4057-9dc1-4c9868c0b064","title":"Episode 11: Data Science: The Great Stagnation","url":"https://vanishinggradients.fireside.fm/11","content_text":"Hugo speaks with Mark Saroufim, an Applied AI Engineer at Meta who works on PyTorch where his team’s main focus is making it as easy as possible for people to deploy PyTorch in production outside Meta. \n\nMark first came on our radar with an essay he wrote called Machine Learning: the Great Stagnation, which was concerned with the stagnation in machine learning in academic research and in which he stated\n\n\nMachine learning researchers can now engage in risk-free, high-income, high-prestige work. They are today’s Medieval Catholic priests.\n\n\nThis is just the tip of the iceberg of Mark’s critical and often sociological eye and one of the reasons I was excited to speak with him.\n\nIn this conversation, we talk about the importance of open source software in modern data science and machine learning and how Mark thinks about making it as easy to use as possible. We also talk about risk assessments in considering whether to adopt open source or not, the supreme importance of good documentation, and what we can learn from the world of video game development when thinking about open source.\n\nWe then dive into the rise of the machine learning cult leader persona, in the context of examples such as Hugging Face and the community they’ve built. 
We discuss the role of marketing in open source tooling, along with for-profit data science and ML tooling, how it can impact you as an end user, and how much of data science can be considered differing forms of live action role playing and simulation.\n\nWe also talk about developer marketing and content for data professionals and how we see some of the largest names in ML research being those that have gigantic Twitter followings, such as Andrej Karpathy. This is part of a broader trend in society about the skills that are required to capture significant mind share these days.\n\nIf that’s not enough, we jump into how machine learning ideally allows businesses to build sustainable and defensible moats, by which we mean the ability to maintain competitive advantages over competitors to retain market share.\n\nIn between this interview and its release, PyTorch joined the Linux Foundation, which is something we’ll need to get Mark back to discuss sometime.\n\nLinks\n\n\nThe Myth of Objective Tech Screens\nMachine Learning: The Great Stagnation\nFear the Boom and Bust: Keynes vs. Hayek - The Original Economics Rap Battle!\nHistory and the Security of Property by Nick Szabo\nMark on YouTube\nMark's Substack\nMark's Discord\n","content_html":"\u003cp\u003eHugo speaks with Mark Saroufim, an Applied AI Engineer at Meta who works on PyTorch where his team’s main focus is making it as easy as possible for people to deploy PyTorch in production outside Meta. \u003c/p\u003e\n\n\u003cp\u003eMark first came on our radar with an essay he wrote called \u003ca href=\"https://marksaroufim.substack.com/p/machine-learning-the-great-stagnation\" rel=\"nofollow\"\u003eMachine Learning: the Great Stagnation\u003c/a\u003e, which was concerned with the stagnation in machine learning in academic research and in which he stated\u003c/p\u003e\n\n\u003cblockquote\u003e\n\u003cp\u003eMachine learning researchers can now engage in risk-free, high-income, high-prestige work. 
They are today’s Medieval Catholic priests.\u003c/p\u003e\n\u003c/blockquote\u003e\n\n\u003cp\u003eThis is just the tip of the iceberg of Mark’s critical and often sociological eye and one of the reasons I was excited to speak with him.\u003c/p\u003e\n\n\u003cp\u003eIn this conversation, we talk about the importance of open source software in modern data science and machine learning and how Mark thinks about making it as easy to use as possible. We also talk about risk assessments in considering whether to adopt open source or not, the supreme importance of good documentation, and what we can learn from the world of video game development when thinking about open source.\u003c/p\u003e\n\n\u003cp\u003eWe then dive into the rise of the machine learning cult leader persona, in the context of examples such as Hugging Face and the community they’ve built. We discuss the role of marketing in open source tooling, along with for-profit data science and ML tooling, how it can impact you as an end user, and how much of data science can be considered differing forms of live action role playing and simulation.\u003c/p\u003e\n\n\u003cp\u003eWe also talk about developer marketing and content for data professionals and how we see some of the largest names in ML research being those that have gigantic Twitter followings, such as Andrej Karpathy. 
This is part of a broader trend in society about the skills that are required to capture significant mind share these days.\u003c/p\u003e\n\n\u003cp\u003eIf that’s not enough, we jump into how machine learning ideally allows businesses to build sustainable and defensible moats, by which we mean the ability to maintain competitive advantages over competitors to retain market share.\u003c/p\u003e\n\n\u003cp\u003eIn between this interview and its release, PyTorch joined the Linux Foundation, which is something we’ll need to get Mark back to discuss sometime.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://marksaroufim.substack.com/p/the-myth-of-objective-tech-screens\" rel=\"nofollow\"\u003eThe Myth of Objective Tech Screens\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://marksaroufim.substack.com/p/machine-learning-the-great-stagnation\" rel=\"nofollow\"\u003eMachine Learning: The Great Stagnation\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/watch?v=d0nERTFo-Sk\" rel=\"nofollow\"\u003eFear the Boom and Bust: Keynes vs. 
Hayek - The Original Economics Rap Battle!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://archive.ph/dRXEK#selection-21.0-21.36\" rel=\"nofollow\"\u003eHistory and the Security of Property\u003c/a\u003e by Nick Szabo\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/marksaroufim\" rel=\"nofollow\"\u003eMark on YouTube\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://marksaroufim.substack.com/p/machine-learning-the-great-stagnation\" rel=\"nofollow\"\u003eMark\u0026#39;s Substack\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://discord.com/invite/drmuTjWZrm\" rel=\"nofollow\"\u003eMark\u0026#39;s Discord\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Mark Saroufim, an Applied AI Engineer at Meta who works on PyTorch where his team’s main focus is making it as easy as possible for people to deploy PyTorch in production outside Meta. ","date_published":"2022-09-16T12:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/697e817a-b886-4057-9dc1-4c9868c0b064.mp3","mime_type":"audio/mpeg","size_in_bytes":101417351,"duration_in_seconds":6338}]},{"id":"4552d501-5bc5-43c9-9246-5dbd221ebd06","title":"Episode 10: Investing in Machine Learning","url":"https://vanishinggradients.fireside.fm/10","content_text":"Hugo speaks with Sarah Catanzaro, General Partner at Amplify Partners, about investing in data science and machine learning tooling and where we see progress happening in the space.\n\nSarah invests in the tools that we both wish we had earlier in our careers: tools that enable data scientists and machine learners to collect, store, manage, analyze, and model data more effectively. As you’ll discover, Sarah identifies as a scientist first and an investor second and still believes that her mission is to enable companies to become data-driven and to generate ROI through machine and statistical learning. 
In her words, she’s still that cuckoo kid who’s ranting and raving about how data and AI will shift every tide.\n\nIn this conversation, we talk about what scientific inquiry actually is and the elements of playfulness and seriousness it necessarily involves, and how it can be used to generate business value. We talk about Sarah’s unorthodox path from a data scientist working in defense to her time at Palantir and how that led her to build out a data team and function for a venture capital firm and then to becoming a VC in the data tooling space.\n\nWe then really dive into the data science and machine learning tooling space to figure out why it’s so fragmented: we look to the data analytics stack and software engineering communities to find historical tethers that may be useful. We discuss the moving parts that led to the establishment of a standard, a system of record, and clearly defined roles in analytics and what we can learn from that for machine learning!\n\nWe also dive into the development of tools, workflows, and division of labour as partial exercises in pattern recognition and how this can be at odds with the variance we see in the machine learning landscape, more generally!\n\nTwo take-aways are that we need best practices and we need more standardization.\n\nWe also discussed, with all our focus and conversations on tools, what conversation we’re missing, and Sarah was adamant that we need to be focusing on questions, not solutions, and even questioning what ML is useful for and what it isn’t, diving into a bunch of thoughtful and nuanced examples.\n\nI’m also grateful that Sarah let me take her down a slightly dangerous and self-critical path where we riffed on both our roles in potentially contributing to the tragedy of the commons we’re all experiencing in the data tooling landscape, me working in tool building, developer relations, and in marketing, and Sarah in venture capital. 
","content_html":"\u003cp\u003eHugo speaks with Sarah Catanzaro, General Partner at Amplify Partners, about investing in data science and machine learning tooling and where we see progress happening in the space.\u003c/p\u003e\n\n\u003cp\u003eSarah invests in the tools that we both wish we had earlier in our careers: tools that enable data scientists and machine learners to collect, store, manage, analyze, and model data more effectively. As you’ll discover, Sarah identifies as a scientist first and an investor second and still believes that her mission is to enable companies to become data-driven and to generate ROI through machine and statistical learning. In her words, she’s still that cuckoo kid who’s ranting and raving about how data and AI will shift every tide.\u003c/p\u003e\n\n\u003cp\u003eIn this conversation, we talk about what scientific inquiry actually is and the elements of playfulness and seriousness it necessarily involves, and how it can be used to generate business value. We talk about Sarah’s unorthodox path from a data scientist working in defense to her time at Palantir and how that led her to build out a data team and function for a venture capital firm and then to becoming a VC in the data tooling space.\u003c/p\u003e\n\n\u003cp\u003eWe then really dive into the data science and machine learning tooling space to figure out why it’s so fragmented: we look to the data analytics stack and software engineering communities to find historical tethers that may be useful. 
We discuss the moving parts that led to the establishment of a standard, a system of record, and clearly defined roles in analytics and what we can learn from that for machine learning!\u003c/p\u003e\n\n\u003cp\u003eWe also dive into the development of tools, workflows, and division of labour as partial exercises in pattern recognition and how this can be at odds with the variance we see in the machine learning landscape, more generally!\u003c/p\u003e\n\n\u003cp\u003eTwo take-aways are that we need best practices and we need more standardization.\u003c/p\u003e\n\n\u003cp\u003eWe also discussed, with all our focus and conversations on tools, what conversation we’re missing, and Sarah was adamant that we need to be focusing on questions, not solutions, and even questioning what ML is useful for and what it isn’t, diving into a bunch of thoughtful and nuanced examples.\u003c/p\u003e\n\n\u003cp\u003eI’m also grateful that Sarah let me take her down a slightly dangerous and self-critical path where we riffed on both our roles in potentially contributing to the tragedy of the commons we’re all experiencing in the data tooling landscape, me working in tool building, developer relations, and in marketing, and Sarah in venture capital. 
\u003c/p\u003e","summary":"Hugo speaks with Sarah Catanzaro, General Partner at Amplify Partners, about investing in data science and machine learning tooling and where we see progress happening in the space.","date_published":"2022-08-19T01:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/4552d501-5bc5-43c9-9246-5dbd221ebd06.mp3","mime_type":"audio/mpeg","size_in_bytes":83101043,"duration_in_seconds":5193}]},{"id":"86c9a94f-4c33-40a8-aa83-50a9e125484b","title":"9: AutoML, Literate Programming, and Data Tooling Cargo Cults","url":"https://vanishinggradients.fireside.fm/9","content_text":"Hugo speaks with Hamel Husain, Head of Data Science at Outerbounds, with extensive experience in data science consulting, at DataRobot, Airbnb, and Github.\n\nIn this conversation, they talk about Hamel's early days in data science, consulting for a wide array of companies, such as Crocs, restaurants, and casinos in Las Vegas, diving into what data science even looked like in 2005 and how you could think about delivering business value using data and analytics back then.\n\nThey talk about his trajectory in moving to data science and machine learning in Silicon Valley, what his expectations were, and what he actually found there.\n\nThey then take a dive into AutoML, discussing what should be automated in Machine learning and what shouldn’t. They talk about software engineering best practices and what aspects it would be useful for data scientists to know about.\n\nThey also got to talk about the importance of literate programming, notebooks, and documentation in data science and ML. 
All this and more!\n\nLinks\n\n\nHamel on twitter\nThe Outerbounds documentation project repo\nPractical Advice for R in Production\nnbdev: Create delightful python projects using Jupyter Notebooks\n","content_html":"\u003cp\u003eHugo speaks with Hamel Husain, Head of Data Science at Outerbounds, with extensive experience in data science consulting, at DataRobot, Airbnb, and Github.\u003c/p\u003e\n\n\u003cp\u003eIn this conversation, they talk about Hamel\u0026#39;s early days in data science, consulting for a wide array of companies, such as Crocs, restaurants, and casinos in Las Vegas, diving into what data science even looked like in 2005 and how you could think about delivering business value using data and analytics back then.\u003c/p\u003e\n\n\u003cp\u003eThey talk about his trajectory in moving to data science and machine learning in Silicon Valley, what his expectations were, and what he actually found there.\u003c/p\u003e\n\n\u003cp\u003eThey then take a dive into AutoML, discussing what should be automated in Machine learning and what shouldn’t. They talk about software engineering best practices and what aspects it would be useful for data scientists to know about.\u003c/p\u003e\n\n\u003cp\u003eThey also got to talk about the importance of literate programming, notebooks, and documentation in data science and ML. 
All this and more!\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/HamelHusain\" rel=\"nofollow\"\u003eHamel on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/outerbounds/docs\" rel=\"nofollow\"\u003eThe Outerbounds documentation project repo\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.rstudio.com/blog/practical-advice-for-r-in-production-answering-your-questions/\" rel=\"nofollow\"\u003ePractical Advice for R in Production\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://nbdev.fast.ai/\" rel=\"nofollow\"\u003enbdev: Create delightful python projects using Jupyter Notebooks\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Hamel Husain, Head of Data Science at Outerbounds, with extensive experience in data science consulting, at DataRobot, Airbnb, and Github.","date_published":"2022-07-19T23:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/86c9a94f-4c33-40a8-aa83-50a9e125484b.mp3","mime_type":"audio/mpeg","size_in_bytes":97642250,"duration_in_seconds":6102}]},{"id":"fe4aec2a-6f67-4259-ae88-6baefd6f008e","title":"Episode 8: The Open Source Cybernetic Revolution","url":"https://vanishinggradients.fireside.fm/8","content_text":"Hugo speaks with Peter Wang, CEO of Anaconda, about what the value proposition of data science actually is, data not as the new oil, but rather data as toxic, nuclear sludge, the fact that data isn’t real (and what we really have are frozen models), and the future promise of data science.\n\nThey also dive into an experimental conversation around open source software development as a model for the development of human civilization, in the context of developing systems that prize local generativity over global extractive principles. 
If that’s a mouthful, which it was, or an earful, which it may have been, all will be revealed in the conversation.\n\nLinks\n\n\nPeter on twitter\nAnaconda Nucleus\nJordan Hall on the Jim Rutt Show: Game B\nMeditations On Moloch -- On multipolar traps\nHere Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky\nFinite and Infinite Games by James Carse\nGoverning the Commons: The Evolution of Institutions for Collective Action by Elinor Ostrom\nElinor Ostrom's 8 Principles for Managing A Commons\nHaunted by Data, a beautiful and mesmerising talk by Pinboard.in founder Maciej Ceglowski\n","content_html":"\u003cp\u003eHugo speaks with Peter Wang, CEO of Anaconda, about what the value proposition of data science actually is, data not as the new oil, but rather data as toxic, nuclear sludge, the fact that data isn’t real (and what we really have are frozen models), and the future promise of data science.\u003c/p\u003e\n\n\u003cp\u003eThey also dive into an experimental conversation around open source software development as a model for the development of human civilization, in the context of developing systems that prize local generativity over global extractive principles. 
If that’s a mouthful, which it was, or an earful, which it may have been, all will be revealed in the conversation.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/pwang\" rel=\"nofollow\"\u003ePeter on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://anaconda.cloud/\" rel=\"nofollow\"\u003eAnaconda Nucleus\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.jimruttshow.com/jordan-greenhall-hall/\" rel=\"nofollow\"\u003eJordan Hall on the Jim Rutt Show\u003c/a\u003e: Game B\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://slatestarcodex.com/2014/07/30/meditations-on-moloch\" rel=\"nofollow\"\u003eMeditations On Moloch\u003c/a\u003e -- On multipolar traps\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/Here_Comes_Everybody_(book)\" rel=\"nofollow\"\u003eHere Comes Everybody: The Power of Organizing Without Organizations\u003c/a\u003e by Clay Shirky\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/Finite_and_Infinite_Games\" rel=\"nofollow\"\u003eFinite and Infinite Games\u003c/a\u003e by James Carse\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.cambridge.org/core/books/governing-the-commons/7AB7AE11BADA84409C34815CC288CD79\" rel=\"nofollow\"\u003eGoverning the Commons: The Evolution of Institutions for Collective Action\u003c/a\u003e by Elinor Ostrom\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.onthecommons.org/magazine/elinor-ostroms-8-principles-managing-commmons\" rel=\"nofollow\"\u003eElinor Ostrom\u0026#39;s 8 Principles for Managing A Commons\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://idlewords.com/talks/haunted_by_data.htm\" rel=\"nofollow\"\u003eHaunted by Data\u003c/a\u003e, a beautiful and mesmerising talk by Pinboard.in founder Maciej Ceglowski\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks 
with Peter Wang, CEO of Anaconda, about what the value proposition of data science actually is, data not as the new oil, but rather data as toxic, nuclear sludge, the fact that data isn’t real (and what we really have are frozen models), the future promise of data science, and gifting economies with finite-game economics thrust onto them.\r\n\r\nThey also dive into an experimental conversation around open source software development as a model for the development of human civilization, in the context of developing systems that prize local generativity over global extractive principles. If that’s a mouthful, which it was, or an earful, which it may have been, all will be revealed in the conversation.\r\n","date_published":"2022-05-16T15:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/fe4aec2a-6f67-4259-ae88-6baefd6f008e.mp3","mime_type":"audio/mpeg","size_in_bytes":63326903,"duration_in_seconds":3957}]},{"id":"da4fab18-c5fa-460d-9ddf-0c8f1e60f3f8","title":"Episode 7: The Evolution of Python for Data Science","url":"https://vanishinggradients.fireside.fm/7","content_text":"Hugo speaks with Peter Wang, CEO of Anaconda, about how Python became so big in data science, machine learning, and AI. 
They jump into many of the technical and sociological beginnings of Python being used for data science, a history of PyData, the conda distribution, and NUMFOCUS.\n\nThey also talk about the emergence of online collaborative environments, particularly with respect to open source, and attempt to figure out the moving parts of PyData and why it has had the impact it has, including the fact that many core developers were not computer scientists or software engineers, but rather scientists and researchers building tools that they needed on an as-needed basis.\n\nThey also discuss the challenges in getting adoption for Python and the things that the PyData stack solves, those that it doesn’t and what progress is being made there.\n\nPeople who have listened to Hugo's podcast for some time may have recognized that he's interested in the sociology of the data science space and he really considered speaking with Peter a fascinating opportunity to delve into how the Pythonic data science space evolved, particularly with respect to tooling, not only because Peter had a front row seat for much of it, but also because he was one of several key actors at various different points. On top of this, Hugo wanted to allow Peter’s inner sociologist room to breathe and evolve in this conversation. \n\nWhat happens then is slightly experimental – Peter is a deep, broad, and occasionally hallucinatory thinker and Hugo wanted to explore new spaces with him so we hope you enjoy the experiments they play as they begin to discuss open-source software in the broader context of finite and infinite games and how OSS is a paradigm of humanity’s ability to create generative, nourishing and anti-rivalrous systems where, by anti-rivalrous, we mean things that become more valuable for everyone the more people use them! 
But we need to be mindful of finite-game dynamics (for example, those driven by corporate incentives) co-opting and parasitizing the generative systems that we build.\n\nThese are all considerations they delve far deeper into in Part 2 of this interview, which will be the next episode of VG, where we also dive into the relationship between OSS, tools, and venture capital, among many other things.\n\nLinks\n\n\nPeter on twitter\nAnaconda Nucleus\nCalling out SciPy on diversity (even though it hurts) by Juan Nunez-Iglesias\nHere Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky\nFinite and Infinite Games by James Carse\nGoverning the Commons: The Evolution of Institutions for Collective Action by Elinor Ostrom\nElinor Ostrom's 8 Principles for Managing A Commons\n","content_html":"\u003cp\u003eHugo speaks with Peter Wang, CEO of Anaconda, about how Python became so big in data science, machine learning, and AI. They jump into many of the technical and sociological beginnings of Python being used for data science, a history of PyData, the conda distribution, and NUMFOCUS.\u003c/p\u003e\n\n\u003cp\u003eThey also talk about the emergence of online collaborative environments, particularly with respect to open source, and attempt to figure out the moving parts of PyData and why it has had the impact it has, including the fact that many core developers were not computer scientists or software engineers, but rather scientists and researchers building tools that they needed on an as-needed basis.\u003c/p\u003e\n\n\u003cp\u003eThey also discuss the challenges in getting adoption for Python and the things that the PyData stack solves, those that it doesn’t and what progress is being made there.\u003c/p\u003e\n\n\u003cp\u003ePeople who have listened to Hugo\u0026#39;s podcast for some time may have recognized that he\u0026#39;s interested in the sociology of the data science space and he really considered speaking with Peter a fascinating opportunity to 
delve into how the Pythonic data science space evolved, particularly with respect to tooling, not only because Peter had a front row seat for much of it, but also because he was one of several key actors at various different points. On top of this, Hugo wanted to allow Peter’s inner sociologist room to breathe and evolve in this conversation. \u003c/p\u003e\n\n\u003cp\u003eWhat happens then is slightly experimental – Peter is a deep, broad, and occasionally hallucinatory thinker and Hugo wanted to explore new spaces with him so we hope you enjoy the experiments they play as they begin to discuss open-source software in the broader context of finite and infinite games and how OSS is a paradigm of humanity’s ability to create generative, nourishing and anti-rivalrous systems where, by anti-rivalrous, we mean things that become more valuable for everyone the more people use them! But we need to be mindful of finite-game dynamics (for example, those driven by corporate incentives) co-opting and parasitizing the generative systems that we build.\u003c/p\u003e\n\n\u003cp\u003eThese are all considerations they delve far deeper into in Part 2 of this interview, which will be the next episode of VG, where we also dive into the relationship between OSS, tools, and venture capital, among many other things.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/pwang\" rel=\"nofollow\"\u003ePeter on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://anaconda.cloud/\" rel=\"nofollow\"\u003eAnaconda Nucleus\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://ilovesymposia.com/2015/04/03/calling-out-scipy-on-diversity/\" rel=\"nofollow\"\u003eCalling out SciPy on diversity (even though it hurts)\u003c/a\u003e by Juan Nunez-Iglesias\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/Here_Comes_Everybody_(book)\" 
rel=\"nofollow\"\u003eHere Comes Everybody: The Power of Organizing Without Organizations\u003c/a\u003e by Clay Shirky\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://en.wikipedia.org/wiki/Finite_and_Infinite_Games\" rel=\"nofollow\"\u003eFinite and Infinite Games\u003c/a\u003e by James Carse\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.cambridge.org/core/books/governing-the-commons/7AB7AE11BADA84409C34815CC288CD79\" rel=\"nofollow\"\u003eGoverning the Commons: The Evolution of Institutions for Collective Action\u003c/a\u003e by Elinor Olstrom\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.onthecommons.org/magazine/elinor-ostroms-8-principles-managing-commmons\" rel=\"nofollow\"\u003eElinor Ostrom\u0026#39;s 8 Principles for Managing A Commmons\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Peter Wang, CEO of Anaconda, about how Python became so big in data science, machine learning, and AI. They jump into many of the technical and sociological beginnings of Python being used for data science, a history of PyData, the conda distribution, and NUMFOCUS.\r\n","date_published":"2022-05-02T06:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/da4fab18-c5fa-460d-9ddf-0c8f1e60f3f8.mp3","mime_type":"audio/mpeg","size_in_bytes":60022178,"duration_in_seconds":3751}]},{"id":"811a664b-7b02-45b1-8cd7-84155bf4e39d","title":"Episode 6: Bullshit Jobs in Data Science (and what to do about them)","url":"https://vanishinggradients.fireside.fm/6","content_text":"Hugo speaks with Jacqueline Nolis, Chief Product Officer at Saturn Cloud (formerly Head of Data Science), about all types of failure modes in data science, ML, and AI, and they delve into bullshit jobs in data science (yes, that’s a technical term, as you’ll find out) –they discuss the elements that are bullshit, the elements that aren’t, and how to increase the ratio of the latter to the former.\n\nThey also 
talk about her journey in moving from mainly working in prescriptive analytics building reports in PDFs and PowerPoints to deploying machine learning products in production. They delve into her move from doing data science to designing products for data scientists and how to think about choosing career paths. Jacqueline has been an individual contributor, a team lead, and a principal data scientist so has a lot of valuable experience here. They talk about her experience of transitioning gender while working in data science and they work hard to find a bright vision for the future of this industry!\n\nLinks\n\n\nJacqueline on twitter\nBuilding a Career in Data Science by Jacqueline and Emily Robinson\nSaturn Cloud\nWhy are we so surprised?, a post by Allen Downey on communicating and thinking through uncertainty\nData Mishaps Night!\nThe Trump administration’s “cubic model” of coronavirus deaths, explained by Matthew Yglesias\nWorking Class Deep Learner by Mark Saroufim\n","content_html":"\u003cp\u003eHugo speaks with Jacqueline Nolis, Chief Product Officer at Saturn Cloud (formerly Head of Data Science), about all types of failure modes in data science, ML, and AI, and they delve into bullshit jobs in data science (yes, that’s a technical term, as you’ll find out) – they discuss the elements that are bullshit, the elements that aren’t, and how to increase the ratio of the latter to the former.\u003c/p\u003e\n\n\u003cp\u003eThey also talk about her journey in moving from mainly working in prescriptive analytics building reports in PDFs and PowerPoints to deploying machine learning products in production. They delve into her move from doing data science to designing products for data scientists and how to think about choosing career paths. Jacqueline has been an individual contributor, a team lead, and a principal data scientist so has a lot of valuable experience here. 
They talk about her experience of transitioning gender while working in data science and they work hard to find a bright vision for the future of this industry!\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/skyetetra\" rel=\"nofollow\"\u003eJacqueline on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://jnolis.com/book/\" rel=\"nofollow\"\u003eBuilding a Career in Data Science\u003c/a\u003e by Jacqueline and Emily Robinson\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://saturncloud.io/\" rel=\"nofollow\"\u003eSaturn Cloud\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"http://allendowney.blogspot.com/2016/11/why-are-we-so-surprised.html\" rel=\"nofollow\"\u003eWhy are we so surprised?\u003c/a\u003e, a post by Allen Downey on communicating and thinking through uncertainty\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://datamishapsnight.com/\" rel=\"nofollow\"\u003eData Mishaps Night!\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.vox.com/2020/5/8/21250641/kevin-hassett-cubic-model-smoothing\" rel=\"nofollow\"\u003eThe Trump administration’s “cubic model” of coronavirus deaths, explained\u003c/a\u003e by Matthew Yglesias\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://marksaroufim.substack.com/p/working-class-deep-learner?s=r\" rel=\"nofollow\"\u003eWorking Class Deep Learner\u003c/a\u003e by Mark Saroufim\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Jacqueline Nolis, Chief Product Officer at Saturn Cloud (formerly Head of Data Science), about all types of failure modes in data science, ML, and AI, and they delve into bullshit jobs in data science (yes, that’s a technical term, as you’ll find out) –they discuss the elements that are bullshit, the elements that aren’t, and how to increase the ratio of the latter to the 
former.\r\n","date_published":"2022-04-05T07:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/811a664b-7b02-45b1-8cd7-84155bf4e39d.mp3","mime_type":"audio/mpeg","size_in_bytes":83542646,"duration_in_seconds":5221}]},{"id":"9078010f-454b-4bcf-bafc-f54f44e04868","title":"Episode 5: Executive Data Science","url":"https://vanishinggradients.fireside.fm/5","content_text":"Hugo speaks with Jim Savage, the Director of Data Science at Schmidt Futures, about the need for data science in executive training and decision-making, what data scientists can learn from economists, the perils of \"data for good\", and why you should always be integrating your loss function over your posterior.\n\nJim and Hugo talk about what data science is and isn’t capable of, what can actually deliver value, and what people really enjoy doing: the intersection in this Venn diagram is where we need to focus energy and it may not be quite what you think it is!\n\nThey then dive into Jim's thoughts on what he dubs Executive Data Science. You may be aware of the slicing of the data science and machine learning spaces into descriptive analytics, predictive analytics, and prescriptive analytics but, being the thought surgeon that he is, Jim proposes a different slicing into \n\n(1) tool building OR data science as a product, \n\n(2) tools to automate and augment parts of us, and \n\n(3) what Jim calls Executive Data Science.\n\nJim and Hugo also talk about decision theory, the woeful state of causal inference techniques in contemporary data science, and what techniques it would behoove us all to import from econometrics and economics, more generally. If that’s not enough, they talk about the importance of thinking through the data generating process and things that can go wrong if you don’t. 
In terms of allowing your data work to inform your decision making, they also discuss Jim’s maxim “ALWAYS BE INTEGRATING YOUR LOSS FUNCTION OVER YOUR POSTERIOR”.\n\nLast but definitively not least, as Jim has worked in the data for good space for much of his career, they talk about what this actually means, with particular reference to fast.ai founder \u0026amp; QUT professor of practice Rachel Thomas’ blog post called “Doing Data Science for Social Good, Responsibly”. Rachel’s post takes as its starting point the following words of Sarah Hooker, a researcher at Google Brain:\n\n\n\"Data for good\" is an imprecise term that says little about who we serve, the tools used, or the goals. Being more precise can help us be more accountable \u0026amp; have a greater positive impact.\n\n\nAnd Jim and I discuss his work in the light of these foundational considerations.\n\nLinks\n\n\nJim on twitter\nWhat Is Causal Inference? An Introduction for Data Scientists by Hugo Bowne-Anderson and Mike Loukides\n Jim's must-watch Data Council talk on Productizing Structural Models\n Mastering Metrics (https://www.masteringmetrics.com/) by Angrist and Pischke\n Mostly Harmless Econometrics: An Empiricist's Companion by Angrist and Pischke\n The Book of Why by Judea Pearl\nDecision-Making in a Time of Crisis by Hugo Bowne-Anderson\nDoing Data Science for Social Good, Responsibly by Rachel Thomas\n","content_html":"\u003cp\u003eHugo speaks with Jim Savage, the Director of Data Science at Schmidt Futures, about the need for data science in executive training and decision-making, what data scientists can learn from economists, the perils of \u0026quot;data for good\u0026quot;, and why you should always be integrating your loss function over your posterior.\u003c/p\u003e\n\n\u003cp\u003eJim and Hugo talk about what data science is and isn’t capable of, what can actually deliver value, and what people really enjoy doing: the intersection in this Venn diagram is where we need to focus energy and it 
may not be quite what you think it is!\u003c/p\u003e\n\n\u003cp\u003eThey then dive into Jim\u0026#39;s thoughts on what he dubs Executive Data Science. You may be aware of the slicing of the data science and machine learning spaces into descriptive analytics, predictive analytics, and prescriptive analytics but, being the thought surgeon that he is, Jim proposes a different slicing into \u003c/p\u003e\n\n\u003cp\u003e(1) tool building OR data science as a product, \u003c/p\u003e\n\n\u003cp\u003e(2) tools to automate and augment parts of us, and \u003c/p\u003e\n\n\u003cp\u003e(3) what Jim calls Executive Data Science.\u003c/p\u003e\n\n\u003cp\u003eJim and Hugo also talk about decision theory, the woeful state of causal inference techniques in contemporary data science, and what techniques it would behoove us all to import from econometrics and economics, more generally. If that’s not enough, they talk about the importance of thinking through the data generating process and things that can go wrong if you don’t. In terms of allowing your data work to inform your decision making, they also discuss Jim’s maxim “ALWAYS BE INTEGRATING YOUR LOSS FUNCTION OVER YOUR POSTERIOR”.\u003c/p\u003e\n\n\u003cp\u003eLast but definitively not least, as Jim has worked in the data for good space for much of his career, they talk about what this actually means, with particular reference to fast.ai founder \u0026amp; QUT professor of practice Rachel Thomas’ blog post called \u003ca href=\"https://www.fast.ai/2021/11/23/data-for-good/\" rel=\"nofollow\"\u003e“Doing Data Science for Social Good, Responsibly”\u003c/a\u003e. Rachel’s post takes as its starting point the following words of Sarah Hooker, a researcher at Google Brain:\u003c/p\u003e\n\n\u003cblockquote\u003e\n\u003cp\u003e\u0026quot;Data for good\u0026quot; is an imprecise term that says little about who we serve, the tools used, or the goals. 
Being more precise can help us be more accountable \u0026amp; have a greater positive impact.\u003c/p\u003e\n\u003c/blockquote\u003e\n\n\u003cp\u003eAnd Jim and I discuss his work in the light of these foundational considerations.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/abiylfoyp/\" rel=\"nofollow\"\u003eJim on twitter\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.oreilly.com/radar/what-is-causal-inference/\" rel=\"nofollow\"\u003eWhat Is Causal Inference? An Introduction for Data Scientists\u003c/a\u003e by Hugo Bowne-Anderson and Mike Loukides\u003c/li\u003e\n\u003cli\u003e Jim\u0026#39;s must-watch Data Council talk on \u003ca href=\"https://www.datacouncil.ai/talks/productizing-structural-models\" rel=\"nofollow\"\u003eProductizing Structural Models\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e \u003ca href=\"https://www.masteringmetrics.com/\" rel=\"nofollow\"\u003eMastering Metrics\u003c/a\u003e by Angrist and Pischke\u003c/li\u003e\n\u003cli\u003e \u003ca href=\"https://press.princeton.edu/books/paperback/9780691120355/mostly-harmless-econometrics\" rel=\"nofollow\"\u003eMostly Harmless Econometrics: An Empiricist\u0026#39;s Companion\u003c/a\u003e by Angrist and Pischke\u003c/li\u003e\n\u003cli\u003e \u003ca href=\"https://en.wikipedia.org/wiki/The_Book_of_Why\" rel=\"nofollow\"\u003eThe Book of Why\u003c/a\u003e by Judea Pearl\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.oreilly.com/radar/decision-making-in-a-time-of-crisis/\" rel=\"nofollow\"\u003eDecision-Making in a Time of Crisis\u003c/a\u003e by Hugo Bowne-Anderson\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.fast.ai/2021/11/23/data-for-good/\" rel=\"nofollow\"\u003eDoing Data Science for Social Good, Responsibly\u003c/a\u003e by Rachel Thomas\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with 
Jim Savage, the Director of Data Science at Schmidt Futures, about the need for data science in executive training and decision-making, what data scientists can learn from economists, the perils of \"data for good\", and why you should always be integrating your loss function over your posterior.","date_published":"2022-03-23T16:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/9078010f-454b-4bcf-bafc-f54f44e04868.mp3","mime_type":"audio/mpeg","size_in_bytes":103917601,"duration_in_seconds":6494}]},{"id":"32f4444c-6c16-4411-ab8a-2adbf23b65c8","title":"Episode 4: Machine Learning at T-Mobile","url":"https://vanishinggradients.fireside.fm/4","content_text":"Hugo speaks with Heather Nolis, Principal Machine Learning engineer at T-mobile, about what data science, machine learning, and AI look like at T-mobile, along with Heather’s path from a software development intern there to principal ML engineer running a team of 15.\n\nThey talk about: how to build a DS culture from scratch and what executive-level support looks like, as well as how to demonstrate machine learning value early on from a shark tank style pitch night to the initial investment through to the POC and building out the function; all the great work they do with R and the Tidyverse in production; what it’s like to be a lesbian in tech, and about what it was like to discover she was autistic and how that impacted her work; how to measure and demonstrate success and ROI for the org; some massive data science fails!; how to deal with execs wanting you to use the latest GPT-X – in a fragmented tooling landscape; how to use the simplest technology to deliver the most value.\n\nFinally, the team just hired their first FT ethicist and they speak about how ethics can be embedded in a team and across an institution.\n\nLinks\n\n\nPut R in prod: Tools and guides to put R models into production\nEnterprise Web Services with Neural Networks Using R and 
TensorFlow\nHeather on twitter \nT-Mobile is hiring!\nHugo's upcoming fireside chat and AMA with Hilary Parker about how to actually produce sustainable business value using machine learning and product management for ML! \n","content_html":"\u003cp\u003eHugo speaks with Heather Nolis, Principal Machine Learning engineer at T-mobile, about what data science, machine learning, and AI look like at T-mobile, along with Heather’s path from a software development intern there to principal ML engineer running a team of 15.\u003c/p\u003e\n\n\u003cp\u003eThey talk about: how to build a DS culture from scratch and what executive-level support looks like, as well as how to demonstrate machine learning value early on from a shark tank style pitch night to the initial investment through to the POC and building out the function; all the great work they do with R and the Tidyverse in production; what it’s like to be a lesbian in tech, and about what it was like to discover she was autistic and how that impacted her work; how to measure and demonstrate success and ROI for the org; some massive data science fails!; how to deal with execs wanting you to use the latest GPT-X – in a fragmented tooling landscape; how to use the simplest technology to deliver the most value.\u003c/p\u003e\n\n\u003cp\u003eFinally, the team just hired their first FT ethicist and they speak about how ethics can be embedded in a team and across an institution.\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://putrinprod.com/\" rel=\"nofollow\"\u003ePut R in prod\u003c/a\u003e: Tools and guides to put R models into production\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://medium.com/tmobile-tech/enterprise-web-services-with-neural-networks-using-r-and-tensorflow-a09c1b100c11\" rel=\"nofollow\"\u003eEnterprise Web Services with Neural Networks Using R and 
TensorFlow\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/heatherklus\" rel=\"nofollow\"\u003eHeather on twitter\u003c/a\u003e \u003c/li\u003e\n\u003cli\u003e\u003cp\u003e\u003ca href=\"https://www.t-mobile.com/careers\" rel=\"nofollow\"\u003eT-Mobile is hiring!\u003c/a\u003e\u003c/p\u003e\u003c/li\u003e\n\u003cli\u003e\u003cp\u003e\u003ca href=\"https://www.eventbrite.com/e/select-ml-project-where-value-is-not-null-tickets-284000161127?aff=hba\" rel=\"nofollow\"\u003eHugo\u0026#39;s upcoming fireside chat and AMA with Hilary Parker about \u003cstrong\u003ehow to actually produce sustainable business value\u003c/strong\u003e using machine learning and product management for ML!\u003c/a\u003e \u003c/p\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Heather Nolis, Principal Machine Learning engineer at T-mobile, about what data science, machine learning, and AI look like at T-mobile, along with Heather’s path from a software development intern there to principal ML engineer running a team of 15.\r\n","date_published":"2022-03-10T10:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/32f4444c-6c16-4411-ab8a-2adbf23b65c8.mp3","mime_type":"audio/mpeg","size_in_bytes":100002470,"duration_in_seconds":6250}]},{"id":"8f08dc5e-bb75-4fec-9db9-3808cd980ba9","title":"Episode 3: Language Tech For All","url":"https://vanishinggradients.fireside.fm/3","content_text":"Rachael Tatman is a senior developer advocate for Rasa, where she’s helping developers build and deploy ML chatbots using their open source framework.\n\nRachael has a PhD in Linguistics from the University of Washington where her research was on computational sociolinguistics, or how our social identity affects the way we use language in computational contexts. 
Previously she was a data scientist at Kaggle and she’s still a Kaggle Grandmaster.\n\nIn this conversation, Rachael and I talk about the history of NLP and conversational AI/chatbots and we dive into the fascinating tension between rule-based techniques and ML and deep learning – we also talk about how to incorporate machine and human intelligence together by thinking through questions such as “should a response to a human ever be automated?” Spoiler alert: the answer is a resounding NO WAY! \n\nIn this journey, something that becomes apparent is that many of the trends, concepts, questions, and answers, although framed for NLP and chatbots, are applicable to much of data science, more generally.\n\nWe also discuss the data scientist’s responsibility to end-users and stakeholders using, among other things, the lens of considering those whose data you’re working with to be data donors.\n\nWe then consider what globalized language technology looks like and can look like, what we can learn from the history of science here, particularly given that so much training data and models are in English when it accounts for so little of language spoken globally. \n\nLinks\n\n\nRachael's website\nRasa\nSpeech and Language Processing\nby Dan Jurafsky and James H. 
Martin \n\n\nMasakhane, putting African languages on the #NLP map since 2019\nThe Distributed AI Research Institute, a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence\nThe Algorithmic Justice League, unmasking AI harms and biases\nBlack in AI, increasing the presence and inclusion of Black people in the field of AI by creating space for sharing ideas, fostering collaborations, mentorship and advocacy\nHugo's blog post on his new job and why it's exciting for him to double down on helping scientists do better science\n\n","content_html":"\u003cp\u003eRachael Tatman is a senior developer advocate for Rasa, where she’s helping developers build and deploy ML chatbots using their open source framework.\u003c/p\u003e\n\n\u003cp\u003eRachael has a PhD in Linguistics from the University of Washington where her research was on computational sociolinguistics, or how our social identity affects the way we use language in computational contexts. Previously she was a data scientist at Kaggle and she’s still a Kaggle Grandmaster.\u003c/p\u003e\n\n\u003cp\u003eIn this conversation, Rachael and I talk about the history of NLP and conversational AI/chatbots and we dive into the fascinating tension between rule-based techniques and ML and deep learning – we also talk about how to incorporate machine and human intelligence together by thinking through questions such as “should a response to a human ever be automated?” Spoiler alert: the answer is a resounding NO WAY! 
\u003c/p\u003e\n\n\u003cp\u003eIn this journey, something that becomes apparent is that many of the trends, concepts, questions, and answers, although framed for NLP and chatbots, are applicable to much of data science, more generally.\u003c/p\u003e\n\n\u003cp\u003eWe also discuss the data scientist’s responsibility to end-users and stakeholders using, among other things, the lens of considering those whose data you’re working with to be data donors.\u003c/p\u003e\n\n\u003cp\u003eWe then consider what globalized language technology looks like and can look like, what we can learn from the history of science here, particularly given that so much training data and models are in English when it accounts for so little of language spoken globally. \u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.rctatman.com/\" rel=\"nofollow\"\u003eRachael\u0026#39;s website\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://rasa.com/\" rel=\"nofollow\"\u003eRasa\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://web.stanford.edu/%7Ejurafsky/slp3/\" rel=\"nofollow\"\u003eSpeech and Language Processing\u003c/a\u003e\nby Dan Jurafsky and James H. 
Martin \n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://twitter.com/MasakhaneNLP\" rel=\"nofollow\"\u003eMasakhane\u003c/a\u003e, putting African languages on the #NLP map since 2019\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.dair-institute.org/\" rel=\"nofollow\"\u003eThe Distributed AI Research Institute\u003c/a\u003e, a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.ajl.org/\" rel=\"nofollow\"\u003eThe Algorithmic Justice League\u003c/a\u003e, unmasking AI harms and biases\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://blackinai.github.io/#/\" rel=\"nofollow\"\u003eBlack in AI\u003c/a\u003e, increasing the presence and inclusion of Black people in the field of AI by creating space for sharing ideas, fostering collaborations, mentorship and advocacy\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://outerbounds.com/blog/hba-excited-to-join-metaflow-and-outerbounds/\" rel=\"nofollow\"\u003eHugo\u0026#39;s blog post on his new job and why it\u0026#39;s exciting for him to double down on helping scientists do better science\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo speaks with Rachael Tatman about the democratization of natural language processing, conversational AI, and chatbots, including, among other things, the data scientist’s responsibility to end-users and stakeholders.","date_published":"2022-03-01T13:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/8f08dc5e-bb75-4fec-9db9-3808cd980ba9.mp3","mime_type":"audio/mpeg","size_in_bytes":88851890,"duration_in_seconds":5553}]},{"id":"65695b45-10a7-4785-adca-f1aeaa5818bc","title":"Episode 2: Making Data Science Uncool Again","url":"https://vanishinggradients.fireside.fm/2","content_text":"Jeremy Howard is a data scientist, researcher, developer, educator, and 
entrepreneur. Jeremy is a founding researcher at fast.ai, a research institute dedicated to making deep learning more accessible. He is also a Distinguished Research Scientist at the University of San Francisco, the chair of WAMRI, and is Chief Scientist at platform.ai.\n\nIn this conversation, we’ll be talking about the history of data science, machine learning, and AI, where we’ve come from and where we’re going, how new techniques can be applied to real-world problems, whether it be deep learning to medicine or porting techniques from computer vision to NLP. We’ll also talk about what’s present and what’s missing in the ML skills revolution, what software engineering skills data scientists need to learn, how to cope in a space of such fragmented tooling, and paths for emerging out of the shadow of FAANG. If that’s not enough, we’ll jump into how spreading DS skills around the globe involves serious investments in education, building software, communities, and research, along with diving into the social challenges that the information age and the AI revolution (so to speak) bring with it.\n\nBut to get to all of this, you’ll need to listen to a few minutes of us chatting about chocolate biscuits in Australia!\n\nLinks\n\n\nfast.ai · making neural nets uncool again\nnbdev: create delightful python projects using Jupyter Notebooks\nThe fastai book, published as Jupyter Notebooks\nDeep Learning for Coders with fastai and PyTorch\nThe wonderful and terrifying implications of computers that can learn -- Jeremy's awesome TED talk!\nManna by Marshall Brain\nGhost Work by Mary L. Gray and Siddharth Suri\nUberland by Alex Rosenblat\n","content_html":"\u003cp\u003eJeremy Howard is a data scientist, researcher, developer, educator, and entrepreneur. Jeremy is a founding researcher at fast.ai, a research institute dedicated to making deep learning more accessible. 
He is also a Distinguished Research Scientist at the University of San Francisco, the chair of WAMRI, and is Chief Scientist at platform.ai.\u003c/p\u003e\n\n\u003cp\u003eIn this conversation, we’ll be talking about the history of data science, machine learning, and AI, where we’ve come from and where we’re going, how new techniques can be applied to real-world problems, whether it be deep learning to medicine or porting techniques from computer vision to NLP. We’ll also talk about what’s present and what’s missing in the ML skills revolution, what software engineering skills data scientists need to learn, how to cope in a space of such fragmented tooling, and paths for emerging out of the shadow of FAANG. If that’s not enough, we’ll jump into how spreading DS skills around the globe involves serious investments in education, building software, communities, and research, along with diving into the social challenges that the information age and the AI revolution (so to speak) bring with it.\u003c/p\u003e\n\n\u003cp\u003eBut to get to all of this, you’ll need to listen to a few minutes of us chatting about chocolate biscuits in Australia!\u003c/p\u003e\n\n\u003cp\u003e\u003cstrong\u003eLinks\u003c/strong\u003e\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.fast.ai/\" target=\"_blank\"\u003efast.ai · making neural nets uncool again\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/fastai/nbdev\" rel=\"nofollow\"\u003enbdev: create delightful python projects using Jupyter Notebooks\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/fastai/fastbook\" rel=\"nofollow\"\u003eThe fastai book, published as Jupyter Notebooks\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.oreilly.com/library/view/deep-learning-for/9781492045519/\" rel=\"nofollow\"\u003eDeep Learning for Coders with fastai and PyTorch\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca 
href=\"https://www.youtube.com/watch?v=t4kyRyKyOpo\" rel=\"nofollow\"\u003eThe wonderful and terrifying implications of computers that can learn\u003c/a\u003e -- Jeremy\u0026#39;s awesome TED talk!\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://marshallbrain.com/manna\" rel=\"nofollow\"\u003eManna\u003c/a\u003e by Marshall Brain\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://ghostwork.info/\" rel=\"nofollow\"\u003eGhost Work\u003c/a\u003e by Mary L. Gray and Siddharth Suri\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.ucpress.edu/book/9780520324800/uberland\" rel=\"nofollow\"\u003eUberland\u003c/a\u003e by Alex Rosenblat\u003c/li\u003e\n\u003c/ul\u003e","summary":"Hugo talks with Jeremy Howard about the past, present, and future of data science, machine learning, and AI, with a focus on the democratization of deep learning.","date_published":"2022-02-21T10:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/65695b45-10a7-4785-adca-f1aeaa5818bc.mp3","mime_type":"audio/mpeg","size_in_bytes":101524103,"duration_in_seconds":6345}]},{"id":"a77d732e-f7be-4b71-be2f-fd09a392bd86","title":"Episode 1: Introducing Vanishing Gradients","url":"https://vanishinggradients.fireside.fm/1","content_text":"In this brief introduction, Hugo introduces the rationale behind launching a new data science podcast and gets excited about his upcoming guests: Jeremy Howard, Rachael Tatman, and Heather Nolis!\n\nOriginal music, bleeps, and blops by local Sydney legend PlaneFace!","content_html":"\u003cp\u003eIn this brief introduction, Hugo introduces the rationale behind launching a new data science podcast and gets excited about his upcoming guests: Jeremy Howard, Rachael Tatman, and Heather Nolis!\u003c/p\u003e\n\n\u003cp\u003eOriginal music, bleeps, and blops by local Sydney legend \u003ca href=\"https://planeface.bandcamp.com/album/fishing-from-an-asteroid\" 
rel=\"nofollow\"\u003ePlaneFace\u003c/a\u003e!\u003c/p\u003e","summary":"In this episode, Hugo introduces the new data science podcast Vanishing Gradients. ","date_published":"2022-02-16T20:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/a77d732e-f7be-4b71-be2f-fd09a392bd86.mp3","mime_type":"audio/mpeg","size_in_bytes":5270212,"duration_in_seconds":329}]}]}