{"version":"https://jsonfeed.org/version/1","title":"Vanishing Gradients","home_page_url":"https://vanishinggradients.fireside.fm","feed_url":"https://vanishinggradients.fireside.fm/json","description":"A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.\r\n\r\nIt's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.","_fireside":{"subtitle":"a data podcast with hugo bowne-anderson","pubdate":"2024-11-26T03:00:00.000+11:00","explicit":false,"copyright":"2024 by Hugo Bowne-Anderson","owner":"Hugo Bowne-Anderson","image":"https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/1/140c3904-8258-4c39-a698-a112b7077bd7/cover.jpg?v=1"},"items":[{"id":"bf5453c0-4aa2-4abb-b323-20334f787512","title":"Episode 39: From Models to Products: Bridging Research and Practice in Generative AI at Google Labs","url":"https://vanishinggradients.fireside.fm/39","content_text":"Hugo speaks with Ravin Kumar, Senior Research Data Scientist at Google Labs. Ravin’s career has taken him from building rockets at SpaceX to driving data science and technology at Sweetgreen, and now to advancing generative AI research and applications at Google Labs and DeepMind. His multidisciplinary experience gives him a rare perspective on building AI systems that combine technical rigor with practical utility.\n\nIn this episode, we dive into:\n • Ravin’s fascinating career path, including the skills and mindsets needed to work effectively with AI and machine learning models at different stages of the pipeline.\n • How to build generative AI systems that are scalable, reliable, and aligned with user needs.\n • Real-world applications of generative AI, such as using open weight models such as Gemma to help a bakery streamline operations—an example of delivering tangible business value through AI.\n • The critical role of UX in AI adoption, and how Ravin approaches designing tools like Notebook LM with the user journey in mind.\n\nWe also include a live demo where Ravin uses Notebook LM to analyze my website, extract insights, and even generate a podcast-style conversation about me. While some of the demo is visual, much can be appreciated through audio, and we’ve added a link to the video in the show notes for those who want to see it in action. We’ve also included the generated segment at the end of the episode for you to enjoy.\n\nLINKS\n\n\nThe livestream on YouTube\nGoogle Labs\nRavin's GenAI Handbook\nBreadboard: A library for prototyping generative AI applications\n\n\nAs mentioned in the episode, Hugo is teaching a four-week course, Building LLM Applications for Data Scientists and SWEs, co-led with Stefan Krawczyk (Dagworks, ex-StitchFix). The course focuses on building scalable, production-grade generative AI systems, with hands-on sessions, $1,000+ in cloud credits, live Q&As, and guest lectures from industry experts.\n\nListeners of Vanishing Gradients can get 25% off the course using this special link or by applying the code VG25 at checkout.","content_html":"

Hugo speaks with Ravin Kumar, Senior Research Data Scientist at Google Labs. Ravin’s career has taken him from building rockets at SpaceX to driving data science and technology at Sweetgreen, and now to advancing generative AI research and applications at Google Labs and DeepMind. His multidisciplinary experience gives him a rare perspective on building AI systems that combine technical rigor with practical utility.


In this episode, we dive into:
• Ravin’s fascinating career path, including the skills and mindsets needed to work effectively with AI and machine learning models at different stages of the pipeline.
• How to build generative AI systems that are scalable, reliable, and aligned with user needs.
• Real-world applications of generative AI, such as using open-weight models like Gemma to help a bakery streamline operations—an example of delivering tangible business value through AI.
• The critical role of UX in AI adoption, and how Ravin approaches designing tools like Notebook LM with the user journey in mind.


We also include a live demo where Ravin uses Notebook LM to analyze my website, extract insights, and even generate a podcast-style conversation about me. While some of the demo is visual, much can be appreciated through audio, and we’ve added a link to the video in the show notes for those who want to see it in action. We’ve also included the generated segment at the end of the episode for you to enjoy.


LINKS

• The livestream on YouTube
• Google Labs
• Ravin's GenAI Handbook
• Breadboard: A library for prototyping generative AI applications

As mentioned in the episode, Hugo is teaching a four-week course, Building LLM Applications for Data Scientists and SWEs, co-led with Stefan Krawczyk (Dagworks, ex-StitchFix). The course focuses on building scalable, production-grade generative AI systems, with hands-on sessions, $1,000+ in cloud credits, live Q&As, and guest lectures from industry experts.


Listeners of Vanishing Gradients can get 25% off the course using this special link or by applying the code VG25 at checkout.

","summary":"From building rockets at SpaceX to advancing generative AI at Google Labs, Ravin Kumar has carved a unique path through the world of technology. In this episode, we explore how to build scalable, reliable AI systems, the skills needed to work across the AI/ML pipeline, and the real-world impact of tools like open-weight models such as Gemma. Ravin also shares insights into designing AI tools like Notebook LM with the user journey at the forefront.","date_published":"2024-11-26T03:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/bf5453c0-4aa2-4abb-b323-20334f787512.mp3","mime_type":"audio/mpeg","size_in_bytes":99346310,"duration_in_seconds":6208}]},{"id":"c1a5c8d1-777a-41b7-a123-6b06861dbc35","title":"Episode 38: The Art of Freelance AI Consulting and Products: Data, Dollars, and Deliverables","url":"https://vanishinggradients.fireside.fm/38","content_text":"Hugo speaks with Jason Liu, an independent AI consultant with experience at Meta and Stitch Fix. At Stitch Fix, Jason developed impactful AI systems, like a $50 million product similarity search and the widely adopted Flight recommendation framework. Now, he helps startups and enterprises design and deploy production-level AI applications, with a focus on retrieval-augmented generation (RAG) and scalable solutions.\n\nThis episode is a bit of an experiment. Instead of our usual technical deep dives, we’re focusing on the world of AI consulting and freelancing. We explore Jason’s consulting playbook, covering how he structures contracts to maximize value, strategies for moving from hourly billing to securing larger deals, and the mindset shift needed to align incentives with clients. We’ll also discuss the challenges of moving from deterministic software to probabilistic AI systems and even do a live role-playing session where Jason coaches me on client engagement and pricing pitfalls.\n\nLINKS\n\n\nThe livestream on YouTube\nJason's Upcoming course: AI Consultant Accelerator: From Expert to High-Demand Business\nHugo's upcoming course: Building LLM Applications for Data Scientists and Software Engineers\nJason's website\nJason's indie consulting newsletter\nYour AI Product Needs Evals by Hamel Husain\nWhat We’ve Learned From A Year of Building with LLMs\nDear Future AI Consultant by Jason\nAlex Hormozi's books\nThe Burnout Society by Byung-Chul Han\nJason on Twitter\nVanishing Gradients on Twitter\nHugo on Twitter\nVanishing Gradients' lu.ma calendar\nVanishing Gradients on YouTube\n","content_html":"

Hugo speaks with Jason Liu, an independent AI consultant with experience at Meta and Stitch Fix. At Stitch Fix, Jason developed impactful AI systems, like a $50 million product similarity search and the widely adopted Flight recommendation framework. Now, he helps startups and enterprises design and deploy production-level AI applications, with a focus on retrieval-augmented generation (RAG) and scalable solutions.


This episode is a bit of an experiment. Instead of our usual technical deep dives, we’re focusing on the world of AI consulting and freelancing. We explore Jason’s consulting playbook, covering how he structures contracts to maximize value, strategies for moving from hourly billing to securing larger deals, and the mindset shift needed to align incentives with clients. We’ll also discuss the challenges of moving from deterministic software to probabilistic AI systems and even do a live role-playing session where Jason coaches me on client engagement and pricing pitfalls.


LINKS
• The livestream on YouTube
• Jason's upcoming course: AI Consultant Accelerator: From Expert to High-Demand Business
• Hugo's upcoming course: Building LLM Applications for Data Scientists and Software Engineers
• Jason's website
• Jason's indie consulting newsletter
• Your AI Product Needs Evals by Hamel Husain
• What We’ve Learned From A Year of Building with LLMs
• Dear Future AI Consultant by Jason
• Alex Hormozi's books
• The Burnout Society by Byung-Chul Han
• Jason on Twitter
• Vanishing Gradients on Twitter
• Hugo on Twitter
• Vanishing Gradients' lu.ma calendar
• Vanishing Gradients on YouTube
\n\n","summary":"Hugo speaks with Jason Liu, an independent AI consultant with experience at Meta and Stitch Fix. At Stitch Fix, Jason developed impactful AI systems, like a $50 million product similarity search and the widely adopted Flight recommendation framework. Now, he helps startups and enterprises design and deploy production-level AI applications, with a focus on retrieval-augmented generation (RAG) and scalable solutions.","date_published":"2024-11-05T10:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/c1a5c8d1-777a-41b7-a123-6b06861dbc35.mp3","mime_type":"audio/mpeg","size_in_bytes":80443270,"duration_in_seconds":5027}]},{"id":"eadec2c4-f8f9-45b0-ae7e-5867f7201801","title":"Episode 37: Prompt Engineering, Security in Generative AI, and the Future of AI Research Part 2","url":"https://vanishinggradients.fireside.fm/37","content_text":"Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.\n\nThis is Part 2 of a special two-part episode, prompted—no pun intended—by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more.\n\nIn this episode, we cover:\n\n\nThe Prompt Report: A comprehensive survey on prompting techniques, agents, and generative AI, including advanced evaluation methods for assessing these techniques.\nSecurity Risks and Prompt Hacking: A detailed exploration of the security concerns surrounding prompt engineering, including Sander’s thoughts on its potential applications in cybersecurity and military contexts.\nAI’s Impact Across Fields: A discussion on how generative AI is reshaping various domains, including the social sciences and security.\nMultimodal AI: Updates on how large language models (LLMs) are expanding to interact with images, code, and music.\nCase Study - Detecting Suicide Risk: A careful examination of how prompting techniques are being used in important areas like detecting suicide risk, showcasing the critical potential of AI in addressing sensitive, real-world challenges.\n\n\nThe episode concludes with a reflection on the evolving landscape of LLMs and multimodal AI, and what might be on the horizon.\n\nIf you haven’t yet, make sure to check out Part 1, where we discuss the history of NLP, prompt engineering techniques, and Sander’s development of the Learn Prompting initiative.\n\nLINKS\n\n\nThe livestream on YouTube\nThe Prompt Report: A Systematic Survey of Prompting Techniques\nLearn Prompting: Your Guide to Communicating with AI\nVanishing Gradients on Twitter\nHugo on Twitter\nVanishing Gradients' lu.ma calendar\nVanishing Gradients on YouTube\n","content_html":"

Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.


This is Part 2 of a special two-part episode, prompted—no pun intended—by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more.


In this episode, we cover:

• The Prompt Report: A comprehensive survey on prompting techniques, agents, and generative AI, including advanced evaluation methods for assessing these techniques.
• Security Risks and Prompt Hacking: A detailed exploration of the security concerns surrounding prompt engineering, including Sander’s thoughts on its potential applications in cybersecurity and military contexts.
• AI’s Impact Across Fields: A discussion on how generative AI is reshaping various domains, including the social sciences and security.
• Multimodal AI: Updates on how large language models (LLMs) are expanding to interact with images, code, and music.
• Case Study - Detecting Suicide Risk: A careful examination of how prompting techniques are being used in important areas like detecting suicide risk, showcasing the critical potential of AI in addressing sensitive, real-world challenges.

The episode concludes with a reflection on the evolving landscape of LLMs and multimodal AI, and what might be on the horizon.


If you haven’t yet, make sure to check out Part 1, where we discuss the history of NLP, prompt engineering techniques, and Sander’s development of the Learn Prompting initiative.


LINKS
• The livestream on YouTube
• The Prompt Report: A Systematic Survey of Prompting Techniques
• Learn Prompting: Your Guide to Communicating with AI
• Vanishing Gradients on Twitter
• Hugo on Twitter
• Vanishing Gradients' lu.ma calendar
• Vanishing Gradients on YouTube
\n\n","summary":"Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.","date_published":"2024-10-08T17:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/eadec2c4-f8f9-45b0-ae7e-5867f7201801.mp3","mime_type":"audio/mpeg","size_in_bytes":48585166,"duration_in_seconds":3036}]},{"id":"acd8aaec-1788-459d-a4e9-10feae67a19a","title":"Episode 36: Prompt Engineering, Security in Generative AI, and the Future of AI Research Part 1","url":"https://vanishinggradients.fireside.fm/36","content_text":"Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.\n\nThis is Part 1 of a special two-part episode, prompted—no pun intended—by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more.\n\nIn this first part, \n\n\nwe’ll explore the critical role of prompt engineering, \n& diving into adversarial techniques like prompt hacking and \nthe challenges of evaluating these techniques. \nwe’ll examine the impact of few-shot learning and \nthe groundbreaking taxonomy of prompting techniques from the Prompt Report.\n\n\nAlong the way, \n\n\nwe’ll uncover the rich history of natural language processing (NLP) and AI, showing how modern prompting techniques evolved from early rule-based systems and statistical methods. \nwe’ll also hear how Sander’s experimentation with GPT-3 for diplomatic tasks led him to develop Learn Prompting, and \nhow Dennis highlights the accessibility of AI through prompting, which allows non-technical users to interact with AI without needing to code.\n\n\nFinally, we’ll explore the future of multimodal AI, where LLMs interact with images, code, and even music creation. Make sure to tune in to Part 2, where we dive deeper into security risks, prompt hacking, and more.\n\nLINKS\n\n\nThe livestream on YouTube\nThe Prompt Report: A Systematic Survey of Prompting Techniques\nLearn Prompting: Your Guide to Communicating with AI\nVanishing Gradients on Twitter\nHugo on Twitter\nVanishing Gradients' lu.ma calendar\nVanishing Gradients on YouTube\n","content_html":"

Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.


This is Part 1 of a special two-part episode, prompted—no pun intended—by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more.


In this first part,

• we’ll explore the critical role of prompt engineering, diving into adversarial techniques like prompt hacking and the challenges of evaluating these techniques, and
• we’ll examine the impact of few-shot learning and the groundbreaking taxonomy of prompting techniques from the Prompt Report.

Along the way,

• we’ll uncover the rich history of natural language processing (NLP) and AI, showing how modern prompting techniques evolved from early rule-based systems and statistical methods,
• we’ll also hear how Sander’s experimentation with GPT-3 for diplomatic tasks led him to develop Learn Prompting, and
• how Dennis highlights the accessibility of AI through prompting, which allows non-technical users to interact with AI without needing to code.

Finally, we’ll explore the future of multimodal AI, where LLMs interact with images, code, and even music creation. Make sure to tune in to Part 2, where we dive deeper into security risks, prompt hacking, and more.


LINKS
• The livestream on YouTube
• The Prompt Report: A Systematic Survey of Prompting Techniques
• Learn Prompting: Your Guide to Communicating with AI
• Vanishing Gradients on Twitter
• Hugo on Twitter
• Vanishing Gradients' lu.ma calendar
• Vanishing Gradients on YouTube
\n\n","summary":"Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.","date_published":"2024-09-30T18:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/acd8aaec-1788-459d-a4e9-10feae67a19a.mp3","mime_type":"audio/mpeg","size_in_bytes":61232193,"duration_in_seconds":3826}]},{"id":"feeeecc8-a170-48c7-ae4c-8dd64484c64c","title":"Episode 35: Open Science at NASA -- Measuring Impact and the Future of AI","url":"https://vanishinggradients.fireside.fm/35","content_text":"Hugo speaks with Dr. Chelle Gentemann, Open Science Program Scientist for NASA’s Office of the Chief Science Data Officer, about NASA’s ambitious efforts to integrate AI across the research lifecycle. In this episode, we’ll dive deeper into how AI is transforming NASA’s approach to science, making data more accessible and advancing open science practices. We explore\n\n\nMeasuring the Impact of Open Science: How NASA is developing new metrics to evaluate the effectiveness of open science, moving beyond traditional publication-based assessments.\nThe Process of Scientific Discovery: Insights into the collaborative nature of research and how breakthroughs are achieved at NASA.\n** AI Applications in NASA’s Science:** From rats in space to exploring the origins of the universe, we cover how AI is being applied across NASA’s divisions to improve data accessibility and analysis.\nAddressing Challenges in Open Science: The complexities of implementing open science within government agencies and research environments.\nReforming Incentive Systems: How NASA is reconsidering traditional metrics like publications and citations, and starting to recognize contributions such as software development and data sharing.\nThe Future of Open Science: How open science is shaping the future of research, fostering interdisciplinary collaboration, and increasing accessibility.\n\n\nThis conversation offers valuable insights for researchers, data scientists, and those interested in the practical applications of AI and open science. Join us as we discuss how NASA is working to make science more collaborative, reproducible, and impactful.\n\nLINKS\n\n\nThe livestream on YouTube\nNASA's Open Science 101 course <-- do it to learn and also to get NASA Swag!\nScience Cast\nNASA and IBM Openly Release Geospatial AI Foundation Model for NASA Earth Observation Data\nJake VanderPlas' daily conundrum tweet from 2013\nReplit, \"an AI-powered software development & deployment platform for building, sharing, and shipping software fast.\"\n","content_html":"

Hugo speaks with Dr. Chelle Gentemann, Open Science Program Scientist for NASA’s Office of the Chief Science Data Officer, about NASA’s ambitious efforts to integrate AI across the research lifecycle. In this episode, we’ll dive deeper into how AI is transforming NASA’s approach to science, making data more accessible and advancing open science practices. We explore

• Measuring the Impact of Open Science: How NASA is developing new metrics to evaluate the effectiveness of open science, moving beyond traditional publication-based assessments.
• The Process of Scientific Discovery: Insights into the collaborative nature of research and how breakthroughs are achieved at NASA.
• AI Applications in NASA’s Science: From rats in space to exploring the origins of the universe, we cover how AI is being applied across NASA’s divisions to improve data accessibility and analysis.
• Addressing Challenges in Open Science: The complexities of implementing open science within government agencies and research environments.
• Reforming Incentive Systems: How NASA is reconsidering traditional metrics like publications and citations, and starting to recognize contributions such as software development and data sharing.
• The Future of Open Science: How open science is shaping the future of research, fostering interdisciplinary collaboration, and increasing accessibility.

This conversation offers valuable insights for researchers, data scientists, and those interested in the practical applications of AI and open science. Join us as we discuss how NASA is working to make science more collaborative, reproducible, and impactful.


LINKS
• The livestream on YouTube
• NASA's Open Science 101 course <-- do it to learn and also to get NASA Swag!
• Science Cast
• NASA and IBM Openly Release Geospatial AI Foundation Model for NASA Earth Observation Data
• Jake VanderPlas' daily conundrum tweet from 2013
• Replit, "an AI-powered software development & deployment platform for building, sharing, and shipping software fast."
\n\n","summary":"Hugo speaks with Dr. Chelle Gentemann, Open Science Program Scientist for NASA’s Office of the Chief Science Data Officer, about NASA’s ambitious efforts to integrate AI across the research lifecycle. In this episode, we’ll dive deeper into how AI is transforming NASA’s approach to science, making data more accessible and advancing open science practices.","date_published":"2024-09-19T17:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/feeeecc8-a170-48c7-ae4c-8dd64484c64c.mp3","mime_type":"audio/mpeg","size_in_bytes":55905303,"duration_in_seconds":3493}]},{"id":"8c18d59e-9b79-4682-8e3c-ba682daf1c1c","title":"Episode 34: The AI Revolution Will Not Be Monopolized","url":"https://vanishinggradients.fireside.fm/34","content_text":"Hugo speaks with Ines Montani and Matthew Honnibal, the creators of spaCy and founders of Explosion AI. Collectively, they've had a huge impact on the fields of industrial natural language processing (NLP), ML, and AI through their widely-used open-source library spaCy and their innovative annotation tool Prodigy. These tools have become essential for many data scientists and NLP practitioners in industry and academia alike.\n\nIn this wide-ranging discussion, we dive into:\n\n• The evolution of applied NLP and its role in industry\n• The balance between large language models and smaller, specialized models\n• Human-in-the-loop distillation for creating faster, more data-private AI systems\n• The challenges and opportunities in NLP, including modularity, transparency, and privacy\n• The future of AI and software development\n• The potential impact of AI regulation on innovation and competition\n\nWe also touch on their recent transition back to a smaller, more independent-minded company structure and the lessons learned from their journey in the AI startup world.\n\nInes and Matt offer invaluable insights for data scientists, machine learning practitioners, and anyone interested in the practical applications of AI. They share their thoughts on how to approach NLP projects, the importance of data quality, and the role of open-source in advancing the field.\n\nWhether you're a seasoned NLP practitioner or just getting started with AI, this episode offers a wealth of knowledge from two of the field's most respected figures. Join us for a discussion that explores the current landscape of AI development, with insights that bridge the gap between cutting-edge research and real-world applications.\n\nLINKS\n\n\nThe livestream on YouTube\nHow S&P Global is making markets more transparent with NLP, spaCy and Prodigy\nA practical guide to human-in-the-loop distillation\nLaws of Tech: Commoditize Your Complement\nspaCy: Industrial-Strength Natural Language Processing\nLLMs with spaCy\nExplosion, building developer tools for AI, Machine Learning and Natural Language Processing\nBack to our roots: Company update and future plans, by Matt and Ines\nMatt's detailed blog post: back to our roots\nInes on twitter\nMatt on twitter\nVanishing Gradients on Twitter\nHugo on Twitter\n\n\nCheck out and subcribe to our lu.ma calendar for upcoming livestreams!","content_html":"

Hugo speaks with Ines Montani and Matthew Honnibal, the creators of spaCy and founders of Explosion AI. Collectively, they've had a huge impact on the fields of industrial natural language processing (NLP), ML, and AI through their widely-used open-source library spaCy and their innovative annotation tool Prodigy. These tools have become essential for many data scientists and NLP practitioners in industry and academia alike.


In this wide-ranging discussion, we dive into:


• The evolution of applied NLP and its role in industry
• The balance between large language models and smaller, specialized models
• Human-in-the-loop distillation for creating faster, more data-private AI systems
• The challenges and opportunities in NLP, including modularity, transparency, and privacy
• The future of AI and software development
• The potential impact of AI regulation on innovation and competition


We also touch on their recent transition back to a smaller, more independent-minded company structure and the lessons learned from their journey in the AI startup world.


Ines and Matt offer invaluable insights for data scientists, machine learning practitioners, and anyone interested in the practical applications of AI. They share their thoughts on how to approach NLP projects, the importance of data quality, and the role of open-source in advancing the field.


Whether you're a seasoned NLP practitioner or just getting started with AI, this episode offers a wealth of knowledge from two of the field's most respected figures. Join us for a discussion that explores the current landscape of AI development, with insights that bridge the gap between cutting-edge research and real-world applications.


LINKS

• The livestream on YouTube
• How S&P Global is making markets more transparent with NLP, spaCy and Prodigy
• A practical guide to human-in-the-loop distillation
• Laws of Tech: Commoditize Your Complement
• spaCy: Industrial-Strength Natural Language Processing
• LLMs with spaCy
• Explosion, building developer tools for AI, Machine Learning and Natural Language Processing
• Back to our roots: Company update and future plans, by Matt and Ines
• Matt's detailed blog post: back to our roots
• Ines on Twitter
• Matt on Twitter
• Vanishing Gradients on Twitter
• Hugo on Twitter

Check out and subscribe to our lu.ma calendar for upcoming livestreams!

","summary":"Hugo speaks with Ines Montani and Matthew Honnibal, the creators of spaCy and founders of Explosion AI. Collectively, they've had a huge impact on the fields of industrial natural language processing (NLP), ML, and AI through their widely-used open-source library spaCy and their innovative annotation tool Prodigy.","date_published":"2024-08-22T17:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/8c18d59e-9b79-4682-8e3c-ba682daf1c1c.mp3","mime_type":"audio/mpeg","size_in_bytes":98751972,"duration_in_seconds":6171}]},{"id":"9cae0a8b-259a-4b01-a0f4-e5958297542b","title":"Episode 33: What We Learned Teaching LLMs to 1,000s of Data Scientists","url":"https://vanishinggradients.fireside.fm/33","content_text":"Hugo speaks with Dan Becker and Hamel Husain, two veterans in the world of data science, machine learning, and AI education. Collectively, they’ve worked at Google, DataRobot, Airbnb, Github (where Hamel built out the precursor to copilot and more) and they both currently work as independent LLM and Generative AI consultants.\n\nDan and Hamel recently taught a course on fine-tuning large language models that evolved into a full-fledged conference, attracting over 2,000 participants. This experience gave them unique insights into the current state and future of AI education and application.\n\nIn this episode, we dive into:\n\n\nThe evolution of their course from fine-tuning to a comprehensive AI conference\nThe unexpected challenges and insights gained from teaching LLMs to data scientists\nThe current state of AI tooling and accessibility compared to a decade ago\nThe role of playful experimentation in driving innovation in the field\nThoughts on the economic impact and ROI of generative AI in various industries\nThe importance of proper evaluation in machine learning projects\nFuture predictions for AI education and application in the next five years\nWe also touch on the challenges of using AI tools effectively, the potential for AI in physical world applications, and the need for a more nuanced understanding of AI capabilities in the workplace.\n\n\nDuring our conversation, Dan mentions an exciting project he's been working on, which we couldn't showcase live due to technical difficulties. However, I've included a link to a video demonstration in the show notes that you won't want to miss. In this demo, Dan showcases his innovative AI-powered 3D modeling tool that allows users to create 3D printable objects simply by describing them in natural language.\n\nLINKS\n\n\nThe livestream on YouTube\nEducational resources from Dan and Hamel's LLM course\nUpwork Study Finds Employee Workloads Rising Despite Increased C-Suite Investment in Artificial Intelligence\nEpisode 29: Lessons from a Year of Building with LLMs (Part 1)\nEpisode 30: Lessons from a Year of Building with LLMs (Part 2)\nDan's demo: Creating Physical Products with Generative AI\nBuild Great AI, Dan's boutique consulting firm helping clients be successful with large language models\nParlance Labs, Hamel's Practical consulting that improves your AI\nHamel on Twitter\nDan on Twitter\nVanishing Gradients on Twitter\nHugo on Twitter\n","content_html":"

Hugo speaks with Dan Becker and Hamel Husain, two veterans in the world of data science, machine learning, and AI education. Collectively, they’ve worked at Google, DataRobot, Airbnb, and GitHub (where Hamel built out the precursor to Copilot, and more), and they both currently work as independent LLM and Generative AI consultants.


Dan and Hamel recently taught a course on fine-tuning large language models that evolved into a full-fledged conference, attracting over 2,000 participants. This experience gave them unique insights into the current state and future of AI education and application.


In this episode, we dive into:

• The evolution of their course from fine-tuning to a comprehensive AI conference
• The unexpected challenges and insights gained from teaching LLMs to data scientists
• The current state of AI tooling and accessibility compared to a decade ago
• The role of playful experimentation in driving innovation in the field
• Thoughts on the economic impact and ROI of generative AI in various industries
• The importance of proper evaluation in machine learning projects
• Future predictions for AI education and application in the next five years

We also touch on the challenges of using AI tools effectively, the potential for AI in physical world applications, and the need for a more nuanced understanding of AI capabilities in the workplace.

During our conversation, Dan mentions an exciting project he's been working on, which we couldn't showcase live due to technical difficulties. However, I've included a link to a video demonstration in the show notes that you won't want to miss. In this demo, Dan showcases his innovative AI-powered 3D modeling tool that allows users to create 3D printable objects simply by describing them in natural language.


LINKS
• The livestream on YouTube
• Educational resources from Dan and Hamel's LLM course
• Upwork Study Finds Employee Workloads Rising Despite Increased C-Suite Investment in Artificial Intelligence
• Episode 29: Lessons from a Year of Building with LLMs (Part 1)
• Episode 30: Lessons from a Year of Building with LLMs (Part 2)
• Dan's demo: Creating Physical Products with Generative AI
• Build Great AI, Dan's boutique consulting firm helping clients be successful with large language models
• Parlance Labs, Hamel's practical consulting that improves your AI
• Hamel on Twitter
• Dan on Twitter
• Vanishing Gradients on Twitter
• Hugo on Twitter
\n\n","summary":"Hugo speaks with Dan Becker and Hamel Husain, two veterans in the world of data science, machine learning, and AI education. Collectively, they’ve worked at Google, DataRobot, Airbnb, Github (where Hamel built out the pre-cursor to copilot and more). And they both currently work as independent LLM and Generative AI consultants.\r\n\r\nDan and Hamel recently taught a course on fine-tuning large language models that evolved into a full-fledged conference, attracting over 2,000 participants. \r\n\r\nIn this episode, we dive deep into their experience and the unique insights it gave them into the current state and future of AI education and application.","date_published":"2024-08-12T18:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/9cae0a8b-259a-4b01-a0f4-e5958297542b.mp3","mime_type":"audio/mpeg","size_in_bytes":81774888,"duration_in_seconds":5110}]},{"id":"3aa4ba58-30aa-4a85-a139-e9057629171c","title":"Episode 32: Building Reliable and Robust ML/AI Pipelines","url":"https://vanishinggradients.fireside.fm/32","content_text":"Hugo speaks with Shreya Shankar, a researcher at UC Berkeley focusing on data management systems with a human-centered approach. Shreya's work is at the cutting edge of human-computer interaction (HCI) and AI, particularly in the realm of large language models (LLMs). Her impressive background includes being the first ML engineer at Viaduct, doing research engineering at Google Brain, and software engineering at Facebook.\n\nIn this episode, we dive deep into the world of LLMs and the critical challenges of building reliable AI pipelines. We'll explore:\n\n\nThe fascinating journey from classic machine learning to the current LLM revolution\nWhy Shreya believes most ML problems are actually data management issues\nThe concept of \"data flywheels\" for LLM applications and how to implement them\nThe intriguing world of evaluating AI systems - who validates the validators?\nShreya's work on SPADE and EvalGen, innovative tools for synthesizing data quality assertions and aligning LLM evaluations with human preferences\nThe importance of human-in-the-loop processes in AI development\nThe future of low-code and no-code tools in the AI landscape\n\n\nWe'll also touch on the potential pitfalls of over-relying on LLMs, the concept of \"Habsburg AI,\" and how to avoid disappearing up our own proverbial arseholes in the world of recursive AI processes.\n\nWhether you're a seasoned AI practitioner, a curious data scientist, or someone interested in the human side of AI development, this conversation offers valuable insights into building more robust, reliable, and human-centered AI systems.\n\nLINKS\n\n\nThe livestream on YouTube\nShreya's website\nShreya on Twitter\nData Flywheels for LLM Applications\nSPADE: Synthesizing Data Quality Assertions for Large Language Model Pipelines\nWhat We’ve Learned From A Year of Building with LLMs\nWho Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences\nOperationalizing Machine Learning: An Interview Study\nVanishing Gradients on Twitter\nHugo on Twitter\n\n\nIn the podcast, Hugo also mentioned that this was the 5th time he and Shreya chatted publicly. 
which is wild!\n\nIf you want to dive deep into Shreya's work and related topics through their chats, you can check them all out here:\n\n\nOuterbounds' Fireside Chat: Operationalizing ML -- Patterns and Pain Points from MLOps Practitioners\nThe Past, Present, and Future of Generative AI\nLLMs, OpenAI Dev Day, and the Existential Crisis for Machine Learning Engineering\nLessons from a Year of Building with LLMs\n\n\nCheck out and subcribe to our lu.ma calendar for upcoming livestreams!","content_html":"

Hugo speaks with Shreya Shankar, a researcher at UC Berkeley focusing on data management systems with a human-centered approach. Shreya's work is at the cutting edge of human-computer interaction (HCI) and AI, particularly in the realm of large language models (LLMs). Her impressive background includes being the first ML engineer at Viaduct, doing research engineering at Google Brain, and software engineering at Facebook.


In this episode, we dive deep into the world of LLMs and the critical challenges of building reliable AI pipelines. We'll explore:

• The fascinating journey from classic machine learning to the current LLM revolution
• Why Shreya believes most ML problems are actually data management issues
• The concept of "data flywheels" for LLM applications and how to implement them
• The intriguing world of evaluating AI systems - who validates the validators?
• Shreya's work on SPADE and EvalGen, innovative tools for synthesizing data quality assertions and aligning LLM evaluations with human preferences
• The importance of human-in-the-loop processes in AI development
• The future of low-code and no-code tools in the AI landscape

We'll also touch on the potential pitfalls of over-relying on LLMs, the concept of "Habsburg AI," and how to avoid disappearing up our own proverbial arseholes in the world of recursive AI processes.


Whether you're a seasoned AI practitioner, a curious data scientist, or someone interested in the human side of AI development, this conversation offers valuable insights into building more robust, reliable, and human-centered AI systems.


LINKS

• The livestream on YouTube
• Shreya's website
• Shreya on Twitter
• Data Flywheels for LLM Applications
• SPADE: Synthesizing Data Quality Assertions for Large Language Model Pipelines
• What We’ve Learned From A Year of Building with LLMs
• Who Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences
• Operationalizing Machine Learning: An Interview Study
• Vanishing Gradients on Twitter
• Hugo on Twitter

In the podcast, Hugo also mentioned that this was the fifth time he and Shreya have chatted publicly, which is wild!


If you want to dive deep into Shreya's work and related topics through their chats, you can check them all out here:

  1. Outerbounds' Fireside Chat: Operationalizing ML -- Patterns and Pain Points from MLOps Practitioners
  2. The Past, Present, and Future of Generative AI
  3. LLMs, OpenAI Dev Day, and the Existential Crisis for Machine Learning Engineering
  4. Lessons from a Year of Building with LLMs

Check out and subscribe to our lu.ma calendar for upcoming livestreams!

","summary":"Hugo speaks with Shreya Shankar, a researcher at UC Berkeley focusing on data management systems with a human-centered approach. Shreya's work is at the cutting edge of human-computer interaction (HCI) and AI, particularly in the realm of large language models (LLMs). Her impressive background includes being the first ML engineer at Viaduct, doing research engineering at Google Brain, and software engineering at Facebook.","date_published":"2024-07-27T13:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/3aa4ba58-30aa-4a85-a139-e9057629171c.mp3","mime_type":"audio/mpeg","size_in_bytes":72173111,"duration_in_seconds":4510}]},{"id":"455d1587-7ba6-4850-920e-360d8cbe33d3","title":"Episode 31: Rethinking Data Science, Machine Learning, and AI","url":"https://vanishinggradients.fireside.fm/31","content_text":"Hugo speaks with Vincent Warmerdam, a senior data professional and machine learning engineer at :probabl, the exclusive brand operator of scikit-learn. Vincent is known for challenging common assumptions and exploring innovative approaches in data science and machine learning.\n\nIn this episode, they dive deep into rethinking established methods in data science, machine learning, and AI. We explore Vincent's principled approach to the field, including:\n\n\nThe critical importance of exposing yourself to real-world problems before applying ML solutions\nFraming problems correctly and understanding the data generating process\nThe power of visualization and human intuition in data analysis\nQuestioning whether algorithms truly meet the actual problem at hand\nThe value of simple, interpretable models and when to consider more complex approaches\nThe importance of UI and user experience in data science tools\nStrategies for preventing algorithmic failures by rethinking evaluation metrics and data quality\nThe potential and limitations of LLMs in the current data science landscape\nThe benefits of open-source collaboration and knowledge sharing in the community\n\n\nThroughout the conversation, Vincent illustrates these principles with vivid, real-world examples from his extensive experience in the field. They also discuss Vincent's thoughts on the future of data science and his call to action for more knowledge sharing in the community through blogging and open dialogue.\n\nLINKS\n\n\nThe livestream on YouTube\nVincent's blog\nCalmCode\nscikit-lego\nVincent's book Data Science Fiction (WIP)\nThe Deon Checklist, an ethics checklist for data scientists\nOf oaths and checklists, by DJ Patil, Hilary Mason and Mike Loukides\nVincent's Getting Started with NLP and spaCy Course course on Talk Python\nVincent on twitter\n:probabl. on twitter\nVincent's PyData Amsterdam Keynote \"Natural Intelligence is All You Need [tm]\"\nVincent's PyData Amsterdam 2019 talk: The profession of solving (the wrong problem) \nVanishing Gradients on Twitter\nHugo on Twitter\n\n\nCheck out and subcribe to our lu.ma calendar for upcoming livestreams!","content_html":"

Hugo speaks with Vincent Warmerdam, a senior data professional and machine learning engineer at :probabl, the exclusive brand operator of scikit-learn. Vincent is known for challenging common assumptions and exploring innovative approaches in data science and machine learning.


In this episode, we dive deep into rethinking established methods in data science, machine learning, and AI. We explore Vincent's principled approach to the field, including:

• The critical importance of exposing yourself to real-world problems before applying ML solutions
• Framing problems correctly and understanding the data generating process
• The power of visualization and human intuition in data analysis
• Questioning whether algorithms truly meet the actual problem at hand
• The value of simple, interpretable models and when to consider more complex approaches
• The importance of UI and user experience in data science tools
• Strategies for preventing algorithmic failures by rethinking evaluation metrics and data quality
• The potential and limitations of LLMs in the current data science landscape
• The benefits of open-source collaboration and knowledge sharing in the community

Throughout the conversation, Vincent illustrates these principles with vivid, real-world examples from his extensive experience in the field. They also discuss Vincent's thoughts on the future of data science and his call to action for more knowledge sharing in the community through blogging and open dialogue.


LINKS

• The livestream on YouTube
• Vincent's blog
• CalmCode
• scikit-lego
• Vincent's book Data Science Fiction (WIP)
• The Deon Checklist, an ethics checklist for data scientists
• Of oaths and checklists, by DJ Patil, Hilary Mason and Mike Loukides
• Vincent's Getting Started with NLP and spaCy course on Talk Python
• Vincent on Twitter
• :probabl. on Twitter
• Vincent's PyData Amsterdam Keynote "Natural Intelligence is All You Need [tm]"
• Vincent's PyData Amsterdam 2019 talk: The profession of solving (the wrong problem)
• Vanishing Gradients on Twitter
• Hugo on Twitter

Check out and subscribe to our lu.ma calendar for upcoming livestreams!

","summary":"Hugo speaks with Vincent Warmerdam, a senior data professional and machine learning engineer at :probabl, the exclusive brand operator of scikit-learn. Vincent is known for challenging common assumptions and exploring innovative approaches in data science and machine learning.","date_published":"2024-07-09T19:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/455d1587-7ba6-4850-920e-360d8cbe33d3.mp3","mime_type":"audio/mpeg","size_in_bytes":92236825,"duration_in_seconds":5764}]},{"id":"5412d7de-a99a-48c1-a1b4-f37f9bb29254","title":"Episode 30: Lessons from a Year of Building with LLMs (Part 2)","url":"https://vanishinggradients.fireside.fm/30","content_text":"Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley.\n\nThese five guests, along with Jason Liu who couldn't join us, have spent the past year building real-world applications with Large Language Models (LLMs). They've distilled their experiences into a report of 42 lessons across operational, strategic, and tactical dimensions, and they're here to share their insights.\n\nWe’ve split this roundtable into 2 episodes and, in this second episode, we'll explore:\n\n\nAn inside look at building end-to-end systems with LLMs;\nThe experimentation mindset: Why it's the key to successful AI products;\nBuilding trust in AI: Strategies for getting stakeholders on board;\nThe art of data examination: Why looking at your data is more crucial than ever;\nEvaluation strategies that separate the pros from the amateurs.\n\n\nAlthough we're focusing on LLMs, many of these insights apply broadly to data science, machine learning, and product development, more generally.\n\nLINKS\n\n\nThe livestream on YouTube\nThe Report: What We’ve Learned From A Year of Building with LLMs\nAbout the Guests/Authors <-- connect with them all on LinkedIn, follow them on Twitter, subscribe to their newsletters! (Seriously, though, the amount of collective wisdom here is 🤑\nYour AI product needs evals by Hamel Husain\nPrompting Fundamentals and How to Apply them Effectively by Eugene Yan\nFuck You, Show Me The Prompt by Hamel Husain\nVanishing Gradients on YouTube\nVanishing Gradients on Twitter\nVanishing Gradients on Lu.ma\n","content_html":"

Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley.


These five guests, along with Jason Liu who couldn't join us, have spent the past year building real-world applications with Large Language Models (LLMs). They've distilled their experiences into a report of 42 lessons across operational, strategic, and tactical dimensions, and they're here to share their insights.


We’ve split this roundtable into 2 episodes and, in this second episode, we'll explore:

• An inside look at building end-to-end systems with LLMs;
• The experimentation mindset: Why it's the key to successful AI products;
• Building trust in AI: Strategies for getting stakeholders on board;
• The art of data examination: Why looking at your data is more crucial than ever;
• Evaluation strategies that separate the pros from the amateurs.

Although we're focusing on LLMs, many of these insights apply broadly to data science, machine learning, and product development, more generally.


LINKS
• The livestream on YouTube
• The Report: What We’ve Learned From A Year of Building with LLMs
• About the Guests/Authors <-- connect with them all on LinkedIn, follow them on Twitter, subscribe to their newsletters! (Seriously, though, the amount of collective wisdom here is 🤑)
• Your AI product needs evals by Hamel Husain
• Prompting Fundamentals and How to Apply them Effectively by Eugene Yan
• Fuck You, Show Me The Prompt by Hamel Husain
• Vanishing Gradients on YouTube
• Vanishing Gradients on Twitter
• Vanishing Gradients on Lu.ma
\n\n","summary":"Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley (Part 2).","date_published":"2024-06-26T15:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/5412d7de-a99a-48c1-a1b4-f37f9bb29254.mp3","mime_type":"audio/mpeg","size_in_bytes":72382927,"duration_in_seconds":4523}]},{"id":"7a5a4f5a-0040-451c-82f5-fd61cf1515f4","title":"Episode 29: Lessons from a Year of Building with LLMs (Part 1)","url":"https://vanishinggradients.fireside.fm/29","content_text":"Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley.\n\nThese five guests, along with Jason Liu who couldn't join us, have spent the past year building real-world applications with Large Language Models (LLMs). They've distilled their experiences into a report of 42 lessons across operational, strategic, and tactical dimensions, and they're here to share their insights.\n\nWe’ve split this roundtable into 2 episodes and, in this first episode, we'll explore:\n\n\nThe critical role of evaluation and monitoring in LLM applications and why they're non-negotiable, including \"evals\" - short for evaluations, which are automated tests for assessing LLM performance and output quality;\nWhy data literacy is your secret weapon in the AI landscape;\nThe fine-tuning dilemma: when to do it and when to skip it;\nReal-world lessons from building LLM applications that textbooks won't teach you;\nThe evolving role of data scientists and AI engineers in the age of AI.\n\n\nAlthough we're focusing on LLMs, many of these insights apply broadly to data science, machine learning, and product development, more generally.\n\nLINKS\n\n\nThe livestream on YouTube\nThe Report: What We’ve Learned From A Year of Building with LLMs\nAbout the Guests/Authors <-- connect with them all on LinkedIn, follow them on Twitter, subscribe to their newsletters! (Seriously, though, the amount of collective wisdom here is 🤑\nYour AI product needs evals by Hamel Husain\nPrompting Fundamentals and How to Apply them Effectively by Eugene Yan\nFuck You, Show Me The Prompt by Hamel Husain\nVanishing Gradients on YouTube\nVanishing Gradients on Twitter\nVanishing Gradients on Lu.ma\n","content_html":"

Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley.


These five guests, along with Jason Liu who couldn't join us, have spent the past year building real-world applications with Large Language Models (LLMs). They've distilled their experiences into a report of 42 lessons across operational, strategic, and tactical dimensions, and they're here to share their insights.


We’ve split this roundtable into 2 episodes and, in this first episode, we'll explore:

• The critical role of evaluation and monitoring in LLM applications and why they're non-negotiable, including "evals" - short for evaluations, which are automated tests for assessing LLM performance and output quality;
• Why data literacy is your secret weapon in the AI landscape;
• The fine-tuning dilemma: when to do it and when to skip it;
• Real-world lessons from building LLM applications that textbooks won't teach you;
• The evolving role of data scientists and AI engineers in the age of AI.

Although we're focusing on LLMs, many of these insights apply broadly to data science, machine learning, and product development, more generally.


LINKS
• The livestream on YouTube
• The Report: What We’ve Learned From A Year of Building with LLMs
• About the Guests/Authors <-- connect with them all on LinkedIn, follow them on Twitter, subscribe to their newsletters! (Seriously, though, the amount of collective wisdom here is 🤑)
• Your AI product needs evals by Hamel Husain
• Prompting Fundamentals and How to Apply them Effectively by Eugene Yan
• Fuck You, Show Me The Prompt by Hamel Husain
• Vanishing Gradients on YouTube
• Vanishing Gradients on Twitter
• Vanishing Gradients on Lu.ma
\n\n","summary":"Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley (Part 1).","date_published":"2024-06-26T14:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/7a5a4f5a-0040-451c-82f5-fd61cf1515f4.mp3","mime_type":"audio/mpeg","size_in_bytes":86750692,"duration_in_seconds":5421}]},{"id":"b268a89e-4fc9-4f9f-a2a5-c7636b3fbd70","title":"Episode 28: Beyond Supervised Learning: The Rise of In-Context Learning with LLMs","url":"https://vanishinggradients.fireside.fm/28","content_text":"Hugo speaks with Alan Nichol, co-founder and CTO of Rasa, where they build software to enable developers to create enterprise-grade conversational AI and chatbot systems across industries like telcos, healthcare, fintech, and government.\n\nWhat's super cool is that Alan and the Rasa team have been doing this type of thing for over a decade, giving them a wealth of wisdom on how to effectively incorporate LLMs into chatbots - and how not to. For example, if you want a chatbot that takes specific and important actions like transferring money, do you want to fully entrust the conversation to one big LLM like ChatGPT, or secure what the LLMs can do inside key business logic?\n\nIn this episode, they also dive into the history of conversational AI and explore how the advent of LLMs is reshaping the field. Alan shares his perspective on how supervised learning has failed us in some ways and discusses what he sees as the most overrated and underrated aspects of LLMs.\n\nAlan offers advice for those looking to work with LLMs and conversational AI, emphasizing the importance of not sleeping on proven techniques and looking beyond the latest hype. In a live demo, he showcases Rasa's Calm (Conversational AI with Language Models), which allows developers to define business logic declaratively and separate it from the LLM, enabling reliable execution of conversational flows.\n\nLINKS\n\n\nThe livestream on YouTube\nAlan's Rasa CALM Demo: Building Conversational AI with LLMs \nAlan on twitter.com\nRasa\nCALM, an LLM-native approach to building reliable conversational AI\nTask-Oriented Dialogue with In-Context Learning\n'We don’t know how to build conversational software yet' by Alan Nicol\nVanishing Gradients on Twitter\nHugo on Twitter\n\n\nUpcoming Livestreams\n\n\nLessons from a Year of Building with LLMs\nVALIDATING THE VALIDATORS with Shreya Shanker\n","content_html":"

Hugo speaks with Alan Nichol, co-founder and CTO of Rasa, where they build software to enable developers to create enterprise-grade conversational AI and chatbot systems across industries like telcos, healthcare, fintech, and government.


What's super cool is that Alan and the Rasa team have been doing this type of thing for over a decade, giving them a wealth of wisdom on how to effectively incorporate LLMs into chatbots - and how not to. For example, if you want a chatbot that takes specific and important actions like transferring money, do you want to fully entrust the conversation to one big LLM like ChatGPT, or secure what the LLMs can do inside key business logic?


In this episode, they also dive into the history of conversational AI and explore how the advent of LLMs is reshaping the field. Alan shares his perspective on how supervised learning has failed us in some ways and discusses what he sees as the most overrated and underrated aspects of LLMs.


Alan offers advice for those looking to work with LLMs and conversational AI, emphasizing the importance of not sleeping on proven techniques and looking beyond the latest hype. In a live demo, he showcases Rasa's CALM (Conversational AI with Language Models), which allows developers to define business logic declaratively and separate it from the LLM, enabling reliable execution of conversational flows.


LINKS

• The livestream on YouTube
• Alan's Rasa CALM Demo: Building Conversational AI with LLMs
• Alan on twitter.com
• Rasa
• CALM, an LLM-native approach to building reliable conversational AI
• Task-Oriented Dialogue with In-Context Learning
• 'We don’t know how to build conversational software yet' by Alan Nichol
• Vanishing Gradients on Twitter
• Hugo on Twitter

Upcoming Livestreams
• Lessons from a Year of Building with LLMs
• Validating the Validators with Shreya Shankar
\n\n","summary":"Hugo speaks with Alan Nichol, co-founder and CTO of Rasa, where they build software to enable developers to create enterprise-grade conversational AI and chatbot systems across industries like telcos, healthcare, fintech, and government.","date_published":"2024-06-10T08:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/b268a89e-4fc9-4f9f-a2a5-c7636b3fbd70.mp3","mime_type":"audio/mpeg","size_in_bytes":63014789,"duration_in_seconds":3938}]},{"id":"d42a2479-a220-4f72-bf48-946c4a393efa","title":"Episode 27: How to Build Terrible AI Systems","url":"https://vanishinggradients.fireside.fm/27","content_text":"Hugo speaks with Jason Liu, an independent consultant who uses his expertise in recommendation systems to help fast-growing startups build out their RAG applications. He was previously at Meta and Stitch Fix is also the creator of Instructor, Flight, and an ML and data science educator.\n\nThey talk about how Jason approaches consulting companies across many industries, including construction and sales, in building production LLM apps, his playbook for getting ML and AI up and running to build and maintain such apps, and the future of tooling to do so.\n\nThey take an inverted thinking approach, envisaging all the failure modes that would result in building terrible AI systems, and then figure out how to avoid such pitfalls.\n\nLINKS\n\n\nThe livestream on YouTube\nJason's website\nPyDdantic is all you need, Jason's Keynote at AI Engineer Summit, 2023\nHow to build a terrible RAG system by Jason\nTo express interest in Jason's Systematically improving RAG Applications course\nVanishing Gradients on Twitter\nHugo on Twitter\n\n\nUpcoming Livestreams\n\n\nGood Riddance to Supervised Learning with Alan Nichol (CTO and co-founder, Rasa)\nLessons from a Year of Building with LLMs\n","content_html":"

Hugo speaks with Jason Liu, an independent consultant who uses his expertise in recommendation systems to help fast-growing startups build out their RAG applications. He was previously at Meta and Stitch Fix, is the creator of Instructor and Flight, and is an ML and data science educator.


They talk about how Jason approaches consulting companies across many industries, including construction and sales, in building production LLM apps, his playbook for getting ML and AI up and running to build and maintain such apps, and the future of tooling to do so.


They take an inverted thinking approach, envisaging all the failure modes that would result in building terrible AI systems, and then figure out how to avoid such pitfalls.


LINKS

• The livestream on YouTube
• Jason's website
• Pydantic is all you need, Jason's Keynote at AI Engineer Summit, 2023
• How to build a terrible RAG system by Jason
• To express interest in Jason's Systematically Improving RAG Applications course
• Vanishing Gradients on Twitter
• Hugo on Twitter

Upcoming Livestreams
• Good Riddance to Supervised Learning with Alan Nichol (CTO and co-founder, Rasa)
• Lessons from a Year of Building with LLMs
\n\n","summary":"Hugo speaks with Jason Liu, an independent consultant who uses his expertise in recommendation systems to help fast-growing startups build out their RAG applications. He was previously at Meta and Stitch Fix is also the creator of Instructor, Flight, and an ML and data science educator.","date_published":"2024-05-31T10:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/d42a2479-a220-4f72-bf48-946c4a393efa.mp3","mime_type":"audio/mpeg","size_in_bytes":88718026,"duration_in_seconds":5544}]},{"id":"d56cd02b-11cb-4be9-a2a7-31f783ef9c1a","title":"Episode 26: Developing and Training LLMs From Scratch","url":"https://vanishinggradients.fireside.fm/26","content_text":"Hugo speaks with Sebastian Raschka, a machine learning & AI researcher, programmer, and author. As Staff Research Engineer at Lightning AI, he focuses on the intersection of AI research, software development, and large language models (LLMs).\n\nHow do you build LLMs? How can you use them, both in prototype and production settings? What are the building blocks you need to know about?\n\n​In this episode, we’ll tell you everything you need to know about LLMs, but were too afraid to ask: from covering the entire LLM lifecycle, what type of skills you need to work with them, what type of resources and hardware, prompt engineering vs fine-tuning vs RAG, how to build an LLM from scratch, and much more.\n\nThe idea here is not that you’ll need to use an LLM you’ve built from scratch, but that we’ll learn a lot about LLMs and how to use them in the process.\n\nNear the end we also did some live coding to fine-tune GPT-2 in order to create a spam classifier! \n\nLINKS\n\n\nThe livestream on YouTube\nSebastian's website\nMachine Learning Q and AI: 30 Essential Questions and Answers on Machine Learning and AI by Sebastian\nBuild a Large Language Model (From Scratch) by Sebastian\nPyTorch Lightning\nLightning Fabric\nLitGPT\nSebastian's notebook for finetuning GPT-2 for spam classification!\nThe end of fine-tuning: Jeremy Howard on the Latent Space Podcast\nOur next livestream: How to Build Terrible AI Systems with Jason Liu\nVanishing Gradients on Twitter\nHugo on Twitter\n","content_html":"

Hugo speaks with Sebastian Raschka, a machine learning & AI researcher, programmer, and author. As Staff Research Engineer at Lightning AI, he focuses on the intersection of AI research, software development, and large language models (LLMs).


How do you build LLMs? How can you use them, both in prototype and production settings? What are the building blocks you need to know about?


In this episode, we’ll tell you everything you’ve wanted to know about LLMs but were too afraid to ask: covering the entire LLM lifecycle, the skills you need to work with LLMs, the resources and hardware required, prompt engineering vs fine-tuning vs RAG, how to build an LLM from scratch, and much more.


The idea here is not that you’ll need to use an LLM you’ve built from scratch, but that we’ll learn a lot about LLMs and how to use them in the process.


Near the end we also did some live coding to fine-tune GPT-2 in order to create a spam classifier!


LINKS
• The livestream on YouTube
• Sebastian's website
• Machine Learning Q and AI: 30 Essential Questions and Answers on Machine Learning and AI by Sebastian
• Build a Large Language Model (From Scratch) by Sebastian
• PyTorch Lightning
• Lightning Fabric
• LitGPT
• Sebastian's notebook for finetuning GPT-2 for spam classification!
• The end of fine-tuning: Jeremy Howard on the Latent Space Podcast
• Our next livestream: How to Build Terrible AI Systems with Jason Liu
• Vanishing Gradients on Twitter
• Hugo on Twitter
\n\n","summary":"Hugo speaks with Sebastian Raschka, a machine learning & AI researcher, programmer, and author.They’ll tell you everything you need to know about LLMs, but were too afraid to ask: from covering the entire LLM lifecycle, what type of skills you need to work with them, what type of resources and hardware, prompt engineering vs fine-tuning vs RAG, how to build an LLM from scratch, and much more.","date_published":"2024-05-15T13:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/d56cd02b-11cb-4be9-a2a7-31f783ef9c1a.mp3","mime_type":"audio/mpeg","size_in_bytes":53564523,"duration_in_seconds":6695}]},{"id":"2e66472b-34f3-4068-b6f9-4942dc757325","title":"Episode 25: Fully Reproducible ML & AI Workflows","url":"https://vanishinggradients.fireside.fm/25","content_text":"Hugo speaks with Omoju Miller, a machine learning guru and founder and CEO of Fimio, where she is building 21st century dev tooling. In the past, she was Technical Advisor to the CEO at GitHub, spent time co-leading non-profit investment in Computer Science Education for Google, and served as a volunteer advisor to the Obama administration’s White House Presidential Innovation Fellows.\n\nWe need open tools, open data, provenance, and the ability to build fully reproducible, transparent machine learning workflows. With the advent of closed-source, vendor-based APIs and compute becoming a form of gate-keeping, developer tools are at the risk of becoming commoditized and developers becoming consumers.\n\nWe’ll talk about how ideas for escaping these burgeoning walled gardens. We’ll dive into\n\n\nWhat fully reproducible ML workflows would look like, including git for the workflow build process,\nThe need for loosely coupled and composable tools that embrace a UNIX-like philosophy,\nWhat a much more scientific toolchain would look like,\nWhat a future open sources commons for Generative AI could look like,\nWhat an open compute ecosystem could look like,\nHow to create LLMs and tooling so everyone can use them to build production-ready apps,\n\n\nAnd much more!\n\nLINKS\n\n\nThe livestream on YouTube\nOmoju on Twitter\nHugo on Twitter\nVanishing Gradients on Twitter\nLu.ma Calendar that includes details of Hugo's European Tour for Outerbounds\nBlog post that includes details of Hugo's European Tour for Outerbounds\n","content_html":"

Hugo speaks with Omoju Miller, a machine learning guru and founder and CEO of Fimio, where she is building 21st century dev tooling. In the past, she was Technical Advisor to the CEO at GitHub, spent time co-leading non-profit investment in Computer Science Education for Google, and served as a volunteer advisor to the Obama administration’s White House Presidential Innovation Fellows.

\n\n

We need open tools, open data, provenance, and the ability to build fully reproducible, transparent machine learning workflows. With the advent of closed-source, vendor-based APIs and compute becoming a form of gate-keeping, developer tools are at risk of becoming commoditized and developers of becoming consumers.

\n\n

We’ll talk about ideas for escaping these burgeoning walled gardens. We’ll dive into

\n\n
What fully reproducible ML workflows would look like, including git for the workflow build process,
The need for loosely coupled and composable tools that embrace a UNIX-like philosophy,
What a much more scientific toolchain would look like,
What a future open source commons for Generative AI could look like,
What an open compute ecosystem could look like,
How to create LLMs and tooling so everyone can use them to build production-ready apps,
\n\n

And much more!

\n\n

LINKS

\n\n","summary":"Hugo speaks with Omoju Miller, a machine learning guru and founder and CEO of Fimio, where she is building 21st century dev tooling.","date_published":"2024-03-18T23:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/2e66472b-34f3-4068-b6f9-4942dc757325.mp3","mime_type":"audio/mpeg","size_in_bytes":77423933,"duration_in_seconds":4838}]},{"id":"c6ebf900-c625-493a-b4c5-27a7f31da24f","title":"Episode 24: LLM and GenAI Accessibility","url":"https://vanishinggradients.fireside.fm/24","content_text":"Hugo speaks with Johno Whitaker, a Data Scientist/AI Researcher doing R&D with answer.ai. His current focus is on generative AI, flitting between different modalities. He also likes teaching and making courses, having worked with both Hugging Face and fast.ai in these capacities.\n\nJohno recently reminded Hugo how hard everything was 10 years ago: “Want to install TensorFlow? Good luck. Need data? Perhaps try ImageNet. But now you can use big models from Hugging Face with hi-res satellite data and do all of this in a Colab notebook. Or think ecology and vision models… or medicine and multimodal models!”\n\nWe talk about where we’ve come from regarding tooling and accessibility for foundation models, ML, and AI, where we are, and where we’re going. We’ll delve into\n\n\nWhat the Generative AI mindset is, in terms of using atomic building blocks, and how it evolved from both the data science and ML mindsets;\nHow fast.ai democratized access to deep learning, what successes they had, and what was learned;\nThe moving parts now required to make GenAI and ML as accessible as possible;\nThe importance of focusing on UX and the application in the world of generative AI and foundation models;\nThe skillset and toolkit needed to be an LLM and AI guru;\nWhat they’re up to at answer.ai to democratize LLMs and foundation models.\n\n\nLINKS\n\n\nThe livestream on YouTube\nZindi, the largest professional network for data scientists in Africa\nA new old kind of R&D lab: Announcing Answer.AI\nWhy and how I’m shifting focus to LLMs by Johno Whitaker\nApplying AI to Immune Cell Networks by Rachel Thomas\nReplicate -- a cool place to explore GenAI models, among other things\nHands-On Generative AI with Transformers and Diffusion Models\nJohno on Twitter\nHugo on Twitter\nVanishing Gradients on Twitter\nSciPy 2024 CFP\nEscaping Generative AI Walled Gardens with Omoju Miller, a Vanishing Gradients Livestream\n","content_html":"

Hugo speaks with Johno Whitaker, a Data Scientist/AI Researcher doing R&D with answer.ai. His current focus is on generative AI, flitting between different modalities. He also likes teaching and making courses, having worked with both Hugging Face and fast.ai in these capacities.

\n\n

Johno recently reminded Hugo how hard everything was 10 years ago: “Want to install TensorFlow? Good luck. Need data? Perhaps try ImageNet. But now you can use big models from Hugging Face with hi-res satellite data and do all of this in a Colab notebook. Or think ecology and vision models… or medicine and multimodal models!”

\n\n

We talk about where we’ve come from regarding tooling and accessibility for foundation models, ML, and AI, where we are, and where we’re going. We’ll delve into

\n\n
What the Generative AI mindset is, in terms of using atomic building blocks, and how it evolved from both the data science and ML mindsets;
How fast.ai democratized access to deep learning, what successes they had, and what was learned;
The moving parts now required to make GenAI and ML as accessible as possible;
The importance of focusing on UX and the application in the world of generative AI and foundation models;
The skillset and toolkit needed to be an LLM and AI guru;
What they’re up to at answer.ai to democratize LLMs and foundation models.
\n\n

LINKS

\n\n","summary":"Hugo speaks with Johno Whitaker, a Data Scientist/AI Researcher doing R&D with answer.ai, about where we’ve come from regarding tooling and accessibility for foundation models, ML, and AI, where we are, and where we’re going.","date_published":"2024-02-27T17:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/c6ebf900-c625-493a-b4c5-27a7f31da24f.mp3","mime_type":"audio/mpeg","size_in_bytes":86459792,"duration_in_seconds":5403}]},{"id":"96dc5719-497e-4bdb-82e0-a336cf46ec5d","title":"Episode 23: Statistical and Algorithmic Thinking in the AI Age","url":"https://vanishinggradients.fireside.fm/23","content_text":"Hugo speaks with Allen Downey, a curriculum designer at Brilliant, Professor Emeritus at Olin College, and the author of Think Python, Think Bayes, Think Stats, and other computer science and data science books. In 2019-20 he was a Visiting Professor at Harvard University. He previously taught at Wellesley College and Colby College and was a Visiting Scientist at Google. He is also the author of the upcoming book Probably Overthinking It!\n\nThey discuss Allen's new book and the key statistical and data skills we all need to navigate an increasingly data-driven and algorithmic world. The goal was to dive deep into the statistical paradoxes and fallacies that get in the way of using data to make informed decisions. \n\nFor example, when it was reported in 2021 that “in the United Kingdom, 70-plus percent of the people who die now from COVID are fully vaccinated,” this was correct but the implication was entirely wrong. Their conversation jumps into many such concrete examples to get to the bottom of using data for more than “lies, damned lies, and statistics.” They cover\n\n\nInformation and misinformation around pandemics and the base rate fallacy;\nThe tools we need to comprehend the small probabilities of high-risk events such as stock market crashes, earthquakes, and more;\nThe many definitions of algorithmic fairness, why they can't all be met at once, and what we can do about it;\nPublic health, the need for robust causal inference, and variations on Berkson’s paradox, such as the low-birthweight paradox: an influential paper found that that the mortality rate for children of smokers is lower for low-birthweight babies;\nWhy none of us are normal in any sense of the word, both in physical and psychological measurements;\nThe Inspection paradox, which shows up in the criminal justice system and distorts our perception of prison sentences and the risk of repeat offenders.\n\n\nLINKS\n\n\nThe livestream on YouTube\nAllen Downey on Github\nAllen's new book Probably Overthinking It!\nAllen on Twitter\nPrediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions by Mitchell et al.\n","content_html":"

Hugo speaks with Allen Downey, a curriculum designer at Brilliant, Professor Emeritus at Olin College, and the author of Think Python, Think Bayes, Think Stats, and other computer science and data science books. In 2019-20 he was a Visiting Professor at Harvard University. He previously taught at Wellesley College and Colby College and was a Visiting Scientist at Google. He is also the author of the upcoming book Probably Overthinking It!

\n\n

They discuss Allen's new book and the key statistical and data skills we all need to navigate an increasingly data-driven and algorithmic world. The goal was to dive deep into the statistical paradoxes and fallacies that get in the way of using data to make informed decisions.

\n\n

For example, when it was reported in 2021 that “in the United Kingdom, 70-plus percent of the people who die now from COVID are fully vaccinated,” this was correct but the implication was entirely wrong. Their conversation jumps into many such concrete examples to get to the bottom of using data for more than “lies, damned lies, and statistics.” They cover

\n\n
Information and misinformation around pandemics and the base rate fallacy;
The tools we need to comprehend the small probabilities of high-risk events such as stock market crashes, earthquakes, and more;
The many definitions of algorithmic fairness, why they can't all be met at once, and what we can do about it;
Public health, the need for robust causal inference, and variations on Berkson’s paradox, such as the low-birthweight paradox: an influential paper found that the mortality rate for children of smokers is lower for low-birthweight babies;
Why none of us are normal in any sense of the word, both in physical and psychological measurements;
The Inspection paradox, which shows up in the criminal justice system and distorts our perception of prison sentences and the risk of repeat offenders.
\n\n
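To see how the vaccination statistic above can be true and still misleading, it helps to run the base-rate arithmetic yourself. Here is a tiny sketch with made-up numbers chosen purely for illustration (not the actual UK figures):

    # Base rate fallacy: when almost everyone is vaccinated, most deaths
    # can occur among the vaccinated even if the vaccine works very well.
    vaccinated_share = 0.95  # made-up: 95% of the population is vaccinated
    relative_risk = 0.2      # made-up: vaccinated die at 1/5 the unvaccinated rate

    deaths_vax = vaccinated_share * relative_risk
    deaths_unvax = (1 - vaccinated_share) * 1.0

    share_vax = deaths_vax / (deaths_vax + deaths_unvax)
    print(f"{share_vax:.0%} of deaths are among the vaccinated")
    # -> 79% of deaths are among the vaccinated, even though vaccination
    #    cut each individual's risk of death by 80%.

\n\n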

LINKS

\n\n","summary":"Hugo speaks with Allen Downey, curriculum designer at Brilliant, Professor Emeritus at Olin College, and author, about the key statistical and data skills we all need to navigate an increasingly data-driven and algorithmic world. The goal will be to dive deep into the statistical paradoxes and fallacies that get in the way of using data to make informed decisions. ","date_published":"2023-12-21T09:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/96dc5719-497e-4bdb-82e0-a336cf46ec5d.mp3","mime_type":"audio/mpeg","size_in_bytes":77400109,"duration_in_seconds":4837}]},{"id":"1565738b-1090-4efe-bb2c-2a4244eff19c","title":"Episode 22: LLMs, OpenAI, and the Existential Crisis for Machine Learning Engineering","url":"https://vanishinggradients.fireside.fm/22","content_text":"Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs) join Hugo Bowne-Anderson to talk about how LLMs and OpenAI are changing the worlds of data science, machine learning, and machine learning engineering.\n\nJeremy Howard is co-founder of fast.ai, an ex-Chief Scientist at Kaggle, and creator of the ULMFiT approach on which all modern language models are based. Shreya Shankar is at UC Berkeley, ex Google brain, Facebook, and Viaduct. Hamel Husain has his own generative AI and LLM consultancy Parlance Labs and was previously at Outerbounds, Github, and Airbnb.\n\nThey talk about\n\n\nHow LLMs shift the nature of the work we do in DS and ML,\nHow they change the tools we use,\nThe ways in which they could displace the role of traditional ML (e.g. will we stop using xgboost any time soon?),\nHow to navigate all the new tools and techniques,\nThe trade-offs between open and closed models,\nReactions to the recent Open Developer Day and the increasing existential crisis for ML.\n\n\nLINKS\n\n\nThe panel on YouTube\nHugo and Jeremy's upcoming livestream on what the hell happened recently at OpenAI, among many other things\nVanishing Gradients on YouTube\nVanishing Gradients on twitter\n","content_html":"

Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs) join Hugo Bowne-Anderson to talk about how LLMs and OpenAI are changing the worlds of data science, machine learning, and machine learning engineering.

\n\n

Jeremy Howard is co-founder of fast.ai, an ex-Chief Scientist at Kaggle, and creator of the ULMFiT approach on which all modern language models are based. Shreya Shankar is at UC Berkeley, ex-Google Brain, Facebook, and Viaduct. Hamel Husain has his own generative AI and LLM consultancy, Parlance Labs, and was previously at Outerbounds, GitHub, and Airbnb.

\n\n

They talk about

\n\n
How LLMs shift the nature of the work we do in DS and ML,
How they change the tools we use,
The ways in which they could displace the role of traditional ML (e.g. will we stop using xgboost any time soon?),
How to navigate all the new tools and techniques,
The trade-offs between open and closed models,
Reactions to the recent OpenAI Dev Day and the increasing existential crisis for ML.
\n\n

LINKS

\n\n","summary":"Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs) join Hugo Bowne-Anderson to talk about how LLMs and OpenAI are changing the worlds of data science, machine learning, and machine learning engineering.","date_published":"2023-11-28T08:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/1565738b-1090-4efe-bb2c-2a4244eff19c.mp3","mime_type":"audio/mpeg","size_in_bytes":76924471,"duration_in_seconds":4807}]},{"id":"e329eaa4-5768-44d0-878a-a96f3f2b53f0","title":"Episode 21: Deploying LLMs in Production: Lessons Learned","url":"https://vanishinggradients.fireside.fm/21","content_text":"Hugo speaks with Hamel Husain, a machine learning engineer who loves building machine learning infrastructure and tools 👷. Hamel leads and contributes to many popular open-source machine learning projects. He also has extensive experience (20+ years) as a machine learning engineer across various industries, including large tech companies like Airbnb and GitHub. At GitHub, he led CodeSearchNet, a large language model for semantic search that was a precursor to CoPilot. Hamel is the founder of Parlance-Labs, a research and consultancy focused on LLMs.\n\nThey talk about generative AI, large language models, the business value they can generate, and how to get started. \n\nThey delve into\n\n\nWhere Hamel is seeing the most business interest in LLMs (spoiler: the answer isn’t only tech);\nCommon misconceptions about LLMs;\nThe skills you need to work with LLMs and GenAI models;\nTools and techniques, such as fine-tuning, RAGs, LoRA, hardware, and more!\nVendor APIs vs OSS models.\n\n\nLINKS\n\n\nOur upcoming livestream LLMs, OpenAI Dev Day, and the Existential Crisis for Machine Learning Engineering with Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs): Sign up for free!\nOur recent livestream Data and DevOps Tools for Evaluating and Productionizing LLMs with Hamel and Emil Sedgh, Lead AI engineer at Rechat -- in it, we showcase an actual industrial use case that Hamel and Emil are working on with Rechat, a real estate CRM, taking you through LLM workflows and tools.\nExtended Guide: Instruction-tune Llama 2 by Philipp Schmid\nThe livestream recoding of this episode!\nHamel on twitter\n","content_html":"

Hugo speaks with Hamel Husain, a machine learning engineer who loves building machine learning infrastructure and tools 👷. Hamel leads and contributes to many popular open-source machine learning projects. He also has extensive experience (20+ years) as a machine learning engineer across various industries, including large tech companies like Airbnb and GitHub. At GitHub, he led CodeSearchNet, a large language model for semantic search that was a precursor to Copilot. Hamel is the founder of Parlance Labs, a research and consulting firm focused on LLMs.

\n\n

They talk about generative AI, large language models, the business value they can generate, and how to get started.

\n\n

They delve into

\n\n
Where Hamel is seeing the most business interest in LLMs (spoiler: the answer isn’t only tech);
Common misconceptions about LLMs;
The skills you need to work with LLMs and GenAI models;
Tools and techniques, such as fine-tuning, RAG, LoRA, hardware, and more!
Vendor APIs vs OSS models.
\n\n
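As a concrete illustration of the RAG pattern named in that list, here is a deliberately tiny retrieval step in pure Python. The character-frequency "embedding" is a hypothetical stand-in for a real embedding model, and none of this is code from the episode; it is a sketch of the shape of the idea:

    # Toy retrieval-augmented generation (RAG): embed documents, retrieve
    # the closest ones to a query, and stuff them into a prompt.
    import numpy as np

    def embed(text):
        # Hypothetical stand-in: a real system would call an embedding model.
        vec = np.zeros(128)
        for ch in text.lower():
            vec[ord(ch) % 128] += 1
        return vec / (np.linalg.norm(vec) + 1e-9)

    docs = [
        "Our refund policy allows returns within 30 days.",
        "Support hours are 9am to 5pm, Monday through Friday.",
        "Shipping is free on orders over $50.",
    ]
    doc_vecs = np.stack([embed(d) for d in docs])

    def retrieve(query, k=2):
        scores = doc_vecs @ embed(query)  # cosine similarity (unit vectors)
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    query = "Can I get my money back?"
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    # `prompt` would then be sent to an LLM of your choice.

\n\n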

LINKS

\n\n","summary":"Hugo speaks with Hamel Husain (ex-Github, Airbnb), a machine learning engineer who loves building machine learning infrastructure and tools, about generative AI, large language models, the business value they can generate, and how to get started. ","date_published":"2023-11-14T16:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/e329eaa4-5768-44d0-878a-a96f3f2b53f0.mp3","mime_type":"audio/mpeg","size_in_bytes":65466947,"duration_in_seconds":4091}]},{"id":"3c0c5565-056f-45f4-a785-ec46800bb2cd","title":"Episode 20: Data Science: Past, Present, and Future","url":"https://vanishinggradients.fireside.fm/20","content_text":"Hugo speaks with Chris Wiggins (Columbia, NYTimes) and Matthew Jones (Princeton) about their recent book How Data Happened, and the Columbia course it expands upon, data: past, present, and future.\n\nChris is an associate professor of applied mathematics at Columbia University and the New York Times’ chief data scientist, and Matthew is a professor of history at Princeton University and former Guggenheim Fellow.\n\nFrom facial recognition to automated decision systems that inform who gets loans and who receives bail, we all now move through a world determined by data-empowered algorithms. These technologies didn’t just appear: they are part of a history that goes back centuries, from the census enshrined in the US Constitution to the birth of eugenics in Victorian Britain to the development of Google search.\n\nDJ Patil, former U.S. Chief Data Scientist, said of the book \"This is the first comprehensive look at the history of data and how power has played a critical role in shaping the history. It’s a must read for any data scientist about how we got here and what we need to do to ensure that data works for everyone.\"\n\nIf you’re a data scientist, machine learning engineer, or work with data in any way, it’s increasingly important to know more about the history and future of the work that you do and understand how your work impacts society and the world.\n\nAmong other things, they'll delve into\n\n\nthe history of human use of data;\nhow data are used to reveal insight and support decisions;\nhow data and data-powered algorithms shape, constrain, and manipulate our commercial, civic, and personal transactions and experiences; and\nhow exploration and analysis of data have become part of our logic and rhetoric of communication and persuasion.\n\n\nYou can also sign up for our next livestreamed podcast recording here! \n\nLINKS\n\n\nHow Data Happened, the book!\ndata: past, present, and future, the course\nRace After Technology, by Ruha Benjamin\nThe problem with metrics is a big problem for AI by Rachel Thomas\nVanishing Gradients on YouTube\n","content_html":"

Hugo speaks with Chris Wiggins (Columbia, NYTimes) and Matthew Jones (Princeton) about their recent book How Data Happened, and the Columbia course it expands upon, data: past, present, and future.

\n\n

Chris is an associate professor of applied mathematics at Columbia University and the New York Times’ chief data scientist, and Matthew is a professor of history at Princeton University and former Guggenheim Fellow.

\n\n

From facial recognition to automated decision systems that inform who gets loans and who receives bail, we all now move through a world determined by data-empowered algorithms. These technologies didn’t just appear: they are part of a history that goes back centuries, from the census enshrined in the US Constitution to the birth of eugenics in Victorian Britain to the development of Google search.

\n\n

DJ Patil, former U.S. Chief Data Scientist, said of the book "This is the first comprehensive look at the history of data and how power has played a critical role in shaping the history. It’s a must read for any data scientist about how we got here and what we need to do to ensure that data works for everyone."

\n\n

If you’re a data scientist, machine learning engineer, or work with data in any way, it’s increasingly important to know more about the history and future of the work that you do and understand how your work impacts society and the world.

\n\n

Among other things, they'll delve into

\n\n
the history of human use of data;
how data are used to reveal insight and support decisions;
how data and data-powered algorithms shape, constrain, and manipulate our commercial, civic, and personal transactions and experiences; and
how exploration and analysis of data have become part of our logic and rhetoric of communication and persuasion.
\n\n

You can also sign up for our next livestreamed podcast recording here!

\n\n

LINKS

\n\n","summary":"Hugo speaks with Chris Wiggins (Columbia, NYTimes) and Matthew Jones (Princeton) about their recent book How Data Happened, and the Columbia course it expands upon, data: past, present, and future.\r\n","date_published":"2023-10-05T15:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/3c0c5565-056f-45f4-a785-ec46800bb2cd.mp3","mime_type":"audio/mpeg","size_in_bytes":83201801,"duration_in_seconds":5199}]},{"id":"87376a4e-df73-494f-88ad-09d0313b95c6","title":"Episode 19: Privacy and Security in Data Science and Machine Learning","url":"https://vanishinggradients.fireside.fm/19","content_text":"Hugo speaks with Katharine Jarmul about privacy and security in data science and machine learning. Katharine is a Principal Data Scientist at Thoughtworks Germany focusing on privacy, ethics, and security for data science workflows. Previously, she has held numerous roles at large companies and startups in the US and Germany, implementing data processing and machine learning systems with a focus on reliability, testability, privacy, and security.\n\nIn this episode, Hugo and Katharine talk about\n\n\nWhat data privacy and security are, what they aren’t and the differences between them (hopefully dispelling common misconceptions along the way!);\nWhy you should care about them (hint: the answers will involve regulatory, ethical, risk, and organizational concerns);\nData governance, anonymization techniques, and privacy in data pipelines;\nPrivacy attacks!\nThe state of the art in privacy-aware machine learning and data science, including federated learning;\nWhat you need to know about the current state of regulation, including GDPR and CCPA…\n\n\nAnd much more, all the while grounding our conversation in real-world examples from data science, machine learning, business, and life!\n\nYou can also sign up for our next livestreamed podcast recording here! \n\nLINKS\n\n\nWin a copy of Practical Data Privacy, Katharine's new book!\nKatharine on twitter\nVanishing Gradients on YouTube\nProbably Private, a newsletter for privacy and data science enthusiasts\nProbably Private on YouTube\n","content_html":"

Hugo speaks with Katharine Jarmul about privacy and security in data science and machine learning. Katharine is a Principal Data Scientist at Thoughtworks Germany focusing on privacy, ethics, and security for data science workflows. Previously, she has held numerous roles at large companies and startups in the US and Germany, implementing data processing and machine learning systems with a focus on reliability, testability, privacy, and security.

\n\n

In this episode, Hugo and Katharine talk about

\n\n
What data privacy and security are, what they aren’t and the differences between them (hopefully dispelling common misconceptions along the way!);
Why you should care about them (hint: the answers will involve regulatory, ethical, risk, and organizational concerns);
Data governance, anonymization techniques, and privacy in data pipelines;
Privacy attacks!
The state of the art in privacy-aware machine learning and data science, including federated learning;
What you need to know about the current state of regulation, including GDPR and CCPA…
\n\n

And much more, all the while grounding our conversation in real-world examples from data science, machine learning, business, and life!

\n\n
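As one concrete taste of the anonymization toolbox mentioned above, here is a minimal Laplace-mechanism sketch (our own illustration, not code from the episode): to release a count with differential privacy, you add noise calibrated to how much one person can change the answer.

    # Differentially private count via the Laplace mechanism (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)

    def dp_count(records, epsilon):
        # A count has sensitivity 1: adding or removing one person changes
        # it by at most 1, so the noise scale is 1/epsilon.
        return len(records) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

    ages = [34, 29, 41, 52, 38]         # toy "private" dataset
    print(dp_count(ages, epsilon=0.5))  # noisy count, roughly 5 plus or minus a few

\n\n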

You can also sign up for our next livestreamed podcast recording here!

\n\n

LINKS

\n\n","summary":"Hugo speaks with Katharine Jarmul about privacy and security in data science and machine learning. Katharine is a Principal Data Scientist at Thoughtworks Germany focusing on privacy, ethics, and security for data science workflows.","date_published":"2023-08-15T03:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/87376a4e-df73-494f-88ad-09d0313b95c6.mp3","mime_type":"audio/mpeg","size_in_bytes":79998085,"duration_in_seconds":4999}]},{"id":"83afeb64-21ec-4828-bf96-75a08c710391","title":"Episode 18: Research Data Science in Biotech","url":"https://vanishinggradients.fireside.fm/18","content_text":"Hugo speaks with Eric Ma about Research Data Science in Biotech. Eric leads the Research team in the Data Science and Artificial Intelligence group at Moderna Therapeutics. Prior to that, he was part of a special ops data science team at the Novartis Institutes for Biomedical Research's Informatics department.\n\nIn this episode, Hugo and Eric talk about\n\n\n What tools and techniques they use for drug discovery (such as mRNA vaccines and medicines);\n The importance of machine learning, deep learning, and Bayesian inference;\n How to think more generally about such high-dimensional, multi-objective optimization problems;\n The importance of open-source software and Python;\n Institutional and cultural questions, including hiring and the trade-offs between being an individual contributor and a manager;\n How they’re approaching accelerating discovery science to the speed of thought using computation, data science, statistics, and ML.\n\n\nAnd as always, much, much more!\n\nLINKS\n\n\nEric's website\nEric on twitter\nVanishing Gradients on YouTube\nCell Biology by the Numbers by Ron Milo and Rob Phillips\nEric's JAX tutorials at PyCon and SciPy\nEric's blog post on Hiring data scientists at Moderna!\n","content_html":"

Hugo speaks with Eric Ma about Research Data Science in Biotech. Eric leads the Research team in the Data Science and Artificial Intelligence group at Moderna Therapeutics. Prior to that, he was part of a special ops data science team at the Novartis Institutes for Biomedical Research's Informatics department.

\n\n

In this episode, Hugo and Eric talk about

\n\n
What tools and techniques they use for drug discovery (such as mRNA vaccines and medicines);
The importance of machine learning, deep learning, and Bayesian inference;
How to think more generally about such high-dimensional, multi-objective optimization problems;
The importance of open-source software and Python;
Institutional and cultural questions, including hiring and the trade-offs between being an individual contributor and a manager;
How they’re approaching accelerating discovery science to the speed of thought using computation, data science, statistics, and ML.
\n\n

And as always, much, much more!

\n\n

LINKS

\n\n","summary":"Machine learning, deep learning, Bayesian inference for drug discovery, OSS, and accelerating discovery science to the speed of thought!","date_published":"2023-05-25T08:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/83afeb64-21ec-4828-bf96-75a08c710391.mp3","mime_type":"audio/mpeg","size_in_bytes":69807439,"duration_in_seconds":4362}]},{"id":"289285e2-f5aa-4900-a051-7b364f9d0bb6","title":"Episode 17: End-to-End Data Science","url":"https://vanishinggradients.fireside.fm/17","content_text":"Hugo speaks with Tanya Cashorali, a data scientist and consultant that helps businesses get the most out of data, about what end-to-end data science looks like across many industries, such as retail, defense, biotech, and sports, including\n\n\nscoping out projects,\nfiguring out the correct questions to ask,\nhow projects can change,\ndelivering on the promise,\nthe importance of rapid prototyping,\nwhat it means to put models in production, and\nhow to measure success.\n\n\nAnd much more, all the while grounding their conversation in real-world examples from data science, business, and life.\n\nIn a world where most organizations think they need AI and yet 10-15% of data science actually involves model building, it’s time to get real about how data science and machine learning actually deliver value!\n\nLINKS\n\n\nTanya on Twitter\nVanishing Gradients on YouTube\nSaving millions with a Shiny app | Data Science Hangout with Tanya Cashorali\nOur next livestream: Research Data Science in Biotech with Eric Ma\n","content_html":"

Hugo speaks with Tanya Cashorali, a data scientist and consultant who helps businesses get the most out of data, about what end-to-end data science looks like across many industries, such as retail, defense, biotech, and sports, including

\n\n
scoping out projects,
figuring out the correct questions to ask,
how projects can change,
delivering on the promise,
the importance of rapid prototyping,
what it means to put models in production, and
how to measure success.
\n\n

And much more, all the while grounding their conversation in real-world examples from data science, business, and life.

\n\n

In a world where most organizations think they need AI, and yet only 10-15% of data science work actually involves model building, it’s time to get real about how data science and machine learning actually deliver value!

\n\n

LINKS

\n\n","summary":"It’s time to get real about how data science and machine learning actually deliver value! Hugo speaks with Tanya Cashorali, a data scientist and consultant that helps businesses get the most out of data, about what end-to-end data science looks like across many industries, such as retail, defense, biotech, and sports.","date_published":"2023-02-17T17:30:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/289285e2-f5aa-4900-a051-7b364f9d0bb6.mp3","mime_type":"audio/mpeg","size_in_bytes":73030076,"duration_in_seconds":4564}]},{"id":"9eb29a37-c694-45a8-bae5-38e5b3fd5849","title":"Episode 16: Data Science and Decision Making Under Uncertainty","url":"https://vanishinggradients.fireside.fm/16","content_text":"Hugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about decision making under uncertainty and how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world. \n\nThis is part 2 of a two part conversation in which we delve into decision making under uncertainty. Feel free to check out part 1 here but this episode should also stand alone.\n\nWhy am I speaking to JD about all of this? Because not only is he a wild conversationalist with a real knack for explaining hard to grok concepts with illustrative examples and useful stories, but he has worked for many years in re-insurance, that’s right, not insurance but re-insurance – these are the people who insure the insurers so if anyone can actually tell us about risk and uncertainty in decision making, it’s him!\n\nIn part 1, we discussed risk, uncertainty, probabilistic thinking, and simulation, all with a view towards improving decision making.\n\nIn this, part 2, we discuss the ins and outs of decision making under uncertainty, including\n\n\nHow data science can be more tightly coupled with the decision function in organisations;\nSome common mistakes and failure modes of making decisions under uncertainty;\nHeuristics for principled decision-making in data science;\nThe intersection of model building, storytelling, and cognitive biases to keep in mind;\n\n\nAs JD says, and I paraphrase, “You may think you train your models, but your models are really training you.”\n\nLinks\n\n\nVanishing Gradients' new YouTube channel!\nJD on twitter\nExecutive Data Science, episode 5 of Vanishing Gradients, in which Jim Savage and Hugo talk through decision making and why you should always be integrating your loss function over your posterior\nFooled by Randomness by Nassim Taleb\nSuperforecasting: The Art and Science of Prediction Philip E. Tetlock and Dan Gardner\nThinking in Bets by Annie Duke\nThe Signal and the Noise: Why So Many Predictions Fail by Nate Silver\nThinking, Fast and Slow by Daniel Kahneman\n","content_html":"

Hugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about decision making under uncertainty and how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world.

\n\n

This is part 2 of a two-part conversation in which we delve into decision making under uncertainty. Feel free to check out part 1 here, but this episode should also stand alone.

\n\n

Why am I speaking to JD about all of this? Because not only is he a wild conversationalist with a real knack for explaining hard-to-grok concepts with illustrative examples and useful stories, but he has also worked for many years in reinsurance. That’s right, not insurance but reinsurance: these are the people who insure the insurers, so if anyone can actually tell us about risk and uncertainty in decision making, it’s him!

\n\n

In part 1, we discussed risk, uncertainty, probabilistic thinking, and simulation, all with a view towards improving decision making.

\n\n

In this, part 2, we discuss the ins and outs of decision making under uncertainty, including

\n\n
How data science can be more tightly coupled with the decision function in organisations;
Some common mistakes and failure modes of making decisions under uncertainty;
Heuristics for principled decision-making in data science;
The intersection of model building, storytelling, and cognitive biases to keep in mind;
\n\n

As JD says, and I paraphrase, “You may think you train your models, but your models are really training you.”

\n\n

Links

\n\n","summary":"Hugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about data science, ML, and the nitty gritty of decision making under uncertainty, including how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world. ","date_published":"2022-12-15T08:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/9eb29a37-c694-45a8-bae5-38e5b3fd5849.mp3","mime_type":"audio/mpeg","size_in_bytes":59947028,"duration_in_seconds":4995}]},{"id":"c2e27880-6d10-4b0b-afd7-e349d219662a","title":"Episode 15: Uncertainty, Risk, and Simulation in Data Science","url":"https://vanishinggradients.fireside.fm/15","content_text":"Hugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about decision making under uncertainty and how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world. \n\nThis is part 1 of a two part conversation. In this, part 1, we discuss risk, uncertainty, probabilistic thinking, and simulation, all with a view towards improving decision making and we draw on examples from our personal lives, the pandemic, our jobs, the reinsurance space, and the corporate world. In part 2, we’ll get into the nitty gritty of decision making under uncertainty.\n\nAs JD says, and I paraphrase, “You may think you train your models, but your models are really training you.”\n\nLinks\n\n\nVanishing Gradients' new YouTube channel!\nJD on twitter\nExecutive Data Science, episode 5 of Vanishing Gradients, in which Jim Savage and Hugo talk through decision making and why you should always be integrating your loss function over your posterior\nFooled by Randomness by Nassim Taleb\nSuperforecasting: The Art and Science of Prediction Philip E. Tetlock and Dan Gardner\nThinking in Bets by Annie Duke\nThe Signal and the Noise: Why So Many Predictions Fail by Nate Silver\nThinking, Fast and Slow by Daniel Kahneman\n\n","content_html":"

Hugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about decision making under uncertainty and how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world.

\n\n

This is part 1 of a two-part conversation. In this, part 1, we discuss risk, uncertainty, probabilistic thinking, and simulation, all with a view towards improving decision making, and we draw on examples from our personal lives, the pandemic, our jobs, the reinsurance space, and the corporate world. In part 2, we’ll get into the nitty gritty of decision making under uncertainty.

\n\n
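For a feel of what simulation in service of a decision can look like, here is a minimal Monte Carlo sketch with made-up numbers (our own illustration, not something from the episode): simulate an uncertain loss many times and compare two decisions.

    # Monte Carlo decision sketch: insure or not, under an uncertain loss.
    import numpy as np

    rng = np.random.default_rng(42)
    n_sims = 100_000

    # Made-up world: a 2% chance each year of a $50,000 loss.
    loss = np.where(rng.random(n_sims) < 0.02, 50_000.0, 0.0)
    premium = 1_500.0  # made-up annual insurance premium

    print(f"Expected cost, insured:   ${premium:,.0f}")
    print(f"Expected cost, uninsured: ${loss.mean():,.0f}")
    # Expected cost alone favors going uninsured (~$1,000 < $1,500), but the
    # uninsured tail is brutal; a risk-averse loss function applied to these
    # same simulated outcomes can easily flip the decision.

\n\n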

As JD says, and I paraphrase, “You may think you train your models, but your models are really training you.”

\n\n

Links

\n\n","summary":"Hugo speaks with JD Long, agricultural economist, quant, and stochastic modeler, about decision making under uncertainty and how we can use our knowledge of risk, uncertainty, probabilistic thinking, causal inference, and more to help us use data science and machine learning to make better decisions in an uncertain world.","date_published":"2022-12-08T05:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/c2e27880-6d10-4b0b-afd7-e349d219662a.mp3","mime_type":"audio/mpeg","size_in_bytes":38526097,"duration_in_seconds":3210}]},{"id":"c02c6e9f-2a38-4f03-a8f5-4b19ed8966c3","title":"Episode 14: Decision Science, MLOps, and Machine Learning Everywhere","url":"https://vanishinggradients.fireside.fm/14","content_text":"Hugo Bowne-Anderson, host of Vanishing Gradients, reads 3 audio essays about decision science, MLOps, and what happens when machine learning models are everywhere.\n\nLinks\n\n\nOur upcoming Vanishing Gradients live recording of Data Science and Decision Making Under Uncertainty with Hugo and JD Long!\nDecision-Making in a Time of Crisis by Hugo Bowne-Anderson\nMLOps and DevOps: Why Data Makes It Different by Ville Tuulos and Hugo Bowne-Anderson\nThe above essay syndicated on VentureBeat\nWhen models are everywhere by Hugo Bowne-Anderson and Mike Loukides\n","content_html":"

Hugo Bowne-Anderson, host of Vanishing Gradients, reads 3 audio essays about decision science, MLOps, and what happens when machine learning models are everywhere.

\n\n

Links

\n\n","summary":"Hugo reads 3 audio essays about decision science, MLOps, and what happens when machine learning models are everywhere","date_published":"2022-11-21T10:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/c02c6e9f-2a38-4f03-a8f5-4b19ed8966c3.mp3","mime_type":"audio/mpeg","size_in_bytes":66269255,"duration_in_seconds":4141}]},{"id":"0d9dafd4-c27b-4e49-9431-58c70de4d82d","title":"Episode 13: The Data Science Skills Gap, Economics, and Public Health","url":"https://vanishinggradients.fireside.fm/13","content_text":"Hugo speak with Norma Padron about data science education and continuous learning for people working in healthcare, broadly construed, along with how we can think about the democratization of data science skills more generally.\n\nNorma is CEO of EmpiricaLab, where her team‘s mission is to bridge work and training and empower healthcare teams to focus on what they care about the most: patient care. In a word, EmpiricaLab is a platform focused on peer learning and last-mile training for healthcare teams.\n\nAs you’ll discover, Norma’s background is fascinating: with a Ph.D. in health policy and management from Yale University, a master's degree in economics from Duke University (among other things), and then working with multiple early stage digital health companies to accelerate their growth and scale, this is a wide ranging conversation about how and where learning actually occurs, particularly with respect to data science; we talk about how the worlds of economics and econometrics, including causal inference, can be used to make data science and more robust and less fragile field, and why these disciplines are essential to both public and health policy. It was really invigorating to talk about the data skills gaps that exists in organizations and how Norma’s team at Empiricalab is thinking about solving it in the health space using a 3 tiered solution of content creation, a social layer, and an information discovery platform. \n\nAll of this in service of a key question we’re facing in this field: how do you get the right data skills, tools, and workflows, in the hands of the people who need them, when the space is evolving so quickly?\n\nLinks\n\n\nNorma's website\nEmpiricaLab\nNorma on twitter\n","content_html":"

Hugo speaks with Norma Padron about data science education and continuous learning for people working in healthcare, broadly construed, along with how we can think about the democratization of data science skills more generally.

\n\n

Norma is CEO of EmpiricaLab, where her team’s mission is to bridge work and training and empower healthcare teams to focus on what they care about the most: patient care. In a word, EmpiricaLab is a platform focused on peer learning and last-mile training for healthcare teams.

\n\n

As you’ll discover, Norma’s background is fascinating: with a Ph.D. in health policy and management from Yale University, a master's degree in economics from Duke University (among other things), and years spent working with multiple early-stage digital health companies to accelerate their growth and scale, this is a wide-ranging conversation about how and where learning actually occurs, particularly with respect to data science. We talk about how the worlds of economics and econometrics, including causal inference, can be used to make data science a more robust and less fragile field, and why these disciplines are essential to both public and health policy. It was really invigorating to talk about the data skills gaps that exist in organizations and how Norma’s team at EmpiricaLab is thinking about solving them in the health space using a three-tiered solution of content creation, a social layer, and an information discovery platform.

\n\n

All of this is in service of a key question we’re facing in this field: how do you get the right data skills, tools, and workflows into the hands of the people who need them when the space is evolving so quickly?

\n\n

Links

\n\n","summary":"Hugo speaks with Norma Padron, CEO of EmpiricaLab, about data science education and continuous learning for people working in healthcare, broadly construed, along with how we can think about the democratization of data science skills more generally.","date_published":"2022-10-12T09:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/0d9dafd4-c27b-4e49-9431-58c70de4d82d.mp3","mime_type":"audio/mpeg","size_in_bytes":59542966,"duration_in_seconds":4961}]},{"id":"edfe9061-d42f-4c7d-b0af-e769252ae94e","title":"Episode 12: Data Science for Social Media: Twitter and Reddit","url":"https://vanishinggradients.fireside.fm/12","content_text":"Hugo speakswith Katie Bauer about her time working in data science at both Twitter and Reddit. At the time of recording, Katie was a data science manager at Twitter and prior to that, a founding member of the data team at Reddit. She’s now Head of Data Science at Gloss Genius so congrats on the new job, Katie!\n\nIn this conversation, we dive into what type of challenges social media companies face that data science is equipped to solve: in doing so, we traverse \n\n\nthe difference and similarities in companies such as Twitter and Reddit, \nthe major differences in being an early member of a data team and joining an established data function at a larger organization, \nthe supreme importance of robust measurement and telemetry in data science, along with \nthe mixed incentives for career data scientists, such as building flashy new things instead of maintaining existing infrastructure.\n\n\nI’ve always found conversations with Katie to be a treasure trove of insights into data science and machine learning practice, along with key learnings about data science management. \n\nIn a word, Katie helps me to understand our space better. In this conversation, she told me that one important function data science can serve in any organization is creating a shared context for lots of different people in the org. We dive deep into what this actually means, how it can play out, traversing the world of dashboards, metric stores, feature stores, machine learning products, the need for top-down support, and much, much more.","content_html":"

Hugo speaks with Katie Bauer about her time working in data science at both Twitter and Reddit. At the time of recording, Katie was a data science manager at Twitter and, prior to that, a founding member of the data team at Reddit. She’s now Head of Data Science at Gloss Genius, so congrats on the new job, Katie!

\n\n

In this conversation, we dive into what type of challenges social media companies face that data science is equipped to solve: in doing so, we traverse

\n\n
the differences and similarities in companies such as Twitter and Reddit,
the major differences in being an early member of a data team and joining an established data function at a larger organization,
the supreme importance of robust measurement and telemetry in data science, along with
the mixed incentives for career data scientists, such as building flashy new things instead of maintaining existing infrastructure.
\n\n

I’ve always found conversations with Katie to be a treasure trove of insights into data science and machine learning practice, along with key learnings about data science management.

\n\n

In a word, Katie helps me to understand our space better. In this conversation, she told me that one important function data science can serve in any organization is creating a shared context for lots of different people in the org. We dive deep into what this actually means, how it can play out, traversing the world of dashboards, metric stores, feature stores, machine learning products, the need for top-down support, and much, much more.

","summary":"Hugo speaks with Katie Bauer about her time working in data science at both Twitter and Reddit. At the time of recording, Katie was a data science manager at Twitter and prior to that, a founding member of the data team at Reddit. ","date_published":"2022-09-30T10:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/edfe9061-d42f-4c7d-b0af-e769252ae94e.mp3","mime_type":"audio/mpeg","size_in_bytes":89041208,"duration_in_seconds":5565}]},{"id":"697e817a-b886-4057-9dc1-4c9868c0b064","title":"Episode 11: Data Science: The Great Stagnation","url":"https://vanishinggradients.fireside.fm/11","content_text":"Hugo speaks with Mark Saroufim, an Applied AI Engineer at Meta who works on PyTorch where his team’s main focus is making it as easy as possible for people to deploy PyTorch in production outside Meta. \n\nMark first came on our radar with an essay he wrote called Machine Learning: the Great Stagnation, which was concerned with the stagnation in machine learning in academic research and in which he stated\n\n\nMachine learning researchers can now engage in risk-free, high-income, high-prestige work. They are today’s Medieval Catholic priests.\n\n\nThis is just the tip of the icebergs of Mark’s critical and often sociological eye and one of the reasons I was excited to speak with him.\n\nIn this conversation, we talk about the importance of open source software in modern data science and machine learning and how Mark thinks about making it as easy to use as possible. We also talk about risk assessments in considering whether to adopt open source or not, the supreme importance of good documentation, and what we can learn from the world of video game development when thinking about open source.\n\nWe then dive into the rise of the machine learning cult leader persona, in the context of examples such as Hugging Face and the community they’ve built. We discuss the role of marketing in open source tooling, along with for profit data science and ML tooling, how it can impact you as an end user, and how much of data science can be considered differing forms of live action role playing and simulation.\n\nWe also talk about developer marketing and content for data professionals and how we see some of the largest names in ML researchers being those that have gigantic Twitter followers, such as Andrei Karpathy. This is part of a broader trend in society about the skills that are required to capture significant mind share these days.\n\nIf that’s not enough, we jump into how machine learning ideally allows businesses to build sustainable and defensible moats, by which we mean the ability to maintain competitive advantages over competitors to retain market share.\n\nIn between this interview and its release, PyTorch joined the Linux Foundation, which is something we’ll need to get Mark back to discuss sometime.\n\nLinks\n\n\nThe Myth of Objective Tech Screens\nMachine Learning: The Great Stagnation\nFear the Boom and Bust: Keynes vs. Hayek - The Original Economics Rap Battle!\nHistory and the Security of Property by Nick Szabo\nMark on YouTube\nMark's Substack\nMark's Discord\n","content_html":"

Hugo speaks with Mark Saroufim, an Applied AI Engineer at Meta who works on PyTorch where his team’s main focus is making it as easy as possible for people to deploy PyTorch in production outside Meta.

\n\n

Mark first came on our radar with an essay he wrote called Machine Learning: the Great Stagnation, which was concerned with the stagnation in machine learning in academic research and in which he stated

\n\n
\n

Machine learning researchers can now engage in risk-free, high-income, high-prestige work. They are today’s Medieval Catholic priests.

\n
\n\n

This is just the tip of the iceberg of Mark’s critical and often sociological eye, and one of the reasons I was excited to speak with him.

\n\n

In this conversation, we talk about the importance of open source software in modern data science and machine learning and how Mark thinks about making it as easy to use as possible. We also talk about risk assessments in considering whether to adopt open source or not, the supreme importance of good documentation, and what we can learn from the world of video game development when thinking about open source.

\n\n

We then dive into the rise of the machine learning cult leader persona, in the context of examples such as Hugging Face and the community they’ve built. We discuss the role of marketing in open source tooling, along with for-profit data science and ML tooling, how it can impact you as an end user, and how much of data science can be considered differing forms of live-action role playing and simulation.

\n\n

We also talk about developer marketing and content for data professionals, and how some of the biggest names in ML research are those with gigantic Twitter followings, such as Andrej Karpathy. This is part of a broader trend in society about the skills required to capture significant mind share these days.

\n\n

If that’s not enough, we jump into how machine learning ideally allows businesses to build sustainable and defensible moats, by which we mean the ability to maintain competitive advantages over competitors in order to retain market share.

\n\n

In between this interview and its release, PyTorch joined the Linux Foundation, which is something we’ll need to get Mark back to discuss sometime.

\n\n

Links

\n\n","summary":"Hugo speaks with Mark Saroufim, an Applied AI Engineer at Meta who works on PyTorch where his team’s main focus is making it as easy as possible for people to deploy PyTorch in production outside Meta. ","date_published":"2022-09-16T12:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/697e817a-b886-4057-9dc1-4c9868c0b064.mp3","mime_type":"audio/mpeg","size_in_bytes":101417351,"duration_in_seconds":6338}]},{"id":"4552d501-5bc5-43c9-9246-5dbd221ebd06","title":"Episode 10: Investing in Machine Learning","url":"https://vanishinggradients.fireside.fm/10","content_text":"Hugo speaks with Sarah Catanzaro, General Partner at Amplify Partners, about investing in data science and machine learning tooling and where we see progress happening in the space.\n\nSarah invests in the tools that we both wish we had earlier in our careers: tools that enable data scientists and machine learners to collect, store, manage, analyze, and model data more effectively. As you’ll discover, Sarah identifies as a scientist first and an investor second and still believes that her mission is to enable companies to become data-driven and to generate ROI through machine and statistical learning. In her words, she’s still that cuckoo kid who’s ranting and raving about how data and AI will shift every tide.\n\nIn this conversation, we talk about what scientific inquiry actually is and the elements of playfulness and seriousness it necessarily involves, and how it can be used to generate business value. We talk about Sarah’s unorthodox path from a data scientist working in defense to her time at Palantir and how that led her to build out a data team and function for a venture capital firm and then to becoming a VC in the data tooling space.\n\nWe then really dive into the data science and machine learning tooling space to figure out why it’s so fragmented: we look to the data analytics stack and software engineering communities to find historical tethers that may be useful. We discuss the moving parts that led to the establishment of a standard, a system of record, and clearly defined roles in analytics and what we can learn from that for machine learning!\n\nWe also dive into the development of tools, workflows, and division of labour as partial exercises in pattern recognition and how this can be at odds with the variance we see in the machine learning landscape, more generally!\n\nTwo take-aways are that we need best practices and we need more standardization.\n\nWe also discussed that, with all our focus and conversations on tools, what conversation we’re missing and Sarah was adamant that we need to be focusing on questions, not solutions, and even questioning what ML is useful for and what it isn’t, diving into a bunch of thoughtful and nuanced examples.\n\nI’m also grateful that Sarah let me take her down a slightly dangerous and self-critical path where we riffed on both our roles in potentially contributing to the tragedy of commons we’re all experiencing in the data tooling landscape, me working in tool building, developer relations, and in marketing, and Sarah in venture capital. ","content_html":"

Hugo speaks with Sarah Catanzaro, General Partner at Amplify Partners, about investing in data science and machine learning tooling and where we see progress happening in the space.

\n\n

Sarah invests in the tools that we both wish we had earlier in our careers: tools that enable data scientists and machine learners to collect, store, manage, analyze, and model data more effectively. As you’ll discover, Sarah identifies as a scientist first and an investor second and still believes that her mission is to enable companies to become data-driven and to generate ROI through machine and statistical learning. In her words, she’s still that cuckoo kid who’s ranting and raving about how data and AI will shift every tide.

\n\n

In this conversation, we talk about what scientific inquiry actually is and the elements of playfulness and seriousness it necessarily involves, and how it can be used to generate business value. We talk about Sarah’s unorthodox path from a data scientist working in defense to her time at Palantir and how that led her to build out a data team and function for a venture capital firm and then to becoming a VC in the data tooling space.

\n\n

We then really dive into the data science and machine learning tooling space to figure out why it’s so fragmented: we look to the data analytics stack and software engineering communities to find historical tethers that may be useful. We discuss the moving parts that led to the establishment of a standard, a system of record, and clearly defined roles in analytics and what we can learn from that for machine learning!

\n\n

We also dive into the development of tools, workflows, and division of labour as partial exercises in pattern recognition and how this can be at odds with the variance we see in the machine learning landscape, more generally!

\n\n

Two take-aways are that we need best practices and we need more standardization.

\n\n

We also discussed, with all our focus and conversations on tools, what conversations we might be missing, and Sarah was adamant that we need to be focusing on questions, not solutions, and even questioning what ML is useful for and what it isn’t, diving into a bunch of thoughtful and nuanced examples.

\n\n

I’m also grateful that Sarah let me take her down a slightly dangerous and self-critical path where we riffed on both our roles in potentially contributing to the tragedy of the commons we’re all experiencing in the data tooling landscape: me working in tool building, developer relations, and marketing, and Sarah in venture capital.

","summary":"Hugo speaks with Sarah Catanzaro, General Partner at Amplify Partners, about investing in data science and machine learning tooling and where we see progress happening in the space.","date_published":"2022-08-19T01:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/4552d501-5bc5-43c9-9246-5dbd221ebd06.mp3","mime_type":"audio/mpeg","size_in_bytes":83101043,"duration_in_seconds":5193}]},{"id":"86c9a94f-4c33-40a8-aa83-50a9e125484b","title":"9: AutoML, Literate Programming, and Data Tooling Cargo Cults","url":"https://vanishinggradients.fireside.fm/9","content_text":"Hugo speaks with Hamel Husain, Head of Data Science at Outerbounds, with extensive experience in data science consulting, at DataRobot, Airbnb, and Github.\n\nIn this conversation, they talk about Hamel's early days in data science, consulting for a wide array of companies, such as Crocs, restaurants, and casinos in Las Vegas, diving into what data science even looked like in 2005 and how you could think about delivering business value using data and analytics back then.\n\nThey talk about his trajectory in moving to data science and machine learning in Silicon Valley, what his expectations were, and what he actually found there.\n\nThey then take a dive into AutoML, discussing what should be automated in Machine learning and what shouldn’t. They talk about software engineering best practices and what aspects it would be useful for data scientists to know about.\n\nThey also got to talk about the importance of literate programming, notebooks, and documentation in data science and ML. All this and more!\n\nLinks\n\n\nHamel on twitter\nThe Outerbounds documentation project repo\nPractical Advice for R in Production\nnbdev: Create delightful python projects using Jupyter Notebooks\n","content_html":"

Hugo speaks with Hamel Husain, Head of Data Science at Outerbounds, who has extensive experience in data science consulting and at DataRobot, Airbnb, and GitHub.

\n\n

In this conversation, they talk about Hamel's early days in data science, consulting for a wide array of companies, such as Crocs, restaurants, and casinos in Las Vegas, diving into what data science even looked like in 2005 and how you could think about delivering business value using data and analytics back then.

\n\n

They talk about his trajectory in moving to data science and machine learning in Silicon Valley, what his expectations were, and what he actually found there.

\n\n

They then take a dive into AutoML, discussing what should be automated in machine learning and what shouldn’t. They talk about software engineering best practices and which aspects would be useful for data scientists to know about.

\n\n

They also got to talk about the importance of literate programming, notebooks, and documentation in data science and ML. All this and more!

\n\n

Links

\n\n","summary":"Hugo speaks with Hamel Husain, Head of Data Science at Outerbounds, with extensive experience in data science consulting, at DataRobot, Airbnb, and Github.","date_published":"2022-07-19T23:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/86c9a94f-4c33-40a8-aa83-50a9e125484b.mp3","mime_type":"audio/mpeg","size_in_bytes":97642250,"duration_in_seconds":6102}]},{"id":"fe4aec2a-6f67-4259-ae88-6baefd6f008e","title":"Episode 8: The Open Source Cybernetic Revolution","url":"https://vanishinggradients.fireside.fm/8","content_text":"Hugo speaks with Peter Wang, CEO of Anaconda, about what the value proposition of data science actually is, data not as the new oil, but rather data as toxic, nuclear sludge, the fact that data isn’t real (and what we really have are frozen models), and the future promise of data science.\n\nThey also dive into an experimental conversation around open source software development as a model for the development of human civilization, in the context of developing systems that prize local generativity over global extractive principles. If that’s a mouthful, which it was, or an earful, which it may have been, all will be revealed in the conversation.\n\nLInks\n\n\nPeter on twitter\nAnaconda Nucleus\nJordan Hall on the Jim Rutt Show: Game B\nMeditations On Moloch -- On multipolar traps\nHere Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky\nFinite and Infinite Games by James Carse\nGoverning the Commons: The Evolution of Institutions for Collective Action by Elinor Olstrom\nElinor Ostrom's 8 Principles for Managing A Commmons\nHaunted by Data, a beautiful and mesmerising talk by Pinboard.in founder Maciej Ceglowski\n","content_html":"

Hugo speaks with Peter Wang, CEO of Anaconda, about what the value proposition of data science actually is, data not as the new oil, but rather data as toxic, nuclear sludge, the fact that data isn’t real (and what we really have are frozen models), and the future promise of data science.

\n\n

They also dive into an experimental conversation around open source software development as a model for the development of human civilization, in the context of developing systems that prize local generativity over global extractive principles. If that’s a mouthful, which it was, or an earful, which it may have been, all will be revealed in the conversation.

\n\n

Links

\n\n","summary":"Hugo speaks with Peter Wang, CEO of Anaconda, about what the value proposition of data science actually is, data not as the new oil, but rather data as toxic, nuclear sludge, the fact that data isn’t real (and what we really have are frozen models), and the future promise of data science, Gifting economies with finite game economics thrust onto them.\r\n\r\nThey also dive into an experimental conversation around open source software development as a model for the development of human civilization, in the context of developing systems that prize local generativity over global extractive principles. If that’s a mouthful, which it was, or an earful, which it may have been, all will be revealed in the conversation.\r\n","date_published":"2022-05-16T15:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/fe4aec2a-6f67-4259-ae88-6baefd6f008e.mp3","mime_type":"audio/mpeg","size_in_bytes":63326903,"duration_in_seconds":3957}]},{"id":"da4fab18-c5fa-460d-9ddf-0c8f1e60f3f8","title":"Episode 7: The Evolution of Python for Data Science","url":"https://vanishinggradients.fireside.fm/7","content_text":"Hugo speaks with Peter Wang, CEO of Anaconda, about how Python became so big in data science, machine learning, and AI. They jump into many of the technical and sociological beginnings of Python being used for data science, a history of PyData, the conda distribution, and NUMFOCUS.\n\nThey also talk about the emergence of online collaborative environments, particularly with respect to open source, and attempt to figure out the movings parts of PyData and why it has had the impact it has, including the fact that many core developers were not computer scientists or software engineers, but rather scientists and researchers building tools that they needed on an as-needed basis\n\nThey also discuss the challenges in getting adoption for Python and the things that the PyData stack solves, those that it doesn’t and what progress is being made there.\n\nPeople who have listened to Hugo podcast for some time may have recognized that he's interested in the sociology of the data science space and he really considered speaking with Peter a fascinating opportunity to delve into how the Pythonic data science space evolved, particularly with respect to tooling, not only because Peter had a front row seat for much of it, but that he was one of several key actors at various different points. On top of this, Hugo wanted to allow Peter’s inner sociologist room to breathe and evolve in this conversation. \n\nWhat happens then is slightly experimental – Peter is a deep, broad, and occasionally hallucinatory thinker and Hugo wanted to explore new spaces with him so we hope you enjoy the experiments they play as they begin to discuss open-source software in the broader context of finite and infinite games and how OSS is a paradigm of humanity’s ability to create generative, nourishing and anti-rivlarous systems where, by anti-rivalrous, we mean things that become more valuable for everyone the more people use them! 
But we need to be mindful of finite-game dynamics (for example, those driven by corporate incentives) co-opting and parasitizing the generative systems that we build.\n\nThese are all considerations they delve far deeper into in Part 2 of this interview, which will be the next episode of VG, where we also dive into the relationship between OSS, tools, and venture capital, amonh many others things.\n\nLInks\n\n\nPeter on twitter\nAnaconda Nucleus\nCalling out SciPy on diversity (even though it hurts) by Juan Nunez-Iglesias\nHere Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky\nFinite and Infinite Games by James Carse\nGoverning the Commons: The Evolution of Institutions for Collective Action by Elinor Olstrom\nElinor Ostrom's 8 Principles for Managing A Commmons\n","content_html":"

Hugo speaks with Peter Wang, CEO of Anaconda, about how Python became so big in data science, machine learning, and AI. They jump into many of the technical and sociological beginnings of Python being used for data science, a history of PyData, the conda distribution, and NUMFOCUS.

\n\n

They also talk about the emergence of online collaborative environments, particularly with respect to open source, and attempt to figure out the moving parts of PyData and why it has had the impact it has, including the fact that many core developers were not computer scientists or software engineers, but rather scientists and researchers building tools that they needed on an as-needed basis.

\n\n

They also discuss the challenges in getting adoption for Python and the things that the PyData stack solves, those that it doesn’t, and what progress is being made there.

\n\n

People who have listened to Hugo's podcast for some time may have recognized that he's interested in the sociology of the data science space and he really considered speaking with Peter a fascinating opportunity to delve into how the Pythonic data science space evolved, particularly with respect to tooling, not only because Peter had a front row seat for much of it, but also because he was one of several key actors at various points. On top of this, Hugo wanted to allow Peter’s inner sociologist room to breathe and evolve in this conversation.

\n\n

What happens then is slightly experimental – Peter is a deep, broad, and occasionally hallucinatory thinker, and Hugo wanted to explore new spaces with him, so we hope you enjoy the experiments they play as they begin to discuss open-source software in the broader context of finite and infinite games and how OSS is a paradigm of humanity’s ability to create generative, nourishing, and anti-rivalrous systems where, by anti-rivalrous, we mean things that become more valuable for everyone the more people use them! But we need to be mindful of finite-game dynamics (for example, those driven by corporate incentives) co-opting and parasitizing the generative systems that we build.

\n\n

These are all considerations they delve far deeper into in Part 2 of this interview, which will be the next episode of VG, where we also dive into the relationship between OSS, tools, and venture capital, among many other things.

\n\n

Links

\n\n","summary":"Hugo speaks with Peter Wang, CEO of Anaconda, about how Python became so big in data science, machine learning, and AI. They jump into many of the technical and sociological beginnings of Python being used for data science, a history of PyData, the conda distribution, and NUMFOCUS.\r\n","date_published":"2022-05-02T06:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/da4fab18-c5fa-460d-9ddf-0c8f1e60f3f8.mp3","mime_type":"audio/mpeg","size_in_bytes":60022178,"duration_in_seconds":3751}]},{"id":"811a664b-7b02-45b1-8cd7-84155bf4e39d","title":"Episode 6: Bullshit Jobs in Data Science (and what to do about them)","url":"https://vanishinggradients.fireside.fm/6","content_text":"Hugo speaks with Jacqueline Nolis, Chief Product Officer at Saturn Cloud (formerly Head of Data Science), about all types of failure modes in data science, ML, and AI, and they delve into bullshit jobs in data science (yes, that’s a technical term, as you’ll find out) –they discuss the elements that are bullshit, the elements that aren’t, and how to increase the ratio of the latter to the former.\n\nThey also talk about her journey in moving from mainly working in prescriptive analytics building reports in PDFs and power points to deploying machine learning products in production. They delve into her motion from doing data science to designing products for data scientists and how to think about choosing career paths. Jacqueline has been an individual contributor, a team lead, and a principal data scientist so has a lot of valuable experience here. They talk about her experience of transitioning gender while working in data science and they work hard to find a bright vision for the future of this industry!\n\nLinks\n\n\nJacqueline on twitter\nBuilding a Career in Data Science by Jacqueline and Emily Robinson\nSaturn Cloud\nWhy are we so surprised?, a post by Allen Downey on communicating and thinking through uncertainty\nData Mishaps Night!\nThe Trump administration’s “cubic model” of coronavirus deaths, explained by Matthew Yglesias\nWorking Class Deep Learner by Mark Saroufim\n","content_html":"

Hugo speaks with Jacqueline Nolis, Chief Product Officer at Saturn Cloud (formerly Head of Data Science), about all types of failure modes in data science, ML, and AI, and they delve into bullshit jobs in data science (yes, that’s a technical term, as you’ll find out) – they discuss the elements that are bullshit, the elements that aren’t, and how to increase the ratio of the latter to the former.

\n\n

They also talk about her journey in moving from mainly working in prescriptive analytics, building reports in PDFs and PowerPoint decks, to deploying machine learning products in production. They delve into her move from doing data science to designing products for data scientists and how to think about choosing career paths. Jacqueline has been an individual contributor, a team lead, and a principal data scientist, so she has a lot of valuable experience here. They talk about her experience of transitioning gender while working in data science and they work hard to find a bright vision for the future of this industry!

\n\n

Links

\n\n","summary":"Hugo speaks with Jacqueline Nolis, Chief Product Officer at Saturn Cloud (formerly Head of Data Science), about all types of failure modes in data science, ML, and AI, and they delve into bullshit jobs in data science (yes, that’s a technical term, as you’ll find out) –they discuss the elements that are bullshit, the elements that aren’t, and how to increase the ratio of the latter to the former.\r\n","date_published":"2022-04-05T07:00:00.000+10:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/811a664b-7b02-45b1-8cd7-84155bf4e39d.mp3","mime_type":"audio/mpeg","size_in_bytes":83542646,"duration_in_seconds":5221}]},{"id":"9078010f-454b-4bcf-bafc-f54f44e04868","title":"Episode 5: Executive Data Science","url":"https://vanishinggradients.fireside.fm/5","content_text":"Hugo speaks with Jim Savage, the Director of Data Science at Schmidt Futures, about the need for data science in executive training and decision, what data scientists can learn from economists, the perils of \"data for good\", and why you should always be integrating your loss function over your posterior.\n\nJim and Hugo talk about what data science is and isn’t capable of, what can actually deliver value, and what people really enjoy doing: the intersection in this Venn diagram is where we need to focus energy and it may not be quite what you think it is!\n\nThey then dive into Jim's thoughts on what he dubs Executive Data Science. You may be aware of the slicing of the data science and machine learning spaces into descriptive analytics, predictive analytics, and prescriptive analytics but, being the thought surgeon that he is, Jim proposes a different slicing into \n\n(1) tool building OR data science as a product, \n\n(2) tools to automate and augment parts of us, and \n\n(3) what Jim calls Executive Data Science.\n\nJim and Hugo also talk about decision theory, the woeful state of causal inference techniques in contemporary data science, and what techniques it would behoove us all to import from econometrics and economics, more generally. If that’s not enough, they talk about the importance of thinking through the data generating process and things that can go wrong if you don’t. In terms of allowing your data work to inform your decision making, thery also discuss Jim’s maxim “ALWAYS BE INTEGRATING YOUR LOSS FUNCTION OVER YOUR POSTERIOR”\n\nLast but definitively not least, as Jim has worked in the data for good space for much of his career, they talk about what this actually means, with particular reference to fast.ai founder & QUT professor of practice Rachel Thomas’ blog post called “Doing Data Science for Social Good, Responsibly”. Rachel’s post takes as its starting point the following words of Sarah Hooker, a researcher at Google Brain:\n\n\n\"Data for good\" is an imprecise term that says little about who we serve, the tools used, or the goals. 
Being more precise can help us be more accountable & have a greater positive impact.\n\n\nAnd Jim and Hugo discuss his work in the light of these foundational considerations.\n\nLinks\n\n\nJim on twitter\nWhat Is Causal Inference? An Introduction for Data Scientists by Hugo Bowne-Anderson and Mike Loukides\nJim's must-watch Data Council talk on Productizing Structural Models\n[Mastering Metrics](https://www.masteringmetrics.com/) by Angrist and Pischke\nMostly Harmless Econometrics: An Empiricist's Companion by Angrist and Pischke\nThe Book of Why by Judea Pearl\nDecision-Making in a Time of Crisis by Hugo Bowne-Anderson\nDoing Data Science for Social Good, Responsibly by Rachel Thomas\n","content_html":"

Hugo speaks with Jim Savage, the Director of Data Science at Schmidt Futures, about the need for data science in executive training and decision-making, what data scientists can learn from economists, the perils of "data for good", and why you should always be integrating your loss function over your posterior.

\n\n

Jim and Hugo talk about what data science is and isn’t capable of, what can actually deliver value, and what people really enjoy doing: the intersection in this Venn diagram is where we need to focus energy and it may not be quite what you think it is!

\n\n

They then dive into Jim's thoughts on what he dubs Executive Data Science. You may be aware of the slicing of the data science and machine learning spaces into descriptive analytics, predictive analytics, and prescriptive analytics but, being the thought surgeon that he is, Jim proposes a different slicing into

\n\n

(1) tool building OR data science as a product,

\n\n

(2) tools to automate and augment parts of us, and

\n\n

(3) what Jim calls Executive Data Science.

\n\n

Jim and Hugo also talk about decision theory, the woeful state of causal inference techniques in contemporary data science, and what techniques it would behoove us all to import from econometrics and economics, more generally. If that’s not enough, they talk about the importance of thinking through the data generating process and things that can go wrong if you don’t. In terms of allowing your data work to inform your decision-making, they also discuss Jim’s maxim “ALWAYS BE INTEGRATING YOUR LOSS FUNCTION OVER YOUR POSTERIOR”.
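
\n\n

To make that maxim a little more concrete, here is a minimal, hypothetical Python sketch (not from the episode; the inventory-style loss function and all numbers are invented for illustration) of choosing the action that minimizes a loss function averaged, i.e. integrated, over draws from your posterior, rather than acting on a point estimate:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are draws from a posterior over demand for some product.
posterior_samples = rng.normal(loc=100, scale=20, size=10_000)

def loss(action, demand):
    # Asymmetric loss: each unit overstocked costs 1, each unit understocked costs 3.
    overstock = np.maximum(action - demand, 0)
    understock = np.maximum(demand - action, 0)
    return overstock + 3.0 * understock

candidate_actions = np.arange(60, 161)
# Expected loss for each candidate action, averaged over the posterior draws.
expected_loss = [loss(a, posterior_samples).mean() for a in candidate_actions]
best_action = candidate_actions[int(np.argmin(expected_loss))]

print(best_action)  # lands above the posterior mean of 100, because understocking hurts more
```

Acting on the posterior mean alone would stock 100 units here; integrating the asymmetric loss over the posterior pushes the decision up toward the 75th percentile of demand instead, which is the whole point of the maxim.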

\n\n

Last but definitely not least, as Jim has worked in the data for good space for much of his career, they talk about what this actually means, with particular reference to fast.ai founder & QUT professor of practice Rachel Thomas’ blog post called “Doing Data Science for Social Good, Responsibly”. Rachel’s post takes as its starting point the following words of Sara Hooker, a researcher at Google Brain:

\n\n
\n

"Data for good" is an imprecise term that says little about who we serve, the tools used, or the goals. Being more precise can help us be more accountable & have a greater positive impact.

\n
\n\n

And Jim and Hugo discuss his work in the light of these foundational considerations.

\n\n

Links

\n\n","summary":"Hugo speaks with Jim Savage, the Director of Data Science at Schmidt Futures, about the need for data science in executive training and decision, what data scientists can learn from economists, the perils of \"data for good\", and why you should always be integrating your loss function over your posterior.","date_published":"2022-03-23T16:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/9078010f-454b-4bcf-bafc-f54f44e04868.mp3","mime_type":"audio/mpeg","size_in_bytes":103917601,"duration_in_seconds":6494}]},{"id":"32f4444c-6c16-4411-ab8a-2adbf23b65c8","title":"Episode 4: Machine Learning at T-Mobile","url":"https://vanishinggradients.fireside.fm/4","content_text":"Hugo speaks with Heather Nolis, Principal Machine Learning engineer at T-mobile, about what data science, machine learning, and AI look like at T-mobile, along with Heather’s path from a software development intern there to principal ML engineer running a team of 15.\n\nThey talk about: how to build a DS culture from scratch and what executive-level support looks like, as well as how to demonstrate machine learning value early on from a shark tank style pitch night to the initial investment through to the POC and building out the function; all the great work they do with R and the Tidyverse in production; what it’s like to be a lesbian in tech, and about what it was like to discover she was autistic and how that impacted her work; how to measure and demonstrate success and ROI for the org; some massive data science fails!; how to deal with execs wanting you to use the latest GPT-X – in a fragmented tooling landscape; how to use the simplest technology to deliver the most value.\n\nFinally, the team just hired their first FT ethicist and they speak about how ethics can be embedded in a team and across an institution.\n\nLinks\n\n\nPut R in prod: Tools and guides to put R models into production\nEnterprise Web Services with Neural Networks Using R and TensorFlow\nHeather on twitter \nT-Mobile is hiring!\nHugo's upcoming fireside chat and AMA with Hilary Parker about how to actually produce sustainable business value using machine learning and product management for ML! \n","content_html":"

Hugo speaks with Heather Nolis, Principal Machine Learning Engineer at T-Mobile, about what data science, machine learning, and AI look like at T-Mobile, along with Heather’s path from a software development intern there to principal ML engineer running a team of 15.

\n\n

They talk about: how to build a DS culture from scratch and what executive-level support looks like, as well as how to demonstrate machine learning value early on, from a shark-tank-style pitch night to the initial investment through to the POC and building out the function; all the great work they do with R and the Tidyverse in production; what it’s like to be a lesbian in tech, and what it was like to discover she was autistic and how that impacted her work; how to measure and demonstrate success and ROI for the org; some massive data science fails; how to deal with execs wanting you to use the latest GPT-X in a fragmented tooling landscape; and how to use the simplest technology to deliver the most value.

\n\n

Finally, the team just hired their first full-time ethicist, and they speak about how ethics can be embedded in a team and across an institution.

\n\n

Links

\n\n","summary":"Hugo speaks with Heather Nolis, Principal Machine Learning engineer at T-mobile, about what data science, machine learning, and AI look like at T-mobile, along with Heather’s path from a software development intern there to principal ML engineer running a team of 15.\r\n","date_published":"2022-03-10T10:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/32f4444c-6c16-4411-ab8a-2adbf23b65c8.mp3","mime_type":"audio/mpeg","size_in_bytes":100002470,"duration_in_seconds":6250}]},{"id":"8f08dc5e-bb75-4fec-9db9-3808cd980ba9","title":"Episode 3: Language Tech For All","url":"https://vanishinggradients.fireside.fm/3","content_text":"Rachael Tatman is a senior developer advocate for Rasa, where she’s helping developers build and deploy ML chatbots using their open source framework.\n\nRachael has a PhD in Linguistics from the University of Washington where her research was on computational sociolinguistics, or how our social identity affects the way we use language in computational contexts. Previously she was a data scientist at Kaggle and she’s still a Kaggle Grandmaster.\n\nIn this conversation, Rachael and I talk about the history of NLP and conversational AI//chatbots and we dive into the fascinating tension between rule-based techniques and ML and deep learning – we also talk about how to incorporate machine and human intelligence together by thinking through questions such as “should a response to a human ever be automated?” Spoiler alert: the answer is a resounding NO WAY! \n\nIn this journey, something that becomes apparent is that many of the trends, concepts, questions, and answers, although framed for NLP and chatbots, are applicable to much of data science, more generally.\n\nWe also discuss the data scientist’s responsibility to end-users and stakeholders using, among other things, the lens of considering those whose data you’re working with to be data donors.\n\nWe then consider what globalized language technology looks like and can look like, what we can learn from the history of science here, particularly given that so much training data and models are in English when it accounts for so little of language spoken globally. \n\nLinks\n\n\nRachael's website\nRasa\nSpeech and Language Processing\nby Dan Jurafsky and James H. Martin \n\n\nMasakhane, putting African languages on the #NLP map since 2019\nThe Distributed AI Research Institute, a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence\nThe Algorithmic Justice League, unmasking AI harms and biases\nBlack in AI, increasing the presence and inclusion of Black people in the field of AI by creating space for sharing ideas, fostering collaborations, mentorship and advocacy\nHugo's blog post on his new job and why it's exciting for him to double down on helping scientists do better science\n\n","content_html":"

Rachael Tatman is a senior developer advocate for Rasa, where she’s helping developers build and deploy ML chatbots using their open source framework.

\n\n

Rachael has a PhD in Linguistics from the University of Washington where her research was on computational sociolinguistics, or how our social identity affects the way we use language in computational contexts. Previously she was a data scientist at Kaggle and she’s still a Kaggle Grandmaster.

\n\n

In this conversation, Rachael and I talk about the history of NLP and conversational AI/chatbots and we dive into the fascinating tension between rule-based techniques and ML and deep learning – we also talk about how to combine machine and human intelligence by thinking through questions such as “should a response to a human ever be automated?” Spoiler alert: the answer is a resounding NO WAY!

\n\n

In this journey, something that becomes apparent is that many of the trends, concepts, questions, and answers, although framed for NLP and chatbots, are applicable to much of data science more generally.

\n\n

We also discuss the data scientist’s responsibility to end-users and stakeholders using, among other things, the lens of considering those whose data you’re working with to be data donors.

\n\n

We then consider what globalized language technology looks like and can look like, and what we can learn from the history of science here, particularly given that so many training datasets and models are in English when English accounts for so few of the languages spoken globally.

\n\n

Links

\n\n","summary":"Hugo speaks with Rachael Tatman about the democratization of natural language processing, conversational AI, and chatbots, including, among other things, the data scientist’s responsibility to end-users and stakeholders.","date_published":"2022-03-01T13:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/8f08dc5e-bb75-4fec-9db9-3808cd980ba9.mp3","mime_type":"audio/mpeg","size_in_bytes":88851890,"duration_in_seconds":5553}]},{"id":"65695b45-10a7-4785-adca-f1aeaa5818bc","title":"Episode 2: Making Data Science Uncool Again","url":"https://vanishinggradients.fireside.fm/2","content_text":"Jeremy Howard is a data scientist, researcher, developer, educator, and entrepreneur. Jeremy is a founding researcher at fast.ai, a research institute dedicated to making deep learning more accessible. He is also a Distinguished Research Scientist at the University of San Francisco, the chair of WAMRI, and is Chief Scientist at platform.ai.\n\nIn this conversation, we’ll be talking about the history of data science, machine learning, and AI, where we’ve come from and where we’re going, how new techniques can be applied to real-world problems, whether it be deep learning to medicine or porting techniques from computer vision to NLP. We’ll also talk about what’s present and what’s missing in the ML skills revolution, what software engineering skills data scientists need to learn, how to cope in a space of such fragmented tooling, and paths for emerging out of the shadow of FAANG. If that’s not enough, we’ll jump into how spreading DS skills around the globe involves serious investments in education, building software, communities, and research, along with diving into the social challenges that the information age and the AI revolution (so to speak) bring with it.\n\nBut to get to all of this, you’ll need to listen to a few minutes of us chatting about chocolate biscuits in Australia!\n\nLinks\n\n\nfast.ai · making neural nets uncool again\nnbdev: create delightful python projects using Jupyter Notebooks\nThe fastai book, published as Jupyter Notebooks\nDeep Learning for Coders with fastai and PyTorch\nThe wonderful and terrifying implications of computers that can learn -- Jeremy' awesome TED talk!\nManna by Marshall Brain\nGhost Work by Mary L. Gray and Siddharth Suri\nUberland by Alex Rosenblat\n","content_html":"

Jeremy Howard is a data scientist, researcher, developer, educator, and entrepreneur. Jeremy is a founding researcher at fast.ai, a research institute dedicated to making deep learning more accessible. He is also a Distinguished Research Scientist at the University of San Francisco, the chair of WAMRI, and is Chief Scientist at platform.ai.

\n\n

In this conversation, we’ll be talking about the history of data science, machine learning, and AI, where we’ve come from and where we’re going, and how new techniques can be applied to real-world problems, whether it be applying deep learning to medicine or porting techniques from computer vision to NLP. We’ll also talk about what’s present and what’s missing in the ML skills revolution, what software engineering skills data scientists need to learn, how to cope in a space of such fragmented tooling, and paths for emerging out of the shadow of FAANG. If that’s not enough, we’ll jump into how spreading DS skills around the globe involves serious investments in education, building software, communities, and research, along with diving into the social challenges that the information age and the AI revolution (so to speak) bring with them.

\n\n

But to get to all of this, you’ll need to listen to a few minutes of us chatting about chocolate biscuits in Australia!

\n\n

Links

\n\n","summary":"Hugo talks with Jeremy Howard about the past, present, and future of data science, machine learning, and AI, with a focus on the democratization of deep learning.","date_published":"2022-02-21T10:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/65695b45-10a7-4785-adca-f1aeaa5818bc.mp3","mime_type":"audio/mpeg","size_in_bytes":101524103,"duration_in_seconds":6345}]},{"id":"a77d732e-f7be-4b71-be2f-fd09a392bd86","title":"Episode 1: Introducing Vanishing Gradients","url":"https://vanishinggradients.fireside.fm/1","content_text":"In this brief introduction, Hugo introduces the rationale behind launching a new data science podcast and gets excited about his upcoming guests: Jeremy Howard, Rachael Tatman, and Heather Nolis!\n\nOriginal music, bleeps, and blops by local Sydney legend PlaneFace!","content_html":"

In this brief introduction, Hugo introduces the rationale behind launching a new data science podcast and gets excited about his upcoming guests: Jeremy Howard, Rachael Tatman, and Heather Nolis!

\n\n

Original music, bleeps, and blops by local Sydney legend PlaneFace!

","summary":"In this episode, Hugo introduces the new data science podcast Vanishing Gradients. ","date_published":"2022-02-16T20:00:00.000+11:00","attachments":[{"url":"https://aphid.fireside.fm/d/1437767933/140c3904-8258-4c39-a698-a112b7077bd7/a77d732e-f7be-4b71-be2f-fd09a392bd86.mp3","mime_type":"audio/mpeg","size_in_bytes":5270212,"duration_in_seconds":329}]}]}