Hugo speaks with Johno Whitaker, a Data Scientist/AI Researcher doing R&D with answer.ai. His current focus is on generative AI, flitting between different modalities. He also likes teaching and making courses, having worked with both Hugging Face and fast.ai in these capacities.
Johno recently reminded Hugo how hard everything was 10 years ago: “Want to install TensorFlow? Good luck. Need data? Perhaps try ImageNet. But now you can use big models from Hugging Face with hi-res satellite data and do all of this in a Colab notebook. Or think ecology and vision models… or medicine and multimodal models!”
We talk about where we’ve come from regarding tooling and accessibility for foundation models, ML, and AI, where we are now, and where we’re going. We’ll delve into:
- What the Generative AI mindset is, in terms of using atomic building blocks, and how it evolved from both the data science and ML mindsets;
- How fast.ai democratized access to deep learning, what successes it had, and what lessons were learned;
- The moving parts now required to make GenAI and ML as accessible as possible;
- The importance of focusing on UX and the application in the world of generative AI and foundation models;
- The skillset and toolkit needed to be an LLM and AI guru;
- What they’re up to at answer.ai to democratize LLMs and foundation models.
LINKS
- The livestream on YouTube
- Zindi, the largest professional network for data scientists in Africa
- A new old kind of R&D lab: Announcing Answer.AI
- Why and how I’m shifting focus to LLMs by Johno Whitaker
- Applying AI to Immune Cell Networks by Rachel Thomas
- Replicate -- a cool place to explore GenAI models, among other things
- Hands-On Generative AI with Transformers and Diffusion Models
- Johno on Twitter
- Hugo on Twitter
- Vanishing Gradients on Twitter
- SciPy 2024 CFP
- Escaping Generative AI Walled Gardens with Omoju Miller, a Vanishing Gradients Livestream