The evolution and promise of RAG architecture with Tengyu Ma from Voyage AI
After years at Stanford researching AI optimization, embedding models, and transformers, Tengyu Ma took a break from academia to start Voyage AI, which gives enterprise customers the most accurate retrieval possible by building on the most useful foundational data. Tengyu joins Sarah on this week’s episode of No Priors to discuss why RAG is winning as the dominant architecture in the enterprise, and how the evolution of foundational data has allowed RAG to flourish. While fine-tuning is still part of the conversation, Tengyu argues that RAG will continue to evolve as the cheapest, fastest, and most accurate approach to data retrieval.
They also discuss methods for growing context windows and managing latency budgets, how Tengyu’s research has informed his work at Voyage, and the role academia should play as the AI industry grows.
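For readers new to the architecture discussed in the episode, here is a minimal sketch of the retrieval step at the heart of a RAG system. The embed() function is a toy stand-in for a real embedding model (such as those Voyage AI builds); the names and interface here are illustrative assumptions, not Voyage AI's actual API.

```python
# Minimal RAG retrieval sketch: embed documents and a query, then pick the
# most similar documents by cosine similarity. Hypothetical interface only.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for an embedding model: hash words into a fixed-size, unit-normalized vector."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are closest to the query."""
    query_vec = embed(query)
    doc_vecs = np.stack([embed(doc) for doc in documents])
    scores = doc_vecs @ query_vec  # cosine similarity, since vectors are unit-normalized
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

docs = [
    "Voyage AI builds embedding models for enterprise retrieval.",
    "Fine-tuning updates model weights on domain-specific data.",
    "RAG retrieves relevant passages and adds them to the prompt.",
]
print(retrieve("How does retrieval-augmented generation work?", docs))
```

The retrieved passages are then placed into the LLM's prompt, which is why retrieval quality, context-window size, and latency budgets come up together in the conversation.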
Show Links:
- Voyage AI
- Stanford Assistant Professor of Computer Science
- Tengyu Ma Key Research Papers:
- Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
- Non-convex optimization for machine learning: design, analysis, and understanding
- Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss
- Larger language models do in-context learning differently, 2023
- Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
- On the Optimization Landscape of Tensor Decompositions
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @tengyuma
Show Notes:
(0:00) Introduction
(1:59) Key points of Tengyu’s research
(4:28) Academia compared to industry
(6:46) Voyage AI overview
(9:44) Enterprise RAG use cases
(15:23) LLM long-term memory and token limitations
(18:03) Agent chaining and data management
(22:01) Improving enterprise RAG
(25:44) Latency budgets
(27:48) Advice for building RAG systems
(31:06) Learnings as an AI founder
(32:55) The role of academia in AI