
Episode 59: Patterns and Anti-Patterns For Building with AI
Manage episode 508113793 series 3317544
John Berryman (Arcturus Labs; early GitHub Copilot engineer; co-author of Relevant Search and Prompt Engineering for LLMs) has spent years figuring out what makes AI applications actually work in production. In this episode, he shares the “seven deadly sins” of LLM development — and the practical fixes that keep projects from stalling.
From context management to retrieval debugging, John explains the patterns he’s seen succeed, the mistakes to avoid, and why it helps to think of an LLM as an “AI intern” rather than an all-knowing oracle.
We talk through:
- Why chasing perfect accuracy is a dead end
- How to use agents without losing control
- Context engineering: fitting the right information in the window
- Starting simple instead of over-orchestrating
- Separating retrieval from generation in RAG
- Splitting complex extractions into smaller checks
- Knowing when frameworks help — and when they slow you down
A practical guide to avoiding the common traps of LLM development and building systems that actually hold up in production.
LINKS:
- Context Engineering for AI Agents, a free, upcoming lightning lesson from John and Hugo
- The Hidden Simplicity of GenAI Systems, a previous lightning lesson from John and Hugo
- Roaming RAG – RAG without the Vector Database, by John
- Cut the Chit-Chat with Artifacts, by John
- Prompt Engineering for LLMs by John and Albert Ziegler
- Relevant Search by John and Doug Turnbull
- Arcturus Labs
- Watch the podcast on YouTube
- Upcoming Events on Luma
🎓 Learn more:
- Hugo's course (this episode was a guest Q&A from the course): Building LLM Applications for Data Scientists and Software Engineers — https://maven.com/s/course/d56067f338
60 episodes