
Inside the Black Box: The Urgency of AI Interpretability

1:02:17
 
Content provided by Lightspeed Venture Partners. All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and provided directly by Lightspeed Venture Partners or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://ar.player.fm/legal.

Recorded live at Lightspeed’s offices in San Francisco, this special episode of Generative Now dives into the urgency and promise of AI interpretability. Lightspeed partner Nnamdi Iregbulem spoke with Anthropic researcher Jack Lindsey and Goodfire co-founder and Chief Scientist Tom McGrath, who previously co-founded Google DeepMind’s interpretability team. They discuss opening the black box of modern AI models to understand their reliability and spot real-world safety concerns, with the goal of building future AI systems we can trust.

Episode Chapters:

00:42 Welcome and Introduction

00:36 Overview of Lightspeed and AI Investments

03:19 Event Agenda and Guest Introductions

05:35 Discussion on Interpretability in AI

18:44 Technical Challenges in AI Interpretability

29:42 Advancements in Model Interpretability

30:05 Smarter Models and Interpretability

31:26 Models Doing the Work for Us

32:43 Real-World Applications of Interpretability

34:32 Anthropic's Approach to Interpretability

39:15 Breakthrough Moments in AI Interpretability

44:41 Challenges and Future Directions

48:18 Neuroscience and Model Training Insights

54:42 Emergent Misalignment and Model Behavior

01:01:30 Concluding Thoughts and Networking

Stay in touch:

The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.


89 episodes

