Are we ready for human-level AI by 2030? Anthropic's co-founder answers

52:06
 
Content provided by EPIIPLUS 1 Ltd / Azeem Azhar. All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and provided directly by EPIIPLUS 1 Ltd / Azeem Azhar or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.

Anthropic's co-founder and chief scientist Jared Kaplan discusses AI's rapid evolution, the shorter-than-expected timeline to human-level AI, and how Claude's "thinking time" feature represents a new frontier in AI reasoning capabilities.

In this episode you'll hear:

  • Why Jared believes human-level AI is now likely to arrive in 2-3 years instead of by 2030
  • How AI models are developing the ability to handle increasingly complex tasks that would take humans hours or days
  • The importance of constitutional AI and interpretability research as essential guardrails for increasingly powerful systems

Our new show

This was originally recorded for "Friday with Azeem Azhar", a new show that takes place every Friday at 9am PT / 12pm ET on Exponential View. You can tune in through my Substack, linked below. The format is experimental and we'd love your feedback, so feel free to comment or email your thoughts to our team at [email protected].

Timestamps:

(00:00) Episode trailer

(01:27) Jared's updated prediction for reaching human-level intelligence

(08:12) What will limit scaling laws?

(11:13) How long will we wait between model generations?

(16:27) Why test-time scaling is a big deal

(21:59) There’s no reason why DeepSeek can’t be competitive algorithmically

(25:31) Has Anthropic changed their approach to safety vs speed?

(30:08) Managing the paradoxes of AI progress

(32:21) Can interpretability and monitoring really keep AI safe?

(39:43) Are model incentives misaligned with public interests?

(42:36) How should we prepare for electricity-level impact?

(51:15) What Jared is most excited about in the next 12 months

Jared's links:

Azeem's links:

Produced by supermix.io


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.


207 episodes
