
Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://ar.player.fm/legal.

“Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most ‘classic humans’ in a few decades.” by Raemon

21:59
I wrote my recent Accelerando post to mostly stand on its own as a takeoff scenario. But the reason it's on my mind is that, if I imagine being very optimistic about how a smooth AI takeoff goes, but where an early step wasn't "fully solve the unbounded alignment problem, and then end up with extremely robust safeguards[1]"...
...then my current guess is that Reasonably Nice Smooth Takeoff still results in all or at least most biological humans dying (or, "dying out", or at best, ambiguously-consensually-uploaded), like, 10-80 years later.
Slightly more specific about the assumptions I'm trying to inhabit here:
  1. It's politically intractable to get a global halt or globally controlled takeoff.
  2. Superintelligence is moderately likely to be somewhat nice.
  3. We'll get to run lots of experiments on near-human-AI that will be reasonably informative about how things will generalize to the somewhat-superhuman-level.
  4. We get to ramp up [...]
---
Outline:
(03:50) There is no safe muddling through without perfect safeguards
(06:24) i. Factorio
(06:27) (or: It's really hard to not just take people's stuff, when they move as slowly as plants)
(10:15) Fictional vs Real Evidence
(11:35) Decades. Or: thousands of years of subjective time, evolution, and civilizational change.
(12:23) This is the Dream Time
(14:33) Is the resulting posthuman population morally valuable?
(16:51) The Hanson Counterpoint: So you're against ever changing?
(19:04) Can't superintelligent AIs/uploads coordinate to avoid this?
(21:18) How Confident Am I?
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
October 2nd, 2025
Source:
https://www.lesswrong.com/posts/v4rsqTxHqXp5tTwZh/nice-ish-smooth-takeoff-with-imperfect-safeguards-probably
---
Narrated by TYPE III AUDIO.

632 episodes
