“What Is The Alignment Problem?” by johnswentworth
Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.
So we want to align future AGIs. Ultimately we’d like to align them to human values, but in the shorter term we might start with other targets, e.g. corrigibility.
That problem description all makes sense on a hand-wavy intuitive level, but once we get concrete and dig into technical details… wait, what exactly is the goal again? When we say we want to “align AGI”, what does that mean? And what about these “human values” - it's easy to list things which are importantly not human values (like stated preferences, revealed preferences, etc), but what are we talking about? And don’t even get me started on corrigibility!
Turns out, it's surprisingly tricky to explain what exactly “the alignment problem” refers to. And there are good reasons for that! In this post, I’ll give my current best explanation of what the alignment problem is (including a few variants and the [...]
---
Outline:
(01:27) The Difficulty of Specifying Problems
(01:50) Toy Problem 1: Old MacDonald's New Hen
(04:08) Toy Problem 2: Sorting Bleggs and Rubes
(06:55) Generalization to Alignment
(08:54) But What If The Patterns Don't Hold?
(13:06) Alignment of What?
(14:01) Alignment of a Goal or Purpose
(19:47) Alignment of Basic Agents
(23:51) Alignment of General Intelligence
(27:40) How Does All That Relate To Today's AI?
(31:03) Alignment to What?
(32:01) What Are a Human's Values?
(36:14) Other targets
(36:43) Paul!Corrigibility
(39:11) Eliezer!Corrigibility
(40:52) Subproblem!Corrigibility
(42:55) Exercise: Do What I Mean (DWIM)
(43:26) Putting It All Together, and Takeaways
The original text contained 10 footnotes which were omitted from this narration.
---
First published:
January 16th, 2025
Source:
https://www.lesswrong.com/posts/dHNKtQ3vTBxTfTPxu/what-is-the-alignment-problem
---
Narrated by TYPE III AUDIO.
---
411 episodes