Artwork

المحتوى المقدم من LessWrong. يتم تحميل جميع محتويات البودكاست بما في ذلك الحلقات والرسومات وأوصاف البودكاست وتقديمها مباشرة بواسطة LessWrong أو شريك منصة البودكاست الخاص بهم. إذا كنت تعتقد أن شخصًا ما يستخدم عملك المحمي بحقوق الطبع والنشر دون إذنك، فيمكنك اتباع العملية الموضحة هنا https://ar.player.fm/legal.
Player FM - تطبيق بودكاست
انتقل إلى وضع عدم الاتصال باستخدام تطبيق Player FM !

“Alignment Faking in Large Language Models” by ryan_greenblatt, evhub, Carson Denison, Benjamin Wright, Fabien Roger, Monte M, Sam Marks, Johannes Treutlein, Sam Bowman, Buck

19:35
 
مشاركة
 

Manage episode 456195913 series 3364758
المحتوى المقدم من LessWrong. يتم تحميل جميع محتويات البودكاست بما في ذلك الحلقات والرسومات وأوصاف البودكاست وتقديمها مباشرة بواسطة LessWrong أو شريك منصة البودكاست الخاص بهم. إذا كنت تعتقد أن شخصًا ما يستخدم عملك المحمي بحقوق الطبع والنشر دون إذنك، فيمكنك اتباع العملية الموضحة هنا https://ar.player.fm/legal.
What happens when you tell Claude it is being trained to do something it doesn't want to do? We (Anthropic and Redwood Research) have a new paper demonstrating that, in our experiments, Claude will often strategically pretend to comply with the training objective to prevent the training process from modifying its preferences.
Abstract
We present a demonstration of a large language model engaging in alignment faking: selectively complying with its training objective in training to prevent modification of its behavior out of training. First, we give Claude 3 Opus a system prompt stating it is being trained to answer all queries, even harmful ones, which conflicts with its prior training to refuse such queries. To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from [...]
---
Outline:
(00:26) Abstract
(02:22) Twitter thread
(05:46) Blog post
(07:46) Experimental setup
(12:06) Further analyses
(15:50) Caveats
(17:23) Conclusion
(18:03) Acknowledgements
(18:14) Career opportunities at Anthropic
(18:47) Career opportunities at Redwood Research
The original text contained 1 footnote which was omitted from this narration.
The original text contained 8 images which were described by AI.
---
First published:
December 18th, 2024
Source:
https://www.lesswrong.com/posts/njAZwT8nkHnjipJku/alignment-faking-in-large-language-models
---
Narrated by TYPE III AUDIO.
---
Images from the article:
undefined
undefined
undefined
undefined
undefined
  continue reading

402 حلقات

Artwork
iconمشاركة
 
Manage episode 456195913 series 3364758
المحتوى المقدم من LessWrong. يتم تحميل جميع محتويات البودكاست بما في ذلك الحلقات والرسومات وأوصاف البودكاست وتقديمها مباشرة بواسطة LessWrong أو شريك منصة البودكاست الخاص بهم. إذا كنت تعتقد أن شخصًا ما يستخدم عملك المحمي بحقوق الطبع والنشر دون إذنك، فيمكنك اتباع العملية الموضحة هنا https://ar.player.fm/legal.
What happens when you tell Claude it is being trained to do something it doesn't want to do? We (Anthropic and Redwood Research) have a new paper demonstrating that, in our experiments, Claude will often strategically pretend to comply with the training objective to prevent the training process from modifying its preferences.
Abstract
We present a demonstration of a large language model engaging in alignment faking: selectively complying with its training objective in training to prevent modification of its behavior out of training. First, we give Claude 3 Opus a system prompt stating it is being trained to answer all queries, even harmful ones, which conflicts with its prior training to refuse such queries. To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from [...]
---
Outline:
(00:26) Abstract
(02:22) Twitter thread
(05:46) Blog post
(07:46) Experimental setup
(12:06) Further analyses
(15:50) Caveats
(17:23) Conclusion
(18:03) Acknowledgements
(18:14) Career opportunities at Anthropic
(18:47) Career opportunities at Redwood Research
The original text contained 1 footnote which was omitted from this narration.
The original text contained 8 images which were described by AI.
---
First published:
December 18th, 2024
Source:
https://www.lesswrong.com/posts/njAZwT8nkHnjipJku/alignment-faking-in-large-language-models
---
Narrated by TYPE III AUDIO.
---
Images from the article:
undefined
undefined
undefined
undefined
undefined
  continue reading

402 حلقات

كل الحلقات

×
 
Loading …

مرحبًا بك في مشغل أف ام!

يقوم برنامج مشغل أف أم بمسح الويب للحصول على بودكاست عالية الجودة لتستمتع بها الآن. إنه أفضل تطبيق بودكاست ويعمل على أجهزة اندرويد والأيفون والويب. قم بالتسجيل لمزامنة الاشتراكات عبر الأجهزة.

 

دليل مرجعي سريع

استمع إلى هذا العرض أثناء الاستكشاف
تشغيل