
Content provided by The Nonlinear Fund. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.

LW - Two easy things that maybe Just Work to improve AI discourse by jacobjacob

3:09
Manage episode 422610779 series 3337129
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two easy things that maybe Just Work to improve AI discourse, published by jacobjacob on June 8, 2024 on LessWrong.

So, it seems AI discourse on X / Twitter is getting polarised. This is bad. Especially bad is how some engage in deliberate weaponization of discourse for political ends. At the same time, I observe: AI Twitter is still a small space. There are often important posts that have only ~100 likes, ~10-100 comments, and maybe ~10-30 likes on top comments. Moreover, it seems to me that the few sane comments, when they do appear, do get upvoted.

This is... crazy! Consider this thread: a piece of legislation is being discussed, with major ramifications for regulation of frontier models, and... the quality of discourse hinges on whether 5-10 random folks show up and say some sensible stuff on Twitter!?

It took me a while to see these things. I think I had a cached view of "political discourse is hopeless; the masses of trolls are too big for anything to matter, unless you've got some specialised lever or run one of these platforms". I now think I was wrong, just like I was wrong for many years about the feasibility of public and regulatory support for taking AI risk seriously.

This begets the following hypothesis: AI discourse might currently be small enough that we could basically just brute-force raise the sanity waterline. No galaxy-brained stuff. Just a flood of folks making... reasonable arguments. It's the dumbest possible plan: let's improve AI discourse by going to places with bad discourse and making good arguments.

I recognise this is a pretty strange view, and it runs counter to a lot of priors I've built up hanging around LessWrong for the last couple of years. If it works, it's because of a surprising, contingent state of affairs. In a few months or years the numbers might shake out differently. But for the time being, plausibly the arbitrage is real.

Furthermore, there's of course already a built-in feature, with beautiful mechanism design and strong buy-in from leadership, for increasing the sanity waterline: Community Notes. It's a feature that allows users to add "notes" to tweets providing context, and then only shows those notes if, roughly, they get upvoted by people who usually disagree. Yet... outside of massive news like the OpenAI NDA scandal, Community Notes is barely being used for AI discourse. I'd guess the reason is no more interesting than that few people use Community Notes overall, multiplied by few of those people engaging in AI discourse. Again, plausibly, the arbitrage is real.

If you think this sounds compelling, here are two easy ways that might just work to improve AI discourse:

1. Make an account on X. When you see invalid or bad-faith arguments on AI, reply with valid arguments. Upvote other such replies.
2. Join Community Notes at this link. Start writing and rating posts. (You'll need to rate some posts before you're allowed to write your own.)

And, above all: it doesn't matter what conclusion you argue for, as long as you make valid arguments. Pursue asymmetric strategies: the sword that only cuts if your intention is true.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

1690 episodes

