Content provided by Sentience Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Sentience Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.

Tobias Baumann of the Center for Reducing Suffering on moral circle expansion, cause prioritization, and reducing risks of astronomical suffering in the long-term future

1:18:40
Manage episode 295727130 series 2596584
“If some beings are excluded from moral consideration then the results are usually quite bad, as evidenced by many forms of both current and historical suffering… I would definitely say that those that don’t have any sort of political representation or power are at risk. That’s true for animals right now; it might be true for artificially sentient beings in the future… And yeah, I think that is a plausible priority. Another candidate would be to work on other broad factors to improve the future such as by trying to fix politics, which is obviously a very, very ambitious goal… [Another candidate would be] trying to shape transformative AI more directly. We’ve talked about the uncertainty there is regarding the development of artificial intelligence, but at least there’s a certain chance that people are right about this being a very crucial technology; and if so, shaping it in the right way is very important obviously.”

  • Tobias Baumann

Expanding humanity’s moral circle to include farmed animals and other sentient beings is a promising strategy for reducing the risk of astronomical suffering in the long-term future. But are there other causes that we could focus on that might be better? And should reducing future suffering actually be our goal?

Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.

Topics discussed in the episode:

  • Why moral circle expansion is a plausible priority for those of us focused on doing good (2:17)
  • Tobias’ view on why we should accept longtermism — the idea that the value of our actions is determined primarily by their impacts on the long-term future (5:50)
  • Are we living at the most important time in history? (14:15)
  • When, if ever, will transformative AI arrive? (20:35)
  • Assuming longtermism, should we prioritize focusing on risks of astronomical suffering in the long-term future (s-risks) or on maximizing the likelihood of positive outcomes? (27:00)
  • What sorts of future beings might be excluded from humanity’s moral circle in the future, and why might this happen? (37:45)
  • What are the main reasons to believe that moral circle expansion might not be a very promising way to have positive impacts on the long-term future? (41:40)
  • Should we focus on other forms of values spreading that might be broadly positive, rather than expanding humanity’s moral circle? (48:55)
  • Beyond values spreading, which other causes should people focused on reducing s-risks consider prioritizing? (50:25)
  • Should we expend resources on moral circle expansion and other efforts to reduce s-risk now or just invest our money and resources in order to benefit from compound interest? (1:00:02)
  • If we decide to focus on moral circle expansion, should we focus on the current frontiers of the moral circle, such as farmed animals, or focus more directly on groups of future beings we are concerned about? (1:03:06)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show


23 episodes
