Seth Lazar: Normative Philosophy of Computing

1:50:17
 
Content provided by The Gradient. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Gradient or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://ar.player.fm/legal.

Episode 124

You may think you’re doing a priori reasoning, but actually you’re just over-generalizing from your current experience of technology.

I spoke with Professor Seth Lazar about:

* Why managing near-term and long-term risks isn’t always zero-sum

* How to think through axioms and systems in political philosophy

* Coordination problems, economic incentives, and other difficulties in developing publicly beneficial AI

Seth is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defense, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI.

Reach me at editor@thegradient.pub with feedback, ideas, and guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (00:54) Ad read — MLOps conference

* (01:32) The allocation of attention — attention, moral skill, and algorithmic recommendation

* (03:53) Attention allocation as an independent good (or bad)

* (08:22) Axioms in political philosophy

* (11:55) Explaining judgments, multiplying entities, parsimony, intuitive disgust

* (15:05) AI safety / catastrophic risk concerns

* (22:10) Superintelligence arguments, reasoning about technology

* (28:42) Attacking current and future harms from AI systems — does one draw resources from the other?

* (35:55) GPT-2, model weights, related debates

* (39:11) Power and economics—coordination problems, company incentives

* (50:42) Morality tales, relationship between safety and capabilities

* (55:44) Feasibility horizons, prediction uncertainty, and doing moral philosophy

* (1:02:28) What is a feasibility horizon?

* (1:08:36) Safety guarantees, speed of improvements, the “Pause AI” letter

* (1:14:25) Sociotechnical lenses, narrowly technical solutions

* (1:19:47) Experiments for responsibly integrating AI systems into society

* (1:26:53) Helpful/honest/harmless and antagonistic AI systems

* (1:33:35) Managing incentives conducive to developing technology in the public interest

* (1:40:27) Interdisciplinary academic work, disciplinary purity, power in academia

* (1:46:54) How we can help legitimize and support interdisciplinary work

* (1:50:07) Outro

Links:

* Seth’s Linktree and Twitter

* Resources

* Attention, moral skill, and algorithmic recommendation

* Catastrophic AI Risk slides


Get full access to The Gradient at thegradientpub.substack.com/subscribe