Eric Schwitzgebel on user perception of the moral status of AI
“I call this the emotional alignment design policy. So the idea is that corporations, if they create sentient machines, should create them so that it's obvious to users that they're sentient. And so they evoke appropriate emotional reactions to sentient users. So you don't create a sentient machine and then put it in a bland box that no one will have emotional reactions to. And conversely, don't create a non-sentient machine that people will attach to so much and think it's sentient that they'd be willing to make excessive sacrifices for this thing that isn't really sentient.”
- Eric Schwitzgebel
Why should AI systems be designed so as not to confuse users about their moral status? What would make an AI system's sentience or moral standing clear? Are there downsides to treating an AI as not sentient even if it’s not sentient? What happens when some theories of consciousness disagree about AI consciousness? Have the developments in large language models in the last few years come faster or slower than Eric expected? Where does Eric think we will see sentience first in AI if we do?
Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside, specializing in philosophy of mind and moral psychology. His books include Describing Inner Experience? Proponent Meets Skeptic (with Russell T. Hurlburt), Perplexities of Consciousness, A Theory of Jerks and Other Philosophical Misadventures, and most recently The Weirdness of the World. He blogs at The Splintered Mind.
Topics discussed in the episode:
- Introduction (0:00)
- Introduction to “AI systems must not confuse users about their sentience or moral status” (3:14)
- Not confusing experts (5:30)
- Not confusing general users (9:12)
- What would make an AI system's sentience or moral standing clear? (13:21)
- Are there downsides to treating an AI as not sentient even if it’s not sentient? (16:33)
- How would we implement this solution at a policy level? (25:19)
- What happens when some theories of consciousness disagree about AI consciousness? (28:24)
- How does this approach to uncertainty in AI consciousness relate to Jeff Sebo’s approach? (34:15)
- Introduction to “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” (36:38)
- How does the indicator properties approach account for factors relating to consciousness that we might be missing? (39:37)
- What was the process for determining what indicator properties to include? (42:58)
- Advantages of the indicator properties approach (44:49)
- Have the developments in large language models in the last few years come faster or slower than Eric expected? (46:25)
- Where does Eric think we will see sentience first in AI if we do? (50:17)
- Are things like grounding or embodiment essential for understanding and consciousness? (53:35)
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
All episodes in this series:
- Eric Schwitzgebel on user perception of the moral status of AI (57:47)
- Raphaël Millière on large language models (1:49:27)
- Matti Wilks on human-animal interaction and moral circle expansion (1:06:15)
- Kurt Gray on human-robot interaction and mind perception (59:10)
- Thomas Metzinger on a moratorium on artificial sentience development (1:50:44)
- Tobias Baumann of the Center for Reducing Suffering on global priorities research and effective strategies to reduce suffering (1:16:25)
- Tobias Baumann of the Center for Reducing Suffering on moral circle expansion, cause prioritization, and reducing risks of astronomical suffering in the long-term future (1:18:40)
- Jo Anderson of Faunalytics and Saulius Šimčikas of Rethink Priorities on research for effective animal advocacy (1:35:20)
- Ajay Dahiya of The Pollination Project on funding grassroots animal advocacy and inner transformation (1:43:11)
- Oscar Horta of the University of Santiago de Compostela on how we can best help wild animals (1:19:38)
- Oscar Horta of the University of Santiago de Compostela on why we should help wild animals (1:28:50)
- Leah Garcés of Mercy For Animals on factory farm investigations, long-term strategy, and animal advocacy during COVID-19 (1:35:58)
- Frank Baumgartner of UNC-Chapel Hill on policy dynamics, lobbying, and issue framing (1:51:10)
- Elliot Swartz of the Good Food Institute on the bottlenecks to the scale-up of cultured meat and plant-based meat (2:21:25)
- Laila Kassam of Animal Think Tank on popular protest movements, mass arrests, and publicity stunts (1:37:51)
- Jayasimha Nuggehalli on capacity building and animal welfare in Asia (1:53:55)
- Lisa Feria of Stray Dog Capital on impact investing and animal-free food tech entrepreneurship (1:43:10)
- Christie Lagally of Rebellyous Foods on scaling up high-quality plant-based foods (1:08:11)
- Kristof Dhont of University of Kent on intergroup contact research and research careers (1:54:51)
- Pei Su of ACTAsia on humane education in China (1:10:53)
- Kevin Schneider of the Nonhuman Rights Project on using litigation to expand the moral circle (2:12:05)
- Ria Rehberg of Veganuary on driving institutional change through online campaigns (1:46:37)