77 – Should AI be Explainable?
If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California State University, Chico and an M.Sc. in Ethics of Technology from the University of Twente. He is a founding member of the Foundation for Responsible Robotics and a member of the 4TU Centre for Ethics and Technology. Scott is skeptical of AI as a grand solution to societal problems and argues that AI should be boring.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).
Show Notes
Topics covered include:
- Why do people worry about the opacity of AI?
- What’s the difference between explainability and transparency?
- What’s the moral value or function of explainable AI?
- Must we distinguish between the ethical value of an explanation and its epistemic value?
- Why is it so technically difficult to make AI explainable?
- Will we ever have a technical solution to the explanation problem?
- Why does Scott think there is a Catch-22 involved in insisting on explainable AI?
- When should we insist on explanations and when are they unnecessary?
- Should we insist on using boring AI?
Relevant Links
- Scott’s webpage
- Scott’s paper “A Misdirected Principle with a Catch: Explicability for AI”
- Scott’s paper “The Value of Transparency: Bulk Data and Authorisation”
- “The Right to an Explanation Explained” by Margot Kaminski
- Episode 36 – Wachter on Algorithms and Explanations