
Episode 35 — Transparency and Explainability

31:10
 
Manage episode 505486186 series 3689029
Content provided by Jason Edwards. All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and provided directly by Jason Edwards or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.

AI systems are powerful, but when their outputs cannot be understood, they risk losing trust. This episode explores transparency and explainability as core qualities for responsible AI. We begin by distinguishing between transparency — openness about how systems are designed and trained — and explainability, which focuses on how specific decisions or predictions are made. White-box models like decision trees and linear regression are contrasted with black-box systems like deep neural networks, which achieve high accuracy but resist easy interpretation. Post-hoc techniques such as LIME and SHAP are introduced as tools for interpreting complex models, while documentation practices like model cards and datasheets add accountability.
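
As a quick illustration of the post-hoc techniques named above, the sketch below runs SHAP over a small scikit-learn model. The episode itself does not include code, so the dataset, model, and library calls here are illustrative assumptions rather than anything discussed on air.

    # Illustrative sketch only: a tree-based classifier explained with SHAP,
    # assuming the scikit-learn and shap packages are installed.
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Train a moderately opaque model on a toy dataset.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Attribute each prediction to individual input features.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:5])

    # shap_values now holds per-feature contributions for the first five rows;
    # plots such as shap.summary_plot(shap_values, X.iloc[:5]) visualize them.

LIME takes a similar post-hoc route, fitting a simple local surrogate model around one prediction at a time to approximate the black box's behavior in that neighborhood.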

We also consider why explainability matters in practice. In healthcare, clinicians need to understand AI recommendations for patient safety. In finance, lending models must be explainable to comply with laws that protect consumers from discrimination. In government, algorithmic decisions that affect rights and opportunities must be transparent to uphold democratic accountability. Challenges include balancing interpretability with performance, ensuring explanations are meaningful to non-technical users, and avoiding superficial “explanations” that obscure deeper problems. By the end, listeners will understand that transparency and explainability are not optional extras — they are prerequisites for building AI systems that are trustworthy, auditable, and aligned with human values. Produced by BareMetalCyber.com, where you’ll find more cyber prepcasts, books, and information to strengthen your certification path.


49 episodes

