
Content provided by Bruce Bracken. All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and provided directly by Bruce Bracken or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.

Unlocking Backdoor AI Poisoning with Dmitrijs Trizna

46:53

Manage episode 428128796 series 3486243

Dmitrijs Trizna, Security Researcher at Microsoft, joins Nic Fillingham on this week's episode of The BlueHat Podcast. Dmitrijs explains his role at Microsoft, which focuses on AI-based cyber threat detection for Kubernetes and Linux platforms. He explores the complex landscape of securing AI systems, focusing on the emerging challenges of Trustworthy AI, and delves into how threat actors exploit vulnerabilities through techniques like backdoor poisoning, using gradual, benign-looking inputs to deceive AI models. Dmitrijs highlights the multidisciplinary approach required for effective AI security, combining AI expertise with rigorous security practices. He also discusses the resilience of gradient-boosted decision trees against such attacks and shares insights from his recent presentation at Blue Hat India, where he noted a strong interest in AI security.

In This Episode You Will Learn:

  • The concept of Trustworthy AI and its importance in today's technology landscape
  • How threat actors exploit AI vulnerabilities using backdoor poisoning techniques
  • The role of frequency and unusual inputs in compromising AI model integrity

Some Questions We Ask:

  • Could you elaborate on the resilience of gradient-boosted decision trees in AI security?
  • What interdisciplinary approaches are necessary for effective AI security?
  • How do we determine acceptable thresholds for AI model degradation in security contexts?

Resources:

View Dmitrijs Trizna on LinkedIn

View Wendy Zenone on LinkedIn

View Nic Fillingham on LinkedIn

Related Microsoft Podcasts:

Discover and follow other Microsoft podcasts at microsoft.com/podcasts

The BlueHat Podcast is produced by Microsoft and distributed as part of the N2K media network.


41 episodes

