
Content provided by Demetrios Brinkmann. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Demetrios Brinkmann or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.

Detecting Harmful Content at Scale // Matar Haller // #246

51:27
 
 

Manage episode 428042439 series 3241972

Matar Haller is the VP of Data & AI at ActiveFence, where her teams own the end-to-end automated detection of harmful content at scale, regardless of the abuse area or media type. The work they do here is engaging, impactful, and tough, and Matar is grateful for the people she gets to do it with.

AI For Good - Detecting Harmful Content at Scale // MLOps Podcast #246 with Matar Haller, VP of Data & AI at ActiveFence.

// Abstract
One of the biggest challenges facing online platforms today is detecting harmful content and malicious behavior. Platform abuse poses brand and legal risks, harms the user experience, and often represents a blurred line between online and offline harm. So how can online platforms tackle abuse in a world where bad actors are continuously changing their tactics and developing new ways to avoid detection?

// Bio
Matar Haller leads the Data & AI Group at ActiveFence, where her teams are responsible for the data, algorithms, and infrastructure that fuel ActiveFence's ability to ingest, detect, and analyze harmful activity and malicious content at scale in an ever-changing, complex online landscape. Matar holds a Ph.D. in Neuroscience from the University of California at Berkeley, where she recorded and analyzed signals from electrodes surgically implanted in human brains. Matar is passionate about expanding leadership opportunities for women in STEM fields and has three children who surprise and inspire her every day.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
activefence.com
https://www.youtube.com/@ActiveFence

--------------- ✌️Connect With Us ✌️ ---------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Matar on LinkedIn: https://www.linkedin.com/company/11682234/admin/feed/posts/

Timestamps:
[00:00] Matar's preferred coffee
[00:13] Takeaways
[01:39] The talk that stood out
[06:15] Online hate speech challenges
[08:13] Evaluate harmful media API
[09:58] Content moderation: AI models
[11:36] Optimizing speed and accuracy
[13:36] Cultural reference AI training
[15:55] Functional Tests
[20:05] Continuous adaptation of AI
[26:43] AI detection concerns
[29:12] Fine-Tuned vs Off-the-Shelf
[32:04] Monitoring Transformer Model Hallucinations
[34:08] Auditing process ensures accuracy
[38:38] Testing strategies for ML
[40:05] Modeling hate speech deployment
[42:19] Improving production code quality
[43:52] Finding balance in Moderation
[47:23] Model's expertise: Cultural Sensitivity
[50:26] Wrap up


389 episodes

