Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a. MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today. Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.
Exploring Generative AI Risk Assessment and Regulatory Compliance
37:37
In this episode of the MLSecOps Podcast we have the honor of talking with David Rosenthal, Partner at VISCHER (Swiss Law, Tax & Compliance). David is also an author & former software developer, and lectures at ETH Zürich & the University of Basel. He has more than 25 years of experience in data & technology law and kindly joined the show to discuss…
MLSecOps Culture: Considerations for AI Development and Security Teams
38:44
In this episode, we had the pleasure of welcoming Co-Founder and CISO of Weights & Biases, Chris Van Pelt, to the MLSecOps Podcast. Chris discusses a range of topics with hosts Badar Ahmed and Diana Kelley, including the history of how W&B was formed, building a culture of security & knowledge sharing across teams in an organization, real-world ML …
Practical Offensive and Adversarial ML for Red Teams
35:24
Next on the MLSecOps Podcast, we have the honor of highlighting one of our MLSecOps Community members and Dropbox™ Red Teamers, Adrian Wood. Adrian joined Protect AI threat researchers, Dan McInerney and Marcello Salvati, in the studio to share an array of insights, including what inspired him to create the Offensive ML (aka OffSec ML) Playbook, an…
Expert Talk from RSA Conference: Securing Generative AI
25:42
In this episode, host Neal Swaelens (EMEA Director of Business Development, Protect AI) catches up with Ken Huang, CISSP at RSAC 2024 to talk about security for generative AI. Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast. Additional tools and resources to check out: Protect AI Radar: End-to-End AI Risk …
Practical Foundations for Securing AI
38:10
In this episode of the MLSecOps Podcast, we delve into the critical world of security for AI and machine learning with our guest Ron F. Del Rosario, Chief Security Architect and AI/ML Security Lead at SAP ISBN. The discussion highlights the contextual knowledge gap between ML practitioners and cybersecurity professionals, emphasizing the importance…
Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex
31:04
In this episode of the MLSecOps Podcast, host Neal Swaelens, along with co-host Oleksandr Yaremchuk, sit down with special guest Simon Suo, co-founder and CTO of LlamaIndex. Simon shares insights into the development of LlamaIndex, a leading data framework for orchestrating data in large language models (LLMs). Drawing from his background in the se…
AI Threat Research: Spotlight on the Huntr Community
31:48
Learn about the world’s first bug bounty platform for AI & machine learning, huntr, including how to get involved! This week’s featured guests are leaders from the huntr community (brought to you by Protect AI): Dan McInerney, Lead AI Threat Researcher; Marcello Salvati, Sr. Engineer & Researcher; Madison Vorbrich, Community Manager. Thanks for listen…
Securing AI: The Role of People, Processes & Tools in MLSecOps
37:16
In this episode of The MLSecOps Podcast hosted by Daryan Dehghanpisheh (Protect AI) and special guest-host Martin Stanley, CISSP (Cybersecurity and Infrastructure Security Agency), we delve into critical aspects of AI security and operations. This episode features esteemed guests, Gary Givental (IBM) and Kaleb Walton (FICO). The group's discussion …
ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance
35:30
In this episode, we delve into a hot topic in the bug bounty world: ReDoS (Regular Expression Denial of Service) reports. Inspired by reports submitted by the huntr AI/ML bug bounty community and an insightful blog piece by open source expert, William Woodruff (Engineering Director, Trail of Bits), this conversation explores: Are any ReDoS vulnerab…
Finding a Balance: LLMs, Innovation, and Security
41:56
In this episode of The MLSecOps Podcast, special guest, Sandy Dunn, joins us to discuss the dynamic world of large language models (LLMs) and the equilibrium of innovation and security. Co-hosts, Daryan “D” Dehghanpisheh and Dan McInerney talk with Sandy about the nuanced challenges organizations face in managing LLMs while mitigating AI risks. Exp…
Secure AI Implementation and Governance
38:37
In this episode of The MLSecOps Podcast, Nick James, CEO of WhitegloveAI, dives in with show host Chris King, Head of Product at Protect AI, to offer enlightening insights surrounding: - AI Governance - ISO (International Organization for Standardization) ISO/IEC 42001:2023, Information Technology, Artificial Intelligence Management System - Continu…
Risk Management and Enhanced Security Practices for AI Systems
38:08
In this episode of The MLSecOps Podcast, VP Security and Field CISO of Databricks, Omar Khawaja, joins the CISO of Protect AI, Diana Kelley. Together, Diana and Omar discuss a new framework for understanding AI risks, fostering a security-minded culture around AI, building the MLSecOps dream team, and some of the challenges that Chief Information S…
Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations
41:19
In this episode, co-hosts Badar Ahmed and Daryan Dehghanpisheh are joined by Drew Farris (Principal, Booz Allen Hamilton) and Edward Raff (Chief Scientist, Booz Allen Hamilton) to discuss themes from their paper, "You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks," co-authored with Michael Benaroch. Thanks for listening! Fin…
From Risk to Responsibility: Violet Teaming in AI; With Guest: Alexander Titus
43:20
In this episode, the founder and CEO of The In Vivo Group, Alexander Titus, joins show hosts Diana Kelley and Daryan Dehghanpisheh to discuss themes from his forward-thinking paper, "The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward," authored with Adam H. Russell. Thanks for listening! Find more epis…
Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems; With Guest: Martin Stanley, CISSP
39:45
*This episode is also available in video format! Click to watch the full YouTube video.* Welcome to Season 2 of The MLSecOps Podcast! In this episode, we joined Strategic Technology Branch Chief, Martin Stanley, CISSP, from the Cybersecurity and Infrastructure Security Agency (CISA), to celebrate 20 years of Cybersecurity Awareness Month, as well a…
AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 2)
42:28
*This episode is also available in video format! Click to watch the full YouTube video.* Welcome back, everyone, to The MLSecOps Podcast. We’re thrilled to have you with us for Part 2 of our two-part season finale, as we wrap up Season 1 and look forward to an exciting and revamped Season 2. In this two-part season recap, we’ve been revisiting some…
AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 1)
37:10
*This episode is also available in video format! Click to watch the full YouTube video.* Welcome to the final episode of the first season of The MLSecOps Podcast, brought to you by the team at Protect AI. In this two-part episode, we’ll be taking a look back at some favorite highlights from the season where we dove deep into machine learning securi…
A Holistic Approach to Understanding the AI Lifecycle and Securing ML Systems: Protecting AI Through People, Processes & Technology; With Guest: Rob van der Veer
29:25
Joining us for the first time as a guest host is Protect AI’s CEO and founder, Ian Swanson. Ian is joined this week by Rob van der Veer, a pioneer in AI and security. Rob gave a presentation at Global AppSec Dublin earlier this year called “Attacking and Protecting Artificial Intelligence” which was a large inspiration for this episode. In it, Rob …
ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt
35:33
This week we’re talking about the role of fairness in AI/ML. It is becoming increasingly apparent that incorporating fairness into our AI systems and machine learning models while mitigating bias and potential harms is a critical challenge. Not only that, it’s a challenge that demands a collective effort to ensure the responsible, secure, and equit…
Exploring AI/ML Security Risks: At Black Hat USA 2023 with Protect AI
35:20
Watch the video for this episode at: https://mlsecops.com/podcast/exploring-ai/ml-security-risks-at-black-hat-usa-2023 This episode of The MLSecOps Podcast features expert security leaders who sat down at Black Hat USA 2023 with team members from Protect AI to talk about various facets of AI and machine learning security: - What is the ov…
Everything You Need to Know About Hacker Summer Camp 2023
38:59
Welcome back to The MLSecOps Podcast for this week's episode, “Everything You Need to Know About Hacker Summer Camp 2023.” This week, our show is hosted by Protect AI's Chief Information Security Officer, Diana Kelley, and Diana talks with two more longtime security experts, Chloé Messdaghi and Dan McInerney, about all things related to what the se…
Privacy Engineering: Safeguarding AI & ML Systems in a Data-Driven Era; With Guest Katharine Jarmul
46:44
Welcome to The MLSecOps Podcast, where we dive deep into the world of machine learning security operations. In this episode, we talk with the renowned Katharine Jarmul. Katharine is a Principal Data Scientist at Thoughtworks, and the author of the popular new book, Practical Data Privacy. Katharine also writes a blog titled, Probably Private, where…
The Intersection of MLSecOps and DataPrepOps; With Guest: Jennifer Prendki, PhD
34:40
On this week’s episode from The MLSecOps Podcast, we have the pleasure of hearing from Dr. Jennifer Prendki, founder and CEO of Alectio - The DataPrepOps Company. Alectio’s name comes from a blend of the acronym “AL,” standing for Active Learning, and the Latin term for the word “selection,” which is “lectio.” In this episode, Dr. Prendki defines D…
The Evolved Adversarial ML Landscape; With Guest: Apostol Vassilev, NIST
30:30
In this episode, we explore the National Institute of Standards and Technology (NIST) white paper, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. The report is co-authored by our guest for this conversation, Apostol Vassilev, NIST Research Team Supervisor. Apostol provides insights into the motivations behind t…
Navigating the Challenges of LLMs: Guardrails AI to the Rescue; With Guest: Shreya Rajpal
39:16
In “Navigating the Challenges of LLMs: Guardrails to the Rescue,” Protect AI Co-Founders, Daryan Dehghanpisheh and Badar Ahmed, interview the creator of Guardrails AI, Shreya Rajpal. Guardrails AI is an open source package that allows users to add structure, type, and quality guarantees to the outputs of large language models (LLMs). In this highly…
Indirect Prompt Injections and Threat Modeling of LLM Applications; With Guest: Kai Greshake
36:14
This talk makes it increasingly clear: the time for machine learning security operations (MLSecOps) is now. In “Indirect Prompt Injections and Threat Modeling of LLM Applications” (transcript here -> https://bit.ly/45DYMAG), we dive deep into the world of large language model (LLM) attacks and security. Our conversation with esteemed cyber securi…
Responsible AI: Defining, Implementing, and Navigating the Future; With Guest: Diya Wynn
33:17
In this episode of The MLSecOps Podcast, Diya Wynn, Sr. Practice Manager in Responsible AI in the Machine Learning Solutions Lab at Amazon Web Services, shares her background and the motivations that led her to pursue a career in Responsible AI. Diya shares her passion for work related to diversity, equity, and inclusion (DEI), and how Responsible A…
ML Security: AI Incident Response Plans and Enterprise Risk Culture; With Guest: Patrick Hall
38:49
In this episode of The MLSecOps Podcast, Patrick Hall, co-founder of BNH.AI and author of "Machine Learning for High-Risk Applications," discusses the importance of “responsible AI” implementation and risk management. He also shares real-world examples of incidents resulting from the lack of proper AI and machine learning risk management, supportin…
AI Audits: Uncovering Risks in ML Systems; With Guest: Shea Brown, PhD
41:02
Shea Brown, PhD explores with us the “W’s” and security practices related to AI and algorithm audits. What is included in an AI audit? Who is requesting AI audits and, conversely, who isn’t requesting them but should be? When should organizations request a third party audit of their AI/ML systems and machine learning algorithms? Why should they do …
MLSecOps: Red Teaming, Threat Modeling, and Attack Methods of AI Apps; With Guest: Johann Rehberger
40:29
Johann Rehberger is an entrepreneur and Red Team Director at Electronic Arts. His career experience includes time with Microsoft and Uber, and he is the author of “Cybersecurity Attacks – Red Team Strategies: A practical guide to building a penetration testing program having homefield advantage” and the popular blog, EmbraceTheRed.com. In this epis…
MITRE ATLAS: Defining the ML System Attack Chain and Need for MLSecOps; With Guest: Christina Liaghati, PhD
39:48
This week The MLSecOps Podcast talks with Dr. Christina Liaghati, AI Strategy Execution & Operations Manager of the AI & Autonomy Innovation Center at MITRE. Chris King, Head of Product at Protect AI, guest-hosts with regular co-host D Dehghanpisheh this week. D and Chris discuss various AI and machine learning security topics with Dr. Liaghati, in…
Unpacking AI Bias: Impact, Detection, Prevention, and Policy; With Guest: Dr. Cari Miller, MBA, FHCA
39:22
What is AI bias and how does it impact both organizations and individual members of society? How does one detect if they’ve been impacted by AI bias? What can be done to prevent or mitigate it? Can AI/ML systems be audited for bias and, if so, how? The MLSecOps Podcast explores these questions and more with guest Cari Miller, Founder of the Center …
Just How Practical Are Data Poisoning Attacks? With Guest: Dr. Florian Tramèr
47:35
ETH Zürich's Assistant Professor of Computer Science, Dr. Florian Tramèr, joins us to talk about data poisoning attacks and the intersection of Adversarial ML and MLSecOps (machine learning security operations). Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast. Additional tools and resources to check out: P…
A Closer Look at "Adversarial Robustness for Machine Learning" With Guest: Pin-Yu Chen
38:39
In this episode of The MLSecOps Podcast, the co-hosts interview Pin-Yu Chen, Principal Research Scientist at IBM Research, about his book co-authored with Cho-Jui Hsieh, "Adversarial Robustness for Machine Learning." Chen explores the vulnerabilities of machine learning (ML) models to adversarial attacks and provides examples of how to enhance thei…
A Closer Look at "Securing AIML Systems in the Age of Information Warfare" With Guest: Disesdi Susanna Cox
30:50
Security researcher, AI/ML architect, and former political operative Disesdi Susanna Cox talks with us about her research, some of which can be accessed via her website: anglesofattack.io. Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast. Additional tools and resources to check out: Protect AI Radar: End-to…