
SAFE-AI: Fortifying the Future of AI Security
About this listen
This podcast explores MITRE's SAFE-AI framework, a comprehensive guide for securing AI-enabled systems, developed by authors including J. Kressel and R. Perrella. The framework builds on established NIST standards and the MITRE Adversarial Threat Landscape for Artificial Intelligence Systems (ATLAS)™ framework, emphasizing thorough evaluation of the risks introduced by AI technologies. The need for SAFE-AI arises from AI's inherent dependence on data and learning processes, which expands the attack surface through issues such as adversarial inputs, data poisoning, exploitation of automated decision-making, and supply chain vulnerabilities. By systematically identifying and addressing AI-specific threats and concerns across the Environment, AI Platform, AI Model, and AI Data elements, SAFE-AI strengthens security control selection and assessment processes to help ensure trustworthy AI-enabled systems.
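
To make the element-based review concrete, below is a minimal sketch (not part of the SAFE-AI specification itself) of how AI-specific concerns might be cataloged against the Environment, AI Platform, AI Model, and AI Data elements. The element names come from the description above; the example threats and candidate controls are illustrative assumptions only.

# Hypothetical illustration: cataloging AI-specific concerns by SAFE-AI element.
# Element names follow the episode description; the threats and candidate
# controls listed here are illustrative assumptions, not SAFE-AI content.
from dataclasses import dataclass, field

@dataclass
class Concern:
    element: str          # "Environment", "AI Platform", "AI Model", or "AI Data"
    threat: str           # e.g., an ATLAS-style technique
    candidate_controls: list[str] = field(default_factory=list)

concerns = [
    Concern("AI Data", "Training data poisoning",
            ["provenance checks", "data validation pipeline"]),
    Concern("AI Model", "Adversarial inputs / evasion",
            ["input sanitization", "adversarial robustness testing"]),
    Concern("AI Platform", "Supply chain compromise of ML dependencies",
            ["SBOM review", "pinned and signed packages"]),
    Concern("Environment", "Over-reliance on automated decision-making",
            ["human-in-the-loop review", "decision logging and audit"]),
]

# A simple assessment pass: group concerns by element so control selection
# can be reviewed element by element.
by_element: dict[str, list[Concern]] = {}
for c in concerns:
    by_element.setdefault(c.element, []).append(c)

for element, items in by_element.items():
    print(element)
    for c in items:
        print(f"  - {c.threat}: {', '.join(c.candidate_controls)}")
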
www.compliancehub.wiki/navigating-the-ai-security-landscape-a-deep-dive-into-mitres-safe-ai-framework-for-compliance
Sponsors: https://airiskassess.com
https://cloudassess.vibehack.dev