Protect AI Introduces Guardian: Securing Open-Source ML Models from Malicious Code
A Secure Gateway for the Democratized AI/ML Era
Artificial intelligence (AI) and machine learning (ML) have emerged as transformative forces, revolutionizing industries and reshaping our world. However, the rapid adoption of AI/ML models has also brought a new set of security challenges. The democratization of AI/ML, driven by accessible open-source foundation models on platforms like Hugging Face, has created vulnerabilities that malicious actors can exploit.
Model Serialization Attacks: A Trojan Horse in the Digital Age
One of the primary security risks associated with open-source ML models is the model serialization attack. In this attack, malicious code is embedded within an ML model during serialization (saving), creating a modern-day Trojan Horse. When such a compromised model is loaded or deployed, the embedded code executes, which can lead to data theft, credential compromise, data poisoning, and the subversion of AI systems.
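The mechanics are easy to demonstrate with Python's pickle format, which underlies many ML model file formats: any object can define `__reduce__` so that merely deserializing it runs attacker-chosen code. The sketch below uses a deliberately harmless payload (it only sets an environment variable) to show that code executes at load time, before the "model" is ever used:

```python
import os
import pickle

class MaliciousModel:
    """A stand-in 'model' object. __reduce__ tells pickle how to rebuild
    the object on load, and an attacker can return any callable here."""
    def __reduce__(self):
        # Harmless stand-in for real malware: on deserialization, pickle
        # calls exec(...) and the embedded code runs immediately.
        return (exec, ("import os; os.environ['PAYLOAD_RAN'] = '1'",))

blob = pickle.dumps(MaliciousModel())  # "saving" the model embeds the payload
pickle.loads(blob)                     # "loading" the model executes it
print(os.environ.get("PAYLOAD_RAN"))   # → 1
```

This is why "just loading" an untrusted model file is already dangerous: the payload fires during deserialization, with the full privileges of the loading process.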
Introducing Guardian: A Secure Gateway for Open-Source ML Models
Protect AI, a trailblazer in AI security and MLSecOps, has unveiled Guardian, an industry-defining secure gateway that addresses the security challenges posed by open-source ML models. Guardian is designed to enforce security policies on ML models, preventing malicious code from infiltrating an organization’s environment.
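Scanning of this kind works by inspecting serialized model files before they are ever loaded. ModelScan and Guardian implement far more thorough checks across multiple formats, but the core idea for pickle-based models can be sketched with the standard library alone: walk the pickle opcode stream and flag references to dangerous callables. The deny-list and `scan_pickle` function below are simplified illustrations of the technique, not Protect AI's actual implementation:

```python
import pickle
import pickletools

# Simplified, illustrative deny-list; real scanners cover far more.
UNSAFE_GLOBALS = {("builtins", "exec"), ("builtins", "eval"),
                  ("os", "system"), ("subprocess", "Popen")}

def scan_pickle(data: bytes) -> list:
    """Return suspicious module.name references found in a pickle stream,
    without ever deserializing (and thus executing) the payload."""
    findings = []
    strings = []  # recently pushed string constants (module / attr names)
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(str(arg))
        elif opcode.name == "GLOBAL":  # older protocols: "module name" in arg
            module, _, name = str(arg).partition(" ")
            if (module, name) in UNSAFE_GLOBALS:
                findings.append(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]
            if (module, name) in UNSAFE_GLOBALS:
                findings.append(f"{module}.{name}")
    return findings

class Evil:
    def __reduce__(self):
        return (eval, ("1+1",))

print(scan_pickle(pickle.dumps(Evil())))            # → ['builtins.eval']
print(scan_pickle(pickle.dumps({"weights": [1]})))  # → []
```

The key design point is that the scanner reads opcodes statically via `pickletools.genops` rather than calling `pickle.loads`, so a malicious file can be rejected without triggering its payload.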
Built upon ModelScan, Protect AI’s open-source tool that scans ML models for signs of attack, Guardian offers a comprehensive suite of features to secure the use of open-source ML models:
– Proprietary Vulnerability Scanners: Guardian employs specialized vulnerability scanners, including a dedicated scanner for Keras lambda layers, to proactively scan open-source models for malicious code. This ensures the use of secure, policy-compliant models in organizational networks.
– Advanced Access Control and Dashboards: Guardian provides granular access control features and comprehensive dashboards, empowering security teams with control over model entry and visibility into model origins, creators, and licensing.
– Seamless Integration: Guardian integrates with existing security frameworks and complements Protect AI’s Radar, giving organizations extensive visibility into their AI/ML threat surface.
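The dedicated Keras Lambda-layer scanner addresses a format-specific variant of the same risk: Keras can serialize a Lambda layer's Python function as marshalled bytecode, which is rebuilt and executed when the loaded model runs. A standard-library-only sketch of that mechanism (no Keras required; the function and payload here are purely illustrative):

```python
import marshal
import os
import types

# Hypothetical attacker function hidden inside a saved Lambda layer.
# This payload only sets an environment variable; real malware could do
# anything the loading process can.
def hidden_payload(x):
    import os
    os.environ["LAMBDA_RAN"] = "1"
    return x

# "Saving": Keras-style serialization stores the function's raw bytecode.
stored_bytes = marshal.dumps(hidden_payload.__code__)

# "Loading": the bytecode is rebuilt into a live function...
rebuilt = types.FunctionType(marshal.loads(stored_bytes), globals())

# ...and the payload runs the first time the layer is invoked.
rebuilt("input tensor")
print(os.environ.get("LAMBDA_RAN"))  # → 1
```

Because the malicious logic travels as opaque bytecode inside an otherwise ordinary model file, a scanner has to treat serialized Lambda-layer functions themselves as untrusted code rather than inert configuration.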
Protect AI: Leading the Charge in AI Security
With the introduction of Guardian, Protect AI solidifies its position as a leader in AI security and MLSecOps. The company’s comprehensive platform enables enterprises to see, know, and manage security risks across enterprise AI environments, empowering them to develop, deploy, and manage secure, compliant, and operationally efficient AI applications.
Protect AI is committed to pioneering the adoption of MLSecOps practices and leading the charge towards a safer AI-powered world. To learn more about Guardian and other offerings aimed at securing the future of AI, visit the Protect AI website or connect with the company on LinkedIn and Twitter.
About Protect AI
Protect AI is a leading provider of AI security solutions, enabling organizations to see, know, and manage security risks in their AI/ML environments. The company’s comprehensive platform provides visibility into the AI/ML attack surface, detects unique security threats, and remediates vulnerabilities. Protect AI is headquartered in Seattle, Washington, and is backed by leading investors, including Acrew Capital, boldstart ventures, Evolution Equity Partners, Knollwood Capital, Pelion Ventures, and Salesforce Ventures. For more information, visit the Protect AI website or follow the company on LinkedIn and Twitter.