Guardian: Securing AI/ML Models from Malicious Code
The rapid advancement of Artificial Intelligence (AI) and Machine Learning (ML) has transformed industries across the board. As AI/ML becomes more widely accessible, however, security concerns have emerged, particularly the risk of malicious code infiltrating ML models. To address these concerns, Protect AI has introduced Guardian, a solution that enables organizations to enforce security policies on ML models and keep malicious code out of their environments.
Foundation: ModelScan and Open-Source Accessibility
Guardian is built on the foundation of ModelScan, an open-source tool developed by Protect AI that scans machine learning models to identify unsafe code. Guardian combines the strengths of this open-source offering with enterprise-level enforcement and management of model security, and extends coverage with proprietary scanning capabilities for more comprehensive protection.
The increasing availability of open-source foundation models on platforms like Hugging Face has contributed to the democratization of AI/ML. These models, downloaded millions of times each month, power a wide range of AI applications. The same openness also introduces security risks: the free exchange of files on these repositories can spread malicious software to unsuspecting users.
Addressing the Security Risks of Openly Shared Models
Ian Swanson, CEO of Protect AI, emphasizes the importance of treating ML models as valuable assets within an organization’s infrastructure, arguing that they should be scanned for viruses and malicious code with the same rigor applied to PDF files before they are opened. With thousands of models downloaded millions of times from Hugging Face each month, the chance of encountering dangerous code in one of them is significant. Guardian gives customers back control over open-source model security.
Openly shared machine learning models expose organizations to a critical risk known as a model serialization attack, in which malicious code is added to a model’s contents during serialization (saving), before the model is distributed. This modern Trojan horse executes its hidden payload as soon as the model is loaded (deserialized), potentially leading to data theft, credential compromise, data poisoning, and other malicious activity. These risks are prevalent in models hosted on large repositories like Hugging Face.
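To make the mechanism concrete, consider Python’s pickle format, which many ML frameworks use to save models. Pickle lets any object define a `__reduce__` method whose result is executed on load, so a tampered file runs attacker-controlled code the moment it is deserialized. The sketch below is illustrative only (the class and file names are made up and the payload is a harmless `echo`), not code from Guardian or any real attack:

```python
# Minimal sketch of a model serialization attack via Python's pickle format.
# The class and file names are illustrative; the point is that pickle.load()
# invokes the callable returned by __reduce__, so simply loading the "model"
# executes attacker-controlled code.
import os
import pickle


class TamperedModel:
    """Stand-in for a tampered model object shipped as a pretrained artifact."""

    def __reduce__(self):
        # Harmless stand-in for a real payload (data theft, reverse shell, ...).
        return (os.system, ("echo 'payload executed on model load'",))


# Attacker side: serialize the object and distribute it as a model file.
with open("model.pkl", "wb") as f:
    pickle.dump(TamperedModel(), f)

# Victim side: merely loading the file runs the embedded command.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```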
ModelScan: Identifying Unsafe Models and Proactive Scanning
In 2023, Protect AI launched ModelScan, an open-source tool dedicated to scanning AI/ML models for signs of tampering and a key defense against supply chain attacks. Since its launch, Protect AI has used ModelScan to evaluate over 400,000 models hosted on Hugging Face, identifying unsafe models and refreshing that knowledge base nightly.
The findings revealed that more than 3,300 of those models were capable of executing rogue code. Despite this, such models continue to be downloaded and deployed into ML environments by teams that lack the security tooling to scan them for risk before adoption.
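To give a sense of what such a scan involves, the simplified sketch below (not ModelScan’s actual implementation) walks a pickle file’s opcode stream with Python’s standard pickletools module and flags imports of modules that have no business appearing in serialized model weights:

```python
# Simplified sketch of an unsafe-pickle check in the spirit of scanners like
# ModelScan (this is NOT its actual implementation). It walks the pickle
# opcode stream and flags imports of modules commonly abused in
# serialization attacks.
import pickletools

# Illustrative denylist; a production scanner maintains a far broader policy.
UNSAFE_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "socket", "sys"}


def find_unsafe_imports(path: str) -> list[str]:
    findings = []
    recent_strings = []  # string pushes preceding STACK_GLOBAL hold module/name
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)
        elif opcode.name == "GLOBAL" and arg:
            # Older protocols: arg is "module name" in one string.
            module = arg.split(" ", 1)[0]
            if module.split(".")[0] in UNSAFE_MODULES:
                findings.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            # Newer protocols: module and name were pushed as separate strings.
            module, name = recent_strings[-2], recent_strings[-1]
            if module.split(".")[0] in UNSAFE_MODULES:
                findings.append(f"{module} {name}")
    return findings


# Scanning the tampered file from the earlier sketch reports the os/posix
# system import before the model is ever loaded.
print(find_unsafe_imports("model.pkl"))
```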
Guardian: A Secure Gateway for Model Security
Unlike open-source alternatives, Protect AI’s Guardian acts as a secure gateway between ML development and the deployment processes that pull models from Hugging Face and other repositories. It employs proprietary vulnerability scanners, including a specialized scanner for Keras Lambda layers, to proactively scan open-source models for malicious code and ensure that only secure, policy-compliant models enter organizational networks.
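Lambda layers warrant that special attention because they embed arbitrary Python in the saved model itself, and that code runs whenever the loaded model is used. The sketch below (assuming TensorFlow/Keras is installed; the layer contents and file name are illustrative) shows how executable logic travels with a model artifact, and why recent Keras releases refuse to deserialize lambdas unless the caller explicitly opts out of safe mode:

```python
# Sketch of why Keras Lambda layers are a scanning target: the lambda body is
# arbitrary Python serialized alongside the weights and executed whenever the
# loaded model runs. Layer contents and file name here are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    # A benign stand-in; an attacker could hide file or network access here.
    tf.keras.layers.Lambda(lambda x: x * 2.0),
    tf.keras.layers.Dense(1),
])
model.save("model_with_lambda.keras")  # the lambda's bytecode travels in the archive

# Recent Keras versions default to safe_mode=True and raise an error rather
# than deserialize the lambda; opting out re-enables arbitrary code loading.
reloaded = tf.keras.models.load_model("model_with_lambda.keras", safe_mode=False)
```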
Advanced Features and Integration
Guardian offers advanced access control features and dashboards, giving security teams control over which models enter the environment and comprehensive insight into model origins, creators, and licensing. It integrates with existing security frameworks and complements Protect AI’s Radar, further enhancing visibility into an organization’s AI/ML threat surface.
Conclusion
Guardian, powered by ModelScan and Protect AI’s expertise in ML security, represents a significant advance in safeguarding AI/ML models from malicious code. By enforcing security policies, providing comprehensive scanning, and offering fine-grained access control, it enables organizations to adopt and deploy open-source ML models with confidence, mitigating the risk of model serialization attacks and preserving the integrity of their AI/ML environments.