Noma Security Lands $100 Million Series B to Fortify Autonomous AI Agents
The year is 2025, and the world of artificial intelligence is buzzing with a new kind of power: agentic AI. These aren’t just chatbots that answer your questions; they are autonomous agents, software entities that can take initiative, run complex workflows, and adapt their strategies without needing constant human direction. Imagine AI agents streamlining your company’s supply chain, personalizing customer experiences in real-time, or even assisting in groundbreaking scientific research. Agentic AI is transforming industries like finance, retail, and healthcare by automating tasks that were once thought impossible for machines. But with this power comes a new set of cybersecurity challenges: the very autonomy that makes these AI agents so valuable also makes them a potential target for malicious actors if not properly secured and governed.

Recognizing this critical need, Noma Security, a company dedicated to safeguarding the burgeoning field of AI, has raised $100 million in Series B funding. This investment, led by Evolution Equity Partners and bolstered by continued support from existing investors Ballistic Ventures and Glilot Capital Partners, signals a major commitment to addressing the complex security and governance demands of this rapidly evolving AI landscape. [1, 2, 3, 4]
Noma Security’s Mission: Building Trust in AI at Speed
Noma Security’s core mission is to instill confidence in the widespread adoption of AI technologies, allowing businesses to leverage AI at the rapid pace that today’s markets demand. [5] The company’s unified platform is meticulously designed to make AI security and governance as seamless and automatic as the use of AI itself. [5] They understand that traditional security tools, which were built for static applications and human users, are simply not equipped to handle the dynamic and autonomous nature of AI agents. [6] Noma has pioneered a new approach, offering comprehensive, end-to-end solutions that secure the entire data and AI lifecycle, from the initial development stages all the way through to production environments. [7, 8] This holistic approach tackles critical blind spots that often go unnoticed by traditional application security (AppSec) teams when they deal with data and AI processes. [9] Noma’s platform integrates robust security measures across data supply chains, Machine Learning Operations (MLOps), and open-source AI components. This not only enhances visibility but also significantly improves an organization’s overall AI security posture, ensuring compliance with emerging AI regulations as well. [9]
The Growing Imperative for AI Agent Security
The rapid adoption of agentic AI presents a unique set of security challenges that traditional cybersecurity frameworks are simply not built to handle. [10, 6, 11] Unlike generative AI tools, which primarily respond to user prompts, AI agents can proactively initiate tasks and make decisions autonomously. [12] This autonomy dramatically expands the potential attack surface, opening doors to novel vulnerabilities and exploits. [6] For example, AI agents often require access to sensitive user data, critical APIs, and various enterprise applications, making them prime targets for malicious actors. [6] The inherent complexity of their internal workings, which can involve intricate sequences of processes managed by Large Language Models (LLMs) for tasks like prompt reformatting and planning, further complicates security efforts. [10] Moreover, the fact that AI agents can operate across a variety of environments—from development and deployment to ongoing execution—adds another layer of complexity that needs careful management.
Key Risks and Vulnerabilities of Agentic AI
The inherent nature of AI agents exposes them to a range of specific threats that demand specialized security solutions. These risks include:
Prompt Injection and Manipulation
One of the most pervasive threats is prompt injection, where attackers embed deceptive or hidden instructions within user input or contextual data to hijack the agent’s behavior. Because malicious instructions can blend into legitimate-looking input, these attacks are difficult to detect early and can lead to the leakage of confidential data or the execution of unauthorized actions. [13] Keeping sensitive credentials like API tokens out of prompts is also crucial, as traditional Data Loss Prevention (DLP) tools struggle with the unstructured and varied nature of prompt data. [13]
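To make the idea concrete, a first line of defense against injection can be sketched as a simple pattern filter on incoming input. This is a minimal, purely illustrative sketch (the patterns and function name are assumptions, not Noma's implementation); real defenses layer model-based classifiers on top of heuristics like these:

```python
import re

# Illustrative phrases commonly seen in injection attempts. A production
# system would use far richer, model-assisted detection.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]

def flag_possible_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)
```

A filter like this only catches the crudest attacks, which is precisely why runtime behavioral monitoring (discussed later in the article) matters.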
Data Exposure and Privacy Concerns
AI agents’ ability to access and process vast amounts of organizational data creates significant data security and privacy risks. Users may unknowingly share sensitive or proprietary information with these agents without a clear understanding of what data is safe to disclose. [12] This can lead to inadvertent data exposure or breaches, especially in large organizations where controlling data flows is complex. [15] Furthermore, sending data to external servers for processing raises concerns about data residency, retention, and compliance with regulations like GDPR or HIPAA. [12]
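One common mitigation for inadvertent disclosure is redacting sensitive values before data ever leaves the organization. The sketch below is a hedged illustration (the regexes and rule names are assumptions; real DLP uses context-aware detection rather than fixed patterns):

```python
import re

# Hypothetical redaction rules; fixed regexes are shown only to
# illustrate the concept of pre-transmission scrubbing.
REDACTION_RULES = {
    "api_token": re.compile(r"\b(sk|tok)_[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matches of each rule with a labeled placeholder."""
    for label, pattern in REDACTION_RULES.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```

Scrubbing data before it reaches an external model also helps with residency and retention concerns, since the sensitive values never leave the boundary in the first place.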
Memory Poisoning and Data Integrity
Attackers can exploit the adaptability of AI agents by manipulating their memory with false or malicious data, a technique known as memory poisoning. [13] This corrupts what the agent “remembers,” leading it to take incorrect or unsafe actions, often without triggering alerts. [13] Validating all data sources, particularly those from user-generated inputs, is a key defense against this threat. [13]
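The defense named above, validating data sources before they reach agent memory, can be sketched as a gate on memory writes. Everything here (the trust list, field names, function) is a hypothetical illustration of the principle, not a real API:

```python
# Hypothetical trust tiers; in practice these would be derived from
# provenance metadata, not a hard-coded set.
TRUSTED_SOURCES = {"internal_kb", "verified_api"}

def validate_memory_entry(entry: dict) -> bool:
    """Admit an entry into agent memory only if it carries the
    expected fields and originates from a trusted source."""
    required = {"source", "content", "timestamp"}
    if not required.issubset(entry):
        return False
    if entry["source"] not in TRUSTED_SOURCES:
        # Reject user-generated or unknown-origin data outright.
        return False
    return isinstance(entry["content"], str)
```

The key design choice is that untrusted input may still be *used* in a single turn, but is never *persisted* where it could silently steer future decisions.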
Unauthorized Access and Privilege Misuse
AI agents typically operate with the same permission level as the user who runs them, unless explicitly restricted. [15] Over-privileged users could configure agents to perform tasks on servers where they hold administrative rights, creating new attack surfaces, data breaches, or unauthorized system access. [15] Credential hijacking, where attackers gain unauthorized access to agent credentials (like passwords or tokens), is another significant risk, potentially allowing misuse of sensitive systems or data. [15]
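The remedy implied above is least privilege: giving an agent an explicit allowlist of actions instead of letting it inherit its user's full permissions. A minimal sketch, with entirely hypothetical action names:

```python
class ScopedAgent:
    """Restrict an agent to an explicit allowlist of actions,
    rather than the full permission set of the user running it."""

    def __init__(self, allowed_actions: set[str]):
        self.allowed_actions = allowed_actions

    def execute(self, action: str, *args) -> str:
        if action not in self.allowed_actions:
            # Deny by default; anything not allowlisted is refused.
            raise PermissionError(f"Action '{action}' is not permitted")
        return f"executed {action}"
```

Scoping agents this way also limits the blast radius of credential hijacking: a stolen token for a narrowly scoped agent cannot be used to reach administrative functions.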
The Challenge of “Shadow AI Agents”
A significant concern is the emergence of “Shadow AI agents”—unauthorized and unseen AI agents operating without proper IT or security oversight. [11] Organizations often lack visibility into these agents, which can be deployed by development teams or arrive through SaaS applications and operating system tools, leaving them to operate unchecked and introduce threats in unexpected places. [11] Without proper IT and security processes, the adoption of these agents can occur without adequate controls.
Noma Security’s Strategic Approach to Mitigation
Noma Security’s platform is engineered to confront these multifaceted challenges head-on. [5, 4] Its capabilities include:
Comprehensive AI Discovery and Governance
The platform eliminates blind spots by providing continuous discovery and inventory of all AI resources. This includes code, data pipelines, models, runtime applications, and third-party agents. [17, 4] This comprehensive inventory is crucial for establishing a baseline understanding of the AI landscape within an organization.
Proactive AI Security Risk Management
Noma enhances an organization’s AI security posture by continuously scanning for infrastructure misconfigurations, supply chain vulnerabilities, model risks, and compliance gaps. [17, 4] This proactive approach aims to identify and remediate security risks before they can be exploited.
AI Runtime Protection
The platform offers real-time monitoring and control for autonomous AI systems. It is designed to detect and block prompt attacks, harmful content, sensitive data leaks, privacy violations, and rogue AI agent actions as they occur. [17, 4] This runtime protection acts as a crucial safeguard against immediate threats.
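At its simplest, runtime protection of this kind interposes a policy check between the agent and the outside world. The sketch below is a deliberately reduced illustration of the pattern (not Noma's product); it shows only one detector, blocking responses that would leak a known secret:

```python
def guard_agent_output(output: str, known_secrets: set[str]) -> str:
    """Withhold an agent response at runtime if it would leak a
    known secret; real platforms chain many such detectors."""
    for secret in known_secrets:
        if secret in output:
            return "[BLOCKED: response withheld by runtime policy]"
    return output
```

In a real deployment this kind of interposition point would also host prompt-attack, harmful-content, and privacy detectors, so that a single chokepoint enforces all runtime policies.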
AI Compliance Simplified
Noma Security helps align enterprise AI security with established security and compliance frameworks, including the OWASP Top 10 for LLMs, MITRE ATLAS, and emerging AI regulations such as the EU AI Act. [13, 17] This ensures that AI deployments meet necessary regulatory and policy requirements.
Funding and Future Growth
The $100 million Series B funding round, led by Evolution Equity Partners, underscores the significant market demand and investor confidence in Noma Security’s mission. [1, 2, 18, 3, 4] This investment follows a successful $32 million Series A round in late 2024, bringing the total capital raised to $132 million. [19, 18, 8] The company plans to use these funds to accelerate platform development, triple its R&D capabilities, and scale its sales and customer success operations globally. [19, 1, 18, 3, 4] A key focus will be on increasing its employee base, particularly in the United States, to bolster go-to-market operations, including sales, marketing, and customer success. [19] Noma Security also intends to strengthen its product, R&D, and research teams, with a notable presence in Tel Aviv. [19, 1] The company has demonstrated remarkable growth, increasing its Annual Recurring Revenue (ARR) by over 1,300% in the past year with a growing portfolio of customers across diverse industries. [1, 2, 18, 3, 4]
Industry Recognition and Leadership
Noma Security has quickly established itself as a leader in the AI security category, recognized for its rapid growth and comprehensive platform. [1, 2, 18] The company has garnered support from prominent investors and industry leaders, including CISOs from major corporations like Google DeepMind, McAfee, and BNP Paribas, who recognize the critical need for robust AI security solutions. [5, 8] Noma has also been acknowledged by Gartner as a leader in AI Trust, Risk, and Security Management (AI TRiSM). [5, 8] This recognition underscores Noma’s pivotal role in enabling enterprises to adopt AI innovation confidently and securely. [1, 2, 18, 3, 4]
The Evolving Landscape of AI Governance and Safety
The escalating capabilities of AI agents necessitate a strong emphasis on governance, control, and safety. [5] Principles such as transparency, controllability, accountability, safety, and value alignment are paramount in ensuring AI agents operate within desired parameters and align with human intentions. [20] This robust governance framework is not merely a compliance measure but a fundamental requirement for AI systems to remain beneficial rather than dangerous. [20] As the complexity and autonomy of AI agents increase, the quality of governance will increasingly determine their positive or negative impact. [20] The development of “Guardian Agents”—specialized agents designed to oversee and manage the behavior of other AI agents—is emerging as a key strategy to ensure AI actions align with organizational goals and comply with safety, security, and ethical standards. [20] Noma Security’s platform inherently supports these governance principles by providing the necessary visibility, control, and enforcement mechanisms. [16]
Conclusion: Securing the Future of Autonomous AI
Noma Security’s significant funding round marks a pivotal moment in the journey towards widespread, safe, and secure adoption of agentic AI. [5, 3] The company’s commitment to providing a unified platform for AI and agent security addresses a critical gap in the current cybersecurity landscape. [3] By empowering enterprises with the tools to discover, manage, and protect their AI assets, Noma is enabling organizations to harness the transformative power of AI without compromising security or compliance. [4] As AI agents become increasingly integrated into the fabric of business operations, Noma Security is poised to play a vital role in setting the standard for AI governance and ensuring that the future of autonomous intelligence is both innovative and secure. [5, 9, 3]