News broke last week that OpenAI was hacked in early 2023.
Although the hacker didn’t gain access to the company’s code or AI systems (they only viewed internal employee forums), the breach is a reminder that AI models are among the hottest attack targets right now.
A host of machine learning security (MLSec) platforms have emerged to defend them, with some safeguarding models from outside threats like data poisoning and prompt injection, and others helping prevent employees from leaking sensitive data into AI tools.
MLSec platforms make up one of the fastest-growing markets in the generative AI infrastructure space, with their collective employee headcount more than doubling year-over-year as of May.
Below, we dig into the machine learning security market and what buyers need to know to navigate it.
Several forces are driving growth in the market, including:
Cyber threats becoming more sophisticated and frequent as a result of AI.
Enterprises’ widespread adoption of generative AI tools, introducing new vulnerabilities like employees feeding sensitive data into third-party AI models.
More stringent security regulation, like the SEC’s recent rules governing cyber risk management and incident disclosure.
Furthermore, many businesses lack the time or resources to build security solutions as quickly as they need them, as one software buyer we spoke with indicated.
But third-party security platforms carry data privacy risks of their own, especially for highly regulated industries like finance and healthcare.
To mitigate this risk, some startups offer ways to secure models without ever touching the underlying data. In our conversations with software buyers, this was cited as a key win reason for HiddenLayer.
The startup is a leader in the highly competitive market, as seen in the MLSec market report graphic below.
To understand what’s happening under the hood of individual vendors, buyers can track companies’ Mosaic scores — our proprietary algorithm measuring tech company health — over time and in relation to competitors.
For instance, take Patronus AI and Arthur, which both offer LLM-focused tools to anticipate threats, detect mistakes, and more.
Patronus AI, which was founded in 2023 by former Meta AI engineers, surpassed Arthur earlier this year on its Mosaic score, thanks to its growing commercial network (including a recent partnership with MongoDB) and its Series A raise in May.
However, Patronus AI is still early in its development and is validating its solution, as indicated by its Commercial Maturity score.
P.S. Securing models is just one piece of the puzzle. Explore the 130+ startups helping enterprises build and deploy AI models in our MLOps market map.