Cybersecurity experts have recently identified nearly two dozen vulnerabilities across 15 popular machine learning (ML) open-source projects. These flaws, present in both server- and client-side configurations, reveal critical weaknesses that could expose organizations to significant cyber risks, according to a recent report from JFrog, a software supply chain security firm.
Among these findings, the server-side vulnerabilities pose particularly high risks, as they could empower attackers to seize control of essential infrastructure—such as model registries, ML databases, and ML pipelines—critical to an organization’s AI and ML operations.
The identified vulnerabilities were present in widely used ML tools like Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI. These flaws have been categorized to highlight various types of attacks, including remote hijacking of ML registries, compromising ML database frameworks, and overtaking ML pipelines.
Below is a breakdown of key security flaws identified:
- CVE-2024-7340 (CVSS score: 8.8): This directory traversal vulnerability in the Weave ML toolkit permits unauthorized access to read files across the entire filesystem. A low-level authenticated user could exploit this flaw to escalate their privileges to admin by accessing the “api_keys.ibd” file. This issue has been patched in Weave version 0.50.8.
- ZenML MLOps Framework: An access control weakness in ZenML allows a user with minimal permissions to upgrade their access to full admin privileges. By exploiting this flaw, attackers gain the ability to manipulate or access the Secret Store, despite the absence of a CVE identifier.
- CVE-2024-6507 (CVSS score: 8.1): Found in Deep Lake, an AI-optimized database, this command injection flaw could allow attackers to execute system commands while uploading a remote dataset from Kaggle, owing to improper input sanitization. The issue is resolved in version 3.9.11.
- CVE-2024-5565 (CVSS score: 8.1): In Vanna.AI, a prompt injection vulnerability opens the door for attackers to achieve remote code execution on the host system.
- CVE-2024-45187 (CVSS score: 7.1): Within Mage AI, an incorrect privilege assignment vulnerability grants guest users elevated privileges, allowing them to execute arbitrary code via the Mage AI terminal server. These privileges can remain active for up to 30 days after a user is marked for deletion.
- CVE-2024-45188, CVE-2024-45189, CVE-2024-45190 (CVSS scores: 6.5): These path traversal vulnerabilities in Mage AI allow remote users with “Viewer” access to read arbitrary files from the Mage server by exploiting the “File Content,” “Git Content,” and “Pipeline Interaction” functions.
According to JFrog, these vulnerabilities underscore the potential consequences of compromised MLOps pipelines, as these systems have extensive access to ML datasets, model training resources, and model deployment channels. Exploiting an ML pipeline could thus lead to severe breaches, including backdooring models or poisoning datasets.
This disclosure follows an earlier report by JFrog that identified over 20 vulnerabilities in MLOps platforms. Alongside these findings, cybersecurity researchers have introduced a defense framework called Mantis, designed to use prompt injection to neutralize cyber attacks against large language models (LLMs) with an effectiveness rate exceeding 95%.
A team from George Mason University described how Mantis works: upon detecting an automated cyber attack, it injects carefully crafted responses that cause the attacker’s LLM to sabotage its own operation. Mantis may even compromise the attacker’s machine by deploying vulnerable decoy services that attract the attacker and using dynamic prompt injections to destabilize the malicious LLM autonomously.
In an era where ML-driven systems are increasingly embedded into critical operations, understanding and addressing these vulnerabilities is vital for organizations aiming to safeguard their ML and AI assets.