Cybersecurity savants have unearthed a pivotal security lapse in Replicate, an AI-as-a-service provider, which could have permitted nefarious entities to infiltrate proprietary AI frameworks and confidential data.
“Exploitation of this chink could have enabled illicit access to the AI prompts and outputs of all users on Replicate’s platform,” declared cloud security firm Wiz in a recently unveiled dossier.
The root of the problem lies in the typical packaging of AI models, which facilitates arbitrary code execution—a loophole that could be weaponized for cross-tenant assaults via a malicious model.
Replicate utilizes an open-source tool known as Cog to encapsulate and bundle machine learning models, deployable in self-hosted environments or on Replicate’s infrastructure.
Wiz demonstrated the creation of a malevolent Cog container, uploading it to Replicate, and subsequently employing it to execute remote code on the service’s infrastructure with enhanced privileges.
“We posit this technique of code execution is prevalent, with enterprises running AI models from dubious sources, disregarding the potential malice embedded in such code,” remarked security researchers Shir Tamari and Sagi Tzadik.
The assault method employed an extant TCP connection linked to a Redis server instance within a Kubernetes cluster hosted on the Google Cloud Platform, to inject arbitrary commands.
Furthermore, the centralized Redis server, functioning as a queue for managing manifold customer requests and responses, could be exploited to facilitate cross-tenant attacks by tampering with processes to introduce rogue tasks, potentially skewing the outputs of other clients’ models.
These rogue interventions not only compromise the sanctity of the AI models but also pose substantial risks to the precision and dependability of AI-generated results.
“An intruder could have interrogated the private AI models of clients, potentially unearthing proprietary knowledge or sensitive data integral to the model training,” the researchers noted. “Additionally, intercepting prompts might have revealed sensitive information, including personally identifiable information (PII).”
This vulnerability, responsibly disclosed in January 2024, has since been mitigated by Replicate. There is no evidence indicating the flaw was exploited in the wild to breach client data.
The revelation follows Wiz’s earlier disclosure, a month prior, of patched vulnerabilities in platforms like Hugging Face, which could have allowed threat actors to elevate privileges, access other clients’ models, and even seize control of continuous integration and continuous deployment (CI/CD) pipelines.
“Malicious models pose a significant threat to AI ecosystems, particularly for AI-as-a-service providers, as attackers might leverage these models to execute cross-tenant attacks,” the researchers concluded.
“The potential repercussions are catastrophic, as assailants could potentially access millions of private AI models and applications stored within AI-as-a-service providers.”