‘LLMjacking’ Attack Uses Stolen Cloud Credentials to Target Cloud-Hosted AI Models

Cybersecurity researchers have uncovered a new attack technique that uses stolen cloud credentials to target cloud-hosted large language model (LLM) services, with the goal of selling that access to other threat actors.

The Sysdig Threat Research Team, which discovered the campaign, has dubbed the technique ‘LLMjacking.’

“Once initial access was obtained, the attackers exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to reach local LLM models hosted by cloud providers,” said security researcher Alessandro Brucato. “In this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted.”

The intrusion path begins with the exploitation of a system running a vulnerable version of the Laravel Framework (e.g., CVE-2021-3129), after which Amazon Web Services (AWS) credentials are stolen and used to access the LLM services.

The tooling includes an open-source Python script that checks and validates keys for a range of offerings from Anthropic, AWS Bedrock, Google Cloud Vertex AI, Mistral, and OpenAI, among others.

“No actual LLM queries were run during the validation phase,” Brucato noted. “Instead, just enough was done to figure out what the credentials were capable of and any associated quotas.”
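
As a rough illustration, a credential's capabilities can often be probed without sending a single prompt. The sketch below, which assumes the boto3 SDK and a pair of AWS keys, simply lists the Bedrock foundation models the credentials can see; it is a minimal example of such a "no-inference" check, not the script Sysdig describes.

```python
# Minimal sketch of a "no-inference" capability check against AWS Bedrock.
# Assumes boto3 is installed and credentials are supplied explicitly; this is
# an illustrative example, not the key-checking script described by Sysdig.
import boto3
from botocore.exceptions import ClientError

def check_bedrock_access(access_key: str, secret_key: str, region: str = "us-east-1"):
    client = boto3.client(
        "bedrock",
        region_name=region,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    try:
        # ListFoundationModels reveals which models the credentials can see
        # without invoking any of them, so no inference costs are incurred.
        models = client.list_foundation_models()["modelSummaries"]
        return [m["modelId"] for m in models]
    except ClientError as err:
        # AccessDenied or similar means the credentials lack Bedrock permissions.
        print(f"Bedrock check failed: {err.response['Error']['Code']}")
        return []
```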

The key checker also integrates with another open-source tool called oai-reverse-proxy, which acts as a reverse proxy server for LLM APIs. This suggests the threat actors intend to sell access to the compromised accounts without exposing the underlying credentials.
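
The underlying pattern is the same one legitimate API gateways use: the proxy holds the API key server-side and forwards client requests upstream, so callers of the proxy never see the credential. The snippet below is a conceptual sketch of that pattern in Flask; the endpoint path, upstream URL, and environment variable are illustrative placeholders, and it is not how oai-reverse-proxy itself is built.

```python
# Conceptual sketch of an LLM reverse proxy: clients call this service, and the
# upstream API key stays on the server. Names and paths are illustrative
# placeholders, not the oai-reverse-proxy implementation.
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
UPSTREAM_URL = "https://api.openai.com/v1/chat/completions"  # example upstream
UPSTREAM_KEY = os.environ["UPSTREAM_API_KEY"]                # held server-side only

@app.route("/v1/chat/completions", methods=["POST"])
def proxy_chat():
    # Forward the client's request body upstream, injecting the server-held key.
    upstream = requests.post(
        UPSTREAM_URL,
        headers={"Authorization": f"Bearer {UPSTREAM_KEY}"},
        json=request.get_json(force=True),
        timeout=60,
    )
    # Relay the upstream status code and JSON body back to the client.
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8080)
```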

“If the attackers were gathering an inventory of useful credentials and wanted to sell access to the available LLM models, a reverse proxy like this would allow them to monetize their efforts,” Brucato explained.

The attackers have also been observed querying logging settings, likely in an attempt to avoid detection while using the compromised credentials to run their prompts.
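
From the defender's side, this reconnaissance can leave traces of its own. The sketch below searches CloudTrail for recent calls to Bedrock's GetModelInvocationLoggingConfiguration API, one plausible way an attacker would check whether model invocations are being logged; the specific event name is an assumption, as the report does not spell out which settings were queried.

```python
# Sketch of a defensive check: look for recent CloudTrail events in which
# someone queried Bedrock's model-invocation logging configuration. The event
# name used here is an assumption about the reconnaissance described above.
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

events = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "GetModelInvocationLoggingConfiguration",
    }],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
)

for event in events.get("Events", []):
    # Each hit shows who asked whether Bedrock invocation logging is enabled.
    print(event["EventTime"], event.get("Username"), event["EventName"])
```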

The development marks a departure from attacks that focus on prompt injection and model poisoning; here, attackers monetize their access to LLMs while the owner of the cloud account unknowingly foots the bill.

According to Sysdig, an attack of this kind could rack up more than $46,000 per day in LLM consumption costs for the victim.

“The use of LLM services can be expensive, depending on the model and the volume of tokens being fed to it,” Brucato said. “By maxing out quota limits, attackers can also block the compromised organization from using models legitimately, disrupting business operations.”
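
A back-of-the-envelope calculation shows how quickly token charges compound; the per-token prices and request volumes below are hypothetical placeholders, not Sysdig's figures.

```python
# Back-of-the-envelope cost estimate for sustained LLM abuse. Prices and
# volumes are hypothetical placeholders chosen only to show how per-token
# charges compound; they are not Sysdig's figures.
PRICE_PER_1K_INPUT_TOKENS = 0.008    # USD, assumed
PRICE_PER_1K_OUTPUT_TOKENS = 0.024   # USD, assumed

requests_per_day = 500_000           # assumed sustained abuse volume
input_tokens_per_request = 1_000
output_tokens_per_request = 500

daily_cost = requests_per_day * (
    input_tokens_per_request / 1_000 * PRICE_PER_1K_INPUT_TOKENS
    + output_tokens_per_request / 1_000 * PRICE_PER_1K_OUTPUT_TOKENS
)
print(f"Estimated daily cost: ${daily_cost:,.0f}")  # -> roughly $10,000 per day
```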

Given these findings, organizations are advised to enable detailed logging and monitor cloud logs closely for signs of suspicious or unauthorized activity, and to maintain effective vulnerability management to prevent initial access in the first place.
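
As a concrete starting point on the logging side, the sketch below checks whether Bedrock model invocation logging is enabled in an account; treat it as a minimal example of the kind of verification this guidance implies, not a complete monitoring setup.

```python
# Minimal check that Bedrock model invocation logging is enabled, as a small
# example of the logging guidance above; a real deployment would also alert
# on changes to this configuration and feed invocation logs into a SIEM.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

config = bedrock.get_model_invocation_logging_configuration().get("loggingConfig")

if not config:
    print("WARNING: Bedrock model invocation logging is not configured.")
else:
    # Logging can be delivered to CloudWatch Logs, S3, or both.
    print("CloudWatch logging:", bool(config.get("cloudWatchConfig")))
    print("S3 logging:", bool(config.get("s3Config")))
```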
