    Unveiling the Security Risks Posed by Third-Party ChatGPT Plugins

    Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT could act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data.

    New research published by Salt Labs has uncovered security flaws both within ChatGPT itself and within its plugin ecosystem that could allow attackers to install malicious plugins without users' consent and hijack accounts on third-party websites such as GitHub.

    ChatGPT plugins, as the name implies, are tools designed to run on top of the large language model (LLM), giving it access to up-to-date information, the ability to run computations, or access to third-party services.

    OpenAI has since introduced GPTs, bespoke versions of ChatGPT tailored to specific use cases, which reduce reliance on third-party services. Effective March 19, 2024, ChatGPT users will no longer be able to install new plugins or create new conversations with existing plugins.

    One critical flaw uncovered by Salt Labs involves exploiting the OAuth workflow to trick a user into installing an arbitrary plugin, taking advantage of the fact that ChatGPT does not validate that the user actually initiated the plugin installation.

    This flaw could effectively allow attackers to intercept and exfiltrate all data shared by the victim, which may include proprietary information.
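    The missing control here resembles a standard OAuth anti-CSRF check: an approval code should only be accepted when it completes a flow the user demonstrably started. The sketch below illustrates that idea with a per-session state value; the function names and storage are hypothetical, not OpenAI's actual implementation.

```python
import hmac
import secrets

# Minimal sketch of the anti-CSRF control whose absence enables the attack.
# All names here are illustrative, not OpenAI's actual implementation.
_pending_states: dict[str, str] = {}  # session_id -> state we issued

def begin_install(session_id: str) -> str:
    """Start a plugin-install flow by minting an unguessable, session-bound state."""
    state = secrets.token_urlsafe(32)
    _pending_states[session_id] = state
    return state  # would be embedded in the authorization URL

def finish_install(session_id: str, returned_state: str, code: str) -> bool:
    """Accept the approval code only if the callback matches a flow we started."""
    expected = _pending_states.pop(session_id, None)
    if expected is None or not hmac.compare_digest(expected, returned_state):
        return False  # no user-initiated flow on record: reject the code
    # ... only now exchange `code` for a token and bind the plugin ...
    return True
```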

    Salt Labs also discovered flaws in PluginLab that could be weaponized by threat actors to conduct zero-click account takeover attacks, allowing them to gain control of an organization's account on third-party websites such as GitHub and access its source code repositories.

    Explaining the vulnerability, security researcher Aviad Carmel said, “[The endpoint] ‘auth.pluginlab[.]ai/oauth/authorized’ does not authenticate the request, which enables the attacker to substitute another memberId (i.e., the victim) and obtain a code representing the victim. With that code, the attacker can use ChatGPT to access the victim’s GitHub account.”

    The victim’s memberId can be obtained by querying the endpoint “auth.pluginlab[.]ai/members/requestMagicEmailCode.” There is no evidence that any user data has been compromised using this flaw.
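    In other words, the endpoint trusted a client-supplied identifier. A minimal sketch of the kind of server-side check such an endpoint needs is shown below; the names and code-minting logic are illustrative assumptions, not PluginLab's actual code.

```python
import secrets

# Hypothetical sketch of the server-side check the vulnerable endpoint lacked:
# an auth code must be minted for the authenticated caller, never for an
# arbitrary client-supplied memberId.
_issued_codes: dict[str, str] = {}  # code -> memberId it represents

def issue_auth_code(authenticated_member_id: str, requested_member_id: str) -> str:
    """Issue an OAuth-style code only for the member who actually logged in."""
    if authenticated_member_id != requested_member_id:
        # Skipping this comparison is the reported flaw: an attacker could swap
        # in a victim's memberId and receive a code representing the victim.
        raise PermissionError("memberId does not match the authenticated session")
    code = secrets.token_urlsafe(24)
    _issued_codes[code] = requested_member_id
    return code
```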

    Furthermore, several plugins, including Kesem AI, have been found to contain an OAuth redirection manipulation bug that could allow an attacker to steal the account credentials associated with the plugin itself by sending a specially crafted link to the victim.
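    A common defense against this class of bug is to validate the redirect target against a pre-registered allow-list before handing over the OAuth code. The following sketch illustrates that check; the allow-list entry is a hypothetical example, not an actual registered callback.

```python
from urllib.parse import urlsplit

# Illustrative redirect-URI validation; the allow-list entry is hypothetical.
ALLOWED_REDIRECTS = {("chat.openai.com", "/aip/example-plugin/oauth/callback")}

def is_safe_redirect(redirect_uri: str) -> bool:
    """Accept only pre-registered HTTPS callbacks, so a crafted link cannot
    divert the OAuth code to an attacker-controlled destination."""
    parts = urlsplit(redirect_uri)
    return parts.scheme == "https" and (parts.hostname, parts.path) in ALLOWED_REDIRECTS
```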

    These revelations follow Imperva’s disclosure of two cross-site scripting (XSS) vulnerabilities in ChatGPT that could be chained to take over any account.

    In December 2023, security researcher Johann Rehberger demonstrated how malicious actors could create custom GPTs that phish for user credentials and transmit the stolen data to an external server.

    New Remote Keylogging Attack Targets AI Assistants

    These findings coincide with recent research on a side-channel attack against large language models (LLMs) that uses token length as a covert channel to extract encrypted responses from AI assistants over the web.

    A group of academics from Ben-Gurion University and the Offensive AI Research Lab explained, “LLMs generate and transmit responses as a series of tokens, with each token relayed sequentially from the server to the user as it is generated. While this transmission is encrypted, the sequential delivery exposes a new side-channel: the token-length side-channel. Despite encryption, the size of the packets can reveal the length of the tokens, potentially enabling network-based attackers to infer sensitive and confidential information shared in private AI assistant conversations.”

    This is achieved by means of a token inference attack, which involves training a model to translate token-length sequences back into their natural-language counterparts, thereby deciphering the encrypted traffic.

    In essence, the method boils down to intercepting real-time chat responses from an LLM provider, using the network packet headers to infer the length of each token, extracting and parsing the text segments, and leveraging a custom LLM to infer the full response.
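    As a rough illustration of the first stage, the toy sketch below recovers a token-length sequence from observed packet sizes. It assumes one token per packet and a fixed per-packet overhead; both are simplifying assumptions that a real attacker would have to calibrate per provider.

```python
# Toy reconstruction of the token-length sequence leaked by streamed packets.
# Assumes one token per packet and a fixed per-packet overhead -- both
# simplifications that the published attack must calibrate per provider.
FIXED_OVERHEAD = 29  # assumed bytes of framing/encryption overhead per packet

def token_lengths(packet_sizes: list[int]) -> list[int]:
    """Map each observed ciphertext size to the plaintext token length it leaks."""
    return [size - FIXED_OVERHEAD for size in packet_sizes if size > FIXED_OVERHEAD]

# The resulting length sequence is what a trained inference model would
# translate back into candidate response text.
print(token_lengths([34, 32, 36, 31]))  # -> [5, 3, 7, 2]
```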

    Two essential prerequisites for executing the attack are an AI chat client running in streaming mode and an adversary capable of capturing network traffic between the client and the AI chatbot.

    To blunt the effectiveness of such side-channel attacks, it is recommended that companies developing AI assistants apply random padding to obscure the actual length of tokens, transmit tokens in larger groups rather than individually, and send complete responses at once instead of token-by-token.
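    A minimal sketch of the first two mitigations might look like the following; the framing and constants are assumptions for illustration, not any vendor's actual scheme.

```python
import secrets
import struct

# Sketch of two suggested mitigations: random padding to hide each token's true
# length, and batched transmission so packets no longer map one-to-one to tokens.
MAX_PAD = 15  # up to this many random padding bytes per token (assumed value)
BATCH = 8     # tokens buffered per transmission (assumed value)

def frame_token(token: str) -> bytes:
    """Length-prefix the token and its pad so a receiver can strip both."""
    data = token.encode("utf-8")
    pad = secrets.token_bytes(secrets.randbelow(MAX_PAD + 1))
    return struct.pack("!HB", len(data), len(pad)) + data + pad

def batches(tokens: list[str]) -> list[bytes]:
    """Group framed tokens so packet boundaries no longer track token boundaries."""
    return [b"".join(frame_token(t) for t in tokens[i:i + BATCH])
            for i in range(0, len(tokens), BATCH)]
```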

    “Balancing security with usability and performance presents a complex challenge that requires careful consideration,” the researchers concluded.
