Ad
News
Blockchain security firm warns of AI code poisoning risk after OpenAI’s ChatGPT recommends scam API Blockchain security firm warns of AI code poisoning risk after OpenAI’s ChatGPT recommends scam API

Blockchain security firm warns of AI code poisoning risk after OpenAI’s ChatGPT recommends scam API

Blockchain security experts warn that the risk of AI poisoning has emerged, threatening trust in AI technologies.

Blockchain security firm warns of AI code poisoning risk after OpenAI’s ChatGPT recommends scam API

Cover art/illustration via CryptoSlate. Image includes combined content which may include AI-generated content.

Join Japan's Web3 Evolution Today

Yu Xian, founder of the blockchain security firm Slowmist, has raised alarms about a rising threat known as AI code poisoning.

This attack type involves injecting harmful code into the training data of AI models, which can pose risks for users who depend on these tools for technical tasks.

The incident

The issue gained attention after a troubling incident involving OpenAI’s ChatGPT. On Nov. 21, a crypto trader named “r_cky0” reported losing $2,500 in digital assets after seeking ChatGPT’s help to create a bot for Solana-based memecoin generator Pump.fun.

However, the chatbot recommended a fraudulent Solana API website, which led to the theft of the user’s private keys. The victim noted that within 30 minutes of using the malicious API, all assets were drained to a wallet linked to the scam.

[Editor’s Note: ChatGPT appears to have recommended the API after running a search using the new SearchGPT as a ‘sources’ section can be seen in the screenshot. Therefore, it does not seem to be a case of AI poisoning but a failure of the AI to recognize scam links in search results.]

AI scam link API (Source: X)
AI scam link API (Source: X)

Further investigation revealed this address consistently receives stolen tokens, reinforcing suspicions that it belongs to a fraudster.

The Slowmist founder noted that the fraudulent API’s domain name was registered two months ago, suggesting the attack was premeditated. Xian furthered that the website lacked detailed content, consisting only of documents and code repositories.

While the poisoning appears deliberate, no evidence suggests OpenAI intentionally integrated the malicious data into ChatGPT’s training, with the result likely coming from SearchGPT.

Implications

Blockchain security firm Scam Sniffer noted that this incident illustrates how scammers pollute AI training data with harmful crypto code. The firm said that a GitHub user, “solanaapisdev,” has recently created multiple repositories to manipulate AI models to generate fraudulent outputs in recent months.

AI tools like ChatGPT, now used by hundreds of millions, face increasing challenges as attackers find new ways to exploit them.

Xian cautioned crypto users about the risks tied to large language models (LLMs) like GPT. He emphasized that once a theoretical risk, AI poisoning has now materialized into a real threat. So, without more robust defenses, incidents like this could undermine trust in AI-driven tools and expose users to further financial losses.

Mentioned in this article
Posted In: , AI, Crime, Technology