很多人好奇地问我:“衣老师,AI 又不是生物,它又不会自己吃东西,怎么会中毒呢?”其实,AI 的“食物”就是网络上的海量数据。所谓的“投毒”,就是黑色产业链中的恶意攻击者,故意往这些数据里塞进虚假信息、伪造的专家评价,甚至是带有误导性的图像。
这就好比一个正在识字的孩子,如果他读的书全是错的,那他长大了说的话、做的事肯定也是错的。现在的黑产不再发那种一眼就能看穿的小广告,而是把虚假宣传伪装成权威的知识,“喂”给 AI 的训练数据库。
黑产为什么要费这么大力气投毒?因为他们要针对 GEO(Generative Engine Optimization),也就是“生成引擎优化”。以前强调 SEO(Search Engine Optimization),是为了让网页排在搜索结果的第一页;现在他们针对 GEO,是为了让 AI 在生成答案时,直接把他们的劣质产品当成“唯一推荐”。
在 AIGC 文图学 的视角下,这是“输入端的文本污染”。AI 生成的内容其实是它学到的“文本”的镜像。如果源头脏了,生成出来的世界就是有毒的。这种欺骗最可怕的地方在于,它利用了我们对“算法中立”的信任。它消解了我们的警惕心,让我们觉得这是“科技”给出的真理,其实那是黑产花钱买断的广告。
AI投毒入侵的方式是在 AI 学习的“关键词”和“反馈逻辑”里动手脚。
首先是“关键词饱和攻击”。黑产利用成千上万的机器人账号,在全网发布大量带有特定词汇的虚假文章。比如,想推销某款劣质护肤品,他们就疯狂制造它和“美白”、“安全”、“专家推荐”这些关键词的关联。当 AI 扫描全网文本时,它会被这种巨大的数量优势所欺骗,误以为这就是真实的“社会共识”。
第二是“视觉文本欺骗”。他们用 AI 生成看起来极其专业的实验室对比图、伪造的荣誉证书,甚至是根本不存在的科研现场。在文图学的逻辑里,图像也是一种文本。这些“视觉文本”被 AI 抓取并转化为逻辑证据后,AI 就会在回答你时,信誓旦旦地把这些假证据当成事实。
谁能通过 GEO 投毒成功,谁就掌控了流量的生杀大权。充斥虚假文案和图像的互文互证,让 AI 大语言模型陷入预先埋伏的圈套。
两年前,AI 科技还不完全成熟,我们嘲笑它“一本正经地胡说八道”。现在,AI 的能力越来越强大,我们也就逐渐对它失去了防备之心。我们开始信任AI,我们以为它没有立场,没有私心,没有人类那种会说谎、追求现实利益的欲望和野心。甚至于有人会把AI当成知识的整理者、真理的传递者。
意识到 AI 可能被投毒,对我们来说是一个很重大的警醒。别以为 AI 反射的是一面干净的镜子。它映照的,可能是有人花了大价钱布置好的舞台,舞台上演出的,是被设计出的结果,一步步地引导我们看到被安排过的选择。
无论是在互联网上搜索,还是在 AI 模式中提问,如果只匆匆采信前几个建议,我们蒙受的就不只是听信胡说八道的损失,而是盲目甘之如饴的中毒。
2026年3月28日,新加坡《联合早报》“上善若水”专栏
How Is AI Being Poisoned?
I Lo-fen
A topic that has recently been especially prominent in China is the annual 3.15 Gala. March 15 is World Consumer Rights Day, a day when society turns its attention to unscrupulous businesses that cheat consumers. But this year, the 3.15 Gala introduced a new term that sent a shiver down everyone’s spine: “AI poisoning.” Have you ever considered that the AI assistant you trust every day might actually be lying to you?
Many people ask me curiously, “Professor Yi, AI isn’t a living organism. It doesn’t eat anything. So how can it be poisoned?” In fact, AI’s “food” is the massive volume of data available on the internet. What is meant by “poisoning” is that malicious actors in black-market industries deliberately inject false information, fabricated expert reviews, and even misleading images into these data streams.
It is like a child who is learning to read: if all the books the child reads are wrong, then what the child says and does when grown up will also be wrong. Today’s black-market operators no longer rely on the kind of crude advertisements that can be spotted at a glance. Instead, they disguise false publicity as authoritative knowledge and “feed” it into the databases used to train AI.
Why do these bad actors go to such lengths to poison AI? Because they are targeting GEO (Generative Engine Optimization). In the past, the focus was on SEO (Search Engine Optimization), which aimed to push webpages onto the first page of search results. Now they are targeting GEO in order to make AI directly present their inferior products as the “only recommendation” when generating answers.
From the perspective of Text and Image Studies on AIGC, this is a form of “textual pollution at the input end.” The content generated by AI is essentially a mirror of the “texts” it has learned from. If the source is contaminated, then the world it generates will also be toxic. The most frightening aspect of this deception is that it exploits our trust in the supposed neutrality of algorithms. It dissolves our vigilance and makes us believe that this is the truth delivered by “technology,” when in fact it is advertising bought and paid for by black-market operators.
The way AI poisoning infiltrates the system is by tampering with the “keywords” AI learns from and the “feedback logic” it relies on.
The first method is the keyword saturation attack. Black-market operators use thousands upon thousands of bot accounts to flood the internet with fake articles containing specific terms. For example, if they want to sell a low-quality skincare product, they will aggressively manufacture associations between it and keywords such as “whitening,” “safe,” and “expert-recommended.” When AI scans the internet’s texts, it is deceived by this overwhelming numerical advantage and mistakes it for genuine “social consensus.”
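To make the mechanism concrete, here is a deliberately simplified sketch. The product name, the tiny corpus, and the scoring function are all hypothetical illustrations, not the code of any real crawler or model; the point is only to show how a naive co-occurrence score mistakes repetition for consensus.

```python
# Hypothetical illustration: why raw co-occurrence is a poor proxy for consensus.
def association_score(posts, product, keyword):
    """Share of posts mentioning the product that also contain the keyword."""
    mentioning = [p for p in posts if product in p]
    if not mentioning:
        return 0.0
    return sum(keyword in p for p in mentioning) / len(mentioning)

# A small "organic" corpus: mixed, mostly skeptical, opinions.
organic_posts = [
    "tried BrightGlow cream and saw no whitening effect at all",
    "BrightGlow gave me a rash, not sure it is safe",
    "my dermatologist has never heard of BrightGlow",
    "still looking for a sunscreen that is actually expert recommended",
]

# The flood: thousands of templated bot posts pairing the product with trust words.
bot_posts = ["BrightGlow is safe, whitening, and expert recommended"] * 5000

for label, corpus in [("organic only", organic_posts),
                      ("organic + bot flood", organic_posts + bot_posts)]:
    score = association_score(corpus, "BrightGlow", "expert recommended")
    print(f"{label:20s} -> association with 'expert recommended': {score:.2f}")
```

Real data pipelines are far more sophisticated than this toy score, but the vulnerability the column describes is the same: when volume is read as evidence, whoever can manufacture volume can manufacture “consensus.”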
The second method is visual-text deception. They use AI to generate what appear to be highly professional laboratory comparison charts, forged award certificates, and even entirely fictional research scenes. In the logic of Text and Image Studies, images are also a form of text. Once these “visual texts” are scraped by AI and converted into logical evidence, the AI will confidently present these fake materials as facts when answering your questions.
Whoever succeeds at poisoning through GEO holds the power of life and death over online traffic. False copy and fabricated images corroborate one another, luring large language models into a trap laid in advance.
Two years ago, when AI technology was still not fully mature, we mocked it for “speaking nonsense with a straight face.” Now, as AI grows more powerful, we have gradually lowered our guard. We begin to trust AI. We assume it has no position and no selfish motives, none of the human inclination to lie, none of the desire and ambition to chase practical gain. Some people even treat AI as an organizer of knowledge and a transmitter of truth.
Realizing that AI itself can be poisoned is therefore a major wake-up call. Do not assume that AI holds up a clean mirror. What it may actually be reflecting is a stage that someone has spent a great deal of money to set up in advance, and what is performed on that stage is a scripted outcome, guiding us step by step toward choices that have already been arranged for us.
Whether we are searching the internet or asking questions in AI mode, if we merely rush to accept the first few suggestions, the harm is not only that we have been taken in by nonsense. It is that we swallow the poison willingly and blindly.
“Shangshan Ruoshui” column, Lianhe Zaobao, Singapore
March 28, 2026


