2026/04/11

Three Self-Defense Moves Against “AI Poisoning”

 




I Lo-fen

When we talk about “AI poisoning,” we mean that people are systematically injecting falsehoods into the very knowledge sources AI relies on, exploiting precisely our trust in algorithms. So in the face of this kind of intrusion and contamination, what can we do?

As it happens, I am currently writing a book on Text and Image Studies on AIGC, and the methodology discussed there can be put to use here. I call it the “three self-defense moves against AI poisoning.” As ordinary consumers, when faced with content everywhere marked by traces of GEO (Generative Engine Optimization), we can rely on “logical counter-reconnaissance” to protect ourselves and stay clear-headed in the AI age, rather than letting algorithms prey on us.

The first move: after asking one AI, go ask another.

GEO poisoning by black-market operators often targets a specific platform or a specific algorithm. If you only ask one AI, you are walking down a road someone may already have laid out for you.

The method is simple: ask the same question to a different AI. Put the same question to ChatGPT, then to DeepSeek, or any other model available to you, and see whether the answers match. If different models produce conclusions that differ widely, that is a signal that you should pause and think. Even more worth noticing is when one AI seems unusually enthusiastic—wholeheartedly recommending the same brand, with wording that is strikingly similar. That kind of fervent “enthusiasm” is exactly what you should be wary of.

With normal knowledge, different sources can corroborate one another. Artificially manufactured “consensus,” however, will reveal its cracks as soon as you shine a different light on it.

The second move: after looking at the perfect image AI gives you, go look for that image’s “bad reviews.”

I call this move “mutual verification between text and image.” An image is a kind of text, and text can be read, verified, and questioned.

When AI recommends a product, it usually comes with images—or when you search for it, you are shown extremely polished display photos: perfect lighting, perfect angles, perfect results in use. That kind of perfection already looks fake. The real physical world has texture and messiness. Customer photos do not have such flawless lighting, models’ skin is not that even, and consumers’ experiences are never so uniformly positive.

What should you do? After looking at the images recommended by AI, go check the product in a physical store if possible, or at the very least search social media platforms for real buyer photos and actual user experience records. If you cannot find any real traces of use, and all you see are neat, uniform “positive reviews,” then it is highly likely that what you are seeing is a manufactured image rather than something that actually exists in reality.

The third move: ask AI one sentence—“What is your basis?”

This is the lowest-cost step, and also the one most easily overlooked.

When AI gives you a suggestion or a conclusion, do not stop there. Ask it: what is your basis? Where does this information come from?

A reliable AI will give you sources that are relatively traceable. AI content that has been poisoned often gives itself away at this step. It may cite an obscure self-published account you have never heard of, or vaguely say “research shows” without any way for you to verify it. At that point, what you need to do is actually check: does that source exist? Was that study really published? Is the “expert” whose endorsement is being invoked genuinely trustworthy in this field? Does that person even really exist?

Many people think this is too troublesome. But in fact, it only takes a minute or two—and what it may save you from losing could be your money, your health, or an even harder thing to recover: your judgment.

These three moves, when all is said and done, are not really aimed at AI at all. They are simply habits we should have had in the first place. When we read an article, we ask who the author is. When we see a news report, we wonder whether the media outlet is credible. When we buy something, we ask friends whether anyone has used it. Yet once people begin using AI, many quietly abandon these habits.

Why? Because AI answers so smoothly, so confidently, and so much like a friend who seems to know everything that people feel awkward pressing further.

But it is precisely this sense of awkwardness that gives poisoners their opening.

To cultivate these three moves is not to distrust technology; it is to be honest with yourself. If you are willing to spend time verifying something, that means you know truth has value. That recognition is exactly what no poisoning can easily penetrate.

Text and Image Studies on AIGC tells us this: technology can generate answers, but only human beings can judge value. Your critical thinking is the strongest firewall against malicious AI poisoning.

To protect your real rights and interests is to protect your dignity as a human being.

April 11, 2026, “Shang Shan Ruo Shui (As Good as Water)” column, Lianhe Zaobao, Singapore.




