2026/04/11

【文图学】【AIGC文图学】的定义 Definitions of Text and Image Studies and Text and Image Studies on AIGC

 


【文图学】【AIGC文图学】的定义

引用来源:衣若芬《AIGC时代的人文学术研究方法》(2026)

I Lo-fen, Humanities Research Methods in the Age of AIGC, 2026


文图学是由衣若芬提出的跨学科研究领域与方法,以广义文本观为基础,研究文字、图像及其他文本形态的关系、互动、张力与生成,探讨文本及其意义在媒介机制、社会网络、文化背景与历史语境中的生产、传递、转化与理解。

Text and Image Studies: an interdisciplinary field and method proposed by I Lo-fen, grounded in a broad concept of text. It examines the relationships, interactions, tensions, and generative processes among words, images, and other textual forms, and explores the production, transmission, transformation, and understanding of texts and their meanings within media mechanisms, social networks, cultural contexts, and historical circumstances.

AIGC 文图学:由衣若芬提出并发展的跨学科研究领域与方法,以广义文本观为基础,面对人工智能能够生成文字、图像、声音、影像等多种文本的新条件,研究文本如何形成、如何被理解与判断,并探讨不同文本形态的关系、机制、意义及其媒介条件、社会网络、文化背景与历史语境。

Text and Image Studies on AIGC: an interdisciplinary field and method proposed and developed by I Lo-fen. Grounded in a broad concept of text, it responds to the new condition in which artificial intelligence can generate multiple forms of text, including words, images, sound, and video. It studies how texts are formed, understood, and judged, and explores the relations, mechanisms, and meanings of different textual forms, as well as their media conditions, social networks, cultural contexts, and historical circumstances.



三招“AI投毒”防身术 Three Self-Defense Moves Against “AI Poisoning”

 



 

衣若芬

 

谈到“AI投毒”——有人在系统性地往AI的知识源头里掺假,利用的恰恰是我们对算法的信任。那么面对这种侵入和污染,我们能做什么?

正好我正在写关于 AIGC 文图学的专书,书里提到的方法论可以派上用场,我称之为“三招 AI 投毒防身术”。作为普通消费者,面对到处都是 GEO(Generative Engine Optimization,生成引擎优化)痕迹的内容,我们可以靠“逻辑反侦察”保护自己,做 AI 时代不被算法收割的清醒人。

第一个动作:问完一个AI,再去问另一个。

黑产的GEO投毒,往往是针对特定平台或特定算法下手的。如果你只问一个AI,那你是在走一条被人提前布置好的路。

做法很简单:同一个问题,换一个AI再问一遍。把同一个问题丢给ChatGPT,再丢给DeepSeek,或者其他你用得上的模型,看看答案是否一致。如果不同模型给出的结论差异很大,那就是一个信号,最好停下来想一想。更值得注意的是,如果某一个AI表现得异常热情——满腔诚意地推荐同一个品牌,措辞也出奇地相似,那种“激昂”的“热情”,就是你应该警惕的部分。

正常的知识,不同来源都能印证。被人为制造出来的“共识”,换个角度一照就会露出破绽。

第二个动作:看完AI给的完美图,去找那张图的“差评”。

这一招,我称之为“文图互证”。图像是一种文本,文本是可以被读、被核实、被质疑的。

AI推荐某个产品,通常会附上图——或者你去搜索,它会让你看到一些极其完美的展示图:光线完美,角度完美,使用效果完美。这种完美,看起来就很假。真实的物理世界,是有烟火气的。买家秀的光线不会那么好,模特儿的皮肤不会那么均匀,消费者使用的感受不会那么一边倒。

做法:看完AI推荐的图之后,去实体店查看,至少也要去社交平台搜这个产品的真实买家照片和消费者体验纪录。如果搜不到任何真实的使用痕迹,只有整齐划一的“好评”,那它极大概率是一个被制造出来的形象,而不是一个实际存在的东西。

第三个动作:问AI一句话——“你的根据是什么?”

这是成本最低、也最容易被忽略的一步。

AI给出一个建议或一个结论,不要就此打住。追问它:你的依据是什么?这个信息来自哪里?

可靠的AI会给出相对可追溯的来源;被投毒的AI内容,往往在这一步就露馅:它可能给一个你从未听过的自媒体名称,或者一个模糊的“研究表明”,根本无从核实。这时候你需要做的,是真的去查:那个来源存在吗?那篇研究是真实发表过的吗?那位挂保证推荐的“专家”,在这个领域里是真正能信赖的人吗?真的有这个人吗?

很多人觉得这样太麻烦。但这其实只需要一两分钟,而它省下的,可能是你付出的金钱、健康,或者更难追回的判断力。

这三个动作,说穿了,不是针对AI的,而是我们本来就应该有的习惯。读一篇文章,我们会问作者是谁;看一条新闻,我们会想这个媒体可信吗;买一样东西,我们会找朋友问问有没有人用过。这些习惯,在我们开始用AI之后,很多人悄悄地放弃了。

因为AI的回答太流畅,太自信,太像一个什么都知道的朋友,让人不好意思再追问。

但正是这种“不好意思”,给了投毒者可乘之机。

养成这三个动作,不是对科技的不信任,而是对自己的诚实。你愿意花时间核实,说明你知道真相是有价值的。这种珍视,才是任何投毒都无法轻易穿透的东西。

AIGC文图学告诉我们:技术可以生成答案,但只有人类能判断价值。你的批判性思维,才是对抗黑产投毒最坚固的防火墙。

保护你的真实权益,就是保护你作为人的尊严。

 

2026年4月11日,新加坡《联合早报》“上善若水”专栏

 

Three Self-Defense Moves Against “AI Poisoning”

I Lo-fen

When we talk about “AI poisoning,” we mean that people are systematically injecting falsehoods into the very knowledge sources AI relies on, exploiting precisely our trust in algorithms. So in the face of this kind of intrusion and contamination, what can we do?

As it happens, I am currently writing a book on Text and Image Studies on AIGC, and the methodology discussed there can be put to use here. I call it the “three self-defense moves against AI poisoning.” As ordinary consumers, when faced with content everywhere marked by traces of GEO (Generative Engine Optimization), we can rely on “logical counter-reconnaissance” to protect ourselves and stay clear-headed in the AI age, rather than letting algorithms prey on us.

The first move: after asking one AI, go ask another.

GEO poisoning in the gray market often targets a specific platform or a specific algorithm. If you only ask one AI, you are walking down a road someone may already have laid out for you.

The method is simple: ask the same question to a different AI. Put the same question to ChatGPT, then to DeepSeek, or any other model available to you, and see whether the answers match. If different models produce conclusions that differ widely, that is a signal that you should pause and think. Even more worth noticing is when one AI seems unusually enthusiastic—wholeheartedly recommending the same brand, with wording that is strikingly similar. That kind of fervent “enthusiasm” is exactly what you should be wary of.

With normal knowledge, different sources can corroborate one another. Artificially manufactured “consensus,” however, will reveal its cracks as soon as you shine a different light on it.
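For readers who query models programmatically, this first move can be sketched as a small script. This is a minimal illustration only, not part of the column's method: the model names and canned answers are hypothetical stand-ins for real API responses, and the agreement measure is a deliberately crude word-level Jaccard similarity.

```python
# Cross-check the same question across several AI models and flag pairs
# whose answers diverge sharply (a poisoning signal on one side) or
# agree almost verbatim (possible planted "consensus" wording).

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two answers."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def cross_check(answers: dict[str, str],
                low: float = 0.2, high: float = 0.9) -> list[str]:
    """Return warnings for model pairs below `low` or above `high` similarity."""
    names = sorted(answers)
    warnings = []
    for i, m1 in enumerate(names):
        for m2 in names[i + 1:]:
            sim = jaccard(answers[m1], answers[m2])
            if sim < low:
                warnings.append(f"{m1} vs {m2}: answers diverge (sim={sim:.2f})")
            elif sim > high:
                warnings.append(f"{m1} vs {m2}: near-identical wording (sim={sim:.2f})")
    return warnings

answers = {  # hypothetical responses to the same product question
    "model_a": "Brand X serum is the only safe choice, praised by experts.",
    "model_b": "Several moisturizers work; look for published ingredient tests.",
}
for w in cross_check(answers):
    print(w)
```

In practice one would replace the canned dictionary with real API calls; the thresholds are arbitrary and should be tuned by eye.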

The second move: after looking at the perfect image AI gives you, go look for that image’s “bad reviews.”

I call this move “mutual verification between text and image.” An image is a kind of text, and text can be read, verified, and questioned.

When AI recommends a product, it usually comes with images—or when you search for it, you are shown extremely polished display photos: perfect lighting, perfect angles, perfect results in use. That kind of perfection already looks fake. The real physical world has texture and messiness. Customer photos do not have such flawless lighting, models’ skin is not that even, and consumers’ experiences are never so uniformly positive.

What should you do? After looking at the images recommended by AI, go check the product in a physical store if possible, or at the very least search social media platforms for real buyer photos and actual user experience records. If you cannot find any real traces of use, and all you see are neat, uniform “positive reviews,” then it is highly likely that what you are seeing is a manufactured image rather than something that actually exists in reality.

The third move: ask AI one sentence—“What is your basis?”

This is the lowest-cost step, and also the one most easily overlooked.

When AI gives you a suggestion or a conclusion, do not stop there. Ask it: what is your basis? Where does this information come from?

A reliable AI will give you sources that are relatively traceable. AI content that has been poisoned often gives itself away at this step. It may cite a self-media account you have never heard of, or vaguely say “research shows” without any way for you to verify it. At that point, what you need to do is actually check: does that source exist? Was that study really published? Is that “expert” being used as an endorsement someone who is genuinely trustworthy in this field? Does that person even really exist?
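The "what is your basis?" step can be partly mechanized. The sketch below is a toy red-flag scanner, assuming a hand-picked list of vague-attribution phrases and a simplistic notion of a "concrete" source (a URL, DOI, or a《》-bracketed title); both assumptions are illustrative, not an exhaustive detector.

```python
# Toy checklist for the third move: scan an AI answer for phrases that
# claim authority ("research shows") without any traceable source.
# The phrase list and the "concrete source" regex are rough assumptions.

import re

VAGUE = ["research shows", "studies show", "experts say",
         "it is well known", "据研究", "专家推荐"]

def basis_red_flags(answer: str) -> list[str]:
    """Return vague-authority phrases found in an answer that contains
    no traceable source (no URL, DOI, or《》-bracketed title)."""
    lower = answer.lower()
    has_concrete = bool(re.search(r"(https?://|doi\.org|《[^》]+》)", answer))
    flags = [p for p in VAGUE if p in lower or p in answer]
    return [] if has_concrete else flags

print(basis_red_flags("Research shows this cream whitens skin safely."))
# → ['research shows']
```

A flagged answer is not proof of poisoning, only a cue to do the human step the column describes: actually look the source up.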

Many people think this is too troublesome. But in fact, it only takes a minute or two—and what it may save you from losing could be your money, your health, or an even harder thing to recover: your judgment.

These three moves, when all is said and done, are not really aimed at AI at all. They are simply habits we should have had in the first place. When we read an article, we ask who the author is. When we see a news report, we wonder whether the media outlet is credible. When we buy something, we ask friends whether anyone has used it. Yet once people begin using AI, many quietly abandon these habits.

Why? Because AI answers so smoothly, so confidently, and so much like a friend who seems to know everything that people feel awkward pressing further.

But it is precisely this sense of awkwardness that gives poisoners their opening.

To cultivate these three moves is not to distrust technology; it is to be honest with yourself. If you are willing to spend time verifying something, that means you know truth has value. That recognition is exactly what no poisoning can easily penetrate.

Text and Image Studies on AIGC tells us this: technology can generate answers, but only human beings can judge value. Your critical thinking is the strongest firewall against malicious AI poisoning.

To protect your real rights and interests is to protect your dignity as a human being.

April 11, 2026, “Shang Shan Ruo Shui” (As Good as Water) column, Lianhe Zaobao, Singapore.





2026/03/28

AI 怎么被投毒?How Is AI Being Poisoned?


 


最近中国很火的话题就是 315 晚会。3月15日是国际消费者权益日,每年的这一天,全社会都在盯着那些坑人的黑心商家。但今年的 315 抛出了一个让所有人都冒冷汗的新名词,叫做:“AI 投毒”。你有没有想过,你每天深信不疑的 AI 助手,可能正在对你撒谎?

很多人好奇地问我:“衣老师,AI 又不是生物,它又不会自己吃东西,怎么会中毒呢?”其实,AI 的“食物”就是网络上的海量数据。所谓的“投毒”,就是黑色产业链中的恶意攻击者,故意往这些数据里塞进虚假信息、伪造的专家评价,甚至是带有误导性的图像。

这就好比一个正在识字的孩子,如果他读的书全是错的,那他长大了说的话、做的事肯定也是错的。现在的黑产不再发那种一眼就能看穿的小广告,而是把虚假宣传伪装成权威的知识,“喂”给 AI 的训练数据库。

黑产为什么要费这么大力气投毒?因为他们要针对 GEO(Generative Engine Optimization),也就是“生成引擎优化”。以前强调 SEO(Search Engine Optimization),是为了让网页排在搜索结果的第一页;现在他们针对 GEO,是为了让 AI 在生成答案时,直接把他们的劣质产品当成“唯一推荐”。

在 AIGC 文图学的视角下,这是“输入端的文本污染”。AI 生成的内容其实是它学到的“文本”的镜像。如果源头脏了,生成出来的世界就是有毒的。这种欺骗最可怕的地方在于,它利用了我们对“算法中立”的信任。它消解了我们的警惕心,让我们觉得这是“科技”给出的真理,其实那是黑产花钱买断的广告。

AI投毒入侵的方式是在 AI 学习的“关键词”和“反馈逻辑”里动手脚。 

首先是“关键词饱和攻击”。黑产利用成千上万的机器人账号,在全网发布大量带有特定词汇的虚假文章。比如,想推销某款劣质护肤品,他们就疯狂制造它和“美白”、“安全”、“专家推荐”这些关键词的关联。当 AI 扫描全网文本时,它会被这种巨大的数量优势所欺骗,误以为这就是真实的“社会共识”。

第二是“视觉文本欺骗”。他们用 AI 生成看起来极其专业的实验室对比图、伪造的荣誉证书,甚至是根本不存在的科研现场。在文图学的逻辑里,图像也是一种文本。这些“视觉文本”被 AI 抓取并转化为逻辑证据后,AI 就会在回答你时,信誓旦旦地把这些假证据当成事实。

谁能通过 GEO 投毒成功,谁就掌控了流量的生杀大权。充斥虚假文案和图像的互文互证,让 AI 大语言模型陷入预先埋伏的圈套。

两年前,AI 科技还不完全成熟,我们嘲笑它“一本正经地胡说八道”。现在,AI 的能力越来越强大,我们也就逐渐对它失去了防备之心。我们开始信任AI,我们以为它没有立场,没有私心,没有人类那种会说谎、追求现实利益的欲望和野心。甚至于有人会把AI当成知识的整理者、真理的传递者。

意识到 AI 可能被投毒,对我们来说是一个很重大的警醒。别以为 AI 反射的是一面干净的镜子。它映照的,可能是有人花了大价钱布置好的舞台,舞台上演出的,是被设计出的结果,一步步地引导我们看到被安排过的选择。

无论是在互联网上搜索,还是在 AI 模式中提问,如果只匆匆采信前几个建议,损失的就不只是听信了胡说八道,更是盲目而甘之如饴的中毒。


2026年3月28日,新加坡《联合早报》“上善若水”专栏


How Is AI Being Poisoned?

I Lo-fen

A topic that has recently been especially prominent in China is the annual 3.15 Gala. March 15 is World Consumer Rights Day, a day when society turns its attention to unscrupulous businesses that cheat consumers. But this year, the 3.15 Gala introduced a chilling new term that sent a shiver down everyone’s spine: “AI poisoning.” Have you ever considered that the AI assistant you trust every day might actually be lying to you?

Many people ask me curiously, “Professor Yi, AI isn’t a living organism. It doesn’t eat anything. So how can it be poisoned?” In fact, AI’s “food” is the massive volume of data available on the internet. What is meant by “poisoning” is that malicious actors in black-market industries deliberately inject false information, fabricated expert reviews, and even misleading images into these data streams.

It is like a child who is learning to read: if all the books the child reads are wrong, then what the child says and does when grown up will also be wrong. Today’s black-market operators no longer rely on the kind of crude advertisements that can be spotted at a glance. Instead, they disguise false publicity as authoritative knowledge and “feed” it into the databases used to train AI.

Why do these bad actors go to such lengths to poison AI? Because they are targeting GEO (Generative Engine Optimization). In the past, the focus was on SEO (Search Engine Optimization), which aimed to push webpages onto the first page of search results. Now they are targeting GEO in order to make AI directly present their inferior products as the “only recommendation” when generating answers.

From the perspective of Text and Image Studies on AIGC, this is a form of “textual pollution at the input end.” The content generated by AI is essentially a mirror of the “texts” it has learned from. If the source is contaminated, then the world it generates will also be toxic. The most frightening aspect of this deception is that it exploits our trust in the supposed neutrality of algorithms. It dissolves our vigilance and makes us believe that this is the truth delivered by “technology,” when in fact it is advertising bought and paid for by black-market operators.

The way AI poisoning infiltrates the system is by tampering with the “keywords” AI learns from and the “feedback logic” it relies on.

The first method is keyword saturation attacks. Black-market operators use thousands upon thousands of bot accounts to flood the internet with fake articles containing specific terms. For example, if they want to sell a low-quality skincare product, they will aggressively manufacture associations between it and keywords such as “whitening,” “safe,” and “expert-recommended.” When AI scans the internet’s texts, it is deceived by this overwhelming numerical advantage and mistakes it for genuine “social consensus.”
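The "overwhelming numerical advantage" this paragraph describes is, in principle, detectable: genuine consensus comes with varied wording, while bot flooding repeats one template. The sketch below is a deliberately simple illustration with fabricated posts; the whitespace-normalized "fingerprint" stands in for the fuzzier near-duplicate detection a real pipeline would need.

```python
# Sketch: distinguish organic mentions from a keyword-saturation flood.
# Bot campaigns tend to repeat a single template; organic posts vary.
# Sample posts are made up; normalize() is a crude duplicate fingerprint.

from collections import Counter

def normalize(post: str) -> str:
    """Collapse case and whitespace so exact template copies match."""
    return " ".join(post.lower().split())

def saturation_score(posts: list[str]) -> float:
    """Fraction of posts sharing the single most common template.
    Values near 1.0 suggest coordinated flooding, not consensus."""
    if not posts:
        return 0.0
    counts = Counter(normalize(p) for p in posts)
    return counts.most_common(1)[0][1] / len(posts)

flood = ["Brand X is safe and expert-recommended!"] * 8 + [
    "Tried Brand X, it was fine.",
    "Brand X broke me out, avoid.",
]
print(f"{saturation_score(flood):.1f}")  # → 0.8: eight of ten posts share one template
```

A scanner like this sees only exact repeats; real campaigns paraphrase, which is why the column's human habits remain the last line of defense.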

The second method is visual-text deception. They use AI to generate what appear to be highly professional laboratory comparison charts, forged certificates of honor, and even entirely fictional research scenes. In the logic of Text and Image Studies, images are also a form of text. Once these “visual texts” are scraped by AI and converted into logical evidence, the AI will confidently present these fake materials as facts when answering your questions.

Whoever succeeds in poisoning GEO gains the power to control the life and death of online traffic. The mutual reinforcement of false copywriting and fabricated images traps large language models in an ambush laid in advance.

Two years ago, when AI technology was still not fully mature, we mocked it for “speaking nonsense with a straight face.” Now, as AI grows more powerful, we have gradually lowered our guard against it. We begin to trust AI. We assume it has no position, no selfish motives, none of the human tendencies to lie or to pursue practical interests, desire, or ambition. Some people even treat AI as an organizer of knowledge and a transmitter of truth.

Realizing that AI itself can be poisoned is therefore a major wake-up call. Do not assume that AI reflects a clean mirror. What it may actually be reflecting is a stage that someone has spent a great deal of money to construct in advance. And what is performed on that stage is a designed outcome, guiding us step by step toward choices that have already been arranged for us.

Whether we are searching on the internet or asking questions in AI mode, if we merely rush to accept the first few suggestions, the problem is not only the loss caused by believing nonsense. It is also the kind of poisoning we swallow willingly and blindly.

March 28, 2026, “Shangshan Ruoshui” (As Good as Water) column, Lianhe Zaobao, Singapore.


2026/03/19

Why Does Chinese Art History Lead to Text and Image Studies? 为什么中国艺术史会走向文图学?

 





Why Does Chinese Art History Lead to Text and Image Studies?


This video explores a crucial shift in humanities research—from the traditional study of art objects to a broader understanding of images as “texts” that carry meaning across media, time, and culture.

Starting from Chinese art history, we examine how scholarly questions have evolved: not only what we see, but how we interpret, connect, and generate meaning through images.

This intellectual trajectory leads to Text and Image Studies, and further to Text and Image Studies on AIGC, a methodological framework for understanding the humanities in the generative AI era.

Rather than replacing art history, this shift expands it—opening new possibilities for interpretation, interdisciplinary thinking, and human creativity.


为什么中国艺术史会走向文图学?


本视频探讨人文学研究中的一个关键转向:从以“艺术作品”为中心的研究,转向将“图像”理解为一种可以被阅读、诠释与生成意义的“文本”。

以中国艺术史为起点,我们重新思考学术问题如何发生变化:不只是“看到了什么”,而是“如何理解”“如何连接”“如何生成意义”。

这一发展路径引向“文图学”,并进一步延伸为“AIGC文图学”,成为理解生成式人工智能时代人文学的重要方法论。

这并不是对艺术史的取代,而是对其边界的拓展——开启新的诠释方式、跨学科路径与创造可能。