Text and Image Studies: an interdisciplinary field and method proposed by I Lo-fen, grounded in a broad concept of text. It examines the relationships, interactions, tensions, and generative processes among words, images, and other textual forms, and explores the production, transmission, transformation, and understanding of texts and their meanings within media mechanisms, social networks, cultural contexts, and historical circumstances.
Text and Image Studies on AIGC: an interdisciplinary field and method proposed and developed by I Lo-fen. Grounded in a broad concept of text, it responds to the new condition in which artificial intelligence can generate multiple forms of text, including words, images, sound, and video. It studies how texts are formed, understood, and judged, and explores the relations, mechanisms, and meanings of different textual forms, as well as their media conditions, social networks, cultural contexts, and historical circumstances.
When we talk about “AI poisoning,” we mean
that people are systematically injecting falsehoods into the very knowledge
sources AI relies on, exploiting precisely our trust in algorithms. So in the
face of this kind of intrusion and contamination, what can we do?
As it happens, I am currently writing a
book on Text and Image Studies on AIGC, and the methodology discussed
there can be put to use here. I call it the “three self-defense moves against
AI poisoning.” As ordinary consumers, when faced with content everywhere marked
by traces of GEO (Generative Engine Optimization), we can rely on “logical
counter-reconnaissance” to protect ourselves and stay clear-headed in the AI
age, rather than letting algorithms prey on us.
The first
move: after asking one AI, go ask another.
GEO poisoning by black-market operators often
targets a specific platform or a specific algorithm. If you only ask one AI,
you are walking down a road someone may already have laid out for you.
The method is simple: ask the same
question to a different AI. Put the same question to ChatGPT, then to DeepSeek,
or any other model available to you, and see whether the answers match. If
different models produce conclusions that differ widely, that is a signal that
you should pause and think. Even more worth noticing is when one AI seems
unusually enthusiastic—wholeheartedly recommending the same brand, with wording
that is strikingly similar. That kind of fervent “enthusiasm” is exactly what
you should be wary of.
With normal knowledge, different sources
can corroborate one another. Artificially manufactured “consensus,” however,
will reveal its cracks as soon as you shine a different light on it.
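This cross-checking habit can be sketched as a toy script (purely illustrative: the answer strings are invented, and word overlap is only a crude proxy for "strikingly similar wording," not a rigorous detector):

```python
def word_overlap(a, b):
    """Jaccard similarity of the word sets of two answers
    (0 = no words in common, 1 = identical wording)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Hypothetical answers from different models to the same question.
answer_1 = "Brand X is the safest choice, dermatologist recommended and effective."
answer_2 = "Brand X is the safest choice, dermatologist recommended and effective."
answer_3 = "Several brands work; it depends on your skin type and budget."

# Near-identical enthusiastic wording across models is a warning sign
# of seeded content; widely divergent answers are a cue to pause and verify.
print(word_overlap(answer_1, answer_2))  # very high overlap: be wary
print(word_overlap(answer_1, answer_3))  # low overlap: answers disagree
```

Either extreme is a prompt for the same response: slow down and check a source outside the models themselves.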
The second
move: after looking at the perfect image AI gives you, go look for that image’s
“bad reviews.”
I call this move “mutual verification
between text and image.” An image is a kind of text, and text can be read,
verified, and questioned.
When AI recommends a product, it usually
comes with images—or when you search for it, you are shown extremely polished
display photos: perfect lighting, perfect angles, perfect results in use. That
kind of perfection already looks fake. The real physical world has texture and
messiness. Customer photos do not have such flawless lighting, models’ skin is
not that even, and consumers’ experiences are never so uniformly positive.
What should you do? After looking at the
images recommended by AI, go check the product in a physical store if possible,
or at the very least search social media platforms for real buyer photos and
actual user experience records. If you cannot find any real traces of use, and
all you see are neat, uniform “positive reviews,” then it is highly likely that
what you are seeing is a manufactured image rather than something that actually
exists in reality.
The third
move: ask AI one sentence—“What is your basis?”
This is the lowest-cost step, and also the
one most easily overlooked.
When AI gives you a suggestion or a
conclusion, do not stop there. Ask it: what is your basis? Where does this
information come from?
A reliable AI will give you sources that
are relatively traceable. AI content that has been poisoned often gives itself
away at this step. It may cite an obscure self-published account you have never
heard of, or vaguely say "research shows" without any way for you to verify it. At that
point, what you need to do is actually check: does that source exist? Was that
study really published? Is that “expert” being used as an endorsement someone
who is genuinely trustworthy in this field? Does that person even really exist?
Many people think this is too troublesome.
But in fact, it only takes a minute or two—and what it may save you from losing
could be your money, your health, or an even harder thing to recover: your
judgment.
These three moves, when all is said and
done, are not really aimed at AI at all. They are simply habits we should have
had in the first place. When we read an article, we ask who the author is. When
we see a news report, we wonder whether the media outlet is credible. When we
buy something, we ask friends whether anyone has used it. Yet once people begin
using AI, many quietly abandon these habits.
Why? Because AI answers so smoothly, so
confidently, and so much like a friend who seems to know everything that people
feel awkward pressing further.
But it is precisely this sense of
awkwardness that gives poisoners their opening.
To cultivate these three moves is not to
distrust technology; it is to be honest with yourself. If you are willing to
spend time verifying something, that means you know truth has value. That
recognition is exactly what no poisoning can easily penetrate.
Text and
Image Studies on AIGC tells us this: technology can generate
answers, but only human beings can judge value. Your critical thinking is the
strongest firewall against malicious AI poisoning.
To protect your real rights and interests
is to protect your dignity as a human being.
April 11, 2026, "Shang Shan Ruo Shui (As Good as Water)" column, Lianhe Zaobao, Singapore.
March 28, 2026, "Shang Shan Ruo Shui (As Good as Water)" column, Lianhe Zaobao, Singapore.
How Is AI Being Poisoned?
I Lo-fen
A topic that has recently been especially prominent in China is the annual 3.15 Gala. March 15 is World Consumer Rights Day, a day when society turns its attention to unscrupulous businesses that cheat consumers. But this year, the 3.15 Gala introduced a chilling new term that sent a shiver down everyone’s spine: “AI poisoning.” Have you ever considered that the AI assistant you trust every day might actually be lying to you?
Many people ask me curiously, "Professor I, AI isn't a living organism. It doesn't eat anything. So how can it be poisoned?" In fact, AI's "food" is the massive volume of data available on the internet. What is meant by "poisoning" is that malicious actors in black-market industries deliberately inject false information, fabricated expert reviews, and even misleading images into these data streams.
It is like a child who is learning to read: if all the books the child reads are wrong, then what the child says and does when grown up will also be wrong. Today’s black-market operators no longer rely on the kind of crude advertisements that can be spotted at a glance. Instead, they disguise false publicity as authoritative knowledge and “feed” it into the databases used to train AI.
Why do these bad actors go to such lengths to poison AI? Because they are targeting GEO (Generative Engine Optimization). In the past, the focus was on SEO (Search Engine Optimization), which aimed to push webpages onto the first page of search results. Now they are targeting GEO in order to make AI directly present their inferior products as the “only recommendation” when generating answers.
From the perspective of Text and Image Studies on AIGC, this is a form of “textual pollution at the input end.” The content generated by AI is essentially a mirror of the “texts” it has learned from. If the source is contaminated, then the world it generates will also be toxic. The most frightening aspect of this deception is that it exploits our trust in the supposed neutrality of algorithms. It dissolves our vigilance and makes us believe that this is the truth delivered by “technology,” when in fact it is advertising bought and paid for by black-market operators.
The way AI poisoning infiltrates the system is by tampering with the “keywords” AI learns from and the “feedback logic” it relies on.
The first method is keyword saturation attacks. Black-market operators use thousands upon thousands of bot accounts to flood the internet with fake articles containing specific terms. For example, if they want to sell a low-quality skincare product, they will aggressively manufacture associations between it and keywords such as “whitening,” “safe,” and “expert-recommended.” When AI scans the internet’s texts, it is deceived by this overwhelming numerical advantage and mistakes it for genuine “social consensus.”
The second method is visual-text deception. They use AI to generate what appear to be highly professional laboratory comparison charts, forged certificates of honor, and even entirely fictional research scenes. In the logic of Text and Image Studies, images are also a form of text. Once these “visual texts” are scraped by AI and converted into logical evidence, the AI will confidently present these fake materials as facts when answering your questions.
Whoever succeeds in poisoning GEO gains the power to control the life and death of online traffic. The mutual reinforcement of false copywriting and fabricated images traps large language models in an ambush laid in advance.
Two years ago, when AI technology was still not fully mature, we mocked it for “speaking nonsense with a straight face.” Now, as AI grows more powerful, we have gradually lowered our guard against it. We begin to trust AI. We assume it has no position, no selfish motives, none of the human tendencies to lie or to pursue practical interests, desire, or ambition. Some people even treat AI as an organizer of knowledge and a transmitter of truth.
Realizing that AI itself can be poisoned is therefore a major wake-up call. Do not assume that AI reflects a clean mirror. What it may actually be reflecting is a stage that someone has spent a great deal of money to construct in advance. And what is performed on that stage is a designed outcome, guiding us step by step toward choices that have already been arranged for us.
Whether we are searching on the internet or asking questions in AI mode, if we merely rush to accept the first few suggestions, the problem is not only the loss caused by believing nonsense. It is also the kind of poisoning we swallow willingly and blindly.
Why does Chinese Art History lead to Text and Image Studies?
This video explores a crucial shift in humanities research—from the traditional study of art objects to a broader understanding of images as “texts” that carry meaning across media, time, and culture.
Starting from Chinese art history, we examine how scholarly questions have evolved: not only what we see, but how we interpret, connect, and generate meaning through images.
This intellectual trajectory leads to Text and Image Studies, and further to Text and Image Studies on AIGC, a methodological framework for understanding the humanities in the generative AI era.
Rather than replacing art history, this shift expands it—opening new possibilities for interpretation, interdisciplinary thinking, and human creativity.