Text and Image Studies: an interdisciplinary field and method proposed by I Lo-fen, grounded in a broad concept of text. It examines the relationships, interactions, tensions, and generative processes among words, images, and other textual forms, and explores the production, transmission, transformation, and understanding of texts and their meanings within media mechanisms, social networks, cultural contexts, and historical circumstances.
Text and Image Studies on AIGC: an interdisciplinary field and method proposed and developed by I Lo-fen. Grounded in a broad concept of text, it responds to the new condition in which artificial intelligence can generate multiple forms of text, including words, images, sound, and video. It studies how texts are formed, understood, and judged, and explores the relations, mechanisms, and meanings of different textual forms, as well as their media conditions, social networks, cultural contexts, and historical circumstances.
When we talk about “AI poisoning,” we mean
that people are systematically injecting falsehoods into the very knowledge
sources AI relies on, exploiting precisely our trust in algorithms. So in the
face of this kind of intrusion and contamination, what can we do?
As it happens, I am currently writing a
book on Text and Image Studies on AIGC, and the methodology discussed
there can be put to use here. I call it the “three self-defense moves against
AI poisoning.” As ordinary consumers, faced with content that everywhere bears
the traces of GEO (Generative Engine Optimization), we can rely on “logical
counter-reconnaissance” to protect ourselves and stay clear-headed in the AI
age, rather than letting algorithms prey on us.
The first
move: after asking one AI, go ask another.
GEO poisoning in the gray market often
targets a specific platform or a specific algorithm. If you only ask one AI,
you are walking down a road someone may already have laid out for you.
The method is simple: ask the same
question to a different AI. Put the same question to ChatGPT, then to DeepSeek,
or any other model available to you, and see whether the answers match. If
different models produce conclusions that differ widely, that is a signal that
you should pause and think. Even more worth noticing is when one AI seems
unusually enthusiastic: wholeheartedly recommending the same brand, in wording
that is strikingly similar from answer to answer. That kind of fervent
enthusiasm is exactly what you should be wary of.
With normal knowledge, different sources
can corroborate one another. Artificially manufactured “consensus,” however,
will reveal its cracks as soon as you shine a different light on it.
The second
move: after looking at the perfect image AI gives you, go look for that image’s
“bad reviews.”
I call this move “mutual verification
between text and image.” An image is a kind of text, and text can be read,
verified, and questioned.
When AI recommends a product, it usually
comes with images—or when you search for it, you are shown extremely polished
display photos: perfect lighting, perfect angles, perfect results in use. That
kind of perfection already looks fake. The real physical world has texture and
messiness. Customer photos do not have such flawless lighting, models’ skin is
not that even, and consumers’ experiences are never so uniformly positive.
What should you do? After looking at the
images recommended by AI, go check the product in a physical store if possible,
or at the very least search social media platforms for real buyer photos and
actual user experience records. If you cannot find any real traces of use, and
all you see are neat, uniform “positive reviews,” then it is highly likely that
what you are seeing is a manufactured image rather than something that actually
exists in reality.
The third
move: ask AI one sentence—“What is your basis?”
This is the lowest-cost step, and also the
one most easily overlooked.
When AI gives you a suggestion or a
conclusion, do not stop there. Ask it: what is your basis? Where does this
information come from?
A reliable AI will give you sources that
are relatively traceable. AI content that has been poisoned often gives itself
away at this step. It may cite an obscure self-published account you have never
heard of, or vaguely say “research shows” without giving you any way to verify
the claim. At that
point, what you need to do is actually check: does that source exist? Was that
study really published? Is that “expert” being used as an endorsement someone
who is genuinely trustworthy in this field? Does that person even really exist?
Many people think this is too troublesome.
But in fact, it only takes a minute or two—and what it may save you from losing
could be your money, your health, or an even harder thing to recover: your
judgment.
These three moves, when all is said and
done, are not really aimed at AI at all. They are simply habits we should have
had in the first place. When we read an article, we ask who the author is. When
we see a news report, we wonder whether the media outlet is credible. When we
buy something, we ask friends whether anyone has used it. Yet once people begin
using AI, many quietly abandon these habits.
Why? Because AI answers so smoothly, so
confidently, and so much like a friend who seems to know everything that people
feel awkward pressing further.
But it is precisely this sense of
awkwardness that gives poisoners their opening.
To cultivate these three moves is not to
distrust technology; it is to be honest with yourself. If you are willing to
spend time verifying something, that means you know truth has value. That
recognition is exactly what no poisoning can easily penetrate.
Text and
Image Studies on AIGC tells us this: technology can generate
answers, but only human beings can judge value. Your critical thinking is the
strongest firewall against malicious AI poisoning.
To protect your real rights and interests
is to protect your dignity as a human being.
April 11, 2026, “Shang Shan Ruo Shui (As Good as Water)” column, Lianhe Zaobao, Singapore.