Halfway through the semester, I asked my
teaching assistant to select twelve poems from those taught in class this term
as the scope for the midterm quiz. To me, this was a perfectly ordinary
teaching arrangement. Twelve poems were neither too many nor too few: enough
for students to review the key points covered in class, but not so many as to
create an excessive burden of preparation.
Unexpectedly, things began to grow
complicated from there.
Some students came to say that I had
mentioned in class that only ten poems needed to be prepared for the quiz, not
twelve. Others said that I had once stated that longer works would not be
included in the examination scope. I was absolutely certain that I had never
said any such thing.
As it happened, I had recorded my lectures
this semester. So, in order to confirm the matter and to give my teaching
assistant a clear answer, I began searching through the lecture recordings. I
spent a long time doing so, repeatedly dragging the audio track back and forth,
trying to locate, in one class session after another, the sentence that might
have been misheard, misunderstood, or perhaps had never existed at all.
Of course, I found nothing.
I had never said ten poems, nor had I said
that longer works would not be tested. Yet this incident left me deeply
frustrated. Not because the students had remembered incorrectly, but because I
realized that I had actually been willing to spend so much time on it, merely
to prove that I had not said something.
At that moment, I became aware that this
matter had long ceased to be simply a question of “memorizing two additional
poems.”
From the students’ perspective, this was a
task of “risk management.” The difference between ten and twelve poems did not
lie in literary value, but in the “cost” of preparation. Two additional poems
meant a little more uncertainty, a slightly greater risk that something might
appear on the test that they had not fully mastered. The so-called scope of the
quiz was no longer an organization of course content, but a demarcation of
examination boundaries.
What they cared about was not what a poem
was about, but whether it would appear on the test paper.
Thus, any hint regarding the scope, even an ambiguity in my tone, or merely their own interpretation, could be remembered, magnified, and, when necessary, turned into a basis for negotiation.
And what about me?
I could have simply made a unified
clarification: “The scope of this quiz consists of twelve poems, as stated in
the latest announcement on the course website.” The matter might then have
ended there. Yet I chose instead to return to the recordings to look for
evidence, trying to determine who had remembered incorrectly.
Why?
Because in that instant, I understood the
students’ question as a challenge to the consistency of my teaching. I worried
that they might think I had contradicted myself. I worried that this would
affect their perception of fairness in the course. I even worried that it might
become a negative evaluation of my teaching performance.
As a result, the arrangement for a quiz
that counted for only 15 percent of the final grade began to trigger a much
larger emotional response. I was no longer handling a minor teaching detail; I
was defending my professionalism.
When I finally realized this, I also
realized that I had taken on an explanatory burden that I did not in fact need
to bear.
In a highly assessment-oriented learning
environment, students naturally transform course content into a controllable
examination scope, while teachers just as easily come to regard any dispute over
that scope as a challenge to the legitimacy of their teaching. Both sides are
trying to reduce uncertainty, yet in doing so they deepen their reliance on
rules.
A literature course thus becomes a task of
boundary management: what will be tested and what will not; what must be
memorized and what can be skipped. A poem is no longer merely a poem, but an
item that might appear on an examination paper.
And my act of spending time searching
through the recordings itself became a manifestation of this environment—we
increasingly need traceable explanations, explicit commitments, and verifiable
records in order to maintain a teaching order that is regarded as fair.
Perhaps the two additional poems
themselves were not important. What mattered was how they prompted a
redefinition, between teacher and students, of rules, memory, and
responsibility.
The everyday scene of teaching is
sometimes just like this: a seemingly minor adjustment can reveal the subtle
tensions at play between grades and preparation, between understanding and
completion.
Text and Image Studies: an interdisciplinary field and method proposed by I Lo-fen, grounded in a broad concept of text. It examines the relationships, interactions, tensions, and generative processes among words, images, and other textual forms, and explores the production, transmission, transformation, and understanding of texts and their meanings within media mechanisms, social networks, cultural contexts, and historical circumstances.
Text and Image Studies on AIGC: an interdisciplinary field and method proposed and developed by I Lo-fen. Grounded in a broad concept of text, it responds to the new condition in which artificial intelligence can generate multiple forms of text, including words, images, sound, and video. It studies how texts are formed, understood, and judged, and explores the relations, mechanisms, and meanings of different textual forms, as well as their media conditions, social networks, cultural contexts, and historical circumstances.
When we talk about “AI poisoning,” we mean
that people are systematically injecting falsehoods into the very knowledge
sources AI relies on, exploiting precisely our trust in algorithms. So in the
face of this kind of intrusion and contamination, what can we do?
As it happens, I am currently writing a
book on Text and Image Studies on AIGC, and the methodology discussed
there can be put to use here. I call it the “three self-defense moves against
AI poisoning.” As ordinary consumers, when faced with content everywhere marked
by traces of GEO (Generative Engine Optimization), we can rely on “logical
counter-reconnaissance” to protect ourselves and stay clear-headed in the AI
age, rather than letting algorithms prey on us.
The first
move: after asking one AI, go ask another.
GEO poisoning in the gray market often
targets a specific platform or a specific algorithm. If you only ask one AI,
you are walking down a road someone may already have laid out for you.
The method is simple: ask the same
question to a different AI. Put the same question to ChatGPT, then to DeepSeek,
or any other model available to you, and see whether the answers match. If
different models produce conclusions that differ widely, that is a signal that
you should pause and think. Even more worth noticing is when one AI seems
unusually enthusiastic, wholeheartedly recommending the same brand in wording
that is strikingly similar. That kind of fervor is exactly what
you should be wary of.
With normal knowledge, different sources
can corroborate one another. Artificially manufactured “consensus,” however,
will reveal its cracks as soon as you shine a different light on it.
The second
move: after looking at the perfect image AI gives you, go look for that image’s
“bad reviews.”
I call this move “mutual verification
between text and image.” An image is a kind of text, and text can be read,
verified, and questioned.
When AI recommends a product, it usually
comes with images—or when you search for it, you are shown extremely polished
display photos: perfect lighting, perfect angles, perfect results in use. That
kind of perfection already looks fake. The real physical world has texture and
messiness. Customer photos do not have such flawless lighting, models’ skin is
not that even, and consumers’ experiences are never so uniformly positive.
What should you do? After looking at the
images recommended by AI, go check the product in a physical store if possible,
or at the very least search social media platforms for real buyer photos and
actual user experience records. If you cannot find any real traces of use, and
all you see are neat, uniform “positive reviews,” then it is highly likely that
what you are seeing is a manufactured image rather than something that actually
exists in reality.
The third
move: ask AI one sentence—“What is your basis?”
This is the lowest-cost step, and also the
one most easily overlooked.
When AI gives you a suggestion or a
conclusion, do not stop there. Ask it: what is your basis? Where does this
information come from?
A reliable AI will give you sources that
are relatively traceable. AI content that has been poisoned often gives itself
away at this step. It may cite a self-published account you have never heard of, or
vaguely claim that "research shows" something without giving you any way to verify it. At that
point, what you need to do is actually check: does that source exist? Was that
study really published? Is that “expert” being used as an endorsement someone
who is genuinely trustworthy in this field? Does that person even really exist?
Many people think this is too troublesome.
But in fact, it only takes a minute or two, and what it may save you from losing
could be your money, your health, or something even harder to recover: your
judgment.
These three moves, when all is said and
done, are not really aimed at AI at all. They are simply habits we should have
had in the first place. When we read an article, we ask who the author is. When
we see a news report, we wonder whether the media outlet is credible. When we
buy something, we ask friends whether anyone has used it. Yet once people begin
using AI, many quietly abandon these habits.
Why? Because AI answers so smoothly, so
confidently, and so much like a friend who seems to know everything that people
feel awkward pressing further.
But it is precisely this sense of
awkwardness that gives poisoners their opening.
To cultivate these three moves is not to
distrust technology; it is to be honest with yourself. If you are willing to
spend time verifying something, that means you know truth has value. That
recognition is exactly what no poisoning can easily penetrate.
Text and
Image Studies on AIGC tells us this: technology can generate
answers, but only human beings can judge value. Your critical thinking is the
strongest firewall against malicious AI poisoning.
To protect your real rights and interests
is to protect your dignity as a human being.
April 11, 2026, "Shang Shan Ruo Shui (As
Good as Water)" column, Lianhe Zaobao, Singapore.