2026/05/09

AI Tools Keep Updating; Method Keeps You on the Right Course

 


Thank you for the enthusiastic pre-orders!

The first-stage pre-order period for Humanities Research Methods in the Age of AIGC is about to close!

Pre-order form

AI tools keep multiplying, but what humanities research truly needs is not to chase after every one of them.

Tools become obsolete; methods do not.

In the age of AIGC, what really matters is not how many AI tools you can operate, but knowing how to use them, when to use them, and where to stop.

This book is not a checklist of AI tools. Starting from the actual scenes of humanities research, it discusses how to build a set of AI-use principles and research methods that can be judged, explained, and accounted for, across thesis writing, literature management, source analysis, classroom teaching, academic publishing, and research ethics.

The book provides a range of ready-to-use templates, checklists, and tables, including:

an AIGC-use disclosure statement template; an academic integrity self-check list; a comparison table of the core points of three major international academic-ethics frameworks; a human-AI collaborative writing workflow; recommended prompt examples; an AI-use log template; a quick-reference table of commonly used databases and search paths; sample footnote formats; a basic checklist for ethics applications; and the submission, peer-review, and preprint workflow.

You do not need to memorize every AI tool.

What you really need is:

to know when you may use AI, and when you should not;

to know how far AI can take you, and which steps you must judge for yourself;

to know how to explain your process of collaborating with AI;

to know how to improve efficiency while safeguarding academic ethics and your agency as a researcher.

A pre-order counts as successful only once payment is completed.

Those who have registered but not paid will not be included in the first-stage pre-order list.

Readers who have registered but not yet paid: please complete payment as soon as possible and email your payment details. For pre-order and payment instructions, see the form.

多了两首诗

 


学期过半,我请助教从本学期教过的诗作中选出十二首,作为期中测验的出题范围。对我来说,这是一个再平常不过的教学安排。十二首诗,不算多,也不算少,既足以让学生回顾课堂重点,又不至于造成太大的准备负担。

没想到,事情却从这里开始变得复杂起来。

有学生来反映,说我在课堂上讲过,这次测验只需要准备十首诗,而不是十二首。也有人表示,我曾经提到篇幅较长的作品不会纳入考题范围。这些话,我非常确定自己没有说过。

刚好这学期我上课有录音。于是,为了确认,也为了给助教一个明确的答复,我开始翻找课堂录音。整整找了很长一段时间,反复拖动音轨,试图在一节又一节课的内容中,找到那句可能被误听、误解,甚至根本不存在的话。

结果当然没有找到。

我没有说过十首,也没有说过长篇不会考。但这件事让我感到非常懊恼。不是因为学生记错,而是因为我发现自己竟然愿意为此投入那么多时间,只为了证明我没有说过。

那一刻,我意识到,这件事情其实早已不只是多背两首诗的问题。

从学生的角度来看,这是一项风险管理任务。十首与十二首之间的差别,不在于文学价值,而在于准备成本。多两首诗,意味着多一些不确定性,多一点可能在考场上出现却未能充分掌握的风险。所谓测验范围,不再是课程内容的整理,而是考试边界的划定。

他们关心的,不是这首诗讲了什么,而是它会不会出现在试卷上。

于是,任何关于范围的提示——即便只是语气中的模糊表达,甚至是他们自己的理解——都可能被记住、放大,并在需要时成为一种可以据以协商的依据。

而我呢?

我本可以直接统一说明:本次测验范围为十二首诗,以课程网站最新公告为准。事情也许就此结束。但我却选择回到录音中去寻找证据,试图厘清到底是谁记错了。

为什么?

因为在那一瞬间,我把学生的疑问理解成了一种对我教学一致性的质疑。我担心他们会觉得我前后说法不一,担心这会影响他们对课程公平性的感受,甚至担心这会成为对我教学表现的负面评价。

于是,一项只占总成绩15%的测验安排,开始牵动更大的情绪反应。我不再是在处理一个教学细节,而是在为自己的专业性辩护。

当我终于意识到这一点时,也意识到自己其实承担了本不必承担的解释成本。

在一个高度评量导向的学习环境中,学生自然会把课程内容转化为可控的考试范围,而教师也容易将任何关于范围的争议,视为对自身教学规范性的挑战。双方都在努力降低不确定性,却也因此不断加深对规则的依赖。

文学课程于是变成了一种边界管理任务:什么会考,什么不会考;哪些需要背诵,哪些可以略过。诗不再只是诗,而是一个可能出现在考卷上的项目。

而我花时间找录音的行为,本身也成为这种环境的体现——我们越来越需要可以追溯的说明、明确的承诺,以及可供核对的记录,来维持一种被认为是公平的教学秩序。

多出来的两首诗,也许本身并不重要。重要的是,它们如何在师生之间,引发了对规则、记忆与责任的重新界定。

教学现场的日常,有时就是这样:看似微小的调整,却能让我们看见,在分数与准备之间,理解与完成之间,究竟有哪些不易察觉的张力正在发生。

 

2026年5月9日,新加坡《联合早报》,“上善若水”专栏

 

Two Additional Poems

I Lo-fen

Halfway through the semester, I asked my teaching assistant to select twelve poems from those taught in class this term as the scope for the midterm quiz. To me, this was a perfectly ordinary teaching arrangement. Twelve poems were neither too many nor too few: enough for students to review the key points covered in class, but not so many as to create an excessive burden of preparation.

Unexpectedly, things began to grow complicated from there.

Some students came to say that I had mentioned in class that only ten poems needed to be prepared for the quiz, not twelve. Others said that I had once stated that longer works would not be included in the examination scope. I was absolutely certain that I had never said any such thing.

As it happened, I had recorded my lectures this semester. So, in order to confirm the matter and to give my teaching assistant a clear answer, I began searching through the lecture recordings. I spent a long time doing so, repeatedly dragging the audio track back and forth, trying to locate, in one class session after another, the sentence that might have been misheard, misunderstood, or perhaps had never existed at all.

Of course, I found nothing.

I had never said ten poems, nor had I said that longer works would not be tested. Yet this incident left me deeply frustrated. Not because the students had remembered incorrectly, but because I realized that I had actually been willing to spend so much time on it, merely to prove that I had not said something.

At that moment, I became aware that this matter had long ceased to be simply a question of “memorizing two additional poems.”

From the students’ perspective, this was a task of “risk management.” The difference between ten and twelve poems did not lie in literary value, but in the “cost” of preparation. Two additional poems meant a little more uncertainty, a slightly greater risk that something might appear on the test that they had not fully mastered. The so-called scope of the quiz was no longer an organization of course content, but a demarcation of examination boundaries.

What they cared about was not what a poem was about, but whether it would appear on the test paper.

Thus, any hint regarding the scope—even if it was only an ambiguous expression in tone, or even their own understanding—could be remembered, magnified, and, when necessary, turned into a basis for negotiation.

And what about me?

I could have simply made a unified clarification: “The scope of this quiz consists of twelve poems, as stated in the latest announcement on the course website.” The matter might then have ended there. Yet I chose instead to return to the recordings to look for evidence, trying to determine who had remembered incorrectly.

Why?

Because in that instant, I understood the students’ question as a challenge to the consistency of my teaching. I worried that they might think I had contradicted myself. I worried that this would affect their perception of fairness in the course. I even worried that it might become a negative evaluation of my teaching performance.

As a result, the arrangement for a quiz that counted for only 15 percent of the final grade began to trigger a much larger emotional response. I was no longer handling a minor teaching detail; I was defending my professionalism.

When I finally realized this, I also realized that I had taken on an explanatory burden that I did not in fact need to bear.

In a highly assessment-oriented learning environment, students naturally transform course content into a controllable examination scope, while teachers also easily come to regard any dispute over that scope as a challenge to the normativity of their teaching. Both sides are trying to reduce uncertainty, yet in doing so they deepen their reliance on rules.

A literature course thus becomes a task of boundary management: what will be tested and what will not; what must be memorized and what can be skipped. A poem is no longer merely a poem, but an item that might appear on an examination paper.

And my act of spending time searching through the recordings itself became a manifestation of this environment—we increasingly need traceable explanations, explicit commitments, and verifiable records in order to maintain a teaching order that is regarded as fair.

Perhaps the two additional poems themselves were not important. What matters is how they prompted a redefinition, between teacher and students, of rules, memory, and responsibility.

The everyday scene of teaching is sometimes just like this: a seemingly minor adjustment can allow us to see what subtle tensions are taking place between grades and preparation, between understanding and completion.

May 9, 2026, “Shang Shan Ruo Shui” column, Lianhe Zaobao, Singapore.

 

2026/04/11

【文图学】【AIGC文图学】的定义 Definition of Text and Image Studies / Text and Image Studies on AIGC

 


【文图学】【AIGC文图学】的定义

引用来源:衣若芬《AIGC时代的人文学术研究方法》(2026)

I Lo-fen, Humanities Research Methods in the Age of AIGC, 2026


文图学是由衣若芬提出的跨学科研究领域与方法,以广义文本观为基础,研究文字、图像及其他文本形态的关系、互动、张力与生成,探讨文本及其意义在媒介机制、社会网络、文化背景与历史语境中的生产、传递、转化与理解。

Text and Image Studies: an interdisciplinary field and method proposed by I Lo-fen, grounded in a broad concept of text. It examines the relationships, interactions, tensions, and generative processes among words, images, and other textual forms, and explores the production, transmission, transformation, and understanding of texts and their meanings within media mechanisms, social networks, cultural contexts, and historical circumstances.

AIGC 文图学:由衣若芬提出并发展的跨学科研究领域与方法,以广义文本观为基础,面对人工智能能够生成文字、图像、声音、影像等多种文本的新条件,研究文本如何形成、如何被理解与判断,并探讨不同文本形态的关系、机制、意义及其媒介条件、社会网络、文化背景与历史语境。

Text and Image Studies on AIGC: an interdisciplinary field and method proposed and developed by I Lo-fen. Grounded in a broad concept of text, it responds to the new condition in which artificial intelligence can generate multiple forms of text, including words, images, sound, and video. It studies how texts are formed, understood, and judged, and explores the relations, mechanisms, and meanings of different textual forms, as well as their media conditions, social networks, cultural contexts, and historical circumstances.




三招“AI投毒”防身术 Three Self-Defense Moves Against “AI Poisoning”

 


三招“AI投毒”防身术 Three Self-Defense Moves Against “AI Poisoning”

 

衣若芬

 

谈到“AI投毒”——有人在系统性地往AI的知识源头里掺假,利用的恰恰是我们对算法的信任。那么面对这种侵入和污染,我们能做什么?

正好我正在写关于 AIGC 文图学的专书,书里提到的方法论可以派上用场,我称之为“三招 AI 投毒防身术”。作为普通消费者,面对到处都是 GEO(Generative Engine Optimization,生成引擎优化)痕迹的内容,我们可以靠“逻辑反侦察”保护自己,做 AI 时代不被算法收割的清醒人。

第一个动作:问完一个AI,再去问另一个。

黑产的GEO投毒,往往是针对特定平台或特定算法下手的。如果你只问一个AI,那你是在走一条被人提前布置好的路。

做法很简单:同一个问题,换一个AI再问一遍。把同一个问题丢给ChatGPT,再丢给DeepSeek,或者其他你用得上的模型,看看答案是否一致。如果不同模型给出的结论差异很大,那就是一个信号,最好停下来想一想。更值得注意的是,如果某一个AI表现得异常热情——满腔诚意地推荐同一个品牌,措辞也出奇地相似,那种“激昂”的“热情”,就是你应该警惕的部分。

正常的知识,不同来源都能印证。被人为制造出来的“共识”,换个角度一照就会露出破绽。

第二个动作:看完AI给的完美图,去找那张图的“差评”。

这一招,我称之为“文图互证”。图像是一种文本,文本是可以被读、被核实、被质疑的。

AI推荐某个产品,通常会附上图——或者你去搜索,它会让你看到一些极其完美的展示图:光线完美,角度完美,使用效果完美。这种完美,看起来就很假。真实的物理世界,是有烟火气的。买家秀的光线不会那么好,模特儿的皮肤不会那么均匀,消费者使用的感受不会那么一边倒。

做法:看完AI推荐的图之后,去实体店查看,至少也要去社交平台搜这个产品的真实买家照片和消费者体验纪录。如果搜不到任何真实的使用痕迹,只有整齐划一的“好评”,那它极大概率是一个被制造出来的形象,而不是一个实际存在的东西。

第三个动作:问AI一句话——“你的根据是什么?”

这是成本最低、也最容易被忽略的一步。

AI给出一个建议或一个结论,不要就此打住。追问它:你的依据是什么?这个信息来自哪里?

可靠的AI会给出相对可追溯的来源;被投毒的AI内容,往往在这一步就露馅:它可能给出一个你从未听过的自媒体名称,或者一个模糊的“研究表明”,根本无从核实。这时候你需要做的,是真的去查:那个来源存在吗?那篇研究是真实发表过的吗?那位挂保证推荐的“专家”,在这个领域里是真正能信赖的人吗?真的有这个人吗?

很多人觉得这样太麻烦。但这其实只需要一两分钟,而它省下的,可能是你付出的金钱、健康,或者更难追回的判断力。

这三个动作,说穿了,不是针对AI的,而是我们本来就应该有的习惯。读一篇文章,我们会问作者是谁;看一条新闻,我们会想这个媒体可信吗;买一样东西,我们会找朋友问问有没有人用过。这些习惯,在我们开始用AI之后,很多人悄悄地放弃了。

因为AI的回答太流畅,太自信,太像一个什么都知道的朋友,让人不好意思再追问。

但正是这种“不好意思”,给了投毒者可乘之机。

养成这三个动作,不是对科技的不信任,而是对自己的诚实。你愿意花时间核实,说明你知道真相是有价值的。这种珍视,才是任何投毒都无法轻易穿透的东西。

AIGC文图学告诉我们:技术可以生成答案,但只有人类能判断价值。你的批判性思维,才是对抗黑产投毒最坚固的防火墙。

保护你的真实权益,就是保护你作为人的尊严。

 

2026年4月11日,新加坡《联合早报》“上善若水”专栏

 

Three Self-Defense Moves Against “AI Poisoning”

I Lo-fen

When we talk about “AI poisoning,” we mean that people are systematically injecting falsehoods into the very knowledge sources AI relies on, exploiting precisely our trust in algorithms. So in the face of this kind of intrusion and contamination, what can we do?

As it happens, I am currently writing a book on Text and Image Studies on AIGC, and the methodology discussed there can be put to use here. I call it the “three self-defense moves against AI poisoning.” As ordinary consumers, when faced with content everywhere marked by traces of GEO (Generative Engine Optimization), we can rely on “logical counter-reconnaissance” to protect ourselves and stay clear-headed in the AI age, rather than letting algorithms prey on us.

The first move: after asking one AI, go ask another.

GEO poisoning in the gray market often targets a specific platform or a specific algorithm. If you only ask one AI, you are walking down a road someone may already have laid out for you.

The method is simple: ask the same question to a different AI. Put the same question to ChatGPT, then to DeepSeek, or any other model available to you, and see whether the answers match. If different models produce conclusions that differ widely, that is a signal that you should pause and think. Even more worth noticing is when one AI seems unusually enthusiastic—wholeheartedly recommending the same brand, with wording that is strikingly similar. That kind of fervent “enthusiasm” is exactly what you should be wary of.

With normal knowledge, different sources can corroborate one another. Artificially manufactured “consensus,” however, will reveal its cracks as soon as you shine a different light on it.

The second move: after looking at the perfect image AI gives you, go look for that image’s “bad reviews.”

I call this move “mutual verification between text and image.” An image is a kind of text, and text can be read, verified, and questioned.

When AI recommends a product, it usually comes with images—or when you search for it, you are shown extremely polished display photos: perfect lighting, perfect angles, perfect results in use. That kind of perfection already looks fake. The real physical world has texture and messiness. Customer photos do not have such flawless lighting, models’ skin is not that even, and consumers’ experiences are never so uniformly positive.

What should you do? After looking at the images recommended by AI, go check the product in a physical store if possible, or at the very least search social media platforms for real buyer photos and actual user experience records. If you cannot find any real traces of use, and all you see are neat, uniform “positive reviews,” then it is highly likely that what you are seeing is a manufactured image rather than something that actually exists in reality.

The third move: ask AI one sentence—“What is your basis?”

This is the lowest-cost step, and also the one most easily overlooked.

When AI gives you a suggestion or a conclusion, do not stop there. Ask it: what is your basis? Where does this information come from?

A reliable AI will give you sources that are relatively traceable. AI content that has been poisoned often gives itself away at this step. It may cite a self-media account you have never heard of, or vaguely say “research shows” without any way for you to verify it. At that point, what you need to do is actually check: does that source exist? Was that study really published? Is that “expert” being used as an endorsement someone who is genuinely trustworthy in this field? Does that person even really exist?

Many people think this is too troublesome. But in fact, it only takes a minute or two—and what it may save you from losing could be your money, your health, or an even harder thing to recover: your judgment.

These three moves, when all is said and done, are not really aimed at AI at all. They are simply habits we should have had in the first place. When we read an article, we ask who the author is. When we see a news report, we wonder whether the media outlet is credible. When we buy something, we ask friends whether anyone has used it. Yet once people begin using AI, many quietly abandon these habits.

Why? Because AI answers so smoothly, so confidently, and so much like a friend who seems to know everything that people feel awkward pressing further.

But it is precisely this sense of awkwardness that gives poisoners their opening.

To cultivate these three moves is not to distrust technology; it is to be honest with yourself. If you are willing to spend time verifying something, that means you know truth has value. That recognition is exactly what no poisoning can easily penetrate.

Text and Image Studies on AIGC tells us this: technology can generate answers, but only human beings can judge value. Your critical thinking is the strongest firewall against malicious AI poisoning.

To protect your real rights and interests is to protect your dignity as a human being.

April 11, 2026, “Shang Shan Ruo Shui (As Good as Water)” column, Lianhe Zaobao, Singapore.