A collection of articles on AI hallucinations.
Hallucinations: Why AI Makes Stuff Up, and What’s Being Done About It
There’s an important distinction between using AI to generate content and to answer questions.
Lisa Lacy
April 1, 2024 5:00 a.m. PT
8 min read
Less than two years ago, cognitive and computer scientist Douglas Hofstadter demonstrated how easy it was to make AI hallucinate when he asked a nonsensical question and OpenAI’s GPT-3 replied, “The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.”
Now, however, GPT-3.5 — which powers the free version of ChatGPT — tells you, “There is no record or historical event indicating that the Golden Gate Bridge, which is located in San Francisco, California, USA, was ever transported across Egypt.”
It’s a good example of how quickly these AI models evolve. But for all the improvements on this front, you still need to be on guard.
AI chatbots continue to hallucinate and present material that isn’t real, even if the errors are less glaringly obvious. And the chatbots confidently deliver this information as fact, which has already generated plenty of challenges for tech companies and headlines for media outlets.
Taking a more nuanced view, hallucinations are actually both a feature and a bug — and there’s an important distinction between using an AI model as a content generator and tapping into it to answer questions.
Since late 2022, we’ve seen the introduction of generative AI tools like ChatGPT, Copilot and Gemini from tech giants and startups alike. As users experiment with these tools to write code, essays and poetry, perfect their resumes, create meal and workout plans and generate never-before-seen images and videos, we continue to see mistakes, like inaccuracies in historical image generation. It’s a good reminder generative AI is still very much a work in progress, even as companies like Google and Adobe showcase tools that can generate games and music to demonstrate where the technology is headed.
If you’re trying to wrap your head around what hallucinations are and why they happen, this explainer is for you. Here’s what you need to know.
What is an AI hallucination?
A generative AI model “hallucinates” when it delivers false or misleading information.
A frequently cited example comes from February 2023 when Google’s Bard chatbot (now called Gemini) was asked about the discoveries made by NASA’s James Webb Space Telescope and it incorrectly stated the telescope took the first pictures of an exoplanet outside our solar system. But there are plenty of others.
ChatGPT falsely stated an Australian politician was one of the guilty parties in a bribery case when he was in fact the whistleblower. And during a two-hour conversation, Bing’s chatbot eventually professed its love for New York Times tech columnist Kevin Roose.
According to Stefano Soatto, vice president and distinguished scientist at Amazon Web Services, a hallucination in AI is “synthetically generated data,” or “fake data that is statistically indistinguishable from actual factually correct data.” (Amazon Web Services works with clients like LexisNexis and Ricoh to build generative AI applications with Anthropic’s Claude 3 Haiku model.)
Let’s unpack that a little. Take, for example, an AI model that can generate text and was trained on Wikipedia. Its purpose is to generate text that looks and sounds like the posts we already see on Wikipedia.
In other words, the model is trained to generate data that is “statistically indistinguishable” from the training data, or that has the same type of generic characteristics. There’s no requirement for it to be “true,” Soatto said.
How and why does AI hallucinate?
It all goes back to how the models were trained.
The large language models that underpin generative AI tools are trained on massive amounts of data, like articles, books, code and social media posts. They’re very good at generating text that’s similar to whatever they saw during training.
Let’s say the model has never seen a sentence with the word “crimson” in it. It can nevertheless infer this word is used in similar contexts to the word “red.” And so it might eventually say something is crimson in color rather than red.
“It generalizes or makes an inference based on what it knows about language, what it knows about the occurrence of words in different contexts,” said Swabha Swayamdipta, assistant professor of computer science at the USC Viterbi School of Engineering and leader of the Data, Interpretability, Language and Learning (DILL) lab. “This is why these language models produce facts which kind of seem plausible but are not quite true because they’re not trained to just produce exactly what they have seen before.”
Hallucinations can also result from improper training and/or biased or insufficient data, which leave the model unprepared to answer certain questions.
“The model doesn’t have contextual information,” said Tarun Chopra, vice president of product management at IBM Data & AI. “It’s just saying, ‘Based on this word, I think that the right probability is this next word.’ That’s what it is. Just math in the basic sense.”
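To make that “just math” point concrete, here is a deliberately tiny sketch of next-word prediction, using a hand-written toy probability table rather than a real trained model: the program keeps appending whichever word is statistically likely to follow, and nothing in the loop ever checks whether the resulting sentence is true.

```python
import random

# Toy stand-in for a language model: for each preceding word, the estimated
# probability of the word that follows. Real LLMs condition on long contexts
# and billions of parameters, but the selection principle is the same.
NEXT_WORD_PROBS = {
    "telescope": {"captured": 0.6, "discovered": 0.3, "orbited": 0.1},
    "captured": {"the": 0.9, "an": 0.1},
    "the": {"first": 0.5, "exoplanet": 0.3, "image": 0.2},
}

def generate(start: str, max_words: int = 4) -> str:
    """Repeatedly sample a plausible next word; truth is never consulted."""
    words = [start]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if not probs:
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("telescope"))  # e.g. "telescope captured the first"
# Fluent and statistically likely, but whether the telescope actually
# captured anything "first" is simply not part of the computation.
```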
How often does AI hallucinate?
Estimates from gen AI startup Vectara show chatbots hallucinate anywhere from 3% to 27% of the time. It has a Hallucination Leaderboard on developer platform Github, which keeps a running tab on how often popular chatbots hallucinate when summarizing documents.
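To see what a number like that measures, here is a toy sketch of how a summarization hallucination rate could be computed (this is not Vectara's method, which relies on a trained factual-consistency model): given source/summary pairs and some judge of whether a summary is supported, the rate is just the fraction of summaries the judge flags. The is_supported heuristic below is a crude keyword stand-in for that judge.

```python
def is_supported(source: str, summary: str) -> bool:
    """Crude stand-in for a factual-consistency judge: flag any summary
    sentence whose longer words never appear in the source document."""
    source_words = set(source.lower().split())
    for sentence in summary.split("."):
        content = [w for w in sentence.lower().split() if len(w) > 4]
        if content and not any(w in source_words for w in content):
            return False
    return True

def hallucination_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of (source, summary) pairs judged unsupported."""
    flagged = sum(1 for source, summary in pairs if not is_supported(source, summary))
    return flagged / len(pairs)

# e.g. hallucination_rate([(doc, summarize(doc)) for doc in documents]),
# where summarize() is whatever model is being evaluated.
```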
Tech companies are well aware of these limitations.
For example, GPT-3.5 warns, “ChatGPT can make mistakes. Consider checking important information,” while Google includes a disclaimer that says, “Gemini may display inaccurate info, including about people, so double-check responses.”
An OpenAI spokesperson said the company is “continuing to make improvements to limit the issue as we make model updates.”
According to OpenAI’s figures, GPT-4, which came out in March 2023, is 40% more likely to produce factual responses than its predecessor, GPT-3.5.
In a statement, Google said, “As we’ve said from the beginning, hallucinations are a known challenge with all LLMs — there are instances where the AI just gets things wrong. This is something that we’re constantly working on improving.”
When asked about hallucinations in its products, a Microsoft spokesperson said it has “made progress on grounding, fine-tuning and steering techniques to help address when an AI model or AI chatbot fabricates a response.”
Can you prevent AI hallucinations?
We can’t stop hallucinations, but we can manage them.
One way is to ensure the training data is of a high quality and adequate breadth and the model is tested at various checkpoints.
Swayamdipta suggested a set of journalism-like standards in which outputs generated by language models are verified by third-party sources.
Swayamdipta 提出了一套类似新闻的标准,其中语言模型生成的输出由第三方来源进行验证。
Another solution is to embed the model within a larger system — more software — that checks consistency and factuality and traces attribution.
“Hallucination as a property of an AI model is unavoidable, but as a property of the system that uses the model, it is not only avoidable, it is very avoidable and manageable,” Soatto said.
This larger system could also help businesses make sure their chatbots are aligned with other constraints, policies or regulations — and avoid the lawsuit Air Canada found itself in after its chatbot hallucinated details about the airline’s bereavement policy that were inaccurate.
“If users hope to download a pretrained model from the web and just run it and hope that they get factual answers to questions, that is not a wise use of the model because that model is not designed and trained to do that,” Soatto added. “But if they use services that place the model inside a bigger system where they can specify or customize their constraints … that system overall should not hallucinate.”
A quick check for users is to ask the same question in a slightly different way to see how the model’s response compares.
“If someone is a habitual liar, every time they generate a response, it will be a different response,” said Sahil Agarwal, CEO of AI security platform Enkrypt AI. “If a slight change in the prompt vastly deviates the response, then the model actually didn’t understand what we’re asking it in the first place.”
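A rough sketch of that paraphrase check is below. It assumes a placeholder ask_model function standing in for whatever chat model you actually call, and it scores agreement with simple string similarity; a real system would use something stronger, but the idea of flagging answers that shift under rephrasing is the same.

```python
import difflib

def ask_model(prompt: str) -> str:
    """Placeholder: replace with a call to your chat model of choice.
    (Hypothetical stub; the real call depends on your provider's SDK.)"""
    raise NotImplementedError

def consistency_score(question: str, paraphrases: list[str]) -> float:
    """Ask the same question several ways and score how much the answers agree."""
    answers = [ask_model(q) for q in [question, *paraphrases]]
    pair_scores = [
        difflib.SequenceMatcher(None, a, b).ratio()
        for i, a in enumerate(answers)
        for b in answers[i + 1:]
    ]
    return sum(pair_scores) / len(pair_scores)

# Usage sketch: a score near 1.0 suggests a stable answer; a low score
# suggests the model may be guessing and the output deserves fact-checking.
# consistency_score(
#     "When did the James Webb Space Telescope launch?",
#     ["What year was JWST launched?", "On what date did JWST lift off?"],
# )
```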
Are AI hallucinations always bad?
The beauty of generative AI is its potential for new content, so sometimes hallucinations can actually be welcome.
“We want these models to come up with new scenarios, or maybe new ideas for stories or … to write a sonnet in the style of Donald Trump,” Swayamdipta said. “We don’t want it to produce exactly what it has seen before.”
And so there’s an important distinction between using an AI model as a content generator and using it to factually answer questions.
“It’s really not fair to ask generative models to not hallucinate because that’s what we train them for,” Soatto added. “That’s their job.”
How do you know if an AI is hallucinating?
If you’re using generative AI to answer questions, it’s wise to do some external fact-checking to verify responses.
It might also be a good idea to lean in to generative AI’s creative strengths but use other tools when seeking factual information.
“I might go to a language model if I wanted to rephrase something or help with some kind of writing tasks as opposed to a task that involves correct information generation,” Swayamdipta said.
Another option is retrieval augmented generation (RAG). With this feature, the overall system fact-checks sources and delivers responses with a link to said source, which the user can double-check.
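Here is a minimal sketch of that retrieval-augmented pattern, assuming a tiny in-memory corpus and naive keyword-overlap retrieval (production systems use vector search and a real chat-model call): the retrieved passages and their URLs are placed into the prompt so the answer can cite a source the user can double-check.

```python
# Tiny illustrative corpus; the URLs and passages are made up for the example.
CORPUS = {
    "https://example.com/webb": "The James Webb Space Telescope launched on 25 December 2021.",
    "https://example.com/hubble": "Hubble launched in 1990 and remains in low Earth orbit.",
}

def retrieve(question: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank sources by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        CORPUS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Ground the answer in retrieved text and ask the model to cite the link."""
    context = "\n".join(f"[{url}] {text}" for url, text in retrieve(question))
    return (
        "Answer using only the sources below and cite the matching link.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("When did the James Webb telescope launch?"))
# The assembled prompt would then be sent to the chat model of your choice.
```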
OpenAI’s GPT-4 has the ability to browse the Internet if it doesn’t know the answer to a query — and it will cite where the information came from.
Microsoft also can search the web for relevant content to inform its responses. And Copilot includes links to websites where users can verify responses.
Will we ever get to a point where AI doesn’t hallucinate?
Hallucinations are a result of training data limitations and lack of world knowledge, but researchers are working to mitigate them with better training data, improved algorithms and the addition of fact-checking mechanisms.
In the short term, the technology companies behind generative AI tools have added disclaimers about hallucinations.
Human oversight is another aspect to potentially better manage hallucinations within the scope of factual information. But it also may come down to government policies to ensure guardrails are in place to guide future development.
The EU in March approved the Artificial Intelligence Act, which seeks to foster the development of trustworthy AI with clear requirements and obligations for specific uses.
According to Chopra, the EU AI Act “provides a much tidier framework for ensuring transparency, accountability and human oversight” in developing and deploying AI. “Not every country is going to do the same thing, but the basic principles … are super, super critical,” he added.
Until then, we’ll have to use a multi-pronged strategy to take advantage of what these models offer while limiting any risks.
“I think it helps to not expect of machines what even humans cannot do, especially when it comes to interpreting the intent of humans,” Soatto said. “It’s important for humans to understand [AI models], exploit them for what they can do, mitigate the risks for what they’re not designed to do and design systems that manage them.”
via:
- Hallucinations: Why AI Makes Stuff Up, and What’s Being Done About It
https://www.cnet.com/tech/hallucinations-why-ai-makes-stuff-up-and-whats-being-done-about-it/
–
When AI Gets It Wrong: Addressing AI Hallucinations and Bias
At a Glance
Generative AI has the potential to transform higher education—but it’s not without its pitfalls. These technology tools can generate content that’s skewed or misleading (Gen