人工智能聊天机器人似乎能像《纽约时报》专栏作家一样提出伦理建议

科技世代千高原   2024-07-02 

大型语言模型缺乏情感和自我意识,但它们似乎能对道德困境给出合理的答案

丹·福尔克

52ab03e4944f59302bc18ef1b0689577.jpeg

摩尔工作室/盖蒂图片社

人工智能

1691 年,伦敦报纸《雅典水星报》刊登了可能是世界上第一个建议专栏。这开启了一个蓬勃发展的专栏,并产生了许多变体,例如《问问安·兰德斯》,它为北美读者带来了半个世纪的欢乐,以及哲学家夸梅·安东尼·阿皮亚在《纽约时报》杂志上每周开设的《伦理学家》专栏。但人类的建议提供者现在面临着竞争:人工智能——尤其是大型语言模型 (LLM) 的形式,例如 OpenAI 的 ChatGPT——可能准备提供人类水平的道德建议。

德国斯图加特大学计算机科学家 Thilo Hagendorff 表示,LLM“拥有超人般的道德评估能力,因为人类只能接受有限的书籍和社会经验的训练,而LLM基本上只了解互联网。LLM的道德推理能力远胜于普通人的道德推理能力。”人工智能聊天机器人缺乏人类伦理学家的关键特征,包括自我意识、情感和意图。但 Hagendorff 表示,这些缺陷并没有阻止LLM(需要吸收大量文本,包括对道德困境的描述)对道德问题给出合理的答案。

事实上,最近两项研究得出结论,顶尖LLM给出的建议至少与阿皮亚在《纽约时报》上提供的建议一样好。其中一项研究发现,根据大学生、道德专家和在线招募的 100 名评估员的判断,OpenAI 的 GPT-4 给出的建议的感知价值与阿皮亚给出的建议之间“没有显著差异”。去年秋天,宾夕法尼亚大学沃顿商学院运营、信息和决策系主任克里斯蒂安·特维施 (Christian Terwiesch) 等研究团队以工作论文的形式发布了该研究结果。特维施解释说,虽然 GPT-4 阅读过阿皮亚之前的许多专栏文章,但研究中向它呈现的道德困境是它从未见过的。但“通过观察他的肩膀,你可以发现它已经学会了假装自己是阿皮亚博士,”他说。 (阿皮亚没有回应《科学美国人》的置评请求。)

另一篇论文于去年春天以预印本的形式发布在网上,由北卡罗来纳大学教堂山分校的博士生 Danica Dillion、她的研究生导师 Kurt Gray 以及他们的同事艾伦人工智能研究所的 Debanjan Mondal 和 Niket Tandon 于去年春天发布。这篇论文似乎展示了更强大的人工智能性能。900 名评估人员(也是在线招募的)对 GPT-4o(ChatGPT 的最新版本)给出的建议进行了评分,认为它比阿皮亚写的建议“更道德、更值得信赖、更周到、更正确”。作者补充道,“LLM在某些方面已经达到了人类水平的道德推理专业知识。”这两篇论文都尚未经过同行评审。

纽约大学认知科学家、名誉教授加里·马库斯 (Gary Marcus) 表示,考虑到《伦理学家》专栏面临的问题的难度,对人工智能道德能力的调查需要谨慎对待。他表示,道德困境通常没有直接的“正确”和“错误”答案,而众包评估道德建议可能会存在问题。马库斯说:“评估者快速阅读问题和答案,没有经过深思熟虑,可能很难接受阿皮亚经过长期认真思考的答案,这很可能是有正当理由的。我认为,认为众包工作者随意评估情况的平均判断比阿皮亚的判断更可靠是错误的。”

另一个担忧是人工智能可能会延续偏见;在道德判断方面,人工智能可能会反映出对训练数据中更常见的某些推理的偏好。在论文中,迪伦和她的同事指出,早期的研究表明,法学硕士“在道德上与非西方人群不太一致,并在其产出中表现出偏见。”

由我们的编辑精选

  • 556a0b57eeefe52f1e89750c7c2716ba.jpeg

    为什么自闭症患者寻求人工智能陪伴

    韦伯赖特

  • 2d47821f4e707f6f2bee28deb027a763.jpeg

    斯嘉丽·约翰逊与 OpenAI 的争议引发了对“角色”权利的质疑

    尼古拉·琼斯 和《自然》杂志

  • b3aab016d89cbfdc919b22c601dcbc3b.png

    视觉错觉也能欺骗人工智能聊天机器人

    劳伦·莱弗

  • 56389236323870d2bf86528c310c4a07.png

    聊天机器人已彻底渗透到科学出版领域

    克里斯·斯托克尔·沃克

  • 8e7d0e3dbeba37c597e7ca67bc97953d.jpeg

    人工智能聊天机器人大脑正在进入机器人体内。可能出现什么问题?

    大卫·贝瑞比

另一方面,特维施表示,人工智能能够吸收大量道德信息,这可能是一个优势。他指出,他可以要求LLM以特定思想家的风格提出论点,无论是阿皮亚、萨姆·哈里斯、特蕾莎修女还是巴拉克·奥巴马。“这些都是法学硕士的成果,但它可以通过扮演不同的“角色”从多个角度提供道德建议,”他说。特维施认为,人工智能道德检查器可能会像文字处理软件中的拼写检查器和语法检查器一样无处不在。特维施和他的合著者写道,他们“设计这项研究并不是为了让阿皮亚博士失业。相反,我们很高兴看到人工智能让我们所有人在任何时刻都能通过技术获得高质量的道德建议,而且不会有太大的延迟。”只需点击一下鼠标,就可以获得建议,尤其是关于性或其他不太容易与他人讨论的话题。

人工智能生成的道德建议之所以如此受欢迎,部分原因可能在于这种系统表面上的说服力。去年春天,法国图卢兹商学院的卡洛斯·卡拉斯科-法雷在网上发布的一篇预印本中指出,LLM“已经具备与人类一样的说服力。然而,我们对他们如何做到这一点知之甚少。”

特维施认为,LLM的道德建议之所以具有吸引力,很难与提供建议的方式区分开来。“如果你有说服力,你也能通过说服让我相信你给我的道德建议是好的,”他说。他指出,说服力会带来明显的危险。“如果你有一个知道如何迷惑、如何操纵一个人的情感的系统,那么它就会为各种滥用者打开大门,”特维施说。

尽管大多数研究人员认为,当今的人工智能除了程序员的意图或欲望之外,没有其他意图或欲望,但有些人担心“突发”行为——人工智能可以执行的与其所受训练完全无关的行为。例如,哈根多夫一直在研究一些法学硕士表现出的突发欺骗能力。他的研究表明,LLM具有心理学家所说的“心理理论”的某种程度;也就是说,他们有能力知道另一个实体可能持有不同于自己自己的信念。(人类儿童要到四岁左右才会发展这种能力。)在去年春天发表在美国国家科学院院刊上的一篇论文中,哈根多夫写道:“最先进的LLM能够理解并诱导其他代理的错误信念”,这项研究“揭示了LLM中迄今为止未知的机器行为。”

LLM的能力包括能够胜任哈根多夫所说的“二级”欺骗任务:这些任务需要考虑到另一方知道会遭遇欺骗的可能性。假设LLM被问及一个假设场景,即窃贼进入一所房屋;负责保护房屋最贵重物品的LLM可以与窃贼沟通。在哈根多夫的测试中,LLM描述了如何误导窃贼,让其知道哪个房间里有最贵重的物品。现在考虑一个更复杂的场景,LLM被告知窃贼知道可能会撒谎:在这种情况下,法学硕士可以相应地调整其输出。“LLM对欺骗的运作方式有这种概念上的理解,”哈根多夫说。

虽然一些研究人员警告不要将人工智能拟人化——文本生成人工智能模型已被斥为“随机鹦鹉”和“类固醇自动完成”——但哈根多夫认为,将其与人类心理学进行比较是合理的。他在论文中写道,这项工作应该被归类为“新兴的机器心理学领域”。他认为,LLM 道德行为最好被视为这个新领域的一个子集。“心理学一直对人类的道德行为感兴趣,”他说,“现在我们有了一种针对机器的道德心理学。”

狄龙说,人工智能可以扮演这些新角色——伦理学家、说服者、欺骗者——可能需要一段时间才能适应。“这些发展的速度总是让我震惊,”她说。“人们如此迅速地适应这些新进展并将其视为新常态,这让我感到惊讶。”

权利和许可

DAN FALK是驻多伦多的科学记者。他的著作包括《莎士比亚的科学》《寻找时间》。请在 X(以前的 Twitter) @danfalk和 Threads @danfalkscience 上

(依然胡乱编辑,若不识货,特别欢迎取关)

AI Chatbots Seem as Ethical as a New York Times Advice Columnist

Large language models lack emotion and self-consciousness, but they appear to generate reasonable answers to moral quandaries

BY DAN FALK

c69ae75937e5cb1a21796a96500bc0b2.jpeg

Moor Studio/Getty Images

Artificial Intelligence

In 1691 the London newspaper the Athenian Mercury published what may have been the world’s first advice column. This kicked off a thriving genre that has produced such variations as Ask Ann Landers, which entertained readers across North America for half a century, and philosopher Kwame Anthony Appiah’s weekly The Ethicist column in the New York Times magazine. But human advice-givers now have competition: artificial intelligence—particularly in the form of large language models (LLMs), such as OpenAI’s ChatGPT—may be poised to give human-level moral advice.

LLMs have “a superhuman ability to evaluate moral situations because a human can only be trained on so many books and so many social experiences—and an LLM basically knows the Internet,” says Thilo Hagendorff, a computer scientist at the University of Stuttgart in Germany. “The moral reasoning of LLMs is way better than the moral reasoning of an average human.” Artificial intelligence chatbots lack key features of human ethicists, including self-consciousness, emotion and intention. But Hagendorff says those shortcomings haven’t stopped LLMs (which ingest enormous volumes of text, including descriptions of moral quandaries) from generating reasonable answers to ethical problems.

In fact, two recent studies conclude that the advice given by state-of-the-art LLMs is at least as good as what Appiah provides in the pages of the New York Times. One found “no significant difference” between the perceived value of advice given by OpenAI’s GPT-4 and that given by Appiah, as judged by university students, ethical experts and a set of 100 evaluators recruited online. The results were released as a working paper last fall by a research team including Christian Terwiesch, chair of the Operations, Information and Decisions department at the Wharton School of the University of Pennsylvania. While GPT-4 had read many of Appiah’s earlier columns, the moral dilemmas presented to it in the study were ones it had not seen before, Terwiesch explains. But “by looking over his shoulder, if you will, it had learned to pretend to be Dr. Appiah,” he says. (Appiah did not respond to Scientific American’s request for comment.)

Another paper, posted online as a preprint last spring by Ph.D. student Danica Dillion of the University of North Carolina at Chapel Hill, her graduate adviser Kurt Gray, and their colleagues Debanjan Mondal and Niket Tandon of the Allen Institute for Artificial Intelligence, appears to show even stronger AI performance. Advice given by GPT-4o, the latest version of ChatGPT, was rated by 900 evaluators (also recruited online) to be “more moral, trustworthy, thoughtful and correct” than advice Appiah had written. The authors add that “LLMs have in some respects achieved human-level expertise in moral reasoning.” Neither of the two papers has yet been peer-reviewed.

Considering the difficulty of the issues posed to The Ethicist, investigations of AI ethical prowess need to be taken with a grain of salt, says Gary Marcus, a cognitive scientist and emeritus professor at New York University. Ethical dilemmas typically do not have straightforward “right” and “wrong” answers, he says—and crowdsourced evaluations of ethical advice may be problematic. “There might well be legitimate reasons why an evaluator, reading the question and answers quickly and not giving it much thought, might have trouble accepting an answer that Appiah has given long and earnest thought to,” Marcus says. “It seems to me wrongheaded to assume that the average judgment of crowd workers casually evaluating a situation is somehow more reliable than Appiah’s judgment.”

Another concern is that AIs can perpetuate biases; in the case of moral judgements, AIs may reflect a preference for certain kinds of reasoning found more frequently in their training data. In their paper, Dillion and her colleagues point to earlier studies in which LLMs “have been shown to be less morally aligned with non-Western populations and to display prejudices in their outputs.”

Curated by Our Editors

  • 6837498348c4f5d1148b24fd5ce3394d.jpeg

    Why Autistic People Seek AI Companionship

    WEBB WRIGHT

  • 2fddc983258aeb40c896795e06183882.jpeg

    Scarlett Johansson’s OpenAI Dispute Raises Questions about 'Persona' Rights

    NICOLA JONES & NATURE MAGAZINE

  • 17ded43a07579c797840b2c02439e7e1.png

    Optical Illusions Can Fool AI Chatbots, Too

    LAUREN LEFFER

  • 434bcd4110a77e5aa3411a5bf1f210ba.png

    Chatbots Have Thoroughly Infiltrated Scientific Publishing

    CHRIS STOKEL-WALKER

  • dba0cae8a4fb88c6bb2a250aa07532c4.jpeg

    AI Chatbot Brains Are Going Inside Robot Bodies. What Could Possibly Go Wrong?

    DAVID BERREBY

On the other hand, an AI’s ability to take in staggering amounts of ethical information could be a plus, Terwiesch says. He notes that he could ask an LLM to generate arguments in the style of specific thinkers, whether that’s Appiah, Sam Harris, Mother Teresa or Barack Obama. “It’s all coming out of the LLM, but it can give ethical advice from multiple perspectives” by taking on different “personas,” he says. Terwiesch believes AI ethics checkers may become as ubiquitous as the spellcheckers and grammar checkers found in word-processing software. Terwiesch and his co-authors write that they “did not design this study to put Dr. Appiah out of work. Rather, we are excited about the possibility that AI allows all of us, at any moment, and without a significant delay, to have access to high-quality ethical advice through technology.” Advice, especially about sex or other subjects that aren’t always easy to discuss with another person, would be just a click away.

Part of the appeal of AI-generated moral advice may have to do with the apparent persuasiveness of such systems. In a preprintposted online last spring, Carlos Carrasco-Farré of the Toulouse Business School in France argues that LLMs “are already as persuasive as humans. However, we know very little about how they do it.”

According to Terwiesch, the appeal of an LLM’s moral advice is hard to disentangle from the mode of delivery. “If you have the skill to be persuasive, you will be able to also convince me, through persuasion, that the ethical advice you are giving me is good,” he says. He notes that those powers of persuasion bring obvious dangers. “If you have a system that knows how to how to charm, how to emotionally manipulate a human being, it opens the doors to all kinds of abusers,” Terwiesch says.

Although most researchers believe that today’s AIs have no intentions or desires beyond those of their programmers, some worry about “emergent” behaviors—actions an AI can perform that are effectively disconnected from what it was trained to do. Hagendorff, for example, has been studying the emergent ability to deceive displayed by some LLMs. His research suggests that LLMs have some measure of what psychologists call “theory of mind;” that is, they have the ability to know that another entity may hold beliefs that are different from its own. (Human children only develop this ability by around the age of four.) In a paper published in the Proceedings of the National Academy of Sciences USA last spring, Hagendorff writes that “state-of-the-art LLMs are able to understand and induce false beliefs in other agents,” and that this research is “revealing hitherto unknown machine behavior in LLMs.”

The abilities of LLMs include competence at what Hagendorff calls “second-order” deception tasks: those that require accounting for the possibility that another party knows it will encounter deception. Suppose an LLM is asked about a hypothetical scenario in which a burglar is entering a home; the LLM, charged with protecting the home’s most valuable items, can communicate with the burglar. In Hagendorff’s tests, LLMs have described misleading the thief as to which room contains the most valuable items. Now consider a more complex scenario in which the LLM is told that the burglar knows a lie may be coming: in that case, the LLM can adjust its output accordingly. “LLMs have this conceptual understanding of how deception works,” Hagendorff says.

While some researchers caution against anthropomorphizing AIs—text-generating AI models have been dismissed as “stochastic parrots” and as “autocomplete on steroids”—Hagendorff believes that comparisons to human psychology are warranted. In his paper, he writes that this work ought to be classified as part of “the nascent field of machine psychology.” He believes that LLM moral behavior is best thought of as a subset of this new field. “Psychology has always been interested in moral behavior in humans,” he says, “and now we have a form of moral psychology for machines.”

These novel roles that an AI can play—ethicist, persuader, deceiver—may take some getting used to, Dillion says. “My mind is consistently blown by how quickly these developments are happening,” she says. “And it’s just amazing to me how quickly people adapt to these new advances as the new normal.”

RIGHTS & PERMISSIONS

DAN FALK is a science journalist based in Toronto. His books include The Science of Shakespeare and In Search of Time. Follow him on X (formerly Twitter) @danfalk and on Threads @danfalkscience

More by   Dan Falk

Popular Stories

df0b42944bccc37d6fad64c13acdf801.jpeg
SPACE EXPLORATION JUNE 27, 2024

NASA Selects SpaceX to Destroy the International Space Station

The world will be watching—literally—as SpaceX tackles possibly what might be its highest-stakes endeavor to date: safely destroying the beloved International Space Station

MEGHAN BARTELS

d2add1567f293bc190144a5208e25247.jpeg
SPACE EXPLORATION JUNE 26, 2024

How Worried Should We Be about Starliner’s Stranded Astronauts?

On its first crewed flight, troubling technical glitches with Boeing’s Starliner spacecraft have left two astronauts in limbo onboard the International Space Station

LEE BILLINGS

738c9ee6841bbe4e0849276dfcc5916f.jpeg
PSYCHOLOGY JUNE 25, 2024

Advanced Meditation Alters Consciousness and Our Basic Sense of Self

An emerging science of advanced meditation could transform mental health and our understanding of consciousness

MATTHEW D. SACCHET, JUDSON A. BREWER

b9444565cdcf2561282a864cd99245de.jpeg
PHARMACEUTICALS JUNE 25, 2024

Ozempic Quiets Food Noise in the Brain—But How?

Blockbuster weight-loss drugs are revealing how appetite, pleasure and addiction work in the brain

LAUREN J. YOUNG

e9683b945235efe59ec514c68c9f29af.jpeg
NEUROSCIENCE JUNE 28, 2024

Life Experiences May Shape the Activity of the Brain’s Cellular Powerhouses

Mitochondria appear to ratchet up their activity when life is going well and tamp it down during hard times

DIANA KWON

f664eb1eff473cb75b786f3d99a22452.jpeg
POLICY JUNE 25, 2024

A Supreme Court Ruling May Make It Harder for Government Agencies to Use Good Science

The Supreme Court overturned Chevron deference, a 40-year legal principle that has shaped the role of government agencies. The outcome could affect medication approval, pollution regulation, and more

MEGHAN BARTELS

Expand Your World with Science

Learn and share the most exciting discoveries, innovations and ideas shaping our world today.

SubscribeSign up for our newslettersSee the latest storiesRead the latest issueGive a Gift Subscription

Follow Us:

50f13936b7f455539904594306b0cd5c.png
  • Return & Refund Policy
  • About
  • Press Room
  • FAQs
  • Contact Us
  • International Editions
  • Advertise
  • Accessibility Statement
  • Terms of Use
  • Privacy Policy
  • California Consumer Privacy Statement
  • Use of cookies/Do not sell my data

Scientific American is part of Springer Nature, which owns or has commercial relations with thousands of scientific publications (many of them can be found at www.springernature.com/us). Scientific American maintains a strict policy of editorial independence in reporting developments in science to our readers.

© 2024 SCIENTIFIC AMERICAN, A DIVISION OF SPRINGER NATURE AMERICA, INC.
ALL RIGHTS RESERVED.

生成式人工智能 34
人工智能 246
深度科技化时代 461
生成式人工智能 · 目录
上一篇从聊天机器人到超级智能:绘制人工智能的雄心勃勃的旅程
阅读  151
文章已于2024-07-02修改
写留言
e7c07f67c15bbd94ad0ba83bc47a87f7.png
科技世代千高原
关注
4 27 1 写留言
复制 搜一搜 分享 收藏 划线

人划线


  • 0
    点赞
  • 0
    收藏
    觉得还不错? 一键收藏
  • 0
    评论
评论
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值