跟《经济学人》学英文:2024年08月10日这期 These are the two new books you need to read about AI

These are the two new books you need to read about AI

They explore the people who make AI work—for good or ill


Feeding the Machine. By James Muldoon, Mark Graham and Callum Cant. Bloomsbury; 288 pages; $29.99. Canongate; £20

Co-Intelligence. By Ethan Mollick. Portfolio; 256 pages; $30. WH Allen; £16.99

原文:

IT IS NOT only the business world that is excited about generative artificial intelligence (AI). So, too, are publishers. In the past 12 months at least 100 books about AI have been published in America, reckons Thad McIlroy, a contributing editor at Publishers Weekly, and many multiples of that have been self-published. At the top of the pile are two new titles, which represent opposing sides of a noisy debate: whether enthusiasm about AI’s benefits should outweigh concerns about its downsides.

对生成式人工智能(AI)感到兴奋的不仅仅是商界,出版商也是如此。《出版商周刊》特约编辑萨德·麦克洛伊估计,过去12个月里美国至少出版了100本关于人工智能的书,而自行出版的数量更是数倍于此。位居榜首的是两本新书,它们代表了一场喧闹辩论的对立双方:对人工智能好处的热情是否应该压过对其负面影响的担忧。

学习:

publisher:出版商

contributing editor:特约编辑

原文:

The darker side of the shiny AI era is the subject of “Feeding the Machine” by three academics at the University of Essex and Oxford University. Automation is born of exploitation, they contend. Today’s glowing data centres that run AI systems are akin to the soot-covered factories of the 19th century. Behind the algorithms are humans—yes, lavishly paid engineers, but also an army of workers who make the systems hum, from those who review the underlying data that are fed into the software to those who check its answers.

光鲜的人工智能时代的阴暗面,是《喂养机器》(Feeding the Machine)一书的主题,作者是埃塞克斯大学和牛津大学的三位学者。他们认为,自动化生于剥削。如今运行人工智能系统的灯火通明的数据中心,就像19世纪布满煤烟的工厂。算法背后是人——没错,有拿着高薪的工程师,但也有一支让系统运转起来的工人大军,从审核输入软件的底层数据的人,到检查软件答案的人。

学习:

academics:大学教师;学者;(academic的复数)

soot:美 [sʊt] 煤烟;油烟;

lavishly paid:高薪的

原文:

The authors delve into seven archetypal jobs in the AI supply chain. Online content moderators, often in poor countries, assess whether material on platforms such as Facebook is acceptable under the terms of service, which helps train automated systems. Data-centre technicians are always on call to ensure the infrastructure is up and running. This guzzles growing amounts of electricity. A ChatGPT prompt consumes around ten times as much energy as a Google search.

作者深入研究了人工智能供应链中的七种典型工作。在线内容审核员(往往身处贫穷国家)负责评估脸书等平台上的内容是否符合服务条款,这有助于训练自动化系统。数据中心技术人员随时待命,以确保基础设施正常运行。这消耗着越来越多的电力:一次ChatGPT查询消耗的能量约为一次谷歌搜索的十倍。

学习:

archetypal:美 [ˌɑːrkiˈtaɪp(ə)l] 典型的;反复出现的;原型的

moderators: 美 [ˈmɑːdəreɪtərz] (内容)审核员;版主;(moderator的复数)

technicians:美 [tekˈnɪʃənz] 技术员;专家;(technician的复数)

up and running:在运转, 在使用中

guzzles:狂饮;滥吃;(guzzle的第三人称单数)消耗
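
原文提到,一次ChatGPT查询消耗的能量大约是一次谷歌搜索的十倍。下面用一段很小的Python脚本把这个比例换算成直观的数量级,仅作示意:其中"每次谷歌搜索约0.3瓦时"和"每天查询次数"都是假设的大致数字,只有"十倍"这一比例取自原文,实际数值会因模型和硬件而异。

```python
# 粗略估算生成式 AI 查询的能耗(仅作示意,除 10 倍比例外均为假设数字)

GOOGLE_SEARCH_WH = 0.3         # 假设:一次谷歌搜索约耗电 0.3 瓦时(常被引用的大致数字)
CHATGPT_RATIO = 10             # 取自原文:一次 ChatGPT 查询约为谷歌搜索的 10 倍
chatgpt_wh = GOOGLE_SEARCH_WH * CHATGPT_RATIO

queries_per_day = 100_000_000  # 假设:每天 1 亿次查询,纯属示意
daily_kwh = chatgpt_wh * queries_per_day / 1000  # 瓦时换算为千瓦时

print(f"单次 ChatGPT 查询约 {chatgpt_wh:.1f} 瓦时")
print(f"每天 {queries_per_day:,} 次查询约消耗 {daily_kwh:,.0f} 千瓦时")
```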

原文:

Readers meet a voice actor who has to compete with an audio-synthesised version of herself for jobs and a machine-learning engineer who struggles with the ethical implications of her work—perpetuating bias, threatening jobs and potentially posing an existential risk to humanity. But the most interesting character-study is of Anita, a Ugandan data annotator, who spends mind-numbing ten-hour days in low lighting to prevent eye strain. A university graduate, her entrée into a glamorous career in tech amounts to watching a constant stream of video footage of car drivers, looking for evidence of driver fatigue—such as slumping shoulders or drooping eyes—and labelling it, all for around $1.20 an hour.

读者会遇到一位不得不与自己的合成语音版本竞争工作的配音演员,以及一位为自己工作的伦理影响而苦恼的机器学习工程师——这些影响包括固化偏见、威胁就业,甚至可能给人类带来生存风险。但最有意思的人物刻画是乌干达数据标注员安妮塔:为了防止眼睛疲劳,她每天要在昏暗的灯光下度过麻木的十个小时。作为一名大学毕业生,她踏入"光鲜"科技职业的入场券,不过是盯着源源不断的汽车司机视频,寻找疲劳驾驶的迹象——比如耷拉的肩膀或下垂的眼皮——并打上标签,时薪约1.2美元。

学习:

voice actor:配音演员,声优

mind-numbing:使人大脑麻木的

entrée:美 [ˈɑnˌtreɪ] 进入(某行业、圈子)的机会;进入权;加入权

video footage:录像;现场的录像片段

fatigue:美 [fəˈtiːɡ] 劳累;极度疲劳;疲乏;疲惫 注意发音

driver fatigue:司机疲劳;疲劳驾驶

drooping:无力的;下垂的
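
为了更直观地理解安妮塔这类数据标注工作,下面用一个极简的Python示意展示"疲劳驾驶"视频标注在数据层面大致是什么样子:类名、字段和标签都是假设的,并非任何真实标注平台的格式;时薪1.2美元取自原文,每条标注耗时30秒只是假设。

```python
from dataclasses import dataclass

# 示意性的标注记录:字段与标签均为假设,并非真实平台的格式
@dataclass
class FatigueAnnotation:
    clip_id: str        # 视频片段编号
    timestamp_s: float  # 片段内的时间点(秒)
    label: str          # "alert"(清醒)、"slumped_shoulders"(耷拉肩膀)、"drooping_eyes"(眼皮下垂)

annotations = [
    FatigueAnnotation("clip_0001", 12.5, "alert"),
    FatigueAnnotation("clip_0001", 47.0, "drooping_eyes"),
    FatigueAnnotation("clip_0002", 3.2, "slumped_shoulders"),
]

# 原文:时薪约 1.2 美元;假设标注一条记录平均需要 30 秒
HOURLY_WAGE_USD = 1.20
SECONDS_PER_ANNOTATION = 30
pay_per_annotation = HOURLY_WAGE_USD * SECONDS_PER_ANNOTATION / 3600
print(f"每条标注约合 {pay_per_annotation:.4f} 美元,共 {len(annotations)} 条")
```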

原文:

There is much to respect about the authors’ critical assessment of AI, yet also much to challenge. It is true that data-labelling is dreary, and that content-moderation can require mental-health counselling. But the authors grossly overstate their case. “The AI industry is just the next phase in a long journey that stretches back to the age of colonialism,” they argue. “The solution is to dismantle the machine and build something else in its place.” This extremism is ridiculous, considering that AI can automate expensive services like medical diagnoses, energy distribution and logistics—to name just three—which can help people in poor countries.

作者对人工智能的批判性评估有许多值得尊重之处,但也有不少值得商榷的地方。数据标注的确枯燥乏味,内容审核也可能让人需要心理咨询。但作者严重言过其实。他们提出:"人工智能产业只是一段可以追溯到殖民时代的漫长旅程的下一个阶段。""解决办法是拆掉这台机器,在原地另造别的东西。"考虑到人工智能可以把医疗诊断、能源调配和物流(仅举三例)等昂贵的服务自动化,从而帮助贫穷国家的人们,这种极端的主张十分荒谬。

学习:

dreary:美 [ˈdrɪri] 沉闷的;阴郁的;令人沮丧的;枯燥无味的;

grossly:严重地;极其;非常

colonialism:美 [kəˈloʊniəˌlɪzəm] 殖民主义;殖民政策;殖民统治

原文:

How AI gets built is only one facet of the technology. Another is how it gets used. A practical and more positive way to think about the interaction of people and AI is provided by Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School. He focuses on how people should learn to use generative AI services like ChatGPT. The technology is an “alien intelligence”, he says, that can augment humans’ own. But people need to raise their game in order to get the most from it. In that respect, AI is literally a “co-intelligence”, as the book’s title stresses.

人工智能如何被构建只是这项技术的一个方面,另一个方面是它如何被使用。宾夕法尼亚大学沃顿商学院(the University of Pennsylvania's Wharton School)教授伊桑·莫利克(Ethan Mollick)提供了一种更务实、更积极的方式来思考人与人工智能的互动。他关注的是人们应该如何学会使用ChatGPT这类生成式人工智能服务。他说,这项技术是一种"外星智能",可以增强人类自身的智能;但人们需要提升自己的水平,才能最大限度地加以利用。在这个意义上,正如书名所强调的,人工智能确实是一种"合作智能"(co-intelligence)。

学习:

facet: 美 [ˈfæsɪt] 方面;特征;(事物的)部分 注意发音

Wharton School:沃顿商学院

literally:(强调事实可能令人惊讶)确实地;真正地;实际上;

原文:

Mr Mollick introduces the idea of a “jagged frontier”: the boundary between what AI can and cannot do. It is jagged because it is not clear where humans are better, and the dividing line is always changing. For instance, prompting a large language model (LLM) for a sonnet of exactly 50 words may result in a beautiful text returned in just a few seconds with, alas, 48 words, not 50. This is because the system is designed to produce a simulacrum of what it has seen in its training data, not act as a counter or calculator. In this and in a myriad of other tasks, AI is weird. When it fails, it does so in ways that people would not.

莫利克引入了"锯齿状边界"的概念:人工智能能做什么与不能做什么之间的界线。之所以"锯齿状",是因为并不清楚人类在哪些地方更胜一筹,而且这条分界线一直在变。例如,要求大型语言模型(LLM)写一首恰好50个单词的十四行诗,它可能几秒钟就返回一段优美的文字,可惜只有48个单词,而不是50个。这是因为该系统的设计目标是模仿它在训练数据中见过的东西,而不是充当计数器或计算器。在这项任务和无数其他任务中,人工智能都很怪异:当它出错时,出错的方式往往是人类不会有的。

学习:

jagged:美 [ˈdʒæɡəd] 锯齿状的;参差不齐的;凹凸不平的;

dividing line:分界线

sonnet: 英 [ˈsɒnɪt] (意大利式或英格兰式的)十四行诗

alas:美 [əˈlæs] 悲哀地;遗憾地

simulacrum:美 [ˌsɪmjəˈleɪkrəm] 模拟物;仿制品;幻影
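
"锯齿状边界"在字数问题上体现得很具体:LLM能写出漂亮的十四行诗,却数不准单词。下面这段小脚本演示在模型外部校验字数、不达标就重试的思路,也就是由人(或代码)来充当"计数器"。其中 generate_sonnet() 只是假设的占位函数,代表你实际使用的任意 LLM 接口,需自行替换。

```python
# 示意:要求 LLM 写一首恰好 50 个单词的十四行诗,并在外部校验字数
# generate_sonnet() 是假设的占位函数,实际使用时应替换为真正的 LLM 调用

def generate_sonnet(prompt: str) -> str:
    # 占位实现:随便返回一段文本;真实场景中这里调用 LLM
    return "Shall I compare thee to a summer's day ..."

def word_count(text: str) -> int:
    return len(text.split())

prompt = "Write a sonnet of exactly 50 words."
n = 0
for _ in range(3):                       # 最多重试 3 次
    sonnet = generate_sonnet(prompt)
    n = word_count(sonnet)
    if n == 50:                          # LLM 不擅长计数,所以由代码来数
        print(sonnet)
        break
    prompt = f"Rewrite the sonnet so it has exactly 50 words; it currently has {n}."
else:
    print(f"重试后仍不是 50 个单词(最后一次为 {n} 个),需要人工修改。")
```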

原文:

As a result, people need to experiment with AI to learn its capabilities and flaws. Mr Mollick advocates four rules. First, “always invite AI to the table”; that is, try to find a way to use it in every task. Second, “be the human in the loop”—look for ways it can help, rather than replace you. Third, give the AI a persona and prod it. Oddly, LLMs work better when they are asked to adopt a persona, such as “you are a corporate-strategy expert with a flair for originality”. Fourth, assume this is the worst AI you will use—so do not be sanctimonious when it fails. Systems will only get better.

因此,人们需要通过动手试验来了解人工智能的能力和缺陷。莫利克提倡四条规则。第一,"总是请AI上桌",也就是说,在每项任务中都试着找到用它的办法。第二,"做回路中的人"——寻找它能帮助你而不是取代你的方式。第三,给AI设定一个角色,并不断敦促它。奇怪的是,当LLM被要求扮演某个角色时,它们表现得更好,比如"你是一位富有原创天赋的企业战略专家"。第四,假设这是你将用到的最差的AI——所以当它出错时,不要摆出一副居高临下的姿态。系统只会越来越好。

学习:

persona:美 [pərˈsoʊnə] 角色;人设;表面形象;伪装人格

prod:美 [prɑːd] 刺;戳;捅;激励

flair:天赋;天分;资质

sanctimonious: 英 [ˌsaŋ(k)tɪˈməʊnɪəs] 伪善的;道貌岸然的;假装虔诚的
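
莫利克的第三条规则("给AI一个角色")在实践中通常就是在对话开头加一条系统提示。下面是一个极简的Python示意,其中 chat() 是假设的占位函数,代表任何"消息列表"风格的聊天模型接口,需自行替换;角色设定的措辞取自原文的例子,用户问题则是随手编的。

```python
# 示意:用"角色设定 + 消息列表"的方式提示聊天模型
# chat() 为假设的占位函数,实际使用时应替换为你所用模型的调用方式

def chat(messages: list[dict]) -> str:
    # 占位实现:真实场景中这里调用聊天模型并返回其回复
    return "(模型回复占位)"

messages = [
    # 规则三:给 AI 一个角色(persona),措辞取自原文
    {"role": "system",
     "content": "You are a corporate-strategy expert with a flair for originality."},
    # 规则一:先把任务交给 AI 试一试
    {"role": "user",
     "content": "Suggest three unconventional growth strategies for a small publisher."},
]

reply = chat(messages)
print(reply)

# 规则二:做"回路中的人",把模型输出当草稿,由你来筛选和判断
# 规则四:假设这是你将用到的最差的 AI,系统只会越来越好
```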

原文:

The precepts force people to develop new skills to work with the machine, just as humans had to enhance their numeracy to work with calculators and spreadsheets, even as those tools made many things easier. “The strengths and weaknesses of AI may not mirror your own, and that’s an asset,” Mr Mollick writes. “This diversity in thought and approach can lead to innovative solutions and ideas that might never occur to a human mind.”

这些准则迫使人们发展与机器协作的新技能,就像人类当年必须提高计算能力才能用好计算器和电子表格一样,尽管这些工具让许多事情变得更容易了。莫利克写道:"人工智能的强项和弱点可能与你自己的并不一致,而这正是一种优势。这种思维和方法上的多样性,可以带来人类头脑也许永远想不到的创新方案和想法。"

学习:

numeracy: 美 [ˈnuːmərəsi] 识数;计算能力;数理能力

enhance their numeracy:提高计算能力

原文:

“Co-Intelligence” usefully brings data to bear on AI performance. LLMs score higher on creativity than most people, according to several studies in 2023 by researchers in America, Germany and Britain. AI also helps business people accomplish more tasks, work faster and improve the quality of their output, benefiting average workers most. For software developers, there was a 56% improvement on tasks, according to a study by Microsoft.

《合作智能》很有价值地用数据来说明人工智能的表现。根据美国、德国和英国研究人员2023年的几项研究,LLM在创造力上的得分高于大多数人。人工智能还能帮助商务人士完成更多任务、工作更快并提高产出质量,其中普通工人受益最大。根据微软的一项研究,软件开发人员完成任务的表现提高了56%。

学习:

business people:商务人士

average workers:普通工人

原文:

Yet AI’s usefulness presents a new problem: it lulls people into a dangerous complacency. When AI systems are very good, people tend to trust the output without fully scrutinising it. When the AI is good but not great, people are more attentive and add their own judgment, which improves performance. It is a reminder that in the AI age, humans are still needed—yet must become sharper still.

然而,人工智能的有用性带来了一个新问题:它让人们陷入危险的自满。当人工智能系统非常出色时,人们往往不加仔细审视就信任其输出;当人工智能不错但并非顶尖时,人们会更专注,并加入自己的判断,从而提高了表现。这提醒人们:在人工智能时代,人类仍然不可或缺——但必须变得更加敏锐。

学习:

lull:使(人)放松警惕;用欺骗手段减轻;哄骗;

complacency:美 [kəmˈpleɪsnsi] 自满;自鸣得意;沾沾自喜;

原文:

Amid AI hype in business, where companies say a lot but seem to do little, “Co-Intelligence” usefully notes that innovation is hard for organisations but easy for individuals. Hence, do not look for how AI will change business from chief executives’ statements but from ordinary worker-bees who quietly incorporate it into their everyday tasks. The revolution will be noticed only in hindsight.

在商业界对人工智能的大肆炒作中(公司说得很多,似乎做得很少),《合作智能》有益地指出:创新对组织来说很难,对个人来说却很容易。因此,要了解人工智能将如何改变商业,不要去看首席执行官的声明,而要看那些悄悄把它融入日常工作的普通员工。这场革命只有在事后回望时才会被注意到。

学习:

hype:大肆宣传;炒作;过分的宣传

ordinary worker-bees:普通员工;"工蜂"式的员工

everyday task:日常任务

hindsight:美 [ˈhaɪndsaɪt] 事后聪明;后见之明;事后的领悟;

原文:

So, gentle reader, did your correspondent use AI to write this review? Yes—it was entirely written by artificial intelligence. Every word of it. Just kidding. None of it was, actually. The reason is that writing is not just the output that readers consume but a process of reflection and intellectual discovery by the writer, hopefully to originate novel ideas, not just express existing ones. Yet Mr Mollick’s first rule was not disobeyed: an LLM was prompted to challenge the article’s points. (Sadly its response was so generic that a vituperous editor was needed instead.)

那么,亲爱的读者,笔者写这篇书评用了人工智能吗?用了——这篇文章完全由人工智能写成,每一个字都是。开个玩笑,其实一个字都不是。原因在于,写作不只是供读者消费的产出,更是作者反思和智识探索的过程,理想情况下是要产生新想法,而不只是表达已有的想法。不过,莫利克的第一条规则并没有被违背:我们确实提示了一个LLM来挑战本文的观点。(遗憾的是,它的回应太过泛泛,最后还得靠一位言辞刻薄的编辑来代劳。)

学习:

originate novel ideas:产生新的想法

vituperous:辱骂的

原文:

As AI becomes commonplace, people will be empowered as well as reduced by it. Whether humans are the master craftsmen to their algorithmic assistants, or they become mere apprentices to the AI masterminds, remains the question. It is not one ChatGPT can reliably answer. ■

随着人工智能变得司空见惯,人们既会被它赋能,也会被它贬低。究竟是人类作为大师级工匠指挥算法助手,还是人类沦为人工智能"主脑"的学徒,这仍是悬而未决的问题,而且不是一个ChatGPT能够可靠回答的问题。■

学习:

commonplace:普遍的;平凡的;常见的

craftsman:工匠;手艺人

apprentices:美 [əˈprentɪsɪz] 学徒;(apprentice的复数)

mastermind:主谋;策略家;创意者

后记

2024年8月13日13点51分于上海。
