20230316 English Learning

Article 1

Why We Forget Most of the Books We Read
Why do we forget most of the books we read?


Pamela Paul’s memories of reading are less about words and more about the experience. “I almost always remember where I was, and I remember the book itself. I remember the physical object,” says Paul, who reads, it is fair to say, a lot of books. “I remember the edition; I remember the cover; I usually remember where I bought it, or who gave it to me. What I don’t remember — and it’s terrible — is everything else.”

Surely some people can read a book or watch a movie once and retain the plot perfectly. But for many, the experience of consuming culture is like filling up a bathtub, soaking in it, and then watching the water run down the drain.

“Memory generally has a very intrinsic limitation,” says Faria Sana, an assistant professor of psychology at Athabasca University, in Canada.

The “forgetting curve” is steepest during the first 24 hours after you learn something. Unless you review the material, much of it slips down the drain after the first day.
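The article gives no numbers, but the Ebbinghaus forgetting curve it alludes to is often stylized as exponential decay. The sketch below assumes a memory “stability” of one day purely for illustration; none of these values come from the piece.

```latex
% A common stylized model of the forgetting curve (illustrative assumption, not from the article):
% R(t) is the fraction of material retained t days after learning, S is the memory's "stability".
R(t) = e^{-t/S}
% With an assumed S = 1 day and no review, retention after the first 24 hours is
R(1) = e^{-1} \approx 0.37
% i.e. roughly two-thirds of the material has slipped away, which is why the curve is steepest
% on day one and why reviewing (which effectively increases S) flattens it.
```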

Jared Horvath, a research fellow at the University of Melbourne, says that the way people now consume information and entertainment has changed what type of memory we value — and it’s not the kind that helps you hold onto the plot of a movie you saw six months ago.

In the internet age, recalling information from memory has become less necessary. “So long as you know where that information is at and how to access it, then you don’t really need to recall it,” Horvath says.

Research has shown that the internet functions as a sort of externalized memory. “When people expect to have future access to information, they have lower rates of recall of the information itself,” as one study puts it.

But even before the internet existed, entertainment products served as externalized memories for themselves. You don’t need to remember a quote from a book if you can just look it up. Once videotapes came along, you could review a movie or TV show fairly easily. There’s not a sense that if you don’t burn a piece of culture into your brain, it will be lost forever.

In the dialogue Plato wrote between Socrates and the aristocrat Phaedrus, Socrates tells a story about the god Theuth discovering “the use of letters.” The Egyptian king Thamus says to Theuth:

This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.

“In the dialogue Socrates hates writing because he thinks it’s going to kill memory,” Horvath says. “And he’s right. Writing absolutely killed memory. But think of all the incredible things we got because of writing. I wouldn’t trade writing for a better recall memory, ever.” Perhaps the internet offers a similar tradeoff: You can access and consume as much information and entertainment as you want, but you won’t retain most of it.

It’s true that people often shove more into their brains than they can possibly hold. Last year, Horvath and his colleagues at the University of Melbourne found that people who binge-watched TV shows forgot their content much more quickly than people who watched one episode a week. Right after finishing the show, the binge-watchers scored the highest on a quiz about it, but after 140 days, they scored lower than the weekly viewers.

People are binging on the written word, too. In 2009, the average American encountered 100,000 words a day, even if they didn’t “read” all of them. It’s hard to imagine that’s decreased in the nine years since.

In “Binge-Reading Disorder,” Nikkitha Bakshani analyzes the meaning of this statistic. “Reading is a nuanced word,” she writes, “but the most common kind of reading is likely reading as consumption: where we read, especially on the internet, merely to acquire information. Information that stands no chance of becoming knowledge unless it ‘sticks.’”

The lesson from his binge-watching study is that if you want to remember the things you watch and read, space them out. Memories get reinforced the more you recall them, Horvath says.


Article 2

Very Smart People Keep Failing the AI Mirror Test
The AI mirror test may be a paradox


In behavioral psychology, the mirror test is designed to discover animals’ capacity for self-awareness. There are a few variations of the test, but the essence is always the same: do animals recognize themselves in the mirror or think it’s another being altogether?

Right now, humanity is being presented with its own mirror test thanks to the expanding capabilities of AI — and a lot of otherwise smart people are failing it.

The mirror is the latest breed of AI chatbots, of which Microsoft’s Bing is the most prominent example. We’re convinced these tools might be the superintelligent machines from our stories because, in part, they’re trained on those same tales. Knowing this, we should be able to recognize ourselves in our new machine mirrors, but instead, it seems like more than a few people are convinced they’ve spotted another form of life.

This misconception is spreading with varying degrees of conviction. It’s been energized by a number of influential tech writers who have waxed lyrical about late nights spent chatting with Bing. They aver that the bot is not sentient, of course, but note, all the same, that there’s something else going on — that its conversation changed something in their hearts.

Having spent a lot of time with these chatbots, I recognize these reactions. But I also think they’re overblown and tilt us dangerously toward a false equivalence of software and sentience. In other words: they fail the AI mirror test.

What is important to remember is that chatbots are autocomplete tools. They’re systems trained on huge datasets of human text scraped from the web: on personal blogs, sci-fi short stories, forum discussions, movie reviews, social media diatribes, forgotten poems, antiquated textbooks, endless song lyrics, manifestos, journals, and more besides. These machines analyze this inventive, entertaining, motley aggregate and then try to recreate it. They are undeniably good at it and getting better, but mimicking speech does not make a computer sentient.

This is not a new problem, of course. The original AI intelligence test, the Turing test, is a simple measure of whether a computer can fool a human into thinking it’s real through conversation. An early chatbot from the 1960s named ELIZA captivated users even though it could only repeat a few stock phrases, leading to what researchers call the “ELIZA effect” — or the tendency to anthropomorphize machines that mimic human behavior.

Now, though, these computer programs are no longer relatively simple and have been designed in a way that encourages such delusions. In a blog post responding to reports of Bing’s “unhinged” conversations, Microsoft cautioned that the system “tries to respond or reflect in the tone in which it is being asked to provide responses.” It is a mimic trained on unfathomably vast stores of human text — an autocomplete that follows our lead.

Researchers have even found that this trait increases as AI language models get bigger and more complex. Researchers at startup Anthropic — itself founded by former OpenAI employees — tested various AI language models for their degree of “sycophancy,” or tendency to agree with users’ stated beliefs, and discovered that “larger LMs are more likely to answer questions in ways that create echo chambers by repeating back a dialog user’s preferred answer.”

To say that we’re failing the AI mirror test is not to deny the fluency of these tools or their potential power. It is undeniably fun to talk to chatbots — to draw out different “personalities,” test the limits of their knowledge, and uncover hidden functions. Chatbots present puzzles that can be solved with words, and so, naturally, they fascinate writers.

But in a time of AI hype, it’s dangerous to encourage such illusions. What we know for certain is that Bing, ChatGPT, and other language models are not sentient, and neither are they reliable sources of information. They make things up and echo the beliefs we present them with. To give them the mantle of sentience — even semi-sentience — means granting them undeserved authority, over both our emotions and the facts with which we understand the world.

It’s time to take a hard look in the mirror. And not mistake our own intelligence for a machine’s.
