How to Give A.I. a Pinch of Consciousness

In 1998, an engineer in Sony’s computer science lab in Japan filmed a lost-looking robot moving tentatively around an enclosure. The robot was tasked with two objectives: avoid obstacles and find objects in the pen. It was able to do so because it could learn the contours of the enclosure and the locations of the sought-after objects.

But whenever the robot encountered an obstacle it didn’t expect, something interesting happened: Its cognitive processes momentarily became chaotic. The robot was grappling with new, unexpected data that didn’t match its predictions about the enclosure. The researchers who set up the experiment argued that the robot’s “self-consciousness” arose in this moment of incoherence. Rather than carrying on as usual, it had to turn its attention inward, so to speak, to decide how to deal with the conflict.

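Tani’s robots used recurrent neural networks to make their predictions, but the basic control loop the researchers describe is easy to sketch. The hypothetical code below (every name and the threshold are illustrative, not taken from the 1998 system) shows the general idea: the robot continually predicts its next sensor reading, and a spike in prediction error is what forces it to stop, attend, and re-plan.

```python
import numpy as np

# A minimal sketch of prediction-error-driven control, assuming a learned
# forward model; this is not Tani's actual architecture. All names and
# the threshold value are illustrative.
ERROR_THRESHOLD = 0.5

def control_step(forward_model, habitual_policy, replan, state, observation):
    predicted = forward_model(state)                 # what the robot expected to sense
    error = np.linalg.norm(observation - predicted)  # mismatch with reality
    if error > ERROR_THRESHOLD:
        # The "moment of incoherence": expectations failed, so attention
        # turns inward to resolve the conflict before acting again.
        return replan(state, observation)
    return habitual_policy(state)                    # carry on as usual
```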

This idea about self-consciousness — that it asserts itself in specific contexts, such as when we are confronted with information that forces us to reassess our environment and then make an executive decision about what to do next — is an old one, dating back to the work of the German philosopher Martin Heidegger in the early 20th century. Now, A.I. researchers are increasingly influenced by neuroscience and are investigating whether neural networks can and should achieve the same higher levels of cognition that occur in the human brain.

Far from the “stupid” robots of today, which don’t have any real understanding of where they are or what they experience, the hope is that a level of awareness analogous to consciousness in humans could make future A.I.s much more intelligent. They could learn by themselves, for example, how to select and focus on data in order to acquire new skills that they assimilate and go on to perform with ease. But giving machines the power to think like this also brings with it risks — and ethical uncertainties.

“I don’t design consciousness,” says Jun Tani, PhD, co-designer of the 1998 experiment and now a professor in the Cognitive Neurorobotics Research Unit at the Okinawa Institute of Science and Technology. He tells OneZero that to describe what his robots experience as “consciousness” is to use a metaphor. That is, the bots aren’t actually cogitating in a way we would recognize; they’re just exhibiting behavior that is structurally similar. And yet he is fascinated by parallels between machine minds and human minds, so much so that he has tried simulating the neural responses associated with autism via a robot.

“Research on consciousness is still considered somewhat taboo in A.I.”

One of the world’s foremost A.I. experts, Yoshua Bengio, founder of Mila, the Quebec Artificial Intelligence Institute, is likewise fascinated by consciousness in A.I. He uses the analogy of driving to describe the switch between conscious and unconscious actions.

“It starts by conscious control when you learn how to drive and then, after some practice, most of the work is done at an unconscious level and you can have a conversation while driving,” he explains via email.

That higher, attentive level of processing is not always necessary — or even desirable — but it seems to be crucial for humans to learn new skills or adapt to unexpected challenges. A.I. systems and robots could potentially avoid the stupidity that currently plagues them if only they could gain the same ability to prioritize, focus, and resolve problems.

Inspired in part by what we think we know about human consciousness, Bengio and his colleagues have spent several years working on the principle of “attention mechanisms” for A.I. systems. These systems are able to learn what data is relevant and therefore what to focus on in order to complete a given task.

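The article doesn’t say which formulation Bengio’s group works with, but the widely used scaled dot-product attention gives a concrete sense of the principle: the system scores every input for relevance and builds its output as a weighted summary, so the weights are, in effect, learned focus. A minimal NumPy sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each query scores all keys for relevance; the softmax turns those
    scores into a normalized "focus" over the inputs, and the output is
    the correspondingly weighted combination of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # relevance of each input to each query
    weights = softmax(scores)        # where to focus
    return weights @ V, weights

# Toy usage: one query attending over three inputs.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(1, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(Q, K, V)
print(weights)  # how strongly the query focuses on each of the three inputs
```

In a trained network, Q, K, and V come from learned projections of the data, which is how the system learns, rather than being told, what counts as relevant.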

“Research on consciousness,” Bengio adds, “is still considered somewhat taboo in A.I.” Because consciousness is such a difficult phenomenon to understand, even for neuroscientists, it has mostly been discussed by philosophers until now, he says.

Knowledge about the human brain and the human experience of consciousness is increasingly relevant to the pursuit of more advanced systems and has already led to some fascinating crossovers. Take, for example, the work by Newton Howard, PhD, professor of computational neurosciences and neurosurgery at the University of Oxford. He and colleagues have designed an operating system inspired by the human brain.

“When it’s deployed, it’s like a child. It’s eager to learn.”

Rather than rely on one approach to solving problems, it can choose the best data processing technique for the task in question — a bit like how different parts of the brain handle different sorts of information.

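The internals of Howard’s system aren’t public, so the sketch below is only a toy illustration of the routing idea, with entirely hypothetical names: an executive layer inspects each task and dispatches it to whichever specialized processor fits, loosely the way different brain regions specialize.

```python
# Toy dispatcher illustrating the routing idea described above; all names
# are hypothetical and the "processors" are stand-ins for real techniques.

def process_text(data):
    return f"parsed text: {data}"

def process_image(data):
    return f"found edges in: {data}"

def process_audio(data):
    return f"extracted spectrum of: {data}"

PROCESSORS = {
    "text": process_text,
    "image": process_image,
    "audio": process_audio,
}

def route(task_kind, data):
    """Pick the best-suited processing technique for the task at hand."""
    handler = PROCESSORS.get(task_kind)
    if handler is None:
        raise ValueError(f"no processor for task kind: {task_kind!r}")
    return handler(data)

print(route("text", "hello"))  # -> parsed text: hello
```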

He’s also experimenting with a system that can gather data from various sensors and sources in order to automatically build knowledge on various topics. “When it’s deployed, it’s like a child,” he says. “It’s eager to learn.”

All of this work, loosely inspired by what we know about human brains, may push the boundaries of what A.I. can accomplish today. And yet some argue it might not get us much closer to a truly conscious machine mind that has a sense of a self, a detached “soul” that inhabits its body (or chipset), with free will to boot.

The philosopher Daniel Dennett, who has spent much of his life thinking about what consciousness is and is not, argues that we won’t see machines develop this level of consciousness anytime soon — not even within 50 years. He and others have pointed out that the A.I.s we are able to build today seem to have no semblance of the reflective thinking or awareness that we assume are crucial for consciousness.

It’s in the search for a system that does possess these attributes, though, that a profound crossover between neuroscience and A.I. research might happen. At the moment, consciousness remains one of the great mysteries of science. No one knows exactly what activity in the brain it is tied to, though scientists are gradually working out that certain neural connections seem to be associated with it. Some researchers have found oscillations in brain activity that appear to be related to specific states of consciousness — signatures, if you like, of wakefulness.

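To make “oscillations” concrete: such signatures are often sought by checking which frequency bands dominate a recorded signal. The example below is illustrative only, using synthetic data and a far cruder method than real consciousness research; it measures how much of a signal’s power sits in the 8-12 Hz alpha band.

```python
import numpy as np

fs = 250.0                          # sampling rate in Hz (hypothetical recording)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic signal: a 10 Hz rhythm buried in noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

alpha = power[(freqs >= 8) & (freqs <= 12)].sum()  # power in the 8-12 Hz band
print(f"alpha-band share of total power: {alpha / power.sum():.2f}")
```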

By replicating such activity in a machine, we could perhaps enable it to experience conscious thought, suggests Camilo Miguel Signorelli, a research assistant in computer science at the University of Oxford.

He mentions the liquid “wetware” brain of the robot in Ex Machina, a gel-based container of neural activity. “I had to get away from circuitry, I needed something that could arrange and rearrange on a molecular level,” explains Oscar Isaac’s character, who has created a conscious cyborg.

“The risk of mistakenly creating suffering in a conscious machine is something that we need to avoid.”

“That would be an ideal system for an experiment,” says Signorelli, since a fluid, highly plastic brain might be configured to experience consciousness-forming neural oscillations — akin to the waves of activity we see in human brains.

This, it must be said, is highly speculative. And yet it raises the question of whether completely different hardware might be necessary for consciousness (as we experience it) to arise in a machine. Even if we do one day successfully confirm the presence of consciousness in a computer, Signorelli says that we will probably have no real power over it.

“Probably we will get another animal, humanlike consciousness but we can’t control this consciousness,” he says.

As some have argued, that could make such an A.I. dangerous and unpredictable. But a conscious machine that proves to be harmless could still raise ethical quandaries. What if it felt pain, despair, or a terrible state of confusion?

“The risk of mistakenly creating suffering in a conscious machine is something that we need to avoid,” says Andrea Luppi, a PhD student at the University of Cambridge who studies human brain activity and consciousness.

It may be a long time before we really need to grapple with this sort of issue. But A.I. research is increasingly drawing on neuroscience and ideas about consciousness in the pursuit of more powerful systems. That’s happening now. What sort of agent this will help us create in the future is, like the emergence of consciousness itself, tantalizingly difficult to predict.

Translated from: https://onezero.medium.com/how-to-give-a-i-a-pinch-of-consciousness-c70707d62b88
