Assessing The Feasibility Of Provably Beneficial AI Plus How It Applies To Self-Driving Cars

Dr. Lance Eliot, AI Insider

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Will AI be beneficial?

If so, can we prove definitively that a particular AI system is indeed beneficial, verifying as such before it is released and also subsequently after its release?

Among the plethora of AI systems, some will undoubtedly be, or might eventually become, untoward, working in non-beneficial ways, carrying out detrimental acts that in some manner cause irreparable harm, injury, and possibly even death to humans.

There is a distinct possibility that there are toxic AI systems among the ones that are aiming to help mankind.

We do not know whether it might be just a scant few that are reprehensible or whether it might be the preponderance that goes that malevolent route.

One crucial twist that accompanies an AI system is that it is often devised to learn while in use; thus, there is a real chance that the original intent will be waylaid over time, drifting into foul territory, ultimately exceeding any preset guardrails and veering into evil-doing.

Proponents of AI cannot assume that AI will necessarily always be cast toward goodness.

There is the noble desire to achieve AI For Good, and likewise the ghastly underbelly of AI For Bad.

To clarify, even if AI developers had something virtuous in mind, realize that their creation can either on its own transgress into badness as it adjusts on-the-fly via Machine Learning (ML) and Deep Learning (DL), or it could contain unintentionally seeded errors or omissions that when later encountered during use are inadvertently going to generate bad acts.

Somebody ought to be doing something about this, you might be thinking and likewise wringing your hands worryingly.

Proposed Approach Of Provably Beneficial AI

One such proposed solution is an arising focus on provably beneficial AI.

Here’s the background.

If an AI system could be mathematically modeled, it might be feasible to perform a mathematical proof that would logically indicate whether the AI will be beneficial or not.

As such, anyone embarking on putting an AI system into the world would be able to run the AI through this provability approach and then be confident that their AI will be in the AI For Good camp. Those that endeavor to use the AI, or that become reliant upon it, would be comforted by the fact that the AI was proven to be beneficial.

Voila, we turn the classic notion that A implies B, and that B implies C, into the strongly logical conclusion that A implies C, a kind of tightly interwoven mathematical logic that can be applied to AI.

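To make that chain of reasoning concrete, the inference rule being alluded to is the classic hypothetical syllogism, written here in standard logical notation as a general illustration (it is not a formula drawn from any specific provability framework):

$$(A \rightarrow B) \wedge (B \rightarrow C) \;\vdash\; (A \rightarrow C)$$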

For those that look to the future and see a potential for AI that might overtake mankind, perhaps becoming a futuristic version of a frightening Frankenstein, this idea of clamping down on AI by having it undergo a provability mechanism to ensure it is beneficial offers much relief and excitement.

We all ought to rejoice in the goal of being able to provably showcase that an AI system is beneficial.

Well, other than those that are on the foul side of AI, aiming to use AI for devious deeds and purposely seeking to do AI For Bad. Such actors would be likely to eschew any such proofs and instead offer pretenses that their AI is aimed at goodness, as a means of distracting from its true goals (meanwhile, some might come straight out and proudly proclaim they are making AI for destructive aspirations, the so-called Dr. Evil flair).

There seems to be little doubt that overall, the world would be better off if there was such a thing as provably beneficial AI.

We could use it on AI that is being unleashed into the real world, and then be heartened that we have done our best to keep AI from doing us in, and accordingly use our remaining energies on keeping watch over the non-proven AI that is either potentially afoul or that might be purposely crafted to be adverse.

Regrettably, there is a rub.

The rub is that wanting to have a means for creating or verifying provably beneficial AI is a lot harder than it might sound.

Let’s consider one such approach.

Professor Stuart Russell at the University of California Berkeley is at the forefront of provably beneficial AI and offers in his research that there are three core principles involved (as indicated in his research paper at https://people.eecs.berkeley.edu/~russell/papers/russell-bbvabook17-pbai.pdf):

1) “The machine’s purpose is to maximize the realization of human values. In particular, it has no purposes of its own and no innate desire to protect itself.”

2) “The machine is initially uncertain about what those human values are. The machine may learn more about human values as it goes along, of course, but it may never achieve complete certainty.”

3) “Machines can learn about human values by observing the choices that we humans make.”

Those core principles are then formulated into a mathematical framework, and an AI system is either designed and built according to those principles from the ground up, or an existing AI system might be retrofitted to abide by those principles (the retrofitting would generally be unwise, as it is easier and more parsimonious to start things the right way rather than trying, later on, to squeeze a square peg into a round hole, as it were).

For those of you that are AI insiders, you might recognize this approach as a Cooperative Inverse Reinforcement Learning (CIRL) scheme, whereby multiple agents work cooperatively; the agents, in this case, are a human and an AI, and the AI attempts to learn the human’s values from the human’s actions rather than from its own direct actions per se.

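To give a flavor of that learning-by-observation idea, here is a minimal, hypothetical sketch of a CIRL-style value learner in Python. The feature names, the softmax model of the human, and all numeric values are illustrative assumptions of mine, not Russell’s actual formulation:

```python
import numpy as np

# Sketch of the CIRL flavor: the AI is uncertain about the human's reward
# weights (principle 2) and updates its belief by observing the human's
# choices (principle 3), assuming the human picks actions roughly in
# proportion to how much reward they yield (a softmax observation model).

rng = np.random.default_rng(0)

# Hidden "human values": weights over two outcome features, e.g.,
# [productivity, safety]. The AI never sees these directly.
true_theta = np.array([0.3, 0.7])

# Candidate actions, each described by its outcome features.
actions = np.array([
    [1.0, 0.0],   # maximize output, ignore safety
    [0.6, 0.5],   # balanced
    [0.1, 1.0],   # prioritize safety
])

def choice_probs(theta):
    """Softmax over action utilities for a human whose values are theta."""
    utilities = actions @ theta
    probs = np.exp(5.0 * utilities)
    return probs / probs.sum()

# The AI's belief: a discrete grid of hypotheses about theta, initially uniform.
hypotheses = np.array([[w, 1.0 - w] for w in np.linspace(0.0, 1.0, 101)])
belief = np.full(len(hypotheses), 1.0 / len(hypotheses))

# Observe 50 human choices and do a Bayesian update after each one.
for _ in range(50):
    observed = rng.choice(len(actions), p=choice_probs(true_theta))
    belief *= np.array([choice_probs(h)[observed] for h in hypotheses])
    belief /= belief.sum()

estimate = belief @ hypotheses
print("AI's estimate of the human's value weights:", np.round(estimate, 2))
```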

Some would bluntly say that this particular approach to provably beneficial AI is shaped around making humans happy with the results of the AI efforts.

And making humans happy sure seems like a laudable ambition.

The Complications Involved

It turns out that there is no free lunch in trying to achieve provably beneficial AI.

Consider some of the core principles and what they bring about.

The first stated principle is that the AI is aimed to maximize the realization of human values and that the AI has no purposes of its own, including no desire to protect itself.

Part of the basis for making this rule is that it would seem to do away with the classic paperclip problem or the King Midas problem of AI.

Allow me to explain.

Hypothetically, suppose an AI system was set up to produce paperclips. If the AI is solely devoted to that function, it might opt to do so in ways that are detrimental to mankind. For example, to produce as many paperclips as possible, the AI begins to take over steel production to ensure that there are sufficient materials to make paper clips. Soon, in a draconian way, the AI has marshaled all of the world’s resources to incessantly make those darned paperclips.

Plus, horrifically, humanity might be deemed as getting in the way of the paperclip production, and so the AI then wipes out humanity too.

All in all, this is decidedly not what we would have hoped for as a result of the AI paperclip making system.

This is similar to the fable of King Midas, whereby everything he touched turned to gold, which at first seemed like a handy way to get rich, but then the water he touched turned to gold, and his food turned to gold, and so on; ultimately he did himself in and realized that his wish was a curse.

Thus, rather than AI having a goal that it embodies, such as making paper clips, the belief in this version of provably beneficial AI is that it would be preferred that the AI not have any self-beliefs and instead entirely be driven by the humans around it.

Notice too that the principle states that the AI is established such that it has no desire to protect itself.

Why so?

Aha, this relates to another classic AI problem, the off-switch or kill-switch issue.

Assume that any AI that we humans craft will have some form of off-switch or kill-switch, meaning that if we wanted to do so, we could stop the AI, presumably whenever we deemed desirable to so halt. Certainly, this would be a smart thing for us to do, else we might have that crazed paperclip maker and have no means to prevent it from overwhelming the planet in paperclips.

If the AI has any wits about it, which we are kind of assuming it would, the AI would be astute enough to realize that there is an off-switch and that humans could use it. But if the AI is doggedly determined to make those paper clips, the use of an off-switch would prevent it from meeting its overarching goal, and therefore the proper thing to do would be for the AI to disable that kill-switch.

It might be one of the first and foremost acts that the AI would undertake, seeking to preserve its own “lifeblood” by disabling the off switch.

To try and get around this potential loophole, the stated principle in this provably beneficial AI framework indicates that the AI is not going to have that kind of self-preservation cooked into its inherent logic.

Presumably, if the AI is going to seek to maximize the realization of human values, it could be that the AI will itself realize that disabling the off-switch is not in keeping with the needs of society and thus will refrain from doing so. Furthermore, maybe the AI eventually realizes that it cannot achieve the realization of human values, or that it has begun to violate that key premise, and the AI might overtly turn itself off, viewing that its own “demise” is the best way to accede to human values.

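To see why that can fall out of the math, here is a toy expected-utility comparison in Python, loosely in the spirit of the off-switch analyses in the research literature; the probability distribution and payoffs are hypothetical choices for illustration:

```python
import numpy as np

# The AI is weighing an action whose value to the human, u, is uncertain.
# It can: (a) just act, (b) switch itself off, or (c) defer, i.e., propose
# the action and let the human decide, where the human vetoes it whenever
# u turns out to be negative.

rng = np.random.default_rng(1)

# The AI's uncertainty about how much the human actually values the action.
samples = rng.normal(loc=0.2, scale=1.0, size=100_000)

ev_act = samples.mean()                                 # act unilaterally
ev_off = 0.0                                            # switching off yields nothing
ev_defer = np.where(samples > 0, samples, 0.0).mean()   # human vetoes bad outcomes

print(f"E[utility | act]   = {ev_act:.3f}")
print(f"E[utility | off]   = {ev_off:.3f}")
print(f"E[utility | defer] = {ev_defer:.3f}")

# Because the AI cares only about human utility (principle 1) and remains
# uncertain about it (principle 2), deferring, which means leaving the
# off-switch in the human's hands, comes out at least as high as either
# alternative, so the AI gains nothing by disabling the switch.
```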

This does seem enterprising and perhaps gets us out of the AI doomsday predicaments.

Not everyone sees it that way.

One concern is that if the AI does not have a cornerstone of any semblance of self, it will potentially be readily swayed in directions that are not quite so desirable for humanity.

Essentially, without an ironclad truism at its deepest level about not harming humans, perhaps along the lines of Isaac Asimov’s famous first rule that a robot may not injure a human being or, through inaction, allow a human to come to harm, there is no failsafe preventing the AI from going off-kilter.

That being said, the counter-argument is that the core principles of this kind of provably beneficial AI are indicative that the AI will learn about human values, doing so by observation of human acts, and we might assume this includes that the AI will inevitably and inextricably discover on its own Asimov’s first rule, doing so by the mere act of observing human behavior.

Will it?

A counter to the counter-argument is that the AI might learn that humans do kill each other, somewhat routinely and with at times seemingly little regard for human life, out of which the AI might then divine that it is okay to harm or kill humans.

Since the AI lacks any ingrained precept that precludes harming humans, the AI will be open to whatever it seems to “learn” about humans, including the worst and exceedingly vile of acts.

Additionally, those that are critics of this variant of provably beneficial AI are apt to point out that the word “beneficial” is potentially being used in a misleading and confounding way.

It would seem that the core principles do not mean to achieve “beneficial” in the sense of arriving at a decidedly “good” result per se (in any concrete or absolute way); instead, beneficial is intended as relative to whatever humans happen to be exhibiting as seemingly so-called beneficial behavior. This might be construed as a relativistic ethics stance, and in that manner, it does not abide by any presumed everlasting or unequivocal rules of how humans ought to behave (even if they do not necessarily behave in such ways).

You can likely see that this topic can indubitably get immersed in and possibly mired into cornerstone philosophical and ethical foundations debates.

This also raises qualms about basing the AI on the behaviors of humans.

We all know that oftentimes humans say one thing and yet do another.

As such, one might construe that it is best to base the AI on what people do, rather than what they say since their actions presumably speak louder than their words. The problem with this viewpoint of humanity is that it seems to omit that words do matter and that inspection of behavior alone might be a rather narrow means of ascribing things like intent, which would seem to be an equally important element for consideration.

There is also the open question about which humans are to be observed.

Suppose the humans are part of a cult that is bent on death and destruction, and in which case, their “happiness” might be shaped around the beliefs that lead to those dastardly results, and the AI would dutifully “learn” those as the thing to maximize as human values.

And so on.

In short, as pointed out earlier, seeking to devise an approach for provably beneficial AI is a lot more challenging than meets the eye at first glance.

That being said, we should not cast aside the goal of finding a means to arrive at provably beneficial AI.

Keep on trucking, as they say.

Meanwhile, how might the concepts of provably beneficial AI be applied in a real-world context?

Consider the matter of AI-based true self-driving cars.

The Role of AI-Based Self-Driving Cars

True self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet even know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Provably Beneficial AI

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One hope for true self-driving cars is that they will mitigate the approximately 40,000 deaths and about 1.2 million injuries that occur due to human driving in the United States each year. The assumption is that since the AI won’t be drinking and driving, for example, it will not incur drunk-driving-related car crashes (which account for nearly a third of all driving fatalities).

Some offer the following “absurdity” instance for those that are considering the notion of provably beneficial AI as an approach based on observing human behavior.

Suppose AI observes the existing driving practices of humans. Undoubtedly, it will witness that humans crash into other cars, and presumably not know that it is due to being intoxicated (in that one-third or so of such instances).

Presumably, we as humans allow those humans to do that kind of driving and cause those kinds of deaths.

We must, therefore, be “satisfied” with the result, else why would we allow it to continue.

The AI then “learns” that it is okay to ram and kill other humans in such car crashes, and has no sense that it is due to drinking, nor that it is an undesirable act that humans would prefer had not taken place.

Would the AI be able to discern that this is not something it should be doing?

I realize that those of you in the provably beneficial AI camp will be chagrined at this kind of characterization, and indeed there are loopholes in the aforementioned logic, but the point generally is that these are quite complex matters and undoubtedly disconcerting in many ways.

Even the notion of having foundational precepts as absolutes is not so readily viable either.

Take as a quick example the assertion by some that an AI driving system ought to have an absolute rule, like Asimov’s, about not harming humans, and that this apparently resolves any possible misunderstanding or mushiness on the topic.

But, as I’ve pointed out in an analysis of a recent incident in which a man rammed his car into an active shooter, there are going to be circumstances whereby we might want an AI driving system to undertake harm, and so we cannot necessarily have one ironclad rule thereof.

Again, there is no free lunch, in any direction, that one takes on these matters.

Conclusion

There is no question that we could greatly benefit from a viable means to provably showcase that AI is beneficial.

If we cannot manage to show that the AI is beneficial, we could at least provide a mathematical proof that the AI will keep to its stated requirements (well, this opens another can of worms, but at least it sidesteps the notion of “beneficial,” rightly or wrongly so).

Imagine an AI-based self-driving car that was subjected to a provable safety theorem before getting onto the roadways, and that had something similar working in real-time as the vehicle navigated our public streets.

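To give a sense of what such a real-time check could look like, here is a minimal, hypothetical sketch of a runtime safety monitor in Python. It only permits a maneuver when a worst-case stopping-distance invariant holds; the formula is the standard kinematic stopping-distance bound, and the parameter values are illustrative rather than taken from any production system:

```python
def min_safe_gap(ego_speed_mps: float,
                 lead_speed_mps: float,
                 reaction_time_s: float = 1.0,
                 ego_max_brake: float = 6.0,    # m/s^2 the ego can guarantee
                 lead_max_brake: float = 8.0    # m/s^2 worst-case braking by the lead
                 ) -> float:
    """Smallest gap (meters) at which the ego can still avoid a rear-end
    collision even if the lead vehicle brakes as hard as possible."""
    # Ego travels at its current speed during the reaction time, then brakes to zero.
    ego_stop = ego_speed_mps * reaction_time_s + ego_speed_mps ** 2 / (2 * ego_max_brake)
    # The lead vehicle travels its own braking distance before stopping.
    lead_stop = lead_speed_mps ** 2 / (2 * lead_max_brake)
    return max(ego_stop - lead_stop, 0.0)

def maneuver_is_safe(current_gap_m: float,
                     ego_speed_mps: float,
                     lead_speed_mps: float) -> bool:
    """Runtime monitor: allow the planned maneuver only if the invariant holds."""
    return current_gap_m >= min_safe_gap(ego_speed_mps, lead_speed_mps)

# Example: ego at 25 m/s (about 56 mph), lead at 20 m/s.
print(maneuver_is_safe(30.0, 25.0, 20.0))  # False: too close, fall back to a cautious action
print(maneuver_is_safe(60.0, 25.0, 20.0))  # True: the invariant holds
```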

Researchers are trying to get there and we can all hope they keep trying.

At this juncture, one thing that is provably the case is that all of the upcoming AI that is rapidly emerging into society is going to be extraordinarily vexing and troublesome, and that’s something we can easily prove.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2020 Dr. Lance B. Eliot

Translated from: https://medium.com/@lance.eliot/assessing-the-feasibility-of-provably-beneficial-ai-plus-how-it-applies-to-self-driving-cars-dd1380c789d0
