Don’t Fear the Super AI

Let me put it this way: I’m not worried about invisible people teleporting into my bedroom, so you shouldn’t be worried about an all-powerful Super AI.


This topic is a strange one, as the reason to write about it is to tell you that it doesn’t merit writing about. The discussion about Super AI is about as important as discussions like “what should we do when teleportation is readily available to everyone?” or “how can we pair the right to privacy with the right to become invisible?” As a matter of fact, talking about the mere concept of a Super AI is more harmful than you might think, because it gives the impression that “AI” already has a solid foundation and that the risks are looming large.


The idea of a Super AI is a fantasy extrapolated from science by Hollywood in the same way that teleportation and invisibility science fiction is conjured out of tidbits of science fact. These form fantastical problems for which no real solutions exist because the reality of them is so far removed from where we are today — exacerbated by the fact that those who imagine and write about the horrors that await rarely truly understand how these sciences work. These types of extrapolations make for great Michael Crichton books. The fictional, stretched-to-the-limit end-result — dino DNA to prehistoric behemoths romping around our cities — is simply much easier to envision than the vastly complex and limiting real science underlying it. T-rex, super AI, teleportation, and invisibility may all very well become reality one day, but we’re nowhere near them today. Not by any measure.


[Image: Should we be more afraid of AI or genetic research? I vote neither!]

First, let’s define what a Super AI, or superintelligence, or artificial superintelligence, is. We generally describe three types of AI: Narrow, General, and Super. A Narrow AI is an AI that is, as the name suggests, capable of performing a single or narrow set of tasks, such as detecting when a photo subject has their eyes open before snapping the shot. A General AI is, again as the name suggests, capable of performing more general tasks over a broad range of abilities. It would potentially be able to wake you up earlier in the morning, have your breakfast ready, and drive you to work despite unusual traffic, still getting you there on time. It would also be able to help you with your work, regardless of whether you’re a construction worker or a microbiologist. A General AI combines the abilities of many (or all) Narrow AIs. Then comes the Super AI. This AI surpasses all human comprehension and intellectual ability and is essentially capable of handling everything and anything. Because of this, it would operate outside of our boundaries and control, controlling every aspect of our lives and its own existence. It would control every bit of the internet, every satellite, every vehicle — everything — placing us at its mercy. We would simply have to hope we developed it with intentions and goals that prevent it from causing us harm — or stuffing us all in Matrix-style pods for our protection.


[Image: Super AIs would theoretically even be more creative than us]

By these definitions, what we have today are Narrow AIs, but I would even argue that AI as a general concept hasn’t even been developed yet. What would that be? Google Assistant? Siri? While I’m a huge fan of them and they’re fantastic technical showcases of today’s capabilities, they’re laughably stupid from an “intelligence” perspective and do nothing even comparable to thinking. The inclusion of the word intelligence makes these machine learning showcases sound like they have some understanding of what they’re doing — but they most certainly do not. These things just relentlessly try, with middling accuracy, to recognize bits and pieces of language from what you’re asking and look through an inherently limited number of data sources to get a rudimentarily coded response about the weather, your calendar, or, if they don’t have a proper source, a generic web search. They have an extremely limited number of very specifically pre-programmed functions, and their greatest difficulty (and achievement) is discerning which one you want. If you ask “What’s the temperature?”, do you want to know the current temperature or the definition of temperature? From weather.com or your thermostat? They don’t understand anything. The AI has no idea, so developers manually code in assumptions to help it along. These assistants are just tuned to trigger on certain keywords and phrases that tell them which lever to pull — in this case, it’s the weather.com lever — which is all programmed by hand to deliver the right number to your device to speak out loud.
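The lever-pulling fits in a few lines. This is a toy illustration, not any real assistant’s code; the intent table, keywords, and the `pick_lever` name are all invented for the example:

```python
import re

# Toy "intent" table: each lever is just a bag of trigger keywords.
INTENTS = {
    "weather": ["temperature", "weather", "rain", "forecast"],
    "calendar": ["meeting", "appointment", "schedule"],
}

def pick_lever(utterance: str) -> str:
    """Return which hand-coded lever the keywords point at."""
    words = re.findall(r"[a-z']+", utterance.lower())
    for intent, keywords in INTENTS.items():
        if any(word in keywords for word in words):
            return intent       # first keyword hit wins
    return "web_search"         # no lever matched: generic fallback

print(pick_lever("What's the temperature?"))  # weather
```

There is no understanding anywhere in this: the ambiguity (weather.com or your thermostat?) can only be resolved by stacking more hand-coded assumptions on top.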


[Image: Today’s assistants strive to be General AIs.]

The more recent and impressively fancy GPT-3 might make you think we’re further ahead than that, but that too is misleading. GPT-3 is truly mesmerizing in what it’s capable of: taking simple prompts and generating entire works of poetry or prose that are mostly coherent and could in some cases even be mistaken for human writing. It’s a great advance in machine learning. The caveat is that it’s not doing anything we couldn’t already do before. The biggest differentiator is that the company behind it, OpenAI, simply threw ten times more data and money at it than had ever been done before (reportedly on the order of tens of millions of dollars just to train it), so it simply does its work more accurately. Its function remains simple: given a word, it predicts what the next word will be, taking the previous context into account. It’s a wonderful word predictor that has simply learned the most probable order of words from millions of sources. While it coherently glues words together, it has no clue what it’s writing about, but to us, it looks like it has become an expert on every topic.
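The objective can be shown with a deliberately tiny stand-in: a bigram counter that predicts the most probable next word from what it has seen. GPT-3 uses a vast neural network instead of a lookup table, but the task, predicting the next word from context, has the same shape. The corpus and names here are invented for the sketch:

```python
from collections import Counter, defaultdict

# Toy corpus; GPT-3 was trained on hundreds of billions of words.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Predict the most frequent follower of `prev` in the corpus."""
    return follows[prev].most_common(1)[0][0]

print(next_word("the"))  # cat
```

The predictor has no idea what a cat is; it has only counted that “cat” tends to follow “the.”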


[Image: Does constructing sentences equal intelligence?]

What we have today are not AIs. These are implementations of machine learning and all machine learning does is take a big, lumbering algorithm and slowly and painstakingly tweak millions of values and parameters until you’re satisfied that when you show it a cat, it says it’s a cat 95% of the time. You could probably get it to the aspirational “five 9s” of accuracy (99.999%) but that would take a hell of a lot of photos — and outside of academia you probably don’t need some automated way to recognize cats that urgently or precisely. Or you can make it more complex and have it crash a car around a virtual street tens of millions of times, scolding “NO!” and praising “that’s better” until it finally understands how to use its sensors and the steering wheel to parallel park without hitting anything 99.999% of the time.


These systems do not understand anything — even what they’re doing. They make no actual decisions. They take a predefined input and, after enormous amounts of trial-and-error-style learning with punishment and rewards through that lumbering algorithm, formulate a predefined output to some degree of certainty. The trial-and-error prerequisite means that any task must have clearly defined, measurable right and wrong outcomes, or some degree in between.
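That tweak-until-rewarded loop can be caricatured in a dozen lines: random hill-climbing on a single parameter of y = w·x, keeping any nudge that lowers the error. Real systems use gradient descent over millions of parameters, but the flavor (blind trial, a numeric reward signal, no understanding) is the same. All names and numbers here are made up for the sketch:

```python
import random

random.seed(0)
data = [(1, 2.0), (2, 4.0), (3, 6.0)]  # hidden rule: y = 2x

def error(w: float) -> float:
    """Punishment signal: squared error of the guess y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data)

w = 0.0
for _ in range(1000):
    candidate = w + random.uniform(-0.1, 0.1)  # blind trial
    if error(candidate) < error(w):            # reward: keep the nudge
        w = candidate

print(round(w, 3))  # ends up close to 2.0
```

Note that the measurable right-and-wrong requirement is baked in: without a computable `error`, the loop has nothing to climb toward.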


A “Super AI,” on the other hand, would need to understand things and make decisions that have no possibility for prior learning. What is the outcome of a war between Mexico and Canada? Who knows? Maybe China wins it. There’s no history to base decisions on and no experience to learn from, let alone millions of examples of war to churn and study (fortunately!). You can’t “play” it against itself Hollywood “WarGames” style either, because that means you’d have to give it clear rules where there are none and provide a plethora of information that you don’t have. (By the way, what a beautiful bit of foresight WarGames is, showcasing a modern adversarial machine learning model…)


[Image: The WarGames AI calculated that there was no way to win a nuclear war. Whew!]

“AI” players in games can only form seemingly ingenious strategies by analyzing millions of games and endlessly playing against themselves in settings with very tight sets of rules. These strategies aren’t ingenious at all. A singular, brilliant move you might admire was simply previously discovered as a viable move when it played itself at game #2,003,509 and it was “memorized” in that lumbering algorithm through a little thumbs-up on the pathway that got it there — but only after over 2 million failures to discover it. Furthermore, if there’s a mistake in the game or a rule is lacking, that’s going to form an integral part of the “AI”s strategy because it doesn’t know any better. It will mercilessly cheat in our eyes because we didn’t truly give it the boundaries we play by. It just bluntly hammers away at all the (largely silly) options until it randomly lands on one that works. It boils down to automated cherry-picking of results and after millions of ridiculous failures, it may finally show you one that looks genius — while it most certainly is not.


[Image: OpenAI 5, defeating professional players in Dota 2. Credit: The Verge]

It’s the monkey typing Shakespeare — you know the saying — and that’s actually very close to what machine learning is, with the exception that you give the monkey a banana every time it gets a little closer. Typing a full word, putting two words together, and eventually compiling a whole sentence or verse earns a reward. After 10 million years of typing and finally hammering out one of Shakespeare’s works, I’m not going to laud that monkey as being hyper-intelligent or even understanding Shakespeare’s basic ideologies, and I’m certainly not afraid of that monkey grabbing a gun and bringing about the Planet of the Apes. It just knows the correct sequence of key presses that result in bananas, and a machine learning system just tweaks an algorithm that results in its version of a banana: the lowest error.
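The rewarded monkey fits in a short sketch: flip random letters, and hand over a banana (keep the change) whenever the attempt gets closer to the target line. The target string and the `bananas` scoring function are invented for the example:

```python
import random
import string

random.seed(42)
TARGET = "to be or not to be"
ALPHABET = string.ascii_lowercase + " "

def bananas(text: str) -> int:
    """Reward: how many characters already match the target."""
    return sum(a == b for a, b in zip(text, TARGET))

# Start from pure random mashing, then keep any rewarded flip.
attempt = "".join(random.choice(ALPHABET) for _ in TARGET)
tries = 0
while bananas(attempt) < len(TARGET):
    i = random.randrange(len(TARGET))
    mutated = attempt[:i] + random.choice(ALPHABET) + attempt[i + 1:]
    if bananas(mutated) >= bananas(attempt):  # banana earned: keep it
        attempt = mutated
    tries += 1

print(f"{attempt!r} after {tries} tries")
```

The monkey never learns English; it only learns which keypresses produce bananas, which is precisely the “lowest error” a machine learning system chases.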


[Image: A monkey getting started on his million-year task. Credit: New York Zoological Society]

We can get our machine learning systems to recognize cats (or, more relevantly, tumors, cars, and pedestrians), to beat people in games, and to set reminders, but anything past simple automation and pattern recognition is pure science fiction. Advances are certainly being made, and the advent of quantum computing will help propel machine learning to tackle immensely larger problems with more data, but right now it’s far more useful to consider how to make current machine learning approaches better and more useful, rather than worrying about some fictional all-in-one cat-spotting AI becoming too powerful.


The only realistic worry about AI, if you’re in the business of worrying, is the polar opposite of “Super AI.” Instead of considering the threat of AI intentionally harming people, worry about people harming people by overestimating AI as Super AI alarmists do. Putting too much faith in what machine learning is capable of puts people’s lives at risk by inevitably placing a big, lumbering algorithm where there absolutely shouldn’t be one. Current machine learning capabilities certainly include operating a drawbridge in clear weather, but we’re nowhere near trusting them to do automated open-heart surgery. That is how AI will most assuredly, and in the much nearer future, cause harm and cost lives.


[Image: An AI in charge of a deadly missile system]

But I’m not in the business of worrying. I’m a dreamer and a futurist. Machine learning is a tool like a pen, a power drill, a 3D printer, or any other and can be wielded for great good. Sure, you can 3D print a gun, but you can also 3D print millions of ear-saving, face-mask clips for healthcare workers. So, despite what it may sound like, I’m an enormous fan of machine learning and all its potential. I’m simply telling you not to be worried about it. As a matter of fact, I work at Pixplicity, where machine learning projects are a part of daily life, developing smart speaker systems and numerous applications that can, for example, change the time of day in photos, create 3D models from 2D images or generate new speech in a given speaker’s voice. We’re in the early stages of AI and already there are endless possibilities. There is so much potential in the current capabilities that it’s overwhelming and difficult to choose a direction because they’re all so exciting. In the words of one of my favorite machine learning YouTube channels, Two Minute Papers: what a time to be alive!


Translated from: https://medium.com/pixplicity/dont-fear-the-super-ai-2356efcff4b6
