Will artificial intelligence evolve into a catastrophe? How can we regulate AI to prevent its misuse?

Author: Stuart Russell
Link: https://www.zhihu.com/question/59956936/answer/174233364
Source: Zhihu
Copyright belongs to the author. For commercial reproduction, please contact the author for authorization; for non-commercial reproduction, please credit the source.
 

From the angry robots in the 1920 drama R.U.R. to the lethal computer HAL 9000 in 2001: A Space Odyssey, dark myths have always accompanied the development of AI. Of course, artificial intelligence research progressed slowly in the past century, especially during the 1970s and 1980s (the so-called AI winters), which made such claims seem out of place. Recently, breakthroughs in machine learning and computing power have brought extra funding to AI research. Now that governments are using AI for surveillance and military applications, how can we prevent this?

AI has many wonderful benefits for humanity. Some are already available, such as search engines and machine translation; some are coming soon, such as self-driving cars. As AI increases in capability, more benefits will flow from it. But increasing capabilities also bring increasing possibilities for misuse:

  • We are already seeing uses of AI for intrusive surveillance, persuasion, and control of people, particularly by spreading false information. This is a very bad idea, and we may need to use new kinds of privacy technology and AI systems to defend against it. Possibly we need some new laws and better ways to catch perpetrators.
  • AI can be used to create fully autonomous weapons that can decide to kill without human supervision and responsibility. Because such weapons are scalable – that is, a small number of people can deploy millions or even billions of weapons – they are a new kind of weapon of mass destruction that would damage international security. Fortunately, the United Nations is working on a treaty to ban such weapons. I hope China will support this treaty process.
  • The use of AI and robotics on a large scale to replace humans in jobs, with no intelligent planning to prepare for such changes, could lead to major economic and social disruption. With appropriate foresight to develop new kinds of economic structures and education systems, the transition could be much less painful and the end-state much more desirable for everyone.
  • In the longer term, AI systems will become more capable than humans: they will be able to make better decisions in the real world. I am not worried about machines suddenly becoming conscious and hating humans. Instead, the concern is that we may give machines seemingly innocent objectives – such as “cure cancer” – and then find that the machine is using the whole human race as guinea pigs for cancer experiments. A very intelligent machine with the wrong objective is potentially very dangerous. At Berkeley we are developing ways to avoid this problem based on three simple principles:
  1. The machine’s only objective is to maximize the realization of human values.
  2. The machine is initially uncertain about what those values are.
  3. Human behavior provides information about human values.

There are some complex technical ideas underlying these simple principles, and our goal is to use those ideas to design AI systems that are provably beneficial for humans – that is, we can prove that humans will be better off with such machines than without them.
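
To make the three principles concrete, here is a minimal, hypothetical sketch in Python; it is not the actual Berkeley system, and all the action names and numbers are illustrative assumptions. The machine keeps a posterior over candidate human value functions (principle 2), treats observed human choices as Boltzmann-rational evidence about those values (principle 3), and picks the action that maximizes expected human value under that posterior (principle 1).

# A minimal, hypothetical sketch of the three principles (illustrative
# values and action names; this is not the actual Berkeley system).
import math

ACTIONS = ["research_treatment", "run_risky_trial", "do_nothing"]

# Candidate value functions the human might hold, scoring each action.
CANDIDATE_VALUES = {
    "cautious":   {"research_treatment": 0.8, "run_risky_trial": -1.0, "do_nothing": 0.0},
    "aggressive": {"research_treatment": 0.5, "run_risky_trial": 0.9,  "do_nothing": -0.5},
    "passive":    {"research_treatment": 0.1, "run_risky_trial": -0.8, "do_nothing": 0.3},
}

# Principle 2: start uncertain -- a uniform prior over candidate values.
posterior = {name: 1.0 / len(CANDIDATE_VALUES) for name in CANDIDATE_VALUES}

def likelihood(action, values, beta=2.0):
    # Boltzmann-rational human model: a human holding `values` chooses an
    # action with probability proportional to exp(beta * value).
    z = sum(math.exp(beta * values[a]) for a in ACTIONS)
    return math.exp(beta * values[action]) / z

def observe(action):
    # Principle 3: human behavior is evidence; do a Bayesian update.
    global posterior
    unnorm = {name: posterior[name] * likelihood(action, values)
              for name, values in CANDIDATE_VALUES.items()}
    total = sum(unnorm.values())
    posterior = {name: p / total for name, p in unnorm.items()}

def best_action():
    # Principle 1: maximize expected human value under current beliefs.
    def expected_value(a):
        return sum(posterior[name] * CANDIDATE_VALUES[name][a]
                   for name in posterior)
    return max(ACTIONS, key=expected_value)

# The machine watches the human twice decline risk in favor of careful
# research, then acts on its updated beliefs.
for human_choice in ["research_treatment", "research_treatment"]:
    observe(human_choice)
print(posterior)      # most probability mass on the "cautious" values
print(best_action())  # -> "research_treatment"

In this toy setting, watching the human repeatedly choose the safe option shifts the machine's belief toward the cautious value function, and its chosen action shifts accordingly: the machine defers to what it has learned about human values rather than optimizing a fixed, possibly wrong objective.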

 

Stuart Russell was a guest at Synced's (机器之心) GMIS 2017 conference. His Zhihu account was registered and is operated by Synced with his authorization; the text above is his original English answer.

