The Evolution of AI: Can Morality Be Programmed?

Translator's Note

I came across an article on a question in artificial intelligence that I follow closely. Someone had already translated it, but I wasn't satisfied with that version, so I rolled up my sleeves and did my own.

There are still plenty of rough spots in the translation; I'll leave it as is for now and revise it later.

Original article: http://futurism.com/the-evolution-of-ai-can-morality-be-programmed/


-----


IN BRIEF


Our artificial intelligence systems are advancing at a remarkable rate, and though it will be some time before we have human-like synthetic intelligence, it makes sense to begin working on programming morality now. And researchers at Duke University are already well on their way.



Recent advances in artificial intelligence have made it clear that our computers need to have a moral code. Disagree? Consider this: A car is driving down the road when a child on a bicycle suddenly swerves in front of it. Does the car swerve into an oncoming lane, hitting another car that is already there? Does the car swerve off the road and hit a tree? Does it continue forward and hit the child?


Each solution comes with a problem: It could result in death.

It’s an unfortunate scenario, but humans face such scenarios every day, and if an autonomous car is the one in control, it needs to be able to make this choice. And that means that we need to figure out how to program morality into our computers.


Vincent Conitzer, a Professor of Computer Science at Duke University, and co-investigator Walter Sinnott-Armstrong from Duke Philosophy, recently received a grant from the Future of Life Institute in order to try and figure out just how we can make an advanced AI that is able to make moral judgments…and act on them.


MAKING MORALITY


At first glance, the goal seems simple enough—make an AI that behaves in a way that is ethically responsible; however, it’s far more complicated than it initially seems, as there are an amazing number of factors that come into play. As Conitzer’s project outlines, “moral judgments are affected by rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and other morally relevant features. These diverse factors have not yet been built into AI systems.”


That’s what we’re trying to do now.


In a recent interview with Futurism, Conitzer clarified that, while the public may be concerned about ensuring that rogue AI don’t decide to wipe out humanity, such a thing really isn’t a viable threat at the present time (and it won’t be for some time). As a result, his team isn’t concerned with preventing a global-robotic-apocalypse by making selfless AI that adore humanity. Rather, on a much more basic level, they are focused on ensuring that our artificial intelligence systems are able to make the hard, moral choices that humans make on a daily basis.



So, how do you make an AI that is able to make a difficult moral decision?


Conitzer explains that, to reach their goal, the team is following a two-path process: having people make ethical choices in order to find patterns and then figuring out how that can be translated into an artificial intelligence. He clarifies, “what we’re working on right now is actually having people make ethical decisions, or state what decision they would make in a given situation, and then we use machine learning to try to identify what the general pattern is and determine the extent that we could reproduce those kind of decisions.”


In short, the team is trying to find the patterns in our moral choices and translate this pattern into AI systems. Conitzer notes that, on a basic level, it’s all about making predictions regarding what a human would do in a given situation, “if we can become very good at predicting what kind of decisions people make in these kind of ethical circumstances, well then, we could make those decisions ourselves in the form of the computer program.”
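
To make the approach concrete, here is a minimal sketch of "learning the pattern" from surveyed moral judgments. The article does not describe the team's actual pipeline, so everything below (the dilemma features, the survey data, and the choice of a scikit-learn decision tree) is invented purely for illustration.

```python
# A minimal sketch of the idea described above: collect human judgments on
# dilemmas, then let a model learn the general pattern. All features, data,
# and labels here are hypothetical; the Duke team's real setup is not shown
# in the article.
from sklearn.tree import DecisionTreeClassifier

# Each dilemma is encoded as simple numeric features, e.g.:
# [people_at_risk_if_swerve, people_at_risk_if_straight, victim_is_child, promise_broken]
dilemmas = [
    [1, 1, 1, 0],
    [2, 1, 0, 0],
    [0, 1, 1, 1],
    [3, 1, 0, 0],
    [0, 2, 0, 0],
]
# What surveyed humans said they would do: 0 = go straight, 1 = swerve
human_choices = [1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=3)
model.fit(dilemmas, human_choices)

# Predict the "human-like" choice for an unseen dilemma
new_dilemma = [[2, 1, 1, 0]]
print(model.predict(new_dilemma))  # reproduces the learned pattern
```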

However, one major problem with this is, of course, that our moral judgments are not objective—they are neither timeless nor universal.


Conitzer articulates the problem by looking to previous decades, “if we did the same ethical tests a hundred years ago, the decisions that we would get from people would be much more racist, sexist, and all kinds of other things that we wouldn’t see as ‘good’ now. Similarly, right now, maybe our moral development hasn’t come to its apex, and a hundred years from now people might feel that some of the things we do right now, like how we treat animals, is completely immoral. So there’s kind of a risk of bias and with getting stuck at whatever our current level of moral development is.”



And of course, there is the aforementioned problem regarding how complex morality is. “Pure altruism, that’s very easy to address in game theory, but maybe you feel like you owe me something based on previous actions. That’s missing from the game theory literature, and so that’s something that we’re also thinking about a lot—how can you make what game theory calls ‘solution concepts’ incorporate this aspect? How can you compute these things?”
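
As a toy illustration of the gap Conitzer points to (this is our sketch, not anything from the article), the snippet below starts from a purely self-interested choice and then adds a reciprocity term for a debt owed from past interactions; the payoffs and the debt weight are made up.

```python
# Adjust a player's utility with a reciprocity term for a "debt" owed from
# past interactions, and watch the chosen action change. Numbers are invented.

# Player A chooses an action; payoffs are (A's utility, B's utility).
payoffs = {
    "keep":  (5, 0),   # A keeps a resource
    "share": (3, 4),   # A shares it with B
}

def best_action(debt_to_b: float) -> str:
    """Pick A's action under a utility that weighs B's payoff by the debt A owes."""
    def adjusted(action):
        a_util, b_util = payoffs[action]
        return a_util + debt_to_b * b_util  # reciprocity-weighted utility
    return max(payoffs, key=adjusted)

print(best_action(0.0))  # "keep": the purely self-interested choice
print(best_action(1.0))  # "share": a past obligation tips the decision
```

With no debt the self-interested action wins; once the past obligation is weighed in, the decision flips, which is exactly the kind of history-dependence that standard solution concepts leave out.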



To solve these problems, and to help figure out exactly how morality functions and can (hopefully) be programmed into an AI, the team is combining methods from computer science, philosophy, economics, and psychology. “That’s, in a nutshell, what our project is about,” Conitzer asserts.


But what about those sentient AI? When will we need to start worrying about them and discussing how they should be regulated?




THE HUMAN-LIKE AI


According to Conitzer, human-like artificial intelligence won’t be around for some time yet (so yay! No Terminator-styled apocalypse…at least for the next few years).


“Recently, there have been a number of steps towards such a system, and I think there have been a lot of surprising advances… but I think having something like a ‘true AI,’ one that’s really as flexible, able to abstract, and do all these things that humans do so easily, I think we’re still quite far away from that,” Conitzer asserts.



True, we can program systems to do a lot of things that humans do well, but there are some things that are exceedingly complex and hard to translate into a pattern that computers can recognize and learn from (which is ultimately the basis of all AI).



“What came out of early AI research, the first couple decades of AI research, was the fact that certain things that we had thought of as being real benchmarks for intelligence, like being able to play chess well, were actually quite accessible to computers. It was not easy to write and create a chess-playing program, but it was doable.”
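
For readers curious what made chess "doable," the core mechanism is game-tree search. A real chess engine adds evaluation functions, pruning, and opening books; the sketch below shows plain minimax on tic-tac-toe instead, where the full tree is small enough to search exactly.

```python
# Game-tree search is the core idea behind classic chess programs. This is
# plain minimax on tic-tac-toe, kept self-contained so the whole tree can
# be explored without any of a real engine's machinery.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score for 'X', best move) with both sides playing perfectly."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        results.append((score, m))
    return (max if player == "X" else min)(results)

board = list("X O  O  X")  # an arbitrary mid-game position, X to move
print(minimax(board, "X"))  # (1, 4): X can force a win by taking the center
```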


Indeed, today, we have computers that are able to beat the best players in the world in a host of games—Chess and Go, for example.


But Conitzer clarifies that, as it turns out, playing games isn’t exactly a good measure of human-like intelligence. Or at least, there is a lot more to the human mind. “Meanwhile, we learned that other problems that were very simple for people were actually quite hard for computers, or to program computers to do. For example, recognizing your grandmother in a crowd. You could do that quite easily, but it’s actually very difficult to program a computer to recognize things that well.”
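
Modern systems attack this kind of recognition not by hand-coding rules but by learning from millions of examples. As a rough illustration (assuming torchvision 0.13 or newer is installed; "photo.jpg" is a placeholder path, and the model labels ImageNet object categories, not grandmothers):

```python
# Recognition is learned from data rather than hand-programmed: load a
# network pretrained on ImageNet and classify one image.
import torch
from PIL import Image
from torchvision.models import ResNet18_Weights, resnet18

weights = ResNet18_Weights.IMAGENET1K_V1
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()  # the resize/crop/normalize pipeline the model expects

image = Image.open("photo.jpg").convert("RGB")  # placeholder path
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(batch)
top = logits.softmax(dim=1).argmax().item()
print(weights.meta["categories"][top])  # the predicted ImageNet label
```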


Since the early days of AI research, we have made computers that are able to recognize and identify specific images. However, to sum up the main point, it is remarkably difficult to program a system that is able to do all of the things that humans can do, which is why it will be some time before we have a ‘true AI.’



Yet, Conitzer asserts that now is the time to start considering the rules we will use to govern such intelligences. “It may be quite a bit further out, but to computer scientists, that means maybe just on the order of decades, and it definitely makes sense to try to think about these things a little bit ahead.” And he notes that, even though we don’t have any human-like robots just yet, our intelligence systems are already making moral choices and could, potentially, save or end lives.


“Very often, many of these decisions that they make do impact people and we may need to make decisions that will typically be considered to be a morally loaded decision. And a standard example is a self-driving car that has to decide to either go straight and crash into the car ahead of it or veer off and maybe hurt some pedestrian. How do you make those trade-offs? And that I think is something we can really make some progress on. This doesn’t require superintelligent AI, simple programs can just make these kind of trade-offs in various ways.”
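
One simple way to encode such a trade-off, offered here as our own illustration rather than Conitzer's, is to score each candidate maneuver by expected harm and pick the minimum. Every probability and severity weight below is an invented placeholder; deciding what those numbers should be is precisely the moral question the article is about.

```python
# Score each maneuver by expected harm (probability times severity) and
# pick the minimum. All numbers are made-up placeholders.

maneuvers = {
    #             P(collision), harm weight if it happens
    "straight":   (0.9, 10.0),  # likely hits the car ahead
    "veer_left":  (0.5, 8.0),   # into the oncoming lane
    "veer_right": (0.3, 9.0),   # may hurt a pedestrian
    "brake_hard": (0.6, 4.0),   # lower-speed impact
}

def expected_harm(option):
    p, severity = maneuvers[option]
    return p * severity

best = min(maneuvers, key=expected_harm)
print(best, expected_harm(best))  # "brake_hard" 2.4, given these made-up numbers
```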



But of course, knowing what decision to make will first require knowing exactly how our morality operates (or at least having a fairly good idea). From there, we can begin to program it, and that’s what Conitzer and his team are hoping to do.


So welcome to the dawn of moral robots.

This interview has been edited for brevity and clarity.

-----

Putting words on the page is hard work; let's keep encouraging each other!



