There Will Be No Human-like AI Without Emotion

AI in the movies is almost always about Artificial General Intelligence (AGI): human-like intelligence. In the original Star Trek series, the character Spock has the almost-superpower of cold, unemotional logic, that is, of reasoning devoid of emotion. Hardly an episode goes by without another character remarking on Spock’s logic, unsullied by emotion. The idea is that “emotion” impairs “logic”.

Reason has obvious tactical benefits. It helped our ancestors find food, fight off predators and eventually invent Legos. Since the Enlightenment, when the power of the church started giving way to the power of science, reason has become a driving force in justification, persuasion and, often, oppression. Over the intervening years, the power of reason has been internalized into our speech and thought. To be unreasonable is to be irrational. In a “reasonable” society, emotion is the opposite of reason. Emotion is often considered unhinged and unreasonable. It’s a force to be tamed or tolerated. We are “vulnerable” when we are emotional and weak when we are vulnerable. Perhaps this partly explains why we design AI to predict and classify but not to “feel”.

Now let’s consider reason and emotion, however superficially, from the standpoint of cognitive psychology. Why? Because we can’t engineer human-like AI without an understanding of plain old human intelligence. Human intelligence is a huge field, so let’s consider a subset: ethics. Most of us have some concept of “right” and “wrong”. Sure, this is socially and culturally determined, but I believe that part of what gives rise to human ethics is empathy. I know there are technical definitions of empathy in the psychology literature, but I believe that at its core, empathy is an emotional response to another individual’s emotional state. A system of rules can simulate ethics in a simplistic and constrained environment. Without empathy, ethics falls back on rule-based systems, but these approaches are, in my opinion, too fragile to handle the complexities of real-world decision-making. Let’s look at one example.

One example of a rule-based approach to ethics is captured by the trolley-car problem (or autonomous-vehicle problem, if you prefer). In the trolley-car problem, a hypothetical trolley car has gone out of control and is hurtling toward a fork in the tracks. You (or the AI) can pull a lever directing the out-of-control trolley down one track or the other. On one track is one person or group of people, and on the other is another person or group. The agent must choose which track the trolley will take and hence who it will hit. Moral weights can be assigned to the different tracks, such as a “school bus full of children” or a “suicidal person”. These kinds of distractors change the dynamics of the decision (or, more likely, tease out our own social biases).

One rule-based approach to this could be Mill’s utilitarianism, which basically proposes optimizing for the greatest good for the greatest number of people. A developer could encode this as a series of if/then rules. Utilitarianism is a fragile approach to the problem because not every situation can be encoded in advance by the developer. Trying to encode moral judgments becomes complicated very quickly because real-world ethics almost always have significant gray areas in which clear-cut solutions are not available. Suppose we are discussing a self-driving car instead of a trolley. Should the car prioritize the safety of its occupants over everything else (as Mercedes’ self-driving cars will)? Or should the car optimize for cost? How would Asimov’s laws of robotics handle this situation? Perhaps it should aim for the obstacle with the best insurance coverage? The AI character “HAL” in the movie 2001 demonstrates a worst-case scenario in human-agent teaming when it murders its human colleagues because they are imperiling its mission.
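
To make the fragility concrete, here is a minimal, purely hypothetical sketch of what encoding utilitarian if/then rules for the trolley scenario might look like; all the type names, flags and rules are invented for illustration. The point is not the specific rules but that every branch must be anticipated by the developer, and anything unanticipated falls through to an arbitrary default.

```python
from dataclasses import dataclass


@dataclass
class Track:
    people: int           # how many people are on this track
    is_school_bus: bool   # crude "moral weight" flags the developer thought of
    is_suicidal: bool


def choose_track(left: Track, right: Track) -> str:
    """Pick 'left' or 'right' to send the trolley down, via hand-written rules."""
    # Rule 1: never hit a school bus if the other track is an option.
    if left.is_school_bus and not right.is_school_bus:
        return "right"
    if right.is_school_bus and not left.is_school_bus:
        return "left"
    # Rule 2: plain head-count utilitarianism: steer toward the smaller group.
    if left.people != right.people:
        return "right" if left.people > right.people else "left"
    # Rule 3: ...and here the rules run out. Anything the developer did not
    # anticipate (animals? property? uncertain counts?) gets an arbitrary default.
    return "left"


print(choose_track(Track(5, False, False), Track(1, False, True)))  # -> "right"
```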

So how might giving AI emotion help? Let me say explicitly that I’m not an expert in this, but I think that having some concept of “empathy” (i.e. emotional resonance with another) might provide a moral framework for decision making. Think of something more like transfer learning, in which an AI agent can generalize what it learned in one situation and apply aspects of it in another, similar situation. In short, some kind of fuzzy heuristic rather than a fragile, rigid set of rules.
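
As a hand-wavy illustration of what “fuzzy heuristics” could mean in practice, the sketch below scores candidate actions with a weighted sum of whatever moral features can be estimated, instead of walking a ladder of hard rules; every feature name and weight here is a made-up placeholder, not a proposal for real moral weights. The appeal is graceful degradation: a situation with missing or unfamiliar features still gets a graded judgment rather than falling off the end of an if/then chain.

```python
def moral_score(action_features: dict, weights: dict) -> float:
    """Weighted sum of estimated moral features; unknown features simply
    contribute nothing instead of breaking a rule chain."""
    return sum(weights.get(name, 0.0) * value
               for name, value in action_features.items())


# Invented weights and features, purely for illustration.
weights = {"expected_harm": -10.0, "lives_saved": 8.0, "consent_violated": -5.0}

candidate_actions = {
    "swerve": {"expected_harm": 0.2, "lives_saved": 4.0},
    "stay_on_course": {"expected_harm": 0.9, "lives_saved": 0.0,
                       "consent_violated": 1.0},
}

best = max(candidate_actions,
           key=lambda a: moral_score(candidate_actions[a], weights))
print(best)  # -> "swerve"
```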

Secondly, consider emotion as a way to assign value to data. A strong emotional response in a receiver could work alongside a reward function in a reinforcement learning system. Perhaps a certain action brings a high reward in the RL system but triggers a highly negative emotional response (e.g. HAL might accomplish its mission, but to do so it must kill its human team). This could help if the AI were using such a system to simulate, evaluate and choose from a number of possible actions.
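
A minimal sketch of that idea, with entirely made-up numbers and names: blend the task reward with a separate “empathy” signal when ranking simulated actions, so that an action like HAL’s (high task reward, strongly negative emotional evaluation) loses out to a less rewarding but less harmful one.

```python
def combined_value(task_reward: float, empathy_signal: float,
                   empathy_weight: float = 2.0) -> float:
    """Blend the RL reward with a simulated emotional/empathic evaluation.
    A negative empathy_signal means the simulated outcome distresses others."""
    return task_reward + empathy_weight * empathy_signal


# Candidate actions the agent has simulated, with invented numbers.
candidates = {
    "complete_mission_at_all_costs": {"task_reward": 10.0, "empathy": -9.0},
    "pause_and_consult_crew":        {"task_reward": 6.0,  "empathy": 0.5},
}

best_action = max(
    candidates,
    key=lambda a: combined_value(candidates[a]["task_reward"],
                                 candidates[a]["empathy"]),
)
print(best_action)  # -> "pause_and_consult_crew"
```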

Some of the first steps in these areas have been taken at places like the Affective Computing Group in MIT’s Media Lab. Aspects of natural language processing (NLP) like sentiment analysis are also relevant. But generally, affective computing is not a major part of AI research, both because it’s hard and because it’s culturally undervalued (in my opinion). It may be that pursuing human-like intelligence is a distraction on par with building an aircraft with flapping wings: real progress on flight didn’t happen until we abandoned biomimicry of birds and started building fixed-wing planes. Similarly, human-like AI might be a chimera, but I think that imbuing AI with “emotion” is crucial to safe and ethical AI of the future, regardless of what it looks like.
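
As a nod to the sentiment-analysis aspect mentioned above, here is a deliberately toy, lexicon-based scorer; the word lists and scoring are invented placeholders, nothing like a production NLP system, but they show the basic idea of mapping text to a crude affective value.

```python
# Tiny, invented word lists standing in for a real sentiment lexicon.
POSITIVE = {"good", "great", "love", "safe", "helpful"}
NEGATIVE = {"bad", "harm", "hate", "kill", "dangerous"}


def sentiment(text: str) -> float:
    """Return a crude score in [-1, 1]: +1 all positive words, -1 all negative."""
    words = text.lower().split()
    hits = [(1 if w in POSITIVE else -1) for w in words
            if w in POSITIVE or w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0


print(sentiment("I love this helpful assistant"))  # -> 1.0
print(sentiment("this plan could harm the crew"))  # -> -1.0
```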

Translated from: https://medium.com/swlh/there-will-be-no-human-like-ai-without-emotion-669efe71ce4a
