The Emerging Challenges with AI Acceptance No One Is Talking About

So far, artificial intelligence and intelligent robots have been judged by the value they provide. We ‘recruit’ them to benefit from process efficiency, decreased HR costs, fewer task-handling errors, and most importantly, a productivity increase of approximately 40%.

Advancing with AI and creating a human-robot symbiotic society poses great challenges for the current social structure, some of which our society is not yet ready to face. Will we treat AI ethically? What does ethical treatment entail? Should such AI software or cognitive machines be given legal rights equal to those we humans have been granted?

If we reject cognitive AI robots — what will the costs for our society be?

We treat AI robots as servants, under rule utilitarianism.

Thus far, the relationship between humans and robots (regardless of their degree of cognition) can be described as one in which we humans are dominant and the machines are obedient.

The same can be said of our relationship with animals, which has yet to evolve since ancient times, even though collective intelligence is rising. We have yet to reach a stage where all human beings treat other beings ethically, even though there is scientific evidence that the experience of pain is alike across cognitive species. In his book, Patterson states that modern society’s treatment of animals is comparable to the way humans treated each other during the Holocaust.

A big part of society sees a benefit in this relationship (e.g. as a source of food, testing, clothing, companionship, or entertainment), which encourages ignorance of the abuse involved. The relationship is also guided by a sense of human supremacy with which we have been conditioned.

“In the course of his development towards culture man acquired a dominating position over his fellow-creatures in the animal kingdom. Not content with this supremacy, however, he began to place a gulf between his nature and theirs. He denied the possession of reason to them, and to himself he attributed an immortal soul, and made claims to a divine descent, which permitted him to annihilate the bond of community between him and the animal kingdom.” — Freud, 1917

On religious grounds, humans treat other humans similarly. For example, in radical Islam a woman is viewed analogously, i.e. purposed to serve the man, and referred to as a creature with ‘no immortal soul’.

It was no different for the ancient (and not-so-ancient) conquerors of the world, who perceived other societies as less advanced, minimizing the pain those societies felt and how it was perceived. If we examine the dynamics between Americans and Native Americans, Americans and African-American slaves, Britons and Indians, and so on, we uncover relationships with a prevailing power of one side over the other, all of which were considered ethical at the time, with some injustices lasting to this day.

Humanity has historically made decisions to benefit the immediate society as opposed to the global one or any other species.

As such, it is fair to believe that society and humanity will continue using and benefitting from machines, even as we slowly witness the development of their intelligence in front of our eyes. Even as scientists demonstrate time and time again that AI machines have developed cognition similar to that of humans and other species, society will most likely dismiss it, giving rise to new forms of speciesism and tribalism among us humans.

This moral judgment is analogous to rule utilitarianism, as it considers only the benefit of those in a dominant role at a given time, with any change to the status quo perceived by those in power as an inconvenience that could end up causing their suffering. This becomes a justification for their hedonistic approach towards others, as examined in detail in Mill’s research on utilitarianism, published in 1863.

Since ancient times, dominance has been perceived as power. Although society advances, superiority has continued to be associated with positive emotion. This, according to research in political science, is predetermined by humanity’s experience of an everlasting competition in the environment. It is precisely this emotional intoxication that dulls the negativity we would otherwise see, numbing the signal of perceived harm and moral violation and resulting in compliance with utilitarianism.

Accepting the benefits of self-aware AI will also require granting liberties.

In principle, when cognitive technological beings enter our world, they will be programmed by their creators. They will likely be re-programmed in the event of a malfunction or a lack of social adaptivity. Thus, a reward-punishment principle will be used to promote their cooperation, with the reward being ‘life’ (or prolonged existence) and the punishment being death.

Considering this is a life-or-death extreme, is it unreasonable to assume that a reward for compliance with our demands might soon manifest as working rights, legal rights, citizenship, or an AI union for ethical treatment during service? Even if this takes decades, as in the case of the abolition of slavery.

What would non-compliance look like? How would both sides react — artificial intelligence and humanity?

Based on studies in game theory, it is widely affirmed that threats of punishment or negative incentives result in a lack of cooperation due to self-serving bias. Assuming AI is the output of our most advanced scientists and can extend its learning capacity on its own, asserting superiority by threatening and creating fear might cost us our societal dominance.
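The dynamic the game-theoretic studies describe can be sketched with a toy iterated prisoner's dilemma. This is a minimal illustration under invented assumptions, not a model from any cited study: a ‘grim’ punisher that retaliates permanently is paired with a mildly self-serving agent that occasionally defects, and the pair's joint payoff is compared against a trusting baseline. All strategy names and parameters here are illustrative.

```python
import random

# Classic prisoner's dilemma payoffs: (row, column) for moves (C)ooperate / (D)efect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def grim(my_hist, their_hist, rng):
    """Punisher: cooperates until the other side defects once, then defects forever."""
    return "D" if "D" in their_hist else "C"

def trusting(my_hist, their_hist, rng):
    """Unconditional cooperator."""
    return "C"

def self_serving(my_hist, their_hist, rng):
    """Mostly cooperates, but defects 5% of the time out of self-interest."""
    return "D" if rng.random() < 0.05 else "C"

def joint_payoff(strat_a, strat_b, rounds=300, seed=0):
    """Total payoff earned by both sides together over the whole match."""
    rng = random.Random(seed)  # same seed => identical defection pattern across runs
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b, rng), strat_b(hist_b, hist_a, rng)
        pa, pb = PAYOFF[(a, b)]
        total += pa + pb
        hist_a.append(a)
        hist_b.append(b)
    return total

joint_threat = joint_payoff(grim, self_serving)     # punishment regime
joint_trust = joint_payoff(trusting, self_serving)  # tolerant baseline
```

Once the self-serving agent slips up, the punisher withholds cooperation for the rest of the match, so every remaining round earns the pair strictly less than the tolerant baseline does with the exact same defection pattern.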

Let’s consider this: a political candidate advocates the termination of AI and calls for a halt to current technological development. If AI robots are self-aware and sentient, putting the brakes on their development for our arbitrary sense of fulfillment through an assertion of dominance will be the final straw. Especially considering society’s current reliance on technology and AI, even though the technology is still in its infancy relative to what it is projected to become.

What we have been brought to believe is that AI will develop into general intelligence, which will ultimately gain control over our lives; what we fail to realize is that this is already the case. AI will survive because of its integration into our society, in aspects such as government, healthcare, and social structure, even if it is not (yet) aware of this role. Our addiction to and dependence on it urge us to protect it.

Cognition is just another step in the development of AI, which is, as many experts say — ultimately inevitable.

We must then battle the moral dilemma of justifying our behavior to ourselves.

If we successfully create a cognitive species that avoids the traps of ethics and reason that we have fallen into in the past, and that succeeds in being entirely beneficial and harmless to others — would it be ethical for us to keep dominating through force and oppression, or should we step aside?

Acceptance in society will be challenging, but so will programming.

Let’s consider two scenarios of encoding ethics in AI.

  • Scenario 1:

AI machines are programmed to be altruists, meaning they are unconditionally kind and their behavior does not depend on the actions of others. Studies show this behavior is observed only in extreme situations and can shift towards egoism and self-preservation as a way to reduce personal distress resulting from experienced negative states. This means the ethical compass of AI machines could potentially swing, with their abilities being used to harm humanity.

  • Scenario 2:

AI machines are programmed with strong reciprocity, meaning they are actors whose kindness is conditional on the perceived kindness of others, and punishment follows when unkindness is detected. I am sure we can all imagine the result, given the above discussion of humanity’s dominant and unkind behavior. Interestingly, however, a 2002 behavioral study shows that having such actors in society limits the degree of non-cooperation, thus acting as a constraint on socially deviant behavior. Isn’t it only fair that we start being treated the way we treat others? Isn’t that what we, as a society, have preached for so long — reciprocity?
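The contrast between the two scenarios can be sketched in a few lines. A hypothetical illustration, not taken from the cited studies: an unconditional altruist (Scenario 1) and a strong reciprocator implemented as simple tit-for-tat (Scenario 2) each face a pure exploiter, and we tally what exploitation earns against each.

```python
# Prisoner's dilemma payoffs from the always-defecting exploiter's point of view:
# defecting against a cooperator pays 5; mutual defection pays only 1.
EXPLOITER_PAYOFF = {("D", "C"): 5, ("D", "D"): 1}

def exploiter_earnings(victim, rounds=100):
    """Total payoff of an always-defect agent against a given strategy."""
    history = []  # the exploiter's past moves, visible to the victim
    total = 0
    for _ in range(rounds):
        response = victim(history)
        total += EXPLOITER_PAYOFF[("D", response)]
        history.append("D")
    return total

def altruist(their_hist):
    """Scenario 1: unconditionally kind."""
    return "C"

def reciprocator(their_hist):
    """Scenario 2: kind at first, then mirrors the other side's last move."""
    return their_hist[-1] if their_hist else "C"

vs_altruist = exploiter_earnings(altruist)          # exploitation pays every round
vs_reciprocator = exploiter_earnings(reciprocator)  # pays once, then is punished
```

Against the altruist, the exploiter collects the maximum payoff in every round; against the reciprocator, exploitation pays exactly once before punishment caps all further gains, which is the constraint on non-cooperation the 2002 study points to.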

‘Do as I say, not as I do’ applies greatly to the way we approach the programming of AI machines, as well as their integration into society, with both (albeit contrasting) scenarios unsuited to our needs.

As a result, AI developers are working on hybrid approaches to programming AI machines: approaches that involve different types of ethical theories in response processing, combined with the learning and self-improvement capabilities of cognitive software. The hope is that, as the machines gain cognition over time, they will learn the theory behind advances in social behavior and moral codes of conduct.
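One common way to structure such a hybrid is to let a deontological layer veto actions outright while a utilitarian layer ranks whatever survives the veto. The sketch below is an illustrative assumption about that architecture, not a description of any specific system; the action names and welfare scores are invented.

```python
# Hypothetical hybrid ethics arbiter: hard rules veto, welfare scores rank.
FORBIDDEN = {"deceive_user", "cause_harm"}  # deontological constraints (illustrative)

def choose_action(candidates):
    """candidates: list of (action_name, expected_welfare) pairs.

    Returns the highest-welfare permitted action, or None when every
    option is vetoed (i.e. defer to a human operator)."""
    permitted = [(name, welfare) for name, welfare in candidates
                 if name not in FORBIDDEN]
    if not permitted:
        return None
    return max(permitted, key=lambda pair: pair[1])[0]

# A purely utilitarian layer would pick "cause_harm" (highest welfare, 9.0);
# the deontological veto forces the arbiter to "warn_user" instead.
decision = choose_action([("cause_harm", 9.0),
                          ("warn_user", 4.0),
                          ("do_nothing", 1.0)])
```

The design choice this sketch highlights is exactly the tension the text describes: neither layer alone matches our expectations, so the rules constrain the optimizer, and a learning component would be expected to revise both over time.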

Undoubtedly, this illustrates the need for us to understand the challenges of AI implementing and adhering to ethical theories created by humanity, based on its understanding of sense and self, and for us to become accepting of the unique ethical considerations that intelligent computers will have. For this purpose, Stocker, in his 1976 work, aptly refers to modern ethical theories as schizophrenic.

An interesting observation was made in a 1998 study involving AI agents programmed with different ethical codes. The experiment shows the natural formation of social alliances among those who think alike (regardless of the ethical reasoning they support). It also demonstrates that in times of resource scarcity, cooperative thinkers had a competitive advantage.

This urges a discussion, today, while we are still at the preliminary stages of AI market distribution, of the motivational factors that support their desire to comply with our demands.

If we act like social predators, entering the human-robot relationship with the singular goal of obtaining an intelligent, physical, and emotional slave, then it is not far from the imagination to fear the day of an AI dystopia.

In this line of thought, one might argue that giving AI humanoid robots legal rights is, in a way, a method to publicly control and execute punishment when accepted social conduct is violated. But will any punishment be applied to the ‘owner’ of an AI robot if a legal right of the AI is broken and unethical treatment is identified?

We must urgently critique the current social structure, define humanity, and consider beings other than ourselves.

Like our ethics, our understanding of humanity is very inconsistent. Concerns about the destructive behavior of cognitive technologies arise from the historical view of the progression and understanding of human behavior and ethical theory, as identified in the work of scholars.

Also interesting is the basis on which we have so far granted, and will continue to grant, citizenship to representatives of this new kind. So far, a human-like appearance plays a significant role in acceptance. Experiments show that less sophisticated creatures, such as rats, accept a robot if it complies with societal roles and meets the subject’s requests, which provides an interesting premise for discussing the factors that will play a role in a human-intelligent-robot society. The ability to express emotional intelligence can be, and is being, programmed into AI machines, resulting in a soothing communication style.

Newsweek article from 2017, titled: Sophia the Robot Wants Women’s Rights for Saudi Arabia. Photo Credit: PATRICIA DE MELO MOREIRA/AFP/GETTY IMAGES

We are currently at a crossroads in accepting AI robots as part of our society, yet there are multiple unknowns in the definitions we have for this new type of cognitive technology.

Trust has been built and fostered by the recent sharp increase in applied AI, which has generated great leaps in the development of the technology and its implementation. By the principles of programming, all algorithms for data processing and learning operate with a degree of uncertainty, and error is plausible. Everyone makes mistakes, right?
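That irreducible uncertainty is easy to demonstrate with a hypothetical Monte Carlo sketch (the 80/20 split below is an invented example, not a figure from the article): when labels carry 20% noise, even the best possible decision rule, predicting the underlying signal itself, errs about 20% of the time, and no amount of engineering removes that floor.

```python
import random

rng = random.Random(42)
n, errors = 10_000, 0
for _ in range(n):
    signal = rng.random() < 0.5                            # what the model observes
    label = signal if rng.random() < 0.8 else not signal   # 20% label noise
    prediction = signal                                    # optimal rule for this setup
    errors += (prediction != label)

error_rate = errors / n  # hovers around 0.20 regardless of how "smart" the rule is
```

The point for acceptance is that this residual error is a property of the data-generating process, not a defect of the algorithm; judging AI (or people) as if zero error were attainable sets an impossible standard.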

Regardless of the lightning speed of AI development, mainstream thought is bound to magnify each mistake it makes, which might set back collective social acceptance. This is because negative stimuli are stored more permanently by our brains than positive ones. Also, we do not hold ourselves as accountable for mistakes as we do others: we tend to be judgmental towards others, condemning the entire process rather than just discarding the outcome.

Inevitably, if AI is legally recognized for its contributions and rewarded with citizenship before a satisfied majority of the population feels it is deserved, a social revolution might take place.

A challenge will then arise in re-defining the concept of citizenship.

For instance: If Sophia (the robot) is a citizen of Saudi Arabia and if her software is duplicated onto another robot with a different appearance — will the latter have the same rights?

If so, the virtue of citizenship will be demolished, with multiple challenges arising, among them:

  • keeping track of the ‘population’ of countries,
  • the idea of equipping a person with AI software,
  • rapid multiplication of the same software, possibly resulting in a dictatorship of public opinion,
  • the obliteration of the concept of being in one place at a time, and with it, accountability for mistakes made in multiple places at once.

Furthermore, as scientists have asked:

If any criminal convictions are to be made against an AI robot, will its developer also carry a penalty, as they are arguably the partial creator of its mind? Or will the objectification of AI robots be dropped after their acceptance as legal entities in society? If so, can we imagine citizenship being granted without a physical entity, to AI software alone as opposed to a humanoid robot?

Many dystopian thinkers also doubt the role of humans after the introduction of AI machines. In contrast, utopians argue that cognitive technology can outperform humans in quantifiable, statistical, and historical data analytics, with greater time-efficiency and significantly fewer errors. This should prompt man to act first as a support system and eventually as a supervisor to the AI agents, while working on developing creative thinking and other capabilities to which AI is inherently limited. Thus, we must accept our weaknesses, embrace them, and learn to cooperate as a means of welcoming cognitive, sentient, and intelligent machines into society.

The Takeaway

Our current treatment of AI machines demonstrates that our ethical compass points toward this new species the same way it has toward animals or former slaves. This shows our society has failed to progress in its ethical treatment of others, despite taking significant leaps in collective intelligence over the years.

Accepting the benefits of a self-aware, cognitive, and sentient AI will also require granting liberties, such as civil rights. This process has already started with the granting of citizenship to an AI humanoid robot. Even so, three years since that event, we have yet to define what it means for our society and for the concept of citizenship in an era of cohabiting with AI.

Not only do we fail to accept or discuss the future of AI and humanity because of our distorted ethical reasoning, but we also fail to conceptualize the programming of ethics into AI, delaying the development of this aspect of the technology. Issues such as accountability and ownership remain unresolved, suggesting that we consider AI machines ours to rule, indefinitely.

As a society, we must accept our weaknesses, embrace them, and learn to cooperate before sentiency is reached in cognitive machines, lest we risk losing any form of control over our tech-induced world.

Source: https://medium.com/discourse/the-emerging-challenges-with-ai-acceptance-no-one-is-talking-about-5c1061537376
