Our Machine Learning Algorithms Are Magnifying Bias and Perpetuating Social Disparities

AI Ethics and Considerations

Shortly after I began my machine learning courses, it dawned on me that there is an absurd exaggeration in the media concerning the state of artificial intelligence. Many are under the impression that artificial intelligence is the study of developing conscious robotic entities that will soon take over planet Earth. I typically brace myself whenever someone asks what I study, since my response is often prematurely met with a horrified gasp or an angry confrontation. And understandably so.


Conscious Robotic Entities Soon to Take Over?

However, the reality is that machine learning is not a dangerous magic genie, nor is it any form of conscious entity. For simplicity's sake, I typically say that the essence of AI is math. Some say it's just 'glorified statistics'. Or, as Kyle Gallatin has so eloquently put it, 'machine learning is just y = mx + b on crack.'

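To make the 'y = mx + b' quip concrete, here is a minimal sketch (with made-up numbers, using scikit-learn) of what a great deal of machine learning boils down to: estimating the slope and intercept of a line from data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data: advertising spend (x) vs. units sold (y).
x = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# "Learning" here is just estimating the slope m and intercept b of y = mx + b.
model = LinearRegression().fit(x, y)
print(f"m = {model.coef_[0]:.2f}, b = {model.intercept_:.2f}")
print("prediction for x = 6:", model.predict([[6.0]])[0])
```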

Of course, this is a simplification since machine learning pulls from many disciplines such as computer science, neuroscience, mathematics, the scientific method, etc. But the point is that the media is suffused with verbiage that makes it feel as though we are in direct danger of being taken over by artificially intelligent beings.


The truth is, we are not. But there are many other insidious issues in the production of machine learning that often go overlooked. Rachel Thomas, a co-founder of fast.ai, has mentioned that she, along with other machine learning experts, believes that the 'hype about consciousness in AI is overblown' but 'other (societal) harms are not getting enough attention'. Today, I want to elaborate on one of these societal harms that Rachel addresses: that 'AI encodes and magnifies bias'.


The Real Hazard of Machine Learning: Garbage In, Garbage Out

The most unsettling aspect of this idea of AI magnifying bias is that the very promise of machine learning in automating social processes is to hold the highest degree of neutrality. It is well known that a doctor can be biased when making a diagnosis in healthcare, or a jury can be biased when sentencing in criminal justice. Machine learning should ideally synthesize a large number of variables in the record and provide a neutral assessment.


“But what happened was that machine learning programs perpetuated our biases on a large scale. So instead of a judge being prejudiced against African Americans, it was a robot.” — Brian Resnick


We expect the model to be objective and fair; it is this false presumption of objectivity that makes the entire ordeal feel insidious and particularly disappointing.


So how does this happen?



"Garbage in, garbage out" is a well-known computer science axiom: poor-quality input produces poor-quality output. Typically, 'non-garbage' input would refer to clean, accurate, and well-labeled training data. However, we can now see that our garbage input could very well be a polished, accurate representation of our society as it has acted in the past. The real hazard in machine learning has less to do with conscious robotic entities and more to do with another type of conscious entity: human beings. When societally biased data is used to train a machine learning model, the insidious outcome is a discriminatory model that reproduces the very societal biases we aim to eliminate.

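As a hedged, toy illustration of garbage in, garbage out (a simulation with invented numbers, not a real dataset): if the historical labels we train on encode a bias against one group, a model that never sees the group attribute can still reproduce that bias through correlated features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Toy world: two groups, equally qualified on average.
group = rng.integers(0, 2, n)                        # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)                          # true qualification, same distribution
proxy = skill + 1.5 * group + rng.normal(0, 1, n)    # e.g. a feature correlated with group

# "Garbage" historical labels: past decisions that penalized group 1.
hist_label = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the group column -- only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hist_label)
pred = model.predict(X)

for g in (0, 1):
    mask = (group == g) & (skill > 0)                # equally well-qualified people
    print(f"group {g}: predicted-positive rate = {pred[mask].mean():.2f}")
```

Even with identical skill distributions, the two groups end up with very different predicted-positive rates, because the proxy carries the historical bias into the model.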

Higher Accuracy != Better Social Outcomes

The issue extends beyond prediction and toward perpetuation; we create a kind of reinforcement loop.


For example, let's say that a business owner wants to predict which of their customers would be likely to buy certain products so they could offer a special bundle. They ask a data scientist to build a predictive algorithm and use it to advertise to the selected group. At this point, the model is not simply predicting which customers will purchase; it is reinforcing that outcome.

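A toy sketch of that reinforcement loop (purely illustrative numbers): only customers the model scores highly are shown the offer, only customers who see the offer can respond, and the responses are fed back in as the next round of training data, so an initial difference in scores hardens even though both segments are equally likely to buy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Every customer has the same true probability of buying *if shown* the offer.
true_buy_prob = 0.3

# Two arbitrary segments; the model starts with slightly different scores.
segment = rng.integers(0, 2, n)
score = np.where(segment == 0, 0.55, 0.45)

for round_ in range(5):
    shown = score > score.mean()                 # advertise only to high scorers
    bought = shown & (rng.random(n) < true_buy_prob)
    # Naive "retraining": each segment's new score is its observed purchase rate.
    for s in (0, 1):
        score[segment == s] = bought[segment == s].mean()
    print(f"round {round_}: shown to {shown[segment == 0].mean():.0%} of segment 0, "
          f"{shown[segment == 1].mean():.0%} of segment 1")
```

Despite identical true purchase probabilities, the loop ends up 'confirming' that segment 1 never buys, because segment 1 is never given the chance to buy.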

While innocuous in this example, this can lead to harmful outcomes for social processes. This is exactly what led to these unanticipated headlines:


Figure II. Photo by author. Citations at the end of the article.

Again, if our application is directed toward medical care, for the purpose of predicting which group should get more attention based on prior data, then we are not simply predicting for the sake of optimization; we are actively magnifying and perpetuating prior disparities.


So do we abolish machine learning because we knew it would lead to world destruction?

In short, no. But perhaps we should reimagine the way we practice machine learning. As previously mentioned, when I first began to practice machine learning, the commonplace, over-exaggerated fear of artificial intelligence developing consciousness began to amuse me a bit. I thought that perhaps the worst thing that could happen would be misuse, as with any tool we have, though misuse of a physical tool is perhaps more apparent than misuse of a digital one.


However, the short film 'Slaughterbots', posted by Alter on YouTube, provoked a lot of thought regarding ethics and the possible dangers of autonomous artificial intelligence. The primary reason the Future of Life Institute created the film was to communicate the following idea: "Because autonomous weapons do not require individual human supervision, they are potentially scalable weapons of mass destruction — unlimited numbers could be launched by a small number of people."


In the context of this short film, the drones were exploited with the intent to harm. However, could disastrous unintentional repercussions arise from the use of AI systems? What would happen if we created an AI to optimize for a loosely defined goal under loosely defined constraints, without any supervisory precautions, and then realized the result was more than we bargained for? What if we create a system with the best of intentions, meant for social good, but wind up with catastrophic and irreversible damage? The lack of consciousness becomes irrelevant here; it does nothing to diminish the potential harm.


Then I began stumbling across resources that challenged the current standard model of artificial intelligence and addressed these issues, which is ultimately what led to this blog post.


Inverse Reinforcement Learning

The first was 'Human Compatible' by Stuart Russell, which suggests that the standard model of AI is problematic due to the lack of intervention. In the current standard model, we focus on optimizing the metrics we set initially, without any human-in-the-loop supervision. Russell challenges this with the hypothetical situation in which we realize, after some time, that the consequences of our initial goals weren't exactly what we wanted.


Instead, Russell proposes that rather than using our AI systems to optimize for a fixed goal, we create them with the flexibility to adapt to our potentially vacillating goals. This means programming a level of uncertainty into the algorithm: it cannot be completely certain that it knows our goals, so it will deliberately ask whether it needs to be redirected or switched off. This is known as 'inverse reinforcement learning.'

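Below is a highly simplified sketch of the 'ask when uncertain' idea; it is not Russell's actual formulation, and the candidate goals, threshold, and update rule are invented for illustration. The agent keeps a belief over candidate objectives and defers to the human whenever no candidate is sufficiently likely.

```python
from dataclasses import dataclass

# Hypothetical candidate objectives the designer thinks the human *might* want.
CANDIDATE_GOALS = ["maximize engagement", "maximize long-term wellbeing"]

@dataclass
class UncertainAgent:
    belief: dict                       # belief over which candidate goal is the true one
    confidence_threshold: float = 0.9

    def act(self):
        goal, prob = max(self.belief.items(), key=lambda kv: kv[1])
        if prob < self.confidence_threshold:
            # Not certain enough about the objective: defer to the human.
            return f"ASK HUMAN: should I optimize for '{goal}'? (belief {prob:.2f})"
        return f"ACT: optimizing for '{goal}'"

    def observe_feedback(self, confirmed_goal, weight=2.0):
        # Human feedback shifts belief toward the confirmed goal.
        self.belief[confirmed_goal] *= weight
        total = sum(self.belief.values())
        self.belief = {g: p / total for g, p in self.belief.items()}

agent = UncertainAgent(belief={g: 1 / len(CANDIDATE_GOALS) for g in CANDIDATE_GOALS})
print(agent.act())                                   # belief is 50/50, so the agent asks
for _ in range(4):
    agent.observe_feedback("maximize long-term wellbeing")
print(agent.act())                                   # belief is now ~0.94, so it acts
```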

Below you can see the difference between the common reinforcement learning goals and inverse reinforcement learning goals:


Figure: common reinforcement learning goals vs. inverse reinforcement learning goals (source).

With traditional reinforcement learning, the goal is to find the best behavior or action to maximize reward in a given situation. For example, in the domain of self-driving cars, the model receives a small reward for every moment it remains centered on the road and a negative reward if it runs a red light. The model moves through the environment trying to find the best course of action to maximize reward. Therefore, a reinforcement learning model is fed a reward function and attempts to find the optimal behavior.

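In code, the reward function from the self-driving example might look something like the toy sketch below; the state fields and reward values are assumptions made for illustration. A standard reinforcement learning algorithm would then search for the policy that maximizes the cumulative sum of these rewards.

```python
from dataclasses import dataclass

@dataclass
class CarState:
    # Hypothetical observations; a real system would have far richer state.
    distance_from_lane_center: float   # meters
    ran_red_light: bool

def reward(state: CarState) -> float:
    """Hand-written reward: the quantity a standard RL agent tries to maximize."""
    r = 0.0
    if abs(state.distance_from_lane_center) < 0.5:
        r += 0.1                       # small reward for staying centered
    if state.ran_red_light:
        r -= 10.0                      # large penalty for running a red light
    return r

print(reward(CarState(distance_from_lane_center=0.2, ran_red_light=False)))   # 0.1
print(reward(CarState(distance_from_lane_center=0.2, ran_red_light=True)))    # -9.9
```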

However, sometimes the reward function is not obvious. To account for this, inverse reinforcement learning is fed a set of behaviors and tries to find the optimal reward function. Given these behaviors, what does the human really want? The initial goal of IRL was to uncover the reward function under the assumption that the given behavior is the most favorable behavior. However, we know that this isn't always the case. Following this logic, the process may help us unveil the ways in which humans are biased, which would, in turn, allow us to correct future mistakes through awareness.

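And here is a very rough sketch of the inverse direction, a simplified stand-in for real IRL algorithms rather than any particular one: assume the reward is a weighted sum of state features and infer the weights from how the demonstrated behavior differs from a baseline.

```python
import numpy as np

# Each behavior is summarized by average feature values:
# [fraction of time centered in lane, red lights run per trip]
expert_features = np.array([0.95, 0.0])      # demonstrated (human) behavior
baseline_features = np.array([0.40, 0.3])    # what aimless behavior looks like

# Assume reward is linear in the features: R(s) = w . phi(s).
# One crude step: weight features by how much the expert favors them over the
# baseline. Real IRL algorithms iterate, re-planning with the current reward
# estimate and comparing again, but the intuition is the same.
w = expert_features - baseline_features
w /= np.linalg.norm(w)

print("inferred reward weights:", w.round(2))
# -> roughly [0.88, -0.48]: staying centered is rewarded, running lights penalized.
```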

Biased COMPAS Algorithm

Another resource that is relevant and timely is the episode 'Racism, the criminal justice system, and data science' from Linear Digressions. In this episode, Katie and Ben tactfully discuss COMPAS, an algorithm whose name stands for Correctional Offender Management Profiling for Alternative Sanctions. In a few US states, judges may legally use this algorithm during sentencing to predict the likelihood of a defendant committing a crime again.


Figure: J. Dressel et al., Science Advances, eaao5580, 2018; adapted by C. Aycock/Science (source).

However, various studies have challenged the accuracy of the algorithm, discovering racially discriminatory results despite race not being an input. Linear Digressions explores potential reasons that racially biased results arise and wraps up with a lingering string of powerful, thought-provoking ethical questions:


What is a fair input to have for an algorithm? Is it fair to have an algorithm that is more accurate if it introduces injustice when you consider the overall context? Where do the inputs come from? In what context will the output be deployed? When inserting algorithms into processes that are already complicated and challenging, are we spending enough time examining the context? What are we attempting to automate, and do we really want to automate it?


This last string of questions, which Katie so neatly presents at the end of the episode, is wonderfully pressing and left a lasting impression, given that I am such a huge proponent of machine learning for social good. I am positive that these considerations will become an integral part of each complicated, data-driven social problem I aim to solve using an algorithm.

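The episode's point about discriminatory results arising without race as an input can be made concrete with a toy simulation (invented numbers, not the actual COMPAS data): a feature shaped partly by differential enforcement acts as a proxy, and the false positive rate ends up differing across groups even though the model never sees the group label.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

group = rng.integers(0, 2, n)                        # protected attribute, NOT a model input
risk = rng.normal(0, 1, n)                           # underlying propensity, same for both groups
reoffend = rng.random(n) < 1 / (1 + np.exp(-risk))   # ground-truth outcome

# Prior arrests: partly driven by behavior, partly by heavier policing of group 1.
prior_arrests = rng.poisson(np.exp(0.5 * risk + 0.7 * group))

# The model never sees `group`, only the proxy-laden feature.
X = prior_arrests.reshape(-1, 1)
clf = LogisticRegression().fit(X, reoffend)
high_risk = clf.predict_proba(X)[:, 1] > 0.5

for g in (0, 1):
    mask = (group == g) & (~reoffend)                # people who did NOT reoffend
    print(f"group {g}: false positive rate = {high_risk[mask].mean():.2f}")
```

This is a toy version of the kind of disparity the studies describe: overall accuracy can look reasonable while error rates diverge sharply across groups.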

Final Thoughts and Reflections

These models have hurt many people on a large scale while providing a false sense of security and neutrality, but perhaps what we can gain from this is an acknowledgement of the undeniable underrepresentation in our data. The lack of data for certain minority groups is evident when the algorithms plainly do not work for these groups.

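One practical habit that follows from this (a minimal sketch with hypothetical numbers and column names) is to never report only an aggregate metric: disaggregating evaluation by subgroup makes failures on underrepresented groups visible instead of averaging them away.

```python
import pandas as pd

# Hypothetical evaluation results; in practice this comes from your test set.
results = pd.DataFrame({
    "subgroup": ["A", "A", "A", "A", "B", "B"],
    "y_true":   [1,   0,   1,   0,   1,   0],
    "y_pred":   [1,   0,   1,   1,   0,   1],
})

# The single overall number hides that the model fails on the smaller group B.
overall = (results.y_true == results.y_pred).mean()
per_group = (
    results.assign(correct=results.y_true == results.y_pred)
           .groupby("subgroup")["correct"]
           .agg(["mean", "count"])
)
print(f"overall accuracy: {overall:.2f}")
print(per_group)
```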

It is our responsibility to, at the very least, recognize these biases so that we can push initiatives forward to reduce them.


Moreover, in “What Do We Do About the Biases in AI,” James Manyika, Jake Silberg and Brittany Presten present six ways in which management teams can maximize fairness with AI:


1. Remain up to date on the research surrounding Artificial Intelligence and Ethics.
2. Establish a process that can reduce bias when AI is deployed.
3. Engage in fact-based conversations around potential human biases.
4. Explore ways in which humans and machines can integrate to combat bias.
5. Invest more effort in bias research to advance the field.
6. Invest in diversifying the AI field through education and mentorship.

Overall, I am very encouraged by the capability of machine learning to aid human decision-making. Now that we are aware of the bias in our data, we are responsible for taking action to mitigate these biases so that our algorithms can truly provide a neutral assessment.


In light of these unfortunate events, I am hopeful that in the coming years there will be more dialogue on AI regulation. There are already wonderful organizations, such as AI Now, that are dedicated to research on understanding the social implications of artificial intelligence. It is now our responsibility to continue this dialogue and move forward to a more transparent and just society.



Articles Used for Figure II:

1. AI is sending people to jail — and getting it wrong
2. Algorithms that run our lives are racist and sexist. Meet the women trying to fix them
3. Google apologizes after its Vision AI produced racist results
4. Healthcare Algorithms Are Biased, and the Results Can Be Deadly
5. Self-Driving cars more likely to drive into black people
6. Why it's totally unsurprising that Amazon's recruitment AI was biased against women

Translated from: https://towardsdatascience.com/our-machine-learning-algorithms-are-magnifying-bias-and-perpetuating-social-disparities-6beb6a03c939
