What Is Algorithmic Bias, and What Role Does Bias Play in Artificial Intelligence?

In everyday life, the term “bias” is often thrown around carelessly and contemptuously. Lately, many have been labeling artificial intelligence systems as biased, and the claim alone is enough to set off alarm bells. However, as we will see, artificial intelligence systems would be absolutely useless without bias, as would humans.

What is Bias?

We hear the term “bias” often and in many different contexts, including artificial intelligence, probability and statistics, and, most commonly, as a negative personality trait. Unsurprisingly, the true definition of the term has become muddled in recent years as the word has taken on a negative connotation.

Bias: “A particular tendency, trend, inclination, feeling, or opinion” (from dictionary.com).

From this, we can see that the term “bias” does not automatically refer to a negative belief of some sort. In fact, biases are everywhere. You eat because you have a particular inclination (a bias) towards satiation and survival. Without biases, we wouldn’t do anything at all. Put simply, biases are the core drivers of action.

Bias in AI Systems

Headlines such as “Artificial Intelligence Can Be Biased. Here’s What You Should Know” and “How to avoid bias in Artificial Intelligence” become laughable once we take the true definition of bias into account. Of course artificial intelligence can be biased; how else would it be of any use? And stripping all bias out of an AI system would render it useless and defeat its purpose. Titles like these play on the negative connotation of the term, drawing in readers through upfront alarm.

However, the concerns they raise are valid: there are many cases of harmful biases in artificial intelligence systems. AI systems that estimate how likely an offender is to re-offend were more likely to give African American offenders a medium-to-high risk score than Caucasian offenders (58% versus 33%, respectively). An AI recruiting tool created by Amazon rated female candidates significantly lower than male candidates (Amazon says the tool was subsequently scrapped). Facial analysis systems have been shown to have error rates of up to 34.7% on darker-skinned females, against a maximum error rate of 0.8% on lighter-skinned males, a disparity far too large to ignore.

A rash conclusion one might jump to is that AI systems are explicitly prejudiced against minority groups. But, like most things, it’s not as clear-cut as you might think. In the first case, the AI systems weren’t even given information about the defendant’s ethnicity. In the second case, Amazon later reprogrammed the tool to ignore explicitly gendered words such as “she” and “woman.” The issue persisted, and they realized the bias wasn’t introduced through those words at all; it came from the verbs candidates used to describe themselves, favoring words such as “executed” and “captured,” which appear more commonly on male profiles. The crux of the bias in the third situation stemmed from training sets made up predominantly of Caucasian males, a bias on our part, not the AI’s.
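To make the proxy-word effect concrete, here is a minimal sketch in Python. The résumé snippets, labels, and word list are entirely made up for illustration and have nothing to do with Amazon’s actual tool; the point is only that stripping explicit gender words does not stop a simple text classifier from latching onto correlated verbs.

```python
# Minimal sketch with a synthetic toy dataset (not Amazon's system):
# removing explicit gender words does not remove the gendered signal,
# because the classifier can still learn correlated "proxy" verbs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up résumé snippets; label 1 = favorable outcome in the biased history.
resumes = [
    "executed the migration plan and captured new accounts",
    "executed trading strategies and captured additional market share",
    "led the women in engineering chapter and organized community outreach",
    "mentored new hires and organized community outreach events",
]
labels = [1, 1, 0, 0]

# "Debiasing" step: drop only the explicitly gender-identifying words.
explicit_terms = {"women", "woman", "she", "her", "hers"}
cleaned = [" ".join(w for w in text.split() if w not in explicit_terms)
           for text in resumes]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(cleaned)
model = LogisticRegression().fit(X, labels)

# Inspect which words the model weights most heavily toward label 1.
weights = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                 key=lambda pair: pair[1], reverse=True)
print(weights[:5])
```

On toy data like this, the heaviest positive weights land on proxy verbs such as “executed” and “captured” rather than on anything explicitly gendered, which mirrors the behavior described above.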

Photo by Markus Spiske on Unsplash

From this, we can conclude that AI systems aren’t explicitly programmed to perpetuate the negative biases in our society. However, something is causing these biases, and that something is us. In the reoffending scenario, African Americans did, in fact, re-offend more (a 52% chance of reoffending versus 39% for those of Caucasian descent), because police are more inclined to arrest a person of color than a Caucasian individual, and those inclinations are passed through the data to the AI. So ultimately, where does the bias come from? Us.
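The same mechanism can be sketched with entirely synthetic data: if recorded outcomes over-sample one group, a model trained on those records scores that group higher even when the protected attribute itself is withheld, because correlated features act as proxies. The feature names (“group,” “neighborhood,” “prior record”) and all the probabilities below are invented purely for illustration, not taken from any real risk-scoring system.

```python
# Minimal sketch (entirely synthetic data): biased recording of outcomes
# flows through a proxy feature into the model's risk scores, even though
# the protected attribute is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # protected attribute, excluded from training
neighborhood = ((group + rng.random(n)) > 0.7).astype(int)  # proxy correlated with group
prior_record = rng.integers(0, 2, n)

# Underlying behavior is generated identically for both groups...
reoffend = rng.random(n) < (0.3 + 0.2 * prior_record)
# ...but group 1's re-offenses are always recorded, group 0's only 60% of the time.
recorded = (reoffend & ((group == 1) | (rng.random(n) < 0.6))).astype(int)

X = np.column_stack([neighborhood, prior_record])  # note: 'group' itself is excluded
model = LogisticRegression().fit(X, recorded)

scores = model.predict_proba(X)[:, 1]
print("mean predicted risk, group 0:", round(float(scores[group == 0].mean()), 3))
print("mean predicted risk, group 1:", round(float(scores[group == 1].mean()), 3))
```

Even though the underlying reoffending behavior is identical across groups by construction, the model’s average risk score comes out higher for the more heavily recorded group, because the skew in the labels flows through the proxy feature.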

Conclusion

The negative biases in AI systems have almost nothing to do with the systems themselves. They didn’t create the data; we gave it to them, expecting them to ignore the subtle prejudice buried within it. It’s not the AI’s fault; it’s ours. If we have any hope of fixing our AI, we must fix ourselves, and we can do so by making conscious efforts to promote equality for people of all backgrounds.

Sources

“A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear,” Washington Post, Sam Corbett-Davies, Emma Pierson, Avi Feller, and Sharad Goel, 17 Oct. 2016.

“Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, Jeffrey Dastin, 9 Oct. 2018.

“Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Joy Buolamwini, 2018.

Translated from: https://medium.com/the-black-box/whats-the-deal-with-biases-in-ai-2061ca6477cc
