AI Bias: Keeping Bias Out of the Bots

IBM cites 180 possible human biases. Each of these plays a part in how and why we make certain decisions.

We know that humans are integral to creating, training and maintaining software bots, and in this article we assess why it is vital to keep AI software as neutral and as unaffected by bias as possible.

Human bias can be established at an early age, and many factors may affect an individual’s outlook, including upbringing, social status, race, sex, education and social context.

AI bias is an important topic, especially where Machine Learning, a branch of AI, is involved. We have another blog post which introduces Machine Learning, but in a nutshell it is the concept that computers evolve and ‘learn’ from their human counterparts and vast quantities of data.

Why is human bias an issue?

M. Scott Peck, the late American psychiatrist, eloquently states: “Human beings are poor examiners, subject to superstition, bias, prejudice, and a PROFOUND tendency to see what they want to see rather than what is really there.”

AI is a neutral platform and so, in theory, could be superior to human decision making if maintained responsibly.

How does AI bias happen?

Software bots are ‘trained’ to automate tasks to make our personal and business lives more efficient, but they can also ‘learn’ to be biased, not exclusively from the people who train them but also from the data they analyze. The more frequently terms appear alongside one another, the more the bot will take something to be true and begin predicting biased outcomes. Vox refers to a study in which an AI system ‘learnt’ to be racist and sexist by analyzing existing online content produced by humans.

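
The co-occurrence effect described above can be sketched in a few lines of Python. The corpus, pronoun list and occupation list below are invented toy data; the point is only that a frequency-based model reproduces whatever skew its training text contains.

```python
from collections import Counter
from itertools import product

# A tiny, deliberately skewed "training corpus": each sentence pairs an
# occupation with a pronoun, mimicking biased text found online.
corpus = [
    "he is a doctor", "he is a doctor", "he is an engineer",
    "she is a nurse", "she is a nurse", "he is an engineer",
    "she is a teacher",
]

pronouns = {"he", "she"}
occupations = {"doctor", "engineer", "nurse", "teacher"}

# Count how often each (pronoun, occupation) pair co-occurs in a sentence.
pairs = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for p, o in product(words & pronouns, words & occupations):
        pairs[(p, o)] += 1

# A naive model that "predicts" the pronoun most associated with a role
# simply replays the skew in the data.
def predicted_pronoun(occupation):
    return max(pronouns, key=lambda p: pairs[(p, occupation)])

print(predicted_pronoun("doctor"))  # the skewed counts favour "he"
print(predicted_pronoun("nurse"))   # and "she" here
```

Nothing in this model is malicious; the bias comes entirely from the frequencies in the text it was given.
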
Humans are inherently biased and therefore create biased content which provides a skewed dataset for AI to learn from.

There are many examples of this, such as facial recognition software incorrectly identifying a Black man, which led to him being arrested for a crime he did not commit, and voice-activated technology interpreting male voices accurately more often than female voices.

This bias is also apparent when hiring managers use AI to find candidates: for example, the bot will learn that previous employees in the role have been men, and so the AI software is likely to suggest that a man would be the best option for future positions. If this is not flagged as an issue, it may not be long before the software dismisses female candidates altogether for those roles.

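
A minimal sketch of that hiring example, using invented historical records: a recommendation rule ‘learned’ purely from past hire rates does nothing but replay the skew in the history.

```python
# Hypothetical historical hiring records for one role; the skew toward
# male hires is the only "signal" available to the model.
history = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "female", "hired": False},
    {"gender": "male", "hired": True},
    {"gender": "female", "hired": False},
]

def hire_rate(records, gender):
    """Fraction of candidates of this gender who were hired in the past."""
    group = [r for r in records if r["gender"] == gender]
    return sum(r["hired"] for r in group) / len(group)

# A naive "learned" rule: recommend whoever the historical hire rate favours.
def recommend(records, gender):
    return hire_rate(records, gender) >= 0.5

print(recommend(history, "male"))    # True: the history favours men
print(recommend(history, "female"))  # False: women are screened out
```

The fix is not in the code but in the data and the oversight: unless the skewed history is flagged and corrected, the rule looks perfectly ‘accurate’ against its own past.
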
How can development teams and tech leaders avoid this happening?

1. The broader the diversity of those involved in creating bots, the better

This means more people from different backgrounds in technical roles. In a 2018 study, the World Economic Forum found only 22% of AI professionals globally were female, and at Google and Facebook, black employees occupied less than 2% of technical roles.

In the same way that it ‘takes a village to raise a child’, it takes people from all walks of life to create a well-rounded and unbiased bot. It is important to have people with different opinions and various methods of problem solving, as they may identify an issue no one else picked up on and push the technology further than it could have gone without another point of view.

2. Be aware of our own bias and consciously avoid it

When teaching the bot, try to provide as many different examples as possible so the bot can examine a broad range of views and hopefully come to a more neutral conclusion.

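
One concrete way to act on this advice is to audit the training set for skew before teaching the bot. The dataset and group labels below are hypothetical; the helper just reports each group's share of the examples so an imbalance is visible up front.

```python
from collections import Counter

# Hypothetical labelled training examples for a bot; the audit below
# checks whether any one group dominates the data.
examples = [
    {"text": "great service", "speaker_group": "A"},
    {"text": "very helpful", "speaker_group": "A"},
    {"text": "quick response", "speaker_group": "A"},
    {"text": "solved my issue", "speaker_group": "B"},
]

def group_shares(data, key):
    """Return each group's share of the dataset, exposing any skew."""
    counts = Counter(row[key] for row in data)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

shares = group_shares(examples, "speaker_group")
print(shares)  # {'A': 0.75, 'B': 0.25}: group A dominates
```

A skewed share like this is the cue to collect more examples from the under-represented groups before training continues.
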
3. When training the bots, always provide diverse experiences

To continue with the voice activation example, this would require sampling as many female voices as male, testing both equally and continuing to teach and test until the results for both were even.

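
That "test both equally and keep teaching until the results are even" loop can be approximated as: measure accuracy per speaker group, then oversample the under-represented group until both sets are the same size. The evaluation results below are invented for illustration.

```python
import random
from collections import defaultdict

# Hypothetical evaluation results for a voice-recognition model:
# (speaker_group, correctly_recognised) pairs.
results = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False),
]

def accuracy_by_group(rows):
    """Per-group accuracy, exposing any gap between speaker groups."""
    buckets = defaultdict(list)
    for group, correct in rows:
        buckets[group].append(correct)
    return {g: sum(v) / len(v) for g, v in buckets.items()}

def oversample_to_balance(samples, key=lambda s: s[0], seed=0):
    """Duplicate minority-group samples until all groups are equal size."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for s in samples:
        buckets[key(s)].append(s)
    target = max(len(b) for b in buckets.values())
    balanced = []
    for bucket in buckets.values():
        balanced.extend(bucket)
        balanced.extend(rng.choices(bucket, k=target - len(bucket)))
    return balanced

print(accuracy_by_group(results))  # male 0.75 vs female about 0.33
balanced = oversample_to_balance(results)
print(len(balanced))               # 8: both groups now have 4 samples
```

Oversampling is only one option; collecting genuinely new female voice samples is better still, since duplicated recordings add no new variety.
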
The good news is AI is a blank canvas in the sense that it has no bias or prejudice until it is taught to have it. It will be sculpted by those who train it and the material used to do so.

Having said this, AI is certainly not something to fear. It is something to develop responsibly and ethically, because the potential of its uses could be phenomenal. It could save lives by detecting, preventing and treating diseases; it could predict and help avoid another global pandemic; it could help save the planet by identifying patterns to tackle serious environmental issues, preserving wildlife and removing plastic from the ocean.

AI can also simply make your day to day working life a little easier and more enjoyable by leveraging data to help you make better decisions.

At Roots Automation we work tirelessly to ensure our bots always leverage an unbiased view of their work. We use AI to help our bots learn from their human counterparts and over time they will anticipate tasks, as human colleagues would.

Contact us today to find out more: info@rootsauotmation.com www.rootsautomation.com, Twitter, LinkedIn

Translated from: https://medium.com/the-innovation/ai-bias-keeping-bias-out-of-the-bots-2fbcf86d8b49
