The Ethics of AI

This is part 1/3 in my series on The Application of AI.

Most people view Artificial [Narrow] Intelligence (AI) in an idyllic way: a tool that we can use to automate tasks, beat us at games, and generally improve our lives!

Photo by Andy Kelly on Unsplash

While this is largely true, AI also has many flaws and dangers, whether introduced intentionally or unintentionally. Even today, there are countless examples of AI being misused to disrupt and damage the lives of many, so why and how is this happening? In the future, when AI is more widespread and common, what potential does it have to truly devastate our society? In this article, I will be diving into the subject of The Ethics of AI.

Examples of Ethical Flaws in AI Algorithms

There are many unintentional flaws in AI that create ethical problems. These flaws could lie anywhere from bias in the training data to the creation of feedback loops. Let's look at some examples of how these flaws can affect people in the real world.
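To make the feedback-loop flaw concrete, here is a minimal, hypothetical simulation in Python (the scenario, numbers, and patrol-allocation rule are all invented for illustration). An algorithm assigns patrols to whichever neighborhood has the most recorded incidents, but incidents are only recorded where patrols are sent, so an arbitrary initial gap keeps reinforcing itself:

```python
import random

random.seed(0)

# Two neighborhoods with the SAME underlying incident rate.
TRUE_RATE = 0.3
recorded = {"A": 5, "B": 4}  # a small initial difference (pure noise)

for week in range(20):
    total = recorded["A"] + recorded["B"]
    # Allocate ~10 patrols proportionally to incidents recorded so far.
    patrols = {n: round(10 * recorded[n] / total) for n in recorded}
    # Incidents are only observed (and recorded) where patrols are sent.
    for n in recorded:
        recorded[n] += sum(random.random() < TRUE_RATE for _ in range(patrols[n]))

# The arbitrary initial gap widens: "A" gets patrolled more simply
# because it was recorded more, despite identical true rates.
print(recorded)
```

The algorithm never observes the true rates, only its own records, so its output (where patrols go) distorts its future input (what gets recorded). Deployed systems built on historical records can fall into the same loop at a much larger scale.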

Bias in the Training Data of Facial Recognition

Imagine one day you're coming home from work (or school), and the police show up. They show you an arrest warrant, handcuff you, and take you to the detention center. All of this happens while you have no idea what you have done, and while your family and neighbors are watching. When you arrive at the station they pat you down, fingerprint you, take a mugshot, then throw you in a dirty cell overnight. You sit there waiting, wondering why you were arrested. You're a good person, you haven't done anything wrong, and you just want to see your family.

Robert Williams and his family (Detroit Free Press)

This is exactly what happened to Robert Williams, an innocent black man who was wrongfully arrested in Farmington Hills, Michigan. When the police interrogated Williams, they showed him a picture of a black man shoplifting valuable items from a store. Williams denied that he had been in the store and said he was not that man. The next photo was a close-up of the shoplifter, a photo that looked nothing like Williams. Williams said:

“No, this is not me. You think all black men look alike?”

So how could the police get it so wrong and arrest an innocent person? In this case, it was an algorithm's fault. The police were using AI to find suspects for crimes, and it was obviously faulty. But how could an AI be wrong? Aren't these systems supposed to be very accurate?

Yes, AI algorithms are usually very accurate, but only if you train them correctly. The main factor that led this algorithm to misidentify Williams was bias in the training data. While facial recognition systems work fairly well for white people, they are much less accurate at recognizing minority demographics.
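To see how skewed training data produces exactly this kind of accuracy gap, here is a small sketch on synthetic data (the features, weights, and 95/5 group split are invented for illustration; NumPy and scikit-learn are assumed to be available). A single classifier is trained on a pool dominated by "group A" and then evaluated on each group separately:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_group(n, w):
    """Synthetic stand-in for a demographic group: 5 features whose
    relationship to the label is governed by group-specific weights w."""
    X = rng.normal(size=(n, 5))
    y = (X @ w + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

w_a = np.array([1.0, -1.0, 0.5, 0.0, 0.0])  # majority group's pattern
w_b = np.array([0.0, 0.0, 0.5, -1.0, 1.0])  # minority group's pattern differs

# Training pool: 95% group A, 5% group B -- the kind of skew found in
# datasets collected mostly from one demographic.
Xa, ya = make_group(1900, w_a)
Xb, yb = make_group(100, w_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
Xa_test, ya_test = make_group(1000, w_a)
Xb_test, yb_test = make_group(1000, w_b)
print("group A accuracy:", accuracy_score(ya_test, model.predict(Xa_test)))
print("group B accuracy:", accuracy_score(yb_test, model.predict(Xb_test)))
```

The per-group evaluation is the crucial step: the model has essentially learned group A's pattern and does little better than guessing on group B, even though nothing in its training objective looks "wrong". Averaged over the whole pool, its accuracy still appears high, which is how such gaps go unnoticed until a system misidentifies someone.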
