The Ethics of AI

This is part 1/3 in my series on The Application of AI.

Most people view Artificial [Narrow] Intelligence (AI) in an idyllic way: a tool that can automate our tasks, beat us at games, and improve our lives in general!

[Image: Photo by Andy Kelly on Unsplash]

While this is largely true, AI also has many flaws and dangers, whether introduced intentionally or unintentionally. Even today, there are countless examples of AI being misused to disrupt and damage people's lives, so why and how is this happening? In the future, when AI is more widespread and common, what potential does it have to really devastate our society? In this article, I will be diving into the subject of The Ethics of AI.

Examples of Ethical Flaws in AI Algorithms

There are many unintentional flaws in AI that create ethical problems. These flaws could lie anywhere from the bias in the training data to the creation of feedback loops. Let’s look at some examples of how these flaws can affect people in the real world.

Bias in the Training Data of Facial Recognition

Imagine one day you were coming home from work (or school), and the police show up. They show you an arrest warrant, handcuff you, and take you to the detention center. This is all happening while you have no idea what you have done, and your family and neighbors are watching you. When you arrive at the station they pat you down, fingerprint you, take a mugshot, then throw you in a dirty cell overnight. You sit there waiting, wondering why you were arrested. You’re a good person, you haven’t done anything wrong, and you just want to see your family.

[Image: Robert Williams and his family, via the Detroit Free Press]

This is exactly what happened to Robert Williams, an innocent black man who was wrongfully arrested in Farmington Hills, Michigan. When the police interrogated Williams, they showed him a picture of a black man shoplifting valuable items in a store. Williams denied having been in the store and said he was not that man. The next photo was a close-up of the shoplifter, a photo that looked nothing like Williams. Williams said:

“No, this is not me. You think all black men look alike?”

So how could the police get it so wrong and arrest the wrong person? In this case, it was an algorithm's fault. The police were using AI to find suspects for crimes, and it was obviously faulty. But how could an AI be wrong? Aren't they supposed to be very accurate?

Yes, AI algorithms are usually very accurate, but only if you train them correctly. The main factor that led this algorithm to misidentify Williams was bias in the training data. While facial recognition systems work fairly well for white people, they are much less accurate at recognizing minority demographics. This is because the datasets used contain mostly white faces and far fewer faces of minorities, which lowers the accuracy for those demographics and, in this case, led to the arrest of Williams.
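
To make the failure mode concrete, here is a minimal sketch (my own illustration, not the system the police used) of auditing a face-recognition model's accuracy separately for each demographic group; the `model.identify` call, the group labels, and the dataset are all hypothetical placeholders.

```python
from collections import defaultdict

def accuracy_by_group(model, samples):
    """samples: iterable of (image, true_identity, group) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, true_identity, group in samples:
        prediction = model.identify(image)  # hypothetical API
        total[group] += 1
        if prediction == true_identity:
            correct[group] += 1
    # One accuracy number per demographic group, instead of a single headline score
    return {group: correct[group] / total[group] for group in total}

# With a training set that is mostly lighter-skinned faces, a breakdown like
# {"lighter male": 0.99, "darker female": 0.77} is the kind of gap that shows up.
```

Per-group audits like this are how disparities such as the ones in the chart below get surfaced: if one group dominates the training data, its accuracy dominates the headline number while the other groups quietly lag.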

[Image: Facial recognition accuracy by demographic, from Lesson 5 of FastAI]

Though these systems are quite accurate in the "darker male" category, there is still quite a difference between lighter and darker faces, and a severe difference in the "darker female" category. Keep in mind that these are the leading companies' systems and may not be what the police had used.

This information is sourced from The New York Times and The Washington Post.

YouTube's Feedback Loops

Everyone's been there: you see one interesting video on YouTube's front page or on trending, and one hour later you're still there, watching more videos without realizing how you got there. This is all caused by YouTube's recommendation system. It shows you the videos it predicts will keep you on the platform as long as possible, thus generating more ad revenue.

[Image: Photo by Christian Wiediger on Unsplash]

You may be thinking right now, “Ok, so what if they keep me on the platform a bit longer? It’s not hurting anyone.” Well, this recommendation algorithm can lead to feedback loops of toxic and harmful content.

One horrific problem of AI-driven feedback loops on YouTube is this:

“The site’s recommendation algorithm was making it easier for pedophiles to connect and share child porn in the comments sections of certain videos. The discovery was horrifying for numerous reasons. Not only was YouTube monetizing these videos, its recommendation algorithm was actively pushing thousands of users toward suggestive videos of children.”

The feedback loop can also create echo chambers, which narrow a viewer's exposure to content they may not agree with. This means someone may never hear an argument against an opinion they hold, which drives polarization.

Another problem this feedback loop can cause is the propagation of conspiracy theories. Similar to how the feedback loop drives polarization, it can push someone who is only mildly interested in, or even skeptical of, conspiracy theories into believing them. This happens as they watch more and more recommended videos that all carry the same message.
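
Here is a toy simulation of that dynamic (my own sketch, not YouTube's actual system): a recommender that always ranks topics by predicted watch time quickly converges on whatever the viewer has already engaged with. Every name and number below is made up for illustration.

```python
import random

TOPICS = ["news", "sports", "conspiracy", "music", "science"]

def predicted_watch_time(history, topic):
    # Naive engagement model: the more of a topic you have already watched,
    # the longer you are predicted to watch it next time.
    return 1.0 + history.count(topic) + random.random() * 0.1

def recommend(history):
    # Always pick the topic with the highest predicted watch time.
    return max(TOPICS, key=lambda t: predicted_watch_time(history, t))

history = ["conspiracy"]  # one curious click to start
for _ in range(20):
    history.append(recommend(history))

print(history)  # rapidly fills with "conspiracy": the loop feeds on itself
```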

[Image: Photo by Markus Winkler on Unsplash]

This information is sourced from Guillaume Chaslot, who worked on YouTube's recommendation system.

AI Cutting Healthcare

In 2016, a new algorithm was implemented in the Arkansas healthcare system. This algorithm was meant to better allocate the resources and time of healthcare providers by weighing several factors to determine the amount of assistance each person would get.

However, this algorithm, which was meant to increase efficiency in the healthcare system, ended up cutting the healthcare of many of the people who needed it most.

[Image: via The Verge]

The software didn't account for diabetes or cerebral palsy, and so it caused mass suffering for these patients. The problem came down to a bug and some faulty code. This brings up obvious ethical questions: should we be using AI to determine the well-being of the needy? Why and how was this algorithm approved by the state without being tested for flaws?
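
To see how a bug like this can quietly cut someone's care, here is a simplified sketch (my own illustration with made-up field names and weights, not the vendor's actual code) of a rules-based allocation score that silently ignores cerebral palsy and diabetes.

```python
NEED_WEIGHTS = {
    "mobility_impairment": 20,
    "requires_feeding_assistance": 15,
    "cognitive_impairment": 10,
    # Bug: "cerebral_palsy" and "diabetes" were never added to this table,
    # so patients with those conditions score as if they need less help.
}

MAX_WEEKLY_HOURS = 56

def weekly_care_hours(patient):
    # Sum the weights of every need recorded in the patient's assessment.
    score = sum(w for field, w in NEED_WEIGHTS.items() if patient.get(field))
    return min(MAX_WEEKLY_HOURS, round(score * 1.2))

tammy = {"mobility_impairment": True, "cerebral_palsy": True}
print(weekly_care_hours(tammy))  # 24 -- far below the 56 hours she actually needs
```

Because the missing conditions never raise the score, the shortfall stays invisible unless someone audits the inputs against real cases.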

A clear-cut example of how this algorithm impacted someone's life is the case of Tammy Dobbs.

[Image: via The Verge]

Tammy Dobbs suffered from cerebral palsy for most of her life and moved to Arkansas in 2008. Dobbs needed a lot of assistance, as she was confined to her wheelchair and her hands were stiff. Because of this, the state's healthcare program allocated her the maximum amount of home care: 56 hours a week.

However, this changed when the AI algorithm was implemented in the Arkansas health system. Dobbs' home care time was cut from 56 hours a week to only 32 hours, roughly 57% of what she originally had! This reduction obviously made a huge impact on her life, as she now had to decide what she could cut from her own care.

This information is sourced from The Verge.

The Misuse of AI

There are also several examples of how AI can be intentionally used in unethical ways. This is even scarier than the unintentional flaws, as it is a conscious choice people make to misuse AI and algorithms to harm others.

IBM and the Holocaust

You read the title of this example: IBM helped the Nazis carry out the Holocaust in WWII. You may be wondering, "They had AI back in WWII?" Well, sort of. IBM developed a tabulation machine for classification. It ran more of a very crude algorithm to read punch cards and classify them. These machines were extremely tedious and required a lot of maintenance, not what someone would ordinarily envision as Artificial Intelligence. Nonetheless, I still believe it is important to discuss how technology can be misused, whether you consider it AI or not.

[Image: via The Guardian]

Here is an overview of how this machine worked: first, a punch card was fed into the machine. The machine would read the information, keep track of it, and output a number. These numbers represented which concentration camp a person would be sent to, the type of prisoner they were, and how they died.

Howard J. Carter, a chief investigator for the Economic Warfare Section, wrote this:

“What Hitler has done to us through his economic warfare, one of our own American corporations has also done … Hence IBM is in a class with the Nazis. The entire world citizenry is hampered by an international monster.”

This information is sourced from The Huffington Post.

Deepfakes

Deepfakes are a form of AI with which artificial images, videos, and even audio can be produced so that they look real. They are created using GANs (generative adversarial networks), a deep learning method.
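
For a sense of the core mechanism, here is a minimal GAN sketch in PyTorch (a toy illustration of the adversarial setup, not a real deepfake model; the layer sizes and the random stand-in for "real" data are placeholders): a generator learns to produce samples that a discriminator cannot tell apart from real ones.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32  # toy sizes, not real image dimensions

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)               # stand-in for real training data
    fake = generator(torch.randn(batch, latent_dim))  # generator's attempt at "real" data

    # Train the discriminator: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Real deepfake systems build on the same adversarial idea, typically with convolutional architectures, face-specific pipelines, and far more data and compute.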

[Image: via Forbes]

Deepfakes can be used in harmless ways, created merely for amusement, but they also carry real dangers and ethical problems.

One way Deepfakes can be used to cause harm is in politics. If even a Tweet can cause tensions to rise, what would a video or an audio clip do? A clip in which a political figure appears to say nasty things about other political figures could go viral and, if people believe it, increase polarization and tensions.

Another way Deepfakes can disrupt society is by simply flooding the internet with tons of fake content. Whenever you go online, how would you know if anything is real? If the internet and social media become flooded with Deepfakes, no one will be able to tell what is real and what is fake. Even realistic fake text can be generated, so how would you know whether what you are reading was written by me, or by some bot trying to spread disinformation?

[Image: via MIT Technology Review]

The final way Deepfakes can be used to cause harm to others is pornography. According to a report from September 2019, 96% of Deepfakes online are pornography. A Forbes article states:

Deepfake pornography is almost always non-consensual, involving the artificial synthesis of explicit videos that feature famous celebrities or personal contacts.

Non-consensual Deepfakes of other people raise several ethical questions and are extremely wrong for obvious reasons.

Most of this information is sourced from Forbes.

The Ethical Problems to Come

Though this all sounds very scary, there may be worse to come.

One of these problems is privacy. As AI and technology get better and better, we may end up in an Orwellian situation. AI allows governments to be more authoritarian and gives them the ability to know where we are at all times and what we are doing. How? Facial recognition installed in cameras to watch your every move, microphones recognizing who is talking and what they’re saying, and predictive algorithms that will know what you will do next.

[Image: via Contexture]

This also plays into another problem: the easier it is for governments to track you, the easier it is for them to get rid of the people they don't like. This would allow the next Hitler to be much more efficient; instead of needing to find a population they don't like, they would already know where all of them are at any moment. It could also allow governments to find the people who oppose them and make it easier to punish them. Though this may not seem realistic for some countries like Canada, other countries that are more authoritarian, or that lack a stable government, could do this easily with better and better AI.

As I briefly mentioned before, convincing fake texts and videos can flood the internet, thus rendering most online information untrustworthy and valueless.

I will go more in-depth into the potential futures of AI in my final article in this series.

More Resources & Acknowledgments

The topic of this article was inspired by Lesson 5 of FastAI's Practical Deep Learning for Coders course and the chapter on ethics in their book. These were written and created by Rachel Thomas, Jeremy Howard, and Sylvain Gugger.

If you want to learn more about the ethics of AI, Rachel Thomas has a great course on FastAI. You can check out the full course here.

Thanks to Dickson Wu for proofreading and editing!

Thanks for reading part 1 of my 3-part series on the Application of AI! The next part is The Current Applications of AI!

Feel free to message me on LinkedIn if you have any questions or check out my website!

Originally published at https://medium.com/swlh/the-ethics-of-ai-643a97be0514
