Encoding the same biases: artificial intelligence’s limitations in coronavirus response

By Natalie Grover

As the coronavirus pandemic endures, the socio-economic implications of race and gender in contracting Covid-19 and dying from it have been laid bare. Artificial intelligence (AI) is playing a key role in the response, but it could also be exacerbating inequalities within our health systems — a critical concern that is dragging the technology’s limitations back into the spotlight.

The response to the crisis has in many ways been mediated by data — an explosion of information being used by AI algorithms to better understand and address Covid-19, including tracking the virus’ spread and developing therapeutic interventions.

AI, like its human maker, is not immune to bias. The technology — generally designed to digest large volumes of data and make deductions to support decision making — reflects the prejudices of the humans who develop it and feed it information that it uses to spit out outcomes. For example, years ago when Amazon developed an AI tool to help rank job candidates by learning from its past hires, the system mimicked the gender-bias of its makers by downgrading resumes from women.

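To make the mechanism concrete, here is a minimal sketch using scikit-learn and entirely synthetic data; the features, coefficients and the explicit gender column are illustrative assumptions, not a reconstruction of Amazon’s actual system. A model fitted to historical decisions that favoured one group simply learns to reproduce that preference:

```python
# A minimal, hypothetical sketch: a model trained on biased historical hiring
# decisions reproduces that bias. All data, features and numbers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Qualifications are drawn identically for both groups.
experience = rng.normal(5, 2, n)           # years of experience
gender = rng.integers(0, 2, n)             # 0 = male, 1 = female (synthetic label)

# Historical decisions: identical qualifications, but women were hired less often.
logit = 0.8 * experience - 4.0 - 1.5 * gender
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# The model sees gender (or, in reality, a proxy for it) as a feature.
X = np.column_stack([experience, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical experience, differing only in the gender column.
same_cv = np.array([[6.0, 0], [6.0, 1]])
print(model.predict_proba(same_cv)[:, 1])  # the second probability comes out lower
```

In practice the signal is rarely an explicit gender column; it is more often a proxy, such as the word ‘women’s’ on a CV, which Amazon’s tool reportedly learned to penalise.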

‘We were seeing AI being used extensively before Covid-19, and during Covid-19 you’re seeing an increase in the use of some types of tools,’ noted Meredith Whittaker, a distinguished research scientist at New York University in the US and co-founder of AI Now Institute, which carries out research examining the social implications of AI.

Monitoring tools that keep an eye on white-collar workers working from home, and educational tools that claim to detect whether students are cheating in exams, are becoming increasingly common. But Whittaker says that most of this technology is untested — and some has been shown to be flawed. However, that hasn’t stopped companies from marketing their products as cure-alls for the collateral damage caused by the pandemic, she adds.

In the US for instance, a compact medical device called a pulse oximeter, designed to gauge the level of oxygen in the blood, had some coronavirus patients glued to its tiny screens to decide when to go to the hospital, in addition to its use by doctors to aid in clinical decision making within hospitals.

The way the device works, however, is prone to racial bias and was likely calibrated on light-skinned users. Back in 2005, a study definitively showed the device ‘mostly tended to overestimate (oxygen) saturation levels by several points’ for non-white people.

The problem with the pulse oximeter device has been known for decades and hasn’t been fixed by manufacturers, says Whittaker. ‘But, even so, these tools are being used, they’re producing data and that data is going on to shape diagnostic algorithms that are used in health care. And so, you see, even at the level of how our AI systems are constructed, they’re encoding the same biases and the same histories of racism and discrimination that are being shown so clearly in the context of Covid-19.’

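A deliberately simple sketch of why that matters downstream; the three-point overestimate and the 92% escalation threshold are illustrative assumptions, not clinical guidance. A reading inflated for one group can sit above a decision threshold that the true value falls below, and any triage rule or diagnostic algorithm fed those readings inherits the same skew:

```python
# Illustrative only: a small, group-dependent measurement bias can flip a
# threshold-based decision. The bias and threshold values are hypothetical.
def measured_spo2(true_spo2: float, overestimate: float = 0.0) -> float:
    """Simulate a pulse oximeter reading with a fixed calibration bias."""
    return true_spo2 + overestimate

def needs_escalation(reading: float, threshold: float = 92.0) -> bool:
    """A simple triage rule: flag readings below the threshold."""
    return reading < threshold

true_value = 90.0  # genuinely low oxygen saturation
print(needs_escalation(measured_spo2(true_value, overestimate=0.0)))  # True
print(needs_escalation(measured_spo2(true_value, overestimate=3.0)))  # False: the low reading is masked
```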

Evidence

Meanwhile, as the body of evidence accumulates that people of colour are more likely to die from Covid-19 infections, that diversity has not necessarily been reflected in the swathe of clinical trials launched to develop drugs and vaccines — a troubling pattern that has long preceded the pandemic. When it comes to gender diversity, a recent review found that of 927 trials related to Covid-19, more than half explicitly excluded pregnancy, and pregnant women have been excluded altogether from vaccine trials.

The outcomes of products in these clinical trials will not necessarily be representative of the population, notes Catelijne Muller, a member of an EU high-level expert group on AI and co-founder of ALLAI, an organisation dedicated to fostering responsible AI.

‘And if you then use those outcomes to feed an AI algorithm for future predictions, those people will also have a disadvantage in these prediction models,’ she said.

The trouble with use of AI technology in the context of Covid-19 is not different from the issues of bias that plagued the technology before the pandemic: if you feed the technology biased data, it will spout biased outcomes. Indeed, existing large-scale AI systems also reflect the lack of diversity in the environments in which they are built and the people who have built them. These are almost exclusively a handful of technology companies and elite university laboratories — ‘spaces that in the West tend to be extremely white, affluent, technically oriented, and male,’ according to a 2019 report by the AI Now Institute.

But the technology isn’t simply a reflection of its makers — AI also amplifies their biases, says Whittaker.

‘One person may have biases, but they don’t scale those biases to millions and billions of decisions,’ she said. ‘Whereas an AI system can encode human biases and then can distribute those in ways that have a much greater impact.’

Complicating matters further, there are automation bias concerns, she adds. ‘There is a tendency for people to be more trusting of a decision that is made by a computer than they are of the same decision if it were made by a person. So, we need to watch out for the way in which AI systems launder these biases and make them seem rigorous and scientific and may lead to people being less willing to question decisions made by these systems.’

‘We need to watch out for the way in which AI systems launder these biases and make them seem rigorous and scientific.’

- Meredith Whittaker, New York University, US

Safe

There is no clear consensus on what will make AI technology responsible and safe en masse, experts say, though researchers are beginning to agree on useful steps such as fairness, interpretability and robustness.

The first step is to ask ‘question zero’, according to Muller: what is my problem and how can I solve it? Do I solve it with artificial intelligence or with something else? If with AI, is this application good enough? Does it harm fundamental rights?

‘What we see is that many people think that sometimes AI is sort of a magic wand…and it’ll kind of solve everything. But sometimes it doesn’t solve anything because it’s not fit for the problem. Sometimes it’s so invasive that it might solve one problem, but create a large, different problem.’

When it comes to using AI in the context of Covid-19, there is an eruption of data, but that data needs to be reliable and optimised, says Muller.

‘Data cannot just be thrown at another algorithm’ she said, explaining that algorithms work by finding correlations. ‘They don’t understand what a virus is.’

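A hypothetical sketch of what ‘finding correlations’ without understanding can look like in practice; the scenario and data are synthetic. Here the severity label is driven mostly by which hospital the records came from, because sicker patients were referred there, and the model exploits that shortcut rather than anything about the disease itself:

```python
# Synthetic example: the model latches onto a spurious correlation (the
# hospital code) instead of the weak clinical signal. Nothing here "understands"
# a virus; the fit only reflects correlations present in the data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 5_000

hospital = rng.integers(0, 2, n)          # 0 = general ward, 1 = ICU-referral centre
temperature = rng.normal(37.5, 0.7, n)    # weakly informative clinical feature

# Severe outcomes are far more common at the referral centre,
# simply because sicker patients were sent there.
severe = rng.random(n) < np.where(hospital == 1, 0.7, 0.1)

X = np.column_stack([temperature, hospital])
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, severe)

# Nearly all of the model's predictive power comes from the hospital code.
print(dict(zip(["temperature", "hospital"], model.feature_importances_)))
```

If the referral pattern changes, the learned rule quietly stops being valid, which is why data cannot simply be thrown at an algorithm.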

Fairness issues with AI showcase the biases in human decision making, according to Dr Adrian Weller, programme director for AI at the Alan Turing Institute in the UK. It’s wrong to assume that not using algorithms means everything will be just fine, he says.

There is hope and excitement about these systems because they operate more consistently and efficiently than humans, but they lack notions of common sense, reasoning and context, areas where humans are much better, Weller says.

Accountability

Having humans partake more in the decision-making process is one way to bring accountability to AI applications. But figuring out who that person or persons should be is crucial.

‘Simply putting a human somewhere in the process does not guarantee a good decision,’ said Whittaker. Issues such as who that human works for and what incentives they are working under need to be addressed, she says.

‘I think we need to really narrow down that broad category of “human” and look at who and to what end.’

Human oversight could be incorporated in multiple ways to ensure transparency and mitigate bias, suggest ALLAI’s Muller and colleagues in a report analysing a proposal EU regulators are working on to regulate ‘high-risk’ AI applications, such as those used in recruitment, biometric recognition or the deployment of healthcare.

These include auditing every decision cycle of the AI system, monitoring the operation of the system, having the discretion to decide when and how to use the system in any particular situation, and the opportunity to override a decision made by a system.

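As a rough illustration of how some of these measures could be wired into software, here is a hypothetical sketch; the class and method names are invented for this example and are not part of the EU proposal or the ALLAI report. Every prediction is logged so decision cycles can be audited, and a named reviewer can override the system’s output:

```python
# Hypothetical sketch of human oversight around a model: an audit trail for
# every decision cycle plus an explicit human override. Names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverseenModel:
    model: object                     # anything exposing a .predict(x) method
    audit_log: list = field(default_factory=list)

    def predict(self, x, reviewer: str):
        machine_output = self.model.predict(x)
        self.audit_log.append({       # every decision cycle is recorded for audit
            "time": datetime.now(timezone.utc).isoformat(),
            "input": x,
            "machine_output": machine_output,
            "final_decision": machine_output,
            "reviewer": reviewer,
            "overridden": False,
        })
        return machine_output

    def override(self, new_decision, reason: str):
        """Let the reviewer replace the most recent machine decision."""
        record = self.audit_log[-1]
        record.update(final_decision=new_decision, overridden=True, reason=reason)
        return new_decision

# Minimal usage with a stand-in model.
class AlwaysHighRisk:
    def predict(self, x):
        return "high risk"

overseen = OverseenModel(AlwaysHighRisk())
overseen.predict({"case": "example"}, reviewer="clinician_a")
overseen.override("low risk", reason="clinical context the model cannot see")
```

Monitoring the system’s operation and the discretion not to use it at all would sit outside such a wrapper, in the surrounding process.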

For Whittaker, recent developments such as EU regulators’ willingness to regulate ‘high-risk’ applications or community organising in the US leading to bans on facial recognition technology are encouraging.

‘I think we need more of the same…to ensure that these systems are auditable, that we can examine them to ensure that they are democratically controlled, and that people have a right to refuse the use of these systems.’

Meredith Whittaker and Catelijne Muller will be speaking at a panel to discuss tackling gender and ethnicity biases in artificial intelligence at the European Research and Innovation Days conference which will take place online from 22–24 September.

Originally published at horizon-magazine.eu.

Translated from: https://medium.com/horizon-magazine/encoding-the-same-biases-artificial-intelligence-s-limitations-in-coronavirus-response-19879785b0db
