Is Facial Recognition Tech Biased or Too Good?

Over the last couple of months, amidst all the reckoning, three of the biggest companies in tech, IBM, Amazon, and Microsoft, announced that they will stop offering facial recognition software to law enforcement agencies.

IBM led the way with a letter to Congress, in which its CEO, Arvind Krishna, called for “shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”

Soon after this announcement, leading news outlets like the New York Times, The Washington Post, BBC, and Quartz put out articles stating that IBM had ended its facial recognition program, and credited this partially to landmark research conducted in 2018.

The 2018 Gender Shades research

This influential research by Joy Buolamwini and Timnit Gebru, titled Gender Shades, brought the issue of bias in facial recognition algorithms to the public spotlight by analyzing the accuracy of gender classification products provided by IBM, Microsoft, and Face++.

Buolamwini talks about what led her to do this research and summarizes its findings in a short and insightful video.

The key finding was that these algorithms had a very high accuracy rate overall, but when broken down by gender and skin type, some disturbing statistics emerged.

The algorithms all performed better on males than females, and on lighter subjects than darker subjects. For example, IBM had an accuracy rate of 96.8 percent on lighter subjects, but only 77.6 percent on darker subjects. The worst affected group was darker females, who had accuracy rates as low as 65 percent.
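
To make the methodology concrete, here is a minimal sketch of the kind of disaggregated evaluation Gender Shades performed: instead of reporting a single overall number, accuracy is computed separately for each intersectional subgroup. The field names and sample records below are illustrative assumptions, not the actual Pilot Parliaments Benchmark data.

```python
from collections import defaultdict

# Each record: (true_gender, predicted_gender, skin_type) -- illustrative only,
# not the actual Pilot Parliaments Benchmark used by Gender Shades.
predictions = [
    ("female", "female", "darker"),
    ("female", "male",   "darker"),
    ("male",   "male",   "lighter"),
    ("female", "female", "lighter"),
    ("male",   "male",   "darker"),
]

def accuracy_by_subgroup(records):
    """Return accuracy for each (gender, skin_type) subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for true_label, predicted_label, skin_type in records:
        key = (true_label, skin_type)
        total[key] += 1
        correct[key] += int(true_label == predicted_label)
    return {key: correct[key] / total[key] for key in total}

for subgroup, acc in sorted(accuracy_by_subgroup(predictions).items()):
    print(f"{subgroup}: accuracy = {acc:.1%}, error rate = {1 - acc:.1%}")
```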

Owing to these startling findings, it is natural to assume that this research had a key role to play in IBM’s and Microsoft’s decision, but the events that followed raise further questions.

The authors sent IBM the findings of the research in January 2018 and the company responded within 24 hours. In its statement, IBM acknowledged what the research showed but stated that the firm had substantially increased the accuracy of its algorithms in a new version of the software different from the one used by the study.

When the company ran tests on a dataset similar to the research, it found that the new software had an error rate of merely 3.46 percent on darker females. The research, on the other hand, found an error rate of 34.7 percent on this subgroup. If accurate, this almost ten-fold reduction in error rate meant that IBM had made great leaps in addressing bias, at least when it comes to gender classification.

The follow-up research

Of course, a company’s own report of accuracy should be taken with a grain of salt. And that’s what authors Joy Buolamwini and Inioluwa Deborah Raji did when they published follow-up research titled Actionable Auditing, in which they analyzed the impact of the original research. More specifically, they sought to find out if publicly calling out the biased performance of commercial AI products did any good. And, fortunately, they found it did. The companies targeted in the first study were found to have addressed the bias and shown significant improvements within seven months.

The follow-up study from August 2018 showed that IBM’s new algorithms yielded an error rate of 16.97 percent for darker females, half of what was found in the original research. Microsoft showed an even greater improvement, bringing its error rate for darker females down from 20.8 percent to 1.5 percent. Both companies explicitly referenced Gender Shades in their product update releases.
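
Taking only the figures reported above (the original Gender Shades audit, IBM’s own statement, and the follow-up audit), the relative improvements can be tallied directly. A small sketch of the arithmetic; the numbers come from the article, the code is just a convenience:

```python
# Error rates (percent) on the darker-female subgroup, as reported above.
gender_shades_ibm = 34.7    # original audit, January 2018
ibm_internal = 3.46         # IBM's own test on a similar dataset
followup_ibm = 16.97        # Actionable Auditing, August 2018
gender_shades_msft = 20.8   # original audit, January 2018
followup_msft = 1.5         # Actionable Auditing, August 2018

def reduction_factor(before, after):
    """How many times smaller the error rate became."""
    return before / after

print(f"IBM, per its own claim:       {reduction_factor(gender_shades_ibm, ibm_internal):.1f}x")
print(f"IBM, per the follow-up audit: {reduction_factor(gender_shades_ibm, followup_ibm):.1f}x")
print(f"Microsoft, per the follow-up: {reduction_factor(gender_shades_msft, followup_msft):.1f}x")
```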

These improvements suggest that the companies were closing in on the bias problem, making it all the harder to understand why they would stop selling these products now.

Factors other than algorithmic bias

If IBM was able to improve ten-fold (according to its own claim) or even two-fold (according to the follow-up research) in seven months, it’s not too far-fetched to assume that it could have made even more meaningful improvements in the last year and a half.

Perhaps, it was a business decision. As CNBC points out, IBM’s facial recognition business did not generate significant revenue for the company.

Perhaps, IBM is not actually exiting this market entirely. The letter merely says that IBM no longer offers “general purpose” facial recognition software. “It’s worth bearing in mind that a lot of the work IBM does is actually customized work for their customers, so it’s possible we’re not seeing the end of IBM doing facial recognition at all, they’re just changing the label,” Eva Blum-Dumontet, a senior researcher at Privacy International, told Business Insider.

Perhaps, the media is blowing this out of proportion. Microsoft does not sell its facial recognition services to police departments. All it did in its recent announcement was reaffirm this stance.

Or perhaps, the real danger is not with bias in the algorithm, but rather with the algorithm being too good. “A perfectly accurate system also becomes an incredibly powerful surveillance tool,” Clare Garvie, a senior associate at Georgetown Law School’s Center on Privacy and Technology, told CNET last year.

Is a perfectly accurate system the problem?

The stance taken by the three companies makes more sense when seen in this context. In IBM’s letter, Arvind calls for a “national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” In Amazon’s announcement, the firm calls for “stronger regulations to govern the ethical use of facial recognition technology.” And Microsoft vowed to continue not selling its facial recognition technology to police departments until there is a “national law in place, grounded in human rights, that will govern this technology.” These three stances reflect a concern about how this technology will encroach on people’s privacy and democratic freedoms, rather than about algorithmic bias.

This is not to say there is no bias. While algorithmic bias has been significantly reduced over the years, bias clearly exists in how the technology is being used. For example, in most law enforcement applications, any footage is compared to mugshot databases. But these databases contain a disproportionate number of black people because black people have a higher arrest rate than white people for the same crime. For example, black people are four times more likely to be arrested for marijuana possession even though the use rates are about the same for white and black people. This makes identifying a black person easier with facial recognition technology and “exacerbates racism in a criminal legal system that already disproportionately polices and criminalizes Black people,” writes Kade Crockford of the ACLU. Crockford also points out that more surveillance cameras are installed in black and brown neighborhoods. Debiasing technology is within arm’s reach, but the same cannot be said for people.
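
That feedback loop can be made concrete with back-of-the-envelope numbers. The sketch below is purely illustrative, with invented population shares, gallery composition, and match rates: even a matcher whose per-entry accuracy is identical across groups will return more matches, including false ones, for a group that is overrepresented in the gallery.

```python
# Illustrative only: invented numbers, not real demographics or arrest data.
population_share = {"group_a": 0.13, "group_b": 0.87}

# The gallery (mugshot database) over-represents group_a relative to its
# population share, mirroring the arrest-rate disparity described above.
gallery_entries = {"group_a": 4000, "group_b": 6000}

# Assume the matcher is "unbiased" in the narrow sense that every gallery
# entry has the same small chance of being returned as a false match.
false_match_rate_per_entry = 0.001

for group, entries in gallery_entries.items():
    expected_false_matches = entries * false_match_rate_per_entry
    share_of_gallery = entries / sum(gallery_entries.values())
    print(
        f"{group}: {share_of_gallery:.0%} of gallery vs "
        f"{population_share[group]:.0%} of population -> "
        f"{expected_false_matches:.1f} expected false matches per probe sweep"
    )
```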

It’s more than a racial issue

The concerns surrounding facial recognition stretch far beyond race. Microsoft President Brad Smith outlined some of these in a detailed blog post back in 2018. For example, he says, public establishments could install cameras connected to the cloud that run real-time facial recognition services, intruding on the privacy of people who are given no choice in the matter. Footage from multiple cameras like these can be combined to form longer-term histories of a person. “Stores could know immediately when you visited them last and what you looked at or purchased, and by sharing this data with other stores, they could predict what you’re looking to buy on your current visit,” writes Smith.
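
The “longer-term history” Smith describes is, mechanically, just a join over sightings keyed by an identity. A minimal sketch with entirely made-up records (the camera names, face IDs, and timestamps are illustrative assumptions, not any real system’s schema):

```python
from collections import defaultdict
from datetime import datetime

# Invented sightings from hypothetical camera feeds: (face_id, camera, timestamp).
sightings = [
    ("face_0042", "grocery_store_entrance", datetime(2020, 6, 1, 9, 15)),
    ("face_0042", "electronics_store",      datetime(2020, 6, 1, 10, 2)),
    ("face_0107", "grocery_store_entrance", datetime(2020, 6, 1, 9, 47)),
    ("face_0042", "grocery_store_entrance", datetime(2020, 6, 8, 9, 20)),
]

# Group sightings by face identity to build a per-person movement history.
history = defaultdict(list)
for face_id, camera, timestamp in sightings:
    history[face_id].append((timestamp, camera))

for face_id, visits in history.items():
    visits.sort()
    print(face_id)
    for timestamp, camera in visits:
        print(f"  {timestamp:%Y-%m-%d %H:%M}  {camera}")
```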

A far more concerning scenario is a government using such technology for continuous surveillance of its people. In a democracy, this deprives people of their fundamental freedoms, and in non-democracies, it can be used as a weapon to crush those who dissent against the ruling regime. Today’s technology has reached a level that makes Orwell’s 1984 possible. “It could follow anyone anywhere, or for that matter, everyone everywhere,” writes Smith.

Back in 2017, a study claimed that an algorithm could distinguish between gay and straight men based on headshots with an accuracy of 81 percent. “Imagine for a moment the potential consequences if this flawed research were used to support a brutal regime’s efforts to identify and/or persecute people they believed to be gay,” Ashland Johnson, the Human Rights Campaign’s director of public education and research, told Vox.

Most recently, the misuse of this technology was showcased at the very protests that sought to curb it. Police departments across the US ran protest footage and photos through facial recognition software to identify BLM protestors. This is not without precedent. In the past, police used this technology to identify and target protestors during the Freddie Gray protests, arresting those with outstanding warrants.

Facial recognition technology is dangerous when it isn’t accurate. But it’s far more dangerous when it is. For this reason, the recent announcements made by Microsoft, IBM, and Amazon are noteworthy, but they are only the tip of the iceberg. Countless smaller companies continue to develop far more sophisticated facial recognition technology and provide it to governments and organizations across the world. One such company is Clearview AI, which has a database of over 3 billion photos! The company lets its clients upload any photo and find matches against this database. If you’re wondering how bad this could be, read the article titled I Got My File From Clearview AI, and It Freaked Me Out or watch this segment by John Oliver.
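
Mechanically, a service like this boils down to computing a face embedding for the uploaded photo and searching for the nearest embeddings precomputed over every photo in the database. Below is a minimal, generic sketch using the open-source face_recognition library; it is an illustrative stand-in under assumed file paths, not Clearview’s actual pipeline.

```python
import face_recognition  # open-source library: pip install face_recognition

# Hypothetical "database": a handful of photos with known labels.
database = {
    "person_a": "photos/person_a.jpg",
    "person_b": "photos/person_b.jpg",
}

# Precompute one embedding (a 128-dimensional vector) per database photo.
known_encodings, known_names = [], []
for name, path in database.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip photos where no face was detected
        known_encodings.append(encodings[0])
        known_names.append(name)

# Embed the uploaded probe photo and find the closest database entry.
probe = face_recognition.load_image_file("photos/uploaded.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]  # assumes a face is found
distances = face_recognition.face_distance(known_encodings, probe_encoding)

best = min(range(len(distances)), key=lambda i: distances[i])
if distances[best] < 0.6:  # the library's commonly used default tolerance
    print(f"Possible match: {known_names[best]} (distance {distances[best]:.2f})")
else:
    print("No match below the distance threshold.")
```

At the scale of billions of photos, the linear scan above would be replaced by an approximate nearest-neighbor index, but the matching logic is the same idea.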

Thank you for reading. If you enjoyed this article, you can sign up for my monthly digest — a curation of my most popular articles.

Translated from: https://medium.com/@sarveshmathi/is-facial-recognition-tech-biased-or-too-good-6cac4d93b0fa
