In 2016, Amazon released Rekognition (with a ‘k’), its cloud platform for face recognition and face classification. Those two things sound similar, but they are not the same, and the difference between them matters. Bundling them together has been one contributing factor to the widespread confusion about face recognition among the general public. Here’s why.
Face recognition (with a ‘c’) is the process of identifying a specific person from a picture of their face. Face recognition systems attempt to answer one question and one question only: “who is this?” For the techies out there, the term is defined in an international technical standard, ISO/IEC 2382-37. That standard has been in place, and accepted by the 164 member countries of the International Organization for Standardization (ISO), since 2012.
Face classification does something very different. It attempts to answer the question, “what is this?” Face classification systems attempt to label people with things like gender, age, race, emotional state, or other markers.
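To make the distinction concrete, here is a minimal sketch of what the two questions look like through Amazon’s boto3 SDK, the same client Rekognition exposes for both tasks. The bucket, image, and collection names are placeholders, the response handling is purely illustrative, and the recognition call assumes faces were previously enrolled (indexed) into the collection.

```python
# A minimal, illustrative sketch -- placeholder bucket/collection names;
# assumes boto3 is installed and AWS credentials are configured.
import boto3

rekognition = boto3.client("rekognition")
image = {"S3Object": {"Bucket": "my-bucket", "Name": "probe.jpg"}}

# Face classification: "what is this?" -- returns guessed attributes
# (age range, gender, emotions) for each detected face.
classification = rekognition.detect_faces(Image=image, Attributes=["ALL"])
for face in classification["FaceDetails"]:
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(face["AgeRange"], face["Gender"]["Value"], top_emotion["Type"])

# Face recognition: "who is this?" -- searches a collection of previously
# enrolled faces and returns the identities that most closely match.
recognition = rekognition.search_faces_by_image(
    CollectionId="enrolled-people",  # placeholder, built earlier with index_faces()
    Image=image,
    FaceMatchThreshold=90,
    MaxFaces=3,
)
for match in recognition["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```

Same client, two very different answers: the first call labels a face, the second tries to say which enrolled person it belongs to.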
As a computer scientist, it’s hard for me to overstate the difference between face recognition and face classification computer programs.
Face recognition creates a model of what you look like using only pictures of you. Face classification creates a model of what something else, usually an abstract concept like gender or happiness, “looks like”. To do that, the makers of a gender classification program (for example) would generally scrape from the internet a dataset of images they think represent categories like “male” and “female” and direct an AI system to learn a model from those images. This process of building a model with things that 1) are dictated by someone else’s ideas and 2) are not you is a huge source of error and bias.
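That difference in how the underlying models are built can also be seen in a toy sketch. The code below is purely illustrative: embed_face() is a hypothetical stand-in for a real face-embedding network, and the images and labels are random placeholders. The structure is the point: the recognition model is built only from pictures of one specific person, while the classification model is trained on a scraped dataset labeled according to someone else’s categories.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a real face-embedding network:
    it just flattens and normalizes the pixel array."""
    v = image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

# --- Face recognition: the model is built only from pictures of *you*. ---
your_photos = [rng.random((8, 8)) for _ in range(5)]        # placeholder images of one person
your_model = np.mean([embed_face(p) for p in your_photos], axis=0)

def is_you(probe_image: np.ndarray, threshold: float = 0.7) -> bool:
    probe = embed_face(probe_image)
    similarity = float(probe @ your_model) / (np.linalg.norm(your_model) + 1e-9)
    return similarity > threshold                            # "who is this?" -> you / not you

# --- Face classification: the model is built from *other people's* images,
# labeled according to someone else's idea of what each category "looks like".
scraped_images = [rng.random((8, 8)) for _ in range(20)]     # placeholder scraped dataset
scraped_labels = ["male" if i % 2 else "female" for i in range(20)]
X = np.stack([embed_face(img) for img in scraped_images])
classifier = LogisticRegression(max_iter=1000).fit(X, scraped_labels)

def guess_gender(probe_image: np.ndarray) -> str:
    return classifier.predict(embed_face(probe_image).reshape(1, -1))[0]   # "what is this?"
```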
How big a source of error and bias? A seminal 2018 study by MIT researchers found error rates in gender classification of around 35% for black females and 1% for white males. These are big error rates, and big differences in error rates between groups. What about error rates in face recognition? A 2020 NIST study recorded error rates of 0.1% for black females and 0.4% for white males. Not only are the error rates for face recognition considerably lower, but the disparity between groups is almost non-existent.
The differences don’t stop there. People sometimes claim face recognition is an “untested” technology, but that is actually more true of face classification. Face recognition has been continually evaluated by the U.S. National Institute of Standards and Technology (NIST) since 2002. That’s almost two decades of constant attention from one of the leading science organizations in the world. There are also entire international bodies of scientists that develop standards for how to talk about, test, and deploy face recognition. As a class of technology, face recognition may be one of the most tested and studied computer programs in existence. The same cannot be said for face classification.
That doesn’t mean every face recognition algorithm has been tested, but a lot have been. At last count, NIST had tested almost 200 different face recognition programs. Ironically, one of the vendors that hasn’t been NIST tested is… Amazon Rekognition(!), a fact that I think perfectly illustrates the point I tried to lay out in my title:
- Amazon Rekognition: untested ❌
- Face recognition: highly tested ✔️
Finally, let’s think about the use cases for face classification versus face recognition. People occasionally use studies of face classification to advocate for restricting face recognition use by the police, but here’s the thing: ban face classification use by the police. Do it today. I don’t think it would be controversial whatsoever, and it is likely something that computer scientists, law enforcement, and civil liberties groups could all agree on.
There are almost no legitimate reasons for a police department to be using a computer program to guess someone’s age, gender, or level of happiness. There is, however, a need for police departments to identify individual people (which is what face recognition does). Police have been doing this since the dawn of policing. They need to find missing persons, victims of crimes, owners of property, and, yes, people suspected of committing a crime. What’s more, the alternative to computer programs helping with these tasks is the status quo, meaning only humans do this work, and there is abundant evidence that we are absolutely terrible at these tasks.
Since its launch, AWS Rekognition has been criticized for a number of reasons, many of them valid, but the act of bundling face classification and recognition together is one that gets too little attention. In doing so, Amazon has muddied the conversation regarding face recognition technology (this John Oliver skit is a good example). This confusion has real-world consequences, with U.S. legislation recently introduced at both the state and federal levels that, like Rekognition, bundles the concepts of face classification and recognition together, potentially putting those bills at odds with international standards and scientists.
In general, cloud platforms like Amazon’s have led the way in democratizing access to technology, which is a good thing. However, when it comes to face recognition technologies, they have some work to do.
Amazon and others need to clearly separate technologies that have a police application from those that don’t. The use of those with a police application should be scrutinized and debated by the public. There should be rules about how technologies with a police application are used, so they aren’t misused. Unfortunately, right now, confusion between classification and recognition has become a part of that discussion. Hopefully, this helps clarify just how different those two things are and contributes to a well-informed and respectful dialog moving forward.
~John J. Howard, Ph.D.
The views presented here are solely those of the author. They do not represent the views of any of his affiliates. If you found this interesting, you can follow me on Twitter or LinkedIn.