The Modern Monarchy of Facial Recognition

Facial recognition, along with its accompanying mission, is not a new human endeavor. It has been attempted, with a multitude of failures, since as early as 1964, when Woodrow Bledsoe, together with Helen Chan and Charles Bisson, tried using the computer to recognize the human face. According to historians, they were the pioneers of automated face recognition. Bledsoe, in particular, was proud of his work. Still, since the project was financed by a covert intelligence agency, it received little publicity, and little of it was ever publicly published. Based on the limited data available, Bledsoe's initial approach involved manually marking several landmarks on the face, such as the centers of the eyes and the mouth. The markers were then mathematically rotated by the computer to compensate for variation in pose and facial expression.

The distances between reference points on the face were also automatically computed and compared between photographs to determine identity. Given an extensive database of images and photographs, however, the obstacle was to reduce the database to a small set of candidate records such that one of them matched the picture. The significant difficulty, according to Bledsoe, was the considerable variability in head position, facial expression, and aging. A scheme of correlation (or pattern matching) on unprocessed optical data, which some researchers still used, is sure to fail in circumstances where such variability is prominent. Notably, the correlation between two portraits of the same person taken at two different head rotations is very low.

The challenges persisted until recent decades, when silos of big data were amassed and computers with enormous processing power became available and were put to work.

Today, the resolution of one technical problem is followed by two other major issues: first, individual privacy, and second, discrimination and racism, which has recently gained overwhelming attention. In fact, the indiscriminate use of facial recognition for profiling is finding itself at the center of courtrooms across many countries. For example, not long ago, South Wales Police in the United Kingdom was sued over discriminatory use of a facial recognition system that its administration had given the green light to use.

Racial Profiling and Facial Recognition

Facial-recognition software may be racially biased. Of course, no technology is inherently racist; however, depending on how facial recognition algorithms are trained, they can be more accurate at identifying white faces than the faces of persons of African or Asian descent, or vice versa. To be more precise, as mentioned earlier, some of the most notable historical failures to create mathematical formulas that would accurately do the job came down to the tactical versus strategic alignment of their creators. For instance, recent research, according to The Atlantic, suggests that advancing accuracy rates are not distributed equally within a given community. Many current algorithms reveal troubling disparities in precision across race, gender, and other demographics. A 2011 study by one of the organizers of NIST's vendor tests found that algorithms developed in Asian countries such as South Korea, Japan, and China recognized East Asian faces far better than Caucasian ones. Conversely, algorithms developed in France, Germany, and the United States were significantly better at recognizing Caucasian facial attributes.

The conditions under which an algorithm is constructed, especially the racial makeup of its developers and of its test photo databases, most likely have a significant influence on the accuracy of its outcome. That is why, to overcome such a barrier, it may seem strategically feasible for a developer to take a shortcut and create individual database profiles based on race, gender, and ethnicity.

The Process of Facial Recognition in Layperson's Terms

To understand how facial recognition functions and how it relates to data security and discrimination, we must first familiarize ourselves with its historical development as well as its fundamentals.

Essentially, Face Recognition Is Achieved in Two Steps

First, the features of the target subject must be extracted and selected.

Second, the extracted image data must be classified. Historically, some of the most notable techniques for performing these tasks include the following:

Traditional Methods

Some face recognition algorithms recognize facial features by extracting landmarks, or topographies, from an image of the person's face. These include items such as the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw; the result is then used to search for other images with matching features.

Other algorithms create a set of facial data profiles by normalizing a gallery of face images and then compressing the data, saving only the information in each image that is useful for face recognition; a probe image is then compared against the data collected from the faces.
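This "normalize, compress, compare" flow can be sketched in a few lines of Python. Everything here is an invented stand-in: the feature vectors play the role of processed face images, and truncation plays the role of real compression.

```python
import math

# Toy stand-ins for processed face images: each gallery entry is a small
# feature vector (real systems derive these from pixels or landmarks).
gallery = {
    "alice": [0.9, 0.1, 0.4, 0.7],
    "bob":   [0.2, 0.8, 0.6, 0.1],
    "carol": [0.5, 0.5, 0.9, 0.3],
}

def compress(vec, keep=3):
    """Crude stand-in for compression: keep only the first `keep` values."""
    return vec[:keep]

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, keep=3):
    """Compare a compressed probe against the compressed gallery and
    return the name of the closest stored profile."""
    p = compress(probe, keep)
    return min(gallery, key=lambda name: distance(p, compress(gallery[name], keep)))

# A probe vector close to Alice's stored profile is matched to her:
probe = [0.85, 0.15, 0.45, 0.2]
print(identify(probe, gallery))  # alice
```

The sketch also hints at where bias can enter: the match is only as good as the gallery and the features chosen for comparison.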

Recognition Algorithms Can Be Divided into Two Fundamental Methods: Geometric and Photometric

The geometric approach looks at distinguishing characteristics, whereas the photometric approach is statistical: it distills an image into values and compares those values with templates to eliminate discrepancies. These methods can be further sub-categorized into all-inclusive (holistic) and feature-based models. The former tries to recognize the face in its totality, while the feature-based model subdivides the face into components, contrasts them according to their features, and analyzes each one along with its spatial location relative to the other features.
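The contrast between the two methods can be illustrated with toy data; the landmark coordinates and intensity lists below are made up for the example, not drawn from any real system.

```python
import math

# --- Geometric approach: measure distances between facial landmarks ---
# Landmarks are (x, y) points; the features are inter-landmark distances
# (eye-to-eye span, eye-to-mouth), which survive translation of the face.
def landmark_features(left_eye, right_eye, mouth):
    eye_span = math.dist(left_eye, right_eye)
    eye_to_mouth = math.dist(left_eye, mouth)
    return (eye_span, eye_to_mouth)

# --- Photometric approach: correlate intensity values with a template ---
def correlation(image, template):
    """Pearson correlation between two equal-length intensity lists."""
    n = len(image)
    mi, mt = sum(image) / n, sum(template) / n
    cov = sum((a - mi) * (b - mt) for a, b in zip(image, template))
    norm_i = math.sqrt(sum((a - mi) ** 2 for a in image))
    norm_t = math.sqrt(sum((b - mt) ** 2 for b in template))
    return cov / (norm_i * norm_t)

print(landmark_features((0, 0), (4, 0), (2, -5)))
print(round(correlation([10, 40, 80, 40], [12, 42, 78, 44]), 3))
```

As the article noted earlier, the photometric route is exactly the kind of raw-data correlation that degrades sharply when pose or lighting varies, while geometric distances are somewhat more forgiving.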

The Concept of 3-Dimensional Recognition

A three-dimensional face recognition procedure uses 3-D sensors to capture data about facial contour. The captured data is then used to identify unique details of the facial surface, such as the contours of the eye sockets, nose, and chin. One benefit of 3-D face recognition is that, unlike other techniques, it is not affected by changes in lighting, and it can identify a face from a range of viewing angles, including a profile view.

Skin Texture Analysis

Skin texture analysis turns the unique lines, patterns, and spots of a person's skin into a mathematical formula. It works much the same way as facial recognition. Adding skin texture analysis to a recognition system can increase its performance by 20 to 25 percent.

Facial Recognition Combining Different Techniques

As every technique has its purposes and flaws, technology companies have combined traditional methods, 3-D face recognition, and skin texture analysis to form recognition systems with higher rates of success.

Combined methods have an advantage over single-technique systems, as they are relatively insensitive to fluctuations in expression, including blinking, frowning, or smiling. They can also compensate for mustache or beard growth and the appearance of eyeglasses. Such systems also behave more uniformly with respect to race and gender.
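One common way such systems are combined is score-level fusion: each matcher produces a similarity score and the system averages them. This is a minimal sketch; the matcher names, scores, and threshold are all invented for illustration.

```python
# Each matcher returns a similarity score in [0, 1] for a probe/gallery pair.
def fuse_scores(scores, weights=None):
    """Weighted average (score-level fusion) of individual matcher scores."""
    if weights is None:
        weights = [1.0] * len(scores)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Suppose a smiling probe degrades the 2-D matcher but not the 3-D shape
# or skin-texture matchers (hypothetical scores):
scores = {"traditional_2d": 0.55, "shape_3d": 0.90, "skin_texture": 0.85}
fused = fuse_scores(list(scores.values()))
print(round(fused, 3))  # 0.767: still above a hypothetical 0.7 accept threshold
```

The fused score stays high even though one matcher faltered, which is exactly the robustness to expression changes described above.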

Incorporation of Thermal Imaging into Facial Recognition Technology

A different form of data extraction for face recognition is the use of thermal or infrared cameras. This procedure helps cameras detect the shape of the head while ignoring accessories such as glasses, hats, or makeup. It can also capture facial imagery in low-light and nighttime conditions without using a flash and revealing the position of the camera. However, due to their low sensitivity to detail, thermal cameras are almost always coupled with other technologies, as described earlier.

So, Where Does Facial Recognition Become Discriminatory?

As is noticeable from the description of the methodologies, the design of every facial recognition algorithm involves a level of profiling that is simply unavoidable without careful ethical consideration. I personally foresee that this profiling pattern could be avoided by implementing proper, unbiased deep learning algorithms; however, some entities may bypass that (at least for the time being) because of fiscal restraints and the opportunity to gain a competitive edge. Therefore, not surprisingly, most current facial recognition products, irrespective of the technology they use, have been flagged for discrimination and biased profiling practices.

For instance, according to a recent report, arrest and incarceration rates across Los Angeles, California, have surged, and the hotlists driving them disproportionately contain subjects of African descent. Based on further investigation, the algorithms backing those facial-recognition technologies may work poorly on Black demographics. Moreover, as facial recognition technology is rolled out by law enforcement across the country, the profiling and incarceration of law-abiding citizens increase in parallel, with little effort by legislatures to explore and correct such prejudice.

Reportedly, businesses that market facial recognition technology claim their products are highly efficient and accurate, with reliability over 95%. In reality, those claims are almost impossible to substantiate, because the facial-recognition algorithms adopted by police are not obligated to undergo a public or independent examination for correctness or prejudice before being used on ordinary citizens. More bothersome still, the limited testing that has been carried out on the most popular facial recognition systems has exposed a pattern of racial bias.

Racial profiling is not coincidental, particularly in public surveillance. This further raises the question of why entities like the police, along with the vendors they use, have been exempted from disclosing their proprietary algorithms.

Racial profiling is an inherently discriminatory practice, often employed by law enforcement officials across the world to target individuals with suspicion of crime based on their race, ethnicity, religion, or national origin. Another pattern of racial profiling is the targeting, ongoing since the September 11th attacks, of Muslims, Arabs, and South Asians for detention over minor immigration violations without any association with the attacks on the World Trade Center. In reality, racial profiling is a longstanding and deeply troubling national problem despite claims that the United States has entered a "post-racial era."

Common Utility of Facial Recognition Technology and Its Pitfalls

Facial recognition technology has a multitude of uses, from preventing retail crime and finding missing persons to tracking school attendance. The market for the technology is growing exponentially: according to research, the U.S. facial recognition market is expected to surge from $3.2 billion in 2019 to $7.0 billion by 2024.

The most important uses of the technology are surveillance and marketing, which raises concerns for many people. The leading cause of concern among civilians is the lack of appropriate federal statutes surrounding the use of facial recognition technology. One issue, for instance, is that according to studies, the technology has proven inaccurate at identifying people of color, especially Black women.

With the growing number of anxieties and privacy concerns surrounding facial recognition software and its applications, cities around the U.S. will face additional dilemmas as they attempt to address them.

As with any other technology, the false-negative and false-positive errors of facial recognition systems are real issues that need to be considered. A false negative is when the system fails to match a person's face to an image that is, in fact, contained in the database; the method erroneously returns zero results in response to a query. A false positive, on the other hand, is when the system does match a person's face to an image in the database, but the match is actually wrong: a police officer queries the system with a photo of a suspect, and the system mistakenly alerts the officer that the photo matches someone else.
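Both error types fall out of one design choice: the similarity threshold at which the system declares a match. A tiny sketch, with invented scores and an invented threshold, makes the two failure modes concrete.

```python
# A matcher returns a similarity score; the system declares a match when
# the score clears a threshold. Scores and threshold here are invented.
THRESHOLD = 0.80

def decide(similarity, threshold=THRESHOLD):
    """Return True when the system would declare a match."""
    return similarity >= threshold

# False negative: the probe IS in the database, but pose/lighting pushed
# the similarity below the threshold, so the query returns no result.
same_person_bad_lighting = 0.72
print(decide(same_person_bad_lighting))   # False -> a true match is missed

# False positive: the probe is NOT this person, but the two faces happen
# to score above the threshold, so an innocent person is flagged.
different_person_lookalike = 0.86
print(decide(different_person_lookalike)) # True -> wrongly flagged
```

Raising the threshold trades false positives for false negatives and vice versa; neither error can be driven to zero for free, which is why independent testing of these systems matters.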

The Facial Recognition Algorithm Determines Its Task

Facial recognition technologies are only as good, and as bad, as their algorithms. In other words, like any other technology, they do what they are given. For instance, inventors of facial-recognition technology are struggling to adapt to a world where people routinely cover their faces to avoid spreading disease, as we see today amid the coronavirus pandemic.

Facial recognition has grown more popular and accurate in recent years, as an artificial intelligence technique called deep learning made computers much better at interpreting images. But some experts say that current facial recognition algorithms are generally less reliable when a face is veiled, whether by an obstacle, a camera angle, or a mask, simply because less data is available for comparative analysis.

Facial Recognition Is All about Profiling

Scientifically, facial recognition is about comparison and matching, and matching requires considering common similarities; therefore, irrespective of intention, there will always be a point at which profiling must be considered. Nonetheless, it need not come at the expense of individual and civil liberty. That is why, under overwhelming criticism, some manufacturers, particularly those with fewer partnership missions with government agencies, have at least temporarily abandoned their facial technology projects. For instance, IBM recently quit the facial-recognition market over police racial-profiling concerns, calling on the U.S. Congress for a "national dialogue" about its use in law enforcement. Likewise, Microsoft's president, Brad Smith, told The Guardian that the company was willingly withholding its facial recognition technology from governments that would use it for mass surveillance.

Facial Recognition Algorithms Are Biased

The prevailing theory is that most facial recognition solutions are biased. It is hard to go wrong with that assumption, since the majority of these technologies are used in law enforcement and public settings yet are exempted from proper validation as well as disclosure.

Congressional Democrats are currently probing the FBI and other federal agencies to determine whether surveillance software has been deployed against Black Lives Matter demonstrators, while states including California and New York are weighing laws to ban police use of the technology. At the same time, major tech corporations are edging away from their artificial intelligence inventions. For instance, Amazon, after years of pressure from civil rights advocates, recently announced a one-year moratorium on police use of its controversial facial recognition product, Rekognition. IBM, once again, announced its intention to vacate facial-recognition research altogether, citing concerns about the human rights implications.

Face surveillance is by far one of the most exposed and dangerous technologies accessible to law enforcement, because, as it stands today, it is discriminatory in a variety of ways. First, the technology itself can be racially biased. Second, police in many U.S. jurisdictions use mugshot databases to classify people with face recognition algorithms. Using mugshot databases for face recognition, however, recycles the racial bias of the past, supercharging it with 21st-century surveillance technology.

Research has shown that algorithms can be racist.

A 2018 study by Buolamwini and Gebru showed that some facial analysis algorithms misclassified Black women nearly 35 percent of the time, while almost invariably getting it right for white men. A subsequent study by Buolamwini and Raji at the Massachusetts Institute of Technology confirmed that these problems persisted in Amazon's software.
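The scale of such a disparity is easy to make explicit with per-group error rates. The counts below are invented round numbers that merely echo the reported ~35% versus ~1% gap; they are not the study's actual data.

```python
# Hypothetical per-group test counts (invented, loosely echoing the
# magnitude of the disparity reported in the Buolamwini/Gebru study).
results = {
    # group: (misclassified, total probes)
    "black_women": (35, 100),
    "white_men":   (1, 100),
}

def error_rate(misclassified, total):
    """Fraction of probes the classifier got wrong for one group."""
    return misclassified / total

for group, (miss, total) in results.items():
    print(f"{group}: {error_rate(miss, total):.0%} error rate")

# The disparity ratio makes the bias explicit:
ratio = error_rate(*results["black_women"]) / error_rate(*results["white_men"])
print(f"error rate ratio: {ratio:.0f}x")
```

Aggregate accuracy would hide this entirely: a system that is 99% accurate overall can still fail one demographic a third of the time, which is why audits report accuracy broken down by group.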

Corporate Giants Are Reluctant to Be Transparent about Their Facial Recognition Algorithms

The recent rebuff in the U.K. against NEC (the provider of the facial recognition technology) could potentially prop up U.S. activists' movements and consequently spread globally. NEC, which has more than 1,000 contracts around the globe, is one of the principal targets, if not the principal one. NEC's response to the lawsuit against the South Wales police department has lacked detail; the company simply refused to say what data is used to train the algorithms that differentiate one face from another. Allegedly, a test of NEC's technology in 2018 had a 98% failure rate, and a 2019 audit found an 81% false-positive rate.

A 2019 report by researchers from the Human Rights, Big Data & Technology Project, based at the University of Essex Human Rights Centre, identified notable flaws in the way live facial recognition technology was deployed in London by the Metropolitan Police Service. It also found that Black and minority-ethnic people were being falsely profiled and taken in for questioning because the police had neglected to test how adequately their systems deal with non-white faces.

Amid the developing turmoil around facial recognition technologies, as pointed out earlier, some tech companies are opting out of the market, at least for the time being.

It is my personal impression that since various law enforcement procedures are profile-driven, agencies may explicitly demand technology enhanced with secondary input data on human traits. The bias and discriminatory behavior of facial recognition may well be the outcome of the demand for convenience and lower cost. For example, if a police department already performs screening through a manual profile of, say, Black people, that profile will potentially be carried into its operational and technical requirements. Law enforcement agencies have used profiling techniques for centuries; thus, it should not be surprising that they request the same pattern of practice from their facial recognition tools.

Once we put together the pieces of the business relationship between NEC and the South Wales Police Department, it becomes much clearer why the company is reluctant to disclose its hidden algorithm. NEC today has more than 1,000 public biometrics deployments across the world, including operations in 20 U.S. states, and the company presumably has many non-disclosure clauses within its agreements. Meanwhile, corporations such as IBM simply walk away from their facial recognition projects for the sake of avoiding future questions.

Facial Recognition Is Racist and Will Covertly Target What Its Technical Requirements Dictate

Despite the overwhelming negative publicity, facial recognition technology is a valuable asset to any industry. However, just like any other tool, it can be misused and strategically pivoted by its architects to fulfill different tasks at any time. And when they do so, they will predictably do whatever is in their power to disguise it from the public. With this in mind, it is also convenient to profile people based on given traits such as color, sex, race, or deformity. Although that may be attractive to law enforcement, it comes at the expense of people who carry the trait but have done nothing wrong, making them targets of humiliation and surveillance. It is simply not fair!

Facial recognition is an instrument that follows the mathematically driven commands of its engineers, written to the requirements of the law enforcement agency. Therefore, when an algorithm is found to be racially biased, one must question every player in its chain of development, from business requirements to validation and use.

"Facial recognition is as racist as its developers and users"; thus, it is unethical and prejudiced, and it must also be illegal.

Translated from: https://medium.com/illumination/the-modern-monarchy-of-facial-recognition-607aa8657d63
