A New Tool Jams Facial Recognition Technology With Digital Doppelgängers

There are many reasons why the movement to ban the police from using facial recognition technology is growing. This summer, reporters at the New York Times and Detroit Free Press revealed that Detroit police officers used faulty facial recognition to misidentify and wrongfully arrest two Black men, one for supposedly stealing watches and the other for allegedly grabbing someone else’s mobile phone. Recent reporting at Gothamist revealed that the New York Police Department deployed facial recognition technology to investigate “a prominent Black Lives Matter Activist.”

Technology companies have been harshly criticized for providing law enforcement with facial recognition technology. While IBM got out of the business and Microsoft and Amazon emphasize that they’re not currently providing facial recognition technology to police, companies like Clearview AI, Ayonix, Cognitec, and iOmniscient are continuing to work with law enforcement. Not every technology company, however, marches to the same drum. There are startups geared toward limiting the dangers posed by facial recognition technology.

Berlin-based startup brighter AI recently launched a public interest campaign to help solve the problem of authorities using facial recognition technology to identify protesters. The campaign website, Protect Photo, provides a free privacy engineering service that quickly removes “facial fingerprints” from user-uploaded images.

Deploying proprietary “deep natural anonymization” software, it scans the original photos, pinpoints a large number of facial features and the mathematical relations between them (how far apart the nose and mouth are, for example), infers demographic information (including age, ethnicity, and gender), and combines this data to create new images that look strikingly similar to the originals but contain an essential difference: the new photos have new synthetic faces. CEO Marian Gläser claims these are facial recognition-proof: automated systems can’t identify the digital doppelgängers.

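Brighter AI’s pipeline is proprietary, so the details aren’t public, but the article’s description suggests roughly the flow sketched below. Every function in this Python skeleton is a hypothetical placeholder I’ve named for illustration; it is not brighter AI’s actual code or API.

```python
# Hypothetical skeleton of the anonymization flow described above.
# All names are illustrative placeholders; the real implementation
# is proprietary and almost certainly differs.
from dataclasses import dataclass, field

@dataclass
class FaceAttributes:
    landmarks: dict = field(default_factory=dict)     # e.g. {"nose": (x, y), "mouth": (x, y)}
    relations: dict = field(default_factory=dict)     # e.g. {"nose_to_mouth_px": 42.0}
    demographics: dict = field(default_factory=dict)  # e.g. {"age": "30-40", "gender": "...", "ethnicity": "..."}

def detect_landmarks(image):
    """Stub: locate facial keypoints in the original photo."""
    return {}

def measure_relations(landmarks):
    """Stub: compute distances and ratios between keypoints."""
    return {}

def infer_demographics(image, landmarks):
    """Stub: estimate age, ethnicity, and gender from the face."""
    return {}

def synthesize_face(image, attrs):
    """Stub: render a new synthetic face that keeps the expression, eye
    positions, and demographic attributes but no longer matches the
    original person's facial template."""
    return image

def anonymize(image):
    landmarks = detect_landmarks(image)
    attrs = FaceAttributes(
        landmarks=landmarks,
        relations=measure_relations(landmarks),
        demographics=infer_demographics(image, landmarks),
    )
    # The output should look strikingly similar to the input, minus the
    # identifying facial fingerprint.
    return synthesize_face(image, attrs)
```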

Before and after using brighter AI’s Protect Photo tool, which creates a synthetic face to guard against facial recognition. The company’s technology only works on group photos, which this image is cropped from. (OneZero editor in chief Damon Beres pictured here, with permission.) Credit: brighter AI

Protecting faces with digital doppelgängers

I can’t emphasize strongly enough that privacy engineering can only provide limited relief from facial recognition technology — particularly not when it involves a manual upload process from an end user. The widespread dangers posed by this technology fundamentally require legal remedy. Facial recognition technology should be banned.

But since the ban movement is taking time to grow, privacy-enhancing technologies can play an important role just like they do in other contexts. Nobody expects surveillance capitalism to be destroyed by using DuckDuckGo as a search engine, Signal as a messaging service, or ProtonMail for email. Still, these aren’t pointless services.

Since I wasn’t given special access to brighter AI’s software, I couldn’t ask a technical expert to directly assess whether the altered facial template data will consistently jam facial recognition systems, whether the synthetic faces could somehow be reverse-engineered by a committed adversary, whether the demographic inferences are accurate across large samples (or whether there’s a risk of changing the meaning of altered photos, for example, by unintentionally whitewashing them), or whether the photos uploaded to Protect Photo are, as promised, readily deleted. To justifiably trust the software, this is important information to know, and how diverse the software team is has relevance, too. Members of communities who experience discrimination can have a heightened sensitivity to problems their communities face, including prevalent misunderstandings about their individual and group identities. But Gläser says the software is being externally audited, and if his confidence is justified, protesters and journalists might want to consider the benefits of using it. While face-swapped pictures can still contain personal identifiers, it should take the police more time and effort to match people by tattoos or signature clothing and accessories than by using automated facial recognition systems. Because the path of least resistance is often the one that gets taken, imposing transaction costs can deter unfair police scrutiny.

In order for the Protect Photo campaign to make as big an impact as this type of intervention can create, two things need to happen. For starters, a well-designed and easy-to-use mobile app version needs to be developed and released. Perhaps most importantly, people will need to see a compelling reason for choosing the service. They already have easy options for obscuring faces in photos, such as blurring them or replacing them with boxes or emoji.

Gläser sees the secret sauce in the emotional quality of brighter AI’s face-swapped photos: the determination, grit, and even anger in the protesters’ faces can remain intact through the synthetic creations. Whether others will agree and find the emotional continuity sufficiently compelling remains to be seen. The big institutional question is whether journalists who feel torn between the obligation to not manipulate photos and the responsibility to “minimize harm” and “give special consideration to vulnerable subjects” will see this type of synthetic face as close enough to an unaltered one to justify updating the norms of protest coverage. Even if they do, they might worry about the slippery slope potential of sliding from face-swapping for protective purposes to sanctioning alternative facts and propaganda. Perhaps using captions or watermarks could mitigate anxiety.

When I experimented with the Protect Photo software I was surprised to learn that the system rejected portraits of individuals. The only photos I could get it to alter contained multiple protesters. I was surprised by this since the goal of the Protect Photo campaign is to safeguard probe photos against facial recognition technology. I asked Gläser why this was happening and learned it’s not a glitch. Instead, he explained, it’s an ethical limitation baked into the code: an attempt to prevent bad actors from using the software to create socially detrimental deepfakes. That’s an interesting restriction, one that highlights a key difference from Fawkes, an image cloaking service recently covered by Kashmir Hill at the New York Times. Fawkes allegedly jams facial recognition systems by making “tiny, pixel-level changes” to pictures of faces so that they corrupt the training models that power facial recognition systems. Since Fawkes is based on a Trojan Horse-style data pollution model, I couldn’t get it to detect faces in photos containing multiple protesters.

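To make the contrast concrete, here is a minimal sketch of the general idea behind “tiny, pixel-level changes”: perturb each pixel within a small budget so the photo still looks unchanged to people. Fawkes itself optimizes the perturbation against a facial recognition feature extractor; the random noise below is only a stand-in for that optimization step, not the actual Fawkes algorithm.

```python
# Minimal illustration of a bounded, pixel-level perturbation.
# This is NOT the Fawkes algorithm; Fawkes computes a targeted
# perturbation in a face-recognition model's feature space.
import numpy as np

def cloak(image: np.ndarray, epsilon: float = 8.0, seed: int = 0) -> np.ndarray:
    """Perturb a uint8 HxWx3 image by at most `epsilon` intensity
    levels per pixel, keeping values in the valid 0-255 range."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    cloaked = image.astype(np.float64) + noise
    return np.clip(cloaked, 0, 255).astype(np.uint8)

# Example with a synthetic test image; pixel values stay within 0-255.
if __name__ == "__main__":
    test = np.zeros((64, 64, 3), dtype=np.uint8)
    print(cloak(test).min(), cloak(test).max())
```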

The business background

Brighter AI was ready to spring into action because it had already developed the deep natural anonymization software for its business transactions. Working with European clients in the self-driving car industry (like Volvo and Valeo) and the public transportation industry (like Deutsche Bahn), brighter AI has been carving out a niche as a data-minimization service that can enhance compliance with the General Data Protection Regulation (GDPR). When clients use brighter AI’s software to swap faces found on the photos and videos they’ve taken for research and product development purposes, Gläser says they process less personal information than they would if they analyzed unaltered facial images. That’s less personal information they need consent for. It’s less personal information that can be abused during a data breach. And it’s less personal information that someone might want to delete under the right to be forgotten.

Since the synthetic faces contain demographic data and are programmed to, as best as the software allows, maintain the original eye positioning and facial expressions, Gläser contends the information is far more useful as training data for neural networks than are blurred, blacked out, or pixelated faces. Specifically, brighter AI’s clients can use the demographic information to balance their datasets and combat the biases that arise when inputs are too homogeneous. The practical payoffs of doing so range from increasing the chances of clients developing user-detecting software that recognizes diverse types of people to creating analytics that help clients to better understand how different types of people are inclined to respond to a situation. For example, self-driving car companies need to ensure their vehicles can recognize all kinds of pedestrians who are crossing in front of them (not just white males) and that their driver monitoring systems can detect whether all kinds of drivers (not just white males) are paying attention to the road. And public transportation companies want to ensure that people of all ages can successfully navigate their environments — that older people, for instance, won’t be confused when looking for their trains.

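The article doesn’t say how brighter AI’s clients actually balance their datasets, but one common approach consistent with that description is to weight training examples inversely to the frequency of their demographic group, so underrepresented groups are sampled more often. The group labels and counts below are made up for illustration.

```python
# Sketch of frequency-based balancing with hypothetical demographic labels.
from collections import Counter

def balancing_weights(group_labels):
    """Return one sampling weight per example, inversely proportional
    to how common that example's demographic group is in the dataset."""
    counts = Counter(group_labels)
    return [1.0 / counts[label] for label in group_labels]

# Made-up example: a pedestrian dataset dominated by one group.
labels = ["group_a"] * 900 + ["group_b"] * 80 + ["group_c"] * 20
weights = balancing_weights(labels)
# Rare-group examples receive proportionally larger weights; a weighted
# sampler (e.g. PyTorch's WeightedRandomSampler) can then draw each group
# at roughly equal rates during training.
```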

Why protecting personal obscurity is crucial but not enough

For many years, Woodrow Hartzog and I have been championing conceptualizing a range of so-called privacy issues as obscurity dilemmas and defending obscurity as essential to personal development, intimacy, and political participation. From our perspective, brighter AI’s face-switching service can be classified as an obscurity filter.

“Obscurity” is the degree to which unwanted parties like government, corporate, or social snoops are prevented from doing bad things with our personal information because they find it hard to get or hard to properly interpret. Before facial recognition technology became ubiquitous, a default layer of obscurity protected our faces when we went out in public. That’s because only a limited number of people could recognize our faces and knew our names.

Given the threats a host of emerging surveillance technologies pose, it’s worth considering developing and using obscurity-enhancing tools while robust legal solutions are advocated for and legislated. We’re going to need protection from far more than facial recognition technology. Among other obscurity-eviscerating dangers, there’s gait and heartbeat recognition, as well as automated lip-reading.

The thing is, while obscurity filters can be useful tools for fighting against all kinds of robot surveillance, they have limitations just like all privacy engineering products and services do. Deploying technological solutions to protect people from surveillance threats sets off a cat-and-mouse arms race. And depending on how obscurity filters are configured and the protections existing laws offer, the technological safeguards can exacerbate some problems while minimizing others. For example, when protecting obscurity also facilitates the automated collection of demographic data about race, gender, and ethnicity, there’s a risk of intensifying social problems even if that information isn’t linked to any personal identifiers.

The risk exists because even though there are responsible ways to collect and analyze demographic information, automating information about race incentivizes automated racism. Automating information about gender incentivizes threats to queer people as well as sexism. And automating information about age incentivizes ageism. These incentives exist because the various forms of essentialism are linked to unfounded prejudices and unfair displays of power that automation, even when well-intentioned, risks perpetuating and amplifying. Even trying to automate the interpretation of facially expressed emotions inherently poses discrimination risks. No matter how diligent companies like brighter AI are or how selective they are when choosing clients, these incentives will remain so long as biases remain socially embedded.

Translated from: https://onezero.medium.com/a-new-tool-jams-facial-recognition-technology-with-digital-doppelg%C3%A4ngers-47fca4154372
