These Hoodies Make You "Invisible" to Some Surveillance Algorithms

The spread of artificial intelligence into surveillance technology has given every CCTV camera the potential to turn into a spy for the state. And on the internet, images scraped from social media sites or videos can be used to build massive surveillance databases like Clearview AI.

A hoodie might change that.

Researchers from Facebook and the University of Maryland have made a series of sweatshirts and T-shirts that trick surveillance algorithms into not detecting the wearer. They’ve dubbed them “invisibility cloaks” for A.I.

The shirts exploit a quirk that was found in computer vision algorithms nearly five years ago. These algorithms use a simple, even naive approach to identifying objects: They look for patterns of pixels in a new image that resemble patterns they’ve seen before. Humans can use complex clues or real-world knowledge when they’re looking at something new, but algorithms just use pattern matching.

That means if you know the pattern the algorithm is looking for, you can hide it. In order to create the algorithm-fooling shirts, the Facebook and Maryland team ran 10,000 images of people through a detection algorithm. When a person was detected, they were replaced with randomized changes of perspective, brightness, and contrast. Another algorithm was then used to find which of the randomized changes was most effective at tricking the algorithm.
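
The search procedure described above can be sketched as follows. This is a toy illustration only: the template-matching "detector," the image size, and all parameters are my own assumptions standing in for the real detection model, not the researchers' actual pipeline. It shows the core idea of scoring many randomized perturbations and keeping the one that suppresses detection most.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "detector": scores how strongly an image region matches a fixed
# pixel template -- a stand-in for the naive pattern matching the
# article describes. (Assumption: real detectors are far more complex.)
TEMPLATE = rng.normal(size=(8, 8))

def detector_score(img):
    # Higher score = more confident "person detected".
    return float(np.sum(img * TEMPLATE))

# Start from an image region the detector fires on strongly.
person = TEMPLATE.copy()
baseline = detector_score(person)

# Random search: try many random perturbations ("patches") and keep the
# one that lowers the detection score the most, mirroring the idea of
# scoring randomized variations and selecting the most effective one.
best_patch, best_score = None, baseline
for _ in range(2000):
    patch = rng.normal(scale=0.5, size=(8, 8))
    score = detector_score(np.clip(person + patch, -3, 3))
    if score < best_score:
        best_patch, best_score = patch, score

print(baseline, best_score)  # the selected patch drives the score down
```

In the real work, the equivalent of `detector_score` is a trained person detector, and the surviving perturbation is what gets printed on the clothing.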

When those randomized patterns were printed on physical objects, like posters, paper dolls, and finally clothing, the detection algorithms were still tricked. The researchers noted, however, that accuracy in these real-world tests was lower than in the purely digital tests. When a person wears the sweatshirt, the detector's ability to recognize them drops from nearly 100% to about 50%, the odds of a coin toss.

This research continues a line of work being done by the University of Maryland computer science department, some of whom joined Facebook in 2018 and 2019. Previously, the lab researched how these same principles of tricking A.I. could be used to fool copyright detection algorithms, like the ones used by YouTube to prevent unauthorized use of copyrighted music, in order to call attention to how easy they were to evade.

The work could benefit Facebook, too. The attack fundamentally works because image recognition algorithms lack any context or understanding of the images they analyze. Figuring out how they fail is a first step toward making algorithms that don't fall for these kinds of tricks. It's the beginning of a research process that could not only make algorithms more resistant to attack but, in theory, greatly boost their accuracy and flexibility, since their view of images would be less simplistic. In other words, the research could be a way of bolstering image-detection algorithms rather than breaking them.

You can actually buy a T-shirt or sweatshirt printed with the algorithm-fooling design, but right now, it wouldn’t likely protect your identity from surveillance technology. The researchers tested the designs on popular open-source algorithms and not the proprietary algorithms built by surveillance firms like NEC.

The design is also meant to evade person detection, not facial recognition, which specifically targets features of a person's face rather than their entire body. Person detection is used in public spaces for tasks like counting crowds or seeing whether someone is approaching a smart doorbell and, in some cases, to augment facial recognition. But the research, and its translation into reproducible fashion, represents a shifting landscape in surveillance technology, in which people can subvert a state-of-the-art algorithm with a simple piece of clothing and then manufacture the design for anyone who wants it.

And even if it doesn’t work, a sweatshirt plastered with an A.I.-generated surveillance spoofer is a great conversation piece.

Translated from: https://onezero.medium.com/these-hoodies-make-you-invisible-to-some-surveillance-algorithms-8d791339fb87
