The Rise of Deepfakes

Deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate and generate visual and audio content with a high potential to deceive. The purpose of this article is to support and promote research and development efforts, not to promote or aid in the creation of nefarious content.


Introduction

Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. In recent months, videos of influential celebrities and politicians have surfaced displaying a false and augmented version of their beliefs or gestures.


Whilst deep learning has been successfully applied to solve various complex problems, ranging from big data analytics to computer vision, the need to control the content generated is crucial, alongside that of its availability to the public.


Within recent months, a number of mitigation mechanisms have been proposed and cited, with Neural Networks and Artificial Intelligence at the heart of them. From this, we can conclude that technologies that can automatically detect and assess the integrity of visual media are indispensable, and in great need, if we wish to fight back against adversarial attacks. (Nguyen, 2019)


Early 2017

Deepfakes as we know them first started to gain attention in December 2017, after Vice’s Samantha Cole published an article on Motherboard.


The article talks about the manipulation of celebrity faces to recreate famous scenes and how this technology can be misused for blackmail and illicit purposes.


The videos were significant because they marked the first notable instance of a single person who was able to easily and quickly create high-quality and convincing deepfakes.


Cole goes on to highlight a juxtaposition in society: these tools are made freely available by corporations so that students can gain sufficient knowledge and key skills to enhance their general studies at university and school.


Open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning. — Samantha Cole


Deepfakes can differ markedly in general quality from previous efforts at superimposing faces onto other bodies. A good deepfake, created by artificial intelligence trained on hours of high-quality footage, produces content so convincing that humans struggle to tell whether or not it is real. In turn, researchers have shown interest in developing neural networks that help assess the accuracy of such videos. From this, they are able to distinguish them as fake.


In general, a good deepfake is one where the insertions around the mouth are seamless, the head movements are smooth, and the coloration matches the surroundings. Gone are the days of simply superimposing a head onto a body and animating it by hand, as the errors are still noticeable, leading to dead context and mismatches.


Early 2018

In January 2018, a proprietary desktop application called FakeApp was launched. This app allows users to easily create and share videos with their faces swapped with each other. As of 2019, FakeApp has been superseded by open-source alternatives such as Faceswap and the command line-based DeepFaceLab. (Nguyen, 2019)


With the availability of this technology being so high, websites such as GitHub have sprung to life offering new methodologies for combatting such attacks. Within the paper ‘Using Capsule Networks To Detect Forged Images and Videos’, Huy goes on to talk about the ability to use forged images and videos to bypass facial authentication systems in secure environments.


The quality of manipulated images and videos has seen significant improvement with the development of advanced network architectures and the use of large amounts of training data that previously wasn't available.


Late 2018

Platforms such as Reddit began to ban deepfakes after fake news and videos started circulating from specific communities on the site. Reddit took it upon itself to delete these communities in a stride to protect its users.


A few days later, BuzzFeed published a frighteningly realistic video that went viral. The video showed Barack Obama in a deepfake. Unlike the University of Washington video, Obama was made to say words that weren’t his own, in turn helping to draw attention to this technology.


Below is a video BuzzFeed created with Jordan Peele as part of a campaign to raise awareness of this software.


Early 2019

In the last year, several manipulated videos of politicians and other high-profile individuals have gone viral, highlighting the continued dangers of deepfakes, and forcing large platforms to take a position.


Following BuzzFeed’s disturbingly realistic Obama deepfake, instances of manipulated videos of other high-profile subjects began to go viral, and seemingly fool millions of people online.


Despite most of the videos being even more crude than deepfakes — using rudimentary film editing rather than AI — the videos sparked sustained concern about the power of deepfakes and other forms of video manipulation while forcing technology companies to take a stance on what to do with such content. (Business Insider, 2019)


Photo by ChutterSnap on Unsplash

Mitigation and Countermeasures

There are several state-of-the-art methods for detecting images or videos generated by a computer using, for example, a deepfake technique for face swapping.


Fridrich and Kodovsky proposed a hand-crafted-feature noise-based approach for steganalysis that can also be used for forgery detection. This could, in turn, be tuned and applied to serve as a framework for detecting forged videos that pose as deepfakes on the web. (Fridrich, 2012)

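Noise-based forensics of this kind works by suppressing the image content and modeling the statistics of the residual that remains, since cameras leave characteristic sensor noise that generation and splicing pipelines tend to disturb. Below is a minimal, hypothetical sketch of the idea only; it is not Fridrich and Kodovsky's actual rich-model feature set, which uses a large bank of filters. Here a first-order pixel residual is quantized, truncated, and summarized as a normalized co-occurrence histogram that could feed a downstream classifier:

```python
import numpy as np

def noise_residual_features(img, q=2, T=2):
    """Toy noise-residual feature vector for a grayscale image.

    Computes a first-order horizontal residual, quantizes and truncates
    it to [-T, T], then returns the normalized co-occurrence histogram
    of adjacent residual pairs: a (2T+1)^2-dimensional feature vector.
    """
    img = np.asarray(img, dtype=np.int64)
    # High-pass residual: difference of horizontally adjacent pixels.
    resid = img[:, 1:] - img[:, :-1]
    # Quantize and truncate, as steganalysis residual models typically do.
    resid = np.clip(np.round(resid / q), -T, T).astype(np.int64)
    # Encode each horizontal pair of residual values as a single bin index.
    bins = 2 * T + 1
    pairs = (resid[:, :-1] + T) * bins + (resid[:, 1:] + T)
    hist = np.bincount(pairs.ravel(), minlength=bins * bins)
    return hist / hist.sum()  # normalized histogram feature vector
```

With T=2 this yields a 25-dimensional descriptor; the full rich models concatenate many such histograms from different residual filters before training an ensemble classifier.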

Others address this issue differently, saying that we need to build systems that can distinguish a deepfake from a genuine video. This can be done by using algorithms similar to those developed to create deepfakes in the first place, since that data can be used to train the detectors.

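As a sketch of that detector-training idea, suppose we already have numeric feature vectors for a set of genuine and generated clips, produced by whatever forensic pipeline is in use. The toy example below (all names and data are hypothetical, with synthetic features standing in for real ones) fits a simple logistic-regression detector; production systems would instead use deep networks trained on large labeled corpora of real and generated footage:

```python
import numpy as np

def train_detector(X, y, lr=0.1, epochs=500):
    """Fit a logistic-regression detector by gradient descent.

    X: (n, d) array of per-clip feature vectors.
    y: length-n labels, 1 = fake, 0 = real.
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(fake)
        w -= lr * (X.T @ (p - y)) / len(y)      # log-loss gradient step
        b -= lr * float(np.mean(p - y))
    return w, b

def predict_fake(w, b, X):
    """Return 1 where the model scores a clip's features as fake."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)

# Synthetic stand-in data: "real" features cluster near 0, "fake" near 2.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),
               rng.normal(2.0, 1.0, (100, 5))])
y = np.r_[np.zeros(100), np.ones(100)]

w, b = train_detector(X, y)
accuracy = float((predict_fake(w, b, X) == y).mean())
```

The weakness Gourley points to below applies directly: a detector like this only learns the artifacts present in its training data, so a generator it has never seen can evade it.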

But others disagree, arguing that this method may not be very helpful for detecting more sophisticated deepfakes as they continue to evolve, says Sean Gourley, founder and CEO of Primer AI, a machine intelligence firm that builds products for analyzing large data sets.


“You can kind of think of this like zero-day attacks in the cybersecurity space,” says Gourley. “The zero-day attack is one that no one’s seen before, and thus has no defenses against.” — Sean Gourley


As is often the case with cybersecurity, it can be difficult for those trying to solve issues and patch bugs to remain one step ahead of malicious actors. The same goes for deepfakes, says Villasenor.


It’s sort of an arms race, you’re always going to be a few steps behind on the detection. — Sean Gourley


White Papers

Translated from: https://medium.com/swlh/the-rise-of-deepfakes-19972498487a
