My Week in AI: Part 5


Welcome to My Week in AI! Each week this blog will have the following parts:


  • What I have done this week in AI
  • An overview of an exciting and emerging piece of AI research

Progress Update

Visualizing Data Interactively

I’ve spent this week working on a dashboarding project using Plotly and Dash. Dashboarding is new for me, and I have been excited to learn Dash whilst building something with real data. I have found Dash intuitive and very easy to use, largely because it is a declarative, reactive library. Like Altair, which I discussed in Part 3, Dash is an all-Python library, making it easy for data scientists like myself to develop interactive and aesthetically pleasing dashboards.


I’ve also spent some time at the Spark + AI Summit by Databricks, which started on Monday and is a conference that I have been greatly looking forward to. I am especially interested in the talks on productionizing machine learning models and on using MLflow, and I’ll be sharing my thoughts and reactions in next week’s blog.



Emerging Research

Model Inversion Attacks with Generative Models

The major AI research event of the last week was CVPR 2020, the pre-eminent computer vision conference. As always, many fascinating papers were submitted, so I decided that the research I present this week should come from that conference. One paper that caught my eye was ‘The Secret Revealer: Generative Model-Inversion Attacks against Deep Neural Networks’ by Zhang et al., a research team from China and the US¹. In this research, they demonstrated a novel model-inversion attack method that was able to extract information about training data from a deep neural network.


Model-inversion attacks are especially dangerous for models trained on sensitive data, such as healthcare records or facial image datasets. The running example in this research was a white-box attack on a facial recognition classifier, aiming to expose the training data and recover the face images. The researchers’ method involved training GANs on public ‘auxiliary data,’ defined in their example as images in which the faces were blurred; this encourages the generator to produce realistic images. The next step was to use the trained generator to recover the missing sensitive regions of the image, a step framed as an optimization problem.


[Figure: Proposed attack method¹]

The researchers found that their method performed favorably on this task compared with previous state-of-the-art methods. They also made two further empirical observations that I would like to reiterate. First, models with high predictive power can be attacked with higher accuracy, because such models build a strong correlation between features and labels, and that correlation is exactly what model-inversion attacks exploit. Second, differential privacy could not protect against this method of attack, because it does not aim to conceal the private attributes of the training data; it only obscures individual records with statistical noise. This raises questions about models that rely on differential privacy for information security.



Unsupervised Learning of 3D Objects from 2D Images

I also wanted to mention the Best Paper Award winner from CVPR 2020, ‘Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild’ by Wu et al.² They proposed a method for learning 3D objects from single-view images without any external supervision. The researchers centered their method on an autoencoder that factors the input image into depth, albedo, viewpoint and illumination components. Many of the results presented in the paper are noteworthy, and I highly recommend reading it and trying the demo available on their GitHub page.

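The factored-autoencoder idea can be sketched very roughly as four networks predicting the four components, followed by a step that recombines them into a reconstruction. This is a heavily simplified, hypothetical sketch: the real model uses convolutional encoders, a differentiable renderer, and the paper's symmetry assumption, all of which are replaced here by placeholders of my own (the image size, component shapes, and toy "shading" step are illustrative only).

```python
# Toy sketch of an autoencoder that factors an image into depth, albedo,
# viewpoint and illumination. The "renderer" below is a placeholder; the
# real model re-renders the object with a differentiable renderer.
import torch
import torch.nn as nn

IMG = 64  # illustrative image size

def head(out_dim):
    # One tiny fully-connected "network" per factor (stand-in for a CNN).
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * IMG * IMG, out_dim))

class FactoredAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.depth = head(IMG * IMG)       # per-pixel depth map
        self.albedo = head(3 * IMG * IMG)  # per-pixel reflectance
        self.view = head(6)                # camera pose (rotation + translation)
        self.light = head(4)               # lighting direction and intensity

    def forward(self, img):
        d = self.depth(img).view(-1, 1, IMG, IMG)
        a = self.albedo(img).view(-1, 3, IMG, IMG)
        v, l = self.view(img), self.light(img)
        # Placeholder "renderer": scale the albedo by a lighting intensity.
        recon = a * torch.sigmoid(l[:, :1]).view(-1, 1, 1, 1)
        return recon, (d, a, v, l)
```

Training then only needs a reconstruction loss against the input image itself, which is why no external supervision is required.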

Join me next week for my thoughts on the Spark + AI summit, and an overview of a piece of exciting and emerging research. Thanks for reading and I appreciate any comments/feedback/questions.


Translated from: https://becominghuman.ai/my-week-in-ai-part-5-9543453bfd90
