Structured Data and Neural Networks: Adversarial Regularization with Neural Structured Learning



Introduction

As many of us are no doubt aware, the steady progress made in the field of Computer Vision has led to some incredible achievements and broad deployment in fields from healthcare and self-driving cars to climate study, to name but a few. From state-of-the-art liquid-cooled hardware in the form of Tensor Processing Units (TPUs) to increasingly sophisticated, multi-million-parameter deep convolutional networks such as GoogLeNet and AlexNet, the capability of such technology continues to break previously unassailable barriers.


Adversarial Vulnerability

Despite these incredible achievements, however, it has been proven that even the most skilful models are not infallible. Multiple research efforts have demonstrated how sensitive these models are to even imperceptibly small changes in the input data. First highlighted in the joint Google and New York University research paper ‘Intriguing Properties of Neural Networks’ (2014), model vulnerability to adversarial examples is now recognised as a subject of such importance that competitions exist to tackle it.


The existence of these errors raises a variety of questions about out-of-sample generalisation, and about how such examples might be used to abuse deployed systems.

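A minimal sketch of how such an adversarial perturbation can be crafted, using the fast gradient sign method (FGSM, one standard technique from the adversarial-examples literature) against a toy logistic-regression model. The weights, input, and epsilon below are illustrative, not taken from any real system.

```python
import numpy as np

# Toy logistic-regression "model": illustrative weights, not a trained network.
rng = np.random.default_rng(0)
w = rng.normal(size=8)          # model weights
b = 0.0
x = rng.normal(size=8)          # a clean input example
y = 1.0                         # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# FGSM: nudge the input in the direction of the sign of the loss gradient.
# For logistic regression, d(loss)/dx = (p - y) * w, so no autodiff is needed.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w
epsilon = 0.25                  # perturbation budget (an illustrative value)
x_adv = x + epsilon * np.sign(grad_x)

# The perturbed input yields a strictly higher loss than the clean one.
print(loss(x, y), loss(x_adv, y))
```

The same one-step recipe scales to deep networks, where the input gradient is obtained by backpropagation instead of the closed form used here.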

Neural Structured Learning

In some applications, these errors might not arise intentionally; rather, they can arise as a result of human error or simply of input instability. In the mining industry, computer vision has innumerable, highly useful applications: streaming processing-plant conveyor-belt imagery in order to predict ore purity, for example, or detecting commodity stockpile levels and illegal shipping/mining using satellite imagery.


Quite often we find that such image data is corrupted during collection as a result of camera misalignment, vibration, or simply very unusual out-of-sample examples, any of which can lead to misclassification.


In order to overcome examples such as these, and generally to harden our models against corrupt or perturbed data, we can employ a form of Neural Structured Learning called Adversarial Regularization.

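A framework-free sketch of the idea behind adversarial regularization: at each training step, alongside the standard loss, we also penalise the loss on adversarially perturbed (FGSM) copies of the inputs, so the model learns to be stable inside a small neighbourhood of each example. The toy data, model, and hyper-parameters (`epsilon`, `multiplier`) are illustrative assumptions, not values from the article.

```python
import numpy as np

# Toy task: linearly separable 2-D data, logistic-regression model.
rng = np.random.default_rng(0)
n, d = 200, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(d)
b = 0.0
lr, epsilon, multiplier = 0.1, 0.1, 0.2     # multiplier weights the adversarial term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    p = sigmoid(X @ w + b)
    # Per-example input gradient of the cross-entropy loss: (p - y) * w.
    grad_x = (p - y)[:, None] * w
    # FGSM neighbours: worst-case inputs inside an epsilon-ball around each example.
    X_adv = X + epsilon * np.sign(grad_x)
    p_adv = sigmoid(X_adv @ w + b)
    # Combined update: gradient of (clean loss + multiplier * adversarial loss).
    grad_w = X.T @ (p - y) / n + multiplier * X_adv.T @ (p_adv - y) / n
    grad_b = np.mean(p - y) + multiplier * np.mean(p_adv - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

In the NSL library itself this pattern is wrapped for you (a Keras model can be wrapped with an adversarial-regularization layer and trained as usual); the sketch above only shows the underlying objective.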

Neural Structured Learning (NSL) is a relatively new, open-source framework developed by the good folks at TensorFlow for training deep neural networks with structured signals (as opposed to conventional single samples). NSL implements Neural Graph Learning, in which a neural network is trained using graphs (see image below) that carry information about both a target node and its neighbouring nodes, connected via edges.


This allows the trained model to simultaneously exploit both labelled and unlabelled data through:


  1. Training the model on labelled data (the standard procedure in any supervised learning problem);
  2. Biasing the network to learn similar hidden representations for neighbouring nodes on the graph (with respect to the input data labels).
[Image from TensorFlow Blog: Neural Structured Learning, Adversarial Examples, 2019.]
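The two training signals above can be sketched as a single objective: a supervised loss over the labelled nodes, plus a weighted penalty on the distance between the hidden representations of nodes joined by an edge, which is how the unlabelled nodes influence training. The graph, features, network size, and `alpha` below are illustrative assumptions.

```python
import numpy as np

# Toy graph: 6 nodes, only 2 of them labelled, joined by undirected edges.
rng = np.random.default_rng(0)
n, d, h = 6, 4, 3
X = rng.normal(size=(n, d))                  # one feature vector per node
labels = {0: 1.0, 1: 0.0}                    # labelled nodes only
edges = [(0, 2), (1, 3), (2, 4), (3, 5)]     # graph structure (node pairs)
alpha = 0.5                                  # weight of the neighbour term

W = rng.normal(size=(d, h)) * 0.1            # a single hidden layer
v = rng.normal(size=h) * 0.1                 # output weights

def hidden(x):
    return np.tanh(x @ W)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def total_loss():
    # Supervised term: cross-entropy on the labelled nodes only.
    sup = 0.0
    for i, y in labels.items():
        p = sigmoid(hidden(X[i]) @ v)
        sup += -(y * np.log(p) + (1 - y) * np.log(1 - p))
    # Graph term: squared distance between neighbours' hidden representations.
    # Minimising it pulls connected nodes (labelled or not) together.
    graph = sum(np.sum((hidden(X[i]) - hidden(X[j])) ** 2) for i, j in edges)
    return sup + alpha * graph

print(total_loss())
```

Adversarial regularization fits the same template: the "neighbours" are not given by an explicit graph but generated on the fly as adversarial perturbations of each input.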