[Paper Reading] Twin Neural Network Regression

Paper download
GitHub
bib:

@ARTICLE{SebastianKevin2022Twin,
	title 		= {Twin neural network regression},
	author 		= {Sebastian Johann Wetzel and Kevin Ryczko and Roger Gordon Melko and Isaac Tamblyn},
	journal 	= {Applied AI Letters},
	year 		= {2022},
	volume 		= {3},
	number 		= {4},
	pages 	    = {e78},
	doi         = {10.1002/ail2.78}
}


1. Abstract

We introduce twin neural network (TNN) regression.

This method predicts differences between the target values of two different data points rather than the targets themselves.

The solution of a traditional regression problem is then obtained by averaging over an ensemble of all predicted differences between the targets of an unseen data point and all training data points.

Whereas ensembles are normally costly to produce, TNN regression intrinsically creates an ensemble of predictions of twice the size of the training set while only training a single neural network.

Why the ensemble ends up being twice the size of the training set while only a single network is trained is worth keeping in mind while reading on.
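To make the "twice the size of the training set" point concrete: for an unseen point $x$ and $m$ training points $(x_i, y_i)$, each training point yields two difference-based estimates of $y(x)$, namely $F(x, x_i) + y_i$ and $y_i - F(x_i, x)$. Averaging them (roughly the form used in the paper) gives

$$ y^{\text{pred}}(x) = \frac{1}{2m}\sum_{i=1}^{m}\Big[\big(F(x, x_i) + y_i\big) + \big(y_i - F(x_i, x)\big)\Big], $$

an ensemble of $2m$ predictions obtained from a single trained network.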

Since ensembles have been shown to be more accurate than single models this property naturally transfers to TNN regression.

We show that TNNs are able to compete or yield more accurate predictions for different data sets, compared to other state-of-the-art methods.

Furthermore, TNN regression is constrained by self-consistency conditions.

We find that the violation of these conditions provides an estimate for the prediction uncertainty.

Note:
Two keywords recur throughout the paper: ensemble and self-consistency.

2. Algorithm description

[Figure from the paper: schematic of the twin network predicting pairwise target differences, with the loop used for self-consistency]
From this figure, the key idea of the algorithm is apparent: a classical neural network directly predicts a target value, whereas TNNR predicts the difference between the target values of two data points. The problem of predicting the value at an unseen point is thus turned into predicting the difference between a known point and the unseen point. Note that the twin neural network is also known as a Siamese neural network, a concept from metric learning.

From the loop in the figure, the self-consistency condition can also be derived. That is:

$$(y_3 - y_1) + (y_1 - y_2) + (y_2 - y_3) = 0$$

$$F(x_3, x_1) + F(x_1, x_2) + F(x_2, x_3) = 0 \tag{1}$$

Equation (1) expresses the self-consistency condition.

Algorithm details:

  1. The training objective is to minimize the mean squared error on the training set.
  2. We employ standard gradient descent methods, Adadelta (and RMSprop), to minimize the loss on a batch of 16 pairs at each iteration.
  3. All data is split into 90% training, 5% validation, and 5% test data. Each run is performed on a randomly chosen different split of the data.
  4. We train on a generator which generates all possible pairs batchwise before reshuffling.
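To make the training loop above concrete, here is a minimal sketch. It is not the authors' code: I use PyTorch rather than their setup, and the names `TwinNet`, `train_tnnr`, and `tnnr_predict` are mine. The twin network takes a concatenated pair (x_i, x_j), is trained with an MSE loss on the target differences y_i − y_j over batches of 16 pairs, and prediction averages the reconstructed values over all training anchors.

```python
# Minimal TNNR sketch (not the authors' code): train a network F(x_i, x_j) ≈ y_i - y_j
# on all ordered training pairs, then predict y(x) by averaging F(x, x_i) + y_i and
# y_i - F(x_i, x) over the training anchors.
import numpy as np
import torch
import torch.nn as nn

class TwinNet(nn.Module):
    """Feed-forward network taking a concatenated pair (x_i, x_j), predicting y_i - y_j."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xi, xj):
        # F(x_i, x_j): predicted difference y_i - y_j
        return self.net(torch.cat([xi, xj], dim=1)).squeeze(-1)


def train_tnnr(X, y, epochs=10, batch_size=16):
    """Train on all ordered training pairs with an MSE loss on the target differences."""
    X = torch.as_tensor(X, dtype=torch.float32)
    y = torch.as_tensor(y, dtype=torch.float32)
    model = TwinNet(X.shape[1])
    opt = torch.optim.Adadelta(model.parameters())  # the paper uses Adadelta (and RMSprop)
    loss_fn = nn.MSELoss()
    m = len(X)
    # All ordered index pairs (i, j), including i == j; for large m one would sample instead.
    pairs = torch.tensor([(i, j) for i in range(m) for j in range(m)])
    for _ in range(epochs):
        perm = torch.randperm(len(pairs))  # reshuffle the pairs each epoch
        for start in range(0, len(pairs), batch_size):
            idx = pairs[perm[start:start + batch_size]]
            i, j = idx[:, 0], idx[:, 1]
            pred_diff = model(X[i], X[j])            # F(x_i, x_j)
            loss = loss_fn(pred_diff, y[i] - y[j])   # target is the true difference
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model


def tnnr_predict(model, X_train, y_train, x_new):
    """Average the 2m difference-based estimates of y(x_new) over all training anchors."""
    Xt = torch.as_tensor(X_train, dtype=torch.float32)
    yt = torch.as_tensor(y_train, dtype=torch.float32)
    xn = torch.as_tensor(x_new, dtype=torch.float32).unsqueeze(0).expand(len(Xt), -1)
    with torch.no_grad():
        up = model(xn, Xt) + yt     # F(x, x_i) + y_i
        down = yt - model(Xt, xn)   # y_i - F(x_i, x)
    return torch.cat([up, down]).mean().item()
```

Usage would then be something like `model = train_tnnr(X_train, y_train)` followed by `tnnr_predict(model, X_train, y_train, x)` for each test point `x`.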

3. Experiments

I usually don't read the experiments in detail, but in this paper I noticed an interesting point.

3.1. Prediction accuracy

[Figure from the paper: prediction accuracy results on the benchmark data sets]
The paper argues that TNNR's advantage comes from pairing, which effectively squares the size of the training set; yet in the actual experiments, on large training sets TNNR's performance deteriorates instead.

If the training set is very large, the number of pairs increases quadratically to a point where the TNN will in practice converge to a minimum before observing all possible pairs. At that point, the TNN begins to lose its advantages in terms of prediction accuracy.

Personally, I think the main reason is that the model has too few parameters: as the training set grows, the network's capacity limits what it can learn.
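For a sense of scale: $m$ training points give roughly $m^2$ ordered pairs,

$$ m = 10^4 \;\Rightarrow\; m^2 = 10^8 \ \text{pairs}, $$

so with batches of 16 pairs the network would need several million updates to see every pair even once, and it typically converges long before that.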

3.2. Prediction uncertainty estimation

The violation of self-consistency is used to model the prediction uncertainty. However, I couldn't quite follow the description in the experimental section.
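My reading of how the violation could be turned into an uncertainty estimate (a sketch under that assumption, not necessarily the exact procedure in the paper; it reuses `TwinNet`, `torch`, and `numpy` from the sketch in Section 2): close loops of the form x → x_i → x_j → x through pairs of training points and look at the spread of the loop sums, which would be exactly zero for a perfectly self-consistent F.

```python
def loop_violation_uncertainty(model, X_train, x_new, n_loops=200, seed=0):
    """Sketch of a self-consistency-based uncertainty estimate (my reading, not
    necessarily the paper's exact procedure): sample training pairs (x_i, x_j) and
    measure how far F(x, x_i) + F(x_i, x_j) + F(x_j, x) is from zero."""
    rng = np.random.default_rng(seed)
    Xt = torch.as_tensor(X_train, dtype=torch.float32)
    xn = torch.as_tensor(x_new, dtype=torch.float32).unsqueeze(0)
    violations = []
    with torch.no_grad():
        for _ in range(n_loops):
            i, j = rng.choice(len(Xt), size=2, replace=False)
            xi, xj = Xt[i:i + 1], Xt[j:j + 1]
            loop = model(xn, xi) + model(xi, xj) + model(xj, xn)  # ~0 if self-consistent
            violations.append(loop.item())
    return float(np.std(violations))  # larger spread -> less trustworthy prediction
```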
