[Paper Notes] Similarity of Neural Network Representations Revisited (ICML 2019)

This paper (ICML 2019) studies the similarity of neural network representations, focusing on the invariance properties of similarity indexes. The authors find that early layers, but not later layers, learn similar representations. The paper compares different ways of measuring similarity structure, such as dot-product-based similarity, the Hilbert-Schmidt Independence Criterion (HSIC), and centered kernel alignment (CKA). CKA proves most effective at revealing correspondences between layers, including across different network architectures.

Title: Similarity of Neural Network Representations Revisited (ICML2019)

Author: Simon Kornblith ... (Long Beach, California)

Contents

Aim:

Invariance properties of similarity indexes: three aspects

Comparing Similarity Structures

Related Similarity Indexes

Results

Conclusion and Future Work


Aim:

  • one can first measure the similarity between every pair of examples in each representation separately, and then compare the similarity structures.
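The two-step idea above can be sketched numerically. The following is a minimal sketch, not the paper's implementation (function names such as `similarity_structure` are my own): build an n×n matrix of inner products between examples for each representation, then compare the two matrices.

```python
import numpy as np

def similarity_structure(X):
    """n x n matrix of inner products between examples (rows of X)."""
    Xc = X - X.mean(axis=0)  # center features before taking inner products
    return Xc @ Xc.T

def compare_structures(X, Y):
    """Correlate the flattened similarity structures of two representations."""
    kx = similarity_structure(X).ravel()
    ky = similarity_structure(Y).ravel()
    return np.corrcoef(kx, ky)[0, 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 10))     # 30 examples, 10 features
Y = X @ rng.normal(size=(10, 5))  # a linear readout of X inherits its structure
print(compare_structures(X, X))   # a representation matches itself exactly: 1.0
print(compare_structures(X, Y))
```

Note that the comparison never pairs individual neurons across the two representations; it only compares how each representation arranges the same n examples relative to one another.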

 

Invariance properties of similarity indexes: three aspects

1. Invariance to Invertible Linear Transformation

Definition: A similarity index s is invariant to invertible linear transformation if s(X, Y) = s(XA, YB) for any full-rank A and B.
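As a concrete instance of this definition, mean squared canonical correlation (CCA) is invariant to invertible linear transformation, because it depends only on the column spaces of the two representations. A rough numerical check of the definition follows; the normalization by min(p1, p2) is an assumption of this sketch.

```python
import numpy as np

def mean_cca(X, Y):
    """Mean squared canonical correlation between the spans of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))  # orthonormal basis for X's span
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)  # canonical correlations
    return (rho ** 2).sum() / min(X.shape[1], Y.shape[1])

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
Y = rng.normal(size=(50, 8))
A = rng.normal(size=(10, 10))  # full rank with probability 1
B = rng.normal(size=(8, 8))
# s(X, Y) == s(XA, YB): an invertible transform changes nothing.
print(np.isclose(mean_cca(X, Y), mean_cca(X @ A, Y @ B)))  # → True
```

Invariance holds because XA spans the same subspace as X whenever A is invertible, and canonical correlations depend only on those subspaces.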

Key sentence:

  • We demonstrate that early layers, but not later layers, learn similar representations on different datasets.
  • Invariance to invertible linear transformation implies that the scale of directions in activation space is irrelevant.
  • Neural networks trained from different random initializations develop representations with similar large principal components. Because these large principal components are shared, similarity measures that depend on them (e.g., Euclidean distance between representations) agree across the networks. A similarity index that is invariant to invertible linear transformation ignores this aspect of the representation, and assigns the same score to networks that match only in large principal components as to networks that match only in small principal components.
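Linear CKA, by contrast, is invariant to orthogonal transformation and isotropic scaling but not to arbitrary invertible linear transformation, which lets it weight large principal components more heavily. A minimal sketch of linear CKA in its Frobenius-norm form (variable names are my own), with a numerical check of both properties:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representations X (n x p1) and Y (n x p2)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # HSIC with a linear kernel reduces to ||Y^T X||_F^2 up to scaling.
    num = np.linalg.norm(Yc.T @ Xc, ord='fro') ** 2
    den = np.linalg.norm(Xc.T @ Xc, ord='fro') * np.linalg.norm(Yc.T @ Yc, ord='fro')
    return num / den

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 10))
Y = rng.normal(size=(40, 10))
U, _ = np.linalg.qr(rng.normal(size=(10, 10)))  # a random orthogonal matrix
A = rng.normal(size=(10, 10))                   # a generic invertible matrix
print(np.isclose(linear_cka(X, Y), linear_cka(X @ U, Y)))  # orthogonal: unchanged
print(np.isclose(linear_cka(X, Y), linear_cka(X @ A, Y)))  # invertible: changes
```

The orthogonal invariance follows because rotating X leaves all Frobenius norms above unchanged, while a generic invertible A rescales directions in activation space and therefore shifts the score.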

2. Invariance to Orthogonal Transformation
