Uniform convergence may be unable to explain generalization in deep learning

This post questions whether generalization bounds based on uniform convergence are sufficient to explain the generalization ability of deep learning, noting that even when the implicit bias of SGD is taken into account, such bounds can fail in certain situations. Through constructed examples, the paper shows that even when training and test error are both small, the decision boundary learned by a deep network can be complex enough to memorize the locations of (perturbed) training data points, so that even the tightest possible uniform convergence bound becomes vacuous. This underlines that a generalization bound should at least reasonably reflect the effects of parameter count and training set size.

Value of this paper: to understand the limitations of u.c.-based bounds, and to cast doubt on the power of u.c. bounds to fully explain generalization in DL.

  1. highlight that explaining the training-set-size dependence of the generalization error is apparently just as non-trivial as explaining its parameter-count dependence.
  2. show that there are scenarios where all uniform convergence bounds, however cleverly applied, become vacuous.
Development of the generalisation bound
Stage 1: conventional u.c. bound

$$\text{generalisation gap} \;\leq\; O\Big(\sqrt{\frac{\text{representational complexity of whole hypothesis class}}{\text{training set size}}}\Big)$$
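
For concreteness, one standard instantiation of this stage-1 form (a sketch of the textbook Rademacher-complexity bound for a loss bounded in $[0,1]$, not a formula from the original post) is: with probability at least $1-\delta$ over a training set $S$ of size $m$, simultaneously for every hypothesis $h$ in the class $\mathcal{H}$,

$$L_{\mathcal{D}}(h) \;\leq\; \hat{L}_S(h) \;+\; 2\,\mathfrak{R}_m(\mathcal{H}) \;+\; \sqrt{\frac{\log(1/\delta)}{2m}},$$

where $L_{\mathcal{D}}$ is the population risk, $\hat{L}_S$ the empirical risk, and $\mathfrak{R}_m(\mathcal{H})$ the Rademacher complexity of (the loss class induced by) $\mathcal{H}$. Plugging a parameter-count- or norm-based estimate of $\mathfrak{R}_m(\mathcal{H})$ into this inequality recovers the $O\big(\sqrt{\text{complexity}/\text{training set size}}\big)$ shape above, and the bound is vacuous whenever that complexity term is large relative to $m$.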
