Value of this paper: understand the limitations of uniform convergence (u.c.) based bounds / cast doubt on the power of u.c. bounds to fully explain generalization in DL
- highlight that explaining the training-set-size dependence of the generalization error is apparently just as non-trivial as explaining its parameter-count dependence.
- show that there are scenarios where all uniform convergence bounds, however cleverly applied, become vacuous.
Development of the generalisation bound
Stage 1: conventional u.c. bound
$\text{generalisation gap} \leq O\Big(\sqrt{\frac{\text{representational complexity of whole hypothesis class}}{\text{training set size}}}\Big)$
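The behaviour of this stage-1 bound can be sketched numerically; a minimal illustration, where `uc_bound` is a hypothetical helper standing in for the $O(\sqrt{\cdot})$ expression with the constant dropped:

```python
import math

def uc_bound(complexity: float, m: int) -> float:
    """Conventional u.c.-style bound: gap <= sqrt(complexity / m).

    `complexity` stands for the representational complexity of the
    whole hypothesis class; `m` is the training set size.
    """
    return math.sqrt(complexity / m)

# The bound shrinks as the training set grows...
for m in [100, 1000, 10000]:
    print(m, uc_bound(1000.0, m))

# ...but when the complexity term exceeds m (typical for
# overparameterised networks), the bound is vacuous (> 1).
print(uc_bound(1e6, 1000))
```

This makes the paper's motivation concrete: for such a bound to be non-vacuous, the complexity term must be small relative to the training set size.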