Notes on a 2015 blog post,
https://calculatedcontent.com/2015/03/25/why-does-deep-learning-work/
which uses a spin-glass model to explain why deep learning works.
Excerpting some interesting passages first; I will go through the post in detail later.
This seemed to be necessary to resolve one of the great mysteries of protein folding: Levinthal’s paradox [5]. If nature just used statistical sampling to fold a protein, it would take longer than the ‘known’ lifetime of the Universe. It is why Machine Learning is not just statistics.
Nature does not use statistical sampling to fold proteins, so ML is not just statistics (questionable).
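The arithmetic behind Levinthal's paradox is easy to reproduce. The numbers below (a 100-residue protein, roughly 3 conformations per residue, roughly 10^13 conformations sampled per second) are standard illustrative figures, not taken from the post:

```python
# Back-of-envelope estimate behind Levinthal's paradox.
# Illustrative assumptions: a 100-residue protein, ~3 backbone
# conformations per residue, sampled at ~10^13 conformations/second.
residues = 100
conformations = 3 ** residues        # ~5e47 possible states
rate = 1e13                          # conformations tried per second
seconds_needed = conformations / rate
universe_age_s = 4.3e17              # ~13.8 billion years in seconds

print(f"exhaustive search: {seconds_needed:.1e} s")
print(f"age of universe:   {universe_age_s:.1e} s")
```

Even with these generous sampling rates, exhaustive search overshoots the age of the universe by many orders of magnitude, which is the point the post is making: nature must be following a funnel, not sampling blindly.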
Recent research at Google and Stanford confirms that the Deep Learning Energy Landscapes appear to be roughly convex [6], as does LeCun's work on spin glasses.
The DL energy landscapes are roughly convex.
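For concreteness, the landscape being discussed is that of a spin glass. A minimal sketch, assuming the Sherrington-Kirkpatrick-style Hamiltonian H(s) = -Σ_{i<j} J_ij s_i s_j with random couplings that this literature typically uses (the tiny system size here is purely for brute-force enumeration, not from the post):

```python
import itertools
import random

random.seed(0)
n = 10
# Random symmetric couplings J_ij ~ N(0, 1): the standard toy
# spin-glass setup alluded to in the post.
J = {(i, j): random.gauss(0, 1) for i in range(n) for j in range(i + 1, n)}

def energy(s):
    # H(s) = -sum_{i<j} J_ij * s_i * s_j, with each s_i in {-1, +1}
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

# Brute-force the entire landscape (feasible only for tiny n).
energies = sorted(energy(s) for s in itertools.product((-1, 1), repeat=n))

print(f"ground state energy: {energies[0]:.3f}")
# Many near-degenerate low-lying states: the 'rugged' structure that a
# funnel (or a roughly convex DL landscape) is claimed to smooth out.
low_band = [e for e in energies if e <= 0.9 * energies[0]]
print(f"states within 10% of the ground state: {len(low_band)}")
```

The band of near-ground-state minima is what makes spin glasses hard in general; the post's claim is that deep nets, like folding proteins, live on funneled landscapes where these minima are all of comparably good quality.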
Note that a real theory of protein folding, which would actually be able to fold a protein correctly (i.e. Freed's approach [7]), would be a lot more detailed than a simple spin glass model. Likewise, real Deep Learning systems are going to have a lot more engineering details (to avoid overtraining: Dropout, Pooling, Momentum) than a theoretical spin funnel.
Protein folding involves many more details than a simple spin glass model captures, just as real DL systems are more complex than a theoretical spin funnel. DL uses Dropout, Pooling, and Momentum to avoid overtraining.
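As one concrete instance of those engineering details, here is a minimal sketch of inverted dropout, one of the regularizers named above (my own illustration; the post gives no code):

```python
import random

random.seed(1)

def dropout(activations, p=0.5):
    """Inverted dropout: zero each unit with probability p at training
    time and scale survivors by 1/(1-p), so the expected activation
    is unchanged and no rescaling is needed at test time."""
    keep = 1.0 - p
    return [a / keep if random.random() < keep else 0.0
            for a in activations]

x = [0.2, 1.5, -0.3, 0.8]
print(dropout(x, p=0.5))  # each entry is either 0.0 or double the input
```

Randomly deleting units this way prevents co-adaptation and is one of the practical tricks that a bare spin-funnel picture abstracts away.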
I believe this is the first conjecture that Supervised Deep Learning is related to a Spin Funnel. In the next post, I will examine the relationship between Unsupervised Deep Learning and the Variational Renormalization Group [10].
Supervised DL corresponds to a spin funnel; unsupervised DL corresponds to the renormalization group (the 2015 ICLR paper I read earlier also used the renormalization group to explain unsupervised DL).