While implementing SMO recently, I came across the Osuna theorem in the paper's discussion of the algorithm's convergence. There isn't much material about it online, so I'm starting this note as a placeholder.
Source: Osuna, E.; Freund, R.; Girosi, F. (1997). "An improved training algorithm for support vector machines".
Original Wikipedia text:
In 1997, E. Osuna, R. Freund, and F. Girosi proved a theorem which suggests a whole new set of QP algorithms for SVMs. By the virtue of this theorem a large QP problem can be broken down into a series of smaller QP sub-problems. A sequence of QP sub-problems that always add at least one violator of the Karush–Kuhn–Tucker (KKT) conditions is guaranteed to converge. The chunking algorithm obeys the conditions of the theorem, and hence will converge. The SMO algorithm can be considered a special case of the Osuna algorithm, where the size of the optimization is two and both Lagrange multipliers are replaced at every step with new multipliers that are chosen via good heuristics.
Rough translation:
E. Osuna, R. Freund, and F. Girosi proved a theorem that opened up a whole new family of algorithms for solving the QP problem in SVM training. By this theorem, a large QP problem can be decomposed into a series of smaller QP sub-problems. As long as each sub-problem adds at least one variable that violates the Karush–Kuhn–Tucker (KKT) conditions, the sequence of sub-problems is guaranteed to converge. The "chunking" algorithm satisfies this condition and therefore converges. SMO can be seen as a special case of the Osuna algorithm in which the working set has size two, and at every step the two Lagrange multipliers are replaced by new ones chosen via heuristics.
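To make the "size two" special case concrete, here is a minimal sketch in the spirit of Platt's simplified SMO with a linear kernel: each outer pass looks for a multiplier that violates the KKT conditions, pairs it with a second multiplier, and solves the resulting two-variable QP sub-problem analytically. The random choice of the second multiplier (instead of Platt's heuristics) and all variable names are my own simplifications for illustration, not the exact procedure from the paper.

```python
import numpy as np

def smo_simplified(X, y, C=1.0, tol=1e-3, max_passes=20, seed=0):
    """Simplified SMO sketch (linear kernel, random second multiplier)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    alpha = np.zeros(n)
    b = 0.0
    K = X @ X.T  # linear kernel matrix

    def f(i):  # current decision value for sample i
        return (alpha * y) @ K[:, i] + b

    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            E_i = f(i) - y[i]
            # KKT violation check (within tolerance tol) -- the theorem's
            # requirement that each sub-problem include a KKT violator
            if (y[i] * E_i < -tol and alpha[i] < C) or \
               (y[i] * E_i > tol and alpha[i] > 0):
                j = int(rng.integers(n - 1))
                if j >= i:
                    j += 1  # pick any j != i (simplification, not a heuristic)
                E_j = f(j) - y[j]
                a_i_old, a_j_old = alpha[i], alpha[j]
                # feasible box [L, H] for alpha_j from the equality constraint
                if y[i] != y[j]:
                    L = max(0.0, a_j_old - a_i_old)
                    H = min(C, C + a_j_old - a_i_old)
                else:
                    L = max(0.0, a_i_old + a_j_old - C)
                    H = min(C, a_i_old + a_j_old)
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if eta >= 0:
                    continue
                # analytic solution of the two-variable QP sub-problem
                alpha[j] = np.clip(a_j_old - y[j] * (E_i - E_j) / eta, L, H)
                if abs(alpha[j] - a_j_old) < 1e-5:
                    continue
                alpha[i] += y[i] * y[j] * (a_j_old - alpha[j])
                # update the threshold b from whichever multiplier is interior
                b1 = b - E_i - y[i] * (alpha[i] - a_i_old) * K[i, i] \
                     - y[j] * (alpha[j] - a_j_old) * K[i, j]
                b2 = b - E_j - y[i] * (alpha[i] - a_i_old) * K[i, j] \
                     - y[j] * (alpha[j] - a_j_old) * K[j, j]
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X  # primal weight vector (valid for linear kernel only)
    return w, b
```

The `while` loop here plays the role of the outer Osuna decomposition: it keeps generating two-variable sub-problems until no KKT violator remains, which is exactly the condition under which the theorem guarantees convergence.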