Optimization
Average article quality score: 83
xiwang_chn
This author is lazy and hasn't left anything here…
Optimization Week 7: Convex programming and duality
Week 7: Convex programming and duality. Strong duality: any convex optimization problem satisfying Slater's condition (∃ a strictly feasible point) has strong duality. Original post · 2021-02-10 11:23:30 · 202 views · 2 comments
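For reference, the claim in this excerpt in the usual textbook notation (the symbols below are assumed, not copied from the post):

$$
\begin{aligned}
p^\star = \min_x \quad & f_0(x) \\
\text{s.t.} \quad & f_i(x) \le 0,\ i = 1,\dots,m, \qquad Ax = b,
\end{aligned}
$$

with $f_0,\dots,f_m$ convex. Slater's condition asks for a strictly feasible point $\tilde x$ with $f_i(\tilde x) < 0$ for all $i$ and $A\tilde x = b$; when it holds, strong duality follows, $d^\star = p^\star$.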
Optimization Week 11: Constrained descent, Coordinate descent, Subgradient
Week 11: Constrained descent, Coordinate descent, Subgradient. 1 Constrained descent: 1.1 Projected gradient descent (PGD): problem, algorithm, convergence; 1.2 Frank-Wolfe method: algorithm, convergence, examples; 2 Coordinate descent: 2.1 Will it… Original post · 2021-01-21 14:11:42 · 100 views · 0 comments
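A minimal NumPy sketch of the projected-gradient update from this post's outline, applied to nonnegative least squares; the objective, random data, and fixed step size are illustrative assumptions, not taken from the post:

```python
import numpy as np

# Nonnegative least squares: min ||Ax - b||^2  s.t.  x >= 0  (illustrative problem)
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)

x = np.zeros(10)
t = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)    # step size 1/L, L = 2 * ||A||_2^2
for _ in range(500):
    grad = 2 * A.T @ (A @ x - b)             # gradient of the smooth objective
    x = np.maximum(x - t * grad, 0.0)        # gradient step, then projection onto {x >= 0}

print(np.linalg.norm(A @ x - b))
```

The projection here is just a coordinatewise clip because the feasible set is the nonnegative orthant; for other sets the same loop applies with a different projection operator.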
Optimization Week 12: Proximal gradient method and Newton method
Week 12: Proximal gradient method and Newton method. 1 Proximal gradient method: 1.1 Motivation; 1.2 Idea of proximal gradient; 1.3 Proximal gradient; 1.4 Convergence; 1.5 Examples. 2 Newton method (second-order method): 2.1 Motivation; 2.2 Idea of Newton method; 2.3 Newton m… Original post · 2021-01-21 14:11:14 · 256 views · 0 comments
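A small NumPy sketch of the proximal-gradient (ISTA) update for the lasso, the standard example for this method; the problem data, regularization weight, and step size are assumptions for illustration:

```python
import numpy as np

# Lasso: min (1/2)||Ax - b||^2 + lam * ||x||_1   (classic proximal-gradient example)
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 20)), rng.standard_normal(40), 0.1

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(20)
t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/L, L = ||A||_2^2
for _ in range(500):
    grad = A.T @ (A @ x - b)                 # gradient of the smooth part
    x = soft_threshold(x - t * grad, t * lam)  # gradient step, then prox of the l1 term

print(np.count_nonzero(x))
```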
Optimization Week 9: Convex conjugate (Fenchel Conjugate)
Week 9: Convex conjugate (Fenchel conjugate). 1 Definition; 2 Properties: sum of conjugates, decomposing a conjugate, double conjugate, convex original; 3 Examples (todo). Every function has something known as the convex conjugate, or Fenchel conjugate. And this is a very… Original post · 2021-01-21 14:06:26 · 383 views · 0 comments
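For reference, the definition the excerpt is leading into, together with one standard example (the example is mine, not necessarily the one used in the post):

$$
f^*(y) = \sup_x \big( y^T x - f(x) \big).
$$

For instance, $f(x) = \tfrac12\|x\|_2^2$ gives $f^*(y) = \tfrac12\|y\|_2^2$, and the double conjugate $f^{**}$ equals $f$ whenever $f$ is closed and convex.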
Optimization Week 6: Semidefinite programming (SDP)
Week 6: Semidefinite programming. 1 Linear algebra review: square-matrix eigenvalues and eigenvectors, symmetric matrices, spectral decomposition of a symmetric matrix; 2 Positive semidefinite (PSD) matrices (must be Hermitian); 3 Semidefinite programming (SDP)… Original post · 2021-01-21 14:06:03 · 437 views · 0 comments
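The standard-form SDP that this outline builds up to, in the usual notation (assumed, not copied from the post):

$$
\begin{aligned}
\min_{X \in \mathbb{S}^n} \quad & \langle C, X\rangle = \mathrm{tr}(CX) \\
\text{s.t.} \quad & \langle A_i, X\rangle = b_i, \quad i = 1,\dots,m, \\
& X \succeq 0,
\end{aligned}
$$

where $X \succeq 0$ means $X$ is symmetric positive semidefinite, i.e. all of its eigenvalues are nonnegative.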
Optimization Week 8: KKT
Week 8: KKT. 1 KKT: definition, strong duality and KKT, geometry; 2 Application of KKT: maximum entropy. Definition: the stationarity condition says the gradient of the Lagrangian is zero. Strong duality and KKT: if strong duality holds, KKT is equivalent to optimality: x is pri… Original post · 2021-01-20 04:26:13 · 158 views · 0 comments
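For completeness, the four KKT conditions the outline refers to, for $\min f_0(x)$ s.t. $f_i(x) \le 0$, $h_j(x) = 0$ (standard notation, assumed rather than quoted from the post):

$$
\begin{aligned}
&\nabla f_0(x) + \textstyle\sum_i \lambda_i \nabla f_i(x) + \sum_j \nu_j \nabla h_j(x) = 0 && \text{(stationarity)}\\
&f_i(x) \le 0,\quad h_j(x) = 0 && \text{(primal feasibility)}\\
&\lambda_i \ge 0 && \text{(dual feasibility)}\\
&\lambda_i f_i(x) = 0 && \text{(complementary slackness)}
\end{aligned}
$$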
Optimization Week 1: Convex Sets
Week 1: Convex Sets. 1 Definition of convex set; 2 Operations preserving convexity: 2.1 Affine transformation (shift, scale, rotate), 2.2 Intersection; 3 Examples: 3.1 Hyperplanes, 3.2 Halfspaces, 3.3 Convex hull of x_1, …, x_n, 3.4 Conic combination of x… Original post · 2021-01-20 04:25:48 · 159 views · 0 comments
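The definitions the outline lists, written out (standard statements, assumed to match the post):

$$
\begin{aligned}
&\text{Convex set: } x, y \in C,\ \theta \in [0,1] \;\Rightarrow\; \theta x + (1-\theta) y \in C, \\
&\text{Convex hull of } x_1,\dots,x_n: \Big\{ \textstyle\sum_i \theta_i x_i : \theta_i \ge 0,\ \sum_i \theta_i = 1 \Big\}, \\
&\text{Conic combinations of } x_1,\dots,x_n: \Big\{ \textstyle\sum_i \theta_i x_i : \theta_i \ge 0 \Big\}.
\end{aligned}
$$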
Optimization Week 3: Programming (convex program, linear program)
Week 3: Programming: convex program, linear program. 1 Convex programming: 1.1 Definition, 1.2 Local and global optimality; 2 Linear programming: 2.1 Definition of LP, 2.2 Optimality of LP: extreme points and basic feasible points. 1.1 Definition: min_x f(x… Original post · 2021-01-20 04:25:34 · 232 views · 0 comments
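The general forms the excerpt starts to write out, completed in standard notation (an assumption about the post's conventions):

$$
\begin{aligned}
\text{Convex program:}\quad & \min_x\ f(x) \ \ \text{s.t.}\ g_i(x) \le 0,\ Ax = b, \qquad f,\ g_i \text{ convex};\\
\text{Linear program:}\quad & \min_x\ c^T x \ \ \text{s.t.}\ Ax \le b.
\end{aligned}
$$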
Optimization Week 5: Duality example
Week 5: Duality example. Robust linear programming. 5.1 Robust Linear Programming. Original post · 2021-01-20 04:25:22 · 98 views · 0 comments
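A hedged sketch of the usual robust-LP example, with ellipsoidal uncertainty in the constraint vector (whether the post uses exactly this uncertainty set is an assumption):

$$
a^T x \le b \ \ \forall\, a \in \{\bar a + P u : \|u\|_2 \le 1\}
\;\Longleftrightarrow\;
\bar a^T x + \|P^T x\|_2 \le b,
$$

so each robust linear constraint becomes a second-order-cone constraint.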
Optimization Week 4: Duality
Week 4: Duality of Linear Programming (LP). 1 Making the dual; 2 Weak duality; 3 Strong duality; 4 Applying duality; 5 Complementary slackness. 1 Making the dual: $\min_x \; c^T x \ \ \text{s.t.}\ Ax \le b$… Original post · 2021-01-20 04:24:36 · 172 views · 0 comments
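Completing the example: the Lagrange dual of the primal LP above, derived the usual way from the Lagrangian $c^T x + \lambda^T(Ax - b)$ (notation assumed):

$$
\begin{aligned}
\max_{\lambda} \quad & -b^T \lambda \\
\text{s.t.} \quad & A^T \lambda + c = 0, \quad \lambda \ge 0,
\end{aligned}
$$

and weak duality says $-b^T\lambda \le c^T x$ for every primal-feasible $x$ and dual-feasible $\lambda$.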
Optimization Week 13: Newton method and barrier method
Week 13: Newton method and barrier method. 1 Newton method (second-order method): 1.1 Motivation; 1.2 Idea of Newton method; 1.3 Newton method; 1.4 Step size; 1.5 Convergence with BTLS; 1.6 Scale-free Newton: 1.6.1 Definition, 1.6.2 Convergence. 2 Quasi-Newton methods: 2.1 Basi… Original post · 2021-01-20 04:24:21 · 245 views · 0 comments
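A compact NumPy sketch of damped Newton with backtracking line search, in the spirit of sections 1.3 to 1.5 of the outline; the test objective (a regularized sum of softplus terms) and the BTLS parameters are illustrative assumptions, not the post's example:

```python
import numpy as np

def newton_btls(f, grad, hess, x0, alpha=0.25, beta=0.5, tol=1e-8, iters=50):
    """Damped Newton's method with backtracking line search."""
    x = x0.copy()
    for _ in range(iters):
        g, H = grad(x), hess(x)
        dx = np.linalg.solve(H, -g)          # Newton direction
        if -g @ dx / 2 <= tol:               # half the squared Newton decrement
            break
        t = 1.0
        while f(x + t * dx) > f(x) + alpha * t * (g @ dx):  # backtracking
            t *= beta
        x = x + t * dx
    return x

# Illustrative smooth, strongly convex test problem (assumed, not from the post):
# f(x) = sum_i log(1 + exp(a_i^T x)) + (mu/2) ||x||^2
rng = np.random.default_rng(0)
A, mu = rng.standard_normal((50, 5)), 1e-3
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
f = lambda x: np.sum(np.logaddexp(0.0, A @ x)) + 0.5 * mu * (x @ x)
grad = lambda x: A.T @ sigmoid(A @ x) + mu * x

def hess(x):
    w = sigmoid(A @ x) * (1.0 - sigmoid(A @ x))   # per-term curvature weights
    return A.T @ (w[:, None] * A) + mu * np.eye(5)

x_star = newton_btls(f, grad, hess, np.zeros(5))
print(f(x_star), np.linalg.norm(grad(x_star)))
```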
Optimization Week 14: Stochastic gradient descent
Week 14: Stochastic gradient descent. 1 Noisy unbiased (sub)gradient (NUS); 2 Stochastic gradient descent: 2.1 Update rule, 2.2 Convergence rate, 2.3 Step size; 3 Mini-batch stochastic gradient descent: 3.1 Update rule, 3.2 Convergence rate, 3.3 Step size; 4 Variance reducti… Original post · 2021-01-20 04:24:03 · 180 views · 0 comments
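A minimal NumPy sketch of mini-batch SGD on a least-squares objective; the data, batch size, and the diminishing step-size schedule are illustrative assumptions, not from the post:

```python
import numpy as np

# Least-squares objective f(x) = (1/n) sum_i (a_i^T x - b_i)^2, minimized with mini-batch SGD
rng = np.random.default_rng(0)
n, d, batch = 1000, 20, 32
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)

x = np.zeros(d)
for k in range(1, 2001):
    idx = rng.integers(0, n, size=batch)                 # sample a mini-batch
    g = 2 * A[idx].T @ (A[idx] @ x - b[idx]) / batch     # noisy unbiased gradient estimate
    x -= 0.1 / np.sqrt(k) * g                            # diminishing step size ~ 1/sqrt(k)

print(np.mean((A @ x - b) ** 2))
```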
Optimization Week 2: Convex Functions
Week 2: Convex Functions. 1 Definition of convex function: 1.1 Basic definition, 1.2 Definition for a differentiable function, 1.3 Definition with monotone gradient; 2 Operations preserving convexity: 2.1 Non-negative sum, 2.2 Affine map, 2.3 Pointwise max, 2.4 Minimizing out… Original post · 2021-01-20 04:23:46 · 153 views · 0 comments
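The three characterizations listed in sections 1.1 to 1.3, written out for a differentiable $f$ (standard statements, assumed to match the post):

$$
\begin{aligned}
&f(\theta x + (1-\theta) y) \le \theta f(x) + (1-\theta) f(y),\quad \theta \in [0,1] && \text{(basic definition)}\\
&f(y) \ge f(x) + \nabla f(x)^T (y - x) && \text{(first-order condition)}\\
&(\nabla f(x) - \nabla f(y))^T (x - y) \ge 0 && \text{(monotone gradient)}
\end{aligned}
$$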
Optimization Week 10: Gradient Descent
Week 10: Gradient Descent. 1 Motivation: first-order Taylor expansion, quadratic approximation; 2 Step size: exact line search, backtracking line search (BTLS), BTLS for gradient descent; 3 Convergence and step size: smoothness, upper bound, and self-tuning; 3.1 Lipschitz grad… Original post · 2021-01-20 04:22:52 · 300 views · 0 comments
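A short NumPy sketch of gradient descent with backtracking line search (BTLS), matching section 2 of the outline; the quadratic test function and the BTLS parameters alpha, beta are illustrative assumptions:

```python
import numpy as np

def gradient_descent_btls(f, grad, x0, alpha=0.3, beta=0.8, iters=100):
    """Gradient descent with backtracking line search (Armijo sufficient decrease)."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        t = 1.0
        # shrink t until the sufficient-decrease condition holds
        while f(x - t * g) > f(x) - alpha * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

# Illustrative smooth convex test problem: f(x) = (1/2) x^T Q x - p^T x
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
Q = M @ M.T + np.eye(5)                      # symmetric positive definite
p = rng.standard_normal(5)

x_star = gradient_descent_btls(lambda x: 0.5 * x @ Q @ x - p @ x,
                               lambda x: Q @ x - p,
                               np.zeros(5))
print(np.linalg.norm(Q @ x_star - p))        # ~0 at the minimizer
```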