Proximal Gradient Descent and Iterative Soft-Thresholding: PGD and ISTA

Introduction

For optimization objectives that contain a non-differentiable part, such as the lasso regression objective, algorithms like plain gradient descent no longer apply. Proximal gradient descent (PGD) is designed precisely for such optimization problems with non-differentiable terms.

A note up front: the KaTeX renderer used by CSDN does not support the align environment, so I have not found a way to number individual lines inside a multi-line formula.

Revisiting Gradient Descent

Consider an unconstrained optimization problem of the form

$$\min_{\mathbf{x}} f(\mathbf{x}) \tag{1}$$

where $\mathbf{x} \in \mathbb{R}^n$ and $f:\mathbb{R}^{n} \rightarrow \mathbb{R}$.

The function $f(\mathbf{x})$ is continuously differentiable and convex in $\mathbf{x}$. Unless stated otherwise, all vectors in the derivations below are column vectors.

We take the second-order Taylor expansion of $f(\mathbf{x})$ at a point $\mathbf{x}_0$ and use it to approximate the original function:

$$f(\mathbf{x}) \approx f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0)^{T}(\mathbf{x} - \mathbf{x}_0) + \frac{1}{2} (\mathbf{x} - \mathbf{x}_0)^T \nabla^{2} f(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) \tag{2}$$

Here $\nabla f(\mathbf x_0)$ is the gradient of $f(\mathbf x)$ at $\mathbf x_0$ and $\nabla^2 f(\mathbf x_0)$ is its Hessian at $\mathbf x_0$; the term $\nabla f(\mathbf{x}_0)^{T}(\mathbf{x} - \mathbf{x}_0)$ is often written as the inner product $\langle \nabla f(\mathbf{x}_0), \mathbf{x} - \mathbf{x}_0 \rangle$.

$$\nabla f(\mathbf x_0) = \begin{bmatrix} \frac{\partial f}{\partial x_{1}} \\ \frac{\partial f}{\partial x_{2}} \\ \vdots \\ \frac{\partial f}{\partial x_{n}} \end{bmatrix}_{\mathbf x = \mathbf x_0}; \qquad \nabla^2 f(\mathbf x_0) = Hf(\mathbf x_0) = \begin{bmatrix} \frac{\partial^{2} f}{\partial x_{1}^{2}} & \frac{\partial^{2} f}{\partial x_{1} \partial x_{2}} & \cdots & \frac{\partial^{2} f}{\partial x_{1} \partial x_{n}} \\ \frac{\partial^{2} f}{\partial x_{2} \partial x_{1}} & \frac{\partial^{2} f}{\partial x_{2}^{2}} & \cdots & \frac{\partial^{2} f}{\partial x_{2} \partial x_{n}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^{2} f}{\partial x_{n} \partial x_{1}} & \frac{\partial^{2} f}{\partial x_{n} \partial x_{2}} & \cdots & \frac{\partial^{2} f}{\partial x_{n}^{2}} \end{bmatrix}_{\mathbf x = \mathbf x_0}$$
Second-order derivatives are much harder to obtain than first-order ones, and computing the Hessian is expensive. We therefore replace it with $\frac{1}{t} I_n$, where $t > 0$ and $I_n$ is the $n \times n$ identity matrix, which gives

$$f(\mathbf{x}) \approx f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0)^{T}(\mathbf{x} - \mathbf{x}_0) + \frac{1}{2t} \|\mathbf{x} - \mathbf{x}_0 \|_{2}^{2} \tag{3}$$
We now minimize the approximation $(3)$ of the original $f(\mathbf{x})$ and write the minimizer as $\mathbf{x}^+$:

$$\mathbf{x}^+ = \mathop{\arg\min}\limits_\mathbf{x} \Big\{f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0)^{T}(\mathbf{x} - \mathbf{x}_0) + \frac{1}{2t} \|\mathbf{x} - \mathbf{x}_0 \|_{2}^{2} \Big\} \tag{4}$$
Since $f(\mathbf{x}_0)$ is a constant that does not affect the minimizer, this is equivalent to

$$\mathbf{x}^+ = \mathop{\arg\min}\limits_\mathbf{x} \Big\{\nabla f(\mathbf{x}_0)^{T}(\mathbf{x} - \mathbf{x}_0) + \frac{1}{2t} \|\mathbf{x} - \mathbf{x}_0 \|_{2}^{2} \Big\} \tag{5}$$
The objective above is a "quadratic" in the variable $\mathbf x$, so its minimum is easy to find. Simplifying a little further:

$$\begin{aligned} \mathbf{x}^+ &= \mathop{\arg\min}\limits_\mathbf{x} \Big\{\nabla f(\mathbf{x}_0)^{T}(\mathbf{x} - \mathbf{x}_0) + \frac{1}{2t} \|\mathbf{x} - \mathbf{x}_0 \|_{2}^{2} \Big\} \\ & = \mathop{\arg\min}\limits_\mathbf{x} \Big\{\nabla f(\mathbf{x}_0)^{T}(\mathbf{x} - \mathbf{x}_0) + \frac{1}{2t} (\mathbf{x} - \mathbf{x}_0)^T(\mathbf{x} - \mathbf{x}_0) \Big\} \\ & = \mathop{\arg\min}\limits_\mathbf{x} \Big\{\frac{1}{2t}\big[\mathbf{x} - (\mathbf{x}_0 - 2t\nabla f(\mathbf{x}_0))\big]^T(\mathbf{x} - \mathbf{x}_0) \Big\} \end{aligned} \tag{6}$$
After this simplification, the two "roots" of the quadratic are evident, so the minimum is attained when $\mathbf x$ sits at their midpoint (this midpoint argument is one-dimensional intuition; a rigorous derivation follows in $(7)$):

$$\mathbf{x}^+ = \frac{1}{2}\big[\mathbf{x}_0 - 2t\nabla f(\mathbf{x}_0) + \mathbf{x}_0\big] = \mathbf{x}_0 - t \nabla f(\mathbf{x}_0)$$
Alternatively, we can simplify $(6)$ by completing the square:

$$\begin{aligned} \mathbf{x}^+ & = \mathop{\arg\min}\limits_\mathbf{x} \Big\{\frac{1}{2t}\big[\mathbf{x} - (\mathbf{x}_0 - 2t\nabla f(\mathbf{x}_0))\big]^T(\mathbf{x} - \mathbf{x}_0) \Big\} \\ & = \mathop{\arg\min}\limits_\mathbf{x} \Big\{\frac{1}{2t}\big[\mathbf{x} - (\mathbf{x}_0 - t\nabla f(\mathbf{x}_0)) + t\nabla f(\mathbf{x}_0)\big]^T\big[\mathbf{x} - (\mathbf{x}_0 - t\nabla f(\mathbf{x}_0)) - t\nabla f(\mathbf{x}_0)\big] \Big\} \\ & = \mathop{\arg\min}\limits_\mathbf{x} \Big\{\frac{1}{2t}\|\mathbf{x} - (\mathbf{x}_0 - t\nabla f(\mathbf{x}_0))\|_2^2 - \frac{t}{2}\|\nabla f(\mathbf{x}_0)\|_2^2 \Big\} \\ & = \mathop{\arg\min}\limits_\mathbf{x} \frac{1}{2t}\|\mathbf{x} - (\mathbf{x}_0 - t\nabla f(\mathbf{x}_0))\|_2^2 \\ & = \mathbf{x}_0 - t\nabla f(\mathbf{x}_0) \end{aligned} \tag{7}$$
Renaming the variables in $(7)$ to index the iterations, we obtain

$$\mathbf{x}^k = \mathbf{x}^{k-1} - t \nabla f(\mathbf{x}^{k-1}) \tag{8}$$

which is exactly the gradient descent update.
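As a quick illustration (a sketch of ours, not from the original post), here is a minimal MATLAB run of update $(8)$ on a least-squares objective $f(\mathbf x) = \frac{1}{2}\|Y - X\mathbf x\|_2^2$; the data and variable names are made up for the example.

```matlab
% Minimal gradient-descent sketch for f(x) = 1/2*||Y - X*x||_2^2.
% Data and names are illustrative only.
rng(0);
X = randn(50, 10);            % design matrix
Y = X * randn(10, 1);         % synthetic, noiseless observations
t = 1 / norm(X)^2;            % step size t = 1/L, L = largest eigenvalue of X'*X
x = zeros(10, 1);             % initial point x^0
for k = 1:500
    grad = X' * (X * x - Y);  % gradient of f at the current iterate
    x = x - t * grad;         % update (8): x^k = x^{k-1} - t * grad
end
fprintf('final residual ||Y - X*x|| = %.2e\n', norm(Y - X * x));
```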

Looking at $(7)$, it is clear that the second-order Taylor expansion above amounts to replacing the original problem by its quadratic approximation.

In obtaining $(3)$ we replaced $\nabla^2 f(\mathbf x_0)$ in $(2)$ by $\frac{1}{t} I_n$, and this is not arbitrary. Typically we assume that the gradient of the objective $f$ is Lipschitz continuous on $\mathbb R^n$, i.e., there exists $L > 0$ such that for all $\mathbf{x}_1, \mathbf{x}_2 \in \mathbb R^n$,

$$\|\nabla f(\mathbf x_1) - \nabla f(\mathbf x_2)\|_2 \leq L \|\mathbf x_1 - \mathbf x_2\|_2$$

Intuitively, this says that the slope of the chord between any two points on the graph of the gradient cannot blow up; in other words, the second derivative of $f$ is bounded, and a finite constant upper-bounds it, $\nabla^2 f(\mathbf x) \preceq L I_n$. Taking $t = \frac{1}{L}$, i.e., $L = \frac{1}{t}$, we regard $L I_n = \frac{1}{t} I_n$ as an "upper bound" on $\nabla^2 f(\mathbf x_0)$ and substitute it for the Hessian to simplify the computation.
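As a sanity check (our own sketch, with made-up data), one can verify numerically that with $t = 1/L$ the quadratic model in $(3)$ upper-bounds $f$ for a least-squares objective, whose gradient has Lipschitz constant $L = \lambda_{\max}(X^TX)$:

```matlab
% Numerically checking the quadratic upper bound (3) with t = 1/L for
% f(x) = 1/2*||Y - X*x||_2^2. Data and names are illustrative.
rng(1);
X = randn(30, 8);  Y = randn(30, 1);
f     = @(x) 0.5 * norm(Y - X * x)^2;
gradf = @(x) X' * (X * x - Y);
L  = norm(X)^2;                          % Lipschitz constant of the gradient
x0 = randn(8, 1);  x = randn(8, 1);      % two arbitrary points
ub = f(x0) + gradf(x0)' * (x - x0) + (L / 2) * norm(x - x0)^2;
fprintf('f(x) = %.4f <= quadratic model = %.4f\n', f(x), ub);
```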

The Proximal Operator

For lasso regression, the optimization problem is

$$\min\limits_\beta \ \frac{1}{2}\|Y - X\beta\|_2^2 + \lambda\|\beta\|_1, \qquad \text{where } Y \in \mathbb R^m,\ \beta\in \mathbb R^n,\ X\in \mathbb R^{m\times n},\ \lambda>0 \tag{9}$$
Because of the $l_1$ norm, this objective is not differentiable. More generally, we study objectives that split into two parts,

$$f(\mathbf x) = g(\mathbf x) + h(\mathbf x)$$

$$\min_{\mathbf x} f(\mathbf x) \Leftrightarrow \min_{\mathbf x} g(\mathbf x) + h(\mathbf x)$$

where $g(\mathbf x)$ is a convex, continuously differentiable function, and $h(\mathbf x)$ is a convex, continuous function that may be non-smooth (not necessarily differentiable).

For the lasso problem $(9)$, the correspondence is

$$\begin{aligned} & f(\beta) = g(\beta) + h(\beta), \\ & g(\beta) = \frac{1}{2}\|Y - X\beta\|_2^2, \\ & h(\beta) = \lambda\|\beta\|_1. \end{aligned}$$
In this way the non-differentiable part of the objective is separated from the differentiable part. For the differentiable part $g(\mathbf x)$, we can repeat the treatment of $(3) \sim (7)$:

$$\begin{aligned} \mathbf{x}^+ & = \mathop{\arg\min}\limits_\mathbf{x} \Big\{g(\mathbf{x}_0) + \nabla g(\mathbf{x}_0)^{T}(\mathbf{x} - \mathbf{x}_0) + \frac{1}{2t} \|\mathbf{x} - \mathbf{x}_0 \|_{2}^{2} + h(\mathbf x) \Big\} \\ & = \mathop{\arg\min}\limits_\mathbf{x} \Big\{ \frac{1}{2t}\|\mathbf{x} - (\mathbf{x}_0 - t\nabla g(\mathbf{x}_0))\|_2^2 + h(\mathbf x) \Big\} \end{aligned} \tag{10}$$
This time we can no longer write $\mathbf x^+$ in closed form directly, because of the non-differentiable term $h(\mathbf x)$. We therefore define the proximal operator:

$$\text{prox}_{t,h(\cdot)}(\mathbf z) = \mathop{\arg\min}_\mathbf{x} \Big\{ \frac{1}{2t}\|\mathbf{x} - \mathbf z\|_2^2 + h(\mathbf x) \Big\} \tag{11}$$

Note that the forms

$$\text{prox}_{t,h} (\mathbf z) = \mathop{\arg\min}_{\mathbf x} \Big\{ \frac{1}{2}\|\mathbf x - \mathbf z\|_2^2 + t\,h(\mathbf x) \Big\}$$

and

$$\text{prox}_{t,h} (\mathbf x) = \mathop{\arg\min}_{\mathbf z} \Big\{ \frac{1}{2}\|\mathbf x - \mathbf z\|_2^2 + t\,h(\mathbf z) \Big\}$$

are equivalent to $(11)$: the first simply scales the objective of $(11)$ by the constant $t$, and the second only renames the variables.
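A small numerical check of this equivalence (our own sketch, using $h = |\cdot|$ in one dimension and a brute-force grid search):

```matlab
% Checking that definition (11) and the rescaled form above give the same
% minimizer for h(x) = |x| in one dimension. Values are illustrative.
t = 0.7;  z = 1.1;
xs = linspace(-3, 3, 600001);                    % fine grid
[~, i1] = min((xs - z).^2 / (2 * t) + abs(xs));  % objective of (11)
[~, i2] = min((xs - z).^2 / 2 + t * abs(xs));    % rescaled objective
fprintf('form (11): %.4f, rescaled form: %.4f\n', xs(i1), xs(i2));
% Both should be close to the soft-threshold value z - t = 0.4.
```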

With the notation of $(11)$, we can rewrite $(10)$ equivalently as

$$\mathbf x^+ = \text{prox}_{t, h(\cdot)}(\mathbf x_0 - t \nabla g(\mathbf x_0))$$
Renaming the variables to index the iterations gives

$$\mathbf x^{k} = \text{prox}_{t, h(\cdot)}(\mathbf x^{k-1} - t \nabla g(\mathbf x^{k-1})) \tag{12}$$
Equation $(12)$ is the core update of the proximal gradient descent algorithm. For many choices of $h(\cdot)$, such as the $l_0$, $l_1$, and $l_2$ norms or quadratic functions, the proximal operator has a closed-form (explicit) solution. The table below lists a few simple cases; ref. [3] at the end gives a more complete list.

| $h(\cdot)$ | $\text{prox}_{t,h(\cdot)}(\mathbf z)$ | Comment |
| --- | --- | --- |
| $0$ | $\mathbf z$ | Obviously |
| $\Vert\cdot\Vert_0$ | $[\text{prox}_{t,\Vert\cdot\Vert_0}(\mathbf z)]_i = \begin{cases} z_i & \vert z_i\vert \geq \sqrt{2t} \\ 0 & \text{otherwise} \end{cases}$ | Elementwise hard-thresholding |
| $\Vert\cdot\Vert_1$ | $[\text{prox}_{t,\Vert\cdot\Vert_1}(\mathbf z)]_i = \text{sign}(z_i)\max(\vert z_i\vert - t,\ 0)$ | Elementwise soft-thresholding |
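The table entries translate directly into code. Below is a small MATLAB sketch of ours of the elementwise hard- and soft-thresholding operators under definition $(11)$; the function names are our own.

```matlab
% Elementwise proximal operators from the table, following definition (11).
hard_threshold = @(z, t) z .* (abs(z) >= sqrt(2 * t));   % prox of ||.||_0
soft_threshold = @(z, t) sign(z) .* max(abs(z) - t, 0);  % prox of ||.||_1

z = [-3; -0.5; 0.2; 1.5];
disp(hard_threshold(z, 0.5)');   % threshold sqrt(2*0.5) = 1: [-3 0 0 1.5]
disp(soft_threshold(z, 0.5)');   % shrink each entry by 0.5:  [-2.5 0 0 1]
```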

Linear Regression

Linear regression corresponds to the case $h(\cdot) = 0$, for which the update reduces to ordinary gradient descent:

$$\mathbf x^{k} = \text{prox}_{t, h(\cdot)=0}(\mathbf x^{k-1} - t \nabla g(\mathbf x^{k-1})) = \mathbf x^{k-1} - t \nabla g(\mathbf x^{k-1})$$


The $l_1$ Norm

Lasso regression uses the $l_1$ norm, i.e., $h(\cdot) = \lambda\|\cdot\|_1$, and the update $(12)$ becomes

$$\mathbf x^{k} = \text{prox}_{t, \lambda\|\cdot\|_1}(\mathbf x^{k-1} - t \nabla g(\mathbf x^{k-1})) = S_{\lambda t}(\mathbf x^{k-1} - t \nabla g(\mathbf x^{k-1})) \tag{13}$$

Here $S_{\lambda t}(\cdot)$ is called the soft-thresholding operator; it is exactly the proximal operator for $h(\cdot) = \lambda\|\cdot\|_1$. For a scalar $z$ we have

$$\text{prox}_{t, \lambda\|\cdot\|_1}(z) = S_{\lambda t}(z) = \text{sign}(z)\max(|z|-\lambda t,\ 0) = \begin{cases} z - \lambda t & z > \lambda t \\ 0 & -\lambda t \leq z \leq \lambda t \\ z + \lambda t & z < -\lambda t \end{cases} \tag{14}$$
where $\text{sign}(\cdot)$ is the sign function,

$$\text{sign}(z) = \begin{cases} 1 & z > 0 \\ 0 & z = 0 \\ -1 & z < 0 \end{cases}$$
For a vector $\mathbf z = [z_1, \cdots, z_n]^T \in \mathbb R^n$, we have $S_{\lambda t}(\mathbf z) = [S_{\lambda t}(z_1),\cdots, S_{\lambda t}(z_n)]^T$, i.e., $(14)$ is applied to each component. A more compact form can be derived as follows (for $z_i \neq 0$; a zero component simply stays zero):

$$\begin{aligned} S_{\lambda t}(z_i) & = \text{sign}(z_i)\max(|z_i|-\lambda t,\ 0) \\ & = \text{sign}(z_i)\,|z_i|\max\Big(\frac{|z_i|-\lambda t}{|z_i|},\ 0\Big) \\ & = z_i\max\Big(\frac{|z_i|-\lambda t}{|z_i|},\ 0\Big) \\ & = z_i\max\Big(1 - \frac{\lambda t}{|z_i|},\ 0\Big) \end{aligned}$$

$$\therefore \ S_{\lambda t}(\mathbf z) = \mathbf z \circ \Big[1 - \frac{\lambda t}{|\mathbf z|}\Big]_+ \tag{15}$$

Here $|\mathbf z|$ is the vector of componentwise absolute values of $\mathbf z$; $[\cdot]_+$ denotes the positive part, i.e., $\max(\cdot,\ 0)$; and $A\circ B$ is the Hadamard (elementwise) product of $A_{m\times n}$ and $B_{m\times n}$, i.e., $(A\circ B)_{ij} = a_{ij} \times b_{ij}$, also commonly written $\odot$.
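A quick consistency check (our own sketch) that the multiplicative form $(15)$ agrees with the sign-max form $(14)$:

```matlab
% Verifying that form (15) matches form (14) of soft-thresholding.
lambda = 0.3;  t = 0.5;
z  = [-2; -0.1; 0; 0.4; 1.2];
S1 = sign(z) .* max(abs(z) - lambda * t, 0);   % form (14)
S2 = z .* max(1 - lambda * t ./ abs(z), 0);    % form (15)
S2(z == 0) = 0;                                % convention for z_i = 0
fprintf('max difference = %g\n', max(abs(S1 - S2)));   % prints 0
```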

Let us briefly sketch why the proximal operator of the $l_1$ norm has this closed form; the argument below is not fully rigorous.

$$\begin{aligned} \text{prox}_{t,\|\cdot\|_1}(\mathbf z) & = \mathop{\arg\min}\limits_\mathbf{x} \Big\{ \frac{1}{2t}\|\mathbf{x} - \mathbf z\|_2^2 + \|\mathbf x\|_1 \Big\} \\ & = \mathop{\arg\min}\limits_\mathbf{x} \sum_{i=1}^n \Big[\frac{1}{2t}(x_i - z_i)^2 + |x_i|\Big] \end{aligned}$$

The objective separates across components, so each component can be minimized independently:

$$\therefore \ [\text{prox}_{t,\|\cdot\|_1}(\mathbf z)]_i = \mathop{\arg\min}\limits_{x_i} \Big\{ \frac{1}{2t}(x_i - z_i)^2 + |x_i| \Big\} = \mathop{\arg\min}_{x_i} f(x_i)$$

Taking the "derivative" of $f(x_i)$ (treating the derivative of $|x_i|$ as $\text{sign}(x_i)$) and setting it to zero,

$$\frac{df}{d x_i} = \frac{1}{t} (x_i - z_i) + \text{sign}(x_i) = 0$$

which simplifies to

$$x_i = z_i - t \cdot \text{sign}(x_i)$$
x i > 0 x_i > 0 xi>0 ,上式即为 x i = z i − t > 0 x_i = z_i - t > 0 xi=zit>0

x i < 0 x_i < 0 xi<0 ,上式即为 x i = z i + t < 0 x_i = z_i + t < 0 xi=zi+t<0

x i = 0 x_i = 0 xi=0 ,上式即为 x i = z i x_i = z_i xi=zi

分别对应上面三种情况,我们即可得到
[ prox t , ∥ ⋅ ∥ 1 ( z ) ] i = { z i − t z i > t z i − t ≤ z i ≤ t z i + t z < − t [\text{prox}_{t,\|\cdot\|_1}(\mathbf z)]_i= \left\{\begin{array}{ll}z_i - t & z_i > t \\z_i & -t \leq z_i \leq t \\z_i + t & z < -t\end{array}\right. [proxt,1(z)]i= zitzizi+tzi>ttzitz<t


The $l_2$ Norm

For $h(\cdot) = \|\cdot\|_2$ (the norm itself, not its square), we have

$$\text{prox}_{t,\|\cdot\|_2}(\mathbf z) = \Big[1 - \frac{t}{\|\mathbf z\|_2}\Big]_+ \mathbf z \tag{16}$$

where the bracketed factor is a scalar, so the whole vector is rescaled; this is sometimes called block (group) soft-thresholding.
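A short sketch of ours of $(16)$: the whole vector is either shrunk toward the origin or set to zero.

```matlab
% Block soft-thresholding: the proximal operator of the (unsquared) l2 norm.
prox_l2 = @(z, t) max(1 - t / norm(z), 0) * z;
disp(prox_l2([3; 4], 1)');       % ||z||_2 = 5   > 1: scaled by 4/5 -> [2.4 3.2]
disp(prox_l2([0.3; 0.4], 1)');   % ||z||_2 = 0.5 <= 1: result is [0 0]
```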

Proximal Gradient Descent and the Iterative Soft-Thresholding Algorithm

The proximal operator of the $l_1$ norm is usually called soft-thresholding, and that of the $l_0$ norm hard-thresholding. The iterative soft-thresholding algorithm (ISTA) minimizes an objective containing an $l_1$ term by alternating a gradient step on the smooth part with a soft-thresholding step. Recalling the lasso problem $(9)$ and the update $(13)$, we can compute

$$\begin{aligned} \nabla g(\beta) & = \nabla \frac{1}{2}\|Y - X\beta\|_2^2 = X^T(X\beta - Y), \\ \therefore \ \beta^{k} & = \text{prox}_{t, \lambda \|\cdot\|_1}(\beta^{k-1} - t \nabla g(\beta^{k-1})) \\ & = S_{\lambda t}\big(\beta^{k-1} - t X^T(X\beta^{k-1} - Y)\big) \\ & = \big(\beta^{k-1} - t X^T(X\beta^{k-1} - Y)\big) \circ \bigg[1 - \frac{\lambda t}{\big|\beta^{k-1} - t X^T(X\beta^{k-1} - Y)\big|}\bigg]_+ \end{aligned} \tag{17}$$
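Before turning $(17)$ into a full algorithm, here is a small sketch of ours checking the gradient formula $\nabla g(\beta) = X^T(X\beta - Y)$ against central finite differences on random data:

```matlab
% Finite-difference check of grad g(beta) = X'*(X*beta - Y). Data are random.
rng(2);
X = randn(40, 6);  Y = randn(40, 1);  beta = randn(6, 1);
g = @(b) 0.5 * norm(Y - X * b)^2;
grad_analytic = X' * (X * beta - Y);
grad_fd = zeros(6, 1);
delta = 1e-6;
for i = 1:6
    e = zeros(6, 1);  e(i) = delta;
    grad_fd(i) = (g(beta + e) - g(beta - e)) / (2 * delta);   % central difference
end
fprintf('gradient check error = %.2e\n', norm(grad_analytic - grad_fd));
```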
The complete ISTA procedure is given below.

$\mathbf{Algorithm\ 1.\ \ ISTA}$

Input: a small tolerance $\varepsilon$ and a maximum number of iterations maxiter.

Output: $\hat \beta$

Initialize $\beta, t, \lambda$ (e.g., set $\beta^0 = \vec{1}_{n\times1}$, $t=0.1$, $\lambda = 0.1$), compute $J^0$ at $\beta^0$, and set $k=0$.

repeat

  $k := k+1$

  update $\beta^k$ using $(17)$.

  compute $J^{k} = \frac{1}{2}\|Y-X\beta^k\|_2^2 + \lambda\|\beta^k\|_1$.

until $\left|J^{k}-J^{k-1}\right| < \varepsilon$ or $k \geq$ maxiter

Return $\hat \beta = \beta^{k}$.

More generally, the algorithmic framework of proximal gradient descent (PGD) is as follows:

$\mathbf{Algorithm\ 2.\ \ PGD}$

Input: a small tolerance $\varepsilon$ and a maximum number of iterations maxiter.

Output: $\hat \beta$

Initialize $\beta, t, \lambda$ (e.g., set $\beta^0 = \vec{1}_{n\times1}$, $t=0.1$, $\lambda = 0.1$), compute $J^0$ at $\beta^0$, and set $k=0$.

repeat

  $k := k+1$

  update $\beta^k$ using $(12)$, i.e., $\beta^k = \text{prox}_{t, h(\cdot)}(\beta^{k-1} - t \nabla g(\beta^{k-1}))$.

  compute $J^{k} = g(\beta^k) + h(\beta^k)$.

until $\left|J^{k}-J^{k-1}\right| < \varepsilon$ or $k \geq$ maxiter

Return $\hat \beta = \beta^{k}$.
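Algorithm 2 is convenient to code with the proximal operator supplied as a function handle; swapping the handle switches between, e.g., plain gradient descent ($h = 0$, identity prox) and ISTA ($h = \lambda\|\cdot\|_1$, soft-thresholding). A sketch of ours with illustrative names (save as `pgd.m`):

```matlab
% Generic proximal gradient descent: prox_h is a handle implementing the
% closed form of (11) for the chosen h. Names and defaults are illustrative.
function beta = pgd(gradg, J, prox_h, beta, t, maxiter, epsilon)
    Jprev = J(beta);
    for k = 1:maxiter
        beta = prox_h(beta - t * gradg(beta), t);   % update (12)
        Jk   = J(beta);
        if abs(Jk - Jprev) < epsilon, break; end    % stopping rule of Algorithm 2
        Jprev = Jk;
    end
end

% Example call for lasso (assuming X, Y, lambda, t are already defined):
%   gradg  = @(b) X' * (X * b - Y);
%   J      = @(b) 0.5 * norm(Y - X * b)^2 + lambda * norm(b, 1);
%   prox_h = @(z, t) sign(z) .* max(abs(z) - lambda * t, 0);
%   beta   = pgd(gradg, J, prox_h, ones(size(X, 2), 1), t, 1000, 1e-8);
```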

Implementation

To be filled in when I have time.
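In the meantime, here is a rough MATLAB sketch of ours (not the author's implementation) that ties update $(17)$ and Algorithm 1 together on synthetic data; all parameter values are illustrative.

```matlab
% ISTA sketch for the lasso problem (9): min 1/2*||Y - X*beta||^2 + lambda*||beta||_1.
rng(3);
m = 100;  n = 20;
X = randn(m, n);
beta_true = zeros(n, 1);  beta_true([2 7 15]) = [1.5; -2; 0.8];   % sparse ground truth
Y = X * beta_true + 0.05 * randn(m, 1);

lambda  = 0.1;
t       = 1 / norm(X)^2;              % step size 1/L
maxiter = 1000;  epsilon = 1e-8;
soft    = @(z, tau) sign(z) .* max(abs(z) - tau, 0);

beta  = ones(n, 1);                                            % beta^0
Jprev = 0.5 * norm(Y - X * beta)^2 + lambda * norm(beta, 1);   % J^0
for k = 1:maxiter
    beta = soft(beta - t * X' * (X * beta - Y), lambda * t);   % update (17)
    J    = 0.5 * norm(Y - X * beta)^2 + lambda * norm(beta, 1);
    if abs(J - Jprev) < epsilon, break; end                    % stopping rule
    Jprev = J;
end
fprintf('iterations: %d, objective: %.4f, nonzeros: %d\n', k, J, nnz(beta));
```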

Summary and Reflections

We close with a few questions worth thinking about.

  • How should the step-size parameter $t$ be chosen?
  • What is the time complexity of the algorithm?

References

  1. http://stat.cmu.edu/~ryantibs/convexopt/lectures/prox-grad.pdf

  2. https://angms.science/doc/CVX/ISTA0.pdf

  3. https://www.math.ucla.edu/~wotaoyin/summer2016/4_proximal.pdf

    (closed-form proximal operators for various choices of $h(\cdot)$)

Below is a MATLAB simulation of projected gradient descent (PGD), in which the projection onto a convex set plays the role of the proximal operator (the projection is the proximal operator of the set's indicator function):

```matlab
% PGD Algorithm
% minimize f(x) subject to x in C, where C is a convex set

% Define objective function f(x)
f = @(x) (x(1)-1)^2 + (x(2)-2)^2;

% Define projection operator onto convex set C (here the box [-2,2] x [-2,2])
C = @(x) [max(min(x(1), 2), -2); max(min(x(2), 2), -2)];

% Set initial point x0 and step size alpha
x0 = [3; 3];
alpha = 0.1;

% Set maximum number of iterations and tolerance level
max_iter = 1000;
tol = 1e-6;

% Initialize variables
iter = 0;
x = x0;
x_prev = x + tol*10;

% PGD Algorithm
while norm(x - x_prev) > tol && iter < max_iter
    % Save current point as previous point
    x_prev = x;

    % Compute gradient of f at current point
    grad_f = [2*(x(1)-1); 2*(x(2)-2)];

    % Compute descent direction by projecting the trial gradient step onto C
    d = C(x - alpha*grad_f) - x;

    % Update current point by moving in the descent direction
    x = x + alpha*d;

    % Increment iteration counter
    iter = iter + 1;
end

% Print results
fprintf('PGD Algorithm:\n');
fprintf('Iterations: %d\n', iter);
fprintf('Minimizer: (%f, %f)\n', x(1), x(2));
fprintf('Minimum value: %f\n', f(x));
```

In this example, the objective $f(x)=(x_1-1)^2+(x_2-2)^2$ is minimized subject to $x$ lying in a convex set $C$ (here a box), and $C(x)$ denotes the projection of a point $x$ onto that set. In each iteration we compute the gradient of $f$ at the current point, take a trial gradient step, project it onto $C$, and use the difference from the current point as the descent direction $d$; the iterate is then updated by moving a step of size $\alpha$ along $d$. Finally the number of iterations, the minimizer, and the minimum value are printed.

Note that in practice the objective function and the set $C$ must be defined for the specific problem at hand, and the iteration limit and convergence tolerance should be adjusted to the problem's difficulty and the required accuracy.