Related: Sparsity Basics

Solving the BPD Problem
Problem Description

Missing data completion

The original data consists of M points, denoted $x$. K of them are observed (K < M), denoted $y$. Then $y$ can be written as a sampling matrix $S$ applied to the original data,
$$y = Sx$$
where $S$ is a K×M matrix.

Suppose $x$ has a sparse representation in a dictionary $F$, i.e. $x = Fc$. The observed signal is then
$$y = Sx = SFc.$$
Letting $A = SF$, we get $y = Ac$, so missing data completion reduces to solving a BP problem: solve for $c$, then recover the original data as $x = Fc$.
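As a concrete toy-sized illustration of $y = Sx = SFc$ (the sizes, the random mask, and the choice of an inverse-DFT dictionary for $F$ are illustrative assumptions, not fixed by the text):

```python
import numpy as np

M, K = 8, 5
rng = np.random.default_rng(0)
keep = np.sort(rng.choice(M, size=K, replace=False))  # indices of observed points

S = np.eye(M)[keep]                  # K x M sampling matrix: K rows of the identity
F = np.fft.ifft(np.eye(M), axis=0)   # M x M inverse-DFT dictionary, x = F c
A = S @ F                            # K x M combined matrix, so y = A c

c = np.zeros(M, dtype=complex)
c[[1, 3]] = [2.0, 1.0]               # a sparse coefficient vector
x = F @ c                            # full signal
y = S @ x                            # observed samples
assert np.allclose(y, A @ c)         # y = S x = S F c = A c
```

Applying $S$ is just row selection, which is why the MATLAB code later builds it from rows of an identity matrix.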
Model Solution

For the optimization problem
$$\mathop{\arg\min}_{x}\ ||\lambda x||_{1} \qquad (1a)$$
$$\text{s.t.} \quad y = Ax \qquad (1b)$$
variable splitting yields the equivalent problem
$$\mathop{\arg\min}_{x,u}\ ||\lambda u||_{1} \qquad (2a)$$
$$\text{s.t.} \quad y = Ax,\ \ u - x = 0 \qquad (2b)$$
Writing the partial augmented Lagrangian, with multipliers $\lambda_1, \lambda_2$ (only the splitting constraint $u - x = 0$ gets the quadratic penalty):
$$L(x,u,\lambda_1,\lambda_2,\mu) = ||\lambda u||_{1} + \lambda_1^{T}(u-x) + 0.5\mu\,||u-x||_{2}^{2} + \lambda_2^{T}(Ax-y)$$
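The variable $d$ that appears in the iterations below is the scaled multiplier of the splitting constraint. Writing $\lambda_1$ for the multiplier on $u-x$ and completing the square (a standard rewriting, sketched here for clarity):
$$\lambda_1^{T}(u-x) + 0.5\mu\,||u-x||_{2}^{2} = 0.5\mu\,||u-x-d||_{2}^{2} - 0.5\mu\,||d||_{2}^{2}, \qquad d = -\lambda_1/\mu$$
so with the multiplier held fixed, minimizing $L$ over $(x,u)$ is the same as minimizing $||\lambda u||_{1} + 0.5\mu\,||u-x-d||_{2}^{2}$, and the multiplier ascent step $\lambda_1 \leftarrow \lambda_1 + \mu(u-x)$ becomes $d \leftarrow d - (u-x)$.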
The solution algorithm is:
$$\left. \begin{array}{l} \text{initialize: } \mu>0,\ d \\ \text{repeat:} \\ \quad x,u \leftarrow \begin{cases} \mathop{\arg\min}_{x,u}\ ||\lambda u||_{1} + 0.5\mu\,||u-x-d||_{2}^{2} \\ \text{s.t.} \quad Ax=y \end{cases} \\ \quad d \leftarrow d-(u-x) \\ \text{end} \end{array} \right.$$
Minimizing alternately over $x$ and $u$ gives
$$\left. \begin{array}{l} \text{initialize: } \mu>0,\ d \\ \text{repeat:} \\ \quad u \leftarrow \mathop{\arg\min}_{u}\ ||\lambda u||_{1} + 0.5\mu\,||u-x-d||_{2}^{2} \\ \quad x \leftarrow \mathop{\arg\min}_{x}\ ||u-x-d||_{2}^{2} \quad \text{s.t.} \quad Ax=y \\ \quad d \leftarrow d-(u-x) \\ \text{end} \end{array} \right.$$
The minimization over $u$ is soft-thresholding (the $\mathrm{soft}$ function), and the minimization over $x$ is an equality-constrained least-squares problem, giving the algorithm:
$$\left\{ \begin{array}{l} \text{initialize: } \mu>0,\ d \\ \text{repeat:} \\ \quad u \leftarrow \mathrm{soft}(x+d,\ \lambda/\mu) \\ \quad x \leftarrow (u-d) + A^{H}(AA^{H})^{-1}\left(y - A(u-d)\right) \\ \quad d \leftarrow d-(u-x) \\ \text{end} \end{array} \right.$$
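A runnable sketch of this iteration in Python/NumPy (the function names `soft` and `bp_salsa`, the zero initialization, and the iteration count are illustrative choices; the update formulas follow the pseudocode above):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: argmin_u t*||u||_1 + 0.5*||u - v||^2, elementwise.
    Shrinks magnitudes toward zero; works for real or complex v."""
    mag = np.abs(v)
    return np.maximum(mag - t, 0) / np.where(mag > 0, mag, 1) * v

def bp_salsa(A, y, lam=1.0, mu=1.0, n_iter=200):
    """Iterate the scheme above for: min ||lam*x||_1  s.t.  A x = y."""
    AH = A.conj().T
    G = np.linalg.inv(A @ AH)                      # (A A^H)^{-1}; assumes full row rank
    proj = lambda v: v + AH @ (G @ (y - A @ v))    # projection onto {x : A x = y}
    x = proj(np.zeros(A.shape[1], dtype=complex))  # feasible starting point
    d = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        u = soft(x + d, lam / mu)
        x = proj(u - d)                            # constrained least-squares step
        d = d - (u - x)
    return x

# Tiny demo: every x-update is a projection, so the iterate stays feasible.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 12)) + 0j
y = A @ (rng.standard_normal(12) + 0j)
x = bp_salsa(A, y)
assert np.allclose(A @ x, y)
```

For large problems one would avoid forming $(AA^{H})^{-1}$ explicitly (e.g. via a Cholesky factorization, or using $AA^{H}=I$ when the rows of $A$ are orthonormal), but the dense version keeps the correspondence with the pseudocode obvious.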
Results

Generate a simulated signal with 200 points:
M = 200;
N = 2^nextpow2(M);  % next power of 2 above M
m = 0:M-1;
f1 = 0.15;
f2 = 0.27;
f3 = 0.45;
y = 2*cos(2*pi*f1*m)+sin(2*pi*f2*m)+0.3*sin(2*pi*f3*m); % complete signal
y = y.';                                                % column vector, M x 1
Randomly drop about 50% of the samples:
%% observed signal
s = rand(M,1) > 0.5; % logical keep-mask
S = diag(s);
S = S(s,:);          % observation matrix (K x M)
y1 = S*y;            % observed vector fed to the algorithm, K points
y1_plot = y;         % observed vector for plotting
y1_plot(~s,:) = nan; % NaN at missing points so they are not drawn
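The same observation setup can be sketched in Python/NumPy (variable names mirror the MATLAB above; the random seed is an arbitrary choice):

```python
import numpy as np

M = 200
m = np.arange(M)
f1, f2, f3 = 0.15, 0.27, 0.45
y = 2*np.cos(2*np.pi*f1*m) + np.sin(2*np.pi*f2*m) + 0.3*np.sin(2*np.pi*f3*m)

rng = np.random.default_rng(0)
s = rng.random(M) > 0.5          # boolean keep-mask, about 50% observed
S = np.eye(M)[s]                 # observation matrix (K x M)
y1 = S @ y                       # observed vector fed to the algorithm, K points
y1_plot = np.where(s, y, np.nan) # full-length copy with NaN at missing points
```

As in the MATLAB version, `S @ y` is equivalent to the fancy-indexing `y[s]`; the explicit matrix is kept only because the algorithm needs $A = SF$.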
The completion result is shown in the figure below:
参考文献:
[1].M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo. Fast image recovery using variable splitting and constrained
optimization. IEEE Trans. Image Process., 19(9):2345–2356, September 2010.
[2].J. Eckstein and D. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal
monotone operators. Math. Program., 5:293–318, 1992.
[3].L1-norm penalized least squares with SALSA. Ivan selesnick.