Logistic Regression
(Figure: class relationship diagram of the Spark LIBLINEAR library)
1、LR
Given a set of training label-instance pairs $\{(x_i, y_i)\}_{i=1}^{l}$, $x_i \in \mathbb{R}^n$, $y_i \in \{-1, 1\}$, $\forall i$,
L2-regularized LR considers the following optimization problem:

$$\min_{w} \; f(w) = \frac{1}{2} w^T w + C \sum_{i=1}^{l} \log\left(1 + e^{-y_i w^T x_i}\right),$$

where $C > 0$ is the regularization parameter.
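The objective can be evaluated in a few lines of plain Python (an illustrative sketch with my own function names, not the Spark LIBLINEAR code; a numerically stable form of $\log(1 + e^{-m})$ is used):

```python
import math

def lr_objective(w, X, y, C):
    """L2-regularized logistic regression objective:
    f(w) = (1/2) w^T w + C * sum_i log(1 + exp(-y_i * w^T x_i)).
    X is a list of dense feature lists; y contains labels in {-1, +1}.
    """
    reg = 0.5 * sum(wj * wj for wj in w)
    loss = 0.0
    for xi, yi in zip(X, y):
        margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
        # log(1 + exp(-m)) computed stably for large |m|
        if margin >= 0:
            loss += math.log1p(math.exp(-margin))
        else:
            loss += -margin + math.log1p(math.exp(margin))
    return reg + C * loss
```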
2、A Trust Region Newton Method (TRON)
TRON obtains the truncated Newton step by approximately solving the trust-region subproblem

$$\min_{d} \; q_t(d) \quad \text{subject to} \quad \|d\| \le \Delta_t, \qquad (2)$$

where $\Delta_t$ is the size of the trust region, and

$$q_t(d) = \nabla f(w^t)^T d + \frac{1}{2} d^T \nabla^2 f(w^t) d$$

is the second-order Taylor approximation of $f(w^t + d) - f(w^t)$.
The conjugate gradient (CG) method is then applied to approximately solve (2).
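A minimal sketch of CG applied to (2) is below (illustrative only, with my own names; the boundary handling follows the Steihaug idea of stopping on the trust-region surface, which is a simplification of the full procedure):

```python
def boundary_tau(d, p, delta):
    """Positive root tau of ||d + tau*p|| = delta (quadratic formula)."""
    dd = sum(x * x for x in d)
    dp = sum(a * b for a, b in zip(d, p))
    pp = sum(x * x for x in p)
    disc = dp * dp + pp * (delta * delta - dd)
    return (-dp + disc ** 0.5) / pp

def trust_region_cg(grad, hess_vec, delta, max_iter=50, tol=1e-8):
    """Approximately minimize q(d) = grad^T d + (1/2) d^T H d
    subject to ||d|| <= delta, by conjugate gradient.
    grad: gradient vector; hess_vec: function returning H*v for a vector v.
    """
    n = len(grad)
    d = [0.0] * n
    r = [-g for g in grad]          # residual r = -grad - H*d (with d = 0)
    p = r[:]                        # initial search direction
    rs = sum(x * x for x in r)
    for _ in range(max_iter):
        if rs ** 0.5 < tol:
            break
        Hp = hess_vec(p)
        alpha = rs / sum(pi * hi for pi, hi in zip(p, Hp))
        d_new = [di + alpha * pi for di, pi in zip(d, p)]
        if sum(x * x for x in d_new) ** 0.5 >= delta:
            # step leaves the trust region: cut back to the boundary
            tau = boundary_tau(d, p, delta)
            return [di + tau * pi for di, pi in zip(d, p)]
        d = d_new
        r = [ri - alpha * hi for ri, hi in zip(r, Hp)]
        rs_new = sum(x * x for x in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return d
```

Note that CG only needs Hessian-vector products `hess_vec(p)`, never the Hessian itself; this is what makes the distributed computation in the next section possible.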
2.1 Distributed Algorithm
We partition the data matrix X and the label vector Y into p disjoint parts.
We can observe that computing (12)-(14) requires only the data partition $X_k$. Therefore, the computation can be done in parallel, with the partitions stored in a distributed manner. After the map functions are computed, the results are reduced to the machine running the TRON algorithm in order to obtain the summation over all partitions.
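The map-then-reduce pattern can be simulated in plain Python (an illustrative sketch, not the Scala/Spark code; it shows only the gradient computation, one of the quantities in (12)-(14), and all names are mine):

```python
import math
from functools import reduce

def partition_gradient(w, X_k, y_k, C):
    """Gradient contribution of one data partition (X_k, y_k):
    C * sum_i (sigma(y_i w^T x_i) - 1) * y_i * x_i,
    where sigma is the logistic function. Needs only the local data."""
    n = len(w)
    g = [0.0] * n
    for xi, yi in zip(X_k, y_k):
        m = yi * sum(wj * xj for wj, xj in zip(w, xi))
        coef = C * (1.0 / (1.0 + math.exp(-m)) - 1.0) * yi
        for j in range(n):
            g[j] += coef * xi[j]
    return g

def distributed_gradient(w, partitions, C):
    """'Map' each partition to its local gradient, then 'reduce' (sum)
    on the driver; the regularization term w is added once at the end."""
    local = [partition_gradient(w, Xk, yk, C) for Xk, yk in partitions]
    summed = reduce(lambda a, b: [x + y for x, y in zip(a, b)], local)
    return [wj + gj for wj, gj in zip(w, summed)]
```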
3、Implementation Design
1) Loop Structure: a while loop is chosen to implement the algorithm
2) Data Encapsulation:
Each instance is stored using two parallel arrays, one for the feature indices and one for the feature values:

index1 index2 index3 index4 index5 …
value1 value2 value3 value4 value5 …
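This two-array layout can be sketched as follows (an illustrative Python class of my own, not the actual Scala data structure):

```python
class SparseInstance:
    """A sparse instance stored as two parallel arrays:
    feature indices and the corresponding nonzero feature values."""
    def __init__(self, indices, values):
        assert len(indices) == len(values)
        self.indices = indices
        self.values = values

    def dot(self, w):
        """Inner product with a dense weight vector w,
        touching only the nonzero features."""
        return sum(w[j] * v for j, v in zip(self.indices, self.values))
```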
3) Using mapPartitions Rather Than map
4) Not caching $\sigma(Y_k X_k w)$
5) Using Broadcast Variables
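The advantage of mapPartitions over map can be illustrated with a framework-free Python sketch (names are mine; the actual implementation runs on Spark): a per-record function is invoked once for every instance, while a per-partition function is invoked only once per partition, so any per-invocation setup cost (e.g. reading a broadcast variable, allocating a local accumulator) is paid p times instead of l times.

```python
def map_style(records, f):
    # one function invocation per record
    return [f(r) for r in records]

def map_partitions_style(partitions, f_part):
    # one function invocation per partition
    out = []
    for part in partitions:
        out.extend(f_part(part))
    return out

def square_each(part):
    # per-partition setup would go here (e.g. reading a broadcast
    # variable once for the whole partition)
    return [x * x for x in part]
```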