Boosting
Gradient boosting
Idea
We can think of the prediction function $f(x)$ itself as a parameter, so it is updated by moving along the (negative) gradient direction. At each iteration $t$:
$$ f_{t+1}(x) = f_t(x) - \gamma \, \frac{\partial L(y, f_t(x))}{\partial f_t(x)} $$
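For example, with the squared-error loss the negative functional gradient is just the residual:

$$ L(y, f(x)) = \tfrac{1}{2}\big(y - f(x)\big)^2 \quad \Longrightarrow \quad -\frac{\partial L(y, f(x))}{\partial f(x)} = y - f(x), $$

so each step nudges the predictions toward the targets.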
Since our model is an additive sum of multiple weak learners, the next weak learner should best fit this gradient term. The training set for it is
$$ \Big( x_i, \; -\frac{\partial L(y_i, f_t(x_i))}{\partial f_t(x_i)} \Big) $$
to get a new predictor $h_m$.
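A minimal sketch of this fitting step, assuming squared-error loss (so the negative gradient is just the residual $y_i - f_t(x_i)$) and a depth-3 regression tree as the weak learner; `fit_weak_learner`, `X`, `y`, and `f_t` are illustrative names:

```python
from sklearn.tree import DecisionTreeRegressor

def fit_weak_learner(X, y, f_t):
    """Fit h_m on the pseudo-residual dataset (x_i, -dL/df_t(x_i))."""
    residuals = y - f_t                    # negative gradient of 1/2 (y - f)^2
    h_m = DecisionTreeRegressor(max_depth=3)
    h_m.fit(X, residuals)                  # weak learner trained on the residuals
    return h_m
```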
Find $\gamma_m$ such that
$$ \gamma_m = \underset{\gamma}{\operatorname{arg\,min}} \sum_{i=1}^n L\big(y_i, \, f_t(x_i) + \gamma \, h_m(x_i)\big), $$
and then update $f_{t+1}(x) = f_t(x) + \gamma_m \, h_m(x)$.
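A sketch of this 1-D line search, again under a squared-error loss; `f_t` and `h_pred` stand for the arrays of current predictions $f_t(x_i)$ and weak-learner outputs $h_m(x_i)$ on the training set (illustrative names, not from any particular library):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def line_search(y, f_t, h_pred):
    """Find gamma_m that minimizes the training loss along the h_m direction."""
    loss = lambda gamma: np.mean((y - (f_t + gamma * h_pred)) ** 2)
    return minimize_scalar(loss).x         # scalar minimizer of the 1-D loss
```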
One example is the Gradient Boosted Decision Tree (GBDT).
link
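As a usage sketch, scikit-learn's `GradientBoostingRegressor` runs this kind of loop (fit a tree to the pseudo-residuals each round, then shrink its contribution by a learning rate); the data and parameters below are just illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# 100 boosting rounds; each depth-3 tree's contribution is scaled by learning_rate
gbdt = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_depth=3)
gbdt.fit(X, y)
print(gbdt.predict(X[:3]))
```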
AdaBoost
Idea
For classification, our final prediction function is still a weighted sum of weak learners.
Instead of the 0-1 loss, we use the exponential loss function.
The details are in the wiki. Basically, we adjust the weight on each data point: more weight is put on the points that were previously misclassified.
Here is the link
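A minimal sketch of this reweighting loop (discrete AdaBoost, labels assumed to be in $\{-1, +1\}$, decision stumps as weak learners; the function and variable names are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, n_rounds=50):
    """Discrete AdaBoost; y must take values in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # start with uniform sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)    # weak learner on the re-weighted data
        pred = stump.predict(X)
        err = np.sum(w[pred != y]) / np.sum(w)
        alpha = 0.5 * np.log((1 - err) / (err + 1e-12))
        w *= np.exp(-alpha * y * pred)      # up-weight the misclassified points
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

# Final prediction is the alpha-weighted vote: sign( sum_m alpha_m * h_m(x) )
```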
XGBoost vs GBDT
GBDT uses (first-order) gradient boosting.
XGBoost uses gradient boosting with second-order information: basically, it expands the loss function with a second-order Taylor approximation, where our $h_m$ is treated as the delta perturbation in the expansion. It then finds the optimal additive weak learner $h_m$ that minimizes this Taylor approximation.
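A sketch of that expansion, following the standard XGBoost formulation (here $g_i$ and $H_i$ denote the first and second derivatives of the loss at the current prediction $f_t(x_i)$):

$$ L\big(y_i, f_t(x_i) + h_m(x_i)\big) \;\approx\; L\big(y_i, f_t(x_i)\big) + g_i \, h_m(x_i) + \tfrac{1}{2} H_i \, h_m(x_i)^2. $$

Minimizing this quadratic (plus an $\ell_2$ penalty $\lambda$ on the leaf weights) per tree leaf gives the closed-form optimal leaf weight $-\sum_i g_i \,/\, (\sum_i H_i + \lambda)$, which is how XGBoost scores candidate splits.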