XGBoost: Introduction to Boosted Trees

Source: https://xgboost.readthedocs.io/en/stable/tutorials/model.html

Introduction to Boosted Trees

XGBoost stands for “Extreme Gradient Boosting”, where the term “Gradient Boosting” originates from the paper Greedy Function Approximation: A Gradient Boosting Machine, by Friedman.

Gradient boosted trees have been around for a while, and there is a lot of material on the topic. This tutorial explains boosted trees in a self-contained and principled way using the elements of supervised learning. We think this explanation is cleaner, more formal, and motivates the model formulation used in XGBoost.

Elements of Supervised Learning

XGBoost is used for supervised learning problems, where we use the training data (with multiple features) $x_i$ to predict a target variable $y_i$. Before we learn about trees specifically, let us start by reviewing the basic elements in supervised learning.

Model and Parameters

The model in supervised learning usually refers to the mathematical structure by which the prediction $\hat{y}_i$ is made from the input $x_i$. A common example is a linear model, where the prediction is given as $\hat{y}_i = \sum_j \theta_j x_{ij}$, a linear combination of weighted input features. The prediction value can have different interpretations, depending on the task, e.g., regression or classification. For example, it can be logistic transformed to get the probability of the positive class in logistic regression, and it can also be used as a ranking score when we want to rank the outputs.

The parameters are the undetermined part that we need to learn from data. In linear regression problems, the parameters are the coefficients $\theta$. Usually we will use $\theta$ to denote the parameters (there are many parameters in a model, so our definition here is sloppy).
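
As a toy illustration (mine, not part of the original tutorial), the linear model's prediction is just a dot product between features and parameters. The numbers below are made up:

```python
import numpy as np

# Hypothetical data: 3 examples, 2 features each.
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
theta = np.array([0.5, -0.25])  # parameters we would normally learn from data

# Linear model: y_hat_i = sum_j theta_j * x_ij
y_hat = X @ theta
print(y_hat)  # [0.  0.5 1. ]
```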

Objective Function: Training Loss + Regularization

With judicious choices for $y_i$, we may express a variety of tasks, such as regression, classification, and ranking. The task of training the model amounts to finding the parameters $\theta$ that best fit the training data $x_i$ and labels $y_i$. In order to train the model, we need to define the objective function to measure how well the model fits the training data.

A salient characteristic of objective functions is that they consist of two parts: the training loss and the regularization term:
$$\text{obj}(\theta) = L(\theta) + \Omega(\theta) \qquad (1)$$

where $L$ is the training loss function, and $\Omega$ is the regularization term. The training loss measures how predictive our model is with respect to the training data. A common choice of $L$ is the mean squared error, which is given by
$$L(\theta) = \sum_i (y_i - \hat{y}_i)^2 \qquad (2)$$

Another commonly used loss function is logistic loss, to be used for logistic regression:
$$L(\theta) = \sum_i \bigl[y_i \ln(1+e^{-\hat{y}_i}) + (1-y_i)\ln(1+e^{\hat{y}_i})\bigr] \qquad (3)$$
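
To make the two losses concrete, here is a small NumPy sketch (the labels and raw predictions are made up; note that for the logistic loss, $\hat{y}_i$ is a raw score, not a probability):

```python
import numpy as np

def squared_error(y, y_hat):
    # Training loss (2): sum of squared residuals.
    return np.sum((y - y_hat) ** 2)

def logistic_loss(y, y_hat):
    # Training loss (3): y_hat is a raw margin score.
    return np.sum(y * np.log1p(np.exp(-y_hat)) + (1 - y) * np.log1p(np.exp(y_hat)))

y = np.array([0.0, 1.0, 1.0])
y_hat = np.array([-1.2, 0.7, 2.5])  # made-up raw predictions
print(squared_error(y, y_hat), logistic_loss(y, y_hat))
```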

The regularization term is what people usually forget to add. It controls the complexity of the model, which helps us avoid overfitting. This sounds a bit abstract, so consider the problem in the following picture. You are asked to visually fit a step function given the input data points shown in the upper-left corner of the image. Which solution among the three do you think is the best fit?
[Figure: input data points with three candidate step-function fits]

The correct answer is marked in red. Please consider whether this visually seems a reasonable fit to you. The general principle is that we want both a simple and predictive model. The tradeoff between the two is also referred to as the bias-variance tradeoff in machine learning.

Why introduce the general principle?

The elements introduced above form the basic elements of supervised learning, and they are natural building blocks of machine learning toolkits. For example, you should be able to describe the differences and commonalities between gradient boosted trees and random forests. Understanding the process in a formalized way also helps us to understand the objective that we are learning and the reason behind the heuristics such as pruning and smoothing.

Decision Tree Ensembles

Now that we have introduced the elements of supervised learning, let us get started with real trees. To begin with, let us first learn about the model choice of XGBoost: decision tree ensembles. The tree ensemble model consists of a set of classification and regression trees (CART). Here’s a simple example of a CART that classifies whether someone will like a hypothetical computer game X.

[Figure: an example CART splitting a family by age and daily computer use, with a prediction score on each leaf]
We classify the members of a family into different leaves, and assign them the score on the corresponding leaf. A CART is a bit different from decision trees, in which the leaf only contains decision values. In CART, a real score is associated with each of the leaves, which gives us richer interpretations that go beyond classification. This also allows for a principled, unified approach to optimization, as we will see in a later part of this tutorial.

Usually, a single tree is not strong enough to be used in practice. What is actually used is the ensemble model, which sums the prediction of multiple trees together.

[Figure: a tree ensemble of two trees; each member's final score is the sum of its leaf scores from both trees]
Here is an example of a tree ensemble of two trees. The prediction scores of each individual tree are summed up to get the final score. If you look at the example, an important fact is that the two trees try to complement each other. Mathematically, we can write our model in the form
$$\hat{y}_i = \sum_{k=1}^K f_k(x_i), \quad f_k \in \mathcal{F} \qquad (4)$$
where
$K$ is the number of trees,
$f_k$ is a function in the functional space $\mathcal{F}$, and
$\mathcal{F}$ is the set of all possible CARTs.
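
To make equation (4) concrete, here is a minimal sketch (an illustration, not XGBoost internals) in which each tree is simply a function from a feature vector to a leaf score; the thresholds and scores loosely mirror the two-tree figure above:

```python
# Each "tree" maps a feature vector to a leaf score; the ensemble sums them.
def tree_1(x):  # x = (age, uses_computer_daily); hypothetical splits
    if x[0] < 15:
        return 2.0 if x[1] else 0.1
    return -1.0

def tree_2(x):
    return 0.9 if x[1] else -0.9

def ensemble_predict(trees, x):
    # Equation (4): y_hat = sum_k f_k(x)
    return sum(f(x) for f in trees)

print(ensemble_predict([tree_1, tree_2], (10, True)))   # 2.0 + 0.9 = 2.9
print(ensemble_predict([tree_1, tree_2], (60, False)))  # -1.0 - 0.9 = -1.9
```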

The objective function to be optimized is given by:
$$\text{obj}(\theta) = \sum_{i=1}^n l(y_i, \hat{y}_i) + \sum_{k=1}^K \omega(f_k) \qquad (5)$$
where $\omega(f_k)$ is the complexity of the tree $f_k$, defined in detail later.

Now here comes a trick question: what is the model used in random forests? Tree ensembles! So random forests and boosted trees are really the same model; the difference arises from how we train them. This means that if you write a predictive service for tree ensembles, you only need to write one, and it should work for both random forests and gradient boosted trees. (See Treelite for an actual example.) This is one example of why the elements of supervised learning rock.

Tree Boosting

Now that we have introduced the model, let us turn to training: how should we learn the trees? The answer is, as is always the case for supervised learning models: define an objective function and optimize it!

Let the following be the objective function (remember it always needs to contain training loss and regularization):

$$\text{obj} = \sum_{i=1}^n l(y_i, \hat{y}_i^{(t)}) + \sum_{i=1}^t \omega(f_i) \qquad (6)$$

Additive Training

The first question we want to ask: what are the parameters of trees? You can find that what we need to learn are those functions $f_i$, each containing the structure of the tree and the leaf scores. Learning tree structure is much harder than a traditional optimization problem where you can simply take the gradient. It is intractable to learn all the trees at once. Instead, we use an additive strategy: fix what we have learned, and add one new tree at a time. We write the prediction value at step $t$ as $\hat{y}_i^{(t)}$. Then we have

$$\begin{aligned}
\hat{y}_i^{(0)} &= 0\\
\hat{y}_i^{(1)} &= f_1(x_i) = \hat{y}_i^{(0)} + f_1(x_i)\\
\hat{y}_i^{(2)} &= f_1(x_i) + f_2(x_i) = \hat{y}_i^{(1)} + f_2(x_i)\\
&\dots\\
\hat{y}_i^{(t)} &= \sum_{k=1}^t f_k(x_i) = \hat{y}_i^{(t-1)} + f_t(x_i)
\end{aligned} \qquad (7)$$
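A tiny sketch of this additive scheme (illustrative only: the trees here are constant stand-ins rather than fitted CARTs):

```python
import numpy as np

X = np.random.rand(4, 3)     # made-up feature matrix
y_hat = np.zeros(len(X))     # y_hat^(0) = 0

for t in range(1, 4):
    # In real boosting, f_t would be a CART chosen using the current
    # predictions; a constant function keeps the sketch short.
    f_t = lambda X, c=0.1 * t: np.full(len(X), c)
    y_hat = y_hat + f_t(X)   # y_hat^(t) = y_hat^(t-1) + f_t(x)

print(y_hat)                 # 0.1 + 0.2 + 0.3 = 0.6 for every row
```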

It remains to ask: which tree do we want at each step? A natural thing is to add the one that optimizes our objective.
$$\begin{aligned}
\text{obj}^{(t)} &= \sum_{i=1}^n l(y_i, \hat{y}_i^{(t)}) + \sum_{i=1}^t \omega(f_i)\\
&= \sum_{i=1}^n l(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)) + \omega(f_t) + \mathrm{constant}
\end{aligned} \qquad (8)$$

If we consider using mean squared error (MSE) as our loss function, the objective becomes

$$\begin{aligned}
\text{obj}^{(t)} &= \sum_{i=1}^n \bigl(y_i - (\hat{y}_i^{(t-1)} + f_t(x_i))\bigr)^2 + \sum_{i=1}^t \omega(f_i)\\
&= \sum_{i=1}^n \bigl[2(\hat{y}_i^{(t-1)} - y_i) f_t(x_i) + f_t(x_i)^2\bigr] + \omega(f_t) + \mathrm{constant}
\end{aligned} \qquad (9)$$

The form of MSE is friendly, with a first-order term (usually called the residual) and a quadratic term. For other losses of interest (for example, logistic loss), it is not so easy to get such a nice form. So in the general case, we take the Taylor expansion of the loss function up to the second order:

$$\text{obj}^{(t)} = \sum_{i=1}^n \bigl[l(y_i, \hat{y}_i^{(t-1)}) + g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i)\bigr] + \omega(f_t) + \mathrm{constant} \qquad (10)$$
where the $g_i$ and $h_i$ are defined as
$$g_i = \partial_{\hat{y}_i^{(t-1)}} l(y_i, \hat{y}_i^{(t-1)}), \qquad h_i = \partial^2_{\hat{y}_i^{(t-1)}} l(y_i, \hat{y}_i^{(t-1)})$$
After we remove all the constants, the specific objective at step $t$ becomes:
$$\text{obj}^{(t)} = \sum_{i=1}^n \bigl[g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i)\bigr] + \omega(f_t) \qquad (11)$$

This becomes our optimization goal for the new tree. One important advantage of this definition is that the value of the objective function only depends on g i g_i gi and h i h_i hi. This is how XGBoost supports custom loss functions. We can optimize every loss function, including logistic regression and pairwise ranking, using exactly the same solver that takes g i g_i gi and h i h_i hi as input!
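
This hook is exposed in the XGBoost Python package: `xgb.train` accepts a custom objective that receives the current raw predictions and returns exactly the vectors $g_i$ and $h_i$. Here is a sketch using the logistic loss (the dataset and hyperparameters are made up):

```python
import numpy as np
import xgboost as xgb

def logistic_obj(preds, dtrain):
    # preds are the raw scores y_hat^(t-1); labels are 0/1.
    y = dtrain.get_label()
    p = 1.0 / (1.0 + np.exp(-preds))  # sigmoid
    grad = p - y                      # g_i: first derivative of loss (3)
    hess = p * (1.0 - p)              # h_i: second derivative of loss (3)
    return grad, hess

# Tiny synthetic dataset, just to show the plumbing.
X = np.random.rand(100, 5)
y = (X[:, 0] > 0.5).astype(float)
dtrain = xgb.DMatrix(X, label=y)

# The solver only ever sees the loss through g_i and h_i.
booster = xgb.train({"max_depth": 2}, dtrain, num_boost_round=10, obj=logistic_obj)
```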

Model Complexity

We have introduced the training step, but wait, there is one important thing: the regularization term! We need to define the complexity of the tree, $\omega(f)$. In order to do so, let us first refine the definition of the tree $f(x)$ as:

$$f_t(x) = w_{q(x)}, \quad w \in \mathbb{R}^T, \quad q: \mathbb{R}^d \rightarrow \{1, 2, \cdots, T\} \qquad (12)$$
Here $w$ is the vector of scores on the leaves, $q$ is a function assigning each data point to the corresponding leaf, and $T$ is the number of leaves. In XGBoost, we define the complexity as
$$\omega(f) = \gamma T + \frac{1}{2}\lambda \sum_{j=1}^T w_j^2 \qquad (13)$$
Of course, there is more than one way to define the complexity, but this one works well in practice. The regularization is one part most tree packages treat less carefully, or simply ignore. This was because the traditional treatment of tree learning only emphasized improving impurity, while the complexity control was left to heuristics. By defining it formally, we can get a better idea of what we are learning and obtain models that perform well in the wild.
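
A quick numeric check of (13), with arbitrary $\gamma$ and $\lambda$:

```python
import numpy as np

def omega(w, gamma, lam):
    # Equation (13): gamma * T + (lambda / 2) * sum_j w_j^2
    return gamma * len(w) + 0.5 * lam * np.sum(np.square(w))

# A tree with three leaf scores; gamma = lambda = 1 is an arbitrary choice.
print(omega(np.array([2.0, 0.1, -1.0]), gamma=1.0, lam=1.0))  # 3 + 2.505 = 5.505
```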

The Structure Score

Here is the magical part of the derivation. After re-formulating the tree model, we can write the objective value with the $t$-th tree as:

$$\begin{aligned}
\text{obj}^{(t)} &\approx \sum_{i=1}^n \bigl[g_i w_{q(x_i)} + \frac{1}{2} h_i w_{q(x_i)}^2\bigr] + \gamma T + \frac{1}{2}\lambda \sum_{j=1}^T w_j^2\\
&= \sum_{j=1}^T \Bigl[\bigl(\sum_{i \in I_j} g_i\bigr) w_j + \frac{1}{2}\bigl(\sum_{i \in I_j} h_i + \lambda\bigr) w_j^2\Bigr] + \gamma T
\end{aligned} \qquad (14)$$
where $I_j = \{i \mid q(x_i) = j\}$ is the set of indices of data points assigned to the $j$-th leaf. Notice that in the second line we have changed the index of the summation because all the data points on the same leaf get the same score. We could further compress the expression by defining $G_j = \sum_{i \in I_j} g_i$ and $H_j = \sum_{i \in I_j} h_i$:

$$\text{obj}^{(t)} = \sum_{j=1}^T \bigl[G_j w_j + \frac{1}{2}(H_j + \lambda) w_j^2\bigr] + \gamma T \qquad (15)$$
In this equation, the $w_j$ are independent of each other, and the form $G_j w_j + \frac{1}{2}(H_j+\lambda)w_j^2$ is quadratic in $w_j$. The best $w_j$ for a given structure $q(x)$ and the best objective reduction we can get are:

$$w_j^\ast = -\frac{G_j}{H_j+\lambda}, \qquad \text{obj}^\ast = -\frac{1}{2} \sum_{j=1}^T \frac{G_j^2}{H_j+\lambda} + \gamma T \qquad (16)$$

The last equation measures how good a tree structure $q(x)$ is.

[Figure: for a given tree structure, the statistics $g_i$ and $h_i$ are pushed to the leaves and summed into $G_j$ and $H_j$ to score the tree]
If all this sounds a bit complicated, let's take a look at the picture and see how the scores can be calculated. Basically, for a given tree structure, we push the statistics $g_i$ and $h_i$ to the leaves they belong to, sum the statistics together, and use the formula to calculate how good the tree is. This score is like the impurity measure in a decision tree, except that it also takes the model complexity into account.
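
Here is a sketch of that computation (my own illustration; the statistics, $\lambda$, and $\gamma$ are all made up):

```python
import numpy as np

def structure_score(g, h, leaf_of, n_leaves, lam=1.0, gamma=1.0):
    # Equation (16): best leaf weights and objective for a fixed structure q.
    # g, h: per-instance statistics; leaf_of[i] = leaf index q(x_i).
    G = np.zeros(n_leaves)
    H = np.zeros(n_leaves)
    np.add.at(G, leaf_of, g)   # G_j = sum of g_i over instances in leaf j
    np.add.at(H, leaf_of, h)   # H_j = sum of h_i over instances in leaf j
    w_star = -G / (H + lam)    # w_j* = -G_j / (H_j + lambda)
    obj_star = -0.5 * np.sum(G ** 2 / (H + lam)) + gamma * n_leaves
    return w_star, obj_star

# Five instances falling into two leaves (all numbers invented).
g = np.array([0.4, -0.6, 0.2, -0.1, 0.3])
h = np.array([0.2, 0.2, 0.1, 0.1, 0.2])
print(structure_score(g, h, leaf_of=np.array([0, 0, 1, 1, 1]), n_leaves=2))
```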

Learn the tree structure

Now that we have a way to measure how good a tree is, ideally we would enumerate all possible trees and pick the best one. In practice this is intractable, so we will try to optimize one level of the tree at a time. Specifically, we try to split a leaf into two leaves, and the score it gains is

$$Gain = \frac{1}{2} \left[\frac{G_L^2}{H_L+\lambda} + \frac{G_R^2}{H_R+\lambda} - \frac{(G_L+G_R)^2}{H_L+H_R+\lambda}\right] - \gamma \qquad (17)$$

This formula can be decomposed as

  1. the score on the new left leaf
  2. the score on the new right leaf
  3. the score on the original leaf
  4. the regularization on the additional leaf

We can see an important fact here: if the gain is smaller than $\gamma$, we would do better not to add that branch. This is exactly the pruning technique in tree-based models! By using the principles of supervised learning, we can naturally come up with the reason these techniques work. 😃
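
A direct transcription of (17) with invented leaf statistics; because $\gamma$ is already subtracted, a positive result means the split is worth taking:

```python
def split_gain(G_L, H_L, G_R, H_R, lam=1.0, gamma=0.5):
    # Equation (17): gain from splitting one leaf into left/right children.
    def score(G, H):
        return G * G / (H + lam)
    return 0.5 * (score(G_L, H_L) + score(G_R, H_R)
                  - score(G_L + G_R, H_L + H_R)) - gamma

# Made-up statistics; lam and gamma are arbitrary regularization settings.
print(split_gain(G_L=-1.5, H_L=1.0, G_R=2.0, H_R=1.5))  # ~0.83 > 0: split
```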

For real-valued data, we usually want to search for an optimal split. To do so efficiently, we place all the instances in sorted order, like in the following picture.

[Figure: instances sorted by feature value; a left-to-right scan accumulates $G_L$ and $H_L$ for every candidate split point]
A left-to-right scan is sufficient to calculate the structure score of all possible split solutions, and we can find the best split efficiently.
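
Here is a sketch of that scan for a single feature (assumptions: one feature column, invented statistics, arbitrary $\lambda$ and $\gamma$):

```python
import numpy as np

def best_split(x, g, h, lam=1.0, gamma=0.0):
    # Scan instances sorted by feature value once, tracking the best gain (17).
    order = np.argsort(x)
    x, g, h = x[order], g[order], h[order]
    G, H = g.sum(), h.sum()        # totals for the leaf being split
    G_L = H_L = 0.0
    best_gain, best_threshold = 0.0, None
    for i in range(len(x) - 1):
        G_L += g[i]; H_L += h[i]   # move instance i into the left child
        if x[i] == x[i + 1]:
            continue               # cannot split between equal feature values
        G_R, H_R = G - G_L, H - H_L
        gain = 0.5 * (G_L ** 2 / (H_L + lam) + G_R ** 2 / (H_R + lam)
                      - G ** 2 / (H + lam)) - gamma
        if gain > best_gain:
            best_gain, best_threshold = gain, (x[i] + x[i + 1]) / 2
    return best_gain, best_threshold

g = np.array([0.5, 0.4, -0.6, -0.5])   # invented gradient statistics
h = np.array([0.3, 0.3, 0.3, 0.3])
print(best_split(np.array([1.0, 2.0, 3.0, 4.0]), g, h))  # gain ~0.62 at 2.5
```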

Note:

Limitation of additive tree learning

Since it is intractable to enumerate all possible tree structures, we add one split at a time. This approach works well most of the time, but there are some edge cases that fail due to this approach. For those edge cases, training results in a degenerate model because we consider only one feature dimension at a time. See Can Gradient Boosting Learn Simple Arithmetic? for an example.

Final words on XGBoost

Now that you understand what boosted trees are, you may ask, where is the introduction for XGBoost? XGBoost is exactly a tool motivated by the formal principle introduced in this tutorial! More importantly, it is developed with both deep consideration in terms of systems optimization and principles in machine learning. The goal of this library is to push the extreme of the computation limits of machines to provide a scalable, portable and accurate library. Make sure you try it out, and most importantly, contribute your piece of wisdom (code, examples, tutorials) to the community!
