For a general linear model:
$$h_\theta(\mathbf{X}) = \theta_0 + \theta_1 x_1 + \cdots + \theta_n x_n$$
where the $\theta_i$ are the parameters we need to fit.
Written in matrix form (taking $x_0 = 1$ so that $\theta_0$ acts as the bias term):
$$h(\mathbf{X}) = \sum_{i=0}^{n} \theta_i x_i = \mathbf{\theta}^T \mathbf{X}$$
The loss function is:
$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta\left(\mathbf{X}^{(i)}\right) - y^{(i)} \right)^2$$
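As a quick sanity check of the two formulas above, here is a minimal numpy sketch with toy numbers (the names and values are my own, purely for illustration):

import numpy as np

# m = 3 samples, n = 2 features, with x0 = 1 prepended so theta_0 acts as the bias
X = np.array([[1.0, 2.0, 3.0],
              [1.0, 1.0, 0.0],
              [1.0, 4.0, 5.0]])
y = np.array([14.0, 5.0, 23.0])
theta = np.array([1.0, 2.0, 3.0])

h = X.dot(theta)                          # h_theta(X^(i)) for all samples at once
J = np.sum((h - y) ** 2) / (2 * len(y))   # 1/(2m) * sum of squared errors
print(h, J)                               # [14.  3. 24.] 0.8333...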
Here $m$ is the number of samples, and the factor of 2 in the denominator is included so that it cancels later when taking partial derivatives, keeping the formulas tidy. The goal of optimization is to minimize this loss function, i.e.:
$$\underset{\theta}{\arg\min}\; J(\theta)$$
Following the idea of gradient descent, the optimal parameters are approached through repeated updates:
$$\theta_j := \theta_j - \eta \frac{\partial J(\theta)}{\partial \theta_j}$$
To evaluate the gradient $\frac{\partial J(\theta)}{\partial\theta_j}$ we need the partial derivative with respect to each parameter; iterating the update rule above then yields the unknown parameter values. The derivation:
$$
\begin{aligned}
\frac{\partial}{\partial\theta_j} J(\theta)
&= \frac{\partial}{\partial\theta_j} \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta\left(\mathbf{X}^{(i)}\right) - y^{(i)} \right)^2 \\
&= \frac{1}{2m} \frac{\partial}{\partial\theta_j} \left[ \left( h_\theta\left(\mathbf{X}^{(1)}\right) - y^{(1)} \right)^2 + \cdots + \left( h_\theta\left(\mathbf{X}^{(m)}\right) - y^{(m)} \right)^2 \right] \\
&= \frac{1}{2m} \left[ 2\left( h_\theta\left(\mathbf{X}^{(1)}\right) - y^{(1)} \right) \frac{\partial}{\partial\theta_j} \left( \sum_{k=0}^{n} \theta_k x_k^{(1)} - y^{(1)} \right) + \cdots + 2\left( h_\theta\left(\mathbf{X}^{(m)}\right) - y^{(m)} \right) \frac{\partial}{\partial\theta_j} \left( \sum_{k=0}^{n} \theta_k x_k^{(m)} - y^{(m)} \right) \right] \\
&= \frac{1}{m} \left[ \left( h_\theta\left(\mathbf{X}^{(1)}\right) - y^{(1)} \right) x_j^{(1)} + \cdots + \left( h_\theta\left(\mathbf{X}^{(m)}\right) - y^{(m)} \right) x_j^{(m)} \right] \\
&= \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta\left(\mathbf{X}^{(i)}\right) - y^{(i)} \right) x_j^{(i)}
\end{aligned}
$$
In vectorized form this is simply:
$$\nabla_\theta J(\theta) = \frac{1}{m} \mathbf{X}^T L$$
where $\mathbf{X}^T$ is the transpose of the data matrix and $L$ is the residual vector (prediction minus label) computed before each update.
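To convince yourself that this vectorized expression matches the element-wise derivation, you can compare it against a finite-difference approximation of $J$. A small self-contained check (all names here are mine, not from the assignment):

import numpy as np

def J(theta, X, y):
    r = X.dot(theta) - y
    return np.sum(r ** 2) / (2 * len(y))

def grad(theta, X, y):
    # Vectorized gradient from above: (1/m) * X^T * L
    return X.T.dot(X.dot(theta) - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = rng.normal(size=5)
theta = rng.normal(size=3)

# Central differences along each coordinate direction
eps = 1e-6
numeric = np.array([(J(theta + eps * e, X, y) - J(theta - eps * e, X, y)) / (2 * eps)
                    for e in np.eye(3)])
print(np.allclose(grad(theta, X, y), numeric))  # should print True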
Now for Homework 1 of Hung-yi Lee's (李宏毅) deep learning course.
1. Preprocess the dataset: extract the PM2.5 records, and treat every 10 consecutive hours as one sample, with the first 9 hours of PM2.5 values as features and the 10th hour's value as the label.
In my case the column names came out garbled after reading the CSV, so I cleaned them up first:
import pandas as pd
import numpy as np

# The Big5 headers are mojibake when the file is read as ISO-8859-1, so rename them
df = pd.read_csv('train.csv', encoding='ISO-8859-1')
df = df.rename(columns={'¤é´Á': 'date', '´ú¯¸': 'stations', '´ú¶µ': 'observations'})
df['stations'] = 'station'

# Build the dataset: keep only the PM2.5 rows and their 24 hourly value columns
df_pm25 = df[df.observations == 'PM2.5'].loc[:, '0':]
X_train = []
y_train = []
for idx, row in df_pm25.iterrows():
    # Slide a 10-hour window across the 24 hours: 15 windows per row
    for i in range(15):
        cluster = row[i:i+10].tolist()
        X_train.append(cluster[:9])   # first 9 hours as features
        y_train.append(cluster[9])    # 10th hour as the label
X_train = np.asarray(X_train, dtype=np.float64)
y_train = np.asarray(y_train, dtype=np.float64)
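Before training it is worth verifying the shapes. With the standard HW1 train.csv (240 days of PM2.5 rows with 24 hourly columns, hence 15 windows per row) you should see something like the following; if your file differs, the sample count will too:

print(X_train.shape, y_train.shape)  # expect (3600, 9) and (3600,)
print(X_train[0], y_train[0])        # first window: 9 feature hours and its label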
The code below follows the pseudocode provided in the course, using Adagrad for the parameter updates. If you are not sure how many iterations are enough, save the loss at every step and plot it afterwards (a plotting sketch follows the code).
learning_rate = 0.001
iterations = 10000000
# W's length must match the number of features in your dataset, i.e. the column count
W = np.zeros(len(X_train[0]))
s_grad = np.zeros(len(X_train[0]))
lossVals = []
for i in range(iterations):
    y_pred = np.dot(X_train, W)
    loss = y_pred - y_train
    # Gradient as derived above: proportional to X^T dotted with the residual vector.
    # Constant factors (the 2 here, or 1/m) cancel in Adagrad's update, because
    # scaling the gradient scales sqrt(s_grad) by the same amount.
    gradient = 2 * np.dot(X_train.transpose(), loss)
    s_grad += gradient ** 2
    ada = np.sqrt(s_grad) + 1e-8  # small epsilon guards against division by zero
    W = W - learning_rate * gradient / ada
    lossVals.append(np.mean(loss ** 2))  # track the MSE, not the signed mean residual
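Since lossVals now holds the MSE at every iteration, a quick plot shows whether training has converged (this assumes matplotlib is available; it is not part of the assignment code):

import matplotlib.pyplot as plt

plt.plot(lossVals)
plt.xlabel('iteration')
plt.ylabel('mean squared error')
plt.show()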