The principle behind vanishing and exploding gradients
Differentiation basics
Take $y = x^2$ as a running example.
$\mathrm{d}y$ — the differential of $y$
$\Large \frac{\mathrm{d}y}{\mathrm{d}x}$ — the derivative of $y$ with respect to $x$ (for a function of several variables, the partial derivative is written $\frac{\partial y}{\partial x}$)
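For the running example, writing the derivative out from the limit definition gives:

$$\frac{\mathrm{d}y}{\mathrm{d}x} = \lim_{\Delta x \to 0}\frac{(x+\Delta x)^2 - x^2}{\Delta x} = \lim_{\Delta x \to 0}\left(2x + \Delta x\right) = 2x$$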
RNN derivation
Forward propagation:
$a_t = w_x x_t + w_h h_{t-1} + b_t$
$h_t = \sigma(a_t)$
$\hat{y} = \mathrm{softmax}(w_y h_t + b_y)$
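As a concrete reading of these three equations, here is a minimal NumPy sketch of one time step. The shapes, the choice of tanh for $\sigma$, and every name besides the symbols in the equations are assumptions for illustration only:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_step(x_t, h_prev, w_x, w_h, w_y, b_t, b_y):
    """One forward step: a_t = w_x x_t + w_h h_{t-1} + b_t, h_t = sigma(a_t), y_hat = softmax(w_y h_t + b_y)."""
    a_t = w_x @ x_t + w_h @ h_prev + b_t   # pre-activation
    h_t = np.tanh(a_t)                     # sigma taken as tanh here; sigmoid is another common choice
    y_hat = softmax(w_y @ h_t + b_y)       # output distribution over classes
    return h_t, y_hat

# tiny usage example with made-up sizes
hidden, inp, out = 4, 3, 2
rng = np.random.default_rng(0)
w_x = rng.normal(size=(hidden, inp))
w_h = rng.normal(size=(hidden, hidden))
w_y = rng.normal(size=(out, hidden))
b_t, b_y = np.zeros(hidden), np.zeros(out)
h, y_hat = rnn_step(rng.normal(size=inp), np.zeros(hidden), w_x, w_h, w_y, b_t, b_y)
```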
Define the loss:
Use log loss. TODO: why does the multi-class log loss take the form below, rather than
$loss = \sum\left[-y\log(\hat{y})-(1-y)\log(1-\hat{y})\right]$
$loss = \mathcal{L} = \displaystyle\sum_{i=1}^{n}-y_i\log(\hat{y_i})$
(With a one-hot label $y_i$ and a softmax output, only the true-class term in the sum is nonzero; the binary form above is just the two-class special case, where $\hat{y}_2 = 1-\hat{y}_1$.)
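A minimal NumPy sketch of both loss forms on a single example; the prediction and label values here are made up purely for illustration:

```python
import numpy as np

y_hat = np.array([0.7, 0.2, 0.1])   # softmax output over 3 classes (made-up values)
y     = np.array([1.0, 0.0, 0.0])   # one-hot label: the true class is class 0

# Multi-class log loss: only the true-class term survives the sum
multiclass_loss = -np.sum(y * np.log(y_hat))          # = -log(0.7)

# Binary form applied to a 2-class problem (the second class has probability 1 - p)
p = 0.7                                               # predicted probability of the positive class
t = 1.0                                               # true label
binary_loss = -(t * np.log(p) + (1 - t) * np.log(1 - p))
```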
$\Large \frac{\mathrm{d}\mathcal{L}}{\mathrm{d}w_x} = \frac{\mathrm{d}\mathcal{L}}{\mathrm{d}a_t}\,\frac{\mathrm{d}a_t}{\mathrm{d}w_x} = \frac{\mathrm{d}\mathcal{L}}{\mathrm{d}a_t}\,x_t$
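For reference, the single-step (local) partials of $a_t$ with respect to each parameter follow directly from the forward equation. Note that $h_{t-1}$ itself depends on $w_x$, $w_h$, and $b_t$ through earlier time steps, so these are only the direct terms; the contributions from earlier steps are what the chain rule unrolls into:

$$\frac{\partial a_t}{\partial w_x} = x_t,\qquad \frac{\partial a_t}{\partial w_h} = h_{t-1},\qquad \frac{\partial a_t}{\partial b_t} = 1$$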