Membrane voltage equation

$$\tau_m\frac{du}{dt}=-[u-u_{rest}]+RI(t)$$
Iterative form 1

$$u(t+\Delta t)=u(t)e^{-\frac{\Delta t}{\tau_m}}+RI(t)$$
Derivation

At $t=t_0$, $u(t)=u_{rest}+\Delta u$; for $t>t_0$ the input current $I(t)$ has decayed to zero, i.e. $RI(t)=0$. Taking $t_0$ as the initial state:
$$\frac{1}{u-u_{rest}}du=-\frac{1}{\tau_m}dt$$

$$\int_{u_{rest}+\Delta u}^{u} \frac{1}{u-u_{rest}}\,du=-\int_{t_0}^{t} \frac{1}{\tau_m}\,dt$$

$$\int_{u_{rest}+\Delta u}^{u} \frac{1}{u-u_{rest}}\,d(u-u_{rest})=-\frac{1}{\tau_m}\int_{t_0}^{t} dt$$

$$\ln(u-u_{rest})\Big|_{u_{rest}+\Delta u}^{u}=-\frac{t}{\tau_m}\Big|_{t_0}^{t}$$

$$\ln(u-u_{rest})-\ln(\Delta u)=-\frac{t-t_0}{\tau_m}$$

$$u-u_{rest}=e^{\ln\Delta u-\frac{t-t_0}{\tau_m}}=\Delta u\, e^{-\frac{t-t_0}{\tau_m}}$$

The solution is therefore

$$u-u_{rest}=\Delta u\, e^{-\frac{t-t_0}{\tau_m}}$$
In general one takes $u_{rest}=0$, and to simplify the computation $RI(t)$ is treated as a constant within a time step, which gives the iterative form

$$u(t+\Delta t)=u(t)e^{-\frac{\Delta t}{\tau_m}}+RI(t)$$
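As a sanity check, iterative form 1 can be simulated directly. The sketch below uses illustrative parameter values ($\tau_m=2$, $R=1$, $\Delta t=1$, not taken from the references) and shows the membrane potential decaying exponentially toward $u_{rest}=0$ when the input current is zero:

```python
import math

def lif_step(u, I, tau_m=2.0, R=1.0, dt=1.0):
    """One update of iterative form 1: u(t+dt) = u(t)*exp(-dt/tau_m) + R*I(t)."""
    return u * math.exp(-dt / tau_m) + R * I

# With I = 0 the potential decays toward u_rest = 0, losing a factor
# exp(-dt/tau_m) = exp(-0.5) ≈ 0.6065 per step.
u = 1.0
trace = [u]
for _ in range(5):
    u = lif_step(u, I=0.0)
    trace.append(u)
print(trace)
```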
Application

Formula (2) in [1]:

$$u(t)=u(t_{i-1})e^{\frac{t_{i-1}-t}{\tau}}+I(t)$$
[1] Wu Y, Deng L, Li G, Zhu J, Shi L. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks[J]. Frontiers in Neuroscience, 2018, 12.
[2]https://blog.csdn.net/qq_34886403/article/details/75735448
This finally yields the iterative equations of STBP in the temporal domain (TD):

$$\left\{ \begin{aligned} x^{t+1,n}_i &= \sum_{j=1}^{l(n-1)}w_{ij}^no_j^{t+1,n-1}\\ u^{t+1,n}_i &=u^{t,n}_i f(o^{t,n}_i)+x^{t+1,n}_i+b^n_i\\ o^{t+1,n}_i &=g(u^{t+1,n}_i) \end{aligned} \right.$$
where

$$f(x)=\tau e^{-\frac{x}{\tau}}$$

$$g(x)=\left\{ \begin{aligned} &1, && x \ge V_{th}\\ &0, && x < V_{th} \end{aligned} \right.$$
In the equations above, $x_i$ is a simplified representation of the presynaptic input to the $i$-th neuron, analogous to $I$ in the original LIF model; $u_i$ is the membrane potential of the $i$-th neuron; and $b_i$ is a bias parameter related to the threshold $V_{th}$.
Borrowing the idea of the LSTM, a forget gate $f(\cdot)$ controls the TD memory and an output gate $g(\cdot)$ fires spikes: the forget gate determines how much of the membrane potential leaks away in the temporal domain, and the output gate emits a spike when activated. For a small time constant $\tau$, $f(\cdot)$ can be approximated as

$$f(x)=\tau e^{-\frac{x}{\tau}}$$

$$f(o_i^{t,n})=\left\{ \begin{aligned} &\tau, && o_i^{t,n}=0\\ &0, && o_i^{t,n}=1 \end{aligned} \right.$$

since $\tau e^{-\frac{1}{\tau}}\approx 0$. In this way the original LIF model is converted into an iterative version, convenient for the subsequent backpropagation.
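The iterative LIF forward pass above can be sketched for a single neuron. The constant input $x=0.9$, $\tau=0.25$, and $V_{th}=1.0$ below are illustrative choices, not values from [1]; the forget gate leaks the potential after each spike and the output gate fires when the threshold is crossed:

```python
import math

def f_gate(o, tau=0.25):
    """Forget gate f(x) = tau * exp(-x/tau): roughly tau for o = 0,
    roughly 0 for o = 1 (since tau * exp(-1/tau) ≈ 0 for small tau)."""
    return tau * math.exp(-o / tau)

def g_gate(u, v_th=1.0):
    """Output gate g(u): fire a spike when the potential reaches threshold."""
    return 1.0 if u >= v_th else 0.0

def stbp_step(u_prev, o_prev, x, b=0.0):
    """One TD update: u^{t+1} = u^t * f(o^t) + x^{t+1} + b, o^{t+1} = g(u^{t+1})."""
    u = u_prev * f_gate(o_prev) + x + b
    return u, g_gate(u)

# Drive one neuron with a constant presynaptic input x = 0.9: it charges,
# fires, is leaked back down by the forget gate, and charges again.
u, o = 0.0, 0.0
spikes = []
for _ in range(6):
    u, o = stbp_step(u, o, x=0.9)
    spikes.append(int(o))
print(spikes)  # → [0, 1, 0, 1, 0, 1]
```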
Iterative form 2

Derivation

From the membrane voltage equation

$$\tau \frac{du}{dt}=-u+I,\quad u<V_{th}$$
Euler method:

$$du=u^{t+1}-u^t$$
Substituting gives

$$\tau \frac{u^{t+1}-u^t}{dt}=-u^t+I$$

$$u^{t+1}-u^t=-\frac{dt}{\tau}u^t+\frac{dt}{\tau}I$$
which yields the iterative form

$$u^{t+1}=\left(1-\frac{dt}{\tau}\right)u^t+\frac{dt}{\tau}I$$
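This Euler update can be checked numerically against the exact exponential decay derived earlier. The sketch below (with illustrative values $\tau=10$, $dt=1$, $I=0$) compares the two:

```python
import math

def euler_lif_step(u, I, tau=10.0, dt=1.0):
    """Iterative form 2: u^{t+1} = (1 - dt/tau) * u^t + (dt/tau) * I."""
    return (1.0 - dt / tau) * u + (dt / tau) * I

# With I = 0 the Euler update (1 - dt/tau)^n approximates the exact
# solution u0 * exp(-n*dt/tau); the smaller dt/tau, the closer they are.
u_euler, u0, tau, dt = 1.0, 1.0, 10.0, 1.0
for step in range(1, 6):
    u_euler = euler_lif_step(u_euler, I=0.0, tau=tau, dt=dt)
    u_exact = u0 * math.exp(-step * dt / tau)
    print(step, round(u_euler, 4), round(u_exact, 4))
```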
Application

[3] Wu Y, Deng L, Li G, Zhu J, Shi L. Direct Training for Spiking Neural Networks: Faster, Larger, Better[J]. CoRR, 2018, abs/1809.05793.
- Replace $1-\frac{dt}{\tau}$ with a decay factor $k_{\tau 1}$.
- Expand the current $I$ into an explicit sum $\sum_{j}w_jo(j)$, where $j$ indexes the presynaptic neurons and $o(j)$ indicates whether presynaptic neuron $j$ fires, taking the value 1 or 0 (the factor $\frac{dt}{\tau}$ can be absorbed into the weights $w_j$).
This gives

$$u^{t+1}=k_{\tau 1}u^t+\sum_{j}w_jo(j)$$
Then a firing-and-resetting mechanism is added: when a spike fires, the membrane potential is reset to the resting potential. Assuming $u_{rest}=0$:

$$\left\{ \begin{aligned} u^{t+1,n+1}(i) & = k_{\tau 1}u^{t,n+1}(i)\left(1-o^{t,n+1}(i)\right)+\sum_{j=1}^{l(n)}w_{ij}^{n}o^{t+1,n}(j) \\ o^{t+1,n+1}(i) & = f\left(u^{t+1,n+1}(i) -V_{th}\right) \end{aligned} \right.$$
where $w_{ij}^n$ is the synaptic weight from the $j$-th neuron in layer $n$ to the $i$-th neuron in layer $n+1$, and $l(n)$ is the number of neurons in layer $n$. $f(\cdot)$ is given by

$$f(x)=\left\{ \begin{aligned} &0, && x<0 \\ &1, && x\ge 0 \end{aligned} \right.$$
From the equations above:

If $u^{t+1,n+1}(i)\ge V_{th}$, then $o^{t+1,n+1}(i)=1$ and a spike fires.

If $o^{t,n+1}(i)=1$, i.e. a spike fired at time $t$, the membrane potential is reset at time $t+1$:

$$u^{t+1,n+1}(i)=\sum_{j=1}^{l(n)}w_{ij}^{n}o^{t+1,n}(j)$$

If $o^{t,n+1}(i)=0$, no spike fired and the potential is not reset.
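The firing-and-resetting update can be sketched as follows. The weights, $k_{\tau 1}=0.9$, and $V_{th}=1.0$ are illustrative, not values from [3]; neuron 0 integrates, fires at the second step, and is reset at the third, while neuron 1 receives too little input to fire:

```python
def direct_train_step(u_prev, o_prev, w, o_in, k_tau1=0.9, v_th=1.0):
    """One firing-and-resetting update:
    u^{t+1}(i) = k_tau1 * u^t(i) * (1 - o^t(i)) + sum_j w[i][j] * o_in[j]
    o^{t+1}(i) = 1 if u^{t+1}(i) >= v_th else 0
    The factor (1 - o^t(i)) resets the potential to 0 after a spike."""
    u = [k_tau1 * up * (1 - op) + sum(wij * oj for wij, oj in zip(wi, o_in))
         for up, op, wi in zip(u_prev, o_prev, w)]
    o = [1 if ui >= v_th else 0 for ui in u]
    return u, o

# Two neurons, three presynaptic inputs that all fire every step.
w = [[0.6, 0.0, 0.0],   # neuron 0: strong drive, crosses threshold
     [0.1, 0.1, 0.1]]   # neuron 1: weak drive, never fires
u, o = [0.0, 0.0], [0, 0]
history = []
for _ in range(3):
    u, o = direct_train_step(u, o, w, o_in=[1, 1, 1])
    history.append(o[:])
print(history)  # → [[0, 0], [1, 0], [0, 0]]
```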