The LIF Neuron Membrane Voltage Equation: Iterative Derivation and Its Application in STBP

Membrane voltage equation

$$\tau_m\frac{du}{dt}=-[u-u_{rest}]+RI(t)$$

Iterative form 1

$$u(t+\Delta t)=u(t)e^{-\frac{\Delta t}{\tau_m}}+RI(t)$$

Derivation

At $t=t_0$, $u(t)=u_{rest}+\Delta u$; for $t>t_0$, the input current $I(t)$ has decayed to zero, i.e. $RI(t)=0$. Taking $t_0$ as the initial state:
$$\frac{1}{u-u_{rest}}\,du=-\frac{1}{\tau_m}\,dt$$
$$\int_{u_{rest}+\Delta u}^{u}\frac{1}{u-u_{rest}}\,du=-\int_{t_0}^{t}\frac{1}{\tau_m}\,dt$$
$$\int_{u_{rest}+\Delta u}^{u}\frac{1}{u-u_{rest}}\,d(u-u_{rest})=-\frac{1}{\tau_m}\int_{t_0}^{t}dt$$
$$\ln(u-u_{rest})\Big|_{u_{rest}+\Delta u}^{u}=-\frac{t}{\tau_m}\Big|_{t_0}^{t}$$
$$\ln(u-u_{rest})-\ln(\Delta u)=-\frac{t-t_0}{\tau_m}$$
$$u-u_{rest}=e^{\ln\Delta u-\frac{t-t_0}{\tau_m}}=\Delta u\,e^{-\frac{t-t_0}{\tau_m}}$$
which gives the solution
$$u-u_{rest}=\Delta u\,e^{-\frac{t-t_0}{\tau_m}}$$
In general we take $u_{rest}=0$; to simplify the computation, $RI(t)$ is treated as a constant over each step, which yields the iterative form
$$u(t+\Delta t)=u(t)e^{-\frac{\Delta t}{\tau_m}}+RI(t)$$
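As a sanity check, the iterative form can be sketched numerically. A minimal sketch, assuming $u_{rest}=0$ as above; the values of `dt`, `tau_m`, and `R` are illustrative assumptions:

```python
import math

def lif_step(u, I, dt=1.0, tau_m=10.0, R=1.0):
    """One step of the exponential-decay iterative form:
    u(t + dt) = u(t) * exp(-dt / tau_m) + R * I(t)."""
    return u * math.exp(-dt / tau_m) + R * I

# With zero input, the membrane potential decays toward u_rest = 0:
u = 1.0
for _ in range(5):
    u = lif_step(u, I=0.0)  # after 5 steps of dt = 1: u = exp(-5 / tau_m)
```

With `tau_m = 10.0`, five unit steps leave `u = exp(-0.5)`, matching the closed-form decay $\Delta u\,e^{-(t-t_0)/\tau_m}$.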

Application

Equation (2) in [1]:
$$u(t)=u(t_{i-1})e^{-\frac{t-t_{i-1}}{\tau}}+I(t)$$

[1]Wu Y, Deng L, Li G, Zhu J, Shi L P. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks[J]. Frontiers in Neuroscience, 2017, 12.
[2]https://blog.csdn.net/qq_34886403/article/details/75735448

Finally, the iterative form in the temporal domain (TD) of STBP is

$$\left\{\begin{aligned} x^{t+1,n}_i &= \sum_{j=1}^{l(n-1)} w^n_{ij}\, o^{t+1,n-1}_j\\ u^{t+1,n}_i &= u^{t,n}_i f(o^{t,n}_i)+x^{t+1,n}_i+b^n_i\\ o^{t+1,n}_i &= g(u^{t+1,n}_i) \end{aligned}\right.$$
where
$$f(x)=\tau e^{-\frac{x}{\tau}}$$

$$g(x)=\left\{\begin{aligned} 1,&\quad x\ge V_{th}\\ 0,&\quad x< V_{th} \end{aligned}\right.$$
In the equations above, $x_i$ is a simplified representation of the presynaptic input to the $i$-th neuron, analogous to $I$ in the original LIF model; $u_i$ is the membrane potential of the $i$-th neuron; and $b_i$ is a bias parameter related to the threshold $V_{th}$.
Borrowing ideas from LSTM, a forget gate $f(\cdot)$ is used to control the TD memory and an output gate $g(\cdot)$ to fire spikes: the forget gate controls how much of the cached membrane potential leaks in the temporal domain, and the output gate emits a spike when activated. For a small time constant $\tau$, the forget gate
$$f(x)=\tau e^{-\frac{x}{\tau}}$$
can be approximated as
$$f(o^{t,n}_i)=\left\{\begin{aligned} \tau,&\quad o^{t,n}_i=0\\ 0,&\quad o^{t,n}_i=1 \end{aligned}\right.$$
since $\tau e^{-\frac{1}{\tau}}\approx 0$. In this way the original LIF model is converted into an iterative version that is convenient for subsequent backpropagation.
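The three TD equations above can be sketched as a single forward time step of one layer. A minimal NumPy sketch; the layer shapes and the values of `tau` and `v_th` are illustrative assumptions:

```python
import numpy as np

def stbp_forward_step(u_prev, o_prev, o_in, W, b, tau=0.25, v_th=0.5):
    """One time step of the STBP iterative LIF layer:
    the forget gate f(.) leaks the cached potential (and ~zeroes it after a spike),
    the output gate g(.) fires a spike when the potential reaches V_th."""
    f = tau * np.exp(-o_prev / tau)   # forget gate: ~tau if no spike, ~0 after a spike
    x = W @ o_in                      # presynaptic input x_i
    u = u_prev * f + x + b            # membrane potential update
    o = (u >= v_th).astype(float)     # output gate g(.)
    return u, o

# Two neurons; only the first receives an input spike:
W = np.eye(2)
b = np.zeros(2)
u, o = stbp_forward_step(np.zeros(2), np.zeros(2), np.array([1.0, 0.0]), W, b)
```

Here neuron 0 crosses `v_th` and fires (`o[0] == 1`), while neuron 1 stays silent; on the next step, `f` would be near zero for neuron 0, implementing the leak/reset behavior described above.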

Iterative form 2

Derivation

From the membrane voltage equation
$$\tau\frac{du}{dt}=-u+I,\quad u<V_{th}$$
apply the Euler method:
$$du=u^{t+1}-u^t$$
Substituting gives
$$\tau\frac{u^{t+1}-u^t}{dt}=-u^t+I$$
$$u^{t+1}-u^t=-\frac{dt}{\tau}u^t+\frac{dt}{\tau}I$$
which yields the iterative form
$$u^{t+1}=\left(1-\frac{dt}{\tau}\right)u^t+\frac{dt}{\tau}I$$
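For small $dt/\tau$, this Euler update approximates the exponential decay of iterative form 1, since $1-\frac{dt}{\tau}\approx e^{-dt/\tau}$. A minimal sketch; the values of `dt` and `tau` are illustrative assumptions:

```python
def euler_lif_step(u, I, dt=0.1, tau=10.0):
    """Euler discretization: u_{t+1} = (1 - dt/tau) * u_t + (dt/tau) * I."""
    return (1.0 - dt / tau) * u + (dt / tau) * I

# Integrate 10 time units (100 steps of dt = 0.1) with no input:
u = 1.0
for _ in range(100):
    u = euler_lif_step(u, I=0.0)
# (1 - 0.01)**100 ~ 0.366, close to the exact decay exp(-1) ~ 0.368
```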

Application

[3]Wu Y, Deng L, Li G, Zhu J, Shi L. Direct Training for Spiking Neural Networks: Faster, Larger, Better[J]. CoRR, 2018, abs/1809.05793.

  1. Replace $1-\frac{dt}{\tau}$ with a decay factor $k_{\tau 1}$.
  2. Expand the current $I$ into an explicit sum $\sum_j w_j o(j)$, where $j$ indexes the presynaptic neurons and $o(j)\in\{0,1\}$ indicates whether the corresponding presynaptic neuron fired a spike.
    This gives
    $$u^{t+1}=k_{\tau 1}u^t+\sum_j w_j o(j)$$
    A firing-and-resetting mechanism is then added: when a spike is fired, the membrane potential is reset to the resting potential.
    Assuming $u_{rest}=0$:
    $$\left\{\begin{aligned} u^{t+1,n+1}(i) &= k_{\tau 1}u^{t,n+1}(i)\left(1-o^{t,n+1}(i)\right)+\sum_{j=1}^{l(n)}w^n_{ij}\,o^{t+1,n}(j)\\ o^{t+1,n+1}(i) &= f\left(u^{t+1,n+1}(i)-V_{th}\right) \end{aligned}\right.$$
    where $w^n_{ij}$ is the synaptic weight from the $j$-th neuron in layer $n$ to the $i$-th neuron in layer $n+1$, $l(n)$ is the number of neurons in layer $n$, and $f(\cdot)$ is the step function
    $$f(x)=\left\{\begin{aligned} 0,&\quad x<0\\ 1,&\quad x\ge 0 \end{aligned}\right.$$
    From the equations above:
    If $u^{t+1,n+1}(i)\ge V_{th}$, then $o^{t+1,n+1}(i)=1$ and the neuron fires a spike.
    If $o^{t,n+1}(i)=1$, i.e. a spike was fired at time $t$, then the membrane potential is reset at time $t+1$:
    $$u^{t+1,n+1}(i)=\sum_{j=1}^{l(n)}w^n_{ij}\,o^{t+1,n}(j)$$
    If $o^{t,n+1}(i)=0$, no spike was fired and the potential is not reset.
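The firing-and-resetting update can be sketched for one layer as follows. A minimal NumPy sketch, assuming $u_{rest}=0$; the weight matrix and the values of `k_tau1` and `v_th` are illustrative assumptions:

```python
import numpy as np

def lif_layer_step(u, o, o_in, W, k_tau1=0.9, v_th=1.0):
    """Iterative LIF layer with firing-and-resetting:
    the (1 - o) factor zeroes the potential of neurons that fired last step."""
    u_next = k_tau1 * u * (1.0 - o) + W @ o_in   # decay + reset + new input
    o_next = (u_next >= v_th).astype(float)      # Heaviside step f(u - V_th)
    return u_next, o_next

# Neuron 0 receives enough input to fire; neuron 1 does not:
W = np.array([[1.5, 0.0], [0.0, 0.5]])
u, o = lif_layer_step(np.zeros(2), np.zeros(2), np.ones(2), W)
# Next step with no input: neuron 0 is reset to 0, neuron 1 just decays.
u, o = lif_layer_step(u, o, np.zeros(2), W)
```

After the second step, neuron 0's potential is back at the resting value 0 because it spiked, while neuron 1's potential has merely decayed by the factor `k_tau1`.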

Pseudocode

(Figure: training pseudocode; image not preserved.)
