Tempotron (paper link)

Tempotron is a two-layer network learning algorithm: it takes spike trains as input and produces a spike response as output. For binary classification, the key is to maintain the following relation: a neuron that should fire must have its maximal membrane potential exceed the threshold, otherwise its synaptic efficacies are increased; conversely, a neuron that should not fire must keep its maximal membrane potential below the threshold, otherwise its synaptic efficacies are decreased.
V\left(t_{\max}^{\oplus}\right)>V_{\mathrm{thr}}>V\left(t_{\max}^{\ominus}\right)

• LIF neuron model

V(t)=\sum_{i} \omega_{i} \sum_{t_{i}} K\left(t-t_{i}\right)+V_{\mathrm{rest}}

K(t-t_i) = V_{0}\left(\exp \left[-\left(t-t_{i}\right) / \tau\right]-\exp \left[-\left(t-t_{i}\right) / \tau_{s}\right]\right)

Here t_i denotes a spike time, \tau the membrane integration time constant, \tau_s the synaptic current time constant, and V_0 a normalization factor that makes the peak PSP amplitude depend only on the synaptic efficacy \omega. To compute V_0: set V_0 = 1, find the time at which K is maximal,

t_{\max} = \frac{\tau \tau_s}{\tau-\tau_s}\ln\frac{\tau}{\tau_s}

then substitute t_{\max} into K and take the reciprocal, giving V_0 = 1/K(t_{\max}).
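The normalization steps above can be checked numerically. A minimal sketch (the values of \tau and \tau_s below are illustrative choices, not prescribed by the text):

```python
import numpy as np

# Illustrative time constants (tau = 4 * tau_s is a common choice)
tau, tau_s = 15.0, 15.0 / 4

# Peak time of the unnormalised kernel exp(-t/tau) - exp(-t/tau_s)
t_peak = tau * tau_s / (tau - tau_s) * np.log(tau / tau_s)

K_peak = np.exp(-t_peak / tau) - np.exp(-t_peak / tau_s)
V_0 = 1.0 / K_peak  # normalise so that V_0 * K peaks at exactly 1

print(t_peak, V_0 * K_peak)
```

With these constants the kernel peaks at t ≈ 6.93, and after normalization the PSP amplitude per unit efficacy is exactly 1.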

The LIF neuron code is as follows:

# Methods of a Tempotron neuron class; assumes `import numpy as np` and the
# attributes self.tau, self.tau_s, self.V_rest, self.V_norm, self.efficacies.

def compute_norm_factor(self, tau, tau_s):
    # Time at which the unnormalised kernel peaks
    tmax = (tau * tau_s * np.log(tau / tau_s)) / (tau - tau_s)
    v_max = self.K(1, tmax, 0)
    V_0 = 1 / v_max  # normalise so the PSP peak per unit efficacy equals 1
    return V_0

def K(self, V_0, t, t_i):
    # PSP kernel; zero before the presynaptic spike at t_i
    if t < t_i:
        return 0
    return V_0 * (np.exp(-(t - t_i) / self.tau) - np.exp(-(t - t_i) / self.tau_s))

def compute_spike_contributions(self, t, spike_times):
    # Sum the kernel over every spike of every input synapse
    N_synapse = len(spike_times)
    spike_contribs = np.zeros(N_synapse)
    for neuron_pos in range(N_synapse):
        for spike_time in spike_times[neuron_pos]:
            spike_contribs[neuron_pos] += self.K(self.V_norm, t, spike_time)
    return spike_contribs

def compute_membrane_potential(self, t, spike_times):
    # V(t) = sum_i w_i * sum_{t_i} K(t - t_i) + V_rest
    spike_contribs = self.compute_spike_contributions(t, spike_times)
    total_incoming = spike_contribs * self.efficacies
    V = total_incoming.sum() + self.V_rest
    return V
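As a self-contained cross-check of the same computation, the membrane potential can also be written in a functional, vectorised form. The spike trains and efficacies below are made-up example data:

```python
import numpy as np

tau, tau_s, V_rest = 15.0, 15.0 / 4, 0.0
t_peak = tau * tau_s / (tau - tau_s) * np.log(tau / tau_s)
V_0 = 1.0 / (np.exp(-t_peak / tau) - np.exp(-t_peak / tau_s))

# Example: three input synapses, each with its own spike train (made-up data)
spike_times = [np.array([1.0, 4.0]), np.array([2.5]), np.array([6.0, 7.0, 8.0])]
efficacies = np.array([0.5, -0.2, 0.8])

def membrane_potential(t):
    V = V_rest
    for w, train in zip(efficacies, spike_times):
        dt = t - train[train <= t]  # only past spikes contribute
        V += w * V_0 * np.sum(np.exp(-dt / tau) - np.exp(-dt / tau_s))
    return V

# Trace V(t) over a window and report its maximum
ts = np.linspace(0.0, 20.0, 2001)
vs = np.array([membrane_potential(t) for t in ts])
print(vs.max(), ts[vs.argmax()])
```

This is what the training rule operates on: only the maximum of this trace is compared against the threshold.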

• Network training

Tempotron updates the synaptic efficacies only when an error occurs. The update rule is gradient descent on the following loss:
E_{\pm}=\pm\left(V_{\mathrm{thr}}-V\left(t_{\max}\right)\right) \Theta\left(\pm\left(V_{\mathrm{thr}}-V\left(t_{\max}\right)\right)\right),\qquad \Theta(x)=\begin{cases} 1, & x\geq 0 \\ 0, & x<0\end{cases}

-\frac{\mathrm{d} E_{\pm}}{\mathrm{d} \omega_{i}}=\pm \sum_{t_{i}<t_{\max }} K\left(t_{\max }-t_{i}\right) \pm \frac{\partial V\left(t_{\max }\right)}{\partial t_{\max }} \frac{\mathrm{d} t_{\max }}{\mathrm{d} \omega_{i}}

Because V(t) is stationary at t_{\max}, the factor \partial V(t_{\max})/\partial t_{\max} is zero and the second term vanishes, leaving the update rule:

\Delta \omega_{i}=\lambda \sum_{t_{i}<t_{\max }} K\left(t_{\max }-t_{i}\right)

Since Tempotron is applied to a two-layer network, the output neuron should fire for \oplus patterns and stay silent for \ominus patterns; accordingly, its maximal membrane potential must exceed the firing threshold for one pattern class and remain below it for the other. Note that within the time window T the output neuron only computes its membrane potential; it is compared against the threshold only at the end, when the response is judged correct or wrong. In other words, the output layer of the Tempotron algorithm emits at most one spike (whether it actually fires is irrelevant; only the maximal membrane potential matters), which is also why the algorithm cannot be extended to multi-layer networks.

t_{\max} = \frac{\tau \tau_s}{\tau -\tau_s}\left( \ln \frac{\tau}{\tau_s} + \ln \frac{\sum_i\omega_i\exp(t_i/\tau_s)}{\sum_i\omega_i\exp(t_i/\tau)}\right)

def adapt_weights(self, spike_times, target, learning_rate):
    # Find the time of maximal membrane potential and the potential there
    tmax = self.compute_tmax(spike_times)
    vmax = self.compute_membrane_potential(tmax, spike_times)

    # No update if the response is already correct
    if (vmax >= self.threshold) == target:
        return

    dw = self.dw(learning_rate, tmax, spike_times)
    if target:
        self.efficacies += dw  # should have fired: strengthen
    else:
        self.efficacies -= dw  # should not have fired: weaken

def dw(self, learning_rate, tmax, spike_times):
    # Delta w_i = lambda * sum_{t_i < tmax} K(tmax - t_i)
    spike_contribs = self.compute_spike_contributions(tmax, spike_times)
    update = learning_rate * spike_contribs
    return update

def compute_tmax(self, spike_times):
    # Collect all input spikes in chronological order, tagged with their synapse
    spikes_chron = [(time, synapse)
                    for synapse in range(len(spike_times))
                    for time in spike_times[synapse]]
    spikes_chron.sort()

    spikes = [(s[0], self.efficacies[s[1]]) for s in spikes_chron]
    times = np.array([spike[0] for spike in spikes])
    weights = np.array([spike[1] for spike in spikes])

    # Cumulative sums over spike prefixes: each prefix yields one candidate
    # stationary point of V(t)
    sum_tau = (weights * np.exp(times / self.tau)).cumsum()
    sum_tau_s = (weights * np.exp(times / self.tau_s)).cumsum()

    # self.log_tts is assumed to hold the precomputed np.log(self.tau / self.tau_s)
    div = sum_tau_s / sum_tau
    boundary_cases = div < 0  # log undefined: no stationary point exists
    div[boundary_cases] = 10

    tmax_list = self.tau * self.tau_s * (self.log_tts + np.log(div)) / (self.tau - self.tau_s)
    tmax_list[boundary_cases] = times[boundary_cases]  # fall back to the spike time itself

    # Evaluate V at every candidate and keep the argmax
    vmax_list = np.array([self.compute_membrane_potential(t, spike_times)
                          for t in tmax_list])
    tmax = tmax_list[vmax_list.argmax()]
    return tmax
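The prefix-candidate scheme above can be validated against a brute-force grid search. A standalone sketch with made-up spike times and all-positive efficacies (so the log argument stays positive):

```python
import numpy as np

tau, tau_s = 15.0, 15.0 / 4           # illustrative time constants
t_spikes = np.array([2.0, 5.0, 9.0])  # merged, sorted input spike times (made up)
w = np.array([0.6, 0.3, 0.8])         # efficacy attached to each spike (made up)

def V(t):
    # V_0 and V_rest omitted: positive scale and constant shift do not move the argmax
    dt = t - t_spikes
    active = dt >= 0
    return float(np.sum(w[active] * (np.exp(-dt[active] / tau)
                                     - np.exp(-dt[active] / tau_s))))

# Closed-form stationary point for each prefix of spikes, as in compute_tmax
sum_tau = np.cumsum(w * np.exp(t_spikes / tau))
sum_tau_s = np.cumsum(w * np.exp(t_spikes / tau_s))
candidates = tau * tau_s * (np.log(tau / tau_s)
                            + np.log(sum_tau_s / sum_tau)) / (tau - tau_s)

v_at = np.array([V(t) for t in candidates])
tmax = candidates[v_at.argmax()]

# Brute-force check on a dense time grid
grid = np.linspace(0.0, 30.0, 30001)
t_grid = grid[np.array([V(t) for t in grid]).argmax()]
print(tmax, t_grid)
```

The candidate that maximises V coincides with the grid argmax, which is why evaluating V at every prefix candidate suffices.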


• Appendix: derivation of the time of maximal membrane potential

• Here, the multiple spikes of one synapse are treated as multiple synapses of identical weight each firing a single spike, so the membrane-potential formula simplifies as follows:
V(t) = \sum_i \omega_i \sum \limits _{t_i}V_0\left(\exp\left(-\frac{t-t_i}{\tau}\right)-\exp\left(-\frac{t-t_i}{\tau_s}\right)\right) \implies V(t) = V_0 \sum_i \omega_i \left(\exp\left(-\frac{t-t_i}{\tau}\right)-\exp\left(-\frac{t-t_i}{\tau_s}\right)\right)

At the time of maximal membrane potential the derivative is zero:
V^{\prime}(t) = V_0 \sum_i \omega_i \left( \frac{1}{\tau_s}\exp\left(-\frac{t-t_i}{\tau_s}\right)-\frac{1}{\tau}\exp\left(-\frac{t-t_i}{\tau}\right)\right) = 0\\
\implies \frac{1}{\tau_s}\sum_i \omega_i\exp\left(\frac{t_i}{\tau_s}\right)\exp\left(-\frac{t_{\max}}{\tau_s}\right)=\frac{1}{\tau}\sum_i \omega_i\exp\left(\frac{t_i}{\tau}\right)\exp\left(-\frac{t_{\max}}{\tau}\right)\\
\implies -\frac{t_{\max}}{\tau_s} +\ln\frac{1}{\tau_s}+\ln\sum_i \omega_i\exp\left(\frac{t_i}{\tau_s}\right)=-\frac{t_{\max}}{\tau}+\ln\frac{1}{\tau}+\ln\sum_i \omega_i\exp\left(\frac{t_i}{\tau}\right)\\
\implies \frac{\tau-\tau_s}{\tau \tau_s}t_{\max} = \ln \frac{\tau}{\tau_s}+ \ln \frac{\sum_i\omega_i\exp(t_i/\tau_s)}{\sum_i\omega_i\exp(t_i/\tau)} \\
\implies t_{\max} = \frac{\tau \tau_s}{\tau -\tau_s}\left( \ln \frac{\tau}{\tau_s} + \ln \frac{\sum_i\omega_i\exp(t_i/\tau_s)}{\sum_i\omega_i\exp(t_i/\tau)}\right)
The ratio \frac{\sum_i\omega_i\exp(t_i/\tau_s)}{\sum_i\omega_i\exp(t_i/\tau)} must not be less than or equal to zero, otherwise t_{\max} does not exist (this can occur when the efficacies \omega_i have mixed signs). When the ratio turns out negative, it is manually set to 10.
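The closed form derived above can be checked numerically: at t_{\max} the derivative of V should vanish. A quick sketch with made-up weights and one spike per synapse, as assumed in the derivation:

```python
import numpy as np

tau, tau_s = 15.0, 15.0 / 4       # illustrative time constants
w = np.array([1.0, 0.4, 0.7])     # made-up efficacies, all positive (ratio > 0)
ti = np.array([1.0, 3.0, 4.5])    # one spike per synapse

# Closed-form stationary point
ratio = np.sum(w * np.exp(ti / tau_s)) / np.sum(w * np.exp(ti / tau))
t_max = tau * tau_s / (tau - tau_s) * (np.log(tau / tau_s) + np.log(ratio))

def V(t):
    # Valid for t after all spikes; V_0 omitted (a positive constant factor)
    return float(np.sum(w * (np.exp(-(t - ti) / tau) - np.exp(-(t - ti) / tau_s))))

# Central-difference derivative at t_max should be ~0, and V should peak there
eps = 1e-5
dV = (V(t_max + eps) - V(t_max - eps)) / (2 * eps)
print(t_max, dV)
```

The derivative at the closed-form t_{\max} comes out numerically zero, confirming the algebra.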
