# How to update only a subset of weights in Theano: usage and rationale

```python
import theano
import theano.tensor as T

# matrix_ndarray, vector_of_indices, lr, and the cost are placeholders here.
lookup_table = theano.shared(matrix_ndarray)  # e.g. an embedding matrix

# Take the gradient with respect to the subset only, not the whole table.
subset = lookup_table[vector_of_indices]
cost = something_that_depends_on(subset)
g = theano.grad(cost, wrt=subset)

# Gradient-descent step on just those rows (note the minus sign for descent):
updates = T.inc_subtensor(subset, -lr * g)
# OR
updates = T.set_subtensor(subset, subset - lr * g)

f = theano.function(..., updates=[(lookup_table, updates)])
```
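The difference between the two update forms matters when an index appears more than once in `vector_of_indices` (e.g. a word occurring twice in a batch). A plain NumPy sketch of the two semantics (not Theano itself; the table, indices, and learning rate 0.1 are made-up illustration values): an `inc_subtensor`-style update accumulates every contribution, like `np.add.at`, while a `set_subtensor`-style fancy-index assignment keeps only the last write to a duplicated row.

```python
import numpy as np

table = np.zeros((4, 2))
idx = np.array([1, 1, 3])          # index 1 appears twice (a repeated word)
grad = np.ones((3, 2))             # one gradient row per occurrence

# inc_subtensor-style update: contributions to the same row accumulate.
inc = table.copy()
np.add.at(inc, idx, -0.1 * grad)   # unbuffered in-place add

# set_subtensor-style update via fancy-index assignment: for duplicate
# indices only the last write survives, so one contribution is lost.
sett = table.copy()
sett[idx] = sett[idx] - 0.1 * grad

print(inc[1])    # [-0.2 -0.2]  (both occurrences counted)
print(sett[1])   # [-0.1 -0.1]  (only one write survives)
```

This is why `inc_subtensor` is generally the safer choice for accumulating gradients into shared rows.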

• In RMSProp, you keep an exponentially decaying average of squared gradients and divide the current gradient component-wise by its square root to rescale the update step. If you updated the whole lookup table every step, a row corresponding to a rare word would receive a zero gradient most of the time, so its squared-gradient history would decay toward zero; when that word finally does occur, its gradient is divided by a near-zero denominator and produces an outsized, unstable step. Updating only the rows actually used avoids corrupting the history of the untouched rows.
• With Hessian-free optimization, parameters that receive no gradient contribute zero rows and columns to the curvature matrix, and even a single zero row makes it singular (non-invertible).
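The RMSProp point can be sketched in plain NumPy (the decay rate 0.9, learning rate, and gradient magnitudes are made-up illustration values): after a long run of zero gradients, a rare row's squared-gradient history has almost vanished, so the next real gradient is rescaled by a much smaller denominator than a frequently updated row's.

```python
import numpy as np

decay, lr, eps = 0.9, 0.01, 1e-8

def rms_step(ms, g):
    # One RMSProp update: decay the squared-gradient history, then
    # rescale the gradient by the square root of that history.
    ms = decay * ms + (1 - decay) * g ** 2
    return ms, lr * g / (np.sqrt(ms) + eps)

# Row for a frequent word: sees gradient magnitude 0.1 every step.
ms_freq = 0.0
for _ in range(200):
    ms_freq, step_freq = rms_step(ms_freq, 0.1)

# Row for a rare word: same history, then 200 steps with zero gradient.
ms_rare = ms_freq
for _ in range(200):
    ms_rare, _ = rms_step(ms_rare, 0.0)   # history decays geometrically toward zero

# When the rare word finally occurs, its history has all but vanished,
# so the same gradient is divided by a much smaller denominator.
ms_rare, step_rare = rms_step(ms_rare, 0.1)
print(step_rare / step_freq)   # ≈ 3.16, i.e. sqrt(1 / (1 - decay))
```

With a single surviving squared-gradient term, the denominator is sqrt((1 - decay) * g**2), so the step is about sqrt(1 / (1 - decay)) times larger than for the steadily updated row; with faster decay rates or longer absences the blow-up is worse.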

http://deeplearning.net/software/theano/tutorial/faq_tutorial.html

#### TensorFlow: restoring (selected) weights from a trained model (building new variables/networks) and continuing training (fine-tuning)

2017-07-27 18:18:45

#### How TensorFlow optimizers compute gradients and update weights, biases, and other coefficients

2017-08-15 17:33:10

#### Weight updates in TensorFlow

2017-02-11 00:05:35

#### Theano tutorial and a Theano implementation of convolutional neural networks, Part 1

2016-12-27 17:09:50

#### Using the scan function in Theano

2016-11-19 16:03:26

#### Mastering theano.scan through five examples

2016-01-14 16:32:03

#### Deep Learning - understanding Theano.scan by comparison

2016-11-19 18:59:11

#### TensorFlow optimization in practice

2017-10-25 15:14:29

#### tf37: Constraining the range of model weight values in TensorFlow

2018-05-23 09:58:23

#### Getting started with TensorFlow

2017-11-30 16:05:07