http://deeplearning.net/software/theano/tutorial/loop.html
http://deeplearning.net/software/theano/library/scan.html#lib-scan
Scan
- A general form of recurrence, which can be used for looping.
- Reduction and map (loop over the leading dimensions) are special cases of scan.
- You scan a function along some input sequence, producing an output at each time-step.
- The function can see the previous K time-steps of your function.
- sum() could be computed by scanning the z + x(i) function over a list, given an initial state of z=0 (see the sketch after this list).
- Often a for loop can be expressed as a scan() operation, and scan is the closest that Theano comes to looping.
- Advantages of using scan over for loops:
  - Number of iterations to be part of the symbolic graph.
  - Minimizes GPU transfers (if GPU is involved).
  - Computes gradients through sequential steps.
  - Slightly faster than using a for loop in Python with a compiled Theano function.
  - Can lower the overall memory usage by detecting the actual amount of memory needed.
The full documentation can be found in the library: Scan.
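As a concrete illustration of the sum() remark above, here is a minimal sketch (the variable names x, sums and cumSum are my own) that scans z + x(i) over a vector starting from z = 0; the last element of the scan output is the total:
>>> import theano
>>> import theano.tensor as T
>>> x = T.dvector('x')
>>> # scan z + x(i) over the vector; outputs_info supplies the initial state z = 0
>>> sums, updates = theano.scan(lambda xi, z: z + xi,
...                             sequences=x, outputs_info=T.constant(0.0))
>>> cumSum = theano.function(inputs=[x], outputs=sums)
>>> print(cumSum([1, 2, 3, 4]))   # running sums; the last entry equals sum()
[  1.   3.   6.  10.]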
Computing A**k:
Three things need to be handled: the unchanging matrix A, which is passed in through the non_sequences argument; the initial value of the result, given through the outputs_info argument; and the recurrence prior_result*A, which the Theano framework carries out automatically at each step.
So we have:
>>> k=T.iscalar('k')
>>> a=T.dmatrix('a')
>>> # We only care about A**k, but scan has provided us with A**1 through A**k.
>>> result, updates = theano.scan(fn=lambda prior_result, a: prior_result*a,
... outputs_info=T.ones_like(a), non_sequences=a, n_steps=k)
>>> # Discard the values that we don't care about. Scan is smart enough to notice this and not waste memory saving them.
>>> final_result = result[-1]
>>> # compiled function that returns a**k
>>> power = theano.function(inputs=[a,k], outputs=final_result, updates=updates)
>>> print(power([[1,2,3],[4,5,6]],2))
[[ 1. 4. 9.]
[ 16. 25. 36.]]
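One of the advantages listed above is that scan computes gradients through the sequential steps. A rough sketch of this (the gradient expression and the names g and powerGrad are my own additions, reusing a, k, final_result and updates from the power example above):
>>> # gradient of sum(a**k) with respect to a, back-propagated through the scan steps
>>> g = T.grad(final_result.sum(), a)
>>> powerGrad = theano.function(inputs=[a, k], outputs=g, updates=updates)
>>> print(powerGrad([[1, 2], [3, 4]], 2))   # expected k*a**(k-1), i.e. 2*a elementwise
[[ 2.  4.]
 [ 6.  8.]]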
Scanning over the first (leading) dimension of a tensor:
The tensor(s) to loop over are passed to scan through the sequences keyword argument.
For example, build a polynomial whose coefficients come from a given vector and whose base is a given scalar:
1.0 * (3 ** 0) + 0.0 * (3 ** 1) + 2.0 * (3 ** 2)
>>> coefs=T.vector('coefs')
>>> x=T.scalar('x')
>>> maxSupportCoefNum=100
>>> # generate the components of the polynomial
>>> components, updates=theano.scan(fn=lambda coef, power, base: coef*(base**power),
... outputs_info=None, sequences=[coefs, T.arange(maxSupportCoefNum)], non_sequences=x)
>>> poly=components.sum() #sum components up
>>> polynomial=theano.function(inputs=[coefs,x], outputs=poly)
>>> print(polynomial([1,0,2],3))
19.0
outputs_info is None here, which tells scan that fn needs no initialization: its result is not fed back into the next step.
And where does power come from? This is a handy trick used to simulate Python's enumerate: simply include theano.tensor.arange in the sequences.
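Also note that scan only runs as many steps as the shortest sequence provides, so maxSupportCoefNum=100 merely caps the loop; with a three-element coefficient vector only three steps execute. A quick check of this (showComponents is a name introduced here for illustration):
>>> showComponents = theano.function(inputs=[coefs, x], outputs=components)
>>> print(showComponents([1, 0, 2], 3))   # only 3 terms were produced, not 100
[  1.   0.  18.]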
Finally, a note on the order in which scan passes arguments to fn (the general order of function parameters to fn is):
sequences (if any), prior result(s) (if needed), non-sequences (if any)
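To see all three kinds of arguments together, here is a sketch (my own variant of the polynomial example; partialSums and polynomial2 are names introduced here) that accumulates the sum inside scan instead of summing afterwards: the two sequences come first, then the prior running sum initialized through outputs_info, then the non-sequence base:
>>> partialSums, updates = theano.scan(
...     fn=lambda coef, power, prior_sum, base: prior_sum + coef*(base**power),
...     sequences=[coefs, T.arange(maxSupportCoefNum)],
...     outputs_info=T.constant(0.0),
...     non_sequences=x)
>>> polynomial2 = theano.function(inputs=[coefs, x], outputs=partialSums[-1])
>>> print(polynomial2([1, 0, 2], 3))
19.0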
Elementwise computation z[i] = y + x[i]:
scalar and vector:
>>> x=T.vector('x')
>>> y=T.scalar('y')
>>> addEach, updates=theano.scan(lambda xi: y+xi, sequences=x)
>>> addFun=theano.function(inputs=[x,y],outputs=[addEach])
>>> z=addFun([1,2,3,4],100)
>>> print(z)
[array([ 101., 102., 103., 104.])]
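Here the lambda simply captures y from the enclosing scope; an equivalent and more explicit variant (my own rewrite, with the names addEach2 and addFun2) passes it in through non_sequences, following the argument order described above:
>>> addEach2, updates = theano.scan(lambda xi, yi: yi + xi,
...                                 sequences=x, non_sequences=y)
>>> addFun2 = theano.function(inputs=[x, y], outputs=[addEach2])
>>> print(addFun2([1, 2, 3, 4], 100))
[array([ 101.,  102.,  103.,  104.])]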
scalar and matrix:
>>> import theano
>>> import theano.tensor as T
>>> x=T.dmatrix('x')
>>> y=T.dscalar('y')
>>> addEach, updates=theano.scan(lambda xi: y+xi, sequences=x)
>>> addFun=theano.function(inputs=[x,y], outputs=[addEach])
>>> z=addFun([[1,2],[3,4]],100)
>>> print(z)
[array([[ 101., 102.],
[ 103., 104.]])]
vector and matrix:
>>> w=T.dvector('w')
>>> b=T.dvector('b')
>>> mulEach, updates=theano.scan(lambda xi: T.tanh(T.dot(xi,w)+b), sequences=x)
>>> mulFun=theano.function(inputs=[x,w,b], outputs=[mulEach])
>>> z=mulFun([[1,2],[3,4]],[5,6],[7,8])
>>> print(z)
[array([[ 1., 1.],
[ 1., 1.]])]
>>> mulEach, updates=theano.scan(lambda xi: T.dot(xi,w)+b, sequences=x)
>>> mulFun=theano.function(inputs=[x,w,b], outputs=[mulEach])
>>> z=mulFun([[1,2],[3,4]],[5,6],[7,8])
>>> print(z)
[array([[ 24., 25.],
[ 46., 47.]])]
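Because each row is handled independently in these two examples, the same results can be obtained without scan at all; with w kept as a plain vector as above, a rough non-scan equivalent of the second example (direct and directFun are names introduced here) would be:
>>> # dot(x, w) yields one scalar per row; broadcasting it against b reproduces the scan output
>>> direct = T.dot(x, w).dimshuffle(0, 'x') + b
>>> directFun = theano.function(inputs=[x, w, b], outputs=direct)
>>> print(directFun([[1, 2], [3, 4]], [5, 6], [7, 8]))
[[ 24.  25.]
 [ 46.  47.]]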
scalar and scalar, accumulating a sum (into a shared variable via scan's updates):
>>> k=theano.shared(0)
>>> nStep=T.iscalar('step')
>>> addAll, updates=theano.scan(lambda:{k:(k+1)},n_steps=nStep)
>>> addFun=theano.function([nStep],[],updates=updates)
>>> k.get_value()
array(0)
>>> addFun(3)
[]
>>> k.get_value()
array(3)
>>> k.set_value(5)
>>> k.get_value()
array(5)
>>> addFun(3)
[]
>>> k.get_value()
array(8)
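The accumulator above works purely through the updates dictionary that scan returns, mutating the shared variable k on every call. A side-effect-free alternative (a sketch; the names n, counts and countFun are my own) accumulates through outputs_info and just returns the final count:
>>> import numpy as np
>>> n = T.iscalar('n')
>>> counts, _ = theano.scan(lambda prev: prev + 1,
...                         outputs_info=np.int64(0),
...                         n_steps=n)
>>> countFun = theano.function(inputs=[n], outputs=counts[-1])
>>> print(countFun(4))
4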
There are many more examples; see:
http://deeplearning.net/software/theano/tutorial/loop.html
- Computing the sequence x(t) = tanh(x(t - 1).dot(W) + y(t).dot(U) + p(T - t).dot(V))
- Computing norms of lines of X
- Computing norms of columns of X
- Computing trace of X
- Computing the sequence x(t) = x(t - 2).dot(U) + x(t - 1).dot(V) + tanh(x(t - 1).dot(W) + b)
- Computing the Jacobian of y = tanh(v.dot(A)) wrt x
- Computing tanh(v.dot(W) + b) * d where d is binomial
- Computing pow(A, k)
- Calculating a Polynomial
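As a taste of the list above, here is a rough sketch of "Computing trace of X" with scan (my own variant that accumulates the diagonal elements; X, partialTraces and traceFun are names introduced here, and the official exercise solution may differ):
>>> X = T.dmatrix('X')
>>> # walk the diagonal indices and keep a running sum of X[i, i]
>>> partialTraces, _ = theano.scan(lambda i, acc: acc + X[i, i],
...                                sequences=T.arange(X.shape[0]),
...                                outputs_info=T.constant(0.0))
>>> traceFun = theano.function(inputs=[X], outputs=partialTraces[-1])
>>> print(traceFun([[1., 2.], [3., 4.]]))
5.0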