TensorFlow lecture notes
1. Trainable Variables
Variable objects:
A Variable is a further wrapper around a Tensor object.
It automatically records gradient information during model training and is updated by the optimization algorithm.
It is a variable that can be trained,
and in machine learning it serves as a model parameter.
Creating a trainable variable
tf.Variable(initial_value, dtype)
initial_value can be a number, a Python list, an ndarray object, or a Tensor object.
import tensorflow as tf
import numpy as np

a = tf.Variable(3)
b = tf.Variable([1, 2])
c = tf.Variable(np.array([1, 2]))
print(a)
print(b)
print(c)
With this constructor, integers default to int32 and floats to float32. Note that NumPy's default float type is float64, so if you need to compare results against NumPy data, pass dtype to keep the types consistent; otherwise float32 is precise enough for machine learning.
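For example, a minimal sketch of passing dtype explicitly (done here only to match NumPy's float64 default):

import tensorflow as tf
import numpy as np

a = tf.Variable([1.0, 2.0])                    # defaults to float32
b = tf.Variable([1.0, 2.0], dtype=tf.float64)  # matches NumPy's default float64
print(a.dtype, b.dtype)
print(b.numpy().dtype == np.float64)           # True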
Wrapping a tensor as a trainable variable
import tensorflow as tf
import numpy as np

a = tf.Variable(tf.constant([[1, 2], [3, 4]]))
b = tf.Variable(tf.zeros([2, 3]))
c = tf.Variable(tf.random.normal([2, 2]))
print(a)
print(b)
print(c)
Using a variable's attributes
import tensorflow as tf
import numpy as np

x = tf.Variable([1, 2])
print(x.shape, x.dtype)
print(x.numpy())
# trainable attribute
print(x.trainable)
Assigning to a trainable variable
Only trainable variables support in-place assignment; a constant Tensor does not.
variable.assign()   variable.assign_add()   variable.assign_sub()
import tensorflow as tf
import numpy as np

x = tf.Variable([1, 2])
print(x)
x.assign([3, 4])
print(x)
x.assign_add([1, 1])
print(x)
x.assign_sub([1, 1])
print(x)
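As a quick check of the claim that constants cannot be assigned to, the sketch below calls assign on a tf.constant; an AttributeError is raised (the exact error message depends on the TensorFlow version):

import tensorflow as tf

c = tf.constant([1, 2])
try:
    c.assign([3, 4])  # constant Tensors have no assign method
except AttributeError as e:
    print('assign failed:', e)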
Checking whether an object is a Tensor or a trainable variable
import tensorflow as tf
import numpy as np

a = tf.range(5)
x = tf.Variable(a)
print(isinstance(a, tf.Tensor))    # True
print(isinstance(a, tf.Variable))  # False
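TF2 also provides tf.is_tensor, which, as far as I know, treats Variables as tensor-like as well; a small sketch:

import tensorflow as tf

a = tf.range(5)
x = tf.Variable(a)
print(tf.is_tensor(a))             # True
print(tf.is_tensor(x))             # True: Variables count as tensor-like
print(isinstance(x, tf.Variable))  # True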
2. Automatic Differentiation in TensorFlow
Example: find the derivative of y = x^2 at x = 3. Analytically dy/dx = 2x, so the result should be 6.
tf.GradientTape(persistent, watch_accessed_variables)
The first parameter defaults to False, meaning the gradient tape can be used only once and is destroyed after that use; if True, the tape can be used multiple times, but remember to delete it explicitly when you are done.
The second parameter defaults to True, meaning accessed trainable variables are watched automatically.
import tensorflow as tf
import numpy as np

x = tf.Variable(3.)
with tf.GradientTape() as tape:
    y = tf.square(x)
dy_dx = tape.gradient(y, x)
print(y)      # 9.0
print(dy_dx)  # 6.0
import tensorflow as tf
import numpy as np

x = tf.Variable(3.)
with tf.GradientTape(persistent=True) as tape:
    y = tf.square(x)
    z = pow(x, 3)
dy_dx = tape.gradient(y, x)  # first use of the tape
dz_dx = tape.gradient(z, x)  # second use, allowed because persistent=True
print(y)
print(dy_dx)  # 6.0  (2x)
print(z)
print(dz_dx)  # 27.0 (3x^2)
del tape  # release the persistent tape explicitly
When the second parameter is set to watch_accessed_variables=False,
variables must be watched manually:
import tensorflow as tf
import numpy as np

x = tf.Variable(3.)
with tf.GradientTape(watch_accessed_variables=False) as tape:
    tape.watch(x)  # manually watch the trainable variable
    y = tf.square(x)
dy_dx = tape.gradient(y, x)
print(y)
print(dy_dx)
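For contrast, if tape.watch is omitted while automatic watching is off, nothing is recorded and the gradient comes back as None:

import tensorflow as tf

x = tf.Variable(3.)
with tf.GradientTape(watch_accessed_variables=False) as tape:
    y = tf.square(x)  # x is never watched, so nothing is recorded
print(tape.gradient(y, x))  # None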
Watching a non-trainable tensor
import tensorflow as tf
import numpy as np

x = tf.constant(3.)
with tf.GradientTape(watch_accessed_variables=False) as tape:
    tape.watch(x)  # manually watch the constant (non-trainable) tensor
    y = tf.square(x)
dy_dx = tape.gradient(y, x)
print(y)
print(dy_dx)
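Note that even with the default watch_accessed_variables=True, a tf.constant is not watched automatically; without tape.watch the gradient is None:

import tensorflow as tf

x = tf.constant(3.)
with tf.GradientTape() as tape:  # default settings
    y = tf.square(x)             # constants are never auto-watched
print(tape.gradient(y, x))       # None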
Partial derivatives of a multivariate function
First-order partial derivatives
import tensorflow as tf
import numpy as np

x = tf.Variable(3.)
y = tf.Variable(4.)
with tf.GradientTape(persistent=True) as tape:
    f = tf.square(x) + 2 * tf.square(y) + 1
df_dx, df_dy = tape.gradient(f, [x, y])
first_grade = tape.gradient(f, [x, y])  # receive both results with one name
print(f)      # 42.0 = 9 + 32 + 1
print(df_dx)  # 6.0  = 2x
print(df_dy)  # 16.0 = 4y
print(first_grade)
del tape
Second-order partial derivatives
import tensorflow as tf
import numpy as np

x = tf.Variable(3.)
y = tf.Variable(4.)
with tf.GradientTape(persistent=True) as tape2:
    with tf.GradientTape(persistent=True) as tape1:
        f = tf.square(x) + 2 * tf.square(y) + 1
    first_grade = tape1.gradient(f, [x, y])  # recorded by tape2
# differentiates the sum df_dx + df_dy; the cross terms are zero here,
# so this yields [d2f/dx2, d2f/dy2] = [2.0, 4.0]
second_grade = [tape2.gradient(first_grade, [x, y])]
print(f)
print(first_grade)
print(second_grade)
del tape1
del tape2
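Because tape2.gradient applied to the list first_grade differentiates the sum df_dx + df_dy, the result only equals the pure second derivatives when the cross terms vanish, as they do for this f. To get each second derivative separately, one can differentiate each first-order gradient on its own; a sketch:

import tensorflow as tf

x = tf.Variable(3.)
y = tf.Variable(4.)
with tf.GradientTape(persistent=True) as tape2:
    with tf.GradientTape() as tape1:
        f = tf.square(x) + 2 * tf.square(y) + 1
    df_dx, df_dy = tape1.gradient(f, [x, y])  # recorded by tape2
d2f_dx2 = tape2.gradient(df_dx, x)  # d/dx (2x) = 2.0
d2f_dy2 = tape2.gradient(df_dy, y)  # d/dy (4y) = 4.0
print(d2f_dx2, d2f_dy2)
del tape2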
Partial derivatives with respect to vectors
import tensorflow as tf

x = tf.Variable([1, 2, 3.])  # the trailing dot makes the dtype float32
y = tf.Variable([4, 5, 6.])
with tf.GradientTape() as tape:
    f = tf.square(x) + 2 * tf.square(y) + 1
df_dx, df_dy = tape.gradient(f, [x, y])
print(f)
print(df_dx)  # 2x = [2., 4., 6.]
print(df_dy)  # 4y = [16., 20., 24.]
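When f and x are both vectors, tape.gradient sums over the elements of f, which is why the result above is just the element-wise 2x and 4y. For the full matrix of element-wise derivatives, TF2's GradientTape also offers a jacobian method; a sketch:

import tensorflow as tf

x = tf.Variable([1., 2., 3.])
with tf.GradientTape() as tape:
    f = tf.square(x)  # element-wise, so the Jacobian is diagonal
jac = tape.jacobian(f, x)  # shape (3, 3), diagonal entries 2x
print(jac)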