《TensorFlow笔记》 study notes, Part 4

4.1 Loss Functions

The point of activation functions: they avoid relying on the purely linear combination \sum xw, increase the model's expressive power, and give the model better discriminative ability.

NN complexity: usually measured by the number of NN layers and the number of NN parameters.

Number of layers = number of hidden layers + one output layer

Total parameters = total number of w + total number of b
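
For example, a small network with 2 inputs, one hidden layer of 3 nodes, and 1 output has 2 layers (1 hidden + 1 output) and 2×3 + 3×1 = 9 weights plus 3 + 1 = 4 biases, 13 parameters in total.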

Neural network optimization (approached via the loss function, the learning rate, moving averages, and regularization)

Loss function (loss): the gap between the predicted value (y) and the known answer (y_)

NN optimization goal: reduce the loss

Methods: MSE (mean squared error), custom loss functions, CE (cross entropy)


Mean squared error (MSE): MSE(y\_, y) = \frac{\sum_{i=1}^{n}(y - y\_)^{2}}{n}, where y_ is the known answer and y is the network's prediction.

loss_mse = tf.reduce_mean(tf.square(y_ - y))

Example: predict the daily yogurt sales y, where x1 and x2 are factors that affect daily sales.

Analysis: before modeling, the data to collect in advance are the daily x1, x2 and the sales y_ (the known answer, i.e., the best answer).

Fabricate a dataset X, Y_ with Y_ = x1 + x2 plus noise in -0.05 ~ +0.05, and fit a function that can predict sales.


Custom loss functions:

For example, when predicting product sales: over-predicting loses cost, under-predicting loses profit.

If profit and cost are not equal, the loss produced by MSE cannot maximize profit.

Custom loss function: loss(y\_, y) = \sum f(y\_, y), where y_ is the standard answer and y is the prediction computed by the network, with f the piecewise function below.

y < y_: f(y_, y) = PROFIT * (y_ - y)    (the prediction y is too low; profit (PROFIT) is lost)

y >= y_: f(y_, y) = COST * (y - y_)    (the prediction y is too high; cost (COST) is lost)

loss = tf.reduce_sum(tf.where(tf.greater(y, y_), COST * (y - y_), PROFIT * (y_ - y)))
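
Plugging this into the yogurt example is straightforward. A minimal sketch, assuming the placeholders y_ and prediction y from the script below, and illustrative values COST = 1 and PROFIT = 9 (the actual numbers are not given in these notes):

# Illustrative values: each over-predicted unit loses COST = 1 of cost,
# each under-predicted unit loses PROFIT = 9 of profit.
COST = 1
PROFIT = 9
# tf.where picks COST*(y - y_) where y > y_, and PROFIT*(y_ - y) elsewhere.
loss_custom = tf.reduce_sum(tf.where(tf.greater(y, y_),
                                     COST * (y - y_),
                                     PROFIT * (y_ - y)))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss_custom)

With PROFIT larger than COST, under-prediction is penalized more, so the trained weights tend to end up slightly above 1, i.e., the model deliberately over-predicts.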


Cross entropy (CE): characterizes the distance between two probability distributions; the smaller it is, the closer the two distributions are.

H(y_,y) = -∑y_*logy

Example: a binary classification problem with known answer y_ = (1, 0); the predictions are y1 = (0.6, 0.4) and y2 = (0.8, 0.2). Which one is closer to the standard answer?

H1((1, 0), (0.6, 0.4)) = -(1*log0.6 + 0*log0.4) ≈ 0.222

H2((1, 0), (0.8, 0.2)) = -(1*log0.8 + 0*log0.2) ≈ 0.097
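
These numbers come from the base-10 logarithm; a quick check in plain Python (note that tf.log below is the natural logarithm, which scales both values by the same factor and does not change their ordering):

import math

# Cross entropy with base-10 logs, matching the two values above.
h1 = -(1 * math.log10(0.6) + 0 * math.log10(0.4))   # ~0.222
h2 = -(1 * math.log10(0.8) + 0 * math.log10(0.2))   # ~0.097
print(h1, h2)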

So the prediction y2 is more accurate. In TensorFlow this can be written as

ce = -tf.reduce_mean(y_ * tf.log(tf.clip_by_value(y, 1e-12, 1.0)))   # clip_by_value keeps y between 1e-12 and 1.0

When the n outputs (y1, y2, ..., yn) of an n-class problem are passed through the softmax() function, they satisfy the requirements of a probability distribution:

\forall x,\ P(X=x) \in [0,1] \quad and \quad \sum_{x} P(X=x) = 1

softmax(y_i) = \frac{e^{y_i}}{\sum_{j} e^{y_j}}
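
As a small illustration of the formula, a sketch with made-up logits:

import numpy as np

def softmax(y):
    # Subtracting the max is a standard numerical-stability trick; it does not change the result.
    e = np.exp(y - np.max(y))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # illustrative outputs of a 3-class network
p = softmax(logits)
print(p, p.sum())                    # all entries in [0, 1], summing to 1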

In TensorFlow this is written as

ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))

cem = tf.reduce_mean(ce)
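
What this combined op computes for a single sample can be reproduced in numpy (a sketch with illustrative logits and label; tf.argmax(y_, 1) turns the one-hot label into a class index):

import numpy as np

logits = np.array([2.0, 1.0, 0.1])   # illustrative outputs for 3 classes
label = 0                            # class index, as produced by tf.argmax(y_, 1)

p = np.exp(logits - logits.max())
p = p / p.sum()                      # softmax
ce = -np.log(p[label])               # cross entropy against the one-hot label
print(ce)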

The code from the course follows.

#coding:utf-8
#
import tensorflow as tf
import numpy as np
BATCH_SIZE = 8
SEED = 23455

rdm = np.random.RandomState(SEED)
X = rdm.rand(32,2)
print X 
print type(X)

Y_ = [[x1+x2+(rdm.rand()/10.0-0.05)] for x1,x2 in X]

print "Y_:",Y_

#1 Forward propagation: define the network structure
x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))
w1 = tf.Variable(tf.random_normal([2,1], stddev=1, seed=1))
#w2 = tf.Varible(tf.random_normal(
y = tf.matmul(x,w1)

# Define the loss function and the backpropagation method
loss_mse = tf.reduce_mean(tf.square(y-y_))
#train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss_mse)
train_step = tf.train.MomentumOptimizer(0.001, 0.9).minimize(loss_mse)  # the second argument is the momentum; 0.9 is a common choice
#train_step = tf.train.AdamOptimizer(0.001).minimize(loss_mse)

# Create a session and train for STEPS rounds
with tf.Session() as sess:
	init_op = tf.global_variables_initializer()
	sess.run(init_op)
	STEPS = 30000
	for i in range(STEPS):
		start = (i*BATCH_SIZE) % 32
		end = (i*BATCH_SIZE) % 32 + BATCH_SIZE
		sess.run(train_step, feed_dict={x: X[start:end],y_: Y_[start:end]})
		if i %500 ==0:
			print "after %d training steps,w1 is:" %(i)
			print sess.run(w1), "\n"
	print "Final w1 is:\n", sess.run(w1)

4.2 Learning Rate

Learning rate (learning_rate): the magnitude of each parameter update.

w_{n+1} = w_{n} - learning\_rate \cdot \nabla    (\nabla is the gradient of the loss function with respect to w, and w is the parameter to be updated)

Example: let the loss function be loss = (w+1)^2, whose gradient is \frac{\partial loss}{\partial w} = 2w + 2. With the initial value of w set to 5 and the learning rate set to 0.2, the code is as follows.

#coding:utf-8

import tensorflow as tf
# Define the parameter w to be optimized
w = tf.Variable(tf.constant(5, dtype=tf.float32))
# Define the loss function
loss = tf.square(w+1)
# Define the backpropagation method
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

# Create a session
with tf.Session() as sess:
# Initialize variables
	init_op = tf.global_variables_initializer()
	sess.run(init_op)
	STEPS = 40
	for i in range(STEPS):
		sess.run(train_step)
		w_val = sess.run(w)
		loss_val = sess.run(loss)
		print "After %s steps: w is %f, loss is %f." %(i, w_val, loss_val)

Set the learning rate to 1 and to 0.0001 respectively, and observe the results.

It can be seen that a learning rate that is too large oscillates without converging, while one that is too small converges slowly.

This motivates an exponentially decaying learning rate:

learning_rate = LEARNING_RATE_BASE*LEARNING_RATE_DECAY^(global_step/LEARNING_RATE_STEP)

(Note where the exponent sits in the formula.)

LEARNING_RATE_BASE: the base learning rate, i.e., the initial learning rate
LEARNING_RATE_DECAY: the learning rate decay rate (between 0 and 1)
global_step: how many rounds of BATCH_SIZE have been run, i.e., how many batches have been fed
LEARNING_RATE_STEP: how many rounds between learning rate updates, = total number of samples / BATCH_SIZE
# global_step is set to non-trainable
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step, LEARNING_RATE_STEP, LEARNING_RATE_DECAY, staircase=True)
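
A plain-Python sketch of the resulting schedule, with an illustrative LEARNING_RATE_STEP of 10 (the real value is total samples / BATCH_SIZE):

LEARNING_RATE_BASE = 0.1
LEARNING_RATE_DECAY = 0.99
LEARNING_RATE_STEP = 10      # illustrative: decay once every 10 batches

for global_step in (0, 5, 10, 25):
    exponent = global_step // LEARNING_RATE_STEP     # staircase=True: integer division, stepwise decay
    # exponent = global_step / LEARNING_RATE_STEP    # staircase=False: smooth decay
    lr = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** exponent
    print(global_step, lr)   # 0.1, 0.1, 0.099, 0.09801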

Replace the learning rate in the example above with the exponentially decaying learning rate (to make the optimization iterate more efficiently).

#coding:utf-8

import tensorflow as tf

LEARNING_RATE_BASE = 0.1   # Initial learning rate
LEARNING_RATE_DECAY = 0.99 # Learning rate decay rate
LEARNING_RATE_STEP = 1     # Update the learning rate after feeding this many rounds of BATCH_SIZE;
                           # usually set to: total samples / BATCH_SIZE
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step, LEARNING_RATE_STEP, LEARNING_RATE_DECAY, staircase=True)

# Define the parameter w to be optimized
w = tf.Variable(tf.constant(5, dtype=tf.float32))
# Define the loss function
loss = tf.square(w+1)
# Define the backpropagation method
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

# Create a session
with tf.Session() as sess:
# Initialize variables
	init_op = tf.global_variables_initializer()
	sess.run(init_op)
	STEPS = 40
	for i in range(STEPS):
		sess.run(train_step)
		learning_rate_val = sess.run(learning_rate)
		global_step_val = sess.run(global_step)
		w_val = sess.run(w)
		loss_val = sess.run(loss)
		print "After %s steps: global step id %f,w is %f, laerning_rate_is %f,loss is %f." %(i,global_step, w_val, learning_rate_val,loss_val)

4.3 Moving Average

Moving average (shadow value): records the average of each parameter's values over its recent history, which improves the model's generalization.

It is applied to all parameters: w and b. (It is like attaching a shadow to each parameter: when the parameter changes, the shadow follows slowly, somewhat like a low-pass filter.)

The formula is:

shadow = decay * shadow + (1 - decay) * parameter      (the shadow's initial value equals the parameter's initial value; decay = min{MOVING_AVERAGE_DECAY, (1 + num_updates) / (10 + num_updates)}; MOVING_AVERAGE_DECAY is the moving average decay rate, a hyperparameter that is usually set to a fairly large value)
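
Plugging the numbers from the demo below into this formula reproduces the values the script prints (a small hand calculation):

MOVING_AVERAGE_DECAY = 0.99

def update(shadow, param, num_updates):
    decay = min(MOVING_AVERAGE_DECAY, (1.0 + num_updates) / (10.0 + num_updates))
    return decay * shadow + (1 - decay) * param

shadow = 0.0                        # shadow initial value = parameter initial value (0)
shadow = update(shadow, 1, 0)       # w1 = 1,  global_step = 0   -> 0.9
shadow = update(shadow, 10, 100)    # w1 = 10, global_step = 100 -> ~1.644
shadow = update(shadow, 10, 100)    # run ema_op again           -> ~2.328
print(shadow)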

Below is the code used in the course to demonstrate the moving average.

#coding:utf-8

import tensorflow as tf

# 1. Define the variable and the moving average class
# Define a 32-bit float variable with initial value 0.0; the training code keeps updating w1, and the moving average maintains a shadow of w1
w1 = tf.Variable(0, dtype=tf.float32)
# Define the NN iteration counter, initial value 0, not trainable
global_step = tf.Variable(0, trainable=False)
# Define the moving average hyperparameter
MOVING_AVERAGE_DECAY = 0.99
# Instantiate the moving average class with decay rate 0.99 and current step global_step
ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
# The argument to ema.apply is the list of variables to track; each time sess.run(ema_op) runs, the moving average of every element in the list is updated
# In practice, tf.trainable_variables() is used to collect all trainable parameters into a list automatically
# ema_op = ema.apply([w1])
ema_op = ema.apply(tf.trainable_variables())

# w = tf.Variable(tf.constant(5, dtype=tf.float32))
# loss = tf.square(w+1)
# train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss,global_step)

# 2. Watch how the variable values change across iterations
with tf.Session() as sess:
    # Initialize
	init_op = tf.global_variables_initializer()
	sess.run(init_op)

	print sess.run([w1, ema.average(w1)])

    # Update parameter w1 to 1
	sess.run(tf.assign(w1, 1))
	sess.run(ema_op)
	print sess.run([w1, ema.average(w1)])
	
    # Update global_step and w1: simulate that after 100 training rounds, w1 has become 10
	sess.run(tf.assign(global_step, 100))
	sess.run(tf.assign(w1, 10))
	sess.run(ema_op)
	print sess.run([w1, ema.average(w1)])

	sess.run(ema_op)
	print sess.run([w1, ema.average(w1)])
	print sess.run(global_step)
	
    # sess.run(train_step)
	# print sess.run(global_step)

4.4 Regularization

Regularization introduces a model-complexity term into the loss function by adding a weighted penalty on w, which weakens the effect of noise in the data (i.e., reduces overfitting). (b is generally not regularized.)

loss = loss(y, y_) + REGULARIZER * loss(w)

where

loss(y, y_) is the loss between the predictions and the labels, e.g., cross entropy or mean squared error;
REGULARIZER is a hyperparameter giving the weight of the w term in the total loss, i.e., the regularization weight;
loss(w) is the regularization loss computed on the parameters w to be regularized.

In TensorFlow, the following two functions can be used for regularization.

loss(w) = tf.contrib.layers.l1_regularizer(REGULARIZER)(w), which corresponds to

loss_{L1}(w) = \sum_{i} \left| w_i \right|

loss(w) = tf.contrib.layers.l2_regularizer(REGULARIZER)(w), which corresponds to

loss_{L2}(w) = \sum_{i} \left| w_i \right|^{2}

They are generally used together with add_to_collection() and add_n():

# Add the regularization term of w into the 'losses' collection
tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w))
# Sum the collection to obtain the regularized total loss
loss = cem + tf.add_n(tf.get_collection('losses'))
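
For a concrete feel of the two penalties, here is a small sketch following the formulas above, with illustrative values w = [1, -2] and REGULARIZER = 0.01 (note that TensorFlow's l2_regularizer may additionally include a factor of 1/2 via tf.nn.l2_loss):

import numpy as np

REGULARIZER = 0.01                        # illustrative regularization weight
w = np.array([1.0, -2.0])

l1 = REGULARIZER * np.sum(np.abs(w))      # 0.01 * (1 + 2) = 0.03
l2 = REGULARIZER * np.sum(np.square(w))   # 0.01 * (1 + 4) = 0.05
print(l1, l2)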

Example: the data X = [x0, x1] are random points drawn from a normal distribution; the label is y_ = 1 (red) when x0^2 + x1^2 < 2, and y_ = 0 (blue) otherwise.

Before tackling the problem itself, the matplotlib library is needed; on Linux it can be installed from a terminal with

sudo pip install matplotlib

A brief introduction to the matplotlib functions used in the course:

scatter(x coordinates, y coordinates, c="color") draws a scatter plot
show() displays the figure
contour(x grid values, y grid values, height at each point, levels=[contour heights]) draws contour lines

For the two numpy helpers mgrid[] and c_[] (note the square brackets, not parentheses!), see the article "numpy.mgrid函数的使用_naruhina的博客-CSDN博客_mgrid函数"

or the official numpy documentation: "numpy.c_ — NumPy v1.21 Manual".
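
A quick look at what the two helpers return (a minimal sketch):

import numpy as np

# np.mgrid[start:stop:step, start:stop:step] builds a dense 2-D coordinate grid.
xx, yy = np.mgrid[-1:1:.5, -1:1:.5]
print(xx.shape, yy.shape)            # (4, 4) (4, 4)

# np.c_[] concatenates along the second axis: it pairs the flattened
# x and y coordinates into one (N, 2) array of grid points.
grid = np.c_[xx.ravel(), yy.ravel()]
print(grid.shape)                    # (16, 2)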

The example code used in the course follows; note the shape of the bias b. While hunting down bugs in the code, the code in the https://github.com/Adnios/Tensorflow repository was used for comparison.

The tf.add_to_collection() function used in the code is covered in 《TensorFlow实战Google学习框架》.

For plt.contour(), see "plt.contour - 程序员大本营" and "matplotlib.pyplot.contour — Matplotlib 3.5.1 documentation".

Why the contour level is set to 0.5 was not explained in the course; presumably, since the network regresses labels that are either 0 or 1, 0.5 serves as the decision boundary between the two classes. Discussion is welcome.

#coding:utf-8
# 0. Import modules and generate the dataset
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

BATCH_SIZE = 30
seed = 2
# Create a random number generator based on seed
rdm = np.random.RandomState(seed)
# The generator returns a 300-row, 2-column matrix: 300 coordinate points (x0, x1) used as the input dataset
X = rdm.randn(300,2)
# Y_ is the label (correct answer) for the input dataset
# Think about why Y_ is not written as [[int(x0*x0 + x1*x1 < 2)] for (x0,x1) in X]
# It is written this way because of Y_c's data structure; if Y_c is not needed, that form works too
Y_ = [int(x0*x0 + x1*x1 < 2) for (x0,x1) in X]
# Map each element of Y_: 1 becomes 'red', everything else 'blue', so the classes are easy to tell apart in the plot
Y_c = [['red' if y else 'blue'] for y in Y_]
#   
# Reshape the dataset X and the labels Y_: the first element -1 means that dimension is inferred
# from the second one; X becomes n rows x 2 columns, Y_ becomes n rows x 1 column
X = np.vstack(X).reshape(-1, 2)
Y_ = np.vstack(Y_).reshape(-1, 1)

# print X
# print Y_
# print Y_c

# X and Y_ are ndarrays; for numpy indexing and slicing see the runoob tutorial:
# https://www.runoob.com/numpy/numpy-indexing-and-slicing.html
# plt.scatter(X[:,0], X[:,1], c=np.squeeze(Y_c))
# plt.show()

# Define the network's inputs, outputs, and parameters, and define the forward pass.
def get_weight(shape, regularizer):
	w = tf.Variable(tf.random_normal(shape),dtype=tf.float32)
	tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w))
	return w

def get_bias(shape):
	b = tf.Variable(tf.constant(0.01, shape=shape))
	return b

x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))

w1 = get_weight([2, 11], 0.01)
b1 = get_bias([11]) 
# note: the bias shape is [11], not [1, 11]; it is broadcast across the batch dimension
y1 = tf.nn.relu(tf.matmul(x, w1)+b1)

w2 = get_weight([11,1], 0.01)
b2 = get_bias([1])
y = tf.matmul(y1, w2)+b2


# Define the loss function using the 'losses' collection filled in get_weight above
# (I did not fully understand this part at first)
loss_mse = tf.reduce_mean(tf.square(y-y_))
loss_total = loss_mse + tf.add_n(tf.get_collection('losses'))

# Define the backpropagation method: without regularization
train_step = tf.train.AdamOptimizer(0.0001).minimize(loss_mse)

with tf.Session() as sess:
	init_op = tf.global_variables_initializer()
	sess.run(init_op)
	STEPS = 40000
	for i in range(STEPS):
		start = (i*BATCH_SIZE)%300
		end = start +BATCH_SIZE
		sess.run(train_step, feed_dict={ x:X[start:end], y_:Y_[start:end]})
		if i%2000 ==0:
			loss_mse_v = sess.run(loss_mse, feed_dict={x:X,y_:Y_})
			print "After %d steps, loss is:%f" %(i, loss_mse_v)

	xx, yy =np.mgrid[-3:3:.01, -3:3:.01]

	grid = np.c_[xx.ravel(), yy.ravel()]

	probs = sess.run(y, feed_dict={x:grid})
	probs = probs.reshape(xx.shape)
	print "w1 :\n",sess.run(w1)
	print "b1 :\n",sess.run(b1)
	print "w2 :\n",sess.run(w2)
	print "b2 :\n",sess.run(b2)

plt.scatter(X[:,0], X[:,1], c=np.squeeze(Y_c))
plt.contour(xx, yy, probs, levels=[.5])

plt.show()



# Define the backpropagation method: with regularization
train_step = tf.train.AdamOptimizer(0.0001).minimize(loss_total)

with tf.Session() as sess:
	init_op = tf.global_variables_initializer()
	sess.run(init_op)
	STEPS = 40000
	for i in range(STEPS):
		start = (i*BATCH_SIZE) % 300
		end = start + BATCH_SIZE
		sess.run(train_step, feed_dict={x:X[start:end], y_:Y_[start:end]})
		if i % 2000 ==0:
			loss_v = sess.run(loss_total,feed_dict={x:X, y_:Y_})
			print "After %d steps, loss is :%f"  %(i, loss_v)
    # xx ranges from -3 to 3 with step 0.01, and likewise yy, generating a 2-D grid of coordinate points.
	xx, yy = np.mgrid[-3:3:.01, -3:3:.01]
    # Flatten xx and yy and merge them into a 2-column matrix: the set of grid coordinate points
	grid = np.c_[xx.ravel(), yy.ravel()]
    # Feed the grid points into the network; probs is the output
	probs = sess.run(y, feed_dict={x:grid})
    # Reshape probs to the shape of xx
	probs = probs.reshape(xx.shape)
 	print "w1 :\n",sess.run(w1)
 	print "b1 :\n",sess.run(b1)
	print "w2 :\n",sess.run(w2)
	print "b2 :\n",sess.run(b2) 

plt.scatter(X[:,0], X[:,1], c=np.squeeze(Y_c))
plt.contour(xx, yy, probs, levels=[.5])
plt.show()

Regarding the use of Python expressions of the form

Y_c = [['red' if y else 'blue'] for y in Y_]

the following tests were run:

# python3
>>> y = np.ones([8,])
>>> a = [[9 if i>0 else 1] for i in y]
>>> print (a)
[[9], [9], [9], [9], [9], [9], [9], [9]]

>>> a = [9 if i>0 else 1  for i in y]
>>> print(a)
[9, 9, 9, 9, 9, 9, 9, 9]

>>> a = [[9 if i>0 else 1 for i in y] ]
>>> print (a)
[[9, 9, 9, 9, 9, 9, 9, 9]]

# Note: Python rejects this form; the other forms are accepted, but they produce different data types
>>> a = [[9 if i>0 else 1 ] ] for i in y
SyntaxError: invalid syntax

# This creates a generator; see any reference on Python generators/iterators for details
>>> a = ([[9 if i>0 else 1 ] ] for i in y)
>>> print(a)
<generator object <genexpr> at 0x000001D3FCE25E40>
>>> for i in a:
	print (i)

	
[[9]]
[[9]]
[[9]]
[[9]]
[[9]]
[[9]]
[[9]]
[[9]]
>>> a = [[[9 if i>0 else 1 ] ] for i in y]
>>> print a
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(a)?
>>> print (a)
[[[9]], [[9]], [[9]], [[9]], [[9]], [[9]], [[9]], [[9]]]
>>> 

4.5 A Modular Template for Building Neural Networks

Forward propagation builds the network, i.e., designs the network structure (forward.py).

#forward.py
import tensorflow as tf

def forward(x, regularizer):
    w=
    b=
    y=
    return y

def get_weight(shape, regularizer):
    w= tf.Variable()
    tf.add_to_collection("losses", tf.contrib.layers.l2_regularizer(regularizer)(w))
    return w

def get_bias(shape):
    b=tf.Variable()
    return b

Backpropagation trains the network and optimizes the network parameters (backward.py).
 

def backward():
    x = tf.placeholder()
    y_ = tf.placeholder()
    global_step = tf.Variable(0, trainable=False)
    loss =
    
    #loss can be mean squared error or cross entropy
    #for regularization, add tf.add_n(tf.get_collection("losses")) on top of loss

    #exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step, total_samples/BATCH_SIZE, LEARNING_RATE_DECAY, staircase=True)

    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step)

    #moving average
    #ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    #ema_op = ema.apply(tf.trainable_variables())
    # The two lines below make each training run execute train_step first and then ema_op;
    # tf.no_op is an operation that does nothing.
    #with tf.control_dependencies([train_step, ema_op]):
    #    train_op = tf.no_op(name='train')
    
    with tf.Session() as sess:
        init_op = tf.global_variables_initializer()
        sess.run(init_op)

        for i in range(STEPS):
            sess.run(train_step, feed_dict={x: , y_: })
            if i % rounds == 0:
                print
if __name__ == '__main__':
    backward()

Using this modular approach, opt4_7.py is redesigned into three parts: forward, backward, and generateds (data generation).

#coding:utf-8
#opt4_8_generateds.py
#import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt


seed = 2

def generateds():

	rdm = np.random.RandomState(seed)
	X = rdm.randn(300,2)

	Y_ = [int(x0*x0 + x1*x1 < 2) for (x0,x1) in X]
	#Z_ = [[int(x0*x0 + x1*x1 <2) ] for (x0,x1) in X]
	Y_c = [['red' if y else 'blue'] for y in Y_]
	X = np.vstack(X).reshape(-1, 2)
	Y_ = np.vstack(Y_).reshape(-1, 1)
	return X, Y_, Y_c

——————————————————————————————————————————————————————
#coding:utf-8
#opt4_8_forward.py
import tensorflow as tf


def get_weight(shape, regularizer):
	w = tf.Variable(tf.random_normal(shape),dtype=tf.float32)
	tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w))
	return w

def get_bias(shape):
	b = tf.Variable(tf.constant(0.01, shape=shape))
	return b

def forward(X, regularizer):

	w1 = get_weight([2, 11], regularizer)
	b1 = get_bias([11]) 
	y1 = tf.nn.relu(tf.matmul(X, w1)+b1)

	w2 = get_weight([11,1], regularizer)
	b2 = get_bias([1])
	y = tf.matmul(y1, w2)+b2
	return y

———————————————————————————————————————————————————————
#coding:utf-8
# opt4_8_backward.py
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import opt4_8_generateds
import opt4_8_forward


STEPS = 40000
BATCH_SIZE = 30

LEARNING_RATE_BASE = 0.001
LEARNING_RATE_DECAY = 0.999
REGULARIZER = 0.01


def backward():
	x = tf.placeholder(tf.float32, shape=(None, 2))
	y_ = tf.placeholder(tf.float32, shape=(None, 1))

	X, Y_, Y_c = opt4_8_generateds.generateds()

	y = opt4_8_forward.forward(x, REGULARIZER)

	global_step = tf.Variable(0, trainable=False)
	learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE,global_step,
	300/BATCH_SIZE, # how many batches between learning rate decays
	LEARNING_RATE_DECAY,
	staircase = True) # staircase selects stepwise vs. smooth decay

	loss_mse = tf.reduce_mean(tf.square(y-y_))
	loss_total = loss_mse + tf.add_n(tf.get_collection('losses'))
	# Define the backpropagation method, including regularization
	train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss_total)

	with tf.Session() as sess:
		init_op = tf.global_variables_initializer()
		sess.run(init_op)
		for i in range(STEPS):
			start = (i*BATCH_SIZE) % 300
			end = start + BATCH_SIZE
			sess.run(train_step, feed_dict={x:X[start:end], y_:Y_[start:end]})
			if i % 2000 ==0:
				loss_v = sess.run(loss_total,feed_dict={x:X, y_:Y_})
				print ("After %d steps, loss is :%f"  %(i, loss_v))
		xx, yy = np.mgrid[-3:3:.01, -3:3:.01]
		grid = np.c_[xx.ravel(), yy.ravel()]
		probs = sess.run(y, feed_dict={x:grid})

		probs = probs.reshape(xx.shape)


	plt.scatter(X[:,0], X[:,1], c=np.squeeze(Y_c))
	plt.contour(xx, yy, probs, levels=[.5])
	plt.show()

if __name__ == "__main__":
	backward()
