1 Weight initialization
import torch.nn as nn
import torch.nn.init as init

# Initialize parameters from a uniform distribution U(-a, a) (Xavier/Glorot)
def xavier(param):
    init.xavier_uniform_(param)

def weights_init(m):
    if isinstance(m, nn.Conv2d):
        xavier(m.weight.data)
        m.bias.data.zero_()
    elif isinstance(m, nn.ConvTranspose2d):
        xavier(m.weight.data)
        m.bias.data.zero_()
    elif isinstance(m, nn.Linear):
        xavier(m.weight.data)
        m.bias.data.zero_()
if isCuda:
    model = Model_nV1(n_class=10).cuda()
else:
    model = Model_nV1(n_class=10)

if isInit:
    model.features.apply(weights_init)
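As a quick sanity check (not part of the original snippet), the sketch below verifies the U(-a, a) bound that xavier_uniform_ uses, a = gain * sqrt(6 / (fan_in + fan_out)); the layer sizes are arbitrary:

import math
import torch.nn as nn
import torch.nn.init as init

layer = nn.Linear(128, 64)          # fan_in = 128, fan_out = 64
init.xavier_uniform_(layer.weight)  # fills the weight with U(-a, a), default gain = 1.0

a = math.sqrt(6.0 / (128 + 64))     # expected bound
print(layer.weight.min().item() >= -a, layer.weight.max().item() <= a)  # True True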
2 Dynamically adjusting the learning rate
import torch.optim as optim

optimizer = optim.SGD(model.parameters(),
                      lr=1e-3,
                      momentum=0.9,
                      weight_decay=1e-5)

# Dynamically adjust the learning rate
# scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50, 100, 150], gamma=0.1)

# At the end of each epoch
scheduler.step()  # update the learning rate
Reference: notes on setting and choosing learning rate schedules.
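A minimal training-loop sketch showing where the per-epoch scheduler.step() call goes (criterion, train_loader and the epoch count are placeholders, not names from the original code):

for epoch in range(200):
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()  # once per epoch, after the optimizer updates
    # with the settings above, lr drops to 1e-4 after epoch 50, 1e-5 after 100, 1e-6 after 150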
3 Displaying the learning rate
print(optimizer.param_groups[0]["lr"])
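If the optimizer has several parameter groups, each group carries its own lr; the sketch below prints all of them, and on recent PyTorch versions the scheduler exposes the same values via get_last_lr():

for i, group in enumerate(optimizer.param_groups):
    print("group", i, "lr =", group["lr"])
print(scheduler.get_last_lr())  # one entry per parameter group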
4 Measuring elapsed time
Method 1
import datetime

starttime = datetime.datetime.now()
# long-running code goes here
endtime = datetime.datetime.now()
print((endtime - starttime).total_seconds())  # .seconds alone would drop days and microseconds
Method 2
import time

start = time.time()
run_fun()
end = time.time()
print(end - start)
Method 3
import time

# time.clock() from the original Python 2 snippet was removed in Python 3.8;
# time.process_time() is the modern way to measure CPU time only
start = time.process_time()
run_fun()
end = time.process_time()
print(end - start)
Methods 1 and 2 measure wall-clock time from start to finish, so they also include periods when the CPU was busy with other programs. Method 3 counts only the CPU time actually consumed by the program itself.
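To make the wall-clock vs. CPU-time distinction concrete, here is a small sketch in which time.sleep stands in for work that keeps the clock running but burns almost no CPU:

import time

wall_start = time.perf_counter()
cpu_start = time.process_time()

time.sleep(1.0)  # "long running" placeholder

print("wall clock:", time.perf_counter() - wall_start)  # about 1.0 s
print("cpu time  :", time.process_time() - cpu_start)   # close to 0 s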