TensorFlow: cost-sensitive factor, adding a regularization term, and learning rate decay

1. Cost-sensitive loss:

 
    outputs, end_points = vgg.all_cnn(Xinputs,
                                      num_classes=num_classes,
                                      is_training=True,
                                      dropout_keep_prob=0.5,
                                      spatial_squeeze=True,
                                      scope='all_cnn')

    cross_entrys = tf.nn.softmax_cross_entropy_with_logits(logits=outputs, labels=Yinputs)
    # Cost-sensitive factor: w_ls = tf.Variable(np.array(w, dtype='float32'), name="w_ls", trainable=False),
    # where w is the list of per-class weights, shaped [num_classes, 1].
    # w_temp = tf.matmul(Yinputs, w_ls)  # select each sample's class weight via its one-hot label
    # loss = tf.reduce_mean(tf.multiply(cross_entrys, w_temp))  # cost-sensitive cross-entropy loss
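
To make the weighting concrete, here is a minimal self-contained sketch in the same TF1.x style. The values of num_classes, the per-class weights w, and the placeholder shapes are illustrative assumptions, not taken from the model above:

    import numpy as np
    import tensorflow as tf

    num_classes = 3                # assumed for illustration
    w = [[1.0], [2.0], [5.0]]      # assumed per-class costs, shape [num_classes, 1]
    w_ls = tf.Variable(np.array(w, dtype='float32'), name="w_ls", trainable=False)

    logits = tf.placeholder(tf.float32, [None, num_classes])
    Yinputs = tf.placeholder(tf.float32, [None, num_classes])  # one-hot labels

    cross_entrys = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Yinputs)
    # matmul with the one-hot labels picks out each sample's class weight
    w_temp = tf.squeeze(tf.matmul(Yinputs, w_ls), axis=[1])   # shape [batch]
    loss = tf.reduce_mean(tf.multiply(cross_entrys, w_temp))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(loss, feed_dict={
            logits:  [[2.0, 0.5, 0.1], [0.2, 0.1, 3.0]],
            Yinputs: [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]}))

Samples from rare or expensive classes simply contribute proportionally more to the mean loss, so the gradient pushes the model harder on them.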


2. Regularization term:


    weight_decay = 0.0005  # the weight-decay factor λ; typically λ = 0.0005 ~ 0.001
    weights_norm = tf.reduce_sum(
        input_tensor=weight_decay * tf.stack([tf.nn.l2_loss(i) for i in tf.get_collection('weights')]),
        name='weights_norm')
    loss = tf.add(tf.reduce_mean(cross_entrys), weights_norm)  # loss including the regularization term

This corresponds to Caffe's weight-decay factor λ: during backpropagation, the L2 regularizer (1/2)·λ·‖W‖² contributes λ·W to the gradient, so the update becomes W − Δw = W − (Δw_cls + λ·W) = (1 − λ)·W − Δw_cls. Besides the classification-gradient step Δw_cls, the weights themselves decay by the factor (1 − λ), which is exactly weight decay.
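
tf.get_collection('weights') only returns variables that the model-building code has registered in that collection. A minimal sketch of that registration, with assumed layer sizes and names:

    import tensorflow as tf

    def weight_variable(shape, name):
        # create a weight matrix and register it in the 'weights' collection
        w = tf.get_variable(name, shape,
                            initializer=tf.truncated_normal_initializer(stddev=0.1))
        tf.add_to_collection('weights', w)
        return w

    w1 = weight_variable([784, 256], 'w1')  # assumed layer sizes
    w2 = weight_variable([256, 10], 'w2')

    weight_decay = 0.0005
    weights_norm = tf.reduce_sum(
        weight_decay * tf.stack([tf.nn.l2_loss(w) for w in tf.get_collection('weights')]),
        name='weights_norm')

Biases are usually left out of the collection, since decaying them brings little generalization benefit.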


3. Learning rate decay:

    global_step = tf.Variable(0, trainable=False)
    add_g = global_step.assign_add(1)
    starter_learning_rate = 0.001
    decay_steps = 10
    # tf.train offers several decay schedules besides exponential_decay
    learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, decay_steps, decay_rate=0.01)
    # train_op = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(loss, global_step=global_step)
    # (in real training, passing global_step to minimize() increments it automatically)
    # decayed_learning_rate = starter_learning_rate * decay_rate ^ (global_step / decay_steps)
    init = tf.global_variables_initializer()
    # Launch the graph and watch the learning rate decay
    with tf.Session() as sess:
        sess.run(init)
        for i in range(15):
            step, lr = sess.run([add_g, learning_rate])
            print(step, "=", lr)
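
The comment above notes that tf.train ships several decay functions; for example, the same global_step can drive a staircase variant or a hand-specified piecewise-constant schedule (the boundary and value numbers here are illustrative):

    # staircase=True holds the rate constant within each decay_steps window
    staircase_lr = tf.train.exponential_decay(starter_learning_rate, global_step,
                                              decay_steps, decay_rate=0.01, staircase=True)
    # piecewise_constant switches the rate at hand-picked step boundaries
    piecewise_lr = tf.train.piecewise_constant(global_step,
                                               boundaries=[5, 10],
                                               values=[1e-3, 1e-4, 1e-5])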



