Notes on using TensorFlow 1.15 (GPU build): packages and tips

Key points:

Expanding tensor dimensions

img1 = tf.expand_dims(img1, 0)   # prepend a batch axis: [H, W] -> [1, H, W]
img1 = tf.expand_dims(img1, -1)  # append a channel axis: [1, H, W] -> [1, H, W, 1]

Converting between NumPy arrays and tensors in TF 1.x

import numpy as np
import tensorflow as tf

data = np.random.random([2, 3])
data_tensor = tf.convert_to_tensor(data)  # numpy -> tensor

# tensor -> numpy: both sess.run(...) and .eval(session=...) return an ndarray
with tf.Session() as sess:
    print("sess.run(tensor): {}".format(sess.run(data_tensor)))
    print("tensor.eval(session=sess): {}".format(data_tensor.eval(session=sess)))

Installing the skimage library:

conda install scikit-image

Change the import from:

from skimage.measure import compare_ssim as sk_cpt_ssim

to:

from skimage.metrics import structural_similarity as sk_cpt_ssim

Now the import works!
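For reference, a minimal usage sketch (the random test images and the data_range argument here are my own illustration, not from the original post):

import numpy as np
from skimage.metrics import structural_similarity as sk_cpt_ssim

img1 = np.random.randint(0, 256, (64, 64)).astype(np.float64)
img2 = img1 + np.random.normal(0, 5, img1.shape)  # slightly noisy copy
print(sk_cpt_ssim(img1, img2, data_range=255))    # close to 1.0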
Alternatively, install older versions so that the first import above (compare_ssim) still works.
Check which scipy version pairs with your skimage:

conda install scikit-image=0.15.0
pip install scipy==1.4.1 -U -i https://pypi.tuna.tsinghua.edu.cn/simple
Now this works:
from skimage.measure import compare_ssim as ssim_fn

SSIM implementation, NumPy version for reference:

import numpy as np

def batch_ssim(im1, im2):
    # global (single-window) SSIM over a batch of images shaped [N, H, W]
    imgsize = im1.shape[1] * im1.shape[2]
    avg1 = im1.mean((1, 2), keepdims=1)
    avg2 = im2.mean((1, 2), keepdims=1)
    std1 = im1.std((1, 2), ddof=1)
    std2 = im2.std((1, 2), ddof=1)
    # sample covariance per image (same n-1 correction as the ddof=1 stds)
    cov = ((im1 - avg1) * (im2 - avg2)).mean((1, 2)) * imgsize / (imgsize - 1)
    avg1 = np.squeeze(avg1)
    avg2 = np.squeeze(avg2)
    k1 = 0.01
    k2 = 0.03
    c1 = (k1 * 255) ** 2  # stabilizers for the 8-bit [0, 255] range
    c2 = (k2 * 255) ** 2
    c3 = c2 / 2
    # (2*mu1*mu2 + c1)(2*cov + c2) / ((mu1^2 + mu2^2 + c1)(sigma1^2 + sigma2^2 + c2)),
    # averaged over the batch; note 2*(cov + c3) == 2*cov + c2
    return np.mean((2 * avg1 * avg2 + c1) * 2 * (cov + c3)
                   / (avg1 ** 2 + avg2 ** 2 + c1) / (std1 ** 2 + std2 ** 2 + c2))
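A quick sanity check (my own addition; identical inputs should score exactly 1.0):

a = np.random.randint(0, 256, (4, 64, 64)).astype(np.float64)
print(batch_ssim(a, a))          # 1.0 for identical batches
print(batch_ssim(a, 255.0 - a))  # much lower for inverted images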

A TensorFlow version is also available, for reference:
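A minimal sketch assuming TF 1.15's tf.image.ssim (which computes the standard windowed SSIM per image) and batches shaped [N, H, W, 1] with values in [0, 255]; the helper name batch_ssim_tf is mine:

import tensorflow as tf

def batch_ssim_tf(im1, im2):
    # per-image SSIM scores, averaged over the batch
    return tf.reduce_mean(tf.image.ssim(im1, im2, max_val=255.0))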


A generator and its call site, used to return already-trained parameter values

def uni_initial_iter(self, layer_num):  # verified in a standalone test
    # checkpoint paths for this layer's weight and bias tensors
    path = ['fusion_model/layer{0:d}/w{0:d}'.format(layer_num)]
    path.append('fusion_model/layer{0:d}/b{0:d}'.format(layer_num))

    if not FLAGS.is_train:
        # test mode: yield the trained values read from the checkpoint
        for wb_path in path:
            yield self.reader.get_tensor(wb_path)
    else:
        # train mode: yield fresh initializers instead
        yield tf.truncated_normal_initializer(stddev=1e-3)
        yield tf.constant_initializer(0.0)
        
init_iter = self.uni_initial_iter(1)
# Note: the tf.constant(...) wrapper fits the test branch, where next() yields
# numpy arrays; in training mode, pass next(init_iter) (already an initializer) directly.
weights = tf.get_variable("w1", initializer=tf.constant(next(init_iter)))
bias = tf.get_variable("b1", initializer=tf.constant(next(init_iter)))

Model training and use

  1. Reading and using a saved model
    Load a saved checkpoint, e.g. from the model path ./CGAN_120/CGAN.model-17:
reader = tf.train.NewCheckpointReader('./CGAN' + path + '/CGAN.model-' + str(num_epoch))

Read the values of specific variables:

weights = tf.get_variable("w1", initializer=tf.constant(reader.get_tensor('fusion_model/layer1/w1')))
bias = tf.get_variable("b1", initializer=tf.constant(reader.get_tensor('fusion_model/layer1/b1')))
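Before pulling out individual tensors, it can help to list everything the checkpoint contains; NewCheckpointReader exposes a name-to-shape map (a small sketch using the reader from above):

for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)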
  2. A DenseNet-style usage example
    In a dense block, each layer's convolution output is fed into every later layer: in a 5-layer network, the first layer's output conv1_ir goes not only to layer2 but also to layer3, layer4, and layer5; conv2_ir likewise reaches layer4 and layer5 indirectly, and so on.
    As shown below, this is implemented by growing a concat list (XX_add).
    Here, in addition, a visible-light image is injected into every layer (it only needs to be added to the first list); a loop-form sketch follows the code.
vivi = tf.concat([images_vi, images_vi], axis=-1)  # visible-light image to inject

conv2_add = tf.concat([vivi, conv1_ir], axis=-1)   # inputs accumulated so far
conv2_ir = tf.contrib.layers.batch_norm(tf.nn.conv2d(conv2_add, ...))  # convolve conv2_add, then batch-norm
conv2_ir = lrelu(conv2_ir)  # activation

conv3_add = tf.concat([conv2_add, conv2_ir], axis=-1)  # contains [conv1_ir] (dense block) and vivi (visible-image injection)

conv4_add = tf.concat([conv3_add, conv3_ir], axis=-1)  # contains [conv1_ir, conv2_ir] (dense block) and vivi

conv5_add = tf.concat([conv4_add, conv4_ir], axis=-1)  # contains [conv1_ir, conv2_ir, conv3_ir] (dense block) and vivi
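The same growing-concat pattern written as a loop, for comparison (a sketch of my own; layer_ops is a hypothetical list standing in for each layer's conv + batch-norm + lrelu):

feat_add = vivi   # seed the concat list with the injected visible image
x = conv1_ir
for layer_op in layer_ops:
    feat_add = tf.concat([feat_add, x], axis=-1)  # everything produced so far
    x = layer_op(feat_add)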

A few small tricks

  1. Silence warnings (I forget exactly which ones, but TF 1.x prints a flood of them at startup):
import warnings
warnings.filterwarnings("ignore")
  2. Suppress "Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2"
    and the rest of the warning flood:
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = '2'

import warnings
warnings.filterwarnings("ignore")
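If deprecation messages still slip through, TF 1.x's own logger has a separate verbosity knob (an extra suggestion of mine, not from the original list):

import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)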
  3. Fix the error AttributeError: module 'scipy.misc' has no attribute 'imread'
    Straight to the answer: pip install scipy==1.2.1

  4. Install cv2:
    pip install opencv-python
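A quick check that the install worked (note the package installs as opencv-python but imports as cv2):

import cv2
print(cv2.__version__)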
