A Quick Guide to Training a Network Model on Your Own Data

When learning a deep learning framework, we usually practice on the MNIST dataset. For example:

https://blog.csdn.net/louishao/article/details/76218083

And: https://blog.csdn.net/louishao/article/details/60867339

But for image-to-image tasks such as segmentation and denoising, we usually need to train on our own data rather than on the MNIST interface bundled with the framework. This post presents one way to do that with TensorFlow.


Ways to get data into TensorFlow

(1) Feeding: Python code supplies the data directly at run time; put simply, you "feed" the data in;

(2) Preloaded data: constants or variables in the TensorFlow graph hold all the data;

(3) Reading from files: an input pipeline reads the data from files at the start of the TensorFlow graph.

Of these three, (1) and (2) are only suitable for tiny or small datasets, or for debugging; only (3) is suitable for real training.


Feeding and preloaded data

We won't go into these two approaches in depth. In short, Feeding declares a placeholder and then feeds data in through feed_dict, while preloading uses constant or Variable (a Variable must be initialized). Here is a concrete example:

# -*- coding:utf-8 -*-
import tensorflow as tf
inputx = tf.placeholder(tf.float32)
weight = tf.placeholder(tf.float32)
out = tf.multiply(inputx, weight)  # classifier: inputx * weight

with tf.Session() as sess:
	print(sess.run(out, feed_dict={inputx: [2.], weight: [8.]}))
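
For completeness, here is a minimal sketch of the preloaded-data approach (my own illustration, not from the original post): the data lives in the graph itself as a constant and a Variable, and the Variable must be initialized before use.

# -*- coding:utf-8 -*-
import tensorflow as tf

inputx = tf.constant([2.], dtype=tf.float32)  # data baked into the graph as a constant
weight = tf.Variable([8.], dtype=tf.float32)  # a Variable needs initialization
out = tf.multiply(inputx, weight)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # initialize the Variable
    print(sess.run(out))  # prints [16.]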

Reading from files

This is the approach we will focus on. There are many possible implementations; this post covers one common TensorFlow method: making a TFRecord file.

Working with a TFRecord has two stages: encoding (writing the TFRecord file) and decoding (reading it back and building batches for training).

First, make the TFRecord:

# -*- coding:utf-8 -*-
import os
import tensorflow as tf
from PIL import Image
from sklearn import utils

def TFrecord_maker(save_tfrecord, class_path, label_path):
    """
    :param save_tfrecord: output TFRecord filename
    :param class_path: directory of the images
    :param label_path: directory of the labels
    :return: the saved tfrecord file
    """
    writer = tf.python_io.TFRecordWriter(save_tfrecord)  # build a writer
    img_name = os.listdir(class_path)
    label_name = os.listdir(label_path)

    img_name = utils.shuffle(img_name, random_state=1234)  # shuffle the data
    label_name = utils.shuffle(label_name, random_state=1234)  # same random_state keeps image/label pairs aligned

    if len(img_name) == len(label_name):
        for i in range(len(img_name)):
            img_path = class_path + img_name[i]
            label_path1 = label_path + label_name[i]
            img = Image.open(img_path).convert('L')  # assume grayscale: the decoder below reshapes to [128, 128]
            mask_label = Image.open(label_path1).convert('L')
            img = img.resize((128, 128))  # size can be modified
            mask_label = mask_label.resize((128, 128))  # size can be modified

            img_raw = img.tobytes()
            mask_raw = mask_label.tobytes()
            example = tf.train.Example(features=tf.train.Features(feature={
                    'mask_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[mask_raw])),
                    'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw]))
                    }))
            writer.write(example.SerializeToString())  # write and save
        writer.close()
    else:
        raise NameError('the numbers of images and labels do not match')

Next, the reading and decoding function:

# decode tfrecord files
def read_and_decode(filename):
    """
    :param filename: the saved tfRecord file
    :return: image and label
    """
    filename_queue = tf.train.string_input_producer([filename])
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(serialized_example,
                                   features={
                                       'mask_raw': tf.FixedLenFeature([], tf.string),
                                       'img_raw': tf.FixedLenFeature([], tf.string),
                                   })  # obtain the image and label feature object
    image = tf.decode_raw(features['img_raw'], tf.uint8)
    image = tf.reshape(image, [128, 128])  # [height, width]; assumes single-channel images, size can be modified
    mask = tf.decode_raw(features['mask_raw'], tf.uint8)
    mask = tf.reshape(mask, [128, 128])  # must match the size used when encoding
    return image, mask

For actual training, we also need a function that builds batches:

# make batches from the decoded tensors
def CreateBatch(list_image, batchsize):  # generate a shuffled batch to feed into the network
    min_after_dequeue = 10  # how big a buffer we will randomly sample from
    capacity = min_after_dequeue + 3 * batchsize  # the recommended value
    # tf.train.shuffle_batch (rather than tf.train.batch) so that min_after_dequeue takes effect
    image_batch, label_batch = tf.train.shuffle_batch(list_image, batch_size=batchsize,
                                                      capacity=capacity,
                                                      min_after_dequeue=min_after_dequeue)
    # label_batch = tf.one_hot(label_batch, depth=2)  # one-hot encoding, useful for classification
    return image_batch, label_batch
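
Wired together, the pipeline looks like this (a minimal sketch of my own; note the queue runners must be started before pulling batches):

image, mask = read_and_decode('TFdata.tfrecords')        # decoded tensors
image_batch, mask_batch = CreateBatch([image, mask], 4)  # batched tensors

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    imgs, masks = sess.run([image_batch, mask_batch])    # numpy arrays of shape (4, 128, 128)
    coord.request_stop()
    coord.join(threads)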

Putting it into practice

In the main function, call TFrecord_maker to create the TFRecord file, for example:

if __name__ == '__main__':
    save_tfrecord = 'TFdata.tfrecords'
    class_path = './image/'
    label_path = './label/'  # a relative path, like class_path

    # make tfrecord file
    TFrecord_maker(save_tfrecord, class_path, label_path)  # make our own tfrecord file

After running this, the file TFdata.tfrecords is generated.
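
As a quick sanity check (my own addition, using the TF 1.x tf.python_io API), you can count the records in the generated file and peek at the feature keys:

import tensorflow as tf

records = list(tf.python_io.tf_record_iterator('TFdata.tfrecords'))
print('records written:', len(records))  # should equal the number of image/label pairs

example = tf.train.Example.FromString(records[0])
print(example.features.feature.keys())   # expect 'img_raw' and 'mask_raw'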

After that, we can train with the read_and_decode and CreateBatch functions implemented above in place of MNIST. The network structure is exactly the autoencoder from my earlier post: https://blog.csdn.net/louishao/article/details/76218083#comments.

The complete code:

#-*- coding:utf-8 -*-
import numpy as np
import os
import sklearn.preprocessing as prep
import tensorflow as tf
import matplotlib.pyplot as plt
from TFrecordMaker_Reader import *

# For deep learning, weight initial values should be neither too small nor too large;
# the Xavier initializer sets them at just the right scale.
def xavier_init(fan_in,fan_out,constant = 1):
    # samples W uniformly from [-sqrt(6/(fan_in+fan_out)), +sqrt(6/(fan_in+fan_out))]
    low = -constant * np.sqrt(6.0/(fan_in+fan_out))
    high = constant * np.sqrt(6.0/(fan_in+fan_out))
    return tf.random_uniform((fan_in,fan_out),minval = low,maxval = high,dtype = tf.float32)

def plot_image(image):
    plt.imshow(image.reshape((128,128)),interpolation='nearest',cmap='binary')  # 128x128 matches the TFRecord image size
    plt.show()

# The denoising autoencoder class
class AdditiveGaussianAutoencoder(object):
    def __init__(self,n_input,n_hidden,transfer_function=tf.nn.softplus,optimizer=tf.train.AdamOptimizer(),scale=1.0):
        self.n_input = n_input
        self.n_hidden = n_hidden
        self.transfer = transfer_function
        self.scale = tf.placeholder(tf.float32)
        self.training_scale = scale
        network_weights = self._initialize_weights()
        self.weights = network_weights

        # Define the network structure
        self.x = tf.placeholder(tf.float32,[None,self.n_input])  # input layer
        self.noisex = self.x + self.scale*tf.random_normal((n_input,))  # input image with added noise (self.scale so the fed value takes effect)
        # hidden layer: transform the noisy input
        self.hidden = self.transfer(tf.add(tf.matmul(self.noisex,self.weights['w1']),self.weights['b1']))
        # output layer
        self.reconstruction = tf.add(tf.matmul(self.hidden,self.weights['w2']),self.weights['b2'])

        # Loss: squared error between the reconstruction and the clean input
        self.cost = 0.5*tf.reduce_sum(tf.pow(tf.subtract(self.reconstruction,self.x),2.0))
        self.optimizer = optimizer.minimize(self.cost)
        self.optimizer = optimizer.minimize(self.cost)

        init = tf.global_variables_initializer()
        self.sess = tf.Session()
        self.sess.run(init)

    # Member functions
    def _initialize_weights(self):
        all_weights = dict()
        all_weights['w1'] = tf.Variable(xavier_init(self.n_input,self.n_hidden))
        all_weights['b1'] = tf.Variable(tf.zeros([self.n_hidden],dtype=tf.float32))
        all_weights['w2'] = tf.Variable(tf.zeros([self.n_hidden,self.n_input],dtype=tf.float32)) # as many output units as inputs
        all_weights['b2'] = tf.Variable(tf.zeros([self.n_input],dtype = tf.float32))
        return all_weights

    # Run one training step and return the cost
    def partial_fit(self,X):
        cost,opt = self.sess.run((self.cost,self.optimizer),feed_dict={self.x:X,self.scale:self.training_scale})
        return cost

    # Compute the total cost without running a training step
    def calc_total_cost(self,X):
        return self.sess.run(self.cost,feed_dict={self.x:X,self.scale:self.training_scale})

    # transform: return the hidden-layer output
    def transform(self,X):
        return self.sess.run(self.hidden,feed_dict={self.x:X,self.scale:self.training_scale})

    # generate: reconstruct the output from a given hidden representation
    def generate(self,hidden=None):
        if hidden is None:
            hidden = np.random.normal(size=(1, self.n_hidden))  # a random code if none is given
        return self.sess.run(self.reconstruction,feed_dict={self.hidden:hidden})

    def reconstrdata(self,X):
        return self.sess.run(self.reconstruction,feed_dict={self.x:X,self.scale:self.training_scale})

    # Get the hidden-layer weights w1
    def getWeights(self):
        return self.sess.run(self.weights["w1"])

    # Get the hidden-layer biases b1
    def getBiases(self):
        return self.sess.run(self.weights['b1'])

# Standardize the training/test data to zero mean and unit variance.
def standard_scale(X_train,X_test):
    preprocessor = prep.StandardScaler().fit(X_train)  # fit on the training set so both sets use exactly the same scaler
    X_train = preprocessor.transform(X_train)
    X_test = preprocessor.transform(X_test)
    return X_train, X_test

def get_random_block_from_data(data,batch_size):
    start_index = np.random.randint(0, len(data)-batch_size)
    return data[start_index:(start_index+batch_size)]

if __name__ == '__main__':
    n_samples = 10  # the number of training samples
    training_epochs = 200
    batch_size = 2
    display_step = 1

    autoencoder = AdditiveGaussianAutoencoder(n_input=128*128,
                                              n_hidden=200,
                                              transfer_function=tf.nn.softplus,
                                              optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
                                              scale=0.1)

    # Build the input pipeline once, outside the training loop
    # (building it inside the loop would add new graph ops every epoch)
    image, mask = read_and_decode('TFdata.tfrecords')  # read the TFRecord file
    image_batch, label_batch = CreateBatch([image, mask], batch_size)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)  # start input enqueue threads

        for e in range(training_epochs):  # train on all the data in each epoch
            avg_cost = 0.
            total_batch = int(n_samples / batch_size)

            for i in range(total_batch):
                batch_x = sess.run(image_batch)  # image_batch yields a new batch on every run
                batch_x = batch_x.reshape(batch_size, -1) / 255.0  # flatten to vectors and normalize to [0, 1]
                cost = autoencoder.partial_fit(batch_x)
                avg_cost += cost / total_batch
            if e % display_step == 0:
                print("Epoch:", '%04d' % (e + 1), "cost=", "{:.9f}".format(avg_cost))

        coord.request_stop()  # stop the input threads cleanly
        coord.join(threads)

Running results: (the original post shows a screenshot of the per-epoch cost output here.)

Conclusion

The hands-on part above is only a small application; the main purpose of this post is to introduce how to make TFRecord files. Once you understand that, you can run models on your own data. Of course, there are many ways to train a model on a custom dataset; this post only covered one of them. If you found this post useful, please follow and give it a like! ^_^
