A TensorFlow Learning Journey

Notes

In early 2018, a project at work pushed me to spend some time with Keras, and I implemented a first version of an image-recognition system. The project was so hectic that I never had time to sit down and study things more deeply. After a year of patching and reworking, it finally wrapped up. At the end of the year I switched to a less demanding job, which at last left me time to explore topics I actually care about. Over the past three months I have spent my spare time writing a deployment framework for Keras-based neural network applications (Janna). It is not open source yet, but it will be soon; the core functionality is essentially complete, and once it is released I may write a dedicated post introducing it.
This post records my notes from reading the TensorFlow documentation. Cheers, and here's to having fun.

Reading the Documentation

Preface

What is the essence of deep learning, or of neural networks? My understanding is that a network is a set of matrices that store and compute over experience, which also explains why deep learning places so much emphasis on training deep networks.
A deeper network can store more experience mappings ("experience mapping" is my own term; roughly, the notion that something looks like some known thing), so a deeper network remembers more things that look alike. Both the depth and the tensor size of each layer map onto a single concept: capacity.
Since only the final layer of the stacked mappings ultimately produces the output, the key question as depth grows is how to propagate changes at the input layer effectively down to the bottom layer. So-called depth can be understood as one enormous tensor folded into what looks like a stack of layers, but in practice it is more useful to think of it as a single giant network.
In my view, going ever deeper is not the best direction for deep learning: depth always has a limit, which ties the technology to the hardware layer, so it ends up concentrated in the large companies capable of building at extreme scale.
I believe a better way to make neural networks flourish is to specialize by application domain and provide room for growth; generic feature extraction can be handled by a unified, standardized extractor, with each domain focusing only on its own classification and segmentation information.

Source Repository

https://github.com/lipopo/ML.git

Basic Classification

# -*- coding: utf8 -*-
# built-in libraries
import random

# tensorflow and keras
import tensorflow as tf
from tensorflow import keras

# helper libraries
import matplotlib.pyplot as plt
import numpy as np

# local helper functions
from .helper import flatten_list

print("tensorflow version: {version}".format(version=tf.__version__))

# fashion_mnist classification test
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# desc data shapes
print("train_image_shape: {shape},".format(shape=train_images.shape),)
print("train_label_shape: {shape}".format(shape=train_labels.shape))
print("test_image_shape: {shape}".format(shape=test_images.shape),)
print("test_label_shape: {shape}".format(shape=test_labels.shape))

# label decode
labels = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 
          'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# normalize the input images
train_images_normalized = train_images / 255.0
test_images_normalized = test_images / 255.0

# show a 5x5 grid of sample images
fig, axes = plt.subplots(5, 5)
flatten_axes = flatten_list(axes.tolist())
images = [flatten_axes[i].imshow(train_images[i]) for i in range(len(flatten_axes))]
plt.show()

# make model
model = keras.Sequential(
    layers=[
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(units=128, activation=tf.nn.relu),
        keras.layers.Dense(units=10, activation=tf.nn.softmax)
    ],
    name="MyFirstClsModel"
)

# configure the training parameters
model.compile(
    optimizer=tf.train.AdamOptimizer(),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"]
)

# train the model
model.fit(x=train_images_normalized, y=train_labels, epochs=5)

# evaluate model
test_loss, test_acc = model.evaluate(x=test_images_normalized, y=test_labels)
print("test loss: {loss}".format(loss=test_loss),)
print("test acc: {acc}".format(acc=test_acc))

# predict one random image
predict_image_index = random.choice(range(len(test_images_normalized)))
predict_image = test_images_normalized[predict_image_index]
test_image_true_label_index = test_labels[predict_image_index]
test_image_true_label = labels[test_image_true_label_index]
pack_image = np.expand_dims(predict_image, 0)
predict_ans = model.predict(pack_image)
predict_scores = predict_ans[0]
predict_confidence = np.max(predict_scores) * 100
label_name_index = np.argmax(predict_scores)
predict_label_name = labels[label_name_index]

# print answer
print("{} {:2.0f}% {}".format(test_image_true_label, predict_num, predict_label_name))

# show predict answer
axes = plt.subplot(111)
axes.imshow(predict_image)
axes.set_xticks([])
axes.set_yticks([])
axes.set_xlabel("{} {:2.0f}% {}".format(test_image_true_label, predict_num, predict_label_name))
plt.show()

Text Classification

# -*- coding: utf8 -*-
# built-in modules
import random

# third-party modules
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras

print("tensorflow version: {}".format(tf.__version__))

# load data
imdb = keras.datasets.imdb
(train_datas, train_labels), (test_datas, test_labels) = imdb.load_data(num_words=10000)

# desc dataset
print("train_datas shape: {}".format(train_datas.shape),)
print("train_labels shape: {}".format(train_labels.shape))
print("test_datas shape: {}".format(test_datas.shape),)
print("test_labels shape: {}".format(test_labels.shape))

# get word index
word_index = imdb.get_word_index()
word_index = {k: (v+3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2
word_index["<UNUSED>"] = 3

# pad sequences to a fixed length
train_datas = keras.preprocessing.sequence.pad_sequences(
    train_datas,
    maxlen=256,
    value=word_index["<PAD>"],
    padding="post"
)

test_datas = keras.preprocessing.sequence.pad_sequences(
    test_datas,
    maxlen=256,
    value=word_index["<PAD>"],
    padding="post"
)

# build the reverse word index
reverse_word_index = dict([(value, key) for key, value in word_index.items()])

def decode_review(text):
    return " ".join([reverse_word_index.get(i, '?') for i in text])

# decode test
data_choose = random.choice(train_datas)
print("origin data: %s" % " ".join(["{}".format(data_index) for data_index in data_choose]))
print("word decode: %s" % decode_review(data_choose))

# build model
vocab_size = 10000
model = keras.Sequential(
    [
        keras.layers.Embedding(vocab_size, 16),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(units=16, activation=tf.nn.relu),
        keras.layers.Dense(units=1, activation=tf.nn.sigmoid)
    ],
    name="decode word"
)

model.summary()

# compile param
model.compile(
    optimizer=tf.train.AdamOptimizer(),
    loss="binary_crossentropy",
    metrics=[
        "accuracy"
    ]
)

# make validation dataset
x_val = train_datas[:10000]
partial_x_train = train_datas[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

# train_model
history = model.fit(
    partial_x_train,
    partial_y_train,
    batch_size=512,
    epochs=40,
    validation_data=(x_val, y_val),
    verbose=1
)

# evaluate model
results = model.evaluate(x=test_datas, y=test_labels)
print("evaluate loss: {}, evaluate accuracy: {}".format(*results))

# plot training history
acc = history.history["acc"]
val_acc = history.history["val_acc"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]

epochs = range(1, len(acc) + 1)
axes = plt.subplot(211)
axes.plot(epochs, loss, "bo", label="Training loss")
axes.plot(epochs, val_loss, "b", label="Validation loss")
axes.set_title("Training and Validation loss")
axes.set_xlabel("Epochs")
axes.set_ylabel("Loss")

axes_2 = plt.subplot(212)
axes_2.plot(epochs, acc, "bo", label="Training acc")
axes_2.plot(epochs, val_acc, "b", label="Validation acc")
axes_2.set_title("Training and Validation acc")
axes_2.set_xlabel("Epochs")
axes_2.set_ylabel("Acc")

axes.legend()
axes_2.legend()
plt.show()

Regression

# -*- coding: utf8 -*-
# built-in modules
import pathlib

# third-party modules
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from tensorflow import keras
import tensorflow as tf

# allow dataset download without SSL certificate verification
import ssl

ssl._create_default_https_context = ssl._create_unverified_context
# print tensorflow's version
print("tensorlfow version: {}".format(tf.__version__))

# load_data
data_path = keras.utils.get_file("auto-mpg.data", "https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
                'Acceleration', 'Model Year', 'Origin'] 
raw_dataset = pd.read_csv(
    data_path, names=column_names, na_values="?", comment="\t",
    sep=" ",skipinitialspace=True
)
dataset = raw_dataset.copy()
print("{v} dataset tail {v}".format(v="-"*20))
print(dataset.tail())
print("{v} dataset desc {v}".format(v="-"*20))
print(dataset.describe())

# count missing values
print("{v} nan value desc {v}".format(v="-"*20))
print(dataset.isna().sum())

# clean data
# drop nan value
dataset = dataset.dropna()

# pop the categorical Origin column
origin = dataset.pop("Origin")

# one-hot encode origin as USA / Europe / Japan
dataset["USA"] = (origin == 1) * 1.0
dataset["Europe"] = (origin == 2) * 1.0
dataset["Japan"] = (origin == 3) * 1.0

print("{v} clean data tail {v}".format(v="-"*20))
print(dataset.tail())

# split train and test data
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)

# inspect data
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
plt.show()

# split labels
train_labels = train_dataset.pop("MPG")
test_labels = test_dataset.pop("MPG")

train_state = train_dataset.describe().T
print("{v} train_state {v}".format(v="-"*20))
print(train_state)

# normalize features using the training-set statistics
def norm(x):
    return (x - train_state["mean"]) / train_state["std"]

normed_train_dataset = norm(train_dataset)
normed_test_dataset = norm(test_dataset)

# build model
model = keras.Sequential(
    [
        keras.layers.Dense(units=64, activation=tf.nn.relu, input_shape=[len(train_dataset.keys())]),
        keras.layers.Dense(units=64, activation=tf.nn.relu),
        keras.layers.Dense(1)
    ],
    name="regression_model"
)


# set up compile params
model.compile(
    optimizer=keras.optimizers.RMSprop(0.001),
    loss="mse",
    metrics=[
        "mae",
        "mse"
    ]
)

# inspect model
model.summary()

# create early-stopping monitor
monitor_val_loss = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10)

# fit model on the normalized features, stopping early once val_loss stalls
history = model.fit(
    x=normed_train_dataset,
    y=train_labels,
    epochs=1000,
    validation_split=0.2,
    verbose=0,
    callbacks=[monitor_val_loss]
)

# plot history
mae = history.history["mean_absolute_error"]
val_mae = history.history["val_mean_absolute_error"]
mse = history.history["mean_squared_error"]
val_mse = history.history["val_mean_squared_error"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(mae) + 1)

figure, axes = plt.subplots(3)
axes[0].plot(epochs, mae, "b", label="mae")
axes[0].plot(epochs, val_mae, "y", label="val_mae")
axes[0].set_xlabel("Epochs")
axes[0].set_ylabel("Mae")
axes[0].set_title("Mae and Vaildation Mae")
axes[0].set_ylim([0, 5])

axes[1].plot(epochs, mse, "b", label="mse")
axes[1].plot(epochs, val_mse, "y", label="val_mse")
axes[1].set_xlabel("Epochs")
axes[1].set_ylabel("Mse")
axes[1].set_title("Mse and Vaildation Mse")
axes[1].set_ylim([0, 20])

axes[2].plot(epochs, loss, "b", label="loss")
axes[2].plot(epochs, val_loss, "y", label="val_loss")
axes[2].set_xlabel("Epochs")
axes[2].set_ylabel("Loss")
axes[2].set_title("Loss and Vaildation Loss")

for ax in axes:
    ax.legend()
plt.show()

Overfitting and Underfitting

Overfitting means the network's parameters fit the training data too closely: they capture not only the genuine regularities in the data but also its noise. For a suitable model in a suitable regime, slight overfitting does not hurt recognition accuracy much, but in an overly complex model, overfitting can cause wild fluctuations in regions the data does not cover, so accuracy drops on anything outside the training set.
To prevent overfitting, the usual measures are limiting the number of training epochs (manually or automatically), enlarging the training set, and keeping model complexity in an appropriate range.
Underfitting usually comes from training for too few epochs or using a model that is not complex enough, so the model fails to fit the underlying distribution of the data.
The main remedies for underfitting are adjusting the number of training epochs or raising the model to an appropriate complexity.
From the above, adjusting the model looks like a cure-all, but in practice the data matters even more.
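
As a concrete illustration, here is a minimal sketch of the common countermeasures in tf.keras: an L2 weight penalty, Dropout, and early stopping. It reuses the Fashion-MNIST shapes and arrays from the basic-classification example above; the specific hyperparameter values (0.001, 0.2, patience=5) are arbitrary choices of mine, not recommendations.

# -*- coding: utf8 -*-
import tensorflow as tf
from tensorflow import keras

# a small classifier with two common regularization tools:
# - kernel_regularizer adds an L2 penalty on the layer's weights
# - Dropout randomly zeroes activations during training
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(
        units=128, activation=tf.nn.relu,
        kernel_regularizer=keras.regularizers.l2(0.001)),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(units=10, activation=tf.nn.softmax)
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"]
)

# EarlyStopping halts training once val_loss stops improving,
# capping the number of epochs automatically
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)
# train_images_normalized / train_labels are the arrays prepared in
# the basic-classification section above
# model.fit(train_images_normalized, train_labels, epochs=100,
#           validation_split=0.2, callbacks=[early_stop])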

Saving and Restoring Models

# -*- coding: utf8 -*-
from __future__  import absolute_import, division, print_function
import os

import tensorflow as tf
from tensorflow import keras

import ssl
ssl._create_default_https_context = ssl._create_unverified_context
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

print(tf.__version__)

(train_images, train_labels), (test_images, test_labels) = keras.datasets.mnist.load_data()

train_labels = train_labels[:1000]
test_labels = test_labels[:1000]

train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0

# create model
def create_model():
    model = keras.Sequential(
        layers=[
            keras.layers.Dense(
                units=512, activation=tf.nn.relu, input_shape=(28 * 28,)),
            keras.layers.Dropout(0.2),
            keras.layers.Dense(
                units=10, activation=tf.nn.softmax
            )
        ]
    )
    model.compile(
        optimizer=keras.optimizers.Adam(),
        loss=keras.losses.sparse_categorical_crossentropy,
        metrics=[
            "accuracy"
        ]
    )
    return model
model = create_model()
# desc model
model.summary()

# create checkpoint callback (make sure the target directory exists)
check_point_path = os.path.join(BASE_DIR, "assets/train_1/point.ckpt")
os.makedirs(os.path.dirname(check_point_path), exist_ok=True)
cp_callback = keras.callbacks.ModelCheckpoint(
    filepath=check_point_path,
    save_weights_only=True,
    verbose=1
)

# fit model
model.fit(
    train_images, train_labels, epochs=10,
    validation_data=(test_images, test_labels),
    callbacks=[cp_callback]
)

# load weights from the checkpoint
new_model = create_model()
loss, acc = new_model.evaluate(test_images, test_labels)
print("New Model(before load weight): Loss {} Acc {}".format(loss, acc))

latest_ckpt = tf.train.latest_checkpoint(os.path.dirname(check_point_path))
new_model.load_weights(latest_ckpt)
loss, acc = new_model.evaluate(test_images, test_labels)
print("New Model(after load weight): Loss {} Acc {}".format(loss, acc))

# save weights manually
weight_path = os.path.join(BASE_DIR, "assets/train_1/test_save")
model.save_weights(
    weight_path
)

new_model = create_model()
loss, acc = new_model.evaluate(test_images, test_labels)
print("New Model(before load weight): Loss {} Acc {}".format(loss, acc))

new_model.load_weights(weight_path)
loss, acc = new_model.evaluate(test_images, test_labels)
print("New Model(after load weight): Loss {} Acc {}".format(loss, acc))

# save model
model_path = os.path.join(BASE_DIR, "assets/train_1/model_saved")
model.save(model_path)

loss, acc = model.evaluate(test_images, test_labels)
print("Model: Loss {} Acc {}".format(loss, acc))

new_model = keras.models.load_model(model_path)
loss, acc = new_model.evaluate(test_images, test_labels)
print("New Model: Loss {} Acc {}".format(loss, acc))

Eager Execution

# -*- coding: utf8 -*-
import tempfile
import time

import numpy as np
import tensorflow as tf

tf.enable_eager_execution()

# tensors and some basic ops
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(9))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello~"))

print(tf.square(9) + tf.square(8))
# some tensor attribute
a_tensor = tf.matmul([[1]], [[2, 3]])
print("tensor shape: {}".format(a_tensor.shape))
print("tensor type: {}".format(a_tensor.dtype))

# convert between numpy arrays and tensors
ndarray = np.ones((3, 3))

print("Use Tensor Method To Operate On Ndarray")
tensor = tf.multiply(ndarray, 42)
print("Tensor: \n{}\ntype: {}".format(tensor, tensor.dtype))

print("Use Ndarray Method To Operate On Tensor")
ndarray_add = np.add(tensor, 1)
print("ndarray_add: \n{}\ntype: {}".format(ndarray_add, ndarray_add.dtype))

print("Get Numpy Object From Tensor")
print("ndarray: \n{}\ntype: {}".format(tensor.numpy(), tensor.numpy().dtype))

# check whether a GPU is available
x = tf.random_uniform((3, 3))
print("Gpu Available: {}".format(tf.test.is_gpu_available()))
print("Tensor Use {}".format(x.device))

# control tensor placement
def time_matmul(x):
    start = time.time()
    for loop in range(10):
        tf.matmul(x, x)
    result = time.time() - start
    print("10 Loops: {:0.2f}ms".format(result * 1000))

# time matmul on the CPU
print("On Cpu")
with tf.device("CPU:0"):
    x = tf.random_uniform((1000, 1000))
    assert x.device.endswith("CPU:0")
    time_matmul(x)

# time matmul on the GPU
if tf.test.is_gpu_available():
    print("On Gpu")
    with tf.device("GPU:0"):
        x = tf.random_uniform((1000, 1000))
        assert x.device.endswith("GPU:0")
        time_matmul(x)

# dataset management
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
print("ds_tensors: {}".format(ds_tensors))

# create a small text file
_, filename = tempfile.mkstemp()
with open(filename, "w") as f:
    f.write("Line1\nLine2\nLine3")

ds_file = tf.data.TextLineDataset(filename)
print("ds_file: {}".format(ds_file))

# contents before transformations
print("ds_tensors(before operate): ~")
for tensor in ds_tensors:
    print(tensor)

print("ds_file(before operate): ~")
for tensor in ds_file:
    print(tensor)

# apply some transformations to the datasets
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)

print("ds_tensors: ~")
for tensor in ds_tensors:
    print(tensor)

print("ds_file: ~")
for tensor in ds_file:
    print(tensor)

Automatic Differentiation

# -*- coding: utf8 -*-
# notes on automatic differentiation
import tensorflow as tf

print(tf.__version__)
# enable eager execution
tf.enable_eager_execution()

# example: gradient of a watched tensor
x = tf.ones((2, 2))
with tf.GradientTape() as t:
    t.watch(x)
    y = tf.reduce_sum(x)
    z = tf.multiply(y, y)
print()
print("x: \n{}\n".format(x))
print("z: \n{}\n".format(z))

dz_dx = t.gradient(z, x)
print("gradient: \n{}\n".format(dz_dx))
for i in [0, 1]:
    for j in [0, 1]:
        assert dz_dx[i][j].numpy() == 8.0


x = tf.constant(3.0)
with tf.GradientTape() as t:
    t.watch(x)
    y = x * x
    z = y * y

print("x: \n{}\n".format(x))
print("z: \n{}\n".format(z))
dz_dx = t.gradient(z, x)
print("gradient: \n{}\n".format(dz_dx))

def f(x, y):
    output = 1.0
    for i in range(y):
        if i > 1 and i < 5:
            output = tf.multiply(output, x)
    return output

def grad(x, y):
    with tf.GradientTape() as t:
        t.watch(x)
        out = f(x, y)
    return t.gradient(out, x)

x = tf.convert_to_tensor(2.0)

assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0

# second-order derivative test
x = tf.Variable(1.0)

with tf.GradientTape() as t:
    with tf.GradientTape() as t2:
        y = x * x * x
    dy_dx = t2.gradient(y, x)
d2y_d2x = t.gradient(dy_dx, x)

print("x: \n{}\n".format(x))
print("y: \n{}\n".format(y))
print("dy_dx: \n{}\n".format(dy_dx))
print("d2y_d2x: \n{}\n".format(d2y_d2x))

assert dy_dx == 3
assert d2y_d2x == 6

Custom Training

# -*- coding: utf8 -*-
# custom training
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import tensorflow as tf

print(tf.__version__)
tf.enable_eager_execution()

x = tf.zeros((10, 10))
x += 2
print("Tensor \n{}\n".format(x))

# demonstrate value operations on a Variable
v = tf.Variable(1.0)
assert v.numpy() == 1.0

v.assign(3.0)
assert v.numpy() == 3.0

v.assign(tf.square(v))
assert v.numpy() == 9.0

# training a linear model
# 1. define a model
# 2. define a loss function
# 3. obtain training data
# 4. run the model on the data, compute the loss, and optimize with an optimizer

# define a model
class Model(object):
    def __init__(self):
        self.W = tf.Variable(5.0)
        self.b = tf.Variable(2.0)
    
    def __call__(self, x):
        return self.W * x + self.b

model = Model()
assert model(3.0).numpy() == 17.0

# define an error function
def error_function(predicted_y, desired_y):
    return tf.reduce_mean(tf.square(predicted_y - desired_y))

# generate training data
TRUE_W = 3.0
TRUE_b = 5.0

NUM_EXAMPLES = 1000

inputs = tf.random_normal(shape=[NUM_EXAMPLES])
noise = tf.random_normal(shape=[NUM_EXAMPLES])
outputs = TRUE_W * inputs + TRUE_b + noise

train_dataset = pd.DataFrame(data={"inputs": inputs.numpy(), "outputs": outputs.numpy()})

sns.lmplot("inputs", "outputs", data=train_dataset)
plt.show()

# define a training step that adjusts the parameters
def train(model, inputs, outputs, learning_rate):
    with tf.GradientTape() as t:
        current_loss = error_function(model(inputs), outputs)
    # compute gradients for the variables
    dW, db = t.gradient(current_loss, [model.W, model.b])
    model.W.assign_sub(learning_rate * dW)
    model.b.assign_sub(learning_rate * db)

# start training
model = Model()

Ws, bs = [], []
losses = []
learning_rate = 0.1
epochs = range(10)
for epoch in epochs:
    Ws.append(model.W.numpy())
    bs.append(model.b.numpy())
    current_loss = error_function(model(inputs), outputs)
    losses.append(current_loss)
    train(model, inputs, outputs, learning_rate)
    print(
        "Epochs %d W=%1.2f b=%1.2f loss=%2.5f" % (
            epoch, Ws[-1], bs[-1], losses[-1]
        )
    )

train_set = pd.DataFrame(data={
    "epoch": epochs,
    "w": Ws,
    "b": bs,
    "losses": losses
})

ax = plt.subplot(211)
ax.plot(epochs, Ws, color='b', label="Train_W")
ax.plot(epochs, [TRUE_W] * len(epochs), 'b--', label="True_W")
ax.plot(epochs, bs, color='r', label="Train_b")
ax.plot(epochs, [TRUE_b] * len(epochs), 'r--', label="True_b")
plt.legend()

ax2 = plt.subplot(212)
ax2.plot(epochs, losses, 'b', label="Loss")
plt.legend()
plt.show()

Custom Layers

# -*- coding: utf8 -*-
import tensorflow as tf


print(tf.__version__)
tf.enable_eager_execution()

# exploring layers
layer = tf.keras.layers.Dense(units=10, input_shape=(None, 5))

layer(tf.zeros((10, 5)))
# the layer holds its initialized weights and bias
print("Variables: \n{}".format(layer.variables))

print("Weights: \n{}".format(layer.kernel))
print("Bias: \n{}".format(layer.bias))

# define a custom layer
class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, num_output):
        super(MyDenseLayer, self).__init__()
        self.num_output = num_output
    
    def build(self, input_shape):
        self.kernel = self.add_variable(
            "kernel", shape=[
                int(input_shape[-1]),
                self.num_output
            ]
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

layer = MyDenseLayer(10)

print(layer(tf.zeros((10, 5))))
print(layer.trainable_variables)

# compose layers into a block of our own
class ResnetIdentityBlock(tf.keras.Model):
    def __init__(self, kernel_size, filters):
        super(ResnetIdentityBlock, self).__init__(name='')
        filter1, filter2, filter3 = filters

        self.conv1 = tf.keras.layers.Conv2D(filter1, (1, 1))
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.conv2 = tf.keras.layers.Conv2D(filter2, kernel_size, padding="same")
        self.bn2 = tf.keras.layers.BatchNormalization()
        self.conv3 = tf.keras.layers.Conv2D(filter3, (1, 1))
        self.bn3 = tf.keras.layers.BatchNormalization()

    def call(self, input_tensor, training=False):
        x = self.conv1(input_tensor)
        x = self.bn1(x, training=training)
        x = tf.nn.relu(x)

        x = self.conv2(x)
        x = self.bn2(x, training=training)
        x = tf.nn.relu(x)

        x = self.conv3(x)
        x = self.bn3(x, training=training)

        x += input_tensor
        return tf.nn.relu(x)

block = ResnetIdentityBlock(1, [1, 1, 1])
print("block variables: {}".format(block(tf.zeros((1, 2, 3, 3)))))
print("block name: {}".format([x.name for x in block.trainable_variables]))

# define a common sequential model
seq = tf.keras.Sequential(
    layers=[
        tf.keras.layers.Conv2D(1, (1, 1)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(
            2, 1, padding="same"
        ),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(3, (1, 1)),
        tf.keras.layers.BatchNormalization()
    ]
)

seq(tf.zeros((1, 2, 3, 3)))
print("seq: {}".format(seq))
print("seq_variables: {}".format(seq.trainable_variables))

Custom Training 2

# -*- coding: utf8 -*-
# custom training: a full walkthrough
import os

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
import tensorflow.contrib.eager as tfe

print(tf.__version__)
tf.enable_eager_execution()

# steps
# 1. import and parse the dataset
# 2. choose and build a model
# 3. train the model
# 4. evaluate the model
# 5. use the trained model to make predictions

# import and parse the data
train_set_url = "http://download.tensorflow.org/data/iris_training.csv"
iris_fp = keras.utils.get_file(
    fname=os.path.basename(train_set_url),
    origin=train_set_url
)

test_url = "http://download.tensorflow.org/data/iris_test.csv"

iris_test_fp = keras.utils.get_file(
    fname=os.path.basename(test_url),
    origin=test_url)

iris_data = pd.read_csv(
    iris_fp, 
    header=None,
    skiprows=1, 
    names=["sepal_length", "sepal_width", "petal_length", "petal_width", "species"]
    )
print(iris_data.head())
print(iris_data.describe())


class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
# inspect the data
sns.pairplot(
    iris_data, hue='species', 
    x_vars=["sepal_length", "sepal_width", "petal_length", "petal_width"],
    y_vars=["sepal_length", "sepal_width", "petal_length", "petal_width"]
    )
plt.show()

# build the training dataset
batch_size = 32
train_set = tf.contrib.data.make_csv_dataset(
    iris_fp,
    batch_size,
    column_names=["sepal_length", "sepal_width", "petal_length", "petal_width", "species"],
    label_name="species",
    num_epochs=1
)

test_set = tf.contrib.data.make_csv_dataset(
    iris_test_fp,
    batch_size,
    column_names=["sepal_length", "sepal_width", "petal_length", "petal_width", "species"],
    label_name="species",
    num_epochs=1
)

# sanity-check the generated data
features, labels = next(iter(train_set))
print("features: \n{}\n".format(features))

plt.scatter(
    features["petal_length"],
    features["sepal_length"],
    c=labels,
    cmap="viridis"
)

plt.xlabel("petal_length")
plt.ylabel("sepal_length")
plt.show()

# pack the features into a single tensor per example
def pack_features_vector(features, labels):
    features = tf.stack(list(features.values()), axis=1)
    return features, labels

train_dataset = train_set.map(pack_features_vector)
test_dataset = test_set.map(pack_features_vector)
features, labels = next(iter(train_dataset))

print("features: \n{}\n".format(features))
print("labels: \n{}\n".format(labels))

# build our model
model = keras.Sequential(
    layers=[
        keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(None, 4)),
        keras.layers.Dense(10, activation=tf.nn.relu),
        keras.layers.Dense(3)
    ]
)

# test our model
predictions = model(features)
prediction_activation = tf.nn.softmax(predictions)

print("Prediction: {}".format(tf.argmax(prediction_activation, axis=1)))
print("True labels: {}".format(labels))

# train our model

# define a loss function
def loss(model, x, y):
    y_ = model(x)
    loss_ = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)
    return loss_

# compute the current loss
l = loss(model, features, labels)
print("current loss: \n{}\n".format(l))

# get the loss and gradients
def grad(model, inputs, labels):
    with tf.GradientTape() as t:
        loss_value = loss(model, inputs, labels)
    grads = t.gradient(loss_value, model.trainable_variables)
    return loss_value, grads

# define the optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
# define a global step counter
global_step = tf.train.get_or_create_global_step()

# try a single training step
loss_value, grad_value = grad(model, features, labels)
print("Step {} Initial Loss {}".format(global_step.numpy(), loss_value.numpy()))
# optimize the model
optimizer.apply_gradients(zip(grad_value, model.variables), global_step)
# check the effect of the update
print("Step {} Loss {}".format(global_step.numpy(), loss(model, features, labels)))

# training loop
train_loss_results = []
train_accuracy_results = []

num_epochs = 200

for epoch in range(num_epochs):
    epoch_loss_avg = tfe.metrics.Mean()
    epoch_accuracy = tfe.metrics.Accuracy()

    for x, y in train_dataset:
        # compute the gradients
        loss_value, grad_value = grad(model, x, y)
        optimizer.apply_gradients(zip(grad_value, model.variables), global_step)

        # record the loss
        epoch_loss_avg(loss_value)
        # record the accuracy
        epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y)

    train_loss_results.append(epoch_loss_avg.result())
    train_accuracy_results.append(epoch_accuracy.result())

    if epoch % 50 == 0:
        print("Epoch {} Loss: {:.3f} Accuracy: {:.3%}".format(
            epoch, epoch_loss_avg.result(), epoch_accuracy.result()
        ))


# plot the loss and accuracy curves
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle("Train Metrics")

axes[0].set_ylabel("loss")
axes[0].plot(train_loss_results)

axes[1].set_ylabel("accuracy")
axes[1].set_xlabel("epochs")
axes[1].plot(train_accuracy_results)
plt.show()

# evaluate our model
test_accuracy = tfe.metrics.Accuracy()

for (x, y) in test_dataset:
    logits = model(x)
    predictions = tf.argmax(logits, axis=1, output_type=tf.int32)
    test_accuracy(predictions, y)

print("Test Accuracy: {:.3%}".format(test_accuracy.result()))


# make predictions with our model
prediction_dataset = tf.convert_to_tensor([
    [5.1, 3.3, 1.7, 0.5,],
    [5.9, 3.0, 4.2, 1.5,],
    [6.9, 3.1, 5.4, 2.1]  
])

predictions = model(prediction_dataset)

for i, logits in enumerate(predictions):
    class_idx = tf.argmax(logits).numpy()
    # probability of the predicted class
    p = tf.nn.softmax(logits)[class_idx]
    # class name
    name = class_names[class_idx]
    print("Example {} prediction: {} ({:.1%})".format(
        i, name, p
    ))

pix2pix

# -*- coding: utf8 -*-
# built-in modules
import os
import random
import ssl

# third-party modules
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
from tensorflow import keras
import tensorflow as tf

# allow dataset download without SSL certificate verification
ssl._create_default_https_context = ssl._create_unverified_context

# enable eager execution
tf.enable_eager_execution()

print("tensorflow version: {}".format(tf.__version__))

# load data
zip_path = keras.utils.get_file(
    fname="facades.tar.gz",
    origin="https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/facades.tar.gz",
    extract=True,
    cache_subdir=os.path.abspath(".")    
)

image_dir_path = os.path.join(
    os.path.dirname(zip_path), "facades"
)

print(os.listdir(image_dir_path))

train_dataset_dir_path = os.path.join(image_dir_path, "train")
test_dataset_dir_path = os.path.join(image_dir_path, "test")
val_dataset_dir_path = os.path.join(image_dir_path, "val")

train_dataset_file_paths = (os.path.join(train_dataset_dir_path, fname)
                            for fname in os.listdir(train_dataset_dir_path) if fname.endswith(".jpg"))

test_dataset_file_paths = (os.path.join(test_dataset_dir_path, fname)
                           for fname in os.listdir(test_dataset_dir_path) if fname.endswith(".jpg"))

val_dataset_file_paths = (os.path.join(val_dataset_dir_path, fname)
                          for fname in os.listdir(val_dataset_dir_path) if fname.endswith(".jpg"))

BUFFER_SIZE = 400
IMG_WIDTH = 256
IMG_HEIGHT = 256


def read_image_to_array(file_path):
    return np.array(Image.open(file_path))


train_data_ogn = [read_image_to_array(fp) for fp in train_dataset_file_paths]
test_data_ogn = [read_image_to_array(fp) for fp in test_dataset_file_paths]
val_data_ogn = [read_image_to_array(fp) for fp in val_dataset_file_paths]


def normalized_image(image, istrain):
    height, width, channel = image.shape
    # each file holds a side-by-side pair: the photo (target) on the
    # left half and the architectural label map (input) on the right
    half_width = width // 2
    input_img = image[:, half_width:]
    target_img = image[:, :half_width]

    if istrain:
        # resize up before random cropping
        input_img_resized = tf.image.resize(
            input_img, (286, 286)
        )
        target_img_resized = tf.image.resize(
            target_img, (286, 286)
        )
        # stack the two images together
        stack_input_and_target_img = tf.stack([input_img_resized, target_img_resized], axis=0)
        # random crop
        crop_images = tf.random_crop(stack_input_and_target_img, [2, IMG_HEIGHT, IMG_WIDTH, 3])
        # unstack the crop result
        input_img, target_img = crop_images[0], crop_images[1]
        # random horizontal flip
        if random.random() > .5:
            input_img = input_img[:, ::-1, :]
            target_img = target_img[:, ::-1, :]
    else:
        # only resize to the fixed size
        input_img = tf.image.resize(input_img, (IMG_WIDTH, IMG_HEIGHT)).numpy()
        target_img = tf.image.resize(target_img, (IMG_WIDTH, IMG_HEIGHT)).numpy()

    # scale pixel values to [-1, 1]
    input_img = input_img / 127.5 - 1
    target_img = target_img / 127.5 - 1
    return input_img, target_img


def img_generate(imgs, istrain=False):
    # yield one randomly chosen, preprocessed image pair at a time;
    # loop so a single pass over the dataset produces many samples
    def gen():
        for _ in range(len(imgs)):
            yield normalized_image(random.choice(imgs), istrain)
    return gen


train_generate = img_generate(train_data_ogn, True)
test_generate = img_generate(test_data_ogn)
val_generate = img_generate(val_data_ogn)


# build tf.data pipelines for the three splits
train_dataset = tf.data.Dataset.from_generator(
    train_generate, (tf.float32, tf.float32)
).batch(1)

test_dataset = tf.data.Dataset.from_generator(
    test_generate, (tf.float32, tf.float32)
).batch(1)

val_dataset = tf.data.Dataset.from_generator(
    val_generate, (tf.float32, tf.float32)
).batch(1)


# build the generator
class DownSample(keras.Model):
    """
    Downsampling block
    """
    def __init__(self, filters, size, apply_batchnormal=True):
    def __init__(self, filters, size, apply_batchnormal=True):
        super(DownSample, self).__init__()
        self.apply_batchnormal = apply_batchnormal

        self.conv = keras.layers.Conv2D(
            filters,
            (size, size),
            strides=2,
            padding="same",
            use_bias=False
        )

        if self.apply_batchnormal:
            self.batch_norm = keras.layers.BatchNormalization()

    def call(self, x, training):
        # convolution
        x = self.conv(x)
        if self.apply_batchnormal:
            x = self.batch_norm(x, training=training)

        # activation
        x = tf.nn.leaky_relu(x)
        return x


class UpSample(keras.Model):
    """
    Upsampling block
    """
    def __init__(self, filters, size, apply_dropout=False):
        super(UpSample, self).__init__()
        self.apply_dropout = apply_dropout

        self.up_conv = keras.layers.Conv2DTranspose(
            filters,
            (size, size),
            strides=2,
            padding="same",
            use_bias=False
        )

        self.batch_norm = keras.layers.BatchNormalization()
        if self.apply_dropout:
            self.dropout = keras.layers.Dropout(.5)

    def call(self, x1, x2, training):
        # transposed convolution
        x = self.up_conv(x1)
        x = self.batch_norm(x, training=training)
        # dropout
        if self.apply_dropout:
            x = self.dropout(x)
        # activation
        x = tf.nn.relu(x)
        x = tf.concat([x, x2], axis=-1)

        return x


class Generator(keras.Model):
    def __init__(self):
        super(Generator, self).__init__()
        self.down1 = DownSample(64, 4, apply_batchnormal=False)
        self.down2 = DownSample(128, 4)
        self.down3 = DownSample(256, 4)
        self.down4 = DownSample(512, 4)
        self.down5 = DownSample(512, 4)
        self.down6 = DownSample(512, 4)
        self.down7 = DownSample(512, 4)
        self.down8 = DownSample(512, 4)

        self.up1 = UpSample(512, 4, apply_dropout=True)
        self.up2 = UpSample(512, 4, apply_dropout=True)
        self.up3 = UpSample(512, 4, apply_dropout=True)
        self.up4 = UpSample(512, 4)
        self.up5 = UpSample(256, 4)
        self.up6 = UpSample(128, 4)
        self.up7 = UpSample(64, 4)

        self.last_layer = keras.layers.Conv2DTranspose(
            3,
            (4, 4),
            strides=2,
            padding="same"
        )

    def call(self, x, training):
        # downsampling path
        x1 = self.down1(x, training)
        x2 = self.down2(x1, training)
        x3 = self.down3(x2, training)
        x4 = self.down4(x3, training)
        x5 = self.down5(x4, training)
        x6 = self.down6(x5, training)
        x7 = self.down7(x6, training)
        x8 = self.down8(x7, training)

        # upsampling path with skip connections
        x9 = self.up1(x8, x7, training)
        x10 = self.up2(x9, x6, training)
        x11 = self.up3(x10, x5, training)
        x12 = self.up4(x11, x4, training)
        x13 = self.up5(x12, x3, training)
        x14 = self.up6(x13, x2, training)
        x15 = self.up7(x14, x1, training)

        x16 = self.last_layer(x15)
        # activation
        x16 = tf.nn.tanh(x16)

        return x16


# build the discriminator
class DiscDownsample(keras.Model):
    def __init__(self, filters, size, apply_batchnorm=True):
        super(DiscDownsample, self).__init__()
        self.apply_batchnorm = apply_batchnorm
        self.conv = keras.layers.Conv2D(
            filters,
            (size, size),
            strides=2,
            padding="same",
            use_bias=False
        )
        if self.apply_batchnorm:
            self.batch_norm = keras.layers.BatchNormalization()

    def call(self, x, training):
        # convolution
        x = self.conv(x)
        if self.apply_batchnorm:
            x = self.batch_norm(x, training=training)

        # activation
        x = tf.nn.leaky_relu(x)
        return x


class Discriminator(keras.Model):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.down1 = DiscDownsample(64, 4, False)
        self.down2 = DiscDownsample(128, 4)
        self.down3 = DiscDownsample(256, 4)

        self.zero_pad1 = keras.layers.ZeroPadding2D()

        self.conv = keras.layers.Conv2D(
            512, (4, 4), strides=2, use_bias=False
        )
        self.batch_norm = keras.layers.BatchNormalization()
        self.zero_pad2 = keras.layers.ZeroPadding2D()
        self.last = keras.layers.Conv2D(
            1, (4, 4), strides=1
        )

    def call(self, inp, tar, training):
        x = tf.concat([inp, tar], axis=-1)
        x = self.down1(x, training)
        x = self.down2(x, training)
        x = self.down3(x, training)
        x = self.zero_pad1(x)
        x = self.conv(x)
        x = self.batch_norm(x)
        x = tf.nn.leaky_relu(x)
        x = self.zero_pad2(x)
        x = self.last(x)

        return x


# define the loss functions
LAMBDA = 100


def generator_loss_function(gen_out, preb, tar):
    # sigmoid cross-entropy against the discriminator's output on the
    # generated image: the generator is rewarded when the discriminator
    # is fooled into labelling its output as real, which backpropagates
    # into the generator's weights
    gen_out_loss = tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(gen_out),
        logits=gen_out
    )
    # the L1 loss between the generated image and the target pushes the
    # output towards the ground truth, while gen_out_loss keeps the
    # generator adapting to the discriminator, making the adversarial
    # game effective
    ll_loss = tf.reduce_mean(tf.abs(preb - tar))
    total_loss = gen_out_loss + LAMBDA * ll_loss
    return total_loss


def discriminator_loss_function(preb, tar):
    # sigmoid cross-entropy: real images should score 1,
    # generated images should score 0
    rel_loss = tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(tar),
        logits=tar
    )

    generate_loss = tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.zeros_like(preb),
        logits=preb
    )

    # total loss
    total_loss = rel_loss + generate_loss
    return total_loss


# define the optimizers
# generator optimizer
gen_optimizer = tf.train.AdamOptimizer(2e-4, 0.5)
# discriminator optimizer
disc_optimizer = tf.train.AdamOptimizer(2e-4, 0.5)


# training setup
epoch_num = 10

# instantiate the generator and discriminator
generator = Generator()
discriminator = Discriminator()


# visualize model output before training
for test_img, test_target in test_dataset.take(1):
    test_img_show = (test_img + 1) * 127.5
    test_target_show = (test_target + 1) * 127.5
    predict_img = (generator(test_img, False) + 1) * 127.5
    fig, axes = plt.subplots(ncols=3, sharey=True)
    axes[0].set_title("test_input")
    axes[0].imshow(test_img_show[0].numpy().astype(np.uint8))
    axes[1].set_title("preb_output")
    axes[1].imshow(predict_img[0].numpy().astype(np.uint8))
    axes[2].set_title("target_output")
    axes[2].imshow(test_target_show[0].numpy().astype(np.uint8))

    plt.show()


# define the training loop
def train(dataset, epoches):
    for epoch in range(epoches):
        i = 0
        for input_image, target in dataset.take(100):
            i += 1
            with tf.GradientTape() as gen_gradient, tf.GradientTape() as disc_gradient:
                # run the generator
                gen_out = generator(input_image, True)

                # run the discriminator on real and generated pairs
                disc_rel_out = discriminator(input_image, target, True)
                disc_gen_out = discriminator(input_image, gen_out, True)

                # compute the losses
                gen_loss = generator_loss_function(disc_gen_out, gen_out, target)
                disc_loss = discriminator_loss_function(disc_gen_out, disc_rel_out)

            # compute gradients for the generator and discriminator separately
            gen_gradient_value = gen_gradient.gradient(
                gen_loss, generator.variables
            )

            disc_gradient_value = disc_gradient.gradient(
                disc_loss, discriminator.variables
            )

            # apply the gradients with the optimizers
            gen_optimizer.apply_gradients(
                zip(gen_gradient_value, generator.variables)
            )
            disc_optimizer.apply_gradients(
                zip(disc_gradient_value, discriminator.variables)
            )
            print("Epoch {}/{} Gen Loss {} DiscLoss {}".format(epoch, epoches, tf.reduce_mean(gen_loss), tf.reduce_mean(disc_loss)))


# train for epoch_num epochs
train(train_dataset, epoch_num)

# visualize model output after training
for test_img, test_target in test_dataset.take(1):
    test_img_show = (test_img + 1) * 127.5
    test_target_show = (test_target + 1) * 127.5
    predict_img = (generator(test_img, False) + 1) * 127.5
    fig, axes = plt.subplots(ncols=3, sharey=True)
    axes[0].set_title("test_input")
    axes[0].imshow(test_img_show[0].numpy().astype(np.uint8))
    axes[1].set_title("preb_output")
    axes[1].imshow(predict_img[0].numpy().astype(np.uint8))
    axes[2].set_title("target_output")
    axes[2].imshow(test_target_show[0].numpy().astype(np.uint8))

    plt.show()

A Few Details

These datasets are all fetched from Google-hosted addresses, so inside mainland China you may need a proxy. I provide a download link here; extract the downloaded archive and place the resulting data under the ~/.keras directory, creating it if it does not exist.
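
For reference, here is a minimal sketch of where Keras expects that cache to live. The layout below follows the standard keras.utils.get_file convention: downloads are cached under ~/.keras, with datasets under ~/.keras/datasets.

# -*- coding: utf8 -*-
import os

# keras.utils.get_file caches downloads under ~/.keras by default,
# with datasets going into ~/.keras/datasets; if a file with the
# expected name is already there, the network fetch is skipped,
# so manually downloaded files can simply be dropped in
cache_dir = os.path.expanduser("~/.keras")
datasets_dir = os.path.join(cache_dir, "datasets")
os.makedirs(datasets_dir, exist_ok=True)
print("place downloaded dataset files under:", datasets_dir)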
