Car Evaluation Classifier Code Walkthrough (to be continued)

First, the data_processing file:

import pandas as pd
from urllib.request import urlretrieve


def load_data(download=True):
    # download data from : http://archive.ics.uci.edu/ml/datasets/Car+Evaluation
    if download:
        data_path, _ = urlretrieve("http://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data", "car.csv")
        print("Downloaded to car.csv")

    # use pandas to view the data structure
    col_names = ["buying", "maint", "doors", "persons", "lug_boot", "safety", "class"]
    data = pd.read_csv("car.csv", names=col_names)
    return data


def convert2onehot(data):
    # convert data to one-hot representation
    return pd.get_dummies(data, prefix=data.columns)


if __name__ == "__main__":
    data = load_data(download=True)
    new_data = convert2onehot(data)

    print(data.head())
    print("\nNum of data: ", len(data), "\n")  # 1728
    # view data values
    for name in data.keys():
        print(name, pd.unique(data[name]))
    print("\n", new_data.head(2))
    new_data.to_csv("car_onehot.csv", index=False)

1. pd.read_csv
Purpose: reads the CSV file and, via the names= argument, attaches a row of column labels -- the header that sits above the data.
It returns a pandas DataFrame:
data: <class 'pandas.core.frame.DataFrame'>

col_names = ["buying", "maint", "doors", "persons", "lug_boot", "safety", "class"]
data = pd.read_csv("car.csv", names=col_names)
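A minimal sketch of how names= attaches a header (using an in-memory buffer with two made-up rows in the car.csv format, so nothing needs to be downloaded):

```python
import io
import pandas as pd

# Simulate a headerless CSV like car.csv with an in-memory buffer
csv_text = "vhigh,vhigh,2,2,small,low,unacc\nvhigh,vhigh,2,2,small,med,unacc\n"
col_names = ["buying", "maint", "doors", "persons", "lug_boot", "safety", "class"]
data = pd.read_csv(io.StringIO(csv_text), names=col_names)

print(type(data))          # <class 'pandas.core.frame.DataFrame'>
print(list(data.columns))  # the names we supplied become the header
```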

2. get_dummies(data, prefix=data.columns)
pandas really is powerful here: it can generate the one-hot encoding directly, using data.columns as the prefix for each dummy column.
https://blog.csdn.net/qq_35290785/article/details/91415240

return pd.get_dummies(data, prefix=data.columns)
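A small sketch of what get_dummies produces, on a made-up two-column frame (the column names here are just illustrative, not the full car dataset):

```python
import pandas as pd

# A toy frame with two categorical columns
df = pd.DataFrame({"safety": ["low", "med", "high"], "doors": ["2", "3", "2"]})

# Each categorical column expands into one column per category,
# named "<prefix>_<category>"
onehot = pd.get_dummies(df, prefix=df.columns)
print(list(onehot.columns))
# ['safety_high', 'safety_low', 'safety_med', 'doors_2', 'doors_3']
```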

3. data.head()
Returns the first five rows by default.
https://blog.csdn.net/qq_18649781/article/details/89033749

print(data.head())

4. pd.unique
Returns the unique values of a column, in order of first appearance.

    for name in data.keys():
        print(name, pd.unique(data[name]))
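A quick sketch of pd.unique on a made-up Series -- note it preserves the order of first appearance rather than sorting:

```python
import pandas as pd

s = pd.Series(["low", "med", "low", "high", "med"])
print(pd.unique(s))  # ['low' 'med' 'high'] -- order of first appearance, not sorted
```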

5. to_csv
dt.to_csv()  # assuming dt is a DataFrame instance
index=False means the row index -- the leftmost vertical column of labels -- is not written to the file.

new_data.to_csv("car_onehot.csv", index=False)
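To see exactly what index=False drops, here is a sketch writing a tiny made-up frame to an in-memory buffer instead of a file:

```python
import io
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# With index=False the leftmost row-index column is omitted from the output
buf = io.StringIO()
df.to_csv(buf, index=False)
print(buf.getvalue())  # "a,b\n1,3\n2,4\n"
```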

In summary, the data_processing code builds the one-hot CSV file.

Next, let's look at the model file:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import data_processing

data = data_processing.load_data(download=True)
new_data = data_processing.convert2onehot(data)


# prepare training data
new_data = new_data.values.astype(np.float32)       # change to numpy array and float32
np.random.shuffle(new_data)
sep = int(0.7*len(new_data))
train_data = new_data[:sep]                         # training data (70%)
test_data = new_data[sep:]                          # test data (30%)


# build network
tf_input = tf.placeholder(tf.float32, [None, 25], "input")
tfx = tf_input[:, :21]                              # first 21 columns: one-hot features
tfy = tf_input[:, 21:]                              # last 4 columns: one-hot class labels

l1 = tf.layers.dense(tfx, 128, tf.nn.relu, name="l1")
l2 = tf.layers.dense(l1, 128, tf.nn.relu, name="l2")
out = tf.layers.dense(l2, 4, name="l3")
prediction = tf.nn.softmax(out, name="pred")


loss = tf.losses.softmax_cross_entropy(onehot_labels=tfy, logits=out)
accuracy = tf.metrics.accuracy(          # return (acc, update_op), and create 2 local variables
    labels=tf.argmax(tfy, axis=1), predictions=tf.argmax(out, axis=1))[1]
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_op = opt.minimize(loss)

sess = tf.Session()
sess.run(tf.group(tf.global_variables_initializer(), tf.local_variables_initializer()))

# training
plt.ion()
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
accuracies, steps = [], []
for t in range(4000):
    # training
    batch_index = np.random.randint(len(train_data), size=32)
    sess.run(train_op, {tf_input: train_data[batch_index]})

    if t % 50 == 0:
        # testing
        acc_, pred_, loss_ = sess.run([accuracy, prediction, loss], {tf_input: test_data})
        accuracies.append(acc_)
        steps.append(t)
        print("Step: %i" % t,"| Accurate: %.2f" % acc_,"| Loss: %.2f" % loss_,)

        # visualize testing
        ax1.cla()
        for c in range(4):
            bp = ax1.bar(c+0.1, height=sum((np.argmax(pred_, axis=1) == c)), width=0.2, color='red')
            bt = ax1.bar(c-0.1, height=sum((np.argmax(test_data[:, 21:], axis=1) == c)), width=0.2, color='blue')
        ax1.set_xticks(range(4))
        ax1.set_xticklabels(["accepted", "good", "unaccepted", "very good"])
        ax1.legend(handles=[bp, bt], labels=["prediction", "target"])
        ax1.set_ylim((0, 400))
        ax2.cla()
        ax2.plot(steps, accuracies, label="accuracy")
        ax2.set_ylim(ymax=1)
        ax2.set_ylabel("accuracy")
        plt.pause(0.01)

plt.ioff()
plt.show()

1. tf.nn.softmax
Normalizes the given logits: every element becomes a number between 0 and 1, and together they sum to 1.
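What softmax computes can be sketched in plain NumPy (this is a stand-in for the TF op, not its internals; the logits are made-up values):

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability; mathematically unchanged
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs.sum())  # sums to 1 (up to float rounding)
```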

2. tf.metrics.accuracy
tf.metrics.accuracy returns two values: accuracy is the accuracy accumulated up to the previous batch, and update_op folds in the current batch and returns the updated accuracy.
https://blog.csdn.net/lyb3b3b/article/details/83047148
Because it creates local variables, those must be initialized with sess.run(tf.local_variables_initializer()) --
which is why the session setup above initializes local variables as well.
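The streaming behavior can be mimicked in plain NumPy -- a sketch, not the TF internals, where two accumulators play the role of the two local variables the metric creates:

```python
import numpy as np

class StreamingAccuracy:
    """Mimics the two local variables (total, count) behind tf.metrics.accuracy."""
    def __init__(self):
        self.total = 0    # correct predictions seen so far
        self.count = 0    # examples seen so far

    def update(self, labels, predictions):
        # Like update_op: fold in this batch, return cumulative accuracy
        self.total += int(np.sum(labels == predictions))
        self.count += len(labels)
        return self.total / self.count

acc = StreamingAccuracy()
print(acc.update(np.array([0, 1, 1]), np.array([0, 1, 0])))  # 2/3 after batch 1
print(acc.update(np.array([1, 1, 1]), np.array([1, 1, 1])))  # 5/6 cumulative
```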

3. plt.ion()
Turns on interactive mode: to update a figure dynamically during training, interactive mode is required.
https://www.cnblogs.com/wmy-ncut/p/10172601.html

4. plt.subplots
The first two arguments are the number of rows and columns of subplots.
figsize=(8, 4) sets the figure size (width, height) in inches.

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

5. numpy.random.randint(low, high=None, size=None, dtype='l')
Returns random integers in the half-open interval [low, high) -- low inclusive, high exclusive.
If high is omitted, values are drawn from [0, low).
size: int or tuple of ints (optional)
The shape of the output; e.g. size=(m, n, k) yields m * n * k random integers in that shape. Defaults to None, which returns a single random integer.
https://blog.csdn.net/u011851421/article/details/83544853
In short: an int gives the number of random elements; a tuple gives the rows and columns of the array.
So here it generates an array like array([1, 5, 6, 8, 4, 3, ...]) with 32 elements.

batch_index = np.random.randint(len(train_data), size=32)
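A runnable sketch of that call, with a made-up upper bound of 10 in place of len(train_data) and a fixed seed for reproducibility:

```python
import numpy as np

np.random.seed(0)                     # fixed seed so the sketch is reproducible
idx = np.random.randint(10, size=32)  # 32 draws from [0, 10)
print(idx.shape)                      # (32,)
print(idx.min() >= 0, idx.max() < 10)
```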

6. The feed_dict= keyword no longer needs to be written out; the dict can simply be passed positionally:

sess.run(train_op, {tf_input: train_data[batch_index]})

7. ax1.cla()
plt.cla() clears the active axes in the current figure, leaving the other axes untouched.
(Some doubts remain here for now.)

8.ax1.bar
https://blog.csdn.net/liangzuojiayi/article/details/78187704

  1. left: the sequence of x-axis positions, usually produced with arange;
  2. height: the sequence of y values, i.e. the bar heights -- the data we actually want to show;
  3. alpha: transparency;
  4. width: the bar width; 0.8 is usually fine;
  5. color or facecolor: the fill color of the bars;
  6. edgecolor: the edge color;
  7. label: the legend label explaining what each bar series means;
  8. linewidth or linewidths or lw: the width of the edges/lines.
 bp = ax1.bar(c+0.1, height=sum((np.argmax(pred_, axis=1) == c)), width=0.2, color='red')
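The height expression above just counts how many samples land in each class. A NumPy sketch with a made-up 5x4 matrix of softmax outputs:

```python
import numpy as np

# Hypothetical softmax outputs for 5 samples over 4 classes
pred_ = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.8, 0.1, 0.05, 0.05],
    [0.2, 0.2, 0.5, 0.1],
    [0.6, 0.2, 0.1, 0.1],
])

# Height of bar c = number of samples whose argmax is class c
heights = [int(np.sum(np.argmax(pred_, axis=1) == c)) for c in range(4)]
print(heights)  # [3, 1, 1, 0]
```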

I'll leave the rest for later; matplotlib is a bit of a hassle.

Supplement:
1. Building a DataFrame:
https://blog.csdn.net/qq_39161737/article/details/78866399
https://blog.csdn.net/weixin_40240670/article/details/80506402

Added model saving, and pulled out 2 hand-made test samples. data_processing:

import pandas as pd
from urllib.request import urlretrieve
import numpy as np


def load_data(download=True):
    # download data from : http://archive.ics.uci.edu/ml/datasets/Car+Evaluation
    if download:
        data_path, _ = urlretrieve("http://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data", "car.csv")
        print("Downloaded to car.csv")

    # use pandas to view the data structure
    col_names = ["buying", "maint", "doors", "persons", "lug_boot", "safety", "class"]
    data = pd.read_csv("car.csv", names=col_names)
    data_test = pd.read_csv("car_test.csv", names=col_names)
    # print(data_test)   #<class 'pandas.core.frame.DataFrame'>
    return data, data_test


def convert2onehot(data, train):
    # convert data to one-hot representation
    if train:
        print(type(pd.get_dummies(data, prefix=data.columns)))
        return pd.get_dummies(data, prefix=data.columns)
    else:
        x = [[0,0,0,1,0,0,0,1,1,0,0,0,1,0,0,0,0,1,0,1,0,0,0,1,0],[0,0,0,1,0,0,0,1,1,0,0,0,1,0,0,0,0,1,0,0,1,0,0,1,0]]
        return pd.DataFrame(x)




if __name__ == "__main__":
    data, data_test= load_data(download=True)
    new_data = convert2onehot(data, train=True)
    new_data_test = convert2onehot(data_test, train=False)
    # new_data_test = new_data_test.values.astype(np.float32)
    # print("自己建立的是:", new_data_test)
    # print("new:", new_data)

    print(data.head())
    print("\nNum of data: ", len(data), "\n")  # 1728, i.e. 1728 rows
    # view data values
    for name in data.keys():
        print(name, pd.unique(data[name]))
    print("\n", new_data.head(2))

    new_data.to_csv("car_onehot.csv", index=False)
    # new_data_test.to_csv("car_onehot_test.csv", index=False)

model:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import data_processing
import os

tf.app.flags.DEFINE_integer("is_train", 2, "whether to train the model (1) or use it for prediction (other)")
FLAGS = tf.app.flags.FLAGS

data, data_test = data_processing.load_data(download=True)

new_data = data_processing.convert2onehot(data, train=True)
new_data_test = data_processing.convert2onehot(data_test, train=False)


# prepare training data
new_data = new_data.values.astype(np.float32)       # change to numpy array and float32
new_data_test = new_data_test.values.astype(np.float32)
# print("debug:", new_data)
np.random.shuffle(new_data)
sep = int(0.7*len(new_data))
train_data = new_data[:sep]                         # training data (70%)
test_data = new_data[sep:]                          # test data (30%)
realtest_data = new_data_test[::]


np.random.shuffle(new_data_test)

# build network
tf_input = tf.placeholder(tf.float32, [None, 25], "input")   # the last 4 columns are the car evaluation classes
tfx = tf_input[:, :21]
tfy = tf_input[:, 21:]

# tfx_test = tf_input[:, :21]
# tfy_test = tf_input[:, 21:]

l1 = tf.layers.dense(tfx, 128, tf.nn.relu, name="l1")
l2 = tf.layers.dense(l1, 128, tf.nn.relu, name="l2")
out = tf.layers.dense(l2, 4, name="l3")
prediction = tf.nn.softmax(out, name="pred")
# print("prediction:", prediction)


loss = tf.losses.softmax_cross_entropy(onehot_labels=tfy, logits=out)
accuracy = tf.metrics.accuracy(          # return (acc, update_op), and create 2 local variables
    labels=tf.argmax(tfy, axis=1), predictions=tf.argmax(out, axis=1))[1]
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_op = opt.minimize(loss)


# model saving
# (2) collect the variables to display
# first collect loss and accuracy
tf.summary.scalar("losses", loss)
tf.summary.scalar("accuracy", accuracy)

# (3) merge all summary ops
merged = tf.summary.merge_all()
# create the saver for model saving and loading
saver = tf.train.Saver()


with tf.Session() as sess:

    sess.run(tf.group(tf.global_variables_initializer(), tf.local_variables_initializer()))



    # (1) create an events-file writer instance
    file_writer = tf.summary.FileWriter("./tmp/summary2/", graph=sess.graph)

    # load the model
    if os.path.exists("./tmp/modelckpt2/checkpoint"):
        saver.restore(sess, "./tmp/modelckpt2/cnn_model")  # note: the modelckpt2 folder must be created by hand
        # i.e. the directory used by saver.save / saver.restore must already exist, otherwise it will crash
        # the path for the events-file instance, however, does not need to be created manually






    # training+test1

    if FLAGS.is_train == 1:
        accuracies, steps = [], []
        for t in range(4000):
            # training
            batch_index = np.random.randint(len(train_data), size=32)    # draw 32 random indices, returned as an array
            sess.run(train_op, {tf_input: train_data[batch_index]})      # note: feed_dict= no longer needs to be written out

            # run the merged summary op and write the result to the events file
            summary = sess.run(merged, {tf_input: train_data[batch_index]})
            file_writer.add_summary(summary, t)
            if t % 2 == 0:
                saver.save(sess, "./tmp/modelckpt2/cnn_model")

            if t % 50 == 0:
                # testing
                acc_, pred_, loss_ = sess.run([accuracy, prediction, loss], {tf_input: test_data})
                accuracies.append(acc_)
                steps.append(t)
                print("Step: %i" % t,"| Accurate: %.2f" % acc_,"| Loss: %.2f" % loss_,)

    # test2
    else:
        for t in range(2):
            # prediction on the hand-made samples
            batch_index = np.random.randint(len(realtest_data), size=1)
            # sess.run(train_op, {tf_input_test: test_real[batch_index]})
            acc_, pred_, loss_ = sess.run([accuracy, prediction, loss], {tf_input: realtest_data[batch_index]})
            print("ground truth:", sess.run(tf.argmax(realtest_data[batch_index][:, 21:], axis=1)))
            print("prediction:", sess.run(tf.argmax(pred_, axis=1)))
