TensorFlow 2.0 Simple License Plate Recognition

Overview: a character classification model is trained with TensorFlow 2, the model is then frozen, and inference is done with OpenCV's dnn module. This is only a simple demo; at its core it is just a classifier. Most of the work is in segmenting the plate into individual characters, and on the inference side the main thing to watch is that the input image format matches what was used in training.

GitHub

1. Training the model with TensorFlow
(1) Dataset preparation
The license plate character dataset provided on Baidu AI Studio is used.

A few sample images from the dataset (the file naming is fairly messy).

(2) Dataset processing
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import ReduceLROnPlateau

data_path = 'E:\\code\\tf\\proj\\car_num\\data'
character_folders = os.listdir(data_path)
label = 0
LABEL_temp = {}
if(os.path.exists('./train_data.list')):
    os.remove('./train_data.list')
if(os.path.exists('./test_data.list')):
    os.remove('./test_data.list')
for character_folder in character_folders:
    with open('./train_data.list', 'a') as f_train:
        with open('./test_data.list', 'a') as f_test:
            if character_folder == '.DS_Store' or character_folder == '.ipynb_checkpoints' or character_folder == 'data23617':
                continue
            print(character_folder + " " + str(label))
            LABEL_temp[str(label)] = character_folder     # keep track of the label-to-folder-name mapping
            character_imgs = os.listdir(os.path.join(data_path, character_folder))
            for i in range(len(character_imgs)):
                if i%10 == 0:
                    f_test.write(os.path.join(os.path.join(data_path, character_folder), character_imgs[i]) + "\t" + str(label) + '\n')
                else:
                    f_train.write(os.path.join(os.path.join(data_path, character_folder), character_imgs[i]) + "\t" + str(label) + '\n')
    label = label + 1
print('Image lists generated')
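The LABEL_temp dictionary above holds the index-to-folder-name mapping only in memory; persisting it makes it easier to keep the label order used at inference time (e.g. the label_list in the C++ code below) consistent with training. A minimal sketch, assuming a file name label_map.json of my own choosing:

import json

# Save the label index -> character folder mapping (hypothetical file name).
with open('./label_map.json', 'w', encoding='utf-8') as f:
    json.dump(LABEL_temp, f, ensure_ascii=False, indent=2)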

all_image_paths = []
all_image_labels = []
test_image_paths = []
test_image_labels = []
with open('./train_data.list', 'r') as f:
    lines = f.readlines()
    for line in lines:
        img, label = line.split('\t')
        all_image_paths.append(img)
        all_image_labels.append(int(label))
with open('./test_data.list', 'r') as f:
    lines = f.readlines()
    for line in lines:
        img, label = line.split('\t')
        test_image_paths.append(img)
        test_image_labels.append(int(label))
def preprocess_image(image):
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.cast(image,dtype=tf.float32)
    image = tf.image.resize(image, [20, 20])
    image /= 255.0  # normalize to [0,1] range
    return image

def load_and_preprocess_image(path,label):
    image = tf.io.read_file(path)
    return preprocess_image(image), label


ds = tf.data.Dataset.from_tensor_slices((all_image_paths, all_image_labels))
train_data = ds.map(load_and_preprocess_image).batch(64)
db = tf.data.Dataset.from_tensor_slices((test_image_paths, test_image_labels))
test_data = db.map(load_and_preprocess_image).batch(64)
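Before building the network it is worth checking that the pipeline really produces batches of shape (batch, 20, 20, 3) with values in [0, 1], since that is what the model below is built against. A quick sanity check (a sketch, not part of the original post):

# Inspect one batch: shape, dtype and value range of the images.
for images, labels in train_data.take(1):
    print(images.shape, images.dtype)    # expected: (64, 20, 20, 3) float32
    print(labels.shape, labels.dtype)    # expected: (64,) int32
    print(float(tf.reduce_min(images)), float(tf.reduce_max(images)))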
(3) Building the network and training the model
def train_model(train_data,test_data):
    # build the model
    network = keras.Sequential([
        keras.layers.Conv2D(32, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
        keras.layers.BatchNormalization(),
        keras.layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),
        keras.layers.Conv2D(64, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
        keras.layers.BatchNormalization(),
        keras.layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),
        keras.layers.Conv2D(64, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
        keras.layers.BatchNormalization(),
        keras.layers.Flatten(),
        keras.layers.Dense(512, activation='relu'),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(65)])
    network.build(input_shape=(None, 20, 20, 3))
    network.summary()

    reduce_lr = ReduceLROnPlateau(monitor='val_loss', patience=10, mode='auto')
    network.compile(optimizer=optimizers.SGD(learning_rate=0.001),
                    loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                    metrics=['accuracy'])
    network.fit(train_data, epochs=100, validation_data=test_data, callbacks=[reduce_lr])
    network.evaluate(test_data)
    tf.saved_model.save(network, 'E:\\code\\tf\\proj\\car_num\\model\\')
train_model(train_data,test_data)

The model reaches roughly 97% accuracy on the test set (I forgot to take a screenshot).

2. Model conversion (freezing the graph)
import os
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
import matplotlib.pyplot as plt
os.environ["TF_CPP_MIN_LOG_LEVEL"] = '2'
print('TF version:', tf.__version__)

DEFAULT_FUNCTION_KEY = "serving_default"
loaded = tf.saved_model.load('.\\model\\')
network = loaded.signatures[DEFAULT_FUNCTION_KEY]
print(list(loaded.signatures.keys()))
print('Weights loaded successfully')

# Convert Keras model to ConcreteFunction
full_model = tf.function(lambda x: network(x))
full_model = full_model.get_concrete_function(
    tf.TensorSpec(network.inputs[0].shape, network.inputs[0].dtype))

# Get frozen ConcreteFunction
frozen_func = convert_variables_to_constants_v2(full_model)
frozen_func.graph.as_graph_def()

layers = [op.name for op in frozen_func.graph.get_operations()]
print("-" * 50)
print("Frozen model layers: ")
for layer in layers:
    print(layer)
print("Frozen model inputs: ")
print(frozen_func.inputs)
print("Frozen model outputs: ")
print(frozen_func.outputs)
# Save frozen graph from frozen ConcreteFunction to hard drive
tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                  logdir="./frozen_models",
                  name="frozen_graph.pb",
                  as_text=False)
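Before moving to C++, it can help to verify, still in Python, that the frozen graph loads in OpenCV's dnn module and that the preprocessing matches training (resize to 20x20, scale to [0, 1]). A minimal sketch, assuming a segmented character image test_char.png (a hypothetical file name):

import cv2
import numpy as np

net = cv2.dnn.readNetFromTensorflow('./frozen_models/frozen_graph.pb')

img = cv2.imread('test_char.png')                      # hypothetical test image
blob = cv2.dnn.blobFromImage(img.astype(np.float32) / 255.0,
                             scalefactor=1.0,
                             size=(20, 20),
                             mean=(0, 0, 0))
net.setInput(blob)
out = net.forward()
print(int(np.argmax(out)))                             # predicted class index

Since the segmented characters are essentially binary images, the BGR/RGB channel order does not matter here; for color inputs it would have to match the RGB order produced by tf.image.decode_jpeg during training.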
3. Inference with OpenCV (C++)
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

#define DEBUG
using namespace std;
using namespace cv;
using namespace cv::dnn;

string label_list[] = {"0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "A", "B",
              "C", "川",
              "D", "E", "鄂", "F",
              "G", "赣", "甘", "贵", "桂",
              "H", "黑", "沪",
              "J", "冀", "津", "京", "吉",
              "K", "L", "辽", "鲁", "M", "蒙", "闽",
              "N", "宁",
              "P", "Q", "青", "琼",
              "R", "S", "陕", "苏", "晋",
              "T", "U", "V", "W", "皖",
              "X", "湘", "新",
              "Y", "豫", "渝", "粤", "云",
              "Z", "藏", "浙"};
int main()
{
    string  model_path = "../infer/frozen_graph.pb";
    Net net = readNetFromTensorflow(model_path);
    Mat license_plate = imread("E:\\code\\tf\\proj\\car_num\\zf.png",1);
    int image_h = license_plate.rows;
//    cout<<image_h<<endl;
    Mat gray_plate;
    Mat binary_plate;
    cvtColor(license_plate,gray_plate,COLOR_BGR2GRAY);
    threshold(gray_plate,binary_plate,175,255,THRESH_OTSU);


    // Vertical projection: sum of the pixel values in each column of the binary plate.
    vector<uint> pix(binary_plate.cols, 0);

    for(int i = 0; i < binary_plate.cols; i++)
    {
        for(int j = 0; j < binary_plate.rows; j++)
        {
            pix[i] = pix[i] + binary_plate.at<uchar>(j, i);
        }
//        printf("%d ",pix[i]);
//        cout<<endl;
    }

    // Scan the projection for runs of non-zero columns; each run is one character.
    uint i = 0;
    uint num = 0;
    vector<Point> index_range;   // x = start column, y = end column of one character
    while(i < pix.size())
    {
        if (pix[i] == 0)
        {
            i += 1;
        }
        else
        {
            uint index = i + 1;
            while(index < pix.size() && pix[index] != 0)
            {
                index += 1;
            }
            index_range.push_back(Point(i, index - 1));
            num += 1;
            i = index;
        }
    }
//    cout<<index_range.size();
    // Crop each character, pad it to a square and save it to disk;
    // index 2 is the separator dot on the plate, so it is skipped.
    vector<Mat> seg_img;
    for(uint i = 0, num = 0; i < index_range.size(); i++)
    {
        if(i == 2)
            continue;
        Mat img(binary_plate, Rect(index_range[i].x, 0, index_range[i].y - index_range[i].x, binary_plate.rows));
        Mat temp_img = Mat::zeros(img.size(), CV_8UC1);
//        cout<<index_range[i].y<<","<<index_range[i].x<<endl;
        int pad = (binary_plate.rows - (index_range[i].y - index_range[i].x)) / 2;
        img.copyTo(temp_img);
        copyMakeBorder(temp_img, temp_img, 0, 0, pad, pad, cv::BORDER_CONSTANT, Scalar(0, 0, 0));

        imwrite(to_string(num++) + ".png", temp_img);
        imshow(to_string(i + 10), temp_img);
    }
    cout <<"车牌识别结果:";
    for(int i = 0;i<7;i++)
    {
        Mat frame = imread(to_string(i)+".png",1);
        Mat frame_32F;
        frame.convertTo(frame_32F,CV_32FC1);

        Mat blob = blobFromImage(frame_32F/255.0,
                                      1.0,
                                      Size(20,20),
                                      Scalar(0,0,0));

//        cout<<(blob.size);
        net.setInput(blob);
        Mat out = net.forward();
        Point maxclass;
        minMaxLoc(out, NULL, NULL, NULL, &maxclass);
        cout <<label_list[maxclass.x];
    }
    cout<<endl;

#ifdef DEBUG
    imshow("1",license_plate);
    imshow("2",gray_plate);
    imshow("3",binary_plate);
    while(1)
        if(waitKey(0) == '1')
            break;
#endif
    return 0;
}
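The bulk of the work is in the character segmentation, so here is the same column-projection idea as a Python/OpenCV sketch, handy for prototyping before porting to C++ (my own rewrite, assuming a cropped plate image plate.png; not part of the original post):

import cv2
import numpy as np

plate = cv2.imread('plate.png')                       # hypothetical cropped plate image
gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 175, 255, cv2.THRESH_OTSU)

# Column-wise sum of pixel values: zero columns are the gaps between characters.
proj = binary.sum(axis=0)

# Collect (start, end) column ranges of consecutive non-zero columns.
ranges, start = [], None
for x, v in enumerate(proj):
    if v != 0 and start is None:
        start = x
    elif v == 0 and start is not None:
        ranges.append((start, x - 1))
        start = None
if start is not None:
    ranges.append((start, len(proj) - 1))

# Crop each character and pad it to a square, as the C++ code does (index 2 is the dot).
chars = []
for i, (x0, x1) in enumerate(ranges):
    if i == 2:
        continue
    char = binary[:, x0:x1 + 1]
    pad = max((binary.shape[0] - (x1 - x0)) // 2, 0)
    char = cv2.copyMakeBorder(char, 0, 0, pad, pad, cv2.BORDER_CONSTANT, value=0)
    chars.append(char)
print(len(chars), 'characters segmented')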
