TFLite study notes: generating a .tflite model and validating it

Validating with a ResNet model

Step 1: Load the ResNet50 model

import numpy as np
import tensorflow as tf

physical_devices = tf.config.list_physical_devices('GPU')
for device in physical_devices:  # enable GPU memory growth
    tf.config.experimental.set_memory_growth(device, True)

model = tf.keras.applications.resnet50.ResNet50(weights='imagenet')  # load the pretrained ResNet50 model
img = tf.keras.utils.load_img('123.jpg', target_size=[224, 224])  # load the image at the expected input size

x = tf.keras.preprocessing.image.img_to_array(img)  # convert the image to a (224, 224, 3) array
x = np.expand_dims(x, axis=0)                       # add the batch dimension -> (1, 224, 224, 3)
x = tf.keras.applications.resnet50.preprocess_input(x)

preds = model.predict(x)  # run the prediction
# Decode the result into a list of (class, description, probability) tuples
# (one list per sample in the batch)
print('Predicted:', tf.keras.applications.resnet50.decode_predictions(preds, top=3)[0])








Result: Predicted: [('n03887697', 'paper_towel', 0.71528304), ('n15075141', 'toilet_tissue', 0.16052581), ('n02948072', 'candle', 0.03291733)]

Step 2: Save the ResNet model in the SavedModel format (two approaches)

(1) Low-level API

Save: tf.saved_model.save(model, path)

After saving this way, the directory contains assets/, variables/, and saved_model.pb.
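A minimal sketch of this save step (the directory name 'resnet50_low_level_save/' is the one the loading code below expects; model is the ResNet50 instance from Step 1):

tf.saved_model.save(model, 'resnet50_low_level_save/')  # writes assets/, variables/ and saved_model.pb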

Load: tf.saved_model.load(path_to_dir)

The code is as follows:

load_model = tf.saved_model.load('resnet50_low_level_save/')  # load the SavedModel
labeling = load_model(x)  # run inference on the preprocessed input x from Step 1
imagenet_labels = np.array(open('ImageNetLabels.txt').read().splitlines())  # ImageNetLabels.txt lists the 1000 class names plus a leading 'background' entry, hence the +1 offset below
decoded = imagenet_labels[np.argsort(labeling)[0, ::-1][:3] + 1]
print(decoded)

(2) High-level API

Save: tf.keras.models.save_model(model, path)

After saving this way, the directory looks almost the same, but with an extra keras_metadata.pb file.

Load: tf.keras.models.load_model(path_to_dir)

The code is as follows:

tf.keras.models.save_model(model, 'resnet50_high_level_save/')
load_model = tf.keras.models.load_model('resnet50_high_level_save/')  # load the model
preds = load_model.predict(x)
decoded = imagenet_labels[np.argsort(preds)[0, ::-1][:3] + 1]

The advantage of this approach is that the reloaded object is still a regular Keras model, so the Keras API remains available.
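For example, a quick sanity check might look like this (a sketch reusing load_model, x, and preds from the block above):

load_model.summary()  # architecture and parameter counts are preserved
print('Predicted:', tf.keras.applications.resnet50.decode_predictions(preds, top=3)[0])  # Keras decode helper still applies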

Step 3: Generate the .tflite model from the SavedModel

(1) Low-level API

import pathlib

converter = tf.lite.TFLiteConverter.from_saved_model('resnet50_low_level_save')
tflite_model = converter.convert()

tflite_model_file = pathlib.Path('resnet50_low_level.tflite')
tflite_model_file.write_bytes(tflite_model)

(2) High-level API

converter = tf.lite.TFLiteConverter.from_saved_model('resnet50_high_level_save')
tflite_model = converter.convert()

tflite_model_file = pathlib.Path('resnet50_high_level.tflite')
tflite_model_file.write_bytes(tflite_model)
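As a side note, the converter can also start directly from the in-memory Keras model instead of a SavedModel directory; a minimal sketch (the output file name resnet50_from_keras.tflite is just an example):

import pathlib
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # model is the ResNet50 instance from Step 1
tflite_model = converter.convert()
pathlib.Path('resnet50_from_keras.tflite').write_bytes(tflite_model)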

Step 4: Validate the .tflite model's output

(1) Low-level-API model, input image preprocessed with TensorFlow

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="resnet50_low_level.tflite")
# Get input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.allocate_tensors()
imagenet_labels = np.array(open('ImageNetLabels.txt').read().splitlines())


################################### Input preprocessing, variant 1: TensorFlow/Keras ###################################
img_path = '123.jpg'
img = tf.keras.preprocessing.image.load_img(img_path, target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = tf.keras.applications.resnet50.preprocess_input(x)

print(np.shape(x))  # (1, 224, 224, 3)

interpreter.set_tensor(input_details[0]['index'], x)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])

top3 = np.argsort(output_data)[0, ::-1][:3]
decoded = imagenet_labels[top3 + 1]
print(output_data[0, top3])          # top-3 scores
print("TFLite top-3:\n", decoded)

Result: essentially the same top-3 as the original Keras prediction above, which confirms that the conversion worked correctly.
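For a check that is stricter than eyeballing the printed top-3 list, the two output vectors can be compared numerically; a small sketch, assuming preds from Step 1 and output_data from the block above are still in scope:

print('max abs diff:', np.max(np.abs(preds - output_data)))         # element-wise difference of the 1000 class scores
print('same top-1  :', np.argmax(preds) == np.argmax(output_data))  # do both models agree on the best class?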

(2) Low-level-API model, input image preprocessed with OpenCV

import numpy as np
import tensorflow as tf
import cv2

interpreter = tf.lite.Interpreter(model_path="resnet50_low_level.tflite")
# Get input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.allocate_tensors()
imagenet_labels = np.array(open('ImageNetLabels.txt').read().splitlines())


################################### Input preprocessing, variant 2: OpenCV ###################################
img = cv2.imread("123.jpg")                        # note: OpenCV loads images as BGR
new_img = cv2.resize(img, (224, 224))
new_img = new_img.astype(dtype=np.float32)
# new_img = tf.cast(new_img, dtype=np.float32)     # alternative cast
x = np.expand_dims(new_img, axis=0)                # no preprocess_input here: raw pixel values are fed to the model

print(np.shape(x))

interpreter.set_tensor(input_details[0]['index'], x)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])

top3 = np.argsort(output_data)[0, ::-1][:3]
decoded = imagenet_labels[top3 + 1]
print(output_data[0, top3])          # top-3 scores
print("TFLite top-3 (OpenCV input):\n", decoded)

Judging from the results, the image preprocessed with OpenCV is actually recognized even better here.
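Note, though, that this OpenCV variant never calls resnet50.preprocess_input, so the interpreter receives raw BGR pixel values without mean subtraction, whereas the TensorFlow variant feeds fully preprocessed data. For a like-for-like comparison, a sketch of OpenCV preprocessing that mirrors the Keras pipeline could look like this (cv2.imread returns BGR, hence the explicit conversion to RGB):

img = cv2.imread("123.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)                # Keras helpers expect RGB input
img = cv2.resize(img, (224, 224)).astype(np.float32)
x = np.expand_dims(img, axis=0)
x = tf.keras.applications.resnet50.preprocess_input(x)    # same preprocessing as the TensorFlow variant

interpreter.set_tensor(input_details[0]['index'], x)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])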

(3) High-level-API model, input image preprocessed with TensorFlow

The only change needed is to swap the .tflite file generated from the low-level save for the high-level one.
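Concretely, only the model_path argument changes; for example:

interpreter = tf.lite.Interpreter(model_path="resnet50_high_level.tflite")
interpreter.allocate_tensors()
# ...the rest of the preprocessing / invoke / decoding code is identical to (1) above.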

 

Conclusion: which SavedModel variant was used has no effect on the TFLite results.

(4) High-level-API model, input image preprocessed with OpenCV

Again, just swap in the .tflite model generated from the high-level save.

 
