TensorFlow 2 pruning with the tensorflow_model_optimization API

There are relatively few worked examples of pruning and quantization for TensorFlow. Since I happen to be working on this, I am porting over some applications from the official documentation.

The code below combines an official MNIST example with the comprehensive guide to look at how the pruning step is done in the TensorFlow API.

tensorflow/model-optimization--comprehensive_guide

pruning_with_keras


The overall approach: build a baseline model → add the pruning step → compare the changes in model size, accuracy, and so on.

Pay particular attention to how to customize your own pruning case and the follow-up quantization.

Contents

1. Import the dependencies; TensorBoard does not seem to be used later, so it is commented out for now

2. Load the MNIST dataset and do some simple normalization

3. Build a baseline model and save its weights to make later performance comparisons easy

4. Apply prune_low_magnitude to the entire model to build a pruned model, and see how the model changes before and after

5. Apply prune_low_magnitude to a selected layer (the Dense layer here) to build a pruned model, and see how the model changes

6. Customize the pruning operation

7. TensorBoard visualization

8. Save the model and compare accuracy and model size

 

Tips for improving the accuracy of a pruned model:

Common mistake: 
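According to the comprehensive guide, the common mistake is assuming that wrapping a model with prune_low_magnitude shrinks it by itself: both strip_pruning and a standard compression algorithm (e.g. gzip) are needed to actually see the compression benefit. A minimal sketch of the stripping part, assuming a trained model_for_pruning like the one built in step 4 below:

import tensorflow_model_optimization as tfmot

# Remove the pruning wrappers and their auxiliary mask/threshold variables;
# only the stripped model benefits from compression when saved.
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)

Step 8 at the end of this post shows how to measure the compressed size of the exported model.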



import tempfile
import os
import zipfile
import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot
from tensorflow import keras

#%load_ext tensorboard

1. Import the dependencies; TensorBoard does not seem to be used later, so it is commented out for now

# Load the MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the image pixel values to [0, 1]
train_images = train_images / 255.0
test_images = test_images / 255.0

2. Load the MNIST dataset and do some simple normalization



# Build the model
def setup_model():
    model = keras.Sequential([
        keras.layers.InputLayer(input_shape=(28, 28)),
        keras.layers.Reshape(target_shape=(28, 28, 1)),
        keras.layers.Conv2D(filters=12,kernel_size=(3, 3), activation='relu'),
        keras.layers.MaxPooling2D(pool_size=(2,2)),
        keras.layers.Flatten(),
        keras.layers.Dense(10)
    ])
    return model

# Train the classification model and save the pretrained weights
def setup_pretrained_weights():
    model = setup_model()
    
    model.compile(optimizer = 'adam',
                  loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits = True),
                  metrics = ['accuracy']
    )

    model.fit(train_images,
              train_labels,
              epochs = 4,
              validation_split = 0.1,
    )

    _, pretrained_weights = tempfile.mkstemp('.tf')
    
    model.save_weights(pretrained_weights)
    return pretrained_weights

3. Build a baseline model and save its weights to make later performance comparisons easy

setup_model()

pretrained_weights = setup_pretrained_weights()

# Output:
Train on 54000 samples, validate on 6000 samples
Epoch 1/4
54000/54000 [==============================] - 7s 133us/sample - loss: 0.2895 - accuracy: 0.9195 - val_loss: 0.1172 - val_accuracy: 0.9685
Epoch 2/4
54000/54000 [==============================] - 5s 99us/sample - loss: 0.1119 - accuracy: 0.9678 - val_loss: 0.0866 - val_accuracy: 0.9758
Epoch 3/4
54000/54000 [==============================] - 5s 100us/sample - loss: 0.0819 - accuracy: 0.9753 - val_loss: 0.0757 - val_accuracy: 0.9787
Epoch 4/4
54000/54000 [==============================] - 6s 103us/sample - loss: 0.0678 - accuracy: 0.9797 - val_loss: 0.0714 - val_accuracy: 0.9815
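Since these pretrained weights are the baseline for every later comparison, it is convenient to record the baseline test accuracy right away. A minimal sketch reusing the functions defined above (the baseline_accuracy name is only for illustration):

baseline_model = setup_model()
baseline_model.load_weights(pretrained_weights)

baseline_model.compile(optimizer='adam',
                       loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                       metrics=['accuracy'])

# Evaluate on the held-out test set; this number is compared against the pruned model later.
_, baseline_accuracy = baseline_model.evaluate(test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_accuracy)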

4. Apply prune_low_magnitude to the entire model to build a pruned model, and see how the model changes before and after

# Compare the baseline model with the pruned model
base_model = setup_model()
base_model.summary()

base_model.load_weights(pretrained_weights)

model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
model_for_pruning.summary()

# Output:
Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
reshape_4 (Reshape)          (None, 28, 28, 1)         0         
_________________________________________________________________
conv2d_4 (Conv2D)        
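Note that prune_low_magnitude only wraps each layer with a pruning mask and threshold; the weights are actually pruned during training, which requires the tfmot.sparsity.keras.UpdatePruningStep callback. A minimal fine-tuning sketch using the objects defined above (the epoch count is arbitrary):

callbacks = [
    # Required when training a pruned model: keeps the pruning step counter in sync.
    tfmot.sparsity.keras.UpdatePruningStep(),
]

model_for_pruning.compile(optimizer='adam',
                          loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                          metrics=['accuracy'])

model_for_pruning.fit(train_images, train_labels,
                      epochs=2,
                      validation_split=0.1,
                      callbacks=callbacks)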
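For step 5 in the outline (pruning only the Dense layer), the pattern from the comprehensive guide is to clone the model and wrap just the layers of interest via clone_function. A minimal sketch along those lines:

base_model = setup_model()
base_model.load_weights(pretrained_weights)

def apply_pruning_to_dense(layer):
    # Wrap only Dense layers; every other layer is returned unchanged.
    if isinstance(layer, tf.keras.layers.Dense):
        return tfmot.sparsity.keras.prune_low_magnitude(layer)
    return layer

model_for_pruning = tf.keras.models.clone_model(
    base_model,
    clone_function=apply_pruning_to_dense,
)
model_for_pruning.summary()

In the resulting summary only the Dense layer shows up wrapped as a prune_low_magnitude layer; the Conv2D layer keeps its original parameter count.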
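For step 6 (customizing the pruning operation), the most common customization is passing your own pruning_schedule to prune_low_magnitude, for example a PolynomialDecay that ramps sparsity up during training; making a custom Keras layer prunable instead means implementing tfmot.sparsity.keras.PrunableLayer and returning the weights to prune from get_prunable_weights. A minimal sketch of the schedule variant (the sparsity targets and step counts below are arbitrary placeholders):

# Hypothetical schedule: ramp sparsity from 20% to 80% over the first 1000 steps.
pruning_params = {
    'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.2,
        final_sparsity=0.8,
        begin_step=0,
        end_step=1000),
}

base_model = setup_model()
base_model.load_weights(pretrained_weights)

model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model, **pruning_params)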
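For step 7, tensorflow_model_optimization provides a PruningSummaries callback that logs per-layer sparsity for TensorBoard; it is used together with UpdatePruningStep. A minimal sketch continuing from the pruned model above:

logdir = tempfile.mkdtemp()

callbacks = [
    tfmot.sparsity.keras.UpdatePruningStep(),
    # Writes sparsity and pruning-schedule summaries that TensorBoard can display.
    tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
]

model_for_pruning.compile(optimizer='adam',
                          loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                          metrics=['accuracy'])

model_for_pruning.fit(train_images, train_labels,
                      epochs=2,
                      validation_split=0.1,
                      callbacks=callbacks)

# In a notebook, inspect the logs with:
# %tensorboard --logdir={logdir}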
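Finally, for step 8 the recipe from the guide is: strip the pruning wrappers (as noted under "Common mistake" above), save the model, compress it with a standard algorithm, and compare size and accuracy against the baseline. A minimal sketch; the get_gzipped_model_size helper follows the official tutorial:

model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)

_, pruned_keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model_for_export, pruned_keras_file, include_optimizer=False)

def get_gzipped_model_size(file):
    # Returns the size of the zip-compressed model file, in bytes.
    _, zipped_file = tempfile.mkstemp('.zip')
    with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
        f.write(file)
    return os.path.getsize(zipped_file)

print('Size of compressed pruned Keras model: %d bytes'
      % get_gzipped_model_size(pruned_keras_file))

# Accuracy comparison against the baseline recorded earlier.
model_for_export.compile(optimizer='adam',
                         loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                         metrics=['accuracy'])
_, pruned_accuracy = model_for_export.evaluate(test_images, test_labels, verbose=0)
print('Pruned test accuracy:', pruned_accuracy)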