Model Compression, Pruning, and Quantization

Compression

Models are usually trained in float32. For deployment, that much precision is rarely needed, so the weights can be converted to float16 before saving, which reduces the size of the saved weights by roughly 45%, close to halving them.

  1. Train and save the model weights
import timm
import torch

model = timm.create_model("mobilevit_xxs", pretrained=False, num_classes=8)
model.load_state_dict(torch.load("model_mobilevit_xxs.pth"))
  2. Convert the data type and save
params = torch.load("model_mobilevit_xxs.pth")
for key in params.keys():
    # cast floating point tensors to float16; keep integer buffers (e.g. BatchNorm counters) as-is
    if params[key].dtype == torch.float32:
        params[key] = params[key].half()
torch.save(params, "model_mobilevit_xxs_half.pth")
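A minimal sketch of loading the float16 checkpoint back for inference; casting the tensors back to float32 to match the model's default parameter dtype is an assumption about the deployment setup, not part of the original steps.

import timm
import torch

model = timm.create_model("mobilevit_xxs", pretrained=False, num_classes=8)

# load the float16 checkpoint and cast floating point tensors back to float32
params_half = torch.load("model_mobilevit_xxs_half.pth")
params_fp32 = {k: v.float() if v.is_floating_point() else v for k, v in params_half.items()}
model.load_state_dict(params_fp32)
model.eval()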

Pruning

After the model has been trained, its weights can be pruned. Common approaches are:

  1. Randomly prune a given proportion of weights
  2. Prune weights according to their magnitude
import torch
import timm
import torch.nn.utils.prune as prune

model = timm.create_model('mobilevit_xxs', pretrained=False, num_classes=8)
model.load_state_dict(torch.load('model_mobilevit_xxs.pth'))

# select the layer to prune
module = model.head.fc

# random unstructured pruning: zero 30% of the weights at random
prune.random_unstructured(module, name="weight", amount=0.3)

# L1 unstructured pruning: zero the 30% of weights with the smallest absolute value
prune.l1_unstructured(module, name="weight", amount=0.3)

# Ln structured pruning: remove 50% of the rows along dim 0, ranked by their L2 norm
prune.ln_structured(module, name="weight", amount=0.5, n=2, dim=0)

When using weight pruning, keep the following in mind:

  1. Pruning does not reduce the size of the saved weights; it only makes them sparser, since the pruned entries are zeroed through a mask (see the sketch after this list).
  2. Pruning does not by itself speed up inference; it only reduces the nominal amount of computation, because the zeroed weights still take part in dense operations.
  3. The pruning ratio affects model accuracy, so it needs to be tested and validated.
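As a continuation of the snippet above, the sketch below checks the sparsity introduced by the masks and then makes the pruning permanent with prune.remove; the sparsity printout and the saved file name are illustrative additions, not part of the original steps.

# the pruned entries are zero, but the tensor shape and file size are unchanged
sparsity = float(torch.sum(module.weight == 0)) / module.weight.nelement()
print(f"sparsity of head.fc.weight: {sparsity:.2%}")

# pruning is stored as a re-parametrization: 'weight_orig' plus a 'weight_mask' buffer
print([name for name, _ in module.named_buffers()])  # contains 'weight_mask'

# fold the mask into 'weight' and drop the re-parametrization, making the pruning permanent
prune.remove(module, "weight")
torch.save(model.state_dict(), 'model_mobilevit_xxs_pruned.pth')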

Quantization

Quantization turns 32-bit multiply-accumulate operations into 8-bit ones, so the model weights shrink and the memory requirements drop.
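As a minimal sketch of what this mapping looks like (the scale and zero point below are picked by hand purely for illustration; real workflows estimate them from data), a float32 tensor can be represented in int8 via q = round(x / scale) + zero_point:

import torch

x_fp32 = torch.randn(4, 4)

# hand-picked quantization parameters, for illustration only
x_int8 = torch.quantize_per_tensor(x_fp32, scale=0.05, zero_point=0, dtype=torch.qint8)

print(x_int8.int_repr())    # the stored 8-bit integer values
print(x_int8.dequantize())  # approximate reconstruction of the original float values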

1. Eager Mode Quantization

The example below uses post training dynamic quantization: the weights of the listed Linear layers are converted to int8 ahead of time, while activations are quantized on the fly at inference time.

import torch

# define a floating point model
class M(torch.nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.fc1 = torch.nn.Linear(100, 40)
        self.fc2 = torch.nn.Linear(40, 400)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        return x

# create a model instance
model_fp32 = M()
torch.save(model_fp32.state_dict(), 'tmp_float32.pth')

# create a quantized model instance
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32,  # the original model
    {torch.nn.Linear},  # a set of layers to dynamically quantize
    dtype=torch.qint8)  # the target dtype for quantized weights

# run the model (the last input dimension must match fc1's in_features)
input_fp32 = torch.randn(4, 4, 4, 100)
res = model_int8(input_fp32)
torch.save(model_int8.state_dict(), 'tmp_int8.pth')
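The effect on disk can be checked directly from the two checkpoints saved above; this size comparison is an illustrative addition and simply reuses the file names from the snippet:

import os

print(f"fp32: {os.path.getsize('tmp_float32.pth') / 1024:.1f} KB")
print(f"int8: {os.path.getsize('tmp_int8.pth') / 1024:.1f} KB")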

2. Post Training Static Quantization

Here both weights and activations are quantized ahead of time, so the model has to be calibrated with representative data before conversion; QuantStub and DeQuantStub mark where tensors enter and leave the quantized region.

import torch

# define a floating point model where some layers could be statically quantized
class M(torch.nn.Module):
    def __init__(self):
        super(M, self).__init__()
        # QuantStub converts tensors from floating point to quantized
        self.quant = torch.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(1, 100, 1)
        self.relu = torch.nn.ReLU()
        self.fc = torch.nn.Linear(100, 10)  # defined here but not used in forward below
        # DeQuantStub converts tensors from quantized to floating point
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        # manually specify where tensors will be converted from floating
        # point to quantized in the quantized model
        x = self.quant(x)
        x = self.conv(x)
        x = self.relu(x)
        # manually specify where tensors will be converted from quantized
        # to floating point in the quantized model
        x = self.dequant(x)
        return x

# create a model instance
model_fp32 = M()
torch.save(model_fp32.state_dict(), 'tmp_float32.pth')

# the model must be in eval mode for post training static quantization
model_fp32.eval()

# attach a quantization configuration; 'fbgemm' targets x86 server CPUs
model_fp32.qconfig = torch.quantization.get_default_qconfig('fbgemm')

# fuse conv + relu into a single module so they are quantized together
model_fp32_fused = torch.quantization.fuse_modules(model_fp32, [['conv', 'relu']])

# insert observers that record activation statistics
model_fp32_prepared = torch.quantization.prepare(model_fp32_fused)

# calibration: run representative data through the prepared model
input_fp32 = torch.randn(4, 1, 4, 4)
model_fp32_prepared(input_fp32)

# convert the calibrated model to an int8 model
model_int8 = torch.quantization.convert(model_fp32_prepared)
res = model_int8(input_fp32)
torch.save(model_int8.state_dict(), 'tmp_int8.pth')
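The single random batch above only stands in for real calibration data. Below is a sketch of a more realistic calibration pass, assuming a hypothetical calibration_loader that yields representative input batches (this loader is not part of the original example):

# calibration_loader is a hypothetical DataLoader yielding (inputs, labels)
with torch.no_grad():
    for inputs, _ in calibration_loader:
        model_fp32_prepared(inputs)

# convert only after the observers have seen enough representative data
model_int8 = torch.quantization.convert(model_fp32_prepared)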