[Paper Reading] Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model (2021)

Abstract

Knowledge distillation (KD) is a successful approach for deep neural network acceleration, with which a compact network (student) is trained by mimicking the softmax output of a pre-trained high-capacity network (teacher). Traditionally, KD relies on access to the training samples and to the parameters of the white-box teacher in order to acquire the transferred knowledge. However, these prerequisites are not always realistic, due to storage costs or privacy issues in real-world applications. Here we propose the concept of decision-based black-box (DB3) knowledge distillation, with which the student is trained by distilling knowledge from a teacher whose parameters are inaccessible and which returns only class decisions (top-1 labels) rather than softmax outputs.
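For reference, the conventional white-box KD objective that the abstract contrasts against can be sketched as below. This is a minimal illustration of Hinton-style softmax distillation, not the paper's DB3 method; the temperature `T`, the mixing weight `alpha`, and the function name `kd_loss` are illustrative choices, not from the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style KD: match the teacher's temperature-softened softmax,
    plus an ordinary cross-entropy term on the ground-truth labels.
    T and alpha are illustrative hyperparameters."""
    # Soft-target term: KL divergence between the softened distributions.
    # F.kl_div expects log-probabilities as input and probabilities as target.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude is comparable across temperatures
    # Hard-target term: standard cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

The DB3 setting removes exactly the quantity this loss depends on: a decision-based teacher exposes only its top-1 decision (conceptually, `teacher(x).argmax(dim=1)`), so the teacher's softmax is unobservable and the KL term above cannot be computed. The paper's contribution is to construct transferable soft supervision from such hard decisions alone.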
