fine-tune & data-poisoning watermarking & PST method (9.26)

  • fine-tune

    Fine-tuning modifies the structure of a pre-trained network (e.g., changing the number of output classes) and selectively loads the pre-trained weights (usually all layers before the final fully-connected layer, also known as the bottleneck layers),

    then retrains the model on your own dataset; these are the basic steps of fine-tuning. With a relatively small amount of data, fine-tuning can quickly train a model that still achieves decent results.

    Fine-tuning a convolutional network: replace the input layer (data) of the network and continue training on the new data. You can fine-tune either all layers or only some of them. Typically, the earlier layers extract generic features (e.g., edge detection, color detection) that are useful for many tasks, while the later layers extract features tied to specific classes, so fine-tuning often only needs to update the later layers (a minimal sketch follows this item).

    They carefully designed the learning rate schedule and synthesized fine-tuning samples to make the watermarked model forget the watermark samples. Unfortunately, these approaches also suffer from several fatal limitations: they are not watermark-agnostic, as they need to access the original training data.
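    Below is a minimal fine-tuning sketch in PyTorch (not from the notes above): it assumes a ResNet-18 backbone pre-trained on ImageNet, a hypothetical 10-class target dataset, and torchvision >= 0.13 for the weights API; the layer choices and hyperparameters are purely illustrative.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a backbone pre-trained on ImageNet (torchvision >= 0.13 weights API).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the earlier layers: they extract generic features (edges, colors).
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final fully-connected layer to match the new number of classes.
    num_classes = 10  # hypothetical target dataset
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head is trainable

    # Only the new head is optimized; unfreeze deeper blocks (with a small
    # learning rate) if you want to fine-tune more of the network.
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, num_classes, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    ```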

  • data-poisoning watermarking

    Data-poisoning watermarking modifies the model to give a pre-defined output on some carefully crafted watermark samples. Owners with only black-box access to a suspicious model can then extract such watermarks by querying the model.
    The watermarks should satisfy two requirements. The first is functionality-preserving: the embedded watermarks should not affect the performance of the target model on normal samples. The second is robustness: the watermarks cannot be removed by common model transformations, e.g., fine-tuning, model compression, etc. Even if the adversary knows the target model is watermarked, he has no means to remove the watermarks without knowing the details of the watermark samples.
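    A toy sketch of how such a trigger-set watermark could be embedded during training, in PyTorch; the corner patch, target label 0, and 10% mixing ratio are my own illustrative assumptions, not the scheme from any particular paper.

    ```python
    import torch

    def make_watermark_samples(images, target_label):
        """Craft watermark samples: stamp a small white square in one corner and
        relabel every image to a fixed, pre-defined target class."""
        wm = images.clone()
        wm[:, :, -6:, -6:] = 1.0  # hypothetical 6x6 corner patch
        labels = torch.full((wm.size(0),), target_label, dtype=torch.long)
        return wm, labels

    def training_step(model, optimizer, criterion, images, labels, wm_ratio=0.1):
        """Mix a small fraction of watermark samples into each clean batch so the
        model learns the pre-defined sample-label correlation while keeping its
        accuracy on normal samples (functionality-preserving)."""
        n_wm = max(1, int(wm_ratio * images.size(0)))
        wm_x, wm_y = make_watermark_samples(images[:n_wm], target_label=0)
        x = torch.cat([images, wm_x])
        y = torch.cat([labels, wm_y])
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Usage with a toy classifier on 3x32x32 inputs.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = torch.nn.CrossEntropyLoss()
    x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
    training_step(model, optimizer, criterion, x, y)
    ```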

  • data augmentation

    Data augmentation is a very important technique for improving the robustness of object-detection algorithms.

    A convolutional neural network (CNN) can robustly classify objects placed in different orientations, i.e., it has the property of invariance. More specifically, a CNN can be invariant to translation, viewpoint, scale, illumination, etc. (or combinations of these).

    Feeding the transformed images into the network for training improves its robustness and reduces the impact of such extraneous factors on recognition.

    The augmentation operations above are applied to every image during training. We usually train on a dataset for multiple epochs; since the augmentation is random each time, the data seen in each epoch differs from the previous one, which is effectively equivalent to enlarging the dataset.
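    A minimal random-augmentation pipeline using torchvision transforms; the specific transforms and their parameters are illustrative assumptions.

    ```python
    from torchvision import transforms

    # Random augmentations: each epoch every image is transformed differently,
    # which effectively enlarges the dataset and encourages invariance to
    # translation, viewpoint, scale, and illumination changes.
    train_transform = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # scale / translation
        transforms.RandomHorizontalFlip(),                     # viewpoint
        transforms.ColorJitter(brightness=0.2, contrast=0.2),  # illumination
        transforms.RandomRotation(15),                         # orientation
        transforms.ToTensor(),
    ])

    # e.g. torchvision.datasets.ImageFolder("train_dir", transform=train_transform)
    ```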

  • Existing watermarking schemes for DNNs can be classified into two categories. The first category is parameter-embedding watermarking, which embeds watermarks into the parameters without decreasing the model's performance. Verification of such watermarks requires white-box access to the target models, which may not be achievable in some scenarios. We are more interested in the second category, data-poisoning watermarking. These solutions take a set of carefully crafted sample-label pairs as watermarks and embed their correlation into DL models during the training process.

  • 1. Min-max normalization (also called rescaling)

    Min-max normalization applies a linear transformation to the original data. Let minA and maxA be the minimum and maximum values of attribute A. Min-max normalization maps an original value x of A to a value x' in the interval [0, 1]:

    x' = (x - minA) / (maxA - minA)

    2. Z-score standardization

    This method standardizes the data using the mean and standard deviation of the original data; an original value x of A is mapped to x' via z-score standardization.

    Z-score standardization is suitable when the minimum and maximum of attribute A are unknown, or when there are outliers beyond the normal value range.

    x' = (x - mean) / standard deviation
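    A small NumPy sketch of both transforms; the sample values are made up.

    ```python
    import numpy as np

    x = np.array([2.0, 5.0, 9.0, 14.0])  # made-up values of attribute A

    # Min-max normalization: x' = (x - min) / (max - min), mapped into [0, 1].
    x_minmax = (x - x.min()) / (x.max() - x.min())

    # Z-score standardization: x' = (x - mean) / std; works even when the true
    # min/max of A are unknown or outliers are present.
    x_zscore = (x - x.mean()) / x.std()

    print(x_minmax)  # approx [0, 0.25, 0.583, 1]
    print(x_zscore)  # approx [-1.22, -0.56, 0.33, 1.44]
    ```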

  • PST


Since the adversary has no prior knowledge about the watermarking scheme and watermark samples, we design a random yet easily identifiable pattern to alter the memory of fb about the watermark samples.

PST can significantly reduce the accuracy of the watermarked model on all types of watermarks without heavily affecting PD. ST is slightly worse than PST, but is still much better than the other transformations.


Procedure: first fine-tune the model, then apply PST to the input, giving f(g(x_i)) = ỹ. Compare y_i with ỹ; if the result is below the preset threshold λ, return 1.
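Below is a hedged sketch of that check, assuming g is the PST input transform, f is the fine-tuned model, and that the quantity compared against λ is the aggregate match rate between f(g(x_i)) and the watermark labels y_i; that last interpretation is my assumption, not stated in the notes.

```python
import torch

def removal_check(model, g, wm_samples, wm_labels, lam=0.2):
    """Run the fine-tuned model f on PST-transformed inputs g(x_i), compare the
    predictions y~ with the watermark labels y_i, and return 1 if the match
    rate falls below the preset threshold lam, i.e. the model no longer
    'remembers' the watermark samples. (Aggregate match rate is an assumption.)"""
    model.eval()
    with torch.no_grad():
        preds = model(g(wm_samples)).argmax(dim=1)  # y~ = f(g(x_i))
    match_rate = (preds == wm_labels).float().mean().item()
    return 1 if match_rate < lam else 0
```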
