The CKKS17 Homomorphic Encryption Scheme for Approximate Arithmetic: Relinearization and Rescaling

Relinearization

        Relinearization addresses the growth in ciphertext size that occurs after ciphertext multiplication.

        Let us first review encryption and decryption. Key generation:

                s\leftarrow\mathcal{HWT}(h),\ a\leftarrow\mathcal{R}_{q_L},\ b\leftarrow -a.s+e

                pk\leftarrow(b,a)=(-a.s+e,a)

                sk\leftarrow(1,s)
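        To make the steps concrete, here is a minimal Python sketch of key generation over a toy ring (assumptions: N = 8, a single modulus q = 2^{30}, a uniform ternary secret standing in for \mathcal{HWT}(h), and a small uniform error standing in for the discrete Gaussian; real CKKS parameters are far larger):

```python
import numpy as np

N, q = 8, 2**30                        # toy parameters; real CKKS uses N >= 2^13
rng = np.random.default_rng(0)

def polymul(a, b, modulus):
    """Multiply two polynomials in Z_modulus[X]/(X^N + 1) (negacyclic convolution)."""
    res = [0] * N
    for i in range(N):
        for j in range(N):
            sign = -1 if i + j >= N else 1            # X^N = -1 in this ring
            res[(i + j) % N] = (res[(i + j) % N] + sign * int(a[i]) * int(b[j])) % modulus
    return np.array(res, dtype=object)                # object dtype avoids int64 overflow

def keygen():
    s = rng.integers(-1, 2, N)                        # ternary secret, stand-in for HWT(h)
    a = rng.integers(0, q, N)                         # uniform element of R_q
    e = rng.integers(-3, 4, N)                        # small error term
    b = (-polymul(a, s, q) + e) % q                   # b = -a.s + e
    return (1, s), (b, a)                             # sk = (1, s), pk = (b, a)

sk, pk = keygen()
```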

        Encryption and decryption:

                \mu \in \mathbb{Z}_q[X]/(X^N+1)

                \texttt{Encrypt}(\mu,pk) = c = (c_0,c_1) = (\mu,0) + pk = (b + \mu,a) \in \mathcal{R}_q^2

                \texttt{Decrypt}(c,sk) = c_0 + c_1.s = \mu + e
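        Continuing the toy sketch, and following the simplified encryption formula above (all randomness is folded into pk here; the actual scheme adds fresh encryption randomness):

```python
def encrypt(mu, pk):
    b, a = pk
    return ((b + mu) % q, a)                          # c = (mu, 0) + pk = (b + mu, a)

def decrypt(c, sk):
    c0, c1 = c
    return (c0 + polymul(c1, sk[1], q)) % q           # c0 + c1.s = mu + e (mod q)

def center(x):
    """Map coefficients from [0, q) to [-q/2, q/2) for readable output."""
    return ((x + q // 2) % q) - q // 2

mu = rng.integers(0, 100, N)                          # toy plaintext polynomial
c = encrypt(mu, pk)
print(center(decrypt(c, sk)) - mu)                    # only the small error e remains
```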

        Homomorphic addition, and its decryption correctness:

                \texttt{C}_\texttt{ADD}(c,c') = (c_0 + c_0',c_1+c_1') = c + c' = c_{add}

                \texttt{Decrypt}(c_{add},sk) = c_0 + c_0' + (c_1 + c_1').s = c_0 + c_1.s + c_0' + c_1'.s = \texttt{Decrypt}(c,sk) + \texttt{Decrypt}(c',sk) \approx \mu + \mu'
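        In the sketch, homomorphic addition is just component-wise addition, and the errors of the two ciphertexts add up:

```python
def add(c, cp):
    return ((c[0] + cp[0]) % q, (c[1] + cp[1]) % q)

mu2 = rng.integers(0, 100, N)
c2 = encrypt(mu2, pk)
print(center(decrypt(add(c, c2), sk)) - (mu + mu2))   # only the combined small error remains
```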

        Homomorphic multiplication:

        If homomorphic multiplication is correct, it must satisfy:

                \texttt{DecryptMult}(\texttt{CMult}(c,c'),sk) = \texttt{Decrypt}(c,sk).\texttt{Decrypt}(c',sk)

        Since \texttt{Decrypt}(c,sk) = c_0 + c_1.s, the result that a correct homomorphic multiplication must produce after decryption is:

                \texttt{Decrypt}(c,sk).\texttt{Decrypt}(c',sk) = (c_0 + c_1.s).(c_0' + c_1'.s) = c_0.c_0' + (c_0.c_1' + c_0'.c_1).s + c_1.c_1'.s^2 = d_0 + d_1.s + d_2.s^2

        where d_0 = c_0.c_0',\ d_1 = c_0.c_1' + c_0'.c_1,\ d_2 = c_1.c_1'.
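        In the toy sketch, homomorphic multiplication computes exactly these three components; the result still decrypts correctly, but only against the extended key (1, s, s^2):

```python
def mult(c, cp):
    d0 = polymul(c[0], cp[0], q)
    d1 = (polymul(c[0], cp[1], q) + polymul(c[1], cp[0], q)) % q
    d2 = polymul(c[1], cp[1], q)
    return (d0, d1, d2)                               # three components instead of two

def decrypt3(d, sk):
    s = sk[1]
    s2 = polymul(s, s, q)
    return (d[0] + polymul(d[1], s, q) + polymul(d[2], s2, q)) % q

d = mult(c, c2)
print(center(decrypt3(d, sk)))                        # ~ mu.mu2 in the ring, up to error cross-terms
```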

        Observe that the ciphertext has expanded after a single multiplication: the original ciphertexts have two components each, while the product has three. Left untreated, the growth compounds: after two multiplications there are five components, after three there are seven, and so on.

        We therefore apply relinearization after every multiplication to keep the ciphertext size from growing.

The relinearization procedure

        We know what the relinearized result should look like: a two-component ciphertext (d_0',d_1') = \texttt{Relin}(c_{mult}) whose decryption gives:

                \texttt{Decrypt}((d_0',d_1'),sk) = d_0' + d_1'.s = d_0 + d_1.s + d_2.s^2 = \texttt{Decrypt}(c,sk).\texttt{Decrypt}(c',sk)

        Comparing the two sides, we need d_0' + d_1'.s = d_0 + d_1.s + d_2.s^2. If there is a pair of polynomials P satisfying \texttt{Decrypt}(P,sk) = d_2.s^2, then (d_0',d_1') = (d_0,d_1) + P, i.e.:

                \texttt{Decrypt}((d_0',d_1'),sk) = \texttt{Decrypt}((d_0,d_1),sk) + \texttt{Decrypt}(P,sk) = d_0 + d_1.s + d_2.s^2

        The question is now how to construct P. Decryption multiplies in at most one factor of s, yet the desired \texttt{Decrypt}(P,sk) = d_2.s^2 is quadratic in s, so P itself must somehow contain s. But sk is something we must keep strictly to ourselves and never hand out, so how can s be embedded in P? This is exactly what the evaluation key provides.

Evaluation key

        Let evk = (-a_0.s + e_0 + s^2, a_0); then \texttt{Decrypt}(evk,sk) = e_0 + s^2 \approx s^2. Note that evk mixes in two randomizing terms, a_0 and e_0, which keep s from leaking.
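        A quick check in the toy sketch (with a_0, e_0 freshly sampled) confirms that this evk decrypts to approximately s^2:

```python
def naive_evk_gen(sk):
    s = sk[1]
    a0 = rng.integers(0, q, N)
    e0 = rng.integers(-3, 4, N)
    body = (-polymul(a0, s, q) + e0 + polymul(s, s, q)) % q
    return (body, a0)                                 # evk = (-a0.s + e0 + s^2, a0)

evk0 = naive_evk_gen(sk)
s2 = polymul(sk[1], sk[1], q)
# "decrypting" evk with sk leaves e0 + s^2, i.e. s^2 up to the small error e0
print(center((evk0[0] + polymul(evk0[1], sk[1], q)) % q) - center(s2))
```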

        So we could set P = d_2.evk = (d_2.(-a_0.s + e_0 + s^2), d_2.a_0), which gives \texttt{Decrypt}(P,sk) = d_2.s^2 + d_2.e_0. But a new problem appears: d_2.s^2 + d_2.e_0 \neq d_2.s^2, and since d_2 is a full-size ring element, the extra error d_2.e_0 is large.

        The fix is to change the form of evk: set evk = (-a_0.s + e_0 + p.s^2, a_0)\ (\text{mod}\ p.q), where a_0\leftarrow\mathcal{R}_{p.q} and p is a large auxiliary modulus.

        Accordingly, P = \lfloor p^{-1}.d_2.evk \rceil\ (\text{mod}\ q), and \texttt{Relin}((d_0,d_1,d_2),evk) = (d_0,d_1) + \lfloor p^{-1}.d_2.evk \rceil. The division by p shrinks the unwanted error to p^{-1}.d_2.e_0, which rounds away to almost nothing when p is large enough.
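        Here is the modified evaluation key and the relinearization step in the toy sketch (taking p = q for simplicity; in practice p is chosen at least as large as q so that p^{-1}.d_2.e_0 stays negligible):

```python
p = q                                                 # auxiliary modulus; here p = q for simplicity
pq = p * q

def evk_gen(sk):
    s = sk[1]
    a0 = rng.integers(0, pq, N)                       # uniform element of R_{p.q}
    e0 = rng.integers(-3, 4, N)
    body = (-polymul(a0, s, pq) + e0 + p * polymul(s, s, pq)) % pq
    return (body, a0)                                 # evk = (-a0.s + e0 + p.s^2, a0) mod p.q

def relin(d, evk):
    d0, d1, d2 = d
    t0 = polymul(d2, evk[0], pq)                      # d2.evk, computed mod p.q
    t1 = polymul(d2, evk[1], pq)
    r0 = (d0 + (t0 + p // 2) // p) % q                # round(t/p), then reduce mod q
    r1 = (d1 + (t1 + p // 2) // p) % q
    return (r0, r1)

evk = evk_gen(sk)
print(center(decrypt(relin(d, evk), sk)))             # ~ decrypt3(d, sk): two components again
```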

Rescaling

        To preserve computational precision, a scaling factor \Delta is multiplied into the message during encoding. After multiplying two ciphertexts, the scaling factor inflates as well, becoming \Delta^2, so it must be brought back down.

        Choose q_0 \geq \Delta and set q_l = \Delta^l.q_0 for 0 \leq l \leq L, where q_0 governs the integer-part precision and \Delta the fractional-part precision. Rescaling then maps a level-l ciphertext down one level:

                RS_{l \rightarrow l-1}(c) = \lfloor \frac{q_{l-1}}{q_l} c \rceil\ (\text{mod}\ q_{l-1}) = \lfloor \Delta^{-1} c \rceil\ (\text{mod}\ q_{l-1})

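        A sketch of rescaling under these assumptions (\Delta divides q_l by construction of the modulus chain, so the division below is exact up to rounding of the error):

```python
Delta = 2**10                                         # toy scaling factor baked in at encoding
# after multiplying two ciphertexts the message carries Delta^2;
# rescaling divides both components by Delta and moves down one modulus level
def rescale(c, q_l):
    q_next = q_l // Delta                             # q_{l-1} = q_l / Delta
    r0 = ((c[0] + Delta // 2) // Delta) % q_next      # round(c0 / Delta) mod q_{l-1}
    r1 = ((c[1] + Delta // 2) // Delta) % q_next
    return (r0, r1)
```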