An Introduction to TF-Encrypted, a Secure Multi-Party Computation (MPC) Framework, with Core Code Analysis

Overview and code examples

TF-Encrypted is a TensorFlow-based multi-party computation framework open-sourced in March 2018. Its main contributing organizations are Dropout Labs, OpenMined, and Alibaba.

Code repository: https://github.com/tf-encrypted/tf-encrypted

By default, TF-Encrypted uses the Pond protocol (a SPDZ-like MPC protocol); it also supports the SecureNN and ABY3 protocols. This article focuses on how the Pond protocol executes.

There can be any number of input parties, but the Pond protocol requires at least three parties in the computation itself: S0 and S1 hold the shares and perform the computation, while S2 (the crypto producer) generates the correlated randomness (such as Beaver multiplication triples) that they consume. Input parties encrypt (secret-share) their data, the compute servers operate on the shares, and the result receiver decrypts (reconstructs) the final output. Pond is therefore effectively a 3PC protocol.

Use case

See Alibaba's introduction to TF-Encrypted, "Collaborative fraud detection", and its example code:
https://alibaba-gemini-lab.github.io/docs/blog/tfe/
In that example, Alice (a bank) contributes credit card bills and Bob (the government) contributes tax information to jointly train a model.

Multi-party computation configuration example

Configuration file config.json:

{
    "alice": "192.168.1.57:4440",
    "bob": "192.168.1.24:4440",
    "crypto-producer": "192.168.1.17:4440"
}

Protocol offline phase

  1. Encoding: each input tensor is encoded as a scaled-integer (fixed-point) tensor.
  2. Sharing: the encoded data is wrapped in a PondPrivateTensor, whose key attributes share0 and share1 each hold one additive share of the tensor (one share is sampled at random and the other is derived from it, so that share0 + share1 = tensor).
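These two offline steps can be sketched in plain NumPy. The ring size, scaling factor, and helper names below are illustrative assumptions for exposition, not TF-Encrypted's actual API; Pond performs the same fixed-point encoding and additive sharing over a large fixed ring.

```python
import numpy as np

MODULUS = 2**61   # illustrative ring size; Pond works in a large fixed ring
SCALE = 2**16     # illustrative fixed-point scaling factor

def encode(x):
    """Step 1: fixed-point encode a float tensor as a scaled integer in the ring."""
    return np.round(np.asarray(x, dtype=np.float64) * SCALE).astype(np.int64) % MODULUS

def share(t, rng):
    """Step 2: additive sharing -- share0 is uniform, share0 + share1 = t (mod MODULUS)."""
    share0 = rng.integers(0, MODULUS, size=t.shape, dtype=np.int64)
    share1 = (t - share0) % MODULUS
    return share0, share1

def reconstruct(share0, share1):
    """Recombine the shares and undo the fixed-point encoding."""
    v = (share0 + share1) % MODULUS
    signed = np.where(v >= MODULUS // 2, v - MODULUS, v)  # ring element -> signed int
    return signed / SCALE

rng = np.random.default_rng(0)
x = np.array([[1.5, -2.25]])
share0, share1 = share(encode(x), rng)
# each share alone is uniformly random and reveals nothing about x;
# reconstruct(share0, share1) recovers [[1.5, -2.25]]
```

Each share on its own is uniformly distributed, which is why a single server learns nothing about the plaintext tensor.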

Protocol online phase

Input data processing

  1. TF-Encrypted uses TensorFlow's device placement so that each party's portion of the graph runs on its own machine:
with tf.device(player.device_name):

The device_name values generated from the configuration file:

alice.device_name = '/job:tfe/replica:0/task:0/cpu:0'
bob.device_name = '/job:tfe/replica:0/task:1/cpu:0'
crypto-producer.device_name = '/job:tfe/replica:0/task:2/cpu:0'
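The player-to-device mapping can be illustrated with a small hypothetical helper; the real logic lives inside TF-Encrypted's RemoteConfig, and the function below only mirrors the task-numbering pattern shown above.

```python
import json

# hypothetical helper mirroring how players map to TF device strings;
# the real mapping is handled by tf-encrypted's RemoteConfig
def device_names(config_text):
    players = json.loads(config_text)  # dicts preserve key order in Python 3.7+
    return {name: "/job:tfe/replica:0/task:%d/cpu:0" % task
            for task, name in enumerate(players)}

config_text = """{
    "alice": "192.168.1.57:4440",
    "bob": "192.168.1.24:4440",
    "crypto-producer": "192.168.1.17:4440"
}"""
names = device_names(config_text)
# names["crypto-producer"] == "/job:tfe/replica:0/task:2/cpu:0"
```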
  2. The shares from all input parties are then concatenated per server: share0 = concat(share0, share0'), share1 = concat(share1, share1')
x_train = tfe.concat([x_train_0, x_train_1], axis=1)

The core code is shown below; factory here is a NativeFactory, TF-Encrypted's wrapper around native TensorFlow tensors.

def _concat_private(
    prot: Pond, xs: List[PondPrivateTensor], axis,
) -> PondPrivateTensor:
    factory = xs[0].backing_dtype
    is_scaled = xs[0].is_scaled
    xs0, xs1 = zip(*(x.unwrapped for x in xs))
    with tf.name_scope("concat"):
        with tf.device(prot.server_0.device_name):
            x0_concat = factory.concat(xs0, axis=axis)
        with tf.device(prot.server_1.device_name):
            x1_concat = factory.concat(xs1, axis=axis)
        return PondPrivateTensor(prot, x0_concat, x1_concat, is_scaled)
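Why this works: concatenation is a linear, share-local operation, so each server concatenating its own shares yields valid shares of the concatenated tensor, with no communication. A plain-NumPy sketch (the ring size is an illustrative assumption):

```python
import numpy as np

MODULUS = 2**61  # illustrative ring size

rng = np.random.default_rng(0)
x = np.array([[1, 2]], dtype=np.int64)  # Alice's (encoded) columns
y = np.array([[3, 4]], dtype=np.int64)  # Bob's (encoded) columns

# each input is already additively shared between the two servers
x0 = rng.integers(0, MODULUS, size=x.shape, dtype=np.int64)
x1 = (x - x0) % MODULUS
y0 = rng.integers(0, MODULUS, size=y.shape, dtype=np.int64)
y1 = (y - y0) % MODULUS

# server 0 and server 1 each concatenate locally, with no communication
z0 = np.concatenate([x0, y0], axis=1)
z1 = np.concatenate([x1, y1], axis=1)

# (z0 + z1) % MODULUS == [[1, 2, 3, 4]] -- valid shares of concat(x, y)
```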

Addition

Addition is share-local: each server simply adds its own shares, z0 = x0 + y0 on S0 and z1 = x1 + y1 on S1, so that z0 + z1 = x + y with no communication.
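A minimal NumPy sketch of share-wise addition (the ring size and helper are illustrative assumptions):

```python
import numpy as np

MODULUS = 2**61  # illustrative ring size
rng = np.random.default_rng(1)

def share(t):
    """Additively share t: one uniform share, the other derived from it."""
    s0 = rng.integers(0, MODULUS, size=np.shape(t), dtype=np.int64)
    return s0, (t - s0) % MODULUS

x0, x1 = share(np.int64(7))
y0, y1 = share(np.int64(5))

# server 0 and server 1 each add their local shares; no communication needed
z0 = (x0 + y0) % MODULUS
z1 = (x1 + y1) % MODULUS

# (z0 + z1) % MODULUS == 12
```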

Multiplication

Multiplication uses Beaver's multiplication-triple protocol:

  1. The crypto producer generates random sharings [a] and [b] (with a = a0 + a1, b = b0 + b1) together with [ab], where ab = a * b.
  2. Additively hide [x] and [y] with the appropriately sized [a] and [b].
  3. Open (alpha = x - a) and (beta = y - b); the product remains shared as ab = ab0 + ab1.
  4. Return [z] = alpha * beta + alpha * [b] + [a] * beta + [ab].

Derivation:

z = x * y = z0 + z1

x * y = (x0 + x1)*(y0 + y1)
      = (x - a + a)*(y - b + b) 
      = (alpha + a)(beta + b) 
      = alpha * beta + alpha * b + a * beta + a * b
      = alpha * beta + alpha * (b0 + b1) + (a0 + a1) * beta + (ab0 + ab1)
      = (ab0 + a0 * beta + alpha * b0 + alpha * beta) +
        (ab1 + a1 * beta + alpha * b1)
      = z0 + z1
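The derivation can be checked end to end in plain NumPy. The small ring, the fixed triple values, and the helper names are illustrative assumptions; this multiplies already-encoded integers and omits the fixed-point truncation that Pond applies afterwards.

```python
import numpy as np

MODULUS = 2**30  # small illustrative ring so intermediate products fit in int64
rng = np.random.default_rng(2)

def share(t):
    s0 = rng.integers(0, MODULUS, size=np.shape(t), dtype=np.int64)
    return s0, (t - s0) % MODULUS

def reconstruct(s0, s1):
    return (s0 + s1) % MODULUS

x, y = 6, 7
x0, x1 = share(x)
y0, y1 = share(y)

# offline: the crypto producer samples a triple (a, b, ab) and shares it
a, b = 11, 13
a0, a1 = share(a)
b0, b1 = share(b)
ab0, ab1 = share(a * b)

# online: both servers open alpha = x - a and beta = y - b
alpha = reconstruct((x0 - a0) % MODULUS, (x1 - a1) % MODULUS)
beta = reconstruct((y0 - b0) % MODULUS, (y1 - b1) % MODULUS)

# each server combines locally; only server 0 adds the public alpha*beta term
z0 = (ab0 + a0 * beta + alpha * b0 + alpha * beta) % MODULUS
z1 = (ab1 + a1 * beta + alpha * b1) % MODULUS

# reconstruct(z0, z1) == (x * y) % MODULUS == 42
```

Note that alpha and beta are safe to open: each is the secret masked by a fresh uniform value, so it is itself uniformly distributed.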

Core code:

def mask(self, backing_dtype, shape):
    with tf.name_scope("triple-generation"):
        with tf.device(self.producer.device_name):
            # the crypto producer samples the random mask a = a0 + a1
            a0 = backing_dtype.sample_uniform(shape)
            a1 = backing_dtype.sample_uniform(shape)
            a = a0 + a1
    # the two shares are handed to the two servers through queues
    d0, d1 = self._build_queues(a0, a1)
    return a, d0, d1

def _mask_private(prot: Pond, x: PondPrivateTensor) -> PondMaskedTensor:
    x0, x1 = x.unwrapped
    with tf.name_scope("mask"):
        a, a0, a1 = prot.triple_source.mask(x.backing_dtype, x.shape)
        with tf.name_scope("online"):
            # each server subtracts its share of the mask ...
            with tf.device(prot.server_0.device_name):
                alpha0 = x0 - a0
            with tf.device(prot.server_1.device_name):
                alpha1 = x1 - a1
            # ... then both servers open alpha = x - a
            with tf.device(prot.server_0.device_name):
                alpha_on_0 = alpha0 + alpha1
            with tf.device(prot.server_1.device_name):
                alpha_on_1 = alpha0 + alpha1
    return PondMaskedTensor(prot, x, a, a0, a1, alpha_on_0, alpha_on_1, x.is_scaled)

def _matmul_masked_masked(prot, x, y):
    assert isinstance(x, PondMaskedTensor), type(x)
    assert isinstance(y, PondMaskedTensor), type(y)
    a, a0, a1, alpha_on_0, alpha_on_1 = x.unwrapped
    b, b0, b1, beta_on_0, beta_on_1 = y.unwrapped
    with tf.name_scope("matmul"):
        # the crypto producer supplies shares of the product a @ b
        ab0, ab1 = prot.triple_source.matmul_triple(a, b)
        with tf.device(prot.server_0.device_name):
            with tf.name_scope("combine"):
                alpha = alpha_on_0
                beta = beta_on_0
                # only server 0 adds the public term alpha @ beta
                z0 = ab0 + a0.matmul(beta) + alpha.matmul(b0) + alpha.matmul(beta)
        with tf.device(prot.server_1.device_name):
            with tf.name_scope("combine"):
                alpha = alpha_on_1
                beta = beta_on_1
                z1 = ab1 + a1.matmul(beta) + alpha.matmul(b1)
        z = PondPrivateTensor(prot, z0, z1, x.is_scaled or y.is_scaled)
        # multiplying two scaled tensors doubles the fixed-point scale,
        # so the result must be truncated back down
        z = prot.truncate(z) if x.is_scaled and y.is_scaled else z
        return z
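The final prot.truncate call removes the extra fixed-point scale left by multiplying two scaled tensors. A minimal sketch of share-local truncation in the style used by protocols such as SecureML (the ring size and scale are illustrative assumptions, and the reconstructed result may be off by one unit in the last place with small probability):

```python
import numpy as np

MODULUS = 2**61  # illustrative ring size
SCALE = 2**16    # illustrative fixed-point scaling factor
rng = np.random.default_rng(3)

def share(t):
    s0 = rng.integers(0, MODULUS, size=np.shape(t), dtype=np.int64)
    return s0, (t - s0) % MODULUS

def truncate(z0, z1):
    # server 0 truncates its share directly; server 1 truncates the
    # negation of its share, so the errors nearly cancel on reconstruction
    t0 = z0 // SCALE
    t1 = (-((MODULUS - z1) // SCALE)) % MODULUS
    return t0, t1

# a doubly scaled product, e.g. (1.5 * SCALE) * (2.0 * SCALE)
z = int(1.5 * SCALE) * int(2.0 * SCALE)
z0, z1 = share(z)
t0, t1 = truncate(z0, z1)
# (t0 + t1) % MODULUS is within 1 of z // SCALE, i.e. of 3.0 * SCALE
```

Because truncation is entirely share-local, it costs no communication, which is one reason Pond keeps values in fixed-point form.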