FATE — 2.2.6 Homo-NN: Using FATE Interfaces in the Trainer

Preface

In this tutorial, we demonstrate how to use the trainer user interfaces to return formatted prediction results, evaluate model performance, save models, and display loss curves and performance scores on the FATE Board. These interfaces let your trainer integrate with the FATE framework and make it easier to use.

The example code on the official website contains an error, so we note it here; the corrected _proximal_term is shown below:

def _proximal_term(self, model_a, model_b):
    # squared L2 distance between the two models' parameters;
    # the global model (model_b) is detached so gradients flow only through model_a
    diff_ = 0
    for p1, p2 in zip(model_a.parameters(), model_b.parameters()):
        diff_ += ((p1 - p2.detach()) ** 2).sum()
    return diff_
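
For reference, this implements the proximal penalty from the FedProx objective: each client k locally minimizes its own loss plus a squared distance to the current global model (a standard statement of the objective, with \mu corresponding to the trainer's u parameter):

    \min_{w} \; h_k(w; w^t) = F_k(w) + \frac{\mu}{2} \lVert w - w^t \rVert^2

where w^t is the global model received at round t. Detaching w^t in the code ensures the penalty only produces gradients for the local model w.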

In this tutorial, we will continue to develop our toy FedProx trainer.

A Toy Implementation of FedProx

In the previous tutorial, we gave a concrete example by walking through a toy implementation of the FedProx algorithm. In FedProx, training differs slightly from the standard FedAVG algorithm: when computing the loss, a proximal term is computed from the current model and the global model. The code is here:

from pipeline.component.nn import save_to_fate  # run this import in its own notebook cell
%%save_to_fate trainer fedprox.py
import copy
import torch as t
from federatedml.nn.homo.trainer.trainer_base import TrainerBase
from torch.utils.data import DataLoader
# We need to use aggregator client&server class for federation
from federatedml.framework.homo.aggregator.secure_aggregator import SecureAggregatorClient, SecureAggregatorServer
# We use LOGGER to output logs
from federatedml.util import LOGGER


class ToyFedProxTrainer(TrainerBase):

    def __init__(self, epochs, batch_size, u):
        super(ToyFedProxTrainer, self).__init__()
        # trainer parameters
        self.epochs = epochs
        self.batch_size = batch_size
        self.u = u

    # Given two models, we compute the proximal term
    def _proximal_term(self, model_a, model_b):
        diff_ = 0
        for p1, p2 in zip(model_a.parameters(), model_b.parameters()):
            diff_ += ((p1-p2.detach())**2).sum()
        return diff_

    # implement the train function, this function will be called by client side
    # contains the local training process and the federation part
    def train(self, train_set, validate_set=None, optimizer=None, loss=None, extra_data={}):
        
        sample_num = len(train_set)
        aggregator = None
        if self.fed_mode:
            aggregator = SecureAggregatorClient(True, aggregate_weight=sample_num, 
                                                communicate_match_suffix='fedprox')  # initialize aggregator

        # set dataloader
        dl = DataLoader(train_set, batch_size=self.batch_size, num_workers=4)

        for epoch in range(self.epochs):
            
            # the local training process
            LOGGER.debug('running epoch {}'.format(epoch))
            global_model = copy.deepcopy(self.model)
            loss_sum = 0

            # batch training process
            for batch_data, label in dl:
                optimizer.zero_grad()
                pred = self.model(batch_data)
                loss_term_a = loss(pred, label)
                loss_term_b = self._proximal_term(self.model, global_model)
                loss_ = loss_term_a + (self.u/2) * loss_term_b
                loss_.backward()
                loss_sum += float(loss_.detach().numpy())
                optimizer.step()

            # print loss
            LOGGER.debug('epoch loss is {}'.format(loss_sum))

            # the aggregation process
            if aggregator is not None:
                self.model = aggregator.model_aggregation(self.model)
                converge_status = aggregator.loss_aggregation(loss_sum)

    # implement the aggregation function, this function will be called by the server side
    def server_aggregate_procedure(self, extra_data={}):

        # the server-side procedure only runs in federated mode
        if not self.fed_mode:
            return

        # initialize aggregator
        aggregator = SecureAggregatorServer(communicate_match_suffix='fedprox')

        # the aggregation process is simple: every epoch the server aggregates the model and loss once
        for i in range(self.epochs):
            aggregator.model_aggregation()
            merge_loss, _ = aggregator.loss_aggregation()

User Interfaces

We now introduce the user interfaces provided by the TrainerBase class; we will use these functions to improve our trainer.

Formatting Prediction Results

This function organizes prediction results and returns a StdReturnFormat object that wraps them. You can call it at the end of your predict function to return a standard format that the FATE framework can parse and display on the FATE Board. This standardized format also lets downstream components (for example, the Evaluation component) consume the prediction results.

This function takes four parameters:

  • sample_ids: a list of sample IDs

  • predict_result: a tensor of prediction scores

  • true_label: a tensor of true labels

  • task_type: the type of task being performed. By default ('auto') the task type is inferred automatically; other options are 'binary', 'multi', and 'regression'. Currently, the FATE Board only supports displaying binary/multi-class classification and regression tasks.

Later, we will implement predict in our FedProx trainer.

import torch as t 
from typing import List

def format_predict_result(self, sample_ids: List, predict_result: t.Tensor,
                          true_label: t.Tensor, task_type: str = None):
    ...
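
For example, a predict implementation typically ends by calling this function (a minimal sketch; _predict is a helper we implement later in this tutorial that runs the model over the dataset):

def predict(self, dataset):
    sample_ids, preds, labels = self._predict(dataset)  # collect ids, scores and labels
    return self.format_predict_result(sample_ids, preds, labels, 'binary')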

callback_metric and callback_loss

As their names suggest, these two functions let you record data points and display custom evaluation metrics and loss curves on the FATE Board.

When using callback_metric, you provide a metric name, a float value, the metric type ('train' or 'validate'), and the epoch index. When using callback_loss, you provide a float loss value and the epoch index. The recorded data is displayed on the FATE Board.

def callback_metric(self, metric_name: str, value: float, metric_type='train', epoch_idx=0):
    ...

def callback_loss(self, loss: float, epoch_idx: int):
    ...
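
For example, inside the per-epoch loop of train you might record the epoch loss and a custom metric like this (a sketch; loss_sum and acc are values computed by your own training code, as in the improved trainer below):

# inside train(), once per epoch
self.callback_loss(loss_sum, epoch)  # adds a point to the loss curve
self.callback_metric('my_accuracy', acc, metric_type='train', epoch_idx=epoch)  # custom metric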

Summary

This function lets you save a summary of the training process in a dictionary, for example the loss history and the best epoch. After the task completes, you can retrieve this summary from the pipeline.

def summary(self, summary_dict: dict):
    ...
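
For example, after the job finishes you can fetch the saved dictionary through the pipeline component (a sketch, assuming the component is named 'nn_0' as in the pipeline code at the end of this tutorial):

summary = pipeline.get_component('nn_0').get_summary()
print(summary['loss_history'])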

Save and Checkpoint

You can use save to save your model and checkpoint to set model checkpoints. Note that:

  • save only stores the model in memory, so the saved model is the one most recently saved with save.

  • checkpoint saves the model directly to disk.

  • save can only be called on the client side (in the train function), while checkpoint should be called on both the client and the server side (in the train and server_aggregate_procedure functions) for the checkpoint mechanism to work correctly.

The extra_data parameter of these functions lets you save additional data in a dictionary. This is useful when warm-starting a model, because you can retrieve the saved data through the extra_data parameter of the train and server_aggregate_procedure functions.

def save(
    self,
    model=None,
    epoch_idx=-1,
    optimizer=None,
    converge_status=False,
    loss_history=None,
    best_epoch=-1,
    extra_data={}): ...

def checkpoint(
    self,
    epoch_idx,
    model=None,
    optimizer=None,
    converge_status=False,
    loss_history=None,
    best_epoch=-1,
    extra_data={}): ...
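
For example, on the client side you might checkpoint every epoch and save the final model when training ends (a sketch; the extra_data content here is an illustrative choice, not part of the API):

# inside train(), once per epoch
self.checkpoint(epoch_idx=epoch, model=self.model, optimizer=optimizer)

# at the end of train()
self.save(model=self.model, epoch_idx=epoch, optimizer=optimizer,
          extra_data={'u': self.u})  # retrievable via extra_data on warm start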

Evaluation

This interface lets you evaluate your model by computing various performance metrics automatically. Which metrics are computed depends on the dataset and task type:

  • binary classification: 'auc' and 'ks'

  • multi-class classification: 'accuracy', 'precision', and 'recall'

  • regression: 'rmse' and 'mae'

You can specify the dataset type ('train' or 'validate') and the task type ('binary', 'multi', or 'regression') in the parameters. If the task type is not specified, it is inferred automatically from your scores and labels.

def evaluation(self, sample_ids: list, pred_scores: t.Tensor, label: t.Tensor, dataset_type='train',
                     epoch_idx=0, task_type=None):
    ...
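
In our improved trainer this is called once per epoch (a sketch; sample_ids, preds and labels come from the _predict helper shown below):

sample_ids, preds, labels = self._predict(train_set)
self.evaluation(sample_ids, preds, labels, 'train', task_type='binary', epoch_idx=epoch)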

An Improved FedProx Trainer

In this section, we use the interfaces introduced above to improve our FedProx trainer into a more complete training tool. We:

  • implement the predict function and return formatted results

  • add an evaluation step

  • save the model at the end of training

  • call callback_loss to record the loss curve

  • compute an accuracy score manually and display it with callback_metric

from pipeline.component.nn import save_to_fate  # run this import in its own notebook cell
%%save_to_fate trainer fedprox_v2.py
import copy
import torch as t
from federatedml.nn.homo.trainer.trainer_base import TrainerBase
from federatedml.nn.dataset.base import Dataset
from torch.utils.data import DataLoader
# We need to use aggregator client&server class for federation
from federatedml.framework.homo.aggregator.secure_aggregator import SecureAggregatorClient, SecureAggregatorServer
# We use LOGGER to output logs
from federatedml.util import LOGGER


class ToyFedProxTrainer(TrainerBase):

    def __init__(self, epochs, batch_size, u):
        super(ToyFedProxTrainer, self).__init__()
        # trainer parameters
        self.epochs = epochs
        self.batch_size = batch_size
        self.u = u

    # Given two models, we compute the proximal term
    def _proximal_term(self, model_a, model_b):
        diff_ = 0
        for p1, p2 in zip(model_a.parameters(), model_b.parameters()):
            diff_ += ((p1-p2.detach())**2).sum()
        return diff_

    # implement the train function, this function will be called by client side
    # contains the local training process and the federation part
    def train(self, train_set, validate_set=None, optimizer=None, loss=None, extra_data={}):
        
        sample_num = len(train_set)
        aggregator = None
        if self.fed_mode:
            aggregator = SecureAggregatorClient(True, aggregate_weight=sample_num, 
                                                communicate_match_suffix='fedprox')  # initialize aggregator

        # set dataloader
        dl = DataLoader(train_set, batch_size=self.batch_size, num_workers=4)

        loss_history = []
        for epoch in range(self.epochs):
            
            # the local training process
            LOGGER.debug('running epoch {}'.format(epoch))
            global_model = copy.deepcopy(self.model)
            loss_sum = 0

            # batch training process
            for batch_data, label in dl:
                optimizer.zero_grad()
                pred = self.model(batch_data)
                loss_term_a = loss(pred, label)
                loss_term_b = self._proximal_term(self.model, global_model)
                loss_ = loss_term_a + (self.u/2) * loss_term_b
                LOGGER.debug('loss is {} loss a is {} loss b is {}'.format(loss_, loss_term_a, loss_term_b))
                loss_.backward()
                loss_sum += float(loss_.detach().numpy())
                optimizer.step()
                
            # print loss
            LOGGER.debug('epoch loss is {}'.format(loss_sum))
            loss_history.append(loss_sum)

            # we callback loss here
            self.callback_loss(loss_sum, epoch)

            # we evaluate our model here
            sample_ids, preds, labels = self._predict(train_set)
            self.evaluation(sample_ids, preds, labels, 'train', task_type='binary', epoch_idx=epoch)

            # we manually compute accuracy:
            acc = (((preds > 0.5) + 0) == labels).sum() / len(labels)
            acc = float(acc.detach().numpy())
            self.callback_metric('my_accuracy', acc, epoch_idx=epoch)

            # the aggregation process
            if aggregator is not None:
                self.model = aggregator.model_aggregation(self.model)
                converge_status = aggregator.loss_aggregation(loss_sum)

        # We save the model at the end of training
        self.save(self.model, epoch, optimizer)
        # We also save the model summary
        self.summary({'loss_history': loss_history})

    # implement the aggregation function, this function will be called by the server side
    def server_aggregate_procedure(self, extra_data={}):

        # the server-side procedure only runs in federated mode
        if not self.fed_mode:
            return

        # initialize aggregator
        aggregator = SecureAggregatorServer(communicate_match_suffix='fedprox')

        # the aggregation process is simple: every epoch the server aggregates the model and loss once
        for i in range(self.epochs):
            aggregator.model_aggregation()
            merge_loss, _ = aggregator.loss_aggregation()


    def _predict(self, dataset: Dataset):
        len_ = len(dataset)
        dl = DataLoader(dataset, batch_size=len_)
        preds, labels = None, None
        for data, l in dl:
            preds = self.model(data)
            labels = l
        sample_ids = dataset.get_sample_ids()
        return sample_ids, preds, labels

    # We implement the predict function here
    def predict(self, dataset):
        
        sample_ids, preds, labels = self._predict(dataset)
        return self.format_predict_result(sample_ids, preds, labels, 'binary')

Submitting the Pipeline

Here we submit a new pipeline to test our new trainer.

# torch
import torch as t
from torch import nn
from pipeline import fate_torch_hook
fate_torch_hook(t)
# pipeline
from pipeline.component.homo_nn import HomoNN, TrainerParam  # HomoNN Component, TrainerParam for setting trainer parameter
from pipeline.backend.pipeline import PipeLine  # pipeline class
from pipeline.component import Reader, DataTransform, Evaluation # Data I/O and Evaluation
from pipeline.interface import Data  # Data Interfaces for defining data flow


# create a pipeline to submit the job
guest = 9999
host = 10000
arbiter = 10000
pipeline = PipeLine().set_initiator(role='guest', party_id=guest).set_roles(guest=guest, host=host, arbiter=arbiter)

# read uploaded dataset
train_data_0 = {"name": "breast_homo_guest", "namespace": "experiment"}
train_data_1 = {"name": "breast_homo_host", "namespace": "experiment"}
reader_0 = Reader(name="reader_0")
reader_0.get_party_instance(role='guest', party_id=guest).component_param(table=train_data_0)
reader_0.get_party_instance(role='host', party_id=host).component_param(table=train_data_1)

# The transform component converts the uploaded data to the FATE standard format
data_transform_0 = DataTransform(name='data_transform_0')
data_transform_0.get_party_instance(
    role='guest', party_id=guest).component_param(
    with_label=True, output_format="dense")
data_transform_0.get_party_instance(
    role='host', party_id=host).component_param(
    with_label=True, output_format="dense")

"""
Define Pytorch model/ optimizer and loss
"""
model = nn.Sequential(
    nn.Linear(30, 1),
    nn.Sigmoid()
)
loss = nn.BCELoss()
optimizer = t.optim.Adam(model.parameters(), lr=0.01)


"""
Create Homo-NN Component
"""
nn_component = HomoNN(name='nn_0',
                      model=model, # set model
                      loss=loss, # set loss
                      optimizer=optimizer, # set optimizer
                      # Here we use our fedprox_v2 trainer
                      # TrainerParam passes parameters to the trainer; see above for details
                      trainer=TrainerParam(trainer_name='fedprox_v2', epochs=3, batch_size=128, u=0.5),
                      torch_seed=100 # random seed
                      )

# define work flow
pipeline.add_component(reader_0)
pipeline.add_component(data_transform_0, data=Data(data=reader_0.output.data))
pipeline.add_component(nn_component, data=Data(train_data=data_transform_0.output.data))
pipeline.compile()
pipeline.fit()
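
Once fit() returns, you can inspect what our trainer produced (a sketch, assuming the FATE 1.x pipeline component API):

nn_result = pipeline.get_component('nn_0')
print(nn_result.get_summary())      # the {'loss_history': [...]} dict saved by summary()
print(nn_result.get_output_data())  # the formatted prediction results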
