PaddlePaddle pipeline deployment: GPU error when writing to Milvus, plus copying files between the local host and a Docker container for easier local debugging (no need to re-pull the image every time)


Error 1

[Hint: ‘cudaErrorInitializationError’. The API call failed because the CUDA driver and runtime could not be initialized. ] (at /paddle/paddle/fluid/platform/gpu_info.cc:355)

Note that the following packages are imported inside the functions, not at module level, so that the GPU is automatically released after each request:
import paddlenlp as ppnlp
from paddlenlp.data import Stack, Tuple, Pad
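
Put differently, here is a minimal sketch of the pattern (the class name ExampleOp is made up for illustration; my assumption is that the pipeline server runs each op in its own worker process, so a paddle/paddlenlp import at module level initializes CUDA in the parent process and the workers then hit cudaErrorInitializationError):

# BAD: a module-level import initializes CUDA in the parent serving process
# import paddlenlp as ppnlp

# GOOD: defer the import until the op is set up inside its worker process
from paddle_serving_server.web_service import Op

class ExampleOp(Op):
    def init_op(self):
        import paddlenlp as ppnlp  # CUDA is initialized here, in the worker
        self.tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained('ernie-1.0')

The full web_service.py below follows exactly this pattern.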

import logging
import numpy as np
import sys

from paddle_serving_server.web_service import WebService, Op

_LOGGER = logging.getLogger()


def convert_example(example,
                    tokenizer,
                    max_seq_length=512,
                    pad_to_max_seq_len=False):
    result = []
    for text in example:
        encoded_inputs = tokenizer(
            text=text,
            max_seq_len=max_seq_length,
            pad_to_max_seq_len=pad_to_max_seq_len)
        input_ids = encoded_inputs["input_ids"]
        token_type_ids = encoded_inputs["token_type_ids"]
        result += [input_ids, token_type_ids]
    return result


class ErnieOp(Op):
    def init_op(self):
        import paddlenlp as ppnlp
        self.tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained(
            'ernie-1.0')

    def preprocess(self, input_dicts, data_id, log_id):
        from paddlenlp.data import Stack, Tuple, Pad

        (_, input_dict), = input_dicts.items()
        print("input dict", input_dict)
        batch_size = len(input_dict.keys())
        examples = []
        for i in range(batch_size):
            input_ids, segment_ids = convert_example([input_dict[str(i)]],
                                                     self.tokenizer)
            examples.append((input_ids, segment_ids))
        batchify_fn = lambda samples, fn=Tuple(
            Pad(axis=0, pad_val=self.tokenizer.pad_token_id, dtype='int64'),  # input
            Pad(axis=0, pad_val=self.tokenizer.pad_token_id, dtype='int64'),  # segment
        ): fn(samples)
        input_ids, segment_ids = batchify_fn(examples)
        feed_dict = {}
        feed_dict['input_ids'] = input_ids
        feed_dict['token_type_ids'] = segment_ids
        return feed_dict, False, None, ""

    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        new_dict = {}
        new_dict["output_embedding"] = str(fetch_dict["output_embedding"]
                                           .tolist())
        return new_dict, None, ""


class ErnieService(WebService):
    def get_pipeline_response(self, read_op):
        ernie_op = ErnieOp(name="ernie", input_ops=[read_op])
        return ernie_op


ernie_service = ErnieService(name="ernie")
ernie_service.prepare_pipeline_config("config_nlp.yml")
ernie_service.run_service()
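
Once the service is running, it can be called over HTTP for a quick local test. A minimal client sketch (the port 18082 is a placeholder that must match config_nlp.yml, which is not shown here; the {"key": ..., "value": ...} request body is the usual Paddle Serving pipeline convention):

import json
import requests

# Service name is "ernie" (see ErnieService above); the port comes from config_nlp.yml.
url = "http://127.0.0.1:18082/ernie/prediction"

# Each entry in "value" arrives in preprocess() as input_dict["0"], input_dict["1"], ...
data = {"key": ["0"], "value": ["这是一条测试文本"]}

resp = requests.post(url=url, data=json.dumps(data))
print(resp.json())  # should contain the "output_embedding" produced in postprocess()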

Copying files with docker cp

Copy the modified script from the host into the running container and restart the container, so you don't have to rebuild or re-pull the image for every debugging round:

docker cp /data/xx/xx/nlp_work/deploy/docker_server/web_service.py 796a758245f8:/deploy/
docker restart 796a758245f8
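
The reverse direction works the same way, e.g. to pull the container's copy back to the host for comparison (paths reuse the example above):

docker cp 796a758245f8:/deploy/web_service.py /data/xx/xx/nlp_work/deploy/docker_server/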