Practical Examples with Transformers

First, install transformers:

pip install transformers
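The pipeline API also needs a deep-learning framework behind it. A minimal sketch (either backend works; the loading logs later in this post come from a TensorFlow checkpoint):

pip install torch
# or: pip install tensorflow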

import warnings
warnings.filterwarnings('ignore')  # suppress library warnings to keep the demo output clean
from transformers import pipeline

1–Sentiment Analysis (Sequence Classification)

classifier = pipeline('sentiment-analysis')
result = classifier("I hate you")[0]
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
classifier("I love cyx")[0]

label: NEGATIVE, with score: 0.9991
{'label': 'POSITIVE', 'score': 0.9996902942657471}

classifier('这部电影真的很垃圾,浪费我的时间!!!')  # "This movie is terrible, a waste of my time!!!"

[{'label': 'NEGATIVE', 'score': 0.6208301782608032}]
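The low score of about 0.62 above is a hint that the default checkpoint is an English-only model. A minimal sketch of switching to a multilingual checkpoint, assuming nlptown/bert-base-multilingual-uncased-sentiment is available on the Hub (it predicts star ratings such as '1 star' rather than POSITIVE/NEGATIVE):

zh_classifier = pipeline('sentiment-analysis', model='nlptown/bert-base-multilingual-uncased-sentiment')
print(zh_classifier('这部电影真的很垃圾,浪费我的时间!!!'))  # expect a low star rating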

2–Fill-Mask (Masked Language Modeling)

unmask = pipeline('fill-mask')

Try feeding in a sentence with a blank and check whether the predicted fills match reality.

from pprint import pprint
results1 = unmask(f'{unmask.tokenizer.mask_token} is the most beautiful woman in Harry Potter!')
results2 = unmask(f'{unmask.tokenizer.mask_token} is the best player in the NBA!')
pprint(results1)
pprint(results2)
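Each call returns a list of candidate fills, each a dict with 'score', 'token', 'token_str', and 'sequence'. To keep only the best candidates, the fill-mask pipeline accepts a top_k argument; a short sketch:

results3 = unmask(f'{unmask.tokenizer.mask_token} is the best player in the NBA!', top_k=2)
pprint(results3)  # only the two highest-scoring fills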

3–Text Generation

from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2')
set_seed(42)

All model checkpoint layers were used when initializing TFGPT2LMHeadModel.
All the layers of TFGPT2LMHeadModel were initialized from the model checkpoint at gpt2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFGPT2LMHeadModel for predictions without further training.

generator("Xiao ming loves Xiao Hong secretly,",max_length=100, num_return_sequences=5)

Setting pad_token_id to 50256 (first eos_token_id) to generate sequence
[{'generated_text': 'Xiao ming loves Xiao Hong secretly, so why shouldn't she talk to Xiao Ming? That said, they have no idea what Xiao Hong is thinking now as she is not in any way a master of such a secret or anything!\n\n"Don't be stubborn. That is not your intention……" A little shocked, then, Xiao Ming began to stir, "I told you this before, that you must not try to convince me any more. Why can't you just go with the'},
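The "Setting pad_token_id" line appears because GPT-2 defines no padding token. Passing one explicitly silences the message; a minimal sketch:

generator(
    "Xiao ming loves Xiao Hong secretly,",
    max_length=100,
    num_return_sequences=5,
    pad_token_id=generator.tokenizer.eos_token_id,  # reuse EOS as padding for open-ended generation
)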

4–Extractive Question Answering

This is the task of extracting an answer from a passage of text, given a question.

question_answerer = pipeline("question-answering")
# Extractive Question Answering is the task of extracting an answer from a text given a question.
# An example of a question answering dataset is the SQuAD dataset, which is entirely based on that task.
# To fine-tune a model on a SQuAD task, you can use the examples/pytorch/question-answering/run_squad.py script.
context = r"""Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script."""
result = question_answerer(question="What is extractive question answering?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")

Answer: 'the task of extracting an answer from a text given a question', score: 0.6177, start: 33, end: 94
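Asking a second question against the same context shows how the extracted span, score, and character offsets track the question; a sketch (the exact output depends on the default checkpoint):

result = question_answerer(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}")
# the answer should point at the SQuAD dataset mentioned in the context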

5–Translation

The WMT English-to-German dataset takes English sentences as input and the corresponding German sentences as targets.

translator = pipeline("translation_en_to_de")
print(translator("Hugging Face is a technology company based in New York and Paris", max_length=40))

[{'translation_text': 'Hugging Face ist ein Technologieunternehmen mit Sitz in New York und Paris.'}]
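You can also name an explicit translation model instead of relying on the task default; a sketch assuming Helsinki-NLP/opus-mt-en-de is available on the Hub:

translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")
print(translator("Hugging Face is a technology company based in New York and Paris", max_length=40))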

For more information, see the official transformers documentation: https://huggingface.co/transformers/task_summary.html
