Take GPT-2 as an example.
Load the model and run inference:
from transformers import GPT2Tokenizer, GPT2LMHeadModel, GPT2Config
import torch

# Load the config, model, and tokenizer from a local GPT-2 checkpoint
config = GPT2Config.from_pretrained("../model/gpt2")
model = GPT2LMHeadModel.from_pretrained("../model/gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("../model/gpt2")

prompt = "I thought this movie was glorious, I appreciated it. Conclusion: This movie is"
inputs = tokenizer(prompt, return_tensors="pt")

# Forward pass; output_hidden_states=True asks the model to also return
# the hidden states of every layer, not just the final logits
output = model(inputs.input_ids, output_hidden_states=True)
What exactly does `output` contain?
Looking at the import section of the modeling_gpt2 source code:
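Before reading the source, we can already probe the returned object at runtime. The sketch below builds the same output class with dummy tensors so it runs without the GPT-2 weights; the sequence length 18 and the GPT-2-small sizes (vocab 50257, hidden 768, 12 layers) are assumptions for illustration:

```python
import torch
from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions

# Dummy tensors standing in for a real forward pass (shapes follow GPT-2 small)
logits = torch.zeros(1, 18, 50257)
hidden = tuple(torch.zeros(1, 18, 768) for _ in range(13))  # embeddings + 12 layers

output = CausalLMOutputWithCrossAttentions(logits=logits, hidden_states=hidden)

# A ModelOutput behaves both like a named tuple and like an ordered dict;
# fields left as None (loss here, since no labels were passed) are dropped from keys()
print(type(output).__name__)           # CausalLMOutputWithCrossAttentions
print(list(output.keys()))             # ['logits', 'hidden_states']
print(output.logits.shape)             # torch.Size([1, 18, 50257])
print(output["logits"] is output[0])   # dict-style and index access agree: True
print(output.loss)                     # None
```

With a real forward pass, `type(output)` and `output.keys()` answer the question directly.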
from ...modeling_outputs import (
    BaseModelOutputWithPastAndCrossAttentions,
    CausalLMOutputWithCrossAttentions,
    QuestionAnsweringModelOutput,
    SequenceClassifierOutputWithPast,
    TokenClassifierOutput,
)
Digging further into the modeling_outputs.py file, we find the class of `output`:
class CausalLMOutputWithCrossAttentions(ModelOutput):
    """
    Base class for causal language model (or autoregressive) outputs.

    Args:
        loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
            Language modeling loss (for next-token prediction).
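Since this is an autoregressive output, the most common use of `output.logits` is to read the next-token distribution from the last position. A minimal sketch, using random tensors in place of a real forward pass (the batch size 1, sequence length 18, and vocab size 50257 are assumptions matching GPT-2 small):

```python
import torch

# output.logits has shape (batch, seq_len, vocab_size); for autoregressive
# generation, the prediction for the NEXT token lives at the last position
logits = torch.randn(1, 18, 50257)

next_token_logits = logits[:, -1, :]          # shape (1, 50257)
next_token_id = next_token_logits.argmax(-1)  # greedy choice, shape (1,)
print(next_token_logits.shape)  # torch.Size([1, 50257])
print(next_token_id.shape)      # torch.Size([1])
```

With the real `output` from the snippet above, `tokenizer.decode(next_token_id)` would turn the chosen id back into text.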