Llama-3 Source Code Walkthrough: Inference (infer)

Preface

This series walks through the code of an open-source GitHub project built on Meta's newly released open-source large model, Llama-3. It is the third phase of the Chinese-LLaMA-Alpaca series of open-source LLM projects (following phases one and two). The project releases a Chinese Llama-3 base model and a Chinese Llama-3-Instruct instruction-tuned model: starting from the original Llama-3, these models are incrementally pre-trained on large-scale Chinese data and then fine-tuned on curated instruction data, which further improves Chinese semantic understanding and instruction following and yields a clear performance gain over the second-generation models. Using that project as the reference, I explain the principles behind its training and inference and walk readers through the code step by step to show how the large language model actually runs. This post covers the Llama-3 inference source code, and the walkthrough follows the flow of the code itself.


I. Overall Source Code Walkthrough

1. Complete main source code

I first give the complete source code; the parts that inference actually relies on are analyzed in depth in the sections below, while the simpler parts are not discussed further.

if __name__ == '__main__':
   load_type = torch.float16
   
   # Move the model to the MPS device if available
   if torch.backends.mps.is_available():
       device = torch.device("mps")
   else:
       if torch.cuda.is_available():
           device = torch.device(0)
       else:
           device = torch.device('cpu')
   print(f"Using device: {
     device}")

   if args.tokenizer_path is None:
       args.tokenizer_path = args.base_model
   tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_path)
   terminators = [
               tokenizer.eos_token_id,
               tokenizer.convert_tokens_to_ids("<|eot_id|>")
           ]
   if args.use_vllm:
       model = LLM(model=args.base_model,
           tokenizer=args.tokenizer_path,
           tensor_parallel_size=len(args.gpus.split(',')),
           dtype=load_type
           )
       generation_config["stop_token_ids"] = terminators
       generation_config["stop"] = ["<|eot_id|>", "<|end_of_text|>"]
   else:
       if args.load_in_4bit or args.load_in_8bit:
           quantization_config = BitsAndBytesConfig(
               load_in_4bit=args.load_in_4bit,
               load_in_8bit=args.load_in_8bit,
               bnb_4bit_compute_dtype=load_type,
               bnb_4bit_use_double_quant=True,
               bnb_4bit_quant_type="nf4"
           )

       model = AutoModelForCausalLM.from_pretrained(
           args.base_model,
           torch_dtype=load_type,
           low_cpu_mem_usage=True,
           device_map='auto',
           quantization_config=quantization_config if (args.load_in_4bit or args.load_in_8bit) else None,
           attn_implementation="flash_attention_2" if args.use_flash_attention_2 else "sdpa"
       )
       if device==torch.device('cpu'):
           model.float()
       model.eval()
   # test data
   if args.data_file is None:
       examples = sample_data
   else:
       with open(args.data_file, 'r') as f:
           examples = [line.strip() for line in f.readlines()]
       print("first 10 examples:")
       for example in examples[:10]:
           print(example)

   with torch.no_grad():
       if args.interactive:
           print("Start inference with instruction mode.")
           print('='*85)
           print("+ 该模式下仅支持单轮问答,无多轮对话能力。\n"
                 "+ 如要进行多轮对话,请使用llama.cpp")
           print('-'*85)
           print("+ This mode only supports single-turn QA.\n"
                 "+ If you want to experience multi-turn dialogue, please use llama.cpp")
           print('='*85)

           while True:
               raw_input_text = input("Input:")
               if len(raw_input_text.strip())==0:
                   break
               if args.with_prompt:
                   input_text = generate_prompt(instruction=raw_input_text)
               else:
                   input_text = raw_input_text

               if args.use_vllm:
                   output = model.generate([input_text], SamplingParams(**generation_config), use_tqdm=False)
                   response = output[0].outputs[0].text
               else:
                   inputs = tokenizer(input_text,return_tensors="pt")  #add_special_tokens=False ?
                   generation_output = model.generate(
                       input_ids = inputs["input_ids"].to(device),
                       attention_mask = inputs['attention_mask'].to(device),
                       eos_token_id=terminators,
                       pad_token_id=tokenizer.eos_token_id,
                       generation_config = generation_config
                   )
                   s = generation_output[0]
                   output = tokenizer.decode(s, skip_special_tokens=True)
                   if args.with_prompt:
                       response = output.split("assistant\n\n")[-1].strip()
                   else:
                       response = output
               print("Response: ",response)
               print("\n")
       else:
           print("Start inference.")
           results = []
           if args.use_vllm:
               if args.with_prompt is True:
                   inputs = [generate_prompt(example) for example in examples]
               else:
                   inputs = examples
               outputs = model.generate(inputs, SamplingParams(**generation_config))

               for index, (example, output) in enumerate(zip(examples, outputs)):
                   response = output.outputs[0].text
                   print(f"======={
     index}=======")
                   print(f"Input: {
     example}\n")
                   print(f"Output: {
     response}\n")
                   results.append({
   "Input":example,"Output":response})
           else:
               for index, example in enumerate(examples):
                   if args.with_prompt:
                       input_text = generate_prompt(instruction=example)
                   else:
                       input_text = example
                   inputs = tokenizer(input_text,return_tensors="pt")  #add_special_tokens=False ?
                   generation_output = model.generate(
                       input_ids = inputs["input_ids"].to(device),
                       attention_mask = inputs['attention_mask'].to(device),
                       eos_token_id=terminators,
                       pad_token_id=tokenizer.eos_token_id,
                       generation_config = generation_config
                   )
                   s = generation_output[0]
                   output = tokenizer.decode(s,skip_special_tokens=True)
                   if args.with_prompt:
                       response = output.split("assistant\n\n")[1].strip()
                   else:
                       response = output
                   print(f"======={
     index}=======")
                   print(f"Input: {
     example}\n")
                   print(f"Output: {
     response}\n")

                   results.append({
   "Input":input_text,"Output":response})

           dirname = os.path.dirname(args.predictions_file)
           os.makedirs(dirname,exist_ok=True)
           with open(args.predictions_file,'w') as f:
               json.dump(results,f,ensure_ascii=False,indent=2)
           if args.use_vllm:
               with open(dirname+'/generation_config.json','w') as f:
                   json.dump(generation_config,f,ensure_ascii=False,indent=2)
           else:
               generation_config.save_pretrained('./')
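
Note that generation_config is defined earlier in the script and is not part of this excerpt. Judging from how it is used, it is a plain dict in the vLLM branch (it is indexed and unpacked into SamplingParams) and a transformers GenerationConfig in the Transformers branch. A minimal sketch for the Transformers path, with illustrative sampling values rather than the project's actual defaults, could look like this:

from transformers import GenerationConfig

# Illustrative decoding settings; the real script defines its own values.
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
    max_new_tokens=512,
)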

2. Tokenizer loading

Tokenizer loading is covered in a separate blog post, so here I just give the relevant source directly:

tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_path)
terminators = [
           tokenizer.eos_token_id,
           tokenizer.convert_tokens_to_ids("<|eot_id|>")
       ]

For this project's tokenizer, tokenizer.eos_token_id = 128009, and "<|eot_id|>" also maps to id 128009, so terminators = [128009, 128009].
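
To check these ids concretely, a small standalone snippet like the following can be run (the model path is a placeholder for args.base_model / args.tokenizer_path):

from transformers import AutoTokenizer

# "path/to/llama-3-chinese-instruct" is a placeholder for the real weights directory.
tokenizer = AutoTokenizer.from_pretrained("path/to/llama-3-chinese-instruct")
eot_id = tokenizer.convert_tokens_to_ids("<|eot_id|>")   # end-of-turn token id
terminators = [tokenizer.eos_token_id, eot_id]           # generation stops on either id
print(tokenizer.eos_token_id, eot_id)                    # for this tokenizer: 128009 128009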

3. Llama-3 model loading

Loading models with Hugging Face is covered in a separate blog post, so the Llama-3 model loading itself is not explained again here; the source is as follows:

model = AutoModelForCausalLM.from_pretrained(
    args.base_model,  # directory containing the model weights
    torch_dtype=load_type,
    low_cpu_mem_usage=True,
    device_map='auto',
    quantization_config=quantization_config if (args.load_in_4bit or args.load_in_8bit) else None,
    attn_implementation="flash_attention_2" if args.use_flash_attention_2 else "sdpa"
)
if device==torch.device('cpu'):
    model.float()
model.eval()

Note: model.eval() switches the model to evaluation mode, the standard PyTorch practice for inference; it disables training-only behavior such as dropout (gradient computation is separately avoided by the with torch.no_grad() block in the main script).
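
For reference, here is a self-contained sketch of the 4-bit loading path that the quantization_config branch enables (the model path is a placeholder and bitsandbytes must be installed):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit NF4
    bnb_4bit_compute_dtype=torch.float16,   # matmuls still run in fp16
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)
model = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-3-chinese-instruct",     # placeholder for args.base_model
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
    quantization_config=quantization_config,
)
model.eval()  # evaluation mode; wrap generation in torch.no_grad() as the main script does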

4. Loading the test data

# test data
if args.data_file is None:
    examples = sample_data  #  ["为什么要减少污染,保护环境?","你有什么建议?"]
else:
    with open(args.data_file, 'r') as f:
        examples = [line.strip() for line in f.readlines()]
    print("first 10 examples:")
    for example in examples[:10]:
        print(example)
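
When args.with_prompt is set, each example is wrapped by generate_prompt before tokenization. That helper is defined elsewhere in the script and is not shown in this excerpt; a plausible sketch based on the Llama-3 chat template (the system prompt text here is only illustrative) would be:

def generate_prompt(instruction, system_prompt="You are a helpful assistant."):
    # Build a Llama-3-style chat prompt via the tokenizer's chat template.
    # The trailing assistant header produced by add_generation_prompt=True is what
    # the later output.split("assistant\n\n") relies on to extract the response.
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": instruction},
    ]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )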