Model name
shenzhi-wang/Llama3-8B-Chinese-Chat (downloading it may require some network "magic", i.e. a proxy). You can search for it on Hugging Face. The same author has also released a corresponding model based on Llama 3.1.
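If Hugging Face is unreachable from your network (the "magic" mentioned above), a common workaround is to point huggingface_hub at a mirror and pre-download the weights. This is a sketch: hf-mirror.com is a community-run mirror, not an official endpoint, and the local directory name is arbitrary.

```shell
# Point huggingface_hub at a community mirror (assumption: hf-mirror.com
# works for you; it is not an official Hugging Face endpoint).
export HF_ENDPOINT=https://hf-mirror.com

# Pre-download the model weights into a local directory of your choice.
huggingface-cli download shenzhi-wang/Llama3-8B-Chinese-Chat --local-dir ./Llama3-8B-Chinese-Chat
```

With the weights on disk, you can pass the local directory to `from_pretrained` instead of the model id.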
Install Miniconda
This assumes you already know what conda is; if not, a quick Google search will cover it.
Linux

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
~/miniconda3/bin/conda init zsh
macOS

mkdir -p ~/miniconda3
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
~/miniconda3/bin/conda init zsh
Windows

curl https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe -o miniconda.exe
start /wait "" miniconda.exe /S
del miniconda.exe
Environment setup

# Create the virtual environment
conda create -n llama3-chinese python=3.11 -y
conda activate llama3-chinese
# Install torch (via the Tsinghua PyPI mirror)
pip install torch==2.2.2 --index-url https://pypi.tuna.tsinghua.edu.cn/simple
# Install transformers and gradio
pip install transformers gradio -i https://pypi.tuna.tsinghua.edu.cn/simple
UI code
import gradio as gr
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "shenzhi-wang/Llama3-8B-Chinese-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def chatbot(message):
    messages = [{"role": "user", "content": message}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(
        input_ids,
        max_new_tokens=8192,
        do_sample=True,
        temperature=0.6,
        top_p=0.9,
    )
    # Strip the prompt tokens and decode only the newly generated reply
    response = outputs[0][input_ids.shape[-1]:]
    return tokenizer.decode(response, skip_special_tokens=True)

iface = gr.Interface(
    fn=chatbot,
    inputs=gr.Textbox(lines=2, placeholder="输入你的消息..."),
    outputs="text",
    title="中文聊天机器人",
    description="输入消息与聊天机器人进行对话。"
)

iface.launch()
# To expose the app on your LAN instead:
# iface.launch(server_name="0.0.0.0", server_port=7860)
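The `apply_chat_template` call above renders the messages list into Llama 3's prompt format before tokenization. The authoritative template ships with the tokenizer config, so treat the following stdlib-only rendering as an illustration of the shape of the prompt, not as the source of truth:

```python
# Sketch of the Llama-3 chat prompt that apply_chat_template produces for
# a single user turn. The real template lives in the tokenizer config;
# this is only an illustration.
def render_llama3_prompt(messages):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # add_generation_prompt=True appends an empty assistant header so the
    # model continues as the assistant.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = render_llama3_prompt([{"role": "user", "content": "你好"}])
print(prompt)
```

This is also why the code slices `outputs[0][input_ids.shape[-1]:]`: `generate` returns the prompt tokens plus the reply, and only the tail is the new text.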
Run it: python web-ui.py
Then open http://127.0.0.1:7860 in your browser.
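Note that the `chatbot` function above is stateless: each call sends only the latest message, so the model has no memory of earlier turns. A minimal sketch of keeping multi-turn history, where `generate_reply` is a hypothetical stand-in for the tokenizer/model pipeline above:

```python
# Minimal multi-turn history management (sketch). `generate_reply` is a
# hypothetical stand-in for the apply_chat_template + model.generate call.
def make_chat(generate_reply, max_turns=10):
    history = []

    def chat(user_message):
        history.append({"role": "user", "content": user_message})
        reply = generate_reply(history)
        history.append({"role": "assistant", "content": reply})
        # Keep the prompt bounded: drop the oldest turns beyond max_turns.
        del history[:-2 * max_turns]
        return reply

    return chat

# Usage with a dummy backend that reports the current turn number:
chat = make_chat(lambda msgs: f"turn {len(msgs) // 2 + 1}")
print(chat("你好"))  # turn 1
print(chat("再见"))  # turn 2
```

To wire this into the UI, you would pass the full `history` list to `apply_chat_template` instead of a single-message list, and cap `max_turns` so the prompt stays within the model's context window.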