InternLM (书生·浦语) Hands-On Guide: Easy Deployment, Fun Experience
Task Overview
- InternLM2-Chat-1.8B: an AI chat companion (basic task)
Task 1: Using InternLM2-Chat-1.8B
1.1 Command-Line Chat
First, try out this AI chat companion from the command line:
- Create the project folder and an empty script:
mkdir -p /root/demo
touch /root/demo/cli_demo.py
- Put the following code into cli_demo.py:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name_or_path = "/root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b"

# Load the tokenizer and model (trust_remote_code is required for InternLM2)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map='cuda:0')
model = model.eval()

system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user, such as English and 中文.
"""

messages = [(system_prompt, '')]

print("=============Welcome to InternLM chatbot, type 'exit' to exit.=============")

while True:
    input_text = input("\nUser >>> ")
    input_text = input_text.replace(' ', '')
    if input_text == "exit":
        break

    length = 0
    # stream_chat yields progressively longer partial responses;
    # print only the suffix that is new since the last iteration
    for response, _ in model.stream_chat(tokenizer, input_text, messages):
        if response is not None:
            print(response[length:], flush=True, end="")
            length = len(response)
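The streaming loop above prints only the newly generated suffix of each partial response, which is what makes the output appear token by token. A minimal sketch of that delta-printing logic, with a hypothetical `fake_stream` generator standing in for `model.stream_chat` (which yields ever-longer partial responses):

```python
def fake_stream():
    # Hypothetical stand-in for model.stream_chat: yields growing partial responses
    for partial in ["Hel", "Hello", "Hello, world"]:
        yield partial, None

length = 0
chunks = []
for response, _ in fake_stream():
    if response is not None:
        chunks.append(response[length:])  # only the newly generated suffix
        length = len(response)

full = "".join(chunks)
print(full)  # prints "Hello, world"
```

Because each yielded `response` contains everything generated so far, slicing from `length` avoids reprinting text the user has already seen.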
- Start the AI companion:
python /root/demo/cli_demo.py
(Screenshot: a successful run of cli_demo.py)
1.2 Running the Web UI
For a more visual experience, try the Web version:
- Start the Streamlit service:
cd /root/demo
streamlit run /root/demo/Tutorial/tools/streamlit_demo.py --server.address 127.0.0.1 --server.port 6006
- In a PowerShell window on your local machine, set up an SSH tunnel to forward port 6006 (replace 40335 with your own dev machine's SSH port if it differs):
ssh -CNg -L 6006:127.0.0.1:6006 root@ssh.intern-ai.org.cn -p 40335
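With the tunnel command above running, local port 6006 should forward to the Streamlit service on the dev machine. A small sketch to verify the forwarded port is reachable before opening the browser (the host and port are taken from the tunnel command; `port_open` is a helper defined here, not part of any library):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the SSH tunnel up, this should connect to the forwarded Streamlit port
print(port_open("127.0.0.1", 6006))
```

If this reports False, the tunnel is not up yet or the Streamlit service has not started.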
- Open a browser and visit http://localhost:6006.