Task type | Task | Est. time | Status |
---|---|---|---|
Challenge task | Customize an agent with Lagent, deploy and invoke it successfully via the Lagent Web Demo, and record the reproduction process with screenshots. | 45 mins | Done |
1. install
1.0 create conda env:
conda create -n lagent python=3.10 -y && conda activate lagent
1.1 install lmdeploy
pip install lmdeploy
- or, for CUDA 11.8:

```shell
export LMDEPLOY_VERSION=0.5.3 && export PYTHON_VERSION=310 && pip install \
  https://github.com/InternLM/lmdeploy/releases/download/v${LMDEPLOY_VERSION}/lmdeploy-${LMDEPLOY_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux2014_x86_64.whl \
  --extra-index-url https://download.pytorch.org/whl/cu118
```
1.2 git clone && install lagent

```shell
mkdir -p ~/code
[ ! -e ~/code/lagent ] && git clone https://github.com/InternLM/lagent.git ~/code/lagent
cd ~/code/lagent && pip install -e .
```
1.3 other dependencies
pip install termcolor==2.4.0 griffe==0.48.0 transformers==4.41.2
2. run demo
2.1 Default way (recommended): split frontend/backend (lmdeploy + internlm2_agent_web_demo.py)
2.1.1 Start the backend
conda activate lagent && lmdeploy serve api_server /share/new_models/Shanghai_AI_Laboratory/internlm2_5-7b-chat --model-name internlm2_5-7b-chat
- Because of port restrictions on InternStudio, the page cannot be opened this way (FastAPI shows a 404).
- We connect with a local vscode session instead and start the backend again.
- ^ The FastAPI page now renders normally.
- It shows GET 200 OK over HTTP/1.1, which also looks fine.
2.1.2 Start the frontend
conda activate lagent && cd ~/code/lagent && streamlit run examples/internlm2_agent_web_demo.py
- Opening the page shows an error; my first guess was a backend connection problem (perhaps because I installed the latest cu12 build of lmdeploy!):

```
KeyError: 'choices'
Traceback:
File "/conda/envs/lagent/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 85, in exec_func_with_error_handling
    result = func()
File "/conda/envs/lagent/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 576, in code_to_exec
    exec(code, module.__dict__)
File "/root/code/lagent/examples/internlm2_agent_web_demo.py", line 333, in <module>
    main()
File "/root/code/lagent/examples/internlm2_agent_web_demo.py", line 286, in main
    for agent_return in st.session_state['chatbot'].stream_chat(
File "/root/code/lagent/lagent/agents/internlm2_agent.py", line 303, in stream_chat
    for model_state, res, _ in self._llm.stream_chat(prompt, **kwargs):
File "/root/code/lagent/lagent/llms/lmd
```
- It turns out the model name here must be internlm2_5-7b-chat, not internlm2_5-chat-7b; after correcting it the error disappears!
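The failure mode is easy to reproduce offline: when the backend does not recognize the requested model name, it answers with an error payload instead of an OpenAI-style chunk containing a `choices` list, so indexing that key blindly raises exactly this KeyError. A minimal sketch (the `extract_text` helper is hypothetical, not the demo's actual code), assuming OpenAI-style streaming chunks:

```python
# Hypothetical helper, not lagent's actual code: guard the 'choices' lookup
# so a backend error payload produces a readable message instead of KeyError.

def extract_text(chunk: dict) -> str:
    """Pull generated text out of one streamed chunk (OpenAI-style)."""
    if 'choices' not in chunk:
        # e.g. the server answered {'error': {...}} for an unknown model name
        raise RuntimeError(f'backend returned no choices: {chunk}')
    return chunk['choices'][0]['delta'].get('content', '')

ok = {'choices': [{'delta': {'content': 'hello'}}]}
bad = {'error': {'message': 'model "internlm2_5-chat-7b" not found'}}

print(extract_text(ok))  # -> hello
try:
    extract_text(bad)
except RuntimeError as err:
    print(f'caught: {err}')
```

Seen this way, the KeyError is just the symptom; the real signal was the error payload caused by the misspelled model name.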
2.2 Run the combined frontend+backend demo (internlm2_agent_web_demo_hf.py)
- This demo file automatically loads (and downloads) a default model; if the model is not already in the cache it cannot finish normally or switch models, so edit the file /root/code/lagent/examples/internlm2_agent_web_demo_hf.py
- Just change the value of 模型路径 (model path) to the model id or local path you need:

```python
def setup_sidebar(self):
    """Setup the sidebar for model and plugin selection."""
    # model_name = st.sidebar.selectbox('模型选择:', options=['internlm'])
    model_name = st.sidebar.text_input('模型名称:', value='internlm2-chat-7b')
    meta_prompt = st.sidebar.text_area('系统提示词', value=META_CN)
    da_prompt = st.sidebar.text_area('数据分析提示词', value=INTERPRETER_CN)
    plugin_prompt = st.sidebar.text_area('插件提示词', value=PLUGIN_CN)
    model_path = st.sidebar.text_input(
        '模型路径:',
        value='/share/new_models/Shanghai_AI_Laboratory/internlm2_5-7b-chat')
```
- Note: this approach needs no separate backend, only this one demo, so shut the backend down first, then start:
conda activate lagent && cd ~/code/lagent && streamlit run examples/internlm2_agent_web_demo_hf.py
- ^ The backend is down, and the demo loads the model successfully!
- After changing the model name and selecting a tool, type hi and press Enter.
- You may hit a transformers version issue here; fix it with pip install transformers==4.41.2
- ^ It works (and it turns out the model name does not actually need to be changed for this demo).
- Enable arXiv and ask a question.
- Not very satisfied: the second and third results are clearly unrelated to the name ATSS.
- ^ Satisfied now.
3. Add another agent tool
3.0 intro
Customizing a tool with Lagent mainly takes the following steps:
- Inherit from the BaseAction class
- Implement the run method for a simple tool, or implement each sub-tool's functionality for a toolkit
- A simple tool's run method may optionally be decorated with tool_api; every sub-tool in a toolkit must be decorated with tool_api
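The steps above can be sketched with a toy stand-in (this decorator and base class are simplified illustrations of the pattern, NOT lagent's real BaseAction/tool_api implementation):

```python
# Toy illustration of the pattern above -- not lagent's real code.

def tool_api(func):
    """Mark a method as callable by the agent (lagent's real decorator
    also parses the docstring into a schema the LLM can read)."""
    func._is_tool_api = True
    return func

class BaseAction:
    def api_list(self):
        """Names of all methods decorated with @tool_api."""
        return [name for name in dir(self)
                if callable(getattr(self, name, None))
                and getattr(getattr(self, name), '_is_tool_api', False)]

class WordCounter(BaseAction):  # hypothetical simple tool with a run method
    @tool_api
    def run(self, text: str) -> dict:
        return {'words': len(text.split())}

tool = WordCounter()
print(tool.api_list())                 # -> ['run']
print(tool.run('hello lagent world'))  # -> {'words': 3}
```

The real tool_api does considerably more (argument parsing, schema export), but the discovery-by-decorator idea is the same.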
3.1 Hands-on
- Add the file:
touch /root/code/lagent/lagent/actions/magicmaker.py
```python
import json

import requests

from lagent.actions.base_action import BaseAction, tool_api
from lagent.actions.parser import BaseParser, JsonParser
from lagent.schema import ActionReturn, ActionStatusCode


class MagicMaker(BaseAction):
    styles_option = [
        'dongman',  # 动漫
        'guofeng',  # 国风
        'xieshi',   # 写实
        'youhua',   # 油画
        'manghe',   # 盲盒
    ]
    aspect_ratio_options = [
        '16:9', '4:3', '3:2', '1:1', '2:3', '3:4', '9:16'
    ]

    def __init__(self, style='guofeng', aspect_ratio='4:3'):
        super().__init__()
        if style in self.styles_option:
            self.style = style
        else:
            raise ValueError(f'The style must be one of {self.styles_option}')
        if aspect_ratio in self.aspect_ratio_options:
            self.aspect_ratio = aspect_ratio
        else:
            raise ValueError(f'The aspect ratio must be one of {aspect_ratio}')

    @tool_api
    def generate_image(self, keywords: str) -> dict:
        """Run magicmaker and get the generated image according to the keywords.

        Args:
            keywords (:class:`str`): the keywords to generate image

        Returns:
            :class:`dict`: the generated image
                * image (str): path to the generated image
        """
        try:
            response = requests.post(
                url='https://magicmaker.openxlab.org.cn/gw/edit-anything/api/v1/bff/sd/generate',
                data=json.dumps({
                    "official": True,
                    "prompt": keywords,
                    "style": self.style,
                    "poseT": False,
                    "aspectRatio": self.aspect_ratio
                }),
                headers={'content-type': 'application/json'})
        except Exception as exc:
            return ActionReturn(
                errmsg=f'MagicMaker exception: {exc}',
                state=ActionStatusCode.HTTP_ERROR)
        image_url = response.json()['data']['imgUrl']
        return {'image': image_url}
```
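For debugging without Streamlit or the agent, the request body this tool posts can be previewed offline (this only builds the JSON payload; no network call is made, and the field names simply mirror the ones in the code above):

```python
import json

# Build the same JSON body MagicMaker posts to its endpoint, offline.
payload = json.dumps({
    "official": True,
    "prompt": "a panda drinking tea",  # the keywords argument
    "style": "guofeng",                # default style
    "poseT": False,
    "aspectRatio": "4:3",              # default aspect ratio
})
print(payload)
```

Pasting this payload into a curl request against the endpoint is a quick way to check whether the service itself is reachable before blaming the agent.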
- Edit:
vi /root/code/lagent/examples/internlm2_agent_web_demo.py

```diff
 from lagent.actions import ActionExecutor, ArxivSearch, IPythonInterpreter
+from lagent.actions.magicmaker import MagicMaker
 from lagent.agents.internlm2_agent import INTERPRETER_CN, META_CN, PLUGIN_CN, Internlm2Agent, Internlm2Protocol
 ...
     action_list = [
         ArxivSearch(),
+        MagicMaker(),
     ]
```
- Since the file we just modified is internlm2_agent_web_demo.py, this is the split frontend/backend setup, so start the backend and frontend as described above.
3.2 Use the agent
- Start by saying hi, to confirm the model loaded correctly and that the model name and URI were entered correctly.
- Then ask an arxiv question.
- Then test the multiple agents.
- ^ Not very satisfied.
- This is much better.
- ^ The training data leans so heavily Chinese that even a cartoon Snoopy comes out with a Chinatown flavor.