Basics
- The difference between Agent and AgentExecutor
- The difference between AgentExecutor.from_agent_and_tools() and AgentExecutor()
- The difference between PromptTemplate and ChatPromptTemplate (see the sketch after this list)
- How to make an LLM output JSON
- How to make an LLM output a custom format
- The full flow of an Agent run
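A minimal sketch of the PromptTemplate vs ChatPromptTemplate question above; the translation prompt and messages here are made up for illustration.
from langchain_core.prompts import PromptTemplate, ChatPromptTemplate

# PromptTemplate renders a single string, suited to plain text-completion models
text_prompt = PromptTemplate.from_template("Translate into French: {text}")
print(text_prompt.format(text="Good morning"))
# -> 'Translate into French: Good morning'

# ChatPromptTemplate renders a list of role-tagged messages, suited to chat models
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful translator."),
    ("human", "Translate into French: {text}"),
])
print(chat_prompt.format_messages(text="Good morning"))
# -> [SystemMessage(...), HumanMessage(...)]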
LangChain library functions
Pydantic: ensures data conforms to the expected format and structure (a minimal sketch follows)
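A minimal sketch of that idea, using the same pydantic_v1 shim as the code below; the Poet model and its fields are hypothetical.
from langchain_core.pydantic_v1 import BaseModel, Field, ValidationError

# Hypothetical model declaring the expected shape of a record
class Poet(BaseModel):
    name: str = Field(description="the poet's name")
    year_born: int = Field(description="year of birth")

print(Poet(name="Li Bai", year_born=701))        # valid data passes
try:
    Poet(name="Li Bai", year_born="unknown")     # wrong type is rejected
except ValidationError as err:
    print(err)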
Issues
- Unhashable Type Tool when using custom tools
JSON-formatted output
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import JsonOutputParser
from pprint import pprint
import os

# Read the API key from the environment (set ZHIPUAI_API_KEY beforehand)
llm = ChatZhipuAI(
    model="glm-4",
    api_key=os.environ["ZHIPUAI_API_KEY"],
)

# Pydantic model describing the expected JSON structure
class Ans(BaseModel):
    NAME: list = Field(description="names of famous poets")
    POEM: str = Field(description="a poem")

# JsonOutputParser returns a plain Python dict parsed from the model's JSON output
parser = JsonOutputParser(pydantic_object=Ans)

# get_format_instructions() injects the schema derived from Ans into the prompt
prompt = PromptTemplate(
    template="{format_instructions}\n{query}",
    input_variables=["query"],
    partial_variables={
        "format_instructions": parser.get_format_instructions()},
)

# Chain: prompt -> LLM -> JSON parser
model = prompt | llm | parser
print(model.invoke({
    "query": "Famous poets and their poems, please answer in Chinese"}))
JSON-based information retrieval
V1.0
import os
import json
import logging
from openai import AzureOpenAI
from langchain_community.chat_models import ChatZhipuAI
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.prompts import PromptTemplate, ChatPromptTemplate

# Write full debug output (including chain/agent traces) to a log file
logging.basicConfig(filename="LangChain.log",
                    level=logging.DEBUG)