Preface
LCEL makes it easy to build complex chains from basic components, and it supports out-of-the-box functionality such as streaming, parallelism, and logging.
Basic example: prompt + model + output parser
pip install --upgrade --quiet langchain-core langchain-community langchain-openai
pip install -qU langchain-openai
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass()
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4")
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
output_parser = StrOutputParser()
chain = prompt | model | output_parser
chain.invoke({"topic": "ice cream"})
"Why don't ice creams ever get invited to parties?\n\nBecause they always drip when things heat up!"
Notice this line of code, where we use LCEL to piece these different components together into a single chain:
chain = prompt | model | output_parser
The | symbol is similar to the Unix pipe operator: it chains the different components together, feeding the output of one component as input to the next.
In this chain, user input is passed to the prompt template, the prompt template output is then passed to the model, and the model output is then passed to the output parser. Let's take a look at each component individually to really understand what is going on.
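To build intuition for what the pipe does, here is a minimal, simplified sketch in plain Python (an illustration only, not LangChain's actual Runnable implementation):
class Pipeable:
    """Toy stand-in for a chain component."""
    def __init__(self, func):
        self.func = func
    def __or__(self, other):
        # a | b returns a new component that feeds a's output into b
        return Pipeable(lambda x: other.func(self.func(x)))
    def invoke(self, x):
        return self.func(x)

add_one = Pipeable(lambda x: x + 1)
double = Pipeable(lambda x: x * 2)
chain = add_one | double
chain.invoke(3)  # (3 + 1) * 2 = 8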
Prompt
prompt is a BasePromptTemplate, which means it takes in a dictionary of template variables and produces a PromptValue. A PromptValue is a wrapper around a completed prompt that can be passed to either an LLM (which takes a string as input) or a ChatModel (which takes a sequence of messages as input). It can work with either language model type because it defines logic both for producing BaseMessages and for producing a string.
prompt_value = prompt.invoke({"topic": "ice cream"})
prompt_value
ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])
prompt_value.to_messages()
[HumanMessage(content='tell me a short joke about ice cream')]
prompt_value.to_string()
'Human: tell me a short joke about ice cream'
Model
The PromptValue is then passed to model. In this case our model is a ChatModel, which means it will output a BaseMessage.
message = model.invoke(prompt_value)
message
AIMessage(content="Why don't ice creams ever get invited to parties?\n\nBecause they always bring a melt down!")
If our model were an LLM, it would output a string instead.
from langchain_openai import OpenAI
llm = OpenAI(model="gpt-3.5-turbo-instruct")
llm.invoke(prompt_value)
'\n\nRobot: Why did the ice cream truck break down? Because it had a meltdown!'
Output parser
Finally, we pass the model output to output_parser, which is a BaseOutputParser, meaning it takes either a string or a BaseMessage as input. The StrOutputParser in particular simply converts any input into a string.
output_parser.invoke(message)
"Why did the ice cream go to therapy? \n\nBecause it had too many toppings and couldn't find its cone-fidence!"
The entire pipeline
input = {"topic": "ice cream"}
prompt.invoke(input)
# > ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])
(prompt | model).invoke(input)
# > AIMessage(content="Why did the ice cream go to therapy?\nBecause it had too many toppings and couldn't cone-trol itself!")
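Because the composed chain is itself a runnable, the out-of-the-box features mentioned in the preface apply to it directly. For example, streaming the same chain (a sketch assuming the API key setup above; chain.stream yields output chunks as the model produces them):
for chunk in chain.stream({"topic": "ice cream"}):
    print(chunk, end="", flush=True)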
RAG search example
pip install -qU langchain-openai
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass()
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-3.5-turbo-0125")
# Requires:
# pip install langchain docarray tiktoken
from langchain_community.vectorstores import DocArrayInMemorySearch
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai import OpenAIEmbeddings
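# Build a small in-memory vector store over two example sentences;
# the retriever will return the stored texts most similar to a query.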
vectorstore = DocArrayInMemorySearch.from_texts(
["harrison worked at kensho", "bears like to eat honey"],
embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
output_parser = StrOutputParser()
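# Prepare the prompt inputs in parallel: the retriever fills in "context",
# while RunnablePassthrough() passes the user's question through unchanged.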
setup_and_retrieval = RunnableParallel(
{"context": retriever, "question": RunnablePassthrough()}
)
chain = setup_and_retrieval | prompt | model | output_parser
chain.invoke("where did harrison work?")