Large Models from Beginner to Application: LangChain Quick Start [Rapidly Building a Chat Model]

LangChain is a framework for developing applications powered by language models. This article walks through how to quickly build a chat model with LangChain, covering installation and configuration, models, the LLM API, prompt templates, memory management, chain structures, and creating custom agents. It also shows how to build a chat conversation with LangChain, using examples to demonstrate the interaction flow of a chat model and the role memory plays in a conversation.

Series index: Large Models from Beginner to Application (master table of contents)



In the series "Natural Language Processing from Beginner to Application: LangChain Quick Start", we use the most concise possible language and examples to help readers get LangChain up and running quickly. After finishing this series, readers will have a general picture of LangChain and be able to use it in their own programs. Readers who want a deeper look at the individual LangChain modules can continue with the series "Natural Language Processing from Beginner to Application: LangChain". This article explains how to quickly develop a chat model with LangChain.

Chat models are a variant of language models. Although chat models use a language model under the hood, the interface they expose is somewhat different: instead of a "text in, text out" API, they expose an interface in which "chat messages" are the inputs and outputs. Chat model APIs are fairly new, so LangChain is still working out the right abstractions.

Getting Message Completions from a Chat Model

We can get a chat completion by passing one or more messages to the chat model; the response will also be a message. The message types currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage, where ChatMessage takes an arbitrary role parameter. Most of the time we only need to work with HumanMessage, AIMessage, and SystemMessage.

from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
chat = ChatOpenAI(temperature=0)

We can get a completion by passing in a single message:

chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])

Output:

AIMessage(content="J'aime programmer.", additional_kwargs={})
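
As an aside, the ChatMessage type mentioned above takes an explicit role string instead of using a dedicated message class. A minimal sketch (role="user" makes it behave like a HumanMessage; whether other role values are accepted depends on the model provider):

from langchain.schema import ChatMessage

# ChatMessage carries an explicit role; "user" is equivalent to HumanMessage
chat([ChatMessage(role="user", content="Translate this sentence from English to French. I love programming.")])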

We can also pass multiple messages to OpenAI's gpt-3.5-turbo and gpt-4 models:

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)

Output:

AIMessage(content="J'aime programmer.", additional_kwargs={})

We can go one step further and use generate to produce completions for multiple sets of messages. This returns an LLMResult whose generations carry an additional message field:

batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
 
result

Output:

LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})
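
To pull the individual replies back out of this LLMResult, we can iterate over result.generations: each inner list corresponds to one input message list, and each ChatGeneration holds both the plain text and the full AIMessage. A minimal sketch:

for generations in result.generations:
    # generations[0] is the first (and here only) candidate completion for that input
    print(generations[0].text)
    print(generations[0].message)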

We can also read the token usage from this LLMResult via token_usage:

result.llm_output['token_usage']

Output:

{'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}

Chat Prompt Templates

As with LLMs, we can use templating via MessagePromptTemplate. A ChatPromptTemplate can be built from one or more MessagePromptTemplates. We can call the ChatPromptTemplate's format_prompt method, which returns a PromptValue that can be converted to a string or to Message objects, depending on whether we want to use the formatted value as input to an LLM or to a chat model. For convenience, the templates expose a from_template method. Using it looks like this:

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)
 
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
 
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
 

Output:

AIMessage(content="J'aime programmer.", additional_kwargs={})
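
As mentioned above, format_prompt returns a PromptValue that can be rendered either way depending on the target model. A minimal sketch:

prompt_value = chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.")

# As one flattened string, suitable for a plain "text in, text out" LLM
prompt_value.to_string()

# As a list of SystemMessage/HumanMessage objects, suitable for a chat model
prompt_value.to_messages()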

Chains with Chat Models

The LLMChain we discussed earlier can also be used with chat models:

from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
 
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
 

Output:

"J'aime programmer."

Agents with Chat Models

Agents can also be used with chat models. We can initialize an agent backed by a chat model by using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type:

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
 
# First, load the language model that will control the agent
chat = ChatOpenAI(temperature=0)
 
# Next, load some tools to use. Note that the "llm-math" tool uses an LLM, so we need to pass one in
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
 
# Finally, initialize the agent with the tools, the language model, and the agent type we want to use
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
 
# Test the agent
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")

We will get the following output:

> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
 "action": "Search",
 "action_input": "Olivia Wilde boyfriend"
}
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought:I need to use a search engine to find Harry Styles' current age.
Action:
{
 "action": "Search",
 "action_input": "Harry Styles age"
}
Observation: 29 years
Thought:Now I need to calculate 29 raised to the 0.23 power.
Action:
{
 "action": "Calculator",
 "action_input": "29^0.23"
}
Observation: Answer: 2.169459462491557
Thought:I now know the final answer.
Final Answer: 2.169459462491557
> Finished chain.
'2.169459462491557'
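
Note that running this example requires credentials for both OpenAI and SerpAPI, typically supplied as environment variables before the model and tools are created. A sketch with placeholder values:

import os

# Placeholder values; substitute your own keys
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["SERPAPI_API_KEY"] = "..."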

Memory: Adding State to Chains and Agents

We can also use Memory with chains and agents initialized with a chat model. The main difference from Memory for LLMs is that, rather than compressing all previous messages into one string, we can keep them as distinct memory objects.

from langchain.prompts import (
    ChatPromptTemplate, 
    MessagesPlaceholder, 
    SystemMessagePromptTemplate, 
    HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
 
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])
llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
conversation.predict(input="Hi there!")
 
# -> 'Hello! How can I assist you today?'
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
 
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"
conversation.predict(input="Tell me about yourself.")
 
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
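
Because the memory was created with return_messages=True, the stored history comes back as a list of message objects rather than one concatenated string, which is what the MessagesPlaceholder in the prompt expects. A minimal sketch of inspecting it after the turns above (output shown roughly and truncated):

memory.load_memory_variables({})

# -> {'history': [HumanMessage(content='Hi there!', ...), AIMessage(content='Hello! How can I assist you today?', ...), ...]}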

