LangChain v0.2 documentation translation: 3.13 How to configure runtime chain internals

How to configure runtime chain internals

Sometimes you may want to experiment with, or even expose to your end users, multiple different ways of doing things within a chain. This can include adjusting parameters such as temperature, or even swapping out one model for another. To make this experience as easy as possible, we have defined two methods:

  • A configurable_fields method. This lets you configure particular fields of a runnable.

    • This is related to the runnable's .bind method, but allows you to specify parameters for a given step in a chain at runtime, rather than ahead of time.
  • A configurable_alternatives method. With this method, you can list alternatives for any particular runnable and swap them in at runtime.
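Before diving into the real API, here is a minimal plain-Python sketch of the idea behind runtime configuration. This is not LangChain's implementation; the ToyRunnable class and its field names are invented purely for illustration: defaults live on the object, and with_config layers per-call overrides on top without mutating the original.

```python
# Toy sketch (NOT LangChain's implementation) of runtime-configurable runnables:
# defaults are set at construction time, and with_config returns a new object
# carrying overrides that win at invoke time.

class ToyRunnable:
    def __init__(self, temperature=0.0, config=None):
        self.temperature = temperature
        self._config = config or {}

    def with_config(self, configurable):
        # Return a new object with the overrides merged in; the original is untouched.
        merged = {**self._config, **configurable}
        return ToyRunnable(self.temperature, merged)

    def invoke(self, prompt):
        # A runtime override takes precedence over the construction-time default.
        temp = self._config.get("llm_temperature", self.temperature)
        return f"{prompt!r} answered at temperature {temp}"

model = ToyRunnable(temperature=0.0)
hot = model.with_config({"llm_temperature": 0.9})
print(model.invoke("pick a number"))  # uses the default, 0.0
print(hot.invoke("pick a number"))    # uses the override, 0.9
```

The key design point, which the real with_config shares, is that configuration produces a new object rather than mutating the old one, so the base chain can be reused safely.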

Configurable fields

Let's walk through an example that configures a chat model field, such as temperature, at runtime:

# Install or upgrade the langchain and langchain-openai packages
%pip install --upgrade --quiet langchain langchain-openai

# Import the required libraries
import os
from getpass import getpass

# Set the OPENAI_API_KEY environment variable
os.environ["OPENAI_API_KEY"] = getpass()

# Import PromptTemplate and ConfigurableField from langchain_core
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import ConfigurableField
# Import ChatOpenAI from langchain_openai
from langchain_openai import ChatOpenAI

# Create a ChatOpenAI instance with the temperature initially set to 0,
# exposing temperature as a runtime-configurable field
model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)

# Ask the model to pick a random number
model.invoke("pick a random number")
AIMessage(content='17', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 11, 'total_tokens': 12}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba26a0da-0a69-4533-ab7f-21178a73d303-0')

Above, we defined temperature as a ConfigurableField that can be set at runtime. To do so, we use the with_config method like this:

# Set the temperature via with_config, then ask for a random number
model.with_config(configurable={"llm_temperature": 0.9}).invoke("pick a random number")
AIMessage(content='12', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 11, 'total_tokens': 12}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba8422ad-be77-4cb1-ac45-ad0aae74e3d9-0')

Note that the llm_temperature entry passed in the dict has the same id as the ConfigurableField.

We can also apply this so that it affects only one step of a chain:

# Create a PromptTemplate from a template string
prompt = PromptTemplate.from_template("Pick a random number above {x}")
# Build the chain by piping the prompt into the model
chain = prompt | model

# Ask the chain for a random number above 0
chain.invoke({"x": 0})
AIMessage(content='27', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 14, 'total_tokens': 15}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ecd4cadd-1b72-4f92-b9a0-15e08091f537-0')
# Set the temperature via with_config, then ask for a random number above 0
chain.with_config(configurable={"llm_temperature": 0.9}).invoke({"x": 0})
AIMessage(content='35', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 14, 'total_tokens': 15}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-a916602b-3460-46d3-a4a8-7c926ec747c0-0')

Using HubRunnables

This is useful for switching prompts:

# Import HubRunnable
from langchain.runnables.hub import HubRunnable

# Create a HubRunnable that pulls a prompt from the LangChain Hub
prompt = HubRunnable("rlm/rag-prompt").configurable_fields(
    owner_repo_commit=ConfigurableField(
        id="hub_commit",
        name="Hub Commit",
        description="The Hub commit to pull from",
    )
)

# Invoke the prompt
prompt.invoke({"question": "foo", "context": "bar"})
ChatPromptValue(messages=[HumanMessage(content="You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\nQuestion: foo \nContext: bar \nAnswer:")])
# Set the Hub commit via with_config, then invoke the prompt
prompt.with_config(configurable={"hub_commit": "rlm/rag-prompt-llama"}).invoke(
    {"question": "foo", "context": "bar"}
)
ChatPromptValue(messages=[HumanMessage(content="[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \nQuestion: foo \nContext: bar \nAnswer: [/INST]")])

Configurable alternatives

The configurable_alternatives() method allows us to swap out steps in a chain with alternatives. Below, we swap out one chat model for another:

# Install the langchain-anthropic package
%pip install --upgrade --quiet langchain-anthropic

# Import the required libraries and set the ANTHROPIC_API_KEY environment variable
import os
from getpass import getpass

os.environ["ANTHROPIC_API_KEY"] = getpass()

# Import ChatAnthropic
from langchain_anthropic import ChatAnthropic
# Create a ChatAnthropic instance with the temperature initially set to 0
llm = ChatAnthropic(
    model="claude-3-haiku-20240307", temperature=0
).configurable_alternatives(
    # Give this field an id
    ConfigurableField(id="llm"),
    # Set a default key
    default_key="anthropic",
    # Add a new option named openai, equal to ChatOpenAI()
    openai=ChatOpenAI(),
    # Add a new option named gpt4, equal to ChatOpenAI(model="gpt-4")
    gpt4=ChatOpenAI(model="gpt-4")
)
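Conceptually, configurable_alternatives keeps a table of named runnables and picks one by key at call time. The following is a toy stand-in, not the real API; the ToyAlternatives class and the lambda "models" are invented for illustration:

```python
# Toy sketch (NOT the real LangChain API) of what configurable_alternatives does:
# hold a mapping of names to runnables, use the default key unless the caller's
# config selects a different one.

class ToyAlternatives:
    def __init__(self, default_key, alternatives):
        self.default_key = default_key
        self.alternatives = alternatives  # name -> callable standing in for a model

    def invoke(self, prompt, config=None):
        # Look up which alternative to run; fall back to the default key.
        key = (config or {}).get("llm", self.default_key)
        return self.alternatives[key](prompt)

llm = ToyAlternatives(
    default_key="anthropic",
    alternatives={
        "anthropic": lambda p: f"anthropic says: {p}",
        "openai": lambda p: f"openai says: {p}",
    },
)
print(llm.invoke("tell me a joke"))                     # default: anthropic
print(llm.invoke("tell me a joke", {"llm": "openai"}))  # switched at runtime
```

The real mechanism differs in the details (the selection travels in the config's "configurable" dict, keyed by the ConfigurableField id), but the lookup-by-key shape is the same.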

# Create a PromptTemplate from a template string
prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
# Build the chain by piping the prompt into the llm
chain = prompt | llm

# By default, this calls Anthropic
chain.invoke({"topic": "bears"})
AIMessage(content="Here's a bear joke for you:\n\nWhy don't bears wear socks? \nBecause they have bear feet!\n\nHow's that? I tried to come up with a simple, silly pun-based joke about bears. Puns and wordplay are a common way to create humorous bear jokes. Let me know if you'd like to hear another one!", response_metadata={'id': 'msg_018edUHh5fUbWdiimhrC3dZD', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 80}}, id='run-775bc58c-28d7-4e6b-a268-48fa6661f02f-0')
# We can use `.with_config(configurable={"llm": "openai"})` to specify an llm to use
chain.with_config(configurable={"llm": "openai"}).invoke({"topic": "bears"})
AIMessage(content="Why don't bears like fast food?\n\nBecause they can't catch it!", response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 13, 'total_tokens': 28}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-7bdaa992-19c9-4f0d-9a0c-1f326bc992d4-0')
# If we use the `default_key` then it uses the default
chain.with_config(configurable={"llm": "anthropic"}).invoke({"topic": "bears"})
AIMessage(content="Here's a bear joke for you:\n\nWhy don't bears wear socks? \nBecause they have bear feet!\n\nHow's that? I tried to come up with a simple, silly pun-based joke about bears. Puns and wordplay are a common way to create humorous bear jokes. Let me know if you'd like to hear another one!", response_metadata={'id': 'msg_01BZvbmnEPGBtcxRWETCHkct', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 80}}, id='run-59b6ee44-a1cd-41b8-a026-28ee67cdd718-0')

Using prompts

We can do a similar thing, but alternate between prompts:

# Create a ChatAnthropic instance
llm = ChatAnthropic(model="claude-3-haiku-20240307", temperature=0)
# Create a PromptTemplate with configurable alternatives
prompt = PromptTemplate.from_template(
    "Tell me a joke about {topic}"
).configurable_alternatives(
    ConfigurableField(id="prompt"),
    default_key="joke",
    poem=PromptTemplate.from_template("Write a short poem about {topic}")
)

# Build the chain by piping the prompt into the llm
chain = prompt | llm

# By default, this writes a joke
chain.invoke({"topic": "bears"})
AIMessage(content="Here's a bear joke for you:\n\nWhy don't bears wear socks? \nBecause they have bear feet!", response_metadata={'id': 'msg_01DtM1cssjNFZYgeS3gMZ49H', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 28}}, id='run-8199af7d-ea31-443d-b064-483693f2e0a1-0')
# We can configure it to write a poem
chain.with_config(configurable={"prompt": "poem"}).invoke({"topic": "bears"})
AIMessage(content="Here is a short poem about bears:\n\nMajestic bears, strong and true,\nRoaming the forests, wild and free.\nPowerful paws, fur soft and brown,\nCommanding respect, nature's crown.\n\nForaging for berries, fishing streams,\nProtecting their young, fierce and keen.\nMighty bears, a sight to behold,\nGuardians of the wilderness, untold.\n\nIn the wild they reign supreme,\nEmbodying nature's grand theme.\nBears, a symbol of strength and grace,\nCaptivating all who see their face.", response_metadata={'id': 'msg_01Wck3qPxrjURtutvtodaJFn', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 134}}, id='run-69414a1e-51d7-4bec-a307-b34b7d61025e-0')

Using prompts and LLMs

We can also configure multiple things at once!

# Create a ChatAnthropic instance with configurable alternatives
llm = ChatAnthropic(
    model="claude-3-haiku-20240307", temperature=0
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
    gpt4=ChatOpenAI(model="gpt-4")
)

# Create a PromptTemplate with configurable alternatives
prompt = PromptTemplate.from_template(
    "Tell me a joke about {topic}"
).configurable_alternatives(
    ConfigurableField(id="prompt"),
    default_key="joke",
    poem=PromptTemplate.from_template("Write a short poem about {topic}")
)

# Build the chain by piping the prompt into the llm
chain = prompt | llm

# We can configure it to write a poem with OpenAI
chain.with_config(configurable={"prompt": "poem", "llm": "openai"}).invoke({"topic": "bears"})
AIMessage(content="In the forest deep and wide,\nBears roam with grace and pride.\nWith fur as dark as night,\nThey rule the land with all their might.\n\nIn winter's chill, they hibernate,\nIn spring they emerge, hungry and great.\nWith claws sharp and eyes so keen,\nThey hunt for food, fierce and lean.\n\nBut beneath their tough exterior,\nLies a gentle heart, warm and superior.\nThey love their cubs with all their might,\nProtecting them through day and night.\n\nSo let us admire these majestic creatures,\nIn awe of their strength and features.\nFor in the wild, they reign supreme,\nThe mighty bears, a timeless dream.", response_metadata={'token_usage': {'completion_tokens': 133, 'prompt_tokens': 13, 'total_tokens': 146}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-5eec0b96-d580-49fd-ac4e-e32a0803b49b-0')
# We can always just configure only one if we want
chain.with_config(configurable={"llm": "openai"}).invoke({"topic": "bears"})
AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 13, 'total_tokens': 26}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-c1b14c9c-4988-49b8-9363-15bfd479973a-0')

Saving configurations

We can also easily save configured chains as their own objects:

# Save a configuration that tells jokes with OpenAI
openai_joke = chain.with_config(configurable={"llm": "openai"})

# Invoke the chain with the saved configuration
openai_joke.invoke({"topic": "bears"})
AIMessage(content="Why did the bear break up with his girlfriend? \nBecause he couldn't bear the relationship anymore!", response_metadata={'token_usage': {'completion_tokens': 20, 'prompt_tokens': 13, 'total_tokens': 33}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-391ebd55-9137-458b-9a11-97acaff6a892-0')
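Because with_config returns an ordinary object, saved variants are fully independent of the chain they came from. Here is a plain-Python sketch of that property; the ToyChain class is invented for illustration and is not the real chain type:

```python
# Toy sketch (NOT a real LangChain chain) showing that a saved configuration
# is an independent object: configuring a variant never changes the original.

class ToyChain:
    def __init__(self, llm="anthropic"):
        self.llm = llm

    def with_config(self, configurable):
        # Produce a new chain with the selected alternative baked in.
        return ToyChain(configurable.get("llm", self.llm))

    def invoke(self, inputs):
        return f"{self.llm} joke about {inputs['topic']}"

chain = ToyChain()
openai_joke = chain.with_config({"llm": "openai"})

print(openai_joke.invoke({"topic": "bears"}))  # openai joke about bears
print(chain.invoke({"topic": "bears"}))        # anthropic joke about bears
```

In practice this means you can keep several preconfigured variants side by side (for example in a dict keyed by use case) and dispatch between them without touching the base chain.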

Summary:

This article covered how to configure the internal steps of a chain at runtime in LangChain. LangChain is a Python framework for building applications powered by large language models, and is particularly well suited to composing and configuring complex processing chains. With the configurable_fields and configurable_alternatives methods, users can dynamically adjust model parameters or swap out components of a chain at runtime, improving flexibility and customizability.
