1. Introduction
The earlier posts in this series were originally part of the QuickStart in the official documentation; they have since been moved into LangGraph Basics and split into six posts (a split with little real benefit):
LangGraph (1): Building a Chatbot
LangGraph (2): Adding Tools
LangGraph (3): Adding Memory
LangGraph (4): Adding Human-in-the-Loop Control
LangGraph (5): Customizing State
LangGraph (6): Time Travel
2. Initializing the LLM
from langchain.chat_models import init_chat_model

llm = init_chat_model("deepseek:deepseek-chat")
3. Building an Augmented LLM
LLMs have augmentations that support building workflows and agents, including structured output and tool calling.
An example:
from pydantic import BaseModel, Field

class SearchQuery(BaseModel):
    search_query: str = Field(None, description="Query that is optimized for web search.")
    justification: str = Field(None, description="Why this query is relevant to the user's request.")

structured_llm = llm.with_structured_output(SearchQuery)
output = structured_llm.invoke("How does Calcium CT score relate to high cholesterol?")
print(output)

def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b

llm_with_tools = llm.bind_tools([multiply])
msg = llm_with_tools.invoke("What is 2 times 3?")
print(msg.tool_calls)
The output:
search_query='Calcium CT score relationship with high cholesterol' justification='To find information on how a Calcium CT score (a measure of coronary artery calcium) is related to high cholesterol levels, which is a risk factor for cardiovascular disease.'
[{'name': 'multiply', 'args': {'a': 2, 'b': 3}, 'id': 'call_0_c8ab0424-0b42-412e-93e9-f71dabcb54d8', 'type': 'tool_call'}]
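Under the hood, `bind_tools` turns each Python function into a JSON-schema-style tool description that is sent to the model; the model then replies with a `tool_calls` entry like the one above rather than executing the function itself. A simplified stdlib sketch of that conversion (illustrative only, not LangChain's actual code):

```python
import inspect

def function_to_schema(fn):
    """Build a minimal JSON-schema-style tool description from a function signature."""
    type_names = {int: "integer", float: "number", str: "string", bool: "boolean"}
    props = {
        name: {"type": type_names.get(param.annotation, "string")}
        for name, param in inspect.signature(fn).parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": list(props)},
    }

def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b

schema = function_to_schema(multiply)
print(schema["name"], schema["parameters"]["required"])  # multiply ['a', 'b']
```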
4. Prompt Chaining
In a prompt chain, each LLM call processes the output of the previous call.
The code to build this workflow:
from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from IPython.display import Image, display

class State(TypedDict):
    topic: str
    joke: str
    improved_joke: str
    final_joke: str

def generate_joke(state: State):
    msg = llm.invoke(f"Write a short joke about {state['topic']}")
    return {"joke": msg.content}

def check_punchline(state: State):
    if "?" in state["joke"] or "!" in state["joke"]:
        return "Pass"
    return "Fail"

def improve_joke(state: State):
    msg = llm.invoke(f"Make this joke funnier by adding wordplay: {state['joke']}")
    return {"improved_joke": msg.content}

def polish_joke(state: State):
    msg = llm.invoke(f"Add a surprising twist to this joke: {state['improved_joke']}")
    return {"final_joke": msg.content}
workflow = StateGraph(State)
workflow.add_node("generate_joke", generate_joke)
workflow.add_node("improve_joke", improve_joke)
workflow.add_node("polish_joke", polish_joke)
workflow.add_edge(START, "generate_joke")
workflow.add_conditional_edges(
    "generate_joke", check_punchline, {"Fail": "improve_joke", "Pass": END}
)
workflow.add_edge("improve_joke", "polish_joke")
workflow.add_edge("polish_joke", END)
chain = workflow.compile()
display(Image(chain.get_graph().draw_mermaid_png()))
state = chain.invoke({"topic": "cats"})
print("Initial joke:")
print(state["joke"])
print("\n--- --- ---\n")
if "improved_joke" in state:
    print("Improved joke:")
    print(state["improved_joke"])
    print("\n--- --- ---\n")
    print("Final joke:")
    print(state["final_joke"])
else:
    print("Joke failed quality gate - no punchline detected!")
The output:
Initial joke:
Sure! Here's a purr-fectly short cat joke for you:
**Why don’t cats play poker in the wild?**
*Because there are too many cheetahs!* 🐱♠️
Hope that gives you a *meow*-ment of laughter! 😸
--- --- ---
Joke failed quality gate - no punchline detected!
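Stripped of the graph machinery, this chain is just sequential calls with a quality gate between them. A plain-Python sketch using a stubbed model, so it runs offline (`fake_llm` is a stand-in for `llm.invoke(...).content`, not a real API):

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for llm.invoke(...).content; returns a canned joke."""
    return "Why did the cat sit on the laptop? To keep an eye on the mouse!"

def run_chain(topic: str) -> dict:
    state = {"topic": topic}
    state["joke"] = fake_llm(f"Write a short joke about {topic}")
    # Gate: the same check as check_punchline above.
    if "?" in state["joke"] or "!" in state["joke"]:
        return state  # "Pass" -> END, later steps are skipped
    state["improved_joke"] = fake_llm(f"Make this joke funnier: {state['joke']}")
    state["final_joke"] = fake_llm(f"Add a twist: {state['improved_joke']}")
    return state

print(run_chain("cats")["joke"])
```

The canned joke contains a "?", so the gate returns early and `improved_joke` is never written, which mirrors the run result shown above.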
5. Parallelization
With parallelization, multiple LLM calls work on a task simultaneously.
The code to build this workflow:
class State(TypedDict):
    topic: str
    joke: str
    story: str
    poem: str
    combined_output: str

def generate_joke(state: State):
    msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"joke": msg.content}

def generate_story(state: State):
    msg = llm.invoke(f"Write a story about {state['topic']}")
    return {"story": msg.content}

def generate_poem(state: State):
    msg = llm.invoke(f"Write a poem about {state['topic']}")
    return {"poem": msg.content}

def aggregator(state: State):
    combined = f"Here's a story, joke, and poem about {state['topic']}!\n\n"
    combined += f"Story:\n{state['story']}\n\n"
    combined += f"Joke:\n{state['joke']}\n\n"
    combined += f"Poem:\n{state['poem']}"
    return {"combined_output": combined}
parallel_builder = StateGraph(State)
parallel_builder.add_node("generate_story", generate_story)
parallel_builder.add_node("generate_joke", generate_joke)
parallel_builder.add_node("generate_poem", generate_poem)
parallel_builder.add_node("aggregator", aggregator)
parallel_builder.add_edge(START, "generate_story")
parallel_builder.add_edge(START, "generate_joke")
parallel_builder.add_edge(START, "generate_poem")
parallel_builder.add_edge("generate_story", "aggregator")
parallel_builder.add_edge("generate_joke", "aggregator")
parallel_builder.add_edge("generate_poem", "aggregator")
parallel_builder.add_edge("aggregator", END)
parallel_workflow = parallel_builder.compile()
display(Image(parallel_workflow.get_graph().draw_mermaid_png()))
state = parallel_workflow.invoke({ "topic": "cats" })
print(state["combined_output"])
The output:
Here's a story, joke, and poem about cats!
Story:
**The Secret Kingdom of the Moonlit Cats**
In the quiet town of Willowbrook, where the streets were lined with cobblestones and lanterns glowed softly at night, there existed a secret known only to the cats.
Every evening, when the moon rose high and humans drifted into dreams, the cats of Willowbrook gathered in the hidden garden behind Old Miss Hattie’s house. There, beneath the silver light, they whispered in a language only they understood—soft meows, flicking tails, and knowing glances.
Their leader was a sleek black tom named Orion, with eyes like twin embers. He had once been a house cat, but now he ruled over the Moonlit Cats, guardians of forgotten mysteries.
One night, a small, scruffy kitten named Pip stumbled upon their meeting. Wide-eyed and trembling, she had been abandoned by her humans and had nowhere to go.
“Who are you?” Orion demanded, his tail flicking.
“I-I’m Pip,” she squeaked. “I saw the lights… and heard the whispers.”
The other cats murmured, some hissing in suspicion. But an elderly tabby named Mistral stepped forward. “She is alone, Orion. The code of the Moonlit Cats says we protect our own.”
Orion studied Pip, then gave a slow nod. “Very well. But you must prove yourself. Tonight, we hunt the Shadow Mice—ghosts of the garden that steal our secrets.”
Pip’s heart pounded, but she lifted her chin. “I’ll do it.”
With Orion leading, the cats slipped through the garden, their paws silent on the dewy grass. The Shadow Mice were swift, darting between the flowers like smoke. Pip, though small, was quick. She pounced, her tiny claws snagging one by the tail. The mouse dissolved into mist, leaving behind a shimmering silver acorn—a lost memory.
Orion purred in approval. “You have the heart of a Moonlit Cat.”
From that night on, Pip trained with the others—learning to read the stars, to listen to the wind’s secrets, and to guard Willowbrook from unseen dangers. And though humans passed by the garden without a second glance, the cats knew the truth: they were the keepers of magic, the silent watchers of the night.
And so, beneath every full moon, if you listened very closely, you might hear the faintest chorus of purrs—a song of belonging, adventure, and the endless mysteries of the Moonlit Cats.
**The End.** 🐾🌙
Joke:
Sure! Here's a purr-fect cat joke for you:
**Why don’t cats play poker in the wild?**
*Because there are too many cheetahs!* 🐆😹
(Get it? Cheetahs sound like "cheaters," and they're wild cats... okay, I'll see myself out.)
Poem:
**Whiskers and Grace**
Oh, little hunter, sleek and sly,
With golden eyes that pierce the sky.
You stretch and yawn, then pounce with glee—
A storm of paws, so wild, so free.
Your velvet ears twitch at the sound
Of rustling leaves or mice abound.
You arch your back, your tail stands high—
A regal pose, a king’s proud eye.
At night you prowl in shadows deep,
While mortals slumber, lost in sleep.
Yet come the dawn, you softly creep
To curl beside me, warm and sweet.
Oh, creature wrapped in mystery,
Both fierce and fragile, light as breeze—
You rule my heart with quiet art,
A tiny lion, soft of heart.
So purr and play, my feline friend,
On love and sunbeams we depend.
For in your gaze, the world is right—
A flick of tail, a blink of light.
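Because the three generator nodes have no edges between them, LangGraph runs them in the same step. The fan-out/fan-in shape is the same idea as running independent calls concurrently and then aggregating, sketched here with the stdlib (stub generators stand in for the LLM calls; this is not how LangGraph schedules internally):

```python
from concurrent.futures import ThreadPoolExecutor

def generate(kind: str, topic: str) -> str:
    """Stub for an LLM call; returns a placeholder string."""
    return f"A {kind} about {topic}"

def fan_out_fan_in(topic: str) -> str:
    kinds = ("story", "joke", "poem")
    # Fan-out: submit all three tasks at once.
    with ThreadPoolExecutor() as pool:
        futures = {kind: pool.submit(generate, kind, topic) for kind in kinds}
        results = {kind: fut.result() for kind, fut in futures.items()}
    # Fan-in: the aggregator sees every branch's output.
    return "\n\n".join(f"{k.title()}:\n{v}" for k, v in results.items())

print(fan_out_fan_in("cats"))
```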
6. Routing
Routing classifies an input and directs it to a follow-up task.
The code to build this workflow:
from typing import Literal

from langchain_core.messages import HumanMessage, SystemMessage

class Route(BaseModel):
    step: Literal["poem", "story", "joke"] = Field(
        None, description="The next step in the routing process"
    )

router_llm = llm.with_structured_output(Route)

class State(TypedDict):
    input: str
    decision: str
    output: str

def generate_story(state: State):
    result = llm.invoke(state["input"])
    return {"output": result.content}

def generate_joke(state: State):
    result = llm.invoke(state["input"])
    return {"output": result.content}

def generate_poem(state: State):
    result = llm.invoke(state["input"])
    return {"output": result.content}

def router(state: State):
    decision = router_llm.invoke(
        [
            SystemMessage(
                content="Route the input to story, joke, or poem based on the user's request."
            ),
            HumanMessage(content=state["input"]),
        ]
    )
    return {"decision": decision.step}

def route_decision(state: State):
    match state["decision"]:
        case "poem":
            return "generate_poem"
        case "story":
            return "generate_story"
        case "joke":
            return "generate_joke"
    return None
router_builder = StateGraph(State)
router_builder.add_node("generate_story", generate_story)
router_builder.add_node("generate_joke", generate_joke)
router_builder.add_node("generate_poem", generate_poem)
router_builder.add_node("router", router)
router_builder.add_edge(START, "router")
router_builder.add_conditional_edges(
    "router",
    route_decision,
    {
        "generate_story": "generate_story",
        "generate_joke": "generate_joke",
        "generate_poem": "generate_poem",
    },
)
router_builder.add_edge("generate_story", END)
router_builder.add_edge("generate_joke", END)
router_builder.add_edge("generate_poem", END)
router_workflow = router_builder.compile()
display(Image(router_workflow.get_graph().draw_mermaid_png()))
state = router_workflow.invoke({"input": "Write me a joke about cats"})
print(state["output"])
The output:
Sure! Here's a purr-fect cat joke for you:
**Why don’t cats play poker in the jungle?**
*Because there are too many cheetahs!* 🐆😹
Let me know if you want more—I've got a *litter* of them! 😸
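The router itself is a classify-then-dispatch step: the structured-output call picks a label, and a lookup table maps the label to a handler. The same shape in plain Python, with a keyword-based classifier standing in for `router_llm` (an offline illustration, not the LLM-backed router above):

```python
def classify(request: str) -> str:
    """Stub router: pick a label from keywords instead of an LLM call."""
    for label in ("joke", "story", "poem"):
        if label in request.lower():
            return label
    return "story"  # fallback label

def route(request: str) -> str:
    # The dispatch table plays the role of add_conditional_edges' mapping.
    handlers = {
        "joke": lambda r: f"Joke handler got: {r}",
        "story": lambda r: f"Story handler got: {r}",
        "poem": lambda r: f"Poem handler got: {r}",
    }
    return handlers[classify(request)](request)

print(route("Write me a joke about cats"))
```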
7. Orchestrator-Worker
In the orchestrator-worker pattern, an orchestrator breaks the task down and delegates each subtask to a worker.
7.1 Building the Orchestrator
Build the orchestrator:
from typing import Annotated
import operator

class Section(BaseModel):
    name: str = Field(description="Name of this section of the report")
    description: str = Field(description="Brief overview of the main topics and concepts to be covered in this section.")

class Sections(BaseModel):
    sections: list[Section] = Field(description="Sections of the report.")

planner = llm.with_structured_output(Sections)
7.2 Building the Workers
Because orchestrator-worker workflows are so common, LangGraph provides the Send API to support them. It lets you dynamically create worker nodes and send each one a specific input. Each worker has its own state, and all worker outputs are written to a shared state key that the orchestrator graph can access. This gives the orchestrator access to every worker's output and lets it synthesize them into a final result. The code below Sends each section in the sections list to a worker node.
from langgraph.constants import Send
from pathlib import Path

class State(TypedDict):
    topic: str
    sections: list[Section]
    completed_sections: Annotated[list, operator.add]
    final_report: str

class WorkerState(TypedDict):
    section: Section
    completed_sections: Annotated[list, operator.add]

def orchestrator(state: State):
    report_sections = planner.invoke(
        [
            SystemMessage(content="Generate a plan for the report."),
            HumanMessage(content=f"Here is the report topic: {state['topic']}"),
        ]
    )
    return {"sections": report_sections.sections}

def worker(state: WorkerState):
    section = llm.invoke(
        [
            SystemMessage(
                content="Write a report section following the provided name and description. Include no preamble for each section. Use markdown formatting."
            ),
            HumanMessage(
                content=f"Here is the section name: {state['section'].name} and description: {state['section'].description}"
            ),
        ]
    )
    return {"completed_sections": [section.content]}

def synthesizer(state: State):
    completed_sections = state["completed_sections"]
    completed_report_sections = "\n\n---\n\n".join(completed_sections)
    return {"final_report": completed_report_sections}

def assign_workers(state: State):
    return [Send("worker", {"section": s}) for s in state["sections"]]
orchestrator_worker_builder = StateGraph(State)
orchestrator_worker_builder.add_node("orchestrator", orchestrator)
orchestrator_worker_builder.add_node("worker", worker)
orchestrator_worker_builder.add_node("synthesizer", synthesizer)
orchestrator_worker_builder.add_edge(START, "orchestrator")
orchestrator_worker_builder.add_conditional_edges(
"orchestrator", assign_workers, [ "worker" ]
)
orchestrator_worker_builder.add_edge("worker", "synthesizer")
orchestrator_worker_builder.add_edge("synthesizer", END)
orchestrator_worker = orchestrator_worker_builder.compile()
display(Image(orchestrator_worker.get_graph().draw_mermaid_png()))
state = orchestrator_worker.invoke({"topic": "Create a report on LLM scaling laws"})
with Path("report.md").open("w", encoding="utf-8") as f:
    f.write(state["final_report"])
The generated report is written to report.md.
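The fan-in works because completed_sections is declared with a reducer (Annotated[list, operator.add]): each worker's one-item list is concatenated into the shared key instead of overwriting it. A minimal sketch of that reducer semantics (an illustration, not LangGraph's internal merge code):

```python
import operator

def apply_update(state: dict, update: dict, reducers: dict) -> dict:
    """Merge a node's partial update into state, using a reducer when one is declared."""
    merged = dict(state)
    for key, value in update.items():
        if key in reducers and key in merged:
            merged[key] = reducers[key](merged[key], value)  # e.g. list concatenation
        else:
            merged[key] = value  # default: last write wins
    return merged

reducers = {"completed_sections": operator.add}
state = {"completed_sections": []}
# Each worker returns a one-item list; the reducer accumulates them.
for worker_update in ({"completed_sections": ["## Intro"]},
                      {"completed_sections": ["## Scaling laws"]}):
    state = apply_update(state, worker_update, reducers)

print(state["completed_sections"])  # ['## Intro', '## Scaling laws']
```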
8. Evaluator-Optimizer Workflow
In the evaluator-optimizer workflow, one LLM call generates a response while another provides evaluation and feedback in a loop:
The code to build this workflow:
class State(TypedDict):
    joke: str
    topic: str
    feedback: str
    funny_or_not: str

class Feedback(BaseModel):
    grade: Literal["funny", "not funny"] = Field(
        description="Decide if the joke is funny or not."
    )
    feedback: str = Field(
        description="If the joke is not funny, provide feedback on how to improve it."
    )

evaluator = llm.with_structured_output(Feedback)

def generate_joke(state: State):
    if state.get("feedback"):
        msg = llm.invoke(
            f"Write a joke about {state['topic']} but take into account the feedback: {state['feedback']}"
        )
    else:
        msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"joke": msg.content}

def generate_feedback(state: State):
    grade = evaluator.invoke(f"Grade the joke: {state['joke']}")
    return {"funny_or_not": grade.grade, "feedback": grade.feedback}

def route_joke(state: State):
    match state["funny_or_not"]:
        case "funny":
            return "Accepted"
        case "not funny":
            return "Rejected + Feedback"
    return None
optimizer_builder = StateGraph(State)
optimizer_builder.add_node("generate_joke", generate_joke)
optimizer_builder.add_node("generate_feedback", generate_feedback)
optimizer_builder.add_edge(START, "generate_joke")
optimizer_builder.add_edge("generate_joke", "generate_feedback")
optimizer_builder.add_conditional_edges(
    "generate_feedback",
    route_joke,
    {
        "Accepted": END,
        "Rejected + Feedback": "generate_joke",
    },
)
optimizer_workflow = optimizer_builder.compile()
display(Image(optimizer_workflow.get_graph().draw_mermaid_png()))
state = optimizer_workflow.invoke({ "topic": "cats" })
print(state["joke"])
The output:
Sure! Here's a purr-fect cat joke for you:
**Why don’t cats play poker in the wild?**
*Because too many cheetahs!* 🐆😹
(Get it? Like "cheaters," but... cheetahs? Okay, I'll see myself out.) 🚪🐾
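The loop above is a generate-evaluate-retry pattern. Its control flow, separated from the LLM calls, looks like this (stub generator and evaluator so the sketch runs offline; note the graph above loops without a cap, while the max_rounds bound here is an added safeguard):

```python
def optimize(generate, evaluate, max_rounds: int = 5):
    """Generate a draft, ask the evaluator, and retry with feedback until accepted."""
    feedback = None
    for _ in range(max_rounds):
        draft = generate(feedback)
        accepted, feedback = evaluate(draft)
        if accepted:
            return draft
    return draft  # give up after max_rounds, keep the last draft

# Stubs: the evaluator accepts only drafts that end with a punchline marker.
def gen(feedback):
    return "a joke!" if feedback else "a joke"

def ev(draft):
    return ("!" in draft), "add a punchline"

print(optimize(gen, ev))  # 'a joke!'
```

The first draft fails, the feedback flows into the second generation, and the improved draft is accepted, which is exactly the Rejected + Feedback edge back into generate_joke.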
References
https://langchain-ai.github.io/langgraph/tutorials/workflows/