Building a Translation Project with LangChain + LangGraph

Reading documentation can be frustrating: most AI-related docs are written in foreign languages, Chinese versions are scarce, the originals can be heavy going, and translation software often mangles them to the point of being unreadable. So today, using LangChain and LangGraph, I'm porting over one of Dr. Andrew Ng's projects.

Partly this is to make reading docs easier for myself, and partly it's to get some hands-on practice with LangChain and LangGraph.

What is LangGraph

LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers three core benefits: cycles, controllability, and persistence. LangGraph lets you define flows that involve cycles, which is essential for most agentic architectures and distinguishes it from DAG-based solutions. As a very low-level framework, it gives you fine-grained control over both the flow and the state of your application, which is crucial for building reliable agents. In addition, LangGraph ships with built-in persistence, enabling advanced human-in-the-loop and memory features.

Features of LangGraph

Cycles and branching: implement loops and conditionals in your application.
Persistence: state is automatically saved after every step of the graph, and execution can be paused and resumed at any point, enabling error recovery, human-in-the-loop workflows, time travel, and more.
Human-in-the-loop: interrupt graph execution to approve or edit the next action the agent has planned.
Streaming support: stream outputs as each node produces them, including token streaming (persistence and streaming are both illustrated in the sketch after this list).
LangChain integration: LangGraph integrates seamlessly with LangChain and LangSmith, but does not require them.
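To make the persistence and streaming points concrete, here is a minimal hedged sketch; the one-node graph is a throwaway example of my own, and MemorySaver is LangGraph's in-memory checkpointer:

from typing import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END

class S(TypedDict):
    text: str

def shout(state: S):
    return {"text": state["text"].upper()}

g = StateGraph(S)
g.add_node("shout", shout)
g.add_edge(START, "shout")
g.add_edge("shout", END)

# The checkpointer saves state after every step, keyed by thread_id,
# which is what enables pause/resume, error recovery and time travel.
app = g.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "demo"}}
for update in app.stream({"text": "hello"}, config):
    print(update)  # each node's output is streamed as it is produced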

The two passages above are my own translation; official site: 🦜🕸️LangGraph

Installation:

pip install -U langgraph

Example:

The core concept in LangGraph is state. Every graph execution creates a state that is passed between the nodes as they execute, and each node updates that internal state with its return value. How the graph merges those updates is determined by the type of graph you choose, or by a user-defined function.
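Before the full project, a quick hedged sketch of those mechanics; every name here is made up for illustration. Each node returns a partial state update, and a conditional edge decides whether to loop back, the kind of cycle a DAG-based framework cannot express:

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class CounterState(TypedDict):
    count: int

def step(state: CounterState):
    # A node returns only the keys it wants to update.
    return {"count": state["count"] + 1}

def should_continue(state: CounterState):
    # Loop back to "step" until the counter reaches 3.
    return "step" if state["count"] < 3 else END

g = StateGraph(CounterState)
g.add_node("step", step)
g.add_edge(START, "step")
g.add_conditional_edges("step", should_continue)
print(g.compile().invoke({"count": 0}))  # {'count': 3}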

Import the dependencies
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langgraph.graph import StateGraph, START, END
from dotenv import find_dotenv, load_dotenv
load_dotenv(find_dotenv())  # load settings from the .env file
Configuration

This is the .env file in the project's directory.
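For reference, a minimal .env might look like the following; OPENAI_API_KEY is the standard variable read by langchain_openai, and the value here is of course a placeholder:

OPENAI_API_KEY=sk-xxxx
# optional, for proxies or OpenAI-compatible endpoints:
# OPENAI_BASE_URL=https://api.openai.com/v1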

Define a helper for calling OpenAI
def llm_invoke(prompt, temperature=0.3):
    # The prompts below are fully formatted f-strings, so the chain
    # needs no runtime input: invoke it with an empty dict.
    llm = ChatOpenAI(temperature=temperature)
    chain = prompt | llm | StrOutputParser()
    return chain.invoke({})
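A quick way to sanity-check the helper; the prompt here is just a throwaway of my own:

test_prompt = ChatPromptTemplate.from_messages([("user", "Reply with the single word: pong")])
print(llm_invoke(test_prompt))  # should print something like "pong"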
Define a state class shared between the nodes
from typing import TypedDict, Optional
# TypedDict fields cannot take default values; total=False marks every
# key as optional instead, and the nodes read them via state.get().
class State(TypedDict, total=False):
    source_lang: str
    target_lang: str
    source_text: str
    country: Optional[str]
    translation_1: Optional[str]
    reflection: Optional[str]
    translation_2: Optional[str]
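If you would rather keep the first three keys required, a hedged alternative is NotRequired (in typing on Python 3.11+, otherwise in typing_extensions), which marks only the later keys as optional:

from typing_extensions import NotRequired, TypedDict

class State(TypedDict):
    source_lang: str
    target_lang: str
    source_text: str
    country: NotRequired[str]
    translation_1: NotRequired[str]
    reflection: NotRequired[str]
    translation_2: NotRequired[str]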
Initial translation
def initial_translation(state):
    source_lang = state.get("source_lang")
    target_lang = state.get("target_lang")
    source_text = state.get("source_text")
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system",f"You are an expert linguist, specializing in translation from {source_lang} to {target_lang}."),
            ("user", f"""
                This is an {source_lang} to {target_lang} translation, please provide the {target_lang} translation for this text. \
                Do not provide any explanations or text apart from the translation.
                {source_lang}: {source_text}
                
                {target_lang}:
            """
            )
        ]
    )
    translation = llm_invoke(prompt)
    print("[初次翻译结果]:\n", translation)
    return {"translation_1": translation}
Reflect on the first translation
def reflect_on_translation(state):
    source_lang = state.get("source_lang")
    target_lang = state.get("target_lang")
    source_text = state.get("source_text")
    country = state.get("country") or ""
    translation_1 = state.get("translation_1")
    additional_rule = (
        f"The final style and tone of the translation should match the style of {target_lang} colloquially spoken in {country}."
        if country != ""
        else ""
    )
    prompt = ChatPromptTemplate.from_messages([
        ("system", f"You are an expert linguist specializing in translation from {source_lang} to {target_lang}. \
            You will be provided with a source text and its translation and your goal is to improve the translation."
        ),
        ("user",
            f"""Your task is to carefully read a source text and a translation from {source_lang} to {target_lang}, and then give constructive criticism and helpful suggestions to improve the translation. \
            {additional_rule}
            
            The source text and initial translation, delimited by XML tags <SOURCE_TEXT></SOURCE_TEXT> and <TRANSLATION></TRANSLATION>, are as follows:
            
            <SOURCE_TEXT>
            {source_text}
            </SOURCE_TEXT>
            
            <TRANSLATION>
            {translation_1}
            </TRANSLATION>
            
            When writing suggestions, pay attention to whether there are ways to improve the translation's \n\
            (i) accuracy (by correcting errors of addition, mistranslation, omission, or untranslated text),\n\
            (ii) fluency (by applying {target_lang} grammar, spelling and punctuation rules, and ensuring there are no unnecessary repetitions),\n\
            (iii) style (by ensuring the translations reflect the style of the source text and takes into account any cultural context),\n\
            (iv) terminology (by ensuring terminology use is consistent and reflects the source text domain; and by ensuring you use equivalent idioms in {target_lang}).\n\
            
            Write a list of specific, helpful and constructive suggestions for improving the translation.
            Each suggestion should address one specific part of the translation.
            Output only the suggestions and nothing else."""
        )
    ])

    reflection = llm_invoke(prompt)
    print("[反思结果]:\n", reflection)
    return {"reflection": reflection}
Improve the translation based on the reflection
def improve_translation(state):
    source_lang = state.get("source_lang")
    target_lang = state.get("target_lang")
    source_text = state.get("source_text")
    translation_1 = state.get("translation_1")
    reflection = state.get("reflection")

    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", f"You are an expert linguist, specializing in translation editing from {source_lang} to {target_lang}."),
            ("user", f"""Your task is to carefully read, then edit, a translation from {source_lang} to {target_lang}, taking into
                account a list of expert suggestions and constructive criticisms.
                
                The source text, the initial translation, and the expert linguist suggestions are delimited by XML tags <SOURCE_TEXT></SOURCE_TEXT>, <TRANSLATION></TRANSLATION> and <EXPERT_SUGGESTIONS></EXPERT_SUGGESTIONS> \
                as follows:
                
                <SOURCE_TEXT>
                {source_text}
                </SOURCE_TEXT>
                
                <TRANSLATION>
                {translation_1}
                </TRANSLATION>
                
                <EXPERT_SUGGESTIONS>
                {reflection}
                </EXPERT_SUGGESTIONS>
                
                Please take into account the expert suggestions when editing the translation. Edit the translation by ensuring:
                
                (i) accuracy (by correcting errors of addition, mistranslation, omission, or untranslated text),
                (ii) fluency (by applying {target_lang} grammar, spelling and punctuation rules and ensuring there are no unnecessary repetitions), \
                (iii) style (by ensuring the translations reflect the style of the source text)
                (iv) terminology (inappropriate for context, inconsistent use), or
                (v) other errors.
                
                Output only the new translation and nothing else."""
            )
        ]
    )
    translation_2 = llm_invoke(prompt)
    print("[最终翻译结果]:\n", translation_2)
    return {"translation_2": translation_2}
Define the workflow
workflow = StateGraph(State)

# wire up the pipeline
## register the nodes
workflow.add_node("initial_translation", initial_translation)
workflow.add_node("reflect_on_translation", reflect_on_translation)
workflow.add_node("improve_translation", improve_translation)
## connect the nodes
workflow.add_edge(START, "initial_translation")  # entry point
workflow.add_edge("initial_translation", "reflect_on_translation")
workflow.add_edge("reflect_on_translation", "improve_translation")
workflow.add_edge("improve_translation", END)
Run the graph
# compile and run
app = workflow.compile()
result = app.invoke({
    "source_lang":"English",
    "target_lang":"中文",
    "source_text":"""
    LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence. LangGraph allows you to define flows that involve cycles, essential for most agentic architectures, differentiating it from DAG-based solutions. As a very low-level framework, it provides fine-grained control over both the flow and state of your application, crucial for creating reliable agents. Additionally, LangGraph includes built-in persistence, enabling advanced human-in-the-loop and memory features.
    """
})
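app.invoke returns the final state dict, so besides the prints inside the nodes you can read the polished version directly:

# the final state carries every field the nodes wrote along the way
print(result["translation_2"])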
Output:
[Initial translation]:
 LangGraph 是一个用于构建具有LLMs的有状态、多角色应用程序的库,用于创建代理和多代理工作流程。与其他LLM框架相比,它提供了这些核心优势:循环、可控性和持久性。LangGraph允许您定义涉及循环的流程,这对于大多数代理体系结构至关重要,使其与基于DAG的解决方案有所区别。作为一个非常低级别的框架,它可以对应用程序的流程和状态进行精细控制,这对于创建可靠的代理至关重要。此外,LangGraph还包括内置的持久性,可以实现高级的人在环和内存功能。
[Reflection]:
 1. 将 "LLMs" 翻译为 "有限状态机" 或 "有限状态机器",以更准确地反映原文中的意思。

2. 将 "flows" 翻译为 "流程" 而非 "流程",以更贴近原文的含义。

3. 将 "agent and multi-agent workflows" 翻译为 "代理和多代理工作流程" 而非 "代理和多代理工作流程",以提高翻译的流畅度。

4. 将 "fine-grained control" 翻译为 "细粒度控制" 而非 "精细控制",以更准确地表达原文的意思。

5. 将 "human-in-the-loop" 翻译为 "人在环" 而非 "人在环",以确保术语的一致性和准确性。
[Final translation]:
 LangGraph 是一个用于构建具有有限状态机的有状态、多角色应用程序的库,用于创建代理和多代理工作流程。与其他有限状态机框架相比,它提供了这些核心优势:循环、可控性和持久性。LangGraph允许您定义涉及流程的流程,这对于大多数代理体系结构至关重要,使其与基于DAG的解决方案有所区别。作为一个非常低级别的框架,它可以对应用程序的流程和状态进行细粒度控制,这对于创建可靠的代理至关重要。此外,LangGraph还包括内置的持久性,可以实现高级的人在环和内存功能。