Langchain workaround for with_structured_output using ChatBedrock


Background:

I'm working with the langchain library to implement a document analysis application. Specifically, I want to use the routing technique described in this documentation. I wanted to follow along with the example, but my environment is restricted to AWS, and I am using ChatBedrock instead of ChatOpenAI due to limitations of my deployment.

According to this overview, the with_structured_output method, which I need, is not (yet) implemented for models on AWS Bedrock, which is why I am looking for a workaround or any method to replicate this functionality.
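
Until native support exists, one way to replicate the behavior is to put the allowed labels and a JSON-only instruction into the prompt and validate the reply yourself. The sketch below is model-agnostic and purely illustrative: `fake_llm` is a stand-in for a real `ChatBedrock(...).invoke(...)` call, and all names are assumptions, not part of any library API.

```python
import json
from typing import Callable, List

ALLOWED = ["python_docs", "js_docs", "golang_docs"]

PROMPT_TEMPLATE = (
    "You are an expert at routing a user question to the appropriate data source.\n"
    "Answer ONLY with a JSON list whose items are drawn from {allowed}.\n"
    "Question: {question}"
)


def route_question(question: str, llm_invoke: Callable[[str], str]) -> List[str]:
    """Build the prompt, call the model, and validate its JSON reply."""
    prompt = PROMPT_TEMPLATE.format(allowed=ALLOWED, question=question)
    raw = llm_invoke(prompt)
    datasources = json.loads(raw)
    unknown = [d for d in datasources if d not in ALLOWED]
    if unknown:
        raise ValueError(f"model returned unknown datasources: {unknown}")
    return datasources


# Stand-in for the real model call; with Bedrock this would be something like
# lambda p: ChatBedrock(...).invoke(p).content
def fake_llm(prompt: str) -> str:
    return '["python_docs", "js_docs"]'


print(route_question("feature parity between Python and JS?", fake_llm))
# -> ['python_docs', 'js_docs']
```

This is fragile compared to a real structured-output API (the model may emit extra text around the JSON), which is why the instructor-based answer below is preferable.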

The key functionality I am looking for is shown in this example:

from typing import List
from typing import Literal

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI


class RouteQuery(BaseModel):
    """Route a user query to the most relevant datasource."""

    datasources: List[Literal["python_docs", "js_docs", "golang_docs"]] = Field(
        ...,
        description="Given a user question choose which datasources would be most relevant for answering their question",
    )

system = """You are an expert at routing a user question to the appropriate data source.

Based on the programming language the question is referring to, route it to the relevant data source."""
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "{question}"),
    ]
)

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = llm.with_structured_output(RouteQuery)
router = prompt | structured_llm
router.invoke(
    {
        "question": "is there feature parity between the Python and JS implementations of OpenAI chat models"
    }
)

The output would be:

RouteQuery(datasources=['python_docs', 'js_docs'])

The most important point for me is that it simply selects items from the list without any additional overhead, which makes it possible to set up the right follow-up questions.
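
To make the "follow-up" point concrete, here is a minimal dispatch sketch; the mapping and messages are made up for illustration, and in a real app each entry would be a retrieval chain over the corresponding documentation set:

```python
# Hypothetical follow-up actions keyed by datasource name; in a real app each
# entry would be a retrieval chain over the corresponding documentation set.
FOLLOW_UPS = {
    "python_docs": "Searching the Python docs...",
    "js_docs": "Searching the JS docs...",
    "golang_docs": "Searching the Go docs...",
}


def dispatch(datasources):
    """Run the follow-up step for each datasource the router selected."""
    return [FOLLOW_UPS[name] for name in datasources]


print(dispatch(["python_docs", "js_docs"]))
# -> ['Searching the Python docs...', 'Searching the JS docs...']
```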

Did anyone find a workaround to resolve this issue?

Solution:

I found a solution in two blog posts. The key is to use the instructor package, which is a wrapper around pydantic. This means langchain is not necessary.

Here is an example based on the blog posts:

from typing import List
import instructor
from anthropic import AnthropicBedrock
from loguru import logger
from pydantic import BaseModel
import enum

class User(BaseModel):
    name: str
    age: int

class MultiLabels(str, enum.Enum):
    TECH_ISSUE = "tech_issue"
    BILLING = "billing"
    GENERAL_QUERY = "general_query"

class MultiClassPrediction(BaseModel):
    """
    Class for a multi-class label prediction.
    """
    class_labels: List[MultiLabels]

if __name__ == "__main__":
    # Initialize the instructor client with AnthropicBedrock configuration
    client = instructor.from_anthropic(
        AnthropicBedrock(
            aws_region="eu-central-1",
        )
    )

    logger.info("Hello World Example")

    # Create a message and extract user data
    resp = client.messages.create(
        model="anthropic.claude-instant-v1",
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "Extract Jason is 25 years old.",
            }
        ],
        response_model=User,
    )

    print(resp)
    logger.info("Classification Example")

    # Classify a support ticket
    text = "My account is locked and I can't access my billing info."

    _class = client.chat.completions.create(
        model="anthropic.claude-instant-v1",
        max_tokens=1024,
        response_model=MultiClassPrediction,
        messages=[
            {
                "role": "user",
                "content": f"Classify the following support ticket: {text}",
            },
        ],
    )

    print(_class)
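
The same pattern reproduces the RouteQuery routing from the question: define the allowed datasources as an enum and pass the model as response_model. The `route` helper below is a sketch, not tested against Bedrock; it assumes the same AnthropicBedrock setup as above and is not invoked here.

```python
import enum
from typing import List

from pydantic import BaseModel


class Datasource(str, enum.Enum):
    PYTHON_DOCS = "python_docs"
    JS_DOCS = "js_docs"
    GOLANG_DOCS = "golang_docs"


class RouteQuery(BaseModel):
    """Route a user query to the most relevant datasources."""
    datasources: List[Datasource]


def route(question: str, aws_region: str = "eu-central-1") -> RouteQuery:
    """Ask a Bedrock-hosted Claude model to pick datasources via instructor."""
    # Imported here so the module also loads without the AWS dependencies.
    import instructor
    from anthropic import AnthropicBedrock

    client = instructor.from_anthropic(AnthropicBedrock(aws_region=aws_region))
    return client.messages.create(
        model="anthropic.claude-instant-v1",
        max_tokens=1024,
        response_model=RouteQuery,
        messages=[{"role": "user", "content": f"Route this question: {question}"}],
    )
```

Because response_model constrains the reply to the RouteQuery schema, the enum guarantees that only items from the list are returned, just like the ChatOpenAI example.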

Author: 营赢盈英