OpenAI Chat

Spring AI supports the various AI language models from OpenAI, the company behind ChatGPT. OpenAI has been instrumental in sparking interest in AI-driven text generation thanks to its creation of industry-leading text generation models and embeddings.

Prerequisites

You need to create an API key with OpenAI to access ChatGPT models.
Create an account at the OpenAI signup page and generate a token on the API Keys page.
The Spring AI project defines a configuration property named spring.ai.openai.api-key that you should set to the value of the API key obtained from openai.com.
You can set this configuration property in your application.properties file:

spring.ai.openai.api-key=<your-openai-api-key>

For enhanced security when handling sensitive information like API keys, you can use Spring Expression Language (SpEL) to reference a custom environment variable:

# In application.yml
spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
# In your environment or .env file
export OPENAI_API_KEY=<your-openai-api-key>

You can also set this configuration programmatically in your application code:

// Retrieve API key from a secure source or environment variable
String apiKey = System.getenv("OPENAI_API_KEY");

Add Repositories and BOM

Spring AI artifacts are published in the Maven Central and Spring Snapshot repositories. Refer to the Artifact Repositories section to add these repositories to your build system.
To help with dependency management, Spring AI provides a BOM (Bill of Materials) to ensure that a consistent version of Spring AI is used throughout the entire project. Refer to the Dependency Management section to add the Spring AI BOM to your build system.
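For reference, importing the BOM in Maven typically looks like the following sketch (the version property is a placeholder; consult the Dependency Management section for the current release):

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>${spring-ai.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```

With the BOM imported, individual Spring AI dependencies such as the starter below can omit their version element.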

Auto-configuration

There has been a significant change in the Spring AI auto-configuration, starter modules' artifact names. Please refer to the upgrade notes for more information.

Spring AI provides Spring Boot auto-configuration for the OpenAI chat client. To enable it, add the following dependency to your project's Maven pom.xml or Gradle build.gradle build file:

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
</dependency>
Refer to the Dependency Management section to add the Spring AI BOM to your build file.

Chat Properties

Retry Properties

The prefix spring.ai.retry is used as the property prefix that lets you configure the retry mechanism for the OpenAI chat model.

Property | Description | Default

spring.ai.retry.max-attempts | Maximum number of retry attempts. | 10
spring.ai.retry.backoff.initial-interval | Initial sleep duration for the exponential backoff policy. | 2 sec.
spring.ai.retry.backoff.multiplier | Backoff interval multiplier. | 5
spring.ai.retry.backoff.max-interval | Maximum backoff duration. | 3 min.
spring.ai.retry.on-client-errors | If false, throw a NonTransientAiException, and do not attempt retry for 4xx client error codes. | false
spring.ai.retry.exclude-on-http-codes | List of HTTP status codes that should not trigger a retry (e.g. to throw NonTransientAiException). | empty
spring.ai.retry.on-http-codes | List of HTTP status codes that should trigger a retry (e.g. to throw TransientAiException). | empty
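The defaults above combine into a concrete wait schedule: 2 seconds, then 10, then 50, then capped at the 3-minute maximum for the remaining attempts. A plain-Java sketch of that arithmetic (illustrative only, not the Spring Retry implementation; values taken from the defaults listed above):

```java
import java.util.ArrayList;
import java.util.List;

public class BackoffSchedule {

    // Computes the wait before each retry: exponential growth from the
    // initial interval, capped at the maximum interval. With maxAttempts
    // attempts there are maxAttempts - 1 waits (no wait after the last try).
    static List<Long> waitsMillis(long initialMs, double multiplier, long maxMs, int maxAttempts) {
        List<Long> waits = new ArrayList<>();
        double interval = initialMs;
        for (int attempt = 1; attempt < maxAttempts; attempt++) {
            waits.add((long) Math.min(interval, (double) maxMs));
            interval *= multiplier;
        }
        return waits;
    }

    public static void main(String[] args) {
        // Defaults from the table: 2 sec initial, x5 multiplier, 3 min cap, 10 attempts
        List<Long> waits = waitsMillis(2_000, 5.0, 180_000, 10);
        System.out.println(waits);
        // → [2000, 10000, 50000, 180000, 180000, 180000, 180000, 180000, 180000]
    }
}
```

Note how quickly the ×5 multiplier saturates the cap: from the fourth retry on, every wait is the full 3 minutes.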

Connection Properties

The prefix spring.ai.openai is used as the property prefix that lets you connect to OpenAI.

Property | Description | Default

spring.ai.openai.base-url | The URL to connect to. | api.openai.com
spring.ai.openai.api-key | The API Key. | -
spring.ai.openai.organization-id | Optionally, you can specify which organization to use for an API request. | -
spring.ai.openai.project-id | Optionally, you can specify which project to use for an API request. | -

For users that belong to multiple organizations (or are accessing their projects through their legacy user API key), you can optionally specify which organization and project is used for an API request. Usage from these API requests will count as usage for the specified organization and project.

Configuration Properties

Enabling and disabling of the chat auto-configurations are now configured via top level properties with the prefix spring.ai.model.chat.

To enable, spring.ai.model.chat=openai (It is enabled by default)

To disable, spring.ai.model.chat=none (or any value which doesn’t match openai)

This change is done to allow configuration of multiple models.
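In properties form, the toggle described above can be sketched as follows (a minimal example of the spring.ai.model.chat convention; only one of the two lines would be active at a time):

```properties
# Default: OpenAI chat auto-configuration is active
spring.ai.model.chat=openai

# To disable the chat auto-configuration instead:
# spring.ai.model.chat=none
```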

The prefix spring.ai.openai.chat is the property prefix that lets you configure the chat model implementation for OpenAI.


Property | Description | Default

spring.ai.openai.chat.enabled (Removed and no longer valid) | Enable OpenAI chat model. | true
spring.ai.model.chat | Enable OpenAI chat model. | openai
spring.ai.openai.chat.base-url | Optional override for the spring.ai.openai.base-url property to provide a chat-specific URL. | -
spring.ai.openai.chat.completions-path | The path to append to the base URL. | /v1/chat/completions
spring.ai.openai.chat.api-key | Optional override for the spring.ai.openai.api-key to provide a chat-specific API Key. | -
spring.ai.openai.chat.organization-id | Optionally, you can specify which organization to use for an API request. | -
spring.ai.openai.chat.project-id | Optionally, you can specify which project to use for an API request. | -
spring.ai.openai.chat.options.model | Name of the OpenAI chat model to use. You can select between models such as gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo, and more. See the models page for more information. | gpt-4o-mini
spring.ai.openai.chat.options.temperature | The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic. It is not recommended to modify temperature and top_p for the same completions request as the interaction of these two settings is difficult to predict. | 0.8
spring.ai.openai.chat.options.frequencyPenalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | 0.0f
spring.ai.openai.chat.options.logitBias | Modify the likelihood of specified tokens appearing in the completion. | -
spring.ai.openai.chat.options.maxTokens | (Deprecated in favour of maxCompletionTokens) The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. | -
spring.ai.openai.chat.options.maxCompletionTokens | An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. | -
spring.ai.openai.chat.options.n | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs. | 1
spring.ai.openai.chat.options.store | Whether to store the output of this chat completion request for use in model distillation or evals. | false
spring.ai.openai.chat.options.metadata | Developer-defined tags and values used for filtering completions in the chat completion dashboard. | empty map
spring.ai.openai.chat.options.output-modalities | Output types that you would like the model to generate for this request. Most models are capable of generating text, which is the default. The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use: text, audio. Not supported for streaming. | -
spring.ai.openai.chat.options.output-audio | Audio parameters for the audio generation. Required when audio output is requested with output-modalities: audio. Requires the gpt-4o-audio-preview model and is not supported for streaming completions. | -
spring.ai.openai.chat.options.presencePenalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | -
spring.ai.openai.chat.options.responseFormat.type | Compatible with GPT-4o, GPT-4o mini, GPT-4 Turbo, and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. The JSON_OBJECT type enables JSON mode, which guarantees the message the model generates is valid JSON. The JSON_SCHEMA type enables Structured Outputs, which guarantees the model will match your supplied JSON schema. The JSON_SCHEMA type requires setting the responseFormat.schema property as well. | -
spring.ai.openai.chat.options.responseFormat.name | Response format schema name. Applicable only for responseFormat.type=JSON_SCHEMA. | custom_schema
spring.ai.openai.chat.options.responseFormat.schema | Response format JSON schema. Applicable only for responseFormat.type=JSON_SCHEMA. | -
spring.ai.openai.chat.options.responseFormat.strict | Response format JSON schema adherence strictness. Applicable only for responseFormat.type=JSON_SCHEMA. | -
spring.ai.openai.chat.options.seed | This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. | -
spring.ai.openai.chat.options.stop | Up to 4 sequences where the API will stop generating further tokens. | -
spring.ai.openai.chat.options.topP | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | -
spring.ai.openai.chat.options.tools | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. | -
spring.ai.openai.chat.options.toolChoice | Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function. none is the default when no functions are present; auto is the default if functions are present. | -
spring.ai.openai.chat.options.user | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | -
spring.ai.openai.chat.options.functions | List of functions, identified by their names, to enable for function calling in a single prompt request. Functions with those names must exist in the functionCallbacks registry. | -
spring.ai.openai.chat.options.stream-usage | (For streaming only) Set to add an additional chunk with token usage statistics for the entire request. The choices field for this chunk is an empty array and all other chunks will also include a usage field, but with a null value. | false
spring.ai.openai.chat.options.parallel-tool-calls | Whether to enable parallel function calling during tool use. | true
spring.ai.openai.chat.options.http-headers | Optional HTTP headers to be added to the chat completion request. To override the api-key you need to use an Authorization header key, and you have to prefix the key value with the Bearer prefix. | -
spring.ai.openai.chat.options.proxy-tool-calls | If true, Spring AI will not handle the function calls internally but will proxy them to the client. It is then the client's responsibility to handle the function calls, dispatch them to the appropriate function, and return the results. If false (the default), Spring AI will handle the function calls internally. Applicable only for chat models with function calling support. | false

You can override the common spring.ai.openai.base-url and spring.ai.openai.api-key for the ChatModel and EmbeddingModel implementations. The spring.ai.openai.chat.base-url and spring.ai.openai.chat.api-key properties, if set, take precedence over the common properties. This is useful if you want to use different OpenAI accounts for different models and different model endpoints.
All properties prefixed with spring.ai.openai.chat.options can be overridden at runtime by adding request-specific Runtime Options to the Prompt call.
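As one concrete example, the http-headers option can carry a replacement Authorization header with the required Bearer prefix. A sketch in properties form (the map-key binding style shown here is an assumption; `${OPENAI_API_KEY}` is resolved by Spring Boot from the environment):

```properties
spring.ai.openai.chat.options.http-headers.Authorization=Bearer ${OPENAI_API_KEY}
```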

Runtime Options

The OpenAiChatOptions.java class provides model configurations such as the model to use, the temperature, the frequency penalty, etc.
On start-up, the default options can be configured with the OpenAiChatModel(api, options) constructor or the spring.ai.openai.chat.options.* properties.
At run-time, you can override the default options by adding new, request-specific options to the Prompt call. For example, to override the default model and temperature for a specific request:

ChatResponse response = chatModel.call(
    new Prompt(
        "Generate the names of 5 famous pirates.",
        OpenAiChatOptions.builder()
            .model("gpt-4o")
            .temperature(0.4)
            .build()
    ));
In addition to the model specific OpenAiChatOptions you can use a portable ChatOptions instance, created with ChatOptionsBuilder#builder().

Function Calling

You can register custom Java functions with OpenAiChatModel and have the OpenAI model intelligently choose to output a JSON object containing arguments to call one or many of the registered functions. This is a powerful technique to connect LLM capabilities with external tools and APIs. Read more about Tool Calling.

Multimodal

Multimodality refers to a model's ability to simultaneously understand and process information from various sources, including text, images, audio, and other data formats. OpenAI supports text, vision, and audio input modalities.

Vision

OpenAI models that offer vision multimodal support include gpt-4, gpt-4o, and gpt-4o-mini. Refer to the Vision guide for more information.
The OpenAI User Message API can incorporate a list of base64-encoded images or image URLs with the message. Spring AI's Message interface facilitates multimodal AI models by introducing the Media type. This type contains data and details about media attachments in messages, using Spring's org.springframework.util.MimeType and an org.springframework.core.io.Resource for the raw media data.
Below is a code example excerpted from OpenAiChatModelIT.java, illustrating the fusion of user text with an image using the gpt-4o model.

var imageResource = new ClassPathResource("/multimodal.test.png");

var userMessage = new UserMessage("Explain what do you see on this picture?",
        new Media(MimeTypeUtils.IMAGE_PNG, this.imageResource));

ChatResponse response = chatModel.call(new Prompt(this.userMessage,
        OpenAiChatOptions.builder().model(OpenAiApi.ChatModel.GPT_4_O.getValue()).build()));
GPT_4_VISION_PREVIEW will continue to be available only to existing users of this model starting June 17, 2024. If you are not an existing user, please use the GPT_4_O or GPT_4_TURBO models. More details here

The example shows a model taking the multimodal.test.png image as input:

(multimodal.test.png sample image omitted)

along with the text message "Explain what do you see on this picture?", and generating a response that describes the image.
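Under the hood, local image resources like the one above are base64-encoded before being sent to the API. A stdlib-only sketch of that encoding step (an illustrative helper, not Spring AI code; the byte values are placeholder data standing in for real image bytes):

```java
import java.util.Base64;

public class ImageEncoding {

    // Builds the data-URL form that the OpenAI API accepts for inline images.
    static String toDataUrl(byte[] imageBytes, String mimeType) {
        return "data:" + mimeType + ";base64,"
                + Base64.getEncoder().encodeToString(imageBytes);
    }

    public static void main(String[] args) {
        // Placeholder bytes (the PNG magic-number prefix), not a real image
        byte[] fakePng = {(byte) 0x89, 'P', 'N', 'G'};
        System.out.println(toDataUrl(fakePng, "image/png"));
        // → data:image/png;base64,iVBORw==
    }
}
```

When you pass a Media with a Resource instead of a URL, the client performs an equivalent encoding for you.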

Audio

OpenAI models that offer input audio multimodal support include gpt-4o-audio-preview. Refer to the Audio guide for more information.
The OpenAI User Message API can incorporate a list of base64-encoded audio files with the message. Spring AI's Message interface facilitates multimodal AI models by introducing the Media type. This type contains data and details about media attachments in messages, using Spring's org.springframework.util.MimeType and an org.springframework.core.io.Resource for the raw media data. Currently, OpenAI supports only the following media types: audio/mp3 and audio/wav.
Below is a code example excerpted from OpenAiChatModelIT.java, illustrating the fusion of user text with an audio file using the gpt-4o-audio-preview model.

var audioResource = new ClassPathResource("speech1.mp3");

var userMessage = new UserMessage("What is this recording about?",
        List.of(new Media(MimeTypeUtils.parseMimeType("audio/mp3"), audioResource)));

ChatResponse response = chatModel.call(new Prompt(List.of(userMessage),
        OpenAiChatOptions.builder().model(OpenAiApi.ChatModel.GPT_4_O_AUDIO_PREVIEW).build()));

Output Audio

OpenAI models that offer output audio multimodal support include gpt-4o-audio-preview. Refer to the Audio guide for more information.
The OpenAI Assistant Message API can contain a list of base64-encoded audio files with the message. Spring AI's Message interface facilitates multimodal AI models by introducing the Media type. This type contains data and details about media attachments in messages, using Spring's org.springframework.util.MimeType and an org.springframework.core.io.Resource for the raw media data. Currently, OpenAI supports only the following audio types: audio/mp3 and audio/wav.
Below is a code example illustrating a response of user text combined with an audio byte array, using the gpt-4o-audio-preview model:

var userMessage = new UserMessage("Tell me joke about Spring Framework");

ChatResponse response = chatModel.call(new Prompt(List.of(userMessage),
        OpenAiChatOptions.builder()
            .model(OpenAiApi.ChatModel.GPT_4_O_AUDIO_PREVIEW)
            .outputModalities(List.of("text", "audio"))
            .outputAudio(new AudioParameters(Voice.ALLOY, AudioResponseFormat.WAV))
            .build()));

String text = response.getResult().getOutput().getContent(); // audio transcript

byte[] waveAudio = response.getResult().getOutput().getMedia().get(0).getDataAsByteArray(); // audio data

You must specify an audio modality in the OpenAiChatOptions to generate audio output. The AudioParameters class provides the voice and audio format for the audio output.

Structured Outputs

OpenAI provides custom Structured Outputs APIs that ensure your model generates responses conforming strictly to your provided JSON Schema. In addition to the existing Spring AI model-agnostic Structured Output Converter, these APIs offer enhanced control and precision.

Configuration

Spring AI allows you to configure your response format either programmatically using the OpenAiChatOptions builder or through application properties.

Using the Chat Options Builder

You can set the response format programmatically with the OpenAiChatOptions builder as shown below:

String jsonSchema = """
        {
            "type": "object",
            "properties": {
                "steps": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "explanation": { "type": "string" },
                            "output": { "type": "string" }
                        },
                        "required": ["explanation", "output"],
                        "additionalProperties": false
                    }
                },
                "final_answer": { "type": "string" }
            },
            "required": ["steps", "final_answer"],
            "additionalProperties": false
        }
        """;

Prompt prompt = new Prompt("how can I solve 8x + 7 = -23",
        OpenAiChatOptions.builder()
            .model(ChatModel.GPT_4_O_MINI)
            .responseFormat(new ResponseFormat(ResponseFormat.Type.JSON_SCHEMA, this.jsonSchema))
            .build());

ChatResponse response = this.openAiChatModel.call(this.prompt);

Integrating with BeanOutputConverter Utilities

You can leverage the existing BeanOutputConverter utilities to automatically generate the JSON Schema from your domain objects and later convert the structured response into domain-specific instances:

record MathReasoning(
    @JsonProperty(required = true, value = "steps") Steps steps,
    @JsonProperty(required = true, value = "final_answer") String finalAnswer) {

    record Steps(
        @JsonProperty(required = true, value = "items") Items[] items) {

        record Items(
            @JsonProperty(required = true, value = "explanation") String explanation,
            @JsonProperty(required = true, value = "output") String output) {
        }
    }
}

var outputConverter = new BeanOutputConverter<>(MathReasoning.class);

var jsonSchema = this.outputConverter.getJsonSchema();

Prompt prompt = new Prompt("how can I solve 8x + 7 = -23",
        OpenAiChatOptions.builder()
            .model(ChatModel.GPT_4_O_MINI)
            .responseFormat(new ResponseFormat(ResponseFormat.Type.JSON_SCHEMA, this.jsonSchema))
            .build());

ChatResponse response = this.openAiChatModel.call(this.prompt);
String content = this.response.getResult().getOutput().getContent();

MathReasoning mathReasoning = this.outputConverter.convert(this.content);

Configuring via Application Properties

Alternatively, when using the OpenAI auto-configuration, you can configure the desired response format through the following application properties:

spring.ai.openai.api-key=YOUR_API_KEY
spring.ai.openai.chat.options.model=gpt-4o-mini

spring.ai.openai.chat.options.response-format.type=JSON_SCHEMA
spring.ai.openai.chat.options.response-format.name=MySchemaName
spring.ai.openai.chat.options.response-format.schema={"type":"object","properties":{"steps":{"type":"array","items":{"type":"object","properties":{"explanation":{"type":"string"},"output":{"type":"string"}},"required":["explanation","output"],"additionalProperties":false}},"final_answer":{"type":"string"}},"required":["steps","final_answer"],"additionalProperties":false}
spring.ai.openai.chat.options.response-format.strict=true

Sample Controller

Create a new Spring Boot project and add the spring-ai-starter-model-openai to your pom (or gradle) dependencies.
Add an application.properties file under the src/main/resources directory to enable and configure the OpenAI chat model:

spring.ai.openai.api-key=YOUR_API_KEY
spring.ai.openai.chat.options.model=gpt-4o
spring.ai.openai.chat.options.temperature=0.7

This will create an OpenAiChatModel implementation that you can inject into your classes. Here is an example of a simple @RestController class that uses the chat model for text generation.

@RestController
public class ChatController {

    private final OpenAiChatModel chatModel;

    @Autowired
    public ChatController(OpenAiChatModel chatModel) {
        this.chatModel = chatModel;
    }

    @GetMapping("/ai/generate")
    public Map<String,String> generate(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        return Map.of("generation", this.chatModel.call(message));
    }

    @GetMapping("/ai/generateStream")
    public Flux<ChatResponse> generateStream(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        Prompt prompt = new Prompt(new UserMessage(message));
        return this.chatModel.stream(prompt);
    }
}

Manual Configuration

The OpenAiChatModel implements both ChatModel and StreamingChatModel and uses the low-level OpenAiApi client to connect to the OpenAI service.
Add the spring-ai-openai dependency to your project's Maven pom.xml file:

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai</artifactId>
</dependency>

Next, create an OpenAiChatModel and use it for text generation:

var openAiApi = OpenAiApi.builder()
            .apiKey(System.getenv("OPENAI_API_KEY"))
            .build();
var openAiChatOptions = OpenAiChatOptions.builder()
            .model("gpt-3.5-turbo")
            .temperature(0.4)
            .maxTokens(200)
            .build();
var chatModel = new OpenAiChatModel(this.openAiApi, this.openAiChatOptions);

ChatResponse response = this.chatModel.call(
    new Prompt("Generate the names of 5 famous pirates."));

// Or with streaming responses
Flux<ChatResponse> response = this.chatModel.stream(
    new Prompt("Generate the names of 5 famous pirates."));

The OpenAiChatOptions provides the configuration information for the chat requests. OpenAiApi.Builder and OpenAiChatOptions.Builder are fluent option builders for the API client and the chat configuration, respectively.

Low-level OpenAiApi Client

The OpenAiApi provides a lightweight Java client for the OpenAI Chat API.
The following class diagram illustrates the OpenAiApi chat interfaces and building blocks:

Here is a simple snippet showing how to use the API programmatically:

OpenAiApi openAiApi = OpenAiApi.builder()
            .apiKey(System.getenv("OPENAI_API_KEY"))
            .build();

ChatCompletionMessage chatCompletionMessage =
    new ChatCompletionMessage("Hello world", Role.USER);

// Sync request
ResponseEntity<ChatCompletion> response = this.openAiApi.chatCompletionEntity(
    new ChatCompletionRequest(List.of(this.chatCompletionMessage), "gpt-3.5-turbo", 0.8, false));

// Streaming request
Flux<ChatCompletionChunk> streamResponse = this.openAiApi.chatCompletionStream(
        new ChatCompletionRequest(List.of(this.chatCompletionMessage), "gpt-3.5-turbo", 0.8, true));

Refer to the JavaDoc of OpenAiApi.java for further information.

Low-level API Examples

API Key Management

Spring AI offers flexible API key management through the ApiKey interface and its implementations. The default implementation, SimpleApiKey, is suitable for most use cases, but you can also create custom implementations for more complex scenarios.

Default Configuration

By default, Spring Boot auto-configuration will create an API key bean using the spring.ai.openai.api-key property:

spring.ai.openai.api-key=your-api-key-here


Custom API Key Configuration

You can create a custom instance of OpenAiApi with your own ApiKey implementation using the builder pattern:

ApiKey customApiKey = new ApiKey() {
    @Override
    public String getValue() {
        // Custom logic to retrieve API key
        return "your-api-key-here";
    }
};

OpenAiApi openAiApi = OpenAiApi.builder()
    .apiKey(customApiKey)
    .build();

// Create a chat model with the custom OpenAiApi instance
OpenAiChatModel chatModel = OpenAiChatModel.builder()
    .openAiApi(openAiApi)
    .build();

This is useful when you need to:

- Retrieve the API key from a secure key store
- Rotate API keys dynamically
- Implement custom API key selection logic
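The dynamic-rotation case can be sketched in plain Java: keep the current key in an AtomicReference and expose it through a Supplier, which is the shape ApiKey.getValue() would delegate to (an illustrative stdlib-only helper, not part of Spring AI):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class RotatingKey {

    // Holds the current key; a scheduler or refresh callback would call rotate().
    private final AtomicReference<String> current;

    RotatingKey(String initial) {
        this.current = new AtomicReference<>(initial);
    }

    void rotate(String newKey) {
        current.set(newKey);
    }

    // Every get() sees the latest key, so in-flight requests created after a
    // rotation automatically pick up the new credential.
    Supplier<String> asSupplier() {
        return current::get;
    }
}
```

In a custom ApiKey implementation, getValue() would simply return this supplier's current value.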
