langchain4j in practice: using the three model types (ChatLanguageModel, StreamingChatLanguageModel, ImageModel)

The langchain4j version used is 0.27.1, and OpenAI is the LLM provider.
This article covers three usage modes: chat, streaming chat, and text-to-image. Each takes only two steps to interact with the LLM: pull in the dependency and build the model.

Add the pom dependencies

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.2.1</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>langChain_demo</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>17</maven.compiler.source>
        <maven.compiler.target>17</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <langchain4j.version>0.27.1</langchain4j.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j</artifactId>
            <version>${langchain4j.version}</version>
        </dependency>
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-open-ai</artifactId>
            <version>${langchain4j.version}</version>
        </dependency>
    </dependencies>
</project>

Chat conversation: ChatLanguageModel

When the apiKey is "demo", the request URL is switched to the proxy address http://langchain4j.dev/demo/openai/v1; with a non-demo key, the URL is https://api.openai.com/v1.
The demo key is fine for simple chat tests; more advanced scenarios such as text-to-image require your own OpenAI API key.
In the examples that follow, the constant OPEN_AI_BASE_URL is https://api.openai.com/v1, and OPEN_AI_API_KEY must be set to your own key.

    public static void main(String[] args) {
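        // With the special "demo" key, langchain4j routes the request through its demo proxy (see above)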
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey("demo")
                .build();
        String result = model.generate("你是谁");
        System.out.println(result);
    }

(Screenshot: console output of the model's reply)
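With your own key, the same example can target the real endpoint through the constants described above. A minimal sketch; the modelName and temperature values shown here are illustrative assumptions, not settings from the original, and both builder options are optional:

    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .baseUrl(OPEN_AI_BASE_URL)      // https://api.openai.com/v1
                .apiKey(OPEN_AI_API_KEY)        // your own OpenAI key
                .modelName("gpt-3.5-turbo")     // illustrative; any chat model your key can access
                .temperature(0.7)               // illustrative; optional
                .build();
        String result = model.generate("你是谁");
        System.out.println(result);
    }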

Streaming chat: StreamingChatLanguageModel

With the ChatLanguageModel above, the model returns the whole answer in one go. If you want the answer to appear like a typewriter, token by token, use StreamingChatLanguageModel instead.

    public static void main(String[] args) {
        StreamingChatLanguageModel model = OpenAiStreamingChatModel.builder()
                .baseUrl(OPEN_AI_BASE_URL)
                .apiKey(OPEN_AI_API_KEY)
                .build();
        model.generate("你好 我是小橘", new StreamingResponseHandler<AiMessage>() {
            @Override
            public void onNext(String token) {
                // each token arrives separately; use System.out.print for a single-line typewriter effect
                System.out.println(token);
                try {
                    // slow the output down so the streaming is easy to observe
                    TimeUnit.SECONDS.sleep(1);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(e);
                }
            }

            @Override
            public void onError(Throwable throwable) {
                throwable.printStackTrace();
            }
        });
    }

(Screenshot: tokens streamed to the console one by one)
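Note that generate(...) returns immediately and the tokens arrive on a background thread. If you need the complete answer afterwards, you can wait for the handler's onComplete callback. A minimal sketch using a CompletableFuture; it reuses the constants above and additionally needs java.util.concurrent.CompletableFuture and dev.langchain4j.model.output.Response:

    public static void main(String[] args) throws Exception {
        StreamingChatLanguageModel model = OpenAiStreamingChatModel.builder()
                .baseUrl(OPEN_AI_BASE_URL)
                .apiKey(OPEN_AI_API_KEY)
                .build();

        CompletableFuture<Response<AiMessage>> future = new CompletableFuture<>();
        model.generate("你好 我是小橘", new StreamingResponseHandler<AiMessage>() {
            @Override
            public void onNext(String token) {
                System.out.print(token);                // stream tokens as they arrive
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                future.complete(response);              // full AiMessage plus token usage
            }

            @Override
            public void onError(Throwable throwable) {
                future.completeExceptionally(throwable);
            }
        });

        Response<AiMessage> response = future.get(1, TimeUnit.MINUTES);
        System.out.println();
        System.out.println(response.content().text());  // the assembled answer
    }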

Text-to-image: ImageModel

Given a prompt, the model generates a matching image; the link below is the URL of a generated orange-cat picture. Besides the URL, the response can also expose the image as a Base64-encoded string.

https://oaidalleapiprodscus.blob.core.windows.net/private/org-ZLTuOuStHhQibWNJKgkpotO2/user-TFhEhsgVpt9L4c1u6Smt9gIu/img-Qb8hXLdi3NQDnhxO1DKu7NH6.png?st=2024-05-17T06%3A58%3A23Z&se=2024-05-17T08%3A58%3A23Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2024-05-16T13%3A52%3A40Z&ske=2024-05-17T13%3A52%3A40Z&sks=b&skv=2021-08-06&sig=yHC60xphZw1Zu94VKPnWx0yK0Ggq/BzUsspwQKFTAC8%3D

    public static void main(String[] args) {
        ImageModel imageModel = OpenAiImageModel.builder()
                .baseUrl(OPEN_AI_BASE_URL)
                .apiKey(OPEN_AI_API_KEY)
                .build();
        Response<Image> response = imageModel.generate("橘猫");
        // content() is an Image; url() points at the hosted PNG generated for the prompt
        System.out.println(response.content().url());
    }

(Screenshot: the generated image)
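To get the Base64-encoded image mentioned above instead of a URL, the image data can be read from base64Data() on the Image. A minimal sketch, assuming the builder exposes a responseFormat option that maps to OpenAI's response_format parameter (verify the option name against your langchain4j version):

    public static void main(String[] args) {
        ImageModel imageModel = OpenAiImageModel.builder()
                .baseUrl(OPEN_AI_BASE_URL)
                .apiKey(OPEN_AI_API_KEY)
                .responseFormat("b64_json")   // assumption: ask the API for Base64 data instead of a URL
                .build();
        Response<Image> response = imageModel.generate("橘猫");
        // base64Data() holds the Base64-encoded image when b64_json is requested
        System.out.println(response.content().base64Data());
    }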
