Calling the APIs of different AI platforms makes it possible to add capabilities such as natural language processing, image recognition, and speech recognition to your applications. Below are Java examples for calling several common AI platforms, including OpenAI, Google Cloud AI, Microsoft Azure AI, and IBM Watson.

Contents

  1. Preparation
  2. Calling the OpenAI API
  3. Calling the Google Cloud AI API
  4. Calling the Microsoft Azure AI API
  5. Calling the IBM Watson API
  6. More AI Platform Examples
  7. Summary

1. Preparation

Before you begin, complete the following preparation:

  1. Create an account on the relevant platform.
  2. Obtain an API key or access token (a note on handling keys follows the dependency block below).
  3. Add the required third-party libraries (such as an HTTP client and a JSON parsing library).

The examples use the Apache HttpClient library to send HTTP requests and the org.json library to parse JSON responses. These dependencies can be added via Maven:

<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.13</version>
</dependency>
<dependency>
    <groupId>org.json</groupId>
    <artifactId>json</artifactId>
    <version>20210307</version>
</dependency>
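
The examples below use placeholder strings such as "your_api_key" for readability. For anything beyond a quick test, it is safer to load keys from the environment at startup rather than hardcoding them in source. A minimal sketch, assuming the key is exposed through an environment variable (the name OPENAI_API_KEY is only an illustrative choice):

public class ApiKeyConfig {

    /** Reads an API key from an environment variable and fails fast if it is missing. */
    public static String requireKey(String envVar) {
        String key = System.getenv(envVar);
        if (key == null || key.isEmpty()) {
            throw new IllegalStateException("Missing environment variable: " + envVar);
        }
        return key;
    }

    public static void main(String[] args) {
        String apiKey = requireKey("OPENAI_API_KEY");
        System.out.println("API key loaded, length = " + apiKey.length());
    }
}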

2. Calling the OpenAI API

Text generation example
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.json.JSONObject;

public class OpenAIExample {

    private static final String API_KEY = "your_openai_api_key";
    private static final String ENDPOINT = "https://api.openai.com/v1/completions";

    public static void main(String[] args) {
        try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
            HttpPost request = new HttpPost(ENDPOINT);
            request.addHeader("Content-Type", "application/json");
            request.addHeader("Authorization", "Bearer " + API_KEY);

            JSONObject json = new JSONObject();
            json.put("model", "text-davinci-003");
            json.put("prompt", "Write a poem about the sea");
            json.put("max_tokens", 100);

            StringEntity entity = new StringEntity(json.toString());
            request.setEntity(entity);

            try (CloseableHttpResponse response = httpClient.execute(request)) {
                String responseString = EntityUtils.toString(response.getEntity());
                JSONObject responseJson = new JSONObject(responseString);
                System.out.println(responseJson.toString(2));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
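
Note that the text-davinci-003 model used above has since been retired by OpenAI; newer integrations typically call the /v1/chat/completions endpoint instead, which takes a list of messages rather than a single prompt. The following is a minimal sketch of that variant, assuming the gpt-3.5-turbo model is available on the account:

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.json.JSONArray;
import org.json.JSONObject;

public class OpenAIChatExample {

    private static final String API_KEY = "your_openai_api_key";
    private static final String ENDPOINT = "https://api.openai.com/v1/chat/completions";

    public static void main(String[] args) {
        try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
            HttpPost request = new HttpPost(ENDPOINT);
            request.addHeader("Content-Type", "application/json");
            request.addHeader("Authorization", "Bearer " + API_KEY);

            // Chat-style request body: a messages array instead of a single prompt
            JSONObject json = new JSONObject();
            json.put("model", "gpt-3.5-turbo");
            json.put("messages", new JSONArray()
                    .put(new JSONObject().put("role", "user").put("content", "Write a poem about the sea")));
            json.put("max_tokens", 100);

            request.setEntity(new StringEntity(json.toString()));

            try (CloseableHttpResponse response = httpClient.execute(request)) {
                JSONObject responseJson = new JSONObject(EntityUtils.toString(response.getEntity()));
                // The generated text is located at choices[0].message.content
                String text = responseJson.getJSONArray("choices")
                        .getJSONObject(0)
                        .getJSONObject("message")
                        .getString("content");
                System.out.println(text);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}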

3. Calling the Google Cloud AI API

Speech recognition example
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.speech.v1.*;
import com.google.protobuf.ByteString;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class GoogleCloudAIExample {

    public static void main(String[] args) throws Exception {
        // Load the Google Cloud credentials file
        GoogleCredentials credentials = GoogleCredentials.fromStream(Files.newInputStream(Paths.get("path/to/credentials.json")));
        SpeechSettings settings = SpeechSettings.newBuilder().setCredentialsProvider(() -> credentials).build();

        try (SpeechClient speechClient = SpeechClient.create(settings)) {
            Path path = Paths.get("path/to/audio.raw");
            byte[] data = Files.readAllBytes(path);
            ByteString audioBytes = ByteString.copyFrom(data);

            RecognitionConfig config = RecognitionConfig.newBuilder()
                    .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                    .setSampleRateHertz(16000)
                    .setLanguageCode("en-US")
                    .build();

            RecognitionAudio audio = RecognitionAudio.newBuilder().setContent(audioBytes).build();

            RecognizeResponse response = speechClient.recognize(config, audio);
            for (SpeechRecognitionResult result : response.getResultsList()) {
                SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
                System.out.printf("Transcript: %s%n", alternative.getTranscript());
            }
        }
    }
}
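
The example above assumes the google-cloud-speech client library is on the classpath and sends the audio bytes inline, which suits short clips. For audio that already lives in Google Cloud Storage, the same recognize call can reference a GCS URI instead; a minimal sketch that could replace the RecognitionAudio construction inside the try block above (the bucket and object names are placeholders):

            // Reference audio stored in Google Cloud Storage instead of sending raw bytes
            RecognitionAudio gcsAudio = RecognitionAudio.newBuilder()
                    .setUri("gs://your-bucket/path/to/audio.raw")
                    .build();
            RecognizeResponse gcsResponse = speechClient.recognize(config, gcsAudio);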

4. Calling the Microsoft Azure AI API

Text translation example
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.json.JSONArray;
import org.json.JSONObject;

public class AzureAIExample {

    private static final String API_KEY = "your_azure_api_key";
    private static final String ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es";

    public static void main(String[] args) {
        try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
            HttpPost request = new HttpPost(ENDPOINT);
            request.setHeader("Content-Type", "application/json");
            request.setHeader("Ocp-Apim-Subscription-Key", API_KEY);
            request.setHeader("Ocp-Apim-Subscription-Region", "your_region");

            JSONArray jsonArray = new JSONArray();
            JSONObject json = new JSONObject();
            json.put("Text", "Hello, how are you?");
            jsonArray.put(json);

            StringEntity entity = new StringEntity(jsonArray.toString());
            request.setEntity(entity);

            try (CloseableHttpResponse response = httpClient.execute(request)) {
                String responseString = EntityUtils.toString(response.getEntity());
                JSONArray responseJsonArray = new JSONArray(responseString);
                System.out.println(responseJsonArray.toString(2));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
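
The Translator v3 response is a JSON array with one element per input text, each containing a translations array. Extracting just the translated string from the responseJsonArray parsed above could therefore look like this, placed inside the inner try block (the field names follow the documented v3 response format):

                // Pull the translated text out of the first result's translations array
                String translated = responseJsonArray.getJSONObject(0)
                        .getJSONArray("translations")
                        .getJSONObject(0)
                        .getString("text");
                System.out.println("Translation: " + translated);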

5. Calling the IBM Watson API

Natural language understanding example
import com.ibm.cloud.sdk.core.security.IamAuthenticator;
import com.ibm.watson.natural_language_understanding.v1.NaturalLanguageUnderstanding;
import com.ibm.watson.natural_language_understanding.v1.model.AnalysisResults;
import com.ibm.watson.natural_language_understanding.v1.model.AnalyzeOptions;
import com.ibm.watson.natural_language_understanding.v1.model.EntitiesOptions;
import com.ibm.watson.natural_language_understanding.v1.model.EntitiesResult;
import com.ibm.watson.natural_language_understanding.v1.model.Features;

public class IBMWatsonExample {

    public static void main(String[] args) {
        IamAuthenticator authenticator = new IamAuthenticator("your_ibm_watson_api_key");
        NaturalLanguageUnderstanding naturalLanguageUnderstanding = new NaturalLanguageUnderstanding("2021-03-25", authenticator);
        naturalLanguageUnderstanding.setServiceUrl("your_service_url");

        EntitiesOptions entitiesOptions = new EntitiesOptions.Builder()
                .sentiment(true)
                .limit(1)
                .build();

        // AnalyzeOptions expects the requested analyses wrapped in a Features object
        Features features = new Features.Builder()
                .entities(entitiesOptions)
                .build();

        AnalyzeOptions parameters = new AnalyzeOptions.Builder()
                .text("IBM is an American multinational technology company headquartered in Armonk, New York, with operations in over 170 countries.")
                .features(features)
                .build();

        AnalysisResults response = naturalLanguageUnderstanding.analyze(parameters).execute().getResult();
        for (EntitiesResult entity : response.getEntities()) {
            System.out.printf("Entity: %s, Sentiment: %s%n", entity.getText(), entity.getSentiment().getScore());
        }
    }
}

The general steps for calling an AI platform are roughly as follows (a generic sketch follows the list):

  1. Obtain an API key and endpoint: register an account on the platform and obtain the API access key and service URL.
  2. Configure the HTTP request: use HttpClient to set the request headers, HTTP method (such as GET or POST), and request body.
  3. Parse the response: receive and parse the data returned by the API; a JSON library can be used to extract the required information.
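
These three steps are the same regardless of vendor, so the HTTP boilerplate can be factored into a small reusable helper. Below is a minimal sketch for platforms that accept a JSON POST with a Bearer token (the class and method names are illustrative, not taken from any SDK):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.json.JSONObject;

public class AIApiClient {

    /**
     * Sends a JSON payload to an AI endpoint using Bearer authentication
     * and returns the parsed JSON response.
     */
    public static JSONObject postJson(String endpoint, String apiKey, JSONObject payload) throws Exception {
        try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
            HttpPost request = new HttpPost(endpoint);
            request.addHeader("Content-Type", "application/json");
            request.addHeader("Authorization", "Bearer " + apiKey);
            request.setEntity(new StringEntity(payload.toString()));

            try (CloseableHttpResponse response = httpClient.execute(request)) {
                return new JSONObject(EntityUtils.toString(response.getEntity()));
            }
        }
    }
}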

6. More AI Platform Examples

Besides the mainstream platforms covered above, there are other popular AI services, such as the AI services of Amazon Web Services (AWS) and Baidu. Below are some usage examples for these platforms.

Calling Amazon AWS AI Services

Sentiment analysis with Amazon Comprehend
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.comprehend.AmazonComprehend;
import com.amazonaws.services.comprehend.AmazonComprehendClientBuilder;
import com.amazonaws.services.comprehend.model.DetectSentimentRequest;
import com.amazonaws.services.comprehend.model.DetectSentimentResult;

public class AWSExample {

    public static void main(String[] args) {
        AWSCredentials awsCredentials = new BasicAWSCredentials("your_aws_access_key", "your_aws_secret_key");
        AmazonComprehend comprehendClient = AmazonComprehendClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
                .withRegion(Regions.US_EAST_1).build();

        String text = "I am so happy to use AWS services!";
        DetectSentimentRequest detectSentimentRequest = new DetectSentimentRequest().withText(text).withLanguageCode("en");

        DetectSentimentResult detectSentimentResult = comprehendClient.detectSentiment(detectSentimentRequest);
        System.out.println("Sentiment: " + detectSentimentResult.getSentiment());
    }
}
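
Besides the overall sentiment label, Comprehend also returns a confidence score for each sentiment class, which can be read from the same result object:

        // Per-class confidence scores (values between 0 and 1)
        System.out.println("Positive: " + detectSentimentResult.getSentimentScore().getPositive());
        System.out.println("Negative: " + detectSentimentResult.getSentimentScore().getNegative());
        System.out.println("Neutral: " + detectSentimentResult.getSentimentScore().getNeutral());
        System.out.println("Mixed: " + detectSentimentResult.getSentimentScore().getMixed());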
Image recognition with Amazon Rekognition
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.DetectLabelsRequest;
import com.amazonaws.services.rekognition.model.DetectLabelsResult;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.Label;
import com.amazonaws.util.IOUtils;

import java.io.FileInputStream;
import java.nio.ByteBuffer;

public class AWSImageExample {

    public static void main(String[] args) throws Exception {
        AWSCredentials awsCredentials = new BasicAWSCredentials("your_aws_access_key", "your_aws_secret_key");
        AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
                .withRegion(Regions.US_EAST_1).build();

        try (FileInputStream inputStream = new FileInputStream("path/to/your/image.jpg")) {
            ByteBuffer imageBytes = ByteBuffer.wrap(IOUtils.toByteArray(inputStream));

            DetectLabelsRequest request = new DetectLabelsRequest()
                    .withImage(new Image().withBytes(imageBytes))
                    .withMaxLabels(10)
                    .withMinConfidence(75F);

            DetectLabelsResult result = rekognitionClient.detectLabels(request);
            for (Label label : result.getLabels()) {
                System.out.println("Label: " + label.getName() + ", Confidence: " + label.getConfidence().toString());
            }
        }
    }
}
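
When the image is already stored in Amazon S3, Rekognition can reference it directly instead of receiving the bytes inline. A minimal sketch of the alternative request construction (the bucket and key names are placeholders):

            // Reference an image stored in S3 rather than uploading the bytes
            DetectLabelsRequest s3Request = new DetectLabelsRequest()
                    .withImage(new Image().withS3Object(
                            new com.amazonaws.services.rekognition.model.S3Object()
                                    .withBucket("your-bucket")
                                    .withName("path/to/your/image.jpg")))
                    .withMaxLabels(10)
                    .withMinConfidence(75F);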

Calling Baidu AI Services

Speech synthesis with Baidu AI
import com.baidu.aip.speech.AipSpeech;
import com.baidu.aip.speech.TtsResponse;
import org.json.JSONObject;
import java.io.FileOutputStream;

public class BaiduAIExample {

    public static final String APP_ID = "your_app_id";
    public static final String API_KEY = "your_api_key";
    public static final String SECRET_KEY = "your_secret_key";

    public static void main(String[] args) {
        AipSpeech client = new AipSpeech(APP_ID, API_KEY, SECRET_KEY);
        
        // Speech synthesis: the SDK returns a TtsResponse; getData() holds the audio bytes,
        // and getResult() is a non-null JSON object only when an error occurred
        TtsResponse res = client.synthesis("欢迎使用百度AI服务", "zh", 1, null);
        JSONObject result = res.getResult();
        if (result != null) {
            System.err.println("Error: " + result.toString());
        } else {
            try (FileOutputStream out = new FileOutputStream("output.mp3")) {
                out.write(res.getData());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
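
The fourth argument of synthesis accepts an options map (java.util.HashMap&lt;String, Object&gt;) for tuning the generated voice; the parameter names below (spd for speed, pit for pitch, vol for volume, per for the voice persona) follow Baidu's TTS parameter conventions:

        // Optional tuning parameters passed as the fourth argument to synthesis
        HashMap<String, Object> options = new HashMap<>();
        options.put("spd", 5);   // speaking speed
        options.put("pit", 5);   // pitch
        options.put("vol", 7);   // volume
        options.put("per", 0);   // voice persona
        TtsResponse tuned = client.synthesis("欢迎使用百度AI服务", "zh", 1, options);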

7. Summary

This article has shown in detail how to call the APIs of several mainstream AI platforms from Java, with examples ranging from text generation and speech recognition to image recognition. Using these platforms can significantly raise an application's level of intelligence and deliver a better user experience.

To sum up, the general steps for calling any of these AI platform APIs are:

  1. Register and configure: sign up on the platform, obtain an API key or access token, and complete any required initial configuration.
  2. Add dependencies: use a build tool (such as Maven or Gradle) to bring in the necessary libraries, for example an HTTP client and a JSON parser.
  3. Write the request code: set the request parameters and headers, then send the request with an HTTP method such as GET or POST.
  4. Handle the response: receive and parse the data returned by the API and process it further as needed.

With these steps, developers can take full advantage of each platform's AI capabilities in Java applications and build richer, smarter features. In real-world development, consult each platform's documentation for the latest interface details and best practices. Also, since production projects involve performance, security, and cost concerns, thorough testing and optimization are recommended before going live.