Flink Study Notes (3): Reading Data from Kafka with Flink and Writing It to Elasticsearch

The previous post covered how Flink reads data from Kafka. Flink ships with many built-in connectors, so how do we write the data we have read into an external store? This post records how to use Flink's Elasticsearch connector.

I. Connectors

Flink provides many connectors, as shown in the figure below; detailed descriptions of each can be found in the official documentation.

Official documentation link

[Figure: overview of the available Flink connectors, from the official documentation]

We covered the Kafka connector in the previous post; this one focuses on the Elasticsearch connector.
First, note which connector artifact matches your Elasticsearch version:

[Figure: Elasticsearch connector / version compatibility table, from the official Flink documentation]

II. Connecting to Elasticsearch 5.x versus 6.x/7.x

For reading from Kafka, refer to the implementation in the previous post.
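For reference, the input stream used in the snippets below can be obtained from a Kafka source roughly like the following minimal sketch. The broker address, consumer group, and topic name (my-topic) are placeholders, and it assumes the universal Kafka connector (flink-connector-kafka_2.11); older Flink versions use the version-specific FlinkKafkaConsumer010/011 classes instead.

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import java.util.Properties;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Kafka connection settings; the broker address and group id are placeholders
Properties kafkaProps = new Properties();
kafkaProps.setProperty("bootstrap.servers", "localhost:9092");
kafkaProps.setProperty("group.id", "flink-es-demo");

// "my-topic" is a placeholder; SimpleStringSchema deserializes each record as a plain String
DataStream<String> input = env.addSource(
        new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), kafkaProps));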

1. The Elasticsearch 5.x connector

As the version table shows, the connector for Elasticsearch 5.x is
flink-connector-elasticsearch5_2.11
and it requires Flink 1.3.0 or later. This connector communicates with the cluster through the Elasticsearch TransportClient, which is why the example below connects to the transport port 9300 rather than the HTTP port.

Add the Maven dependency:

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch5_2.11</artifactId>
    <version>1.5.4</version>
</dependency>

Example from the official documentation:

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink;

import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// obtain the Kafka data stream as described in the previous post (see the sketch above)
DataStream<String> input = ...;

Map<String, String> config = new HashMap<>();
config.put("cluster.name", "my-cluster-name");
// This instructs the sink to emit after every element, otherwise they would be buffered
config.put("bulk.flush.max.actions", "1");

List<InetSocketAddress> transportAddresses = new ArrayList<>();
transportAddresses.add(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9300));
transportAddresses.add(new InetSocketAddress(InetAddress.getByName("10.2.3.1"), 9300));

input.addSink(new ElasticsearchSink<>(config, transportAddresses, new ElasticsearchSinkFunction<String>() {
    public IndexRequest createIndexRequest(String element) {
        Map<String, String> json = new HashMap<>();
        json.put("data", element);

        return Requests.indexRequest()
                .index("my-index")
                .type("my-type")
                .source(json);
    }

    @Override
    public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
        indexer.add(createIndexRequest(element));
    }
}));

2. The Elasticsearch 6.x / 7.x connector

Add the Maven dependency. Unlike the 5.x connector, this one talks to Elasticsearch through its high-level REST client over HTTP (port 9200 by default); on newer Flink releases there is also a dedicated flink-connector-elasticsearch7 artifact for Elasticsearch 7.x.

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch6_2.11</artifactId>
    <version>1.8.3</version>
</dependency>

Example from the official documentation:

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink;

import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

DataStream<String> input = ...;

List<HttpHost> httpHosts = new ArrayList<>();
httpHosts.add(new HttpHost("127.0.0.1", 9200, "http"));
httpHosts.add(new HttpHost("10.2.3.1", 9200, "http"));

// use an ElasticsearchSink.Builder to create an ElasticsearchSink
ElasticsearchSink.Builder<String> esSinkBuilder = new ElasticsearchSink.Builder<>(
    httpHosts,
    new ElasticsearchSinkFunction<String>() {
        public IndexRequest createIndexRequest(String element) {
            Map<String, String> json = new HashMap<>();
            json.put("data", element);

            return Requests.indexRequest()
                    .index("my-index")
                    .type("my-type")
                    .source(json);
        }

        @Override
        public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
            indexer.add(createIndexRequest(element));
        }
    }
);

// configuration for the bulk requests; this instructs the sink to emit after every element, otherwise they would be buffered
esSinkBuilder.setBulkFlushMaxActions(1);

// provide a RestClientFactory for custom configuration on the internally created REST client
esSinkBuilder.setRestClientFactory(
  restClientBuilder -> {
    restClientBuilder.setDefaultHeaders(...)
    restClientBuilder.setMaxRetryTimeoutMillis(...)
    restClientBuilder.setPathPrefix(...)
    restClientBuilder.setHttpClientConfigCallback(...)
  }
);

// finally, build and add the sink to the job's pipeline
input.addSink(esSinkBuilder.build());
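The setBulkFlushMaxActions(1) call above makes the sink flush after every single record, which is handy for testing but wasteful in production. Before build() is called, the builder also accepts further buffering and back-off settings; the following is only a sketch with arbitrary example values, based on the elasticsearch6 connector's ElasticsearchSink.Builder API:

import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase;

// configure on esSinkBuilder before esSinkBuilder.build() is called
esSinkBuilder.setBulkFlushMaxActions(500);      // flush once 500 requests are buffered...
esSinkBuilder.setBulkFlushMaxSizeMb(5);         // ...or once the batch reaches 5 MB...
esSinkBuilder.setBulkFlushInterval(10000);      // ...or at the latest every 10 seconds

// back off and retry bulk requests that Elasticsearch temporarily rejects
esSinkBuilder.setBulkFlushBackoff(true);
esSinkBuilder.setBulkFlushBackoffType(ElasticsearchSinkBase.FlushBackoffType.EXPONENTIAL);
esSinkBuilder.setBulkFlushBackoffRetries(3);
esSinkBuilder.setBulkFlushBackoffDelay(1000);   // initial back-off delay in milliseconds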

A reference implementation:

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch.util.RetryRejectedExecutionFailureHandler;
import org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink;
import org.apache.flink.streaming.connectors.elasticsearch6.RestClientFactory;

import org.apache.http.Header;
import org.apache.http.HttpHost;
import org.apache.http.message.BasicHeader;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;
import org.elasticsearch.client.RestClientBuilder;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public static void write2es(List<HttpHost> httpHosts, DataStream<Object> dataStream, String index, String type) {
    ElasticsearchSink.Builder<Object> esSinkBuilder = new ElasticsearchSink.Builder<Object>(httpHosts, new ElasticsearchSinkFunction<Object>() {
        public List<IndexRequest> createIndexRequest(Object event) {
            List<IndexRequest> indexRequestList = new ArrayList<>();
            // CustomElasticSearchMap is the author's own helper that converts the event object into a Map for Elasticsearch
            Map<String, String> map = CustomElasticSearchMap.getObjectToEsMap(event);
            // the document type is fixed to "_doc" (types are deprecated since Elasticsearch 6.x), so the type parameter is not used here
            indexRequestList.add(Requests.indexRequest()
                    .index(index)
                    .type("_doc")
                    .source(map));

            return indexRequestList;
        }

        @Override
        public void process(Object event, RuntimeContext runtimeContext, RequestIndexer requestIndexer) {
            List<IndexRequest> indexRequestList = createIndexRequest(event);
            for (int i = 0; i < indexRequestList.size(); i++) {
                requestIndexer.add(indexRequestList.get(i));
            }
        }
    });

    esSinkBuilder.setRestClientFactory(new RestClientFactory() {
        @Override
        public void configureRestClientBuilder(RestClientBuilder restClientBuilder) {
            Header[] headers = new BasicHeader[]{new BasicHeader("Content-Type", "application/json")};
            restClientBuilder.setDefaultHeaders(headers); // multiple headers can be added through the array
        }
    });
    // flush after every element (convenient for testing; raise this in production)
    esSinkBuilder.setBulkFlushMaxActions(1);
    // retry requests that Elasticsearch rejected because its bulk queue was full
    esSinkBuilder.setFailureHandler(new RetryRejectedExecutionFailureHandler());
    dataStream.addSink(esSinkBuilder.build());
}
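
A hypothetical call site for write2es might look like the following; the host address, index name, and the upstream objectStream variable are placeholders introduced here only for illustration:

List<HttpHost> httpHosts = new ArrayList<>();
httpHosts.add(new HttpHost("127.0.0.1", 9200, "http"));

// objectStream stands in for whatever DataStream<Object> the job produces upstream
write2es(httpHosts, objectStream, "my-index", "_doc");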
III. Appendix: writing Kafka data to Elasticsearch 7 with Flink 1.12

To write Kafka data into Elasticsearch with Flink 1.12, the steps are similar:

1. Add the Flink Kafka and Elasticsearch connector dependencies to the project:

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.11</artifactId>
    <version>1.12.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch7_2.11</artifactId>
    <version>1.12.0</version>
</dependency>

2. Create a Flink streaming job that uses Kafka as its source:

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "test");

DataStream<String> stream = env.addSource(new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props));

3. Convert the records into an Elasticsearch document format and write them to Elasticsearch:

List<HttpHost> httpHosts = new ArrayList<>();
httpHosts.add(new HttpHost("localhost", 9200, "http"));

stream.map(new MapFunction<String, Map<String, Object>>() {
    @Override
    public Map<String, Object> map(String value) throws Exception {
        // convert the record into an Elasticsearch document
        Map<String, Object> data = new HashMap<>();
        data.put("message", value);
        data.put("@timestamp", new Date());
        return data;
    }
}).addSink(new ElasticsearchSink.Builder<>(httpHosts, new ElasticsearchSinkFunction<Map<String, Object>>() {
    @Override
    public void process(Map<String, Object> element, RuntimeContext ctx, RequestIndexer indexer) {
        // write the document to Elasticsearch
        IndexRequest request = Requests.indexRequest()
                .index("my-index")
                .source(element);
        indexer.add(request);
    }
}).build());

Here the Kafka records are mapped into an Elasticsearch document format and then written to Elasticsearch by the ElasticsearchSinkFunction.
