Preface
Flink currently does not ship with an Elasticsearch source connector, so reading data from Elasticsearch through Flink SQL requires implementing a custom ES source.

References
1. Official docs: User-defined Sources & Sinks | Apache Flink
2. Blog: Flink自定义实现ElasticSearch Table Source_不会心跳的博客-CSDN博客
Environment: Flink 1.13, Elasticsearch 7.5.1
Steps
1. Define a dynamic table factory ESSqlFactory that implements the DynamicTableSourceFactory interface.
The custom source name, elasticsearch-source, is declared in the factoryIdentifier() method.
package com.tang.elasticsearch.source;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.ReadableConfig;
import org.apache.flink.table.connector.format.DecodingFormat;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.factories.DeserializationFormatFactory;
import org.apache.flink.table.factories.DynamicTableSourceFactory;
import org.apache.flink.table.factories.FactoryUtil;
import org.apache.flink.table.types.DataType;

import java.util.HashSet;
import java.util.Set;

/**
 * Factory class for the custom Elasticsearch table source.
 *
 * @author tang
 * @date 2021/11/14 22:05
 */
public class ESSqlFactory implements DynamicTableSourceFactory {

    public static final ConfigOption<String> HOSTS = ConfigOptions.key("hosts").stringType().noDefaultValue();
    public static final ConfigOption<String> USERNAME = ConfigOptions.key("username").stringType().noDefaultValue();
    public static final ConfigOption<String> PASSWORD = ConfigOptions.key("password").stringType().noDefaultValue();
    public static final ConfigOption<String> INDEX = ConfigOptions.key("index").stringType().noDefaultValue();
    public static final ConfigOption<String> DOCUMENT_TYPE = ConfigOptions.key("document-type").stringType().noDefaultValue();
    public static final ConfigOption<Integer> FETCH_SIZE = ConfigOptions.key("fetch_size").intType().noDefaultValue();

    /**
     * Connector name, referenced from the WITH clause as 'connector' = 'elasticsearch-source'.
     */
    @Override
    public String factoryIdentifier() {
        return "elasticsearch-source";
    }

    /**
     * Required options.
     */
    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        final Set<ConfigOption<?>> options = new HashSet<>();
        options.add(HOSTS);
        options.add(INDEX);
        options.add(USERNAME);
        options.add(PASSWORD);
        // use the pre-defined option for the format
        options.add(FactoryUtil.FORMAT);
        return options;
    }

    /**
     * Optional options.
     */
    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        final Set<ConfigOption<?>> options = new HashSet<>();
        options.add(FETCH_SIZE);
        options.add(DOCUMENT_TYPE);
        return options;
    }
    @Override
    public DynamicTableSource createDynamicTableSource(Context context) {
        final FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);
        // discover the decoding format (e.g. 'json') declared by the 'format' option
        final DecodingFormat<DeserializationSchema<RowData>> decodingFormat =
                helper.discoverDecodingFormat(DeserializationFormatFactory.class, FactoryUtil.FORMAT);
        // validate all options
        helper.validate();
        // read the connector options
        final ReadableConfig options = helper.getOptions();
        final String hosts = options.get(HOSTS);
        final String username = options.get(USERNAME);
        final String password = options.get(PASSWORD);
        final String index = options.get(INDEX);
        final String documentType = options.get(DOCUMENT_TYPE);
        final Integer fetchSize = options.get(FETCH_SIZE);
        // derive the physical data type of the table schema
        final DataType producedDataType = context.getCatalogTable().getResolvedSchema().toPhysicalRowDataType();
        // ESDynamicTableSource is the custom DynamicTableSource implemented in the following steps;
        // its constructor signature here is an assumption for illustration
        return new ESDynamicTableSource(hosts, username, password, index, documentType, fetchSize, decodingFormat, producedDataType);
    }
}
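Flink discovers table factories through Java's SPI mechanism, so the factory class must also be registered in a service file inside the connector jar. Assuming the package name from the code above, create the file src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory containing the single line:

com.tang.elasticsearch.source.ESSqlFactory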

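Once the jar with the factory and the service file is on the classpath, the connector can be referenced from Flink SQL by its identifier. Below is a minimal, hypothetical usage sketch: the table schema, host, credentials and index name are placeholders, and the 'json' format additionally requires the flink-json dependency.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ESSourceExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());
        // register a table backed by the custom elasticsearch-source connector;
        // schema and option values are placeholders for illustration
        tEnv.executeSql(
                "CREATE TABLE es_user (\n" +
                "  id STRING,\n" +
                "  name STRING,\n" +
                "  age INT\n" +
                ") WITH (\n" +
                "  'connector' = 'elasticsearch-source',\n" +
                "  'hosts' = 'http://127.0.0.1:9200',\n" +
                "  'username' = 'elastic',\n" +
                "  'password' = '123456',\n" +
                "  'index' = 'user',\n" +
                "  'fetch_size' = '1000',\n" +
                "  'format' = 'json'\n" +
                ")");
        // read from Elasticsearch through the custom source
        tEnv.executeSql("SELECT * FROM es_user").print();
    }
}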