Flink has solid built-in support for reading from and writing to Kafka with Kerberos authentication.
1 In flink-conf.yaml, uncomment the Kerberos-related settings.
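Once uncommented, the Kerberos section of flink-conf.yaml typically looks like the following sketch; the keytab path and principal are placeholders that must be replaced with your own values:

```yaml
# Use a keytab instead of a ticket cache (placeholder path/principal below)
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/kafka-client.keytab
security.kerberos.login.principal: kafka-client@EXAMPLE.COM
# Expose the credentials to the JAAS contexts the Kafka connector uses
security.kerberos.login.contexts: Client,KafkaClient
```

The `KafkaClient` context is what the Flink Kafka connector picks up; `Client` additionally covers ZooKeeper if your cluster secures it.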
2 Example code for reading from and writing to Kafka:
import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.{CheckpointingMode, TimeCharacteristic}
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer, FlinkKafkaProducer}

object KafkaKerberosJob {
  def main(args: Array[String]): Unit = {
    val params: ParameterTool = ParameterTool.fromArgs(args)
    // set up the execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // make parameters available in the web interface
    env.getConfig.setGlobalJobParameters(params)
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
    env.enableCheckpointing(10 * 1000, CheckpointingMode.EXACTLY_ONCE)

    val properties = new Properties()
    properties.setProperty("bootstrap.servers",
      "192.168.142.13:9092,192.168.142.11:9092,192.168.142.12:9092")
    properties.setProperty("group.id", "test")
    // Kerberos-related Kafka client settings
    properties.setProperty("security.protocol", "SASL_PLAINTEXT")
    properties.setProperty("sasl.mechanism", "GSSAPI")
    properties.setProperty("sasl.kerberos.service.name", "kafka")

    // read from Kafka - source
    val stream = env
      .addSource(new FlinkKafkaConsumer[String]("test", new SimpleStringSchema(), properties))
    // stream.print()

    // write to Kafka - sink
    val myProducer = new FlinkKafkaProducer[String]("sink-topic", new SimpleStringSchema(), properties)
    stream.addSink(myProducer).setParallelism(1)

    env.execute()
  }
}
3 In the Kafka console, create two topics: test for consuming and sink-topic for writing.
Manually produce data from the console, start a console consumer on sink-topic, then deploy the Flink job and check whether records arrive in sink-topic.
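The console steps above can be sketched roughly as follows. This assumes a recent Kafka distribution (flag names vary across versions) and a client.properties file (hypothetical name) containing the same SASL/Kerberos settings as the job, i.e. security.protocol=SASL_PLAINTEXT, sasl.mechanism=GSSAPI, and sasl.kerberos.service.name=kafka, plus a JAAS config passed via KAFKA_OPTS:

```shell
# create the source and sink topics (broker address taken from the job above)
kafka-topics.sh --create --bootstrap-server 192.168.142.13:9092 \
  --command-config client.properties --topic test --partitions 1 --replication-factor 1
kafka-topics.sh --create --bootstrap-server 192.168.142.13:9092 \
  --command-config client.properties --topic sink-topic --partitions 1 --replication-factor 1

# manually produce records into the source topic
kafka-console-producer.sh --bootstrap-server 192.168.142.13:9092 \
  --producer.config client.properties --topic test

# in another terminal, watch the sink topic for the job's output
kafka-console-consumer.sh --bootstrap-server 192.168.142.13:9092 \
  --consumer.config client.properties --topic sink-topic
```

If the deployed Flink job is working, every line typed into the producer on test should show up shortly afterwards in the consumer on sink-topic.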