Example: connecting Flink 1.9.1 (Scala 2.12) to Kafka 2.4.0 (kafka_2.11-2.4.0)

1. Add the required dependencies

    <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka_2.12</artifactId>
        <version>1.9.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-scala -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-scala_2.12</artifactId>
        <version>1.9.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-scala -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala_2.12</artifactId>
        <version>1.9.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-core -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-core</artifactId>
        <version>1.9.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka -->
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.13</artifactId>
        <version>2.4.0</version>
    </dependency>
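
If the POM does not already define them, it can help to factor the Flink version and the Scala binary version into Maven properties so the Flink artifacts stay in sync. The property names below (flink.version, scala.binary.version) are illustrative and not taken from the original project:

    <properties>
        <flink.version>1.9.1</flink.version>
        <scala.binary.version>2.12</scala.binary.version>
    </properties>

    <!-- referenced as, for example -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>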

2. Create a Scala object and write the code

package com.vincer

import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
// implicit conversions needed by operators such as flatMap and map
import org.apache.flink.api.scala._

/**
 * @Package com.vincer
 * @ClassName conkafka
 * @Author Vincer
 * @Date 2020/01/13 22:31
 * @ProjectName kafka-flink
 */
object conkafka {

  def main(args: Array[String]): Unit = {
    val kafkaProps = new Properties()
    // Kafka connection properties
    kafkaProps.setProperty("bootstrap.servers", "hadoop100:9092")
    // consumer group this job belongs to
    kafkaProps.setProperty("group.id", "group_test")
    // get the current execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Kafka consumer; "test1" is the topic to consume
    val kafkaSource = new FlinkKafkaConsumer[String]("test1", new SimpleStringSchema, kafkaProps)
    // optionally start consuming from the latest offset
    //kafkaSource.setStartFromLatest()
    // commit offsets back to Kafka when a checkpoint completes
    kafkaSource.setCommitOffsetsOnCheckpoints(true)

    // checkpoint interval in milliseconds
    env.enableCheckpointing(5000)
    // add the consumer as a source, running with parallelism 3
    val stream = env.addSource(kafkaSource).setParallelism(3)

    stream.print()
    // launch the job
    env.execute("kafkawd")
  }
}
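
The job name "kafkawd" hints at a word count; the org.apache.flink.api.scala._ implicits imported above are what allow flatMap/map/keyBy to be called directly on the stream. The following is a minimal, illustrative extension (not part of the original code) that would go inside main, before env.execute:

    // split each Kafka record into words and keep a running count per word
    val counts = stream
      .flatMap(_.toLowerCase.split("\\s+"))
      .filter(_.nonEmpty)
      .map(word => (word, 1))
      .keyBy(_._1)   // key by the word
      .sum(1)        // running sum of the counts

    counts.print()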

3. Start ZooKeeper and Kafka

(steps omitted)
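
For reference, with a single broker the standard scripts shipped in the Kafka distribution can be used roughly as follows (run from the Kafka installation directory; the host and the partition count below are taken from the code above):

bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/kafka-server-start.sh -daemon config/server.properties
# create the topic consumed by the job; 3 partitions match the source parallelism
bin/kafka-topics.sh --create --bootstrap-server hadoop100:9092 --partitions 3 --replication-factor 1 --topic test1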

4. Start a Kafka console producer to produce data

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test1

5. Run the code

6. Produce messages in the Kafka console producer; they are printed to the IDEA console
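
For example, typing hello flink into the console producer should show up in the IDEA console roughly as:

3> hello flink

The n> prefix added by print() is the index of the parallel subtask that handled the record; the exact number depends on which instance received it.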
