Using Kafka

[root@node2 ~]# kafka-topics.sh --zookeeper master:2181,node1:2181,node2:2181 --replication-factor 3 --partitions 3 --topic test_topic1 --create
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "test_topic1".
[root@node2 ~]# kafka-topics.sh --zookeeper master:2181,node1:2181,node2:2181 --list
test_topic1
[root@node2 ~]# kafka-topics.sh --zookeeper master:2181,node1:2181,node2:2181 --topic test_topic1 --describe 
Topic:test_topic1	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test_topic1	Partition: 0	Leader: 2    Replicas: 2,1,0	Isr: 2,1,0
	Topic: test_topic1	Partition: 1	Leader: 0    Replicas: 0,2,1	Isr: 0,2,1
	Topic: test_topic1	Partition: 2	Leader: 1    Replicas: 1,0,2	Isr: 1,0,2
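The describe output above follows Kafka's round-robin replica placement: the leader of each successive partition moves one broker forward, and follower replicas are placed with a fixed shift so that no broker holds two replicas of the same partition. A minimal sketch of that scheme (an illustration, not Kafka's actual source code) which reproduces the layout printed above:

```scala
// Simplified round-robin replica placement (illustrative sketch, not
// Kafka's source). Broker ids are assumed to be 0 until n.
object ReplicaAssignment {
  // startIndex: broker chosen as leader of partition 0
  // shift: extra offset used when placing follower replicas
  def assign(n: Int, partitions: Int, rf: Int,
             startIndex: Int, shift: Int): Seq[Seq[Int]] =
    (0 until partitions).map { p =>
      val first = (startIndex + p) % n          // leader for partition p
      first +: (1 until rf).map { j =>          // followers for partition p
        (first + 1 + (shift + j - 1) % (n - 1)) % n
      }
    }
}
```

With startIndex = 2 and shift = 1 this yields exactly the Replicas column shown above: [2,1,0], [0,2,1], [1,0,2]. In real Kafka, startIndex and shift are chosen randomly at topic-creation time.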
[root@node2 ~]# kafka-console-producer.sh --broker-list master:9092,node1:9092,node2:9092 --topic test_topic1
>a
>s
>as
>d
>s
>ddff
>ddff
>we
[root@node1 ~]# kafka-console-consumer.sh --bootstrap-server master:9092,node1:9092,node2:9092 --topic test_topic1 --from-beginning
a
d
ddff
as
ddff
s
s
we
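Note that the consumer prints the messages in a different order than they were produced: test_topic1 has three partitions, and Kafka only guarantees ordering within a single partition, so records from different partitions interleave. A small sketch of why per-partition order survives while global order does not (assigning keyless records round-robin is an assumption made here for illustration):

```scala
// Keyless console-producer records are spread across partitions;
// assume simple round-robin assignment for illustration.
val produced = Seq("a", "s", "as", "d", "s", "ddff", "ddff", "we")
val numPartitions = 3
val byPartition: Map[Int, Seq[String]] =
  produced.zipWithIndex
    .groupBy { case (_, i) => i % numPartitions }
    .map { case (p, msgs) => p -> msgs.map(_._1) }

// One possible consumption order: drain partition 0, then 1, then 2.
// Order inside each partition is preserved; global order is not.
val consumed = (0 until numPartitions).flatMap(byPartition)
```

The real interleaving depends on fetch timing, so a second run may print a different global order; only the per-partition subsequences are guaranteed.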
[root@node2 ~]# zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, controller_epoch, controller, brokers, zookeeper, admin, isr_change_notification, consumers, log_dir_event_notification, latest_producer_id_block, config, hbase]
[zk: localhost:2181(CONNECTED) 1] ls /brokers
[ids, topics, seqid]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/topics
[test_topic1, __consumer_offsets]
[zk: localhost:2181(CONNECTED) 3] ls /brokers/topics/__consumer_offsets
[partitions]
[zk: localhost:2181(CONNECTED) 4] ls /brokers/topics/__consumer_offsets/partitions
[44, 45, 46, 47, 48, 49, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43]
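The __consumer_offsets topic is created with 50 partitions by default (offsets.topic.num.partitions), which is why 0 through 49 appear above. A consumer group's committed offsets all land in one of those partitions, chosen by hashing the group id; a sketch of that mapping (the modulo-on-hashCode scheme follows Kafka's default behavior):

```scala
// Which __consumer_offsets partition holds a given group's committed
// offsets (sketch of Kafka's default: abs(hash of group id) mod 50).
def offsetsPartition(groupId: String, numPartitions: Int = 50): Int =
  Math.abs(groupId.hashCode) % numPartitions
```

For example, the group id "test" used by the Flink job later in this post maps to partition 48.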
[zk: localhost:2181(CONNECTED) 6] ls /config
[changes, clients, topics]
[zk: localhost:2181(CONNECTED) 7] ls /config/clients
[]
[zk: localhost:2181(CONNECTED) 8] ls /consumers
[]
[root@master ~]# kafka-server-stop.sh
[root@master ~]# jps
10484 QuorumPeerMain
11109 Jps

Resetting Kafka

Stop Kafka

kafka-server-stop.sh
or
kill -9 <pid>

Delete the Kafka metadata in ZooKeeper

zkCli.sh
# remove the Kafka-related znodes (on newer ZooKeeper versions use `deleteall` instead of `rmr`)
ls /
rmr /config
rmr /brokers

Delete Kafka's data directory (this must be done on every node)

rm -rf /usr/local/soft/kafka_2.11-1.0.0/data

Check the config file and restart

kafka-server-start.sh -daemon /usr/local/soft/kafka_2.11-1.0.0/config/server.properties
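Before restarting, the broker config is worth a quick check. The properties below are the ones that most commonly need attention after a reset (the values shown are illustrative for this three-node cluster, matching the hosts used above):

```properties
# /usr/local/soft/kafka_2.11-1.0.0/config/server.properties (excerpt)
broker.id=0                                        # must be unique per node (0/1/2 here)
log.dirs=/usr/local/soft/kafka_2.11-1.0.0/data     # the data directory deleted above
zookeeper.connect=master:2181,node1:2181,node2:2181
listeners=PLAINTEXT://master:9092                  # this node's own hostname
```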

Consuming Kafka from Flink

Add the Flink Kafka connector dependency to the project's pom.xml:

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
package com.shujia.kafka

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

import java.util.Properties

object Demo01KafkaSource {
  def main(args: Array[String]): Unit = {
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

    val properties = new Properties()
    // Kafka broker cluster addresses
    properties.setProperty("bootstrap.servers", "master:9092,node1:9092,node2:9092")
    // Consumer group id
    properties.setProperty("group.id", "test")

    val flinkKafkaConsumer: FlinkKafkaConsumer[String] = new FlinkKafkaConsumer[String]("test_topic1", new SimpleStringSchema(), properties)

    flinkKafkaConsumer.setStartFromEarliest() // start from the earliest offset
    //    flinkKafkaConsumer.setStartFromLatest()        // start from the latest records, ignoring any committed offsets
    //    flinkKafkaConsumer.setStartFromTimestamp(...)  // start from a given epoch timestamp
    //    flinkKafkaConsumer.setStartFromGroupOffsets()  // default: resume from the group's committed offsets (auto.offset.reset applies when none exist)

    // Register the Kafka consumer as a source -- an unbounded stream
    val kafkaDS: DataStream[String] = env.addSource(flinkKafkaConsumer)

    // Word count over the Kafka records
    kafkaDS
      .flatMap(_.split(","))
      .map((_, 1))
      .keyBy(_._1)
      .sum(1)
      .print()

    env.execute()
  }
}
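The aggregation in the job above can be checked without a cluster by running the same transformation chain on an in-memory collection. This is a plain-Scala sketch, not the Flink API; also note that Flink's sum(1) emits a running total for every incoming record, whereas this batch version produces only the final counts:

```scala
// Same word-count logic as the Flink job, applied to a local Seq.
val lines = Seq("a,s", "as,d", "s,ddff", "ddff,we")
val counts: Map[String, Int] =
  lines
    .flatMap(_.split(","))                               // split each line into words
    .map((_, 1))                                         // pair every word with 1
    .groupBy(_._1)                                       // "keyBy" on the word
    .map { case (w, pairs) => w -> pairs.map(_._2).sum } // sum the 1s per word
```

Feeding the producer input from earlier in this post through this chain gives counts like s -> 2 and ddff -> 2, matching what the Flink job's running totals converge to.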
