Kafka Learning Path (1): Installing Kafka and Basic Usage

1. Installation Environment and Software Versions

Linux       CentOS 6 (64-bit)
JDK         jdk-8u191-linux-x64.tar.gz
ZooKeeper   zookeeper-3.4.10.tar.gz
Kafka       kafka_2.11-0.11.0.2

2. Installation

## Extract the archive
-rwxrw-rw-.  1 root root 42136632 Jun 11 01:55 kafka_2.11-0.11.0.2.tgz
drwxr-xr-x. 12 1001 1001     4096 Jun 11 05:35 zookeeper-3.4.10
[root@localhost module]# tar -xvf kafka_2.11-0.11.0.2.tgz 
[root@localhost kafka_2.11-0.11.0.2]# ll
total 56
drwxr-xr-x. 3 root root  4096 Nov 10  2017 bin
drwxr-xr-x. 2 root root  4096 Nov 10  2017 config
drwxr-xr-x. 2 root root  4096 Jun 11 20:09 libs
-rw-r--r--. 1 root root 28824 Nov 10  2017 LICENSE
drwxr-xr-x. 2 root root  4096 Jun 11 20:10 logs
-rw-r--r--. 1 root root   336 Nov 10  2017 NOTICE
drwxr-xr-x. 2 root root  4096 Nov 10  2017 site-docs

## Create a logs directory
[root@localhost kafka_2.11-0.11.0.2]# mkdir logs
[root@localhost kafka_2.11-0.11.0.2]# ll

# Edit the configuration file
[root@localhost kafka_2.11-0.11.0.2]# cd config/
[root@localhost config]# vim server.properties 
broker.id=1  # Globally unique broker ID; must not be duplicated (mine matches this node's ZooKeeper myid)
delete.topic.enable=true
listeners=PLAINTEXT://192.168.8.132:9092
log.dirs=/opt/module/kafka_2.11-0.11.0.2/logs
zookeeper.connect=192.168.8.129:2181,192.168.8.132:2181,192.168.8.133:2181
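
The other brokers in the cluster need the same file with their own broker.id and listener address. A minimal sketch for the node at 192.168.8.133 (the broker.id value here is an assumption; match it to that node's ZooKeeper myid as above):

broker.id=2  # assumed; must be unique per broker
delete.topic.enable=true
listeners=PLAINTEXT://192.168.8.133:9092
log.dirs=/opt/module/kafka_2.11-0.11.0.2/logs
zookeeper.connect=192.168.8.129:2181,192.168.8.132:2181,192.168.8.133:2181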

3. Starting Kafka and Creating Topics/Partitions

Note: the steps below assume the ZooKeeper cluster is already up and running.

# Start the broker
[root@localhost kafka_2.11-0.11.0.2]# bin/kafka-server-start.sh config/server.properties &

# Stop the broker (kafka-server-stop.sh takes no arguments)
[root@localhost kafka_2.11-0.11.0.2]# bin/kafka-server-stop.sh
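
Starting with & leaves the broker attached to the current shell. kafka-server-start.sh also accepts a -daemon flag to run it in the background; jps can then confirm the broker is up:

# Alternative: start as a background daemon
bin/kafka-server-start.sh -daemon config/server.properties
# jps should list a "Kafka" process once the broker is running
jps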

## Create a topic
# --topic: name of the topic
# --replication-factor: number of replicas
# --partitions: number of partitions
[root@localhost kafka_2.11-0.11.0.2]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic test
Created topic "test".
## List topics
[root@localhost kafka_2.11-0.11.0.2]# bin/kafka-topics.sh --zookeeper localhost:2181 --list
test

## Describe a topic
[root@localhost kafka_2.11-0.11.0.2]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic test
Topic:test      PartitionCount:3        ReplicationFactor:3     Configs:        MarkedForDeletion:true
        Topic: test     Partition: 0    Leader: -1      Replicas: 0,1,2 Isr: 2
        Topic: test     Partition: 1    Leader: -1      Replicas: 1,2,0 Isr: 2
        Topic: test     Partition: 2    Leader: -1      Replicas: 2,0,1 Isr: 2
# Note: this output was captured after the topic had already been marked for deletion,
# which is likely why MarkedForDeletion is true, no partition has a leader (Leader: -1),
# and the ISR has shrunk. A healthy topic shows a valid leader and a full ISR per partition.

## Delete a topic
[root@localhost kafka_2.11-0.11.0.2]# bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
Topic test is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
# If any broker in the cluster does not have delete.topic.enable=true,
# the topic is not actually deleted; restart all brokers, then delete again.
# While deletion is pending, the topic appears flagged in the list:
[root@localhost kafka_2.11-0.11.0.2]#  bin/kafka-topics.sh --zookeeper localhost:2181 --list
test - marked for deletion
# You can also delete the topic's registered znode in ZooKeeper:
rmr /brokers/topics/<topic name>
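
For example, from the ZooKeeper CLI (a sketch: zkCli.sh ships with zookeeper-3.4.10, and the install path below assumes the /opt/module layout shown earlier):

/opt/module/zookeeper-3.4.10/bin/zkCli.sh -server 192.168.8.132:2181
# then, inside the zkCli shell (rmr is ZooKeeper 3.4's recursive delete command):
rmr /brokers/topics/test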

4. Basic Usage

## Send messages (use this machine's actual IP in place of localhost)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topicTest

## Consume messages (use this machine's actual IP in place of localhost)
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic topicTest

# Produce
[root@localhost kafka_2.11-0.11.0.2]# bin/kafka-console-producer.sh --broker-list 192.168.8.129:9092 --topic test1
>123
>123
>123
>123
>123
>123
>123
>hool^H^H
>holl
>hello
>hello
>
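
By default the console producer sends messages with a null key, so they are spread round-robin across the three partitions. To send keyed messages instead (records with the same key always land in the same partition), the console producer supports the parse.key and key.separator properties, e.g.:

bin/kafka-console-producer.sh --broker-list 192.168.8.129:9092 --topic test1 --property parse.key=true --property key.separator=: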

# Consumer 1
[root@localhost kafka_2.11-0.11.0.2]#  bin/kafka-console-consumer.sh --zookeeper 192.168.8.132:2181 --from-beginning --topic test1
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
123
123
123
123
123
123
123
hool
holl
hello
hello

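The deprecation warning above recommends the new consumer; the equivalent command talks to a broker via --bootstrap-server instead of ZooKeeper:

bin/kafka-console-consumer.sh --bootstrap-server 192.168.8.132:9092 --from-beginning --topic test1
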
# Consumer 2: after quitting midway and rejoining, the earlier messages come back out of order.
# This is expected: the topic has three partitions, and Kafka only guarantees ordering
# within a single partition, not across partitions.
[root@localhost kafka_2.11-0.11.0.2]# bin/kafka-console-consumer.sh --zookeeper 192.168.8.133:2181 --from-beginning --topic test1
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
123
123
123
hello
123
123
hool
123
123
holl
hello

 

5. Using Kafka from Java (Overview)

Apache Kafka is a distributed streaming platform, mainly used to build real-time data pipelines and streaming applications. It offers high performance, scalability, and reliability. Using Kafka from Java typically involves the following steps:

1. Add the dependency. In a Maven project, add the Kafka client library to pom.xml:

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version><!-- latest version --></version>
</dependency>
```

2. Create a Kafka producer. The producer sends messages to a topic in the Kafka cluster:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // Kafka broker address
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Producer<String, String> producer = new KafkaProducer<>(props);
ProducerRecord<String, String> record = new ProducerRecord<>("your_topic", "key", "value");
producer.send(record);
producer.close();
```

3. Create a Kafka consumer. The consumer pulls messages from a topic in the Kafka cluster:

```java
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // Kafka broker address
props.put("group.id", "your_group_id");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

Consumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("your_topic"));
try {
    while (true) {
        // Poll for new records with a 100 ms timeout
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset = %d, key = %s, value = %s%n",
                    record.offset(), record.key(), record.value());
        }
    }
} finally {
    consumer.close();
}
```

4. Configuration and management: Kafka exposes many settings through configuration files, such as the number of partitions, the replication factor, and the message retention policy.

5. Producer and consumer APIs: Kafka provides a rich API to control producer and consumer behavior, including acknowledgments, batching, asynchronous sends, and transactions.

6. Error handling and monitoring: handle the errors that can occur and monitor the health of producers and consumers to ensure messages are produced and consumed successfully.
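
To run these snippets against the cluster built above, set bootstrap.servers to one of the brokers, e.g. 192.168.8.129:9092, and it is safest to pick a kafka-clients version matching the broker (0.11.0.2 here).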