2019-04-17 Notes: Kafka Cluster

ZooKeeper Cluster Setup

Plan and prepare in advance

Hostname        IP
linux2019_01    192.168.85.129
linux2019_02    192.168.85.128
linux2019_03    192.168.85.130

Set the hostname on each machine, configure /etc/hosts, disable SELinux and firewalld, and install the JDK.
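A minimal sketch of that prep work on one node (the hostnames and IPs are the ones planned above; the JDK package name is my assumption, any JDK 8 works):

hostnamectl set-hostname linux2019_01          # use _02 / _03 on the other nodes
cat >> /etc/hosts << 'EOF'
192.168.85.129 linux2019_01
192.168.85.128 linux2019_02
192.168.85.130 linux2019_03
EOF
setenforce 0                                   # disable SELinux for the current boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel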

  1. Install and deploy ZooKeeper
[root@linux2019_01 ~]# cd /usr/local/src
[root@linux2019_01 src]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/stable/zookeeper-3.4.14.tar.gz
[root@linux2019_01 src]# tar zxvf zookeeper-3.4.14.tar.gz 
[root@linux2019_01 src]# mv zookeeper-3.4.14 /usr/local/zookeeper
[root@linux2019_01 src]# cd /usr/local/zookeeper/
[root@linux2019_01 zookeeper]# mkdir data ;mkdir dataLog
[root@linux2019_01 zookeeper]# echo "1" > data/myid
[root@linux2019_01 zookeeper]# cp conf/zoo_sample.cfg conf/kafka_zk.cfg
[root@linux2019_01 zookeeper]# vi conf/kafka_zk.cfg     # edit to the following configuration
 tickTime=2000
 # data directory
 dataDir=/usr/local/zookeeper/data
 dataLogDir=/usr/local/zookeeper/dataLog
 # client port
 clientPort=2181
 # initial sync / election limits, in ticks
 initLimit=5
 syncLimit=2
 # cluster members
 server.1=linux2019_01:2888:3888
 server.2=linux2019_02:2888:3888
 server.3=linux2019_03:2888:3888
  2. Distribute the files
[root@linux2019_01 zookeeper]# scp -r /usr/local/zookeeper linux2019_02:/usr/local/
[root@linux2019_01 zookeeper]# scp -r /usr/local/zookeeper linux2019_03:/usr/local/

# Also update myid on the corresponding nodes:
#   on linux2019_02: echo "2" > /usr/local/zookeeper/data/myid
#   on linux2019_03: echo "3" > /usr/local/zookeeper/data/myid
  3. Add a time-sync cron job (run on all three machines)
[root@linux2019_01 zookeeper]# yum install -y ntpdate
[root@linux2019_01 zookeeper]# ntpdate ntp1.aliyun.com
17 Apr 15:59:11 ntpdate[30287]: adjust time server 120.25.115.20 offset -0.001865 sec
[root@linux2019_01 zookeeper]# echo "5/* * * * * ntpdate ntp1.aliyun.com" >> /var/spool/cron/root 
  4. Start the cluster (run on all three machines)
[root@linux2019_01 zookeeper]# /usr/local/zookeeper/bin/zkServer.sh start /usr/local/zookeeper/conf/kafka_zk.cfg
ZooKeeper JMX enabled by default    # start the cluster
Using config: /usr/local/zookeeper/conf/kafka_zk.cfg
Starting zookeeper ... STARTED
[root@linux2019_01 zookeeper]# /usr/local/zookeeper/bin/zkServer.sh status /usr/local/zookeeper/conf/kafka_zk.cfg
ZooKeeper JMX enabled by default    # check cluster status
Using config: /usr/local/zookeeper/conf/kafka_zk.cfg
Mode: follower
[root@linux2019_01 zookeeper]# /usr/local/zookeeper/bin/zkServer.sh stop /usr/local/zookeeper/conf/kafka_zk.cfg 
ZooKeeper JMX enabled by default    # stop the cluster
Using config: /usr/local/zookeeper/conf/kafka_zk.cfg
Stopping zookeeper ... STOPPED

/usr/local/zookeeper/bin/zkCli.sh -server linux2019_01:2181     # connect to ZooKeeper
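Besides zkServer.sh status, each node can also be probed with ZooKeeper's four-letter-word commands. A quick sketch, assuming nc is installed (these commands are enabled by default on 3.4.x):

echo ruok | nc linux2019_01 2181    # a healthy node answers "imok"
echo stat | nc linux2019_01 2181    # shows mode (leader/follower), client connections, zxid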

Common ZooKeeper Operations

  • List child nodes (ls)
[zk: linux2019_01:2181(CONNECTED) 2] ls /
[zookeeper]
  • Create a node (create)
[zk: linux2019_01:2181(CONNECTED) 3] create /test-node 'abc'
Created /test-node
Note: the node path must start with /; test-node is the node name and 'abc' is the node data.
  • Create an ephemeral node (-e)
[zk: linux2019_01:2181(CONNECTED) 1] create -e /test-node2 '123123'
Created /test-node2
  • Create a sequential node; a numeric suffix is appended automatically (-s)
[zk: linux2019_01:2181(CONNECTED) 2] create -s  /s-node 'alfkaof'   
Created /s-node0000000002
[zk: linux2019_01:2181(CONNECTED) 3] create -s  /s-node '123123123'
Created /s-node0000000003
  • Create an ephemeral sequential node (-e -s)
[zk: linux2019_01:2181(CONNECTED) 4] create -e -s  /e-s-node 'what a nice day!' 
Created /e-s-node0000000004
  • Show node status (stat)
[zk: linux2019_01:2181(CONNECTED) 5] stat /test-node
cZxid = 0x100000004
ctime = Wed Apr 17 16:09:56 CST 2019
mZxid = 0x100000004
mtime = Wed Apr 17 16:09:56 CST 2019
pZxid = 0x100000004
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 0
  • Read node data (get)
[zk: linux2019_01:2181(CONNECTED) 6] ls /           
[test-node, test-node2, s-node0000000003, s-node0000000002, zookeeper, e-s-node0000000004]
[zk: linux2019_01:2181(CONNECTED) 7] get /e-s-node0000000004
what a nice da!
cZxid = 0x10000000a
ctime = Wed Apr 17 16:14:43 CST 2019
mZxid = 0x10000000a
mtime = Wed Apr 17 16:14:43 CST 2019
pZxid = 0x10000000a
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x100016f91330001
dataLength = 15
numChildren = 0
  • Set node data (set)
[zk: linux2019_01:2181(CONNECTED) 9] get /test-node2
123123
cZxid = 0x100000007
ctime = Wed Apr 17 16:12:29 CST 2019
mZxid = 0x100000007
mtime = Wed Apr 17 16:12:29 CST 2019
pZxid = 0x100000007
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x100016f91330001
dataLength = 6
numChildren = 0
[zk: linux2019_01:2181(CONNECTED) 10] set /test-node

test-node    test-node2
[zk: linux2019_01:2181(CONNECTED) 10] set /test-node2 'linuxos'  
cZxid = 0x100000007
ctime = Wed Apr 17 16:12:29 CST 2019
mZxid = 0x10000000b
mtime = Wed Apr 17 16:17:37 CST 2019
pZxid = 0x100000007
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x100016f91330001
dataLength = 7
numChildren = 0
[zk: linux2019_01:2181(CONNECTED) 11] get /test-node

test-node    test-node2
[zk: linux2019_01:2181(CONNECTED) 11] get /test-node2
linuxos
cZxid = 0x100000007
ctime = Wed Apr 17 16:12:29 CST 2019
mZxid = 0x10000000b
mtime = Wed Apr 17 16:17:37 CST 2019
pZxid = 0x100000007
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x100016f91330001
dataLength = 7
numChildren = 0
  • Delete a node (delete)
[zk: linux2019_01:2181(CONNECTED) 13] delete /test-node2
[zk: linux2019_01:2181(CONNECTED) 14] ls /
[test-node, s-node0000000003, s-node0000000002, zookeeper, e-s-node0000000004]
[zk: linux2019_01:2181(CONNECTED) 19] create /test-node 'abcba'
Created /test-node
[zk: linux2019_01:2181(CONNECTED) 20] create /test-node/abc 'aaewffewf'
Created /test-node/abc
[zk: linux2019_01:2181(CONNECTED) 21] delete /test-node
Node not empty: /test-node
# If /test-node has child nodes, delete fails; use the recursive delete command rmr instead
[zk: linux2019_01:2181(CONNECTED) 22] rmr /test-node
[zk: linux2019_01:2181(CONNECTED) 23] ls /
[s-node0000000003, s-node0000000002, zookeeper, e-s-node0000000004]
  • Set a node ACL (setAcl)
[zk: linux2019_01:2181(CONNECTED) 27] setAcl /test-node ip:192.168.85.129:rcdwa
cZxid = 0x100000013
ctime = Wed Apr 17 16:27:03 CST 2019
mZxid = 0x100000013
mtime = Wed Apr 17 16:27:03 CST 2019
pZxid = 0x100000013
cversion = 0
dataVersion = 0
aclVersion = 1
ephemeralOwner = 0x0
dataLength = 3
numChildren = 0
  1. CREATE(c): permission to create child nodes
  2. DELETE(d): permission to delete child nodes
  3. READ(r): permission to read node data and list children
  4. WRITE(w): permission to modify node data
  5. ADMIN(a): permission to set the node's ACL
  • Get a node ACL (getAcl)
[zk: linux2019_01:2181(CONNECTED) 28] getAcl /test-node
'ip,'192.168.85.129
: cdrwa

More command references

  1. https://blog.csdn.net/xyang81/article/details/53053642
  2. https://blog.csdn.net/xyang81/article/details/53147894
Kafka Cluster Setup
[root@linux2019_01 ~]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.2.0/kafka_2.12-2.2.0.tgz
[root@linux2019_01 ~]# tar zxf kafka_2.12-2.2.0.tgz
[root@linux2019_01 ~]# mv kafka_2.12-2.2.0 /usr/local/kafka
[root@linux2019_01 ~]# cd /usr/local/kafka
[root@linux2019_01 kafka]# mkdir logs

[root@linux2019_01 kafka]#  vim config/server.properties # configure as follows
broker.id=1
# unique ID of this broker in the cluster, same idea as ZooKeeper's myid
port=9092
# port on which this Kafka broker serves clients (default 9092)
host.name=192.168.85.129
# IP of this machine
num.network.threads=3
# number of threads the broker uses for network processing
num.io.threads=8
# number of threads the broker uses for disk I/O
log.dirs=/usr/local/kafka/logs
# directory where messages are stored; multiple comma-separated directories may be given
# (keep num.io.threads at least as large as the number of directories); partitions of a new
# topic are placed in whichever directory currently holds the fewest partitions; see the
# multi-directory example after this block
socket.send.buffer.bytes=102400
# send buffer size; data is buffered before being sent, which improves throughput
socket.receive.buffer.bytes=102400
# receive buffer size; received data is buffered before being flushed to disk
socket.request.max.bytes=104857600
# maximum size of a single request to the broker; must not exceed the JVM heap size
num.partitions=1
# default number of partitions per topic
log.retention.hours=168
# default message retention time: 168 hours (7 days)
message.max.bytes=5242880
# maximum size of a single message: 5 MB
default.replication.factor=2
# number of replicas per partition; if one replica fails, another can keep serving
replica.fetch.max.bytes=5242880
# maximum number of bytes fetched per replica fetch request
log.segment.bytes=1073741824
# messages are appended to segment files; when a segment exceeds this size, a new file is started
log.retention.check.interval.ms=300000
# check every 300000 ms (5 minutes) whether any segment has exceeded log.retention.hours and delete it
log.cleaner.enable=false
# whether to enable log compaction; left disabled here, only useful for compacted topics
zookeeper.connect=linux2019_01:2181,linux2019_02:2181,linux2019_03:2181
# ZooKeeper connection string
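For reference, a multi-directory log.dirs sketch (the /data1 and /data2 mount points are assumptions, not part of this setup):

log.dirs=/data1/kafka-logs,/data2/kafka-logs
num.io.threads=8    # keep this at least as large as the number of log directories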

Distribute the files

[root@linux2019_01 kafka]# scp -r /usr/local/kafka/ linux2019_02:/usr/local/
[root@linux2019_01 kafka]# scp -r /usr/local/kafka/ linux2019_03:/usr/local/

# Then change broker.id and host.name on each node to match that node (a quick sketch follows)
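A hedged sketch of those per-node edits (the sed patterns assume the values planned above: broker.id 2/3 and the IPs from the planning table):

# on linux2019_02
sed -i 's/^broker.id=1/broker.id=2/' /usr/local/kafka/config/server.properties
sed -i 's/^host.name=.*/host.name=192.168.85.128/' /usr/local/kafka/config/server.properties
# on linux2019_03
sed -i 's/^broker.id=1/broker.id=3/' /usr/local/kafka/config/server.properties
sed -i 's/^host.name=.*/host.name=192.168.85.130/' /usr/local/kafka/config/server.properties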
[root@linux2019_01 kafka]# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
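After starting the broker on all three nodes, a quick verification sketch (the topic name "test" and the flag values are my example, not from the original note; kafka-topics.sh in 2.2.0 still accepts --zookeeper):

/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper linux2019_01:2181 --replication-factor 2 --partitions 3 --topic test
/usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper linux2019_01:2181 --topic test
/usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.85.129:9092 --topic test
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.85.129:9092 --topic test --from-beginning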