1. Basic Concepts
- Topic: a label identifying a stream of messages
- Producer: produces data and sends messages to a specified Topic
- Consumer: fetches data by consuming messages from a specified Topic
- Group: consumer group; a group may contain multiple consumers, and each message is delivered to only one consumer within a group
- Partition: to increase Kafka's throughput, a Topic can be split into multiple partitions; within a consumer group, each partition is consumed by at most one consumer
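The reason messages with the same key preserve their order is that the producer maps each key to a fixed partition. A minimal sketch of that idea (illustration only: Kafka's real default partitioner uses murmur2 hashing, not MD5):

```python
import hashlib

def pick_partition(key: bytes, num_partitions: int) -> int:
    # Hash the key and map it onto one of the topic's partitions.
    # (Sketch only; Kafka's default partitioner uses murmur2.)
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest, "big") % num_partitions

# The same key always lands in the same partition,
# so per-key ordering is preserved within that partition.
p1 = pick_partition(b"count_num", 3)
p2 = pick_partition(b"count_num", 3)
assert p1 == p2
```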
2. Environment Setup
Check the local IP address:
ifconfig en0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=400<CHANNEL_IO>
ether a4:83:e7:93:0e:e0
inet6 fe80::84f:6e17:90c6:8bb%en0 prefixlen 64 secured scopeid 0x6
inet 172.20.10.2 netmask 0xfffffff0 broadcast 172.20.10.15
inet6 2408:841b:5301:2f69:1ce6:433:bb97:8c67 prefixlen 64 autoconf secured
inet6 2408:841b:5301:2f69:343e:9d40:e8b7:b2d5 prefixlen 64 autoconf temporary
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
Note: to switch between environments easily, consider managing the hosts file (e.g. with a tool such as iHosts) so that one hostname can map to a different IP per environment.
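For example, the hosts file could map a single hostname to a different IP per environment (the hostname `kafka-server` here is just an illustration):

```
# /etc/hosts on the dev machine
172.20.10.2   kafka-server

# /etc/hosts in another environment (commented out here)
# 10.0.0.12   kafka-server
```

Clients then always connect to `kafka-server`, and only the hosts entry changes between environments.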
Download the zookeeper and kafka images:
docker pull wurstmeister/zookeeper
docker pull wurstmeister/kafka
Start the zookeeper and kafka services:
docker run -d --name zookeeper -p 2181:2181 wurstmeister/zookeeper
Note: if these services are deployed on a remote server, replace the relevant addresses with the server's IP; see Section 5 (Example) for details.
# Option 1
docker run -d --name kafka -p 9092:9092 --link zookeeper -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.20.10.2:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka
# Option 2
docker run -d --name kafka --publish 9092:9092 --link zookeeper \
--env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
--env KAFKA_ADVERTISED_HOST_NAME=localhost \
--env KAFKA_ADVERTISED_PORT=9092 \
wurstmeister/kafka
docker run -d --name kafka1 -p 9093:9093 --link zookeeper -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.20.10.2:9093 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 -t wurstmeister/kafka
docker run -d --name kafka2 -p 9094:9094 --link zookeeper -e KAFKA_BROKER_ID=2 -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.20.10.2:9094 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9094 -t wurstmeister/kafka
- --name kafka: the Docker container name; must be unique
- -e KAFKA_BROKER_ID: the Kafka broker ID; must be unique
- -e KAFKA_ZOOKEEPER_CONNECT: address and port of the zookeeper instance
- -e KAFKA_ADVERTISED_LISTENERS: the listener address advertised to clients
- -e KAFKA_ADVERTISED_HOST_NAME: the host name advertised to clients
- -e KAFKA_ADVERTISED_PORT: the port advertised to clients
- -e KAFKA_LISTENERS: the address and port the broker actually binds and listens on
- -t wurstmeister/kafka: the Docker image to use
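The single-broker setup above can equivalently be written as a docker-compose file (a sketch using the same images and variables; replace 172.20.10.2 with your own host IP):

```yaml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.20.10.2:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
```

Additional brokers (kafka1, kafka2) would be added as further services with their own ports and broker IDs, mirroring the commands above.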
Check that the services are listening:
netstat -an | grep 2181
netstat -an | grep 9092
netstat -an | grep 9093
netstat -an | grep 9094
If a Docker container fails to start, inspect its logs:
docker logs e3de6f136ca7 --tail 1000
3. Testing the Environment
Install kafka-python:
pip install kafka-python
A simple demo:
import json
import traceback

from kafka import KafkaProducer
from kafka.errors import KafkaError

def producer_demo():
    # Assume the messages are key-value pairs (keys are optional),
    # serialized as JSON. Large messages can be compressed on send,
    # e.g. with compression_type='gzip'.
    producer = KafkaProducer(
        bootstrap_servers=['localhost:9093'],
        key_serializer=lambda k: json.dumps(k).encode(),
        value_serializer=lambda v: json.dumps(v).encode(),
        compression_type='gzip')
    future = None
    for i in range(3):
        choice = input()
        if choice == '1':
            future = producer.send(
                'cos-pull', key='count_num', value='END', partition=0)
        elif choice == '2':
            future = producer.send(
                'model-process', key='key', value='END', partition=0)
        if future is not None:
            try:
                future.get(timeout=10)
            except KafkaError:
                traceback.print_exc()

if __name__ == '__main__':
    producer_demo()
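The key_serializer/value_serializer lambdas above simply JSON-encode each key and value to bytes; on the consumer side this is reversed with json.loads on the decoded bytes. The round trip can be checked without a broker:

```python
import json

# The same serializers passed to KafkaProducer above.
key_serializer = lambda k: json.dumps(k).encode()
value_serializer = lambda v: json.dumps(v).encode()

raw_key = key_serializer('count_num')   # bytes of the JSON string "count_num"
raw_value = value_serializer('END')     # bytes of the JSON string "END"

# What the consumer does with message.key / message.value:
assert json.loads(raw_key.decode()) == 'count_num'
assert json.loads(raw_value.decode()) == 'END'
```

This is also why the consumer compares the decoded value against the plain string "END" rather than the raw bytes.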
import json
import traceback

from kafka import KafkaConsumer, KafkaProducer
from kafka.errors import KafkaError

def consumer_demo():
    topics = ['cos-pull', 'model-process']
    consumer = KafkaConsumer(
        bootstrap_servers='localhost:9093',
        group_id='cosTest'
    )
    consumer.subscribe(topics=topics)
    for message in consumer:
        key = json.loads(message.key.decode())
        value = json.loads(message.value.decode())
        print("receive, key: {}, value: {}".format(key, value))
        if value == "END" and message.topic == topics[0]:
            print(1)
        elif value == "END" and message.topic == topics[1]:
            print(2)

def producer_demo():
    # Assume the messages are key-value pairs (keys are optional),
    # serialized as JSON.
    producer = KafkaProducer(
        bootstrap_servers=['localhost:9092'],
        key_serializer=lambda k: json.dumps(k).encode(),
        value_serializer=lambda v: json.dumps(v).encode())
    # Send three messages.
    for i in range(0, 3):
        future = producer.send(
            'kafka_demo',
            key='count_num',  # messages with the same key go to the same partition
            value='model' + str(i),
            partition=0)      # send to partition 0
        print("host localhost:9092 produces {}".format('model' + str(i)))
        try:
            future.get(timeout=10)  # block until the send succeeds or fails
        except KafkaError:          # a failed send raises a KafkaError
            traceback.print_exc()

if __name__ == '__main__':
    consumer_demo()
import json

from kafka import KafkaConsumer

def consumer_demo():
    consumer = KafkaConsumer(
        'kafka_demo',
        bootstrap_servers='localhost:9092',
        group_id='test'
    )
    for message in consumer:
        key = json.loads(message.key.decode())
        value = json.loads(message.value.decode())
        print("receive, key: {}, value: {}".format(key, value))
        if value == "END":
            print(1)

if __name__ == '__main__':
    consumer_demo()
4. Web UI
Download the kafka-manager image:
docker pull sheepkiller/kafka-manager
Run kafka-manager:
docker run -itd --restart=always --name=kafka-manager -p 9000:9000 -e ZK_HOSTS=172.20.10.2:2181 sheepkiller/kafka-manager
Open kafka-manager in a browser (port 9000, as mapped above).
5. Example
When the services run on a remote server, the advertised host must be the server's IP and the advertised port must be the host-mapped port (here 10002), or clients will not be able to connect:
docker run -d --name kafka --publish 10002:9092 --link zookeeper --env KAFKA_ZOOKEEPER_CONNECT=ip:port --env KAFKA_ADVERTISED_HOST_NAME=ip --env KAFKA_ADVERTISED_PORT=10002 wurstmeister/kafka