Download Kafka 0.9.0.0:
wget https://archive.apache.org/dist/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz
This guide integrates kafka_2.11-0.9.0.0.tgz with logstash-2.4.1.tar.gz; the two versions must be compatible with each other.
tar -zxvf kafka_2.11-0.9.0.0.tgz -C /usr/local
Edit the following parameters in server.properties under the Kafka config directory:
listeners=PLAINTEXT://:9092
port=9092
host.name=172.16.1.231
log.dirs=/usr/local/kafka_2.11-0.9.0.0/tmp/kafka-logs
zookeeper.connect=172.16.1.231:2181
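Kafka needs a reachable ZooKeeper at the address configured above. If one is not already running, the ZooKeeper bundled with Kafka can be started from the bin directory (a minimal sketch, assuming the default config/zookeeper.properties):
./zookeeper-server-start.sh -daemon ../config/zookeeper.properties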
Start the broker (from the bin directory):
./kafka-server-start.sh -daemon ../config/server.properties
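A quick way to confirm the broker came up (assuming a standard Linux host; jps lists the Kafka JVM, netstat shows the listening port):
jps | grep -i kafka
netstat -nlt | grep 9092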
Create a topic
kafka-topics.sh --create --zookeeper 172.16.1.231:2181 --replication-factor 1 --partitions 1 --topic test_topic
List topics
kafka-topics.sh --list --zookeeper 172.16.1.231:2181
Send messages
kafka-console-producer.sh --broker-list 172.16.1.231:9092 --topic test_topic
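Instead of typing messages interactively, a single message can be piped in as a smoke test (same broker and topic as above):
echo "hello kafka" | kafka-console-producer.sh --broker-list 172.16.1.231:9092 --topic test_topic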
Consume messages (--from-beginning replays the topic from the start; without it only new messages are shown)
kafka-console-consumer.sh --zookeeper 172.16.1.231:2181 --topic test_topic --from-beginning
kafka-console-consumer.sh --zookeeper 172.16.1.231:2181 --topic test_topic
Single node, multiple brokers
Copy server.properties into three new files and edit each one (or generate them with the sed sketch after these listings).
server-1.properties
broker.id=1
listeners=PLAINTEXT://:9093
port=9093
log.dirs=/usr/local/kafka_2.11-0.9.0.0/tmp/kafka-logs-1
server-2.properties
broker.id=2
listeners=PLAINTEXT://:9094
port=9094
log.dirs=/usr/local/kafka_2.11-0.9.0.0/tmp/kafka-logs-2
server-3.properties
broker.id=3
listeners=PLAINTEXT://:9095
port=9095
log.dirs=/usr/local/kafka_2.11-0.9.0.0/tmp/kafka-logs-3
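A sketch of generating the three copies instead of editing them by hand, assuming server.properties already contains the uncommented listeners, port and log.dirs lines set earlier (adjust paths to your installation):
cd /usr/local/kafka_2.11-0.9.0.0/config
for i in 1 2 3; do
  cp server.properties server-$i.properties
  sed -i "s|^broker.id=.*|broker.id=$i|" server-$i.properties
  sed -i "s|^listeners=.*|listeners=PLAINTEXT://:$((9092 + i))|" server-$i.properties
  sed -i "s|^port=.*|port=$((9092 + i))|" server-$i.properties
  sed -i "s|^log.dirs=.*|log.dirs=/usr/local/kafka_2.11-0.9.0.0/tmp/kafka-logs-$i|" server-$i.properties
done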
Start the three brokers (-daemon already runs them in the background, so no trailing & is needed):
./kafka-server-start.sh -daemon ../config/server-1.properties
./kafka-server-start.sh -daemon ../config/server-2.properties
./kafka-server-start.sh -daemon ../config/server-3.properties
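To confirm all three brokers registered, check the listening ports or list the broker ids in ZooKeeper (zookeeper-shell.sh ships with Kafka; the one-shot form below should print something like [1, 2, 3]):
netstat -nlt | grep -E '9093|9094|9095'
./zookeeper-shell.sh 172.16.1.231:2181 ls /brokers/ids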
Create a topic with replication factor 3
kafka-topics.sh --create --zookeeper 172.16.1.231:2181 --replication-factor 3 --partitions 1 --topic my_replication_topic
List topics
kafka-topics.sh --list --zookeeper 172.16.1.231:2181
Describe topics (all topics, or one specific topic)
kafka-topics.sh --describe --zookeeper 172.16.1.231:2181
kafka-topics.sh --describe --zookeeper 172.16.1.231:2181 --topic my_replication_topic
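The describe output looks roughly like this (illustrative, broker ids will differ): Leader is the broker currently serving the partition, Replicas lists all assigned brokers, and Isr the replicas that are in sync.
Topic:my_replication_topic  PartitionCount:1  ReplicationFactor:3  Configs:
    Topic: my_replication_topic  Partition: 0  Leader: 1  Replicas: 1,2,3  Isr: 1,2,3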
Send messages (the broker list can include any of the brokers)
kafka-console-producer.sh --broker-list 172.16.1.231:9093,172.16.1.231:9094,172.16.1.231:9095 --topic my_replication_topic
Consume messages
kafka-console-consumer.sh --zookeeper 172.16.1.231:2181 --topic my_replication_topic
Multiple nodes, multiple brokers
Omitted here.
Integrating with Logstash: first create a topic for it and verify it with the console producer and consumer
kafka-topics.sh --create --zookeeper 172.16.1.231:2181 --replication-factor 1 --partitions 1 --topic logstash_topic
kafka-topics.sh --list --zookeeper 172.16.1.231:2181
kafka-console-producer.sh --broker-list 172.16.1.231:9092 --topic logstash_topic
kafka-console-consumer.sh --zookeeper 172.16.1.231:2181 --topic logstash_topic
echo "helloword" >> logstash.txt 往文件中写数据
Create a file_kafka.conf in /usr/local/logstash-2.4.1 with the following content:
input {
  file {
    path => "/usr/local/logstash-2.4.1/logstash.txt"
  }
}
output {
  kafka {
    topic_id => "logstash_topic"
    bootstrap_servers => "172.16.1.231:9092"
    batch_size => 1
  }
}
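To validate the config and start the pipeline (from the Logstash directory; --configtest only checks the syntax and exits):
cd /usr/local/logstash-2.4.1
bin/logstash -f file_kafka.conf --configtest
bin/logstash -f file_kafka.conf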
batch_size is optional and can be left out.
bootstrap_servers => "172.16.1.231:9092" is the Kafka broker address.
topic_id is the topic created above.
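A quick end-to-end check, assuming the broker, topic and config above. Note that the file input only picks up lines appended after Logstash starts, since its default start_position is the end of the file.
# terminal 1: run Logstash with the config above
cd /usr/local/logstash-2.4.1 && bin/logstash -f file_kafka.conf
# terminal 2: append a line to the watched file
echo "hello from logstash" >> /usr/local/logstash-2.4.1/logstash.txt
# terminal 3: consume from the Kafka topic
kafka-console-consumer.sh --zookeeper 172.16.1.231:2181 --topic logstash_topic --from-beginning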