Building a ZooKeeper + Kafka + Flume + ES + Kibana log processing system with Docker


This setup uses three servers (192.168.0.100, 192.168.0.101, 192.168.0.102 in the commands below).

ZooKeeper cluster setup

Pull the image
docker pull zookeeper:latest
Create the mount directories on each server
mkdir -p /home/docker/zk-test/data
mkdir -p /home/docker/zk-test/logs
Start ZooKeeper on each of the three servers
docker run --name zk-test --net=host --restart always -v /home/docker/zk-test:/data/zookeeper -e ZOO_PORT=2181 -e ZOO_DATA_DIR=/data/zookeeper/data -e ZOO_DATA_LOG_DIR=/data/zookeeper/logs -e ZOO_MY_ID=1 -e ZOO_SERVERS="server.1=192.168.0.100:2888:3888 server.2=192.168.0.101:2888:3888 server.3=192.168.0.102:2888:3888" -d zookeeper:latest 
docker run --name zk-test --net=host --restart always -v /home/docker/zk-test:/data/zookeeper -e ZOO_PORT=2181 -e ZOO_DATA_DIR=/data/zookeeper/data -e ZOO_DATA_LOG_DIR=/data/zookeeper/logs -e ZOO_MY_ID=2 -e ZOO_SERVERS="server.1=192.168.0.100:2888:3888 server.2=192.168.0.101:2888:3888 server.3=192.168.0.102:2888:3888" -d zookeeper:latest
docker run --name zk-test --net=host --restart always -v /home/docker/zk-test:/data/zookeeper -e ZOO_PORT=2181 -e ZOO_DATA_DIR=/data/zookeeper/data -e ZOO_DATA_LOG_DIR=/data/zookeeper/logs -e ZOO_MY_ID=3 -e ZOO_SERVERS="server.1=192.168.0.100:2888:3888 server.2=192.168.0.101:2888:3888 server.3=192.168.0.102:2888:3888" -d zookeeper:latest 
  • The three commands are identical except for ZOO_MY_ID=[1,2,3], which must match the server it runs on.
Verification
  • Enter one of the containers:
docker exec -it zk-test /bin/bash
  • Run:
zkServer.sh status

Output like the following indicates success:

bash-4.4# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower
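A cross-node check from the ZooKeeper CLI is another quick test (a sketch; it assumes zkCli.sh is on the container's PATH, just as zkServer.sh is above):

# From server .100, connect to the ZooKeeper instance on .101
docker exec -it zk-test zkCli.sh -server 192.168.0.101:2181
# Inside the CLI, listing the root znode should succeed:
# ls /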

Kafka cluster setup

Pull the image
docker pull wurstmeister/kafka:latest
Start Kafka on each of the three servers
docker run --name kafka-test --net=host --restart always --volume /home/docker/kafka-test/data:/data -e KAFKA_BROKER_ID=1 -e KAFKA_PORT=9092 -e KAFKA_HEAP_OPTS="-Xms2g -Xmx4g" -e KAFKA_HOST_NAME=192.168.0.100 -e KAFKA_ADVERTISED_HOST_NAME=192.168.0.100 -e KAFKA_LOG_DIRS=/data/kafka -e KAFKA_ZOOKEEPER_CONNECT="192.168.0.100:2181,192.168.0.101:2181,192.168.0.102:2181" -d wurstmeister/kafka:latest
docker run --name kafka-test --net=host --restart always --volume /home/docker/kafka-test/data:/data -e KAFKA_BROKER_ID=2 -e KAFKA_PORT=9092 -e KAFKA_HEAP_OPTS="-Xms2g -Xmx4g" -e KAFKA_HOST_NAME=192.168.0.101 -e KAFKA_ADVERTISED_HOST_NAME=192.168.0.101 -e KAFKA_LOG_DIRS=/data/kafka -e KAFKA_ZOOKEEPER_CONNECT="192.168.0.100:2181,192.168.0.101:2181,192.168.0.102:2181" -d wurstmeister/kafka:latest
docker run --name kafka-test --net=host --restart always --volume /home/docker/kafka-test/data:/data -e KAFKA_BROKER_ID=3 -e KAFKA_PORT=9092 -e KAFKA_HEAP_OPTS="-Xms2g -Xmx4g" -e KAFKA_HOST_NAME=192.168.0.102 -e KAFKA_ADVERTISED_HOST_NAME=192.168.0.102 -e KAFKA_LOG_DIRS=/data/kafka -e KAFKA_ZOOKEEPER_CONNECT="192.168.0.100:2181,192.168.0.101:2181,192.168.0.102:2181" -d wurstmeister/kafka:latest
Verification
  • Enter the container on any one of the servers
docker exec -it kafka-test /bin/bash
  • Create a topic
/opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.0.100:2181,192.168.0.101:2181,192.168.0.102:2181 --topic test --partitions 3 --replication-factor 2
  • Start a consumer
/opt/kafka/bin/kafka-console-consumer.sh  --bootstrap-server 192.168.0.101:9092  --topic test --from-beginning
  • Open a new terminal, enter the container again, and produce some data
/opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.0.101:9092 --topic test
  • If the consumer receives the messages, the cluster is working. You can also inspect the topic with the describe command shown below.
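A topic describe is another quick sanity check (a sketch; it assumes the bundled kafka-topics.sh still accepts the --zookeeper flag, as the create command above does):

/opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.0.100:2181,192.168.0.101:2181,192.168.0.102:2181 --topic test
# Expect 3 partitions with replication factor 2 and leaders spread across broker IDs 1-3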

Flume installation

Pull the image
docker pull probablyfine/flume:latest
Create a mount directory to hold the custom flume.conf configuration
mkdir -p /home/docker/flume-test
cd /home/docker/flume-test
vim flume.conf

Copy the following into flume.conf:

# Name the components on this agent
datahub101.sources = r1 r2 r3
datahub101.sinks = k1
datahub101.channels = c1
# Describe/configure the source
datahub101.sources.r1.type = exec
datahub101.sources.r1.command = tail -F /var/tmp/flume_log/test1/test1.log
datahub101.sources.r1.interceptors = i1
datahub101.sources.r1.interceptors.i1.type = static
datahub101.sources.r1.interceptors.i1.key = type
datahub101.sources.r1.interceptors.i1.value = test1
datahub101.sources.r2.type = exec
datahub101.sources.r2.command = tail -F /var/tmp/flume_log/test2/test2.log
datahub101.sources.r2.interceptors = i2
datahub101.sources.r2.interceptors.i2.type = static
datahub101.sources.r2.interceptors.i2.key = type
datahub101.sources.r2.interceptors.i2.value = test2
datahub101.sources.r3.type = exec
datahub101.sources.r3.command = tail -F /var/tmp/flume_log/test3/test3.log
datahub101.sources.r3.interceptors = i3
datahub101.sources.r3.interceptors.i3.type = static
datahub101.sources.r3.interceptors.i3.key = type
datahub101.sources.r3.interceptors.i3.value = test3
# Describe the sink
datahub101.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
datahub101.sinks.k1.kafka.topic = test
datahub101.sinks.k1.kafka.bootstrap.servers = 192.168.0.101:9092
datahub101.sinks.k1.kafka.flumeBatchSize = 20
datahub101.sinks.k1.kafka.producer.acks = 1
datahub101.sinks.k1.kafka.producer.linger.ms = 1
datahub101.sinks.k1.kafka.producer.compression.type = snappy
# Use a channel which buffers events in memory
datahub101.channels.c1.type = memory
datahub101.channels.c1.capacity = 1000
datahub101.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
datahub101.sources.r1.channels = c1
datahub101.sources.r2.channels = c1
datahub101.sources.r3.channels = c1
datahub101.sinks.k1.channel = c1
Start Flume (FLUME_AGENT_NAME must match the agent name used in flume.conf, i.e. datahub101)
docker run --name flume-test --restart always --net=host -v /home/docker/flume-test/flume.conf:/opt/flume-config/flume.conf -v /home/docker/flume-test/flume_log:/var/tmp/flume_log -e FLUME_AGENT_NAME="datahub101" -d probablyfine/flume:latest
Verification
  • Start a Kafka consumer on the test topic, then append a line to one of the tailed log files:
echo "test msg" >> test1.log

Elasticsearch installation

Pull the image
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.0.0
Create the mount directory and configuration file (note: the data directory must be writable by uid 1000, the elasticsearch user inside the container)
mkdir -p /home/docker/es-test/data
chmod 755 /home/docker/es-test/data
cd /home/docker/es-test
vim elasticsearch.yml

Add the following settings to elasticsearch.yml:

# Cluster name; all nodes in the same cluster use the same name
cluster.name: test
# Listen on all interfaces so the node is reachable from outside the container
network.host: 0.0.0.0
node.name: master
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
Run the container
docker run --name es-test -p 9200:9200 -p 9300:9300 -p 5601:5601 -e "discovery.type=single-node" -v /home/docker/es-test/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/docker/es-test/data:/usr/share/elasticsearch/data -d docker.elastic.co/elasticsearch/elasticsearch:7.0.0
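A quick health check once the container is up (a sketch; replace the IP with the host running Elasticsearch):

curl http://192.168.0.100:9200/_cluster/health?pretty
# Expect a JSON response with "cluster_name" : "test" and a green or yellow "status"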

Kibana installation (on the same server as ES)

Pull the image
docker pull docker.elastic.co/kibana/kibana:7.0.0
Start
docker run -d -e ELASTICSEARCH_URL=http://192.168.0.100:9200 --name kibana-test --network=container:es-test docker.elastic.co/kibana/kibana:7.0.0
  • The Kibana configuration file is not mounted out here. Enter the container, replace its config/kibana.yml with the configuration below, then restart the container. The ES host is the internal container IP of the es-test service; you can find it in the output of docker logs es-test, or with the docker inspect command shown after the config.
server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://172.17.0.2:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
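An alternative to reading the startup logs is to query Docker directly for the es-test container's internal IP (a sketch, assuming es-test is on the default bridge network):

docker inspect -f '{{.NetworkSettings.IPAddress}}' es-test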
Verification

Open http://192.168.0.100:5601 in a browser.
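If the UI does not load, Kibana's status API is a quick way to confirm that it is running and can reach Elasticsearch (a sketch):

curl http://192.168.0.100:5601/api/status
# A JSON payload reporting an overall "green" state indicates Kibana is up and connected to ES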
