Environment Preparation
Node Name | IP Address |
---|---|
node1 | 192.168.130.20 |
node2 | 192.168.130.19 |
node3 | 192.168.130.21 |
Install Docker
(omitted)
Create the mount directories and configuration
Start a temporary container
docker run -d -p 5044:5044 --name logstash \
logstash:7.4.1
Copy out the configuration
mkdir -p /root/logstash/data && chmod 777 /root/logstash/data
docker cp logstash:/usr/share/logstash/config /root/logstash/
docker cp logstash:/usr/share/logstash/pipeline /root/logstash/
Remove the container (it was only started to obtain the default configuration)
docker rm -f logstash
Modify logstash.yml
vi /root/logstash/config/logstash.yml
logstash.yml contents:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.130.20:9200" ]
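If monitoring should keep working when a single Elasticsearch node is down, all three cluster nodes can be listed here as well (a sketch reusing the same hosts as the output section below; adjust to your cluster):

```yaml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.130.20:9200", "http://192.168.130.19:9200", "http://192.168.130.21:9200" ]
```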
Modify pipelines.yml
vi /root/logstash/config/pipelines.yml
pipelines.yml contents:
#- pipeline.id: main
#  path.config: "/usr/share/logstash/pipeline/logstash.conf"
- pipeline.id: kafkatoes
  path.config: "/usr/share/logstash/pipeline/kafka-ls-es.conf"
  pipeline.workers: 4
Create a new pipeline configuration file, kafka-ls-es.conf, which receives data from Kafka, filters it, and writes it to Elasticsearch
vi /root/logstash/pipeline/kafka-ls-es.conf
Contents (adjust to your environment):
# Kafka -> Logstash -> Elasticsearch pipeline.
input {
  kafka {
    bootstrap_servers => ["192.168.130.20:9092,192.168.130.19:9092,192.168.130.21:9092"]
    group_id => "hello"
    client_id => "ls-node1"
    consumer_threads => 4
    topics => ["hello-elk"]
    codec => json { charset => "UTF-8" }
  }
}
filter {
  json {
    source => "message"
  }
}
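Note that the kafka input above already decodes each record with codec => json, so events may not carry a message field at all; when the field is missing or not valid JSON, this filter tags the event with _jsonparsefailure. A defensive variant (an assumption about your data shape, adjust as needed) only parses when message exists:

```
filter {
  if [message] {
    json {
      source => "message"
    }
  }
}
```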
output {
  elasticsearch {
    hosts => ["192.168.130.20:9200","192.168.130.19:9200","192.168.130.21:9200"]
    index => "hello-elk-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
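While bringing the pipeline up, it can help to temporarily add a stdout output alongside the elasticsearch one, so every event is printed to the container log (docker logs ls-node1); remove it once things work:

```
output {
  elasticsearch {
    hosts => ["192.168.130.20:9200","192.168.130.19:9200","192.168.130.21:9200"]
    index => "hello-elk-%{+YYYY.MM.dd}"
  }
  # troubleshooting only: prints every event to the container log
  stdout { codec => rubydebug }
}
```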
Start the instances
node1
docker run -d --user root \
--name ls-node1 \
-p 5044:5044 \
-v /root/logstash/config:/usr/share/logstash/config \
-v /root/logstash/pipeline:/usr/share/logstash/pipeline \
-v /root/logstash/data:/usr/share/logstash/data \
-e TZ=Asia/Shanghai \
logstash:7.4.1
node2
If you run multiple Logstash instances, the client_id in kafka-ls-es.conf must be different for each instance.
docker run -d --user root \
--name ls-node2 \
-p 5044:5044 \
-v /root/logstash/config:/usr/share/logstash/config \
-v /root/logstash/pipeline:/usr/share/logstash/pipeline \
-v /root/logstash/data:/usr/share/logstash/data \
-e TZ=Asia/Shanghai \
logstash:7.4.1
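For example, node2's kafka-ls-es.conf could be identical to node1's except for the client_id; because both instances use the same group_id, Kafka splits the topic's partitions between them (a sketch, assuming the same topic and brokers as above):

```
input {
  kafka {
    bootstrap_servers => ["192.168.130.20:9092,192.168.130.19:9092,192.168.130.21:9092"]
    group_id => "hello"      # same group: partitions are shared across instances
    client_id => "ls-node2"  # must be unique per Logstash instance
    consumer_threads => 4
    topics => ["hello-elk"]
    codec => json { charset => "UTF-8" }
  }
}
```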