https://blog.csdn.net/weixin_45623111/article/details/137143863?spm=1001.2014.3001.5502
When building a log-analysis platform from ELK (Elasticsearch, Logstash, Kibana) + Filebeat + Kafka + ZooKeeper, the data flow filebeat -> logstash -> kafka -> logstash -> elasticsearch requires configuring several components. Below are example configuration files and the steps for each component.
1. Filebeat configuration
Filebeat reads data from log files and sends it to Logstash.
filebeat.yml:
```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /path/to/your/logfile.log

output.logstash:
  hosts: ["logstash_host:5044"]
```
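To feed test traffic into the pipeline before wiring up real applications, you can append sample lines to the file Filebeat watches. A minimal sketch (the path and message format are placeholders, not part of the original setup):

```python
import datetime
import os
import tempfile

def write_test_logs(path, n=5):
    # Append n timestamped lines; Filebeat ships lines appended to watched files.
    with open(path, "a", encoding="utf-8") as f:
        for i in range(n):
            f.write(f"{datetime.datetime.now().isoformat()} INFO sample message {i}\n")

# Demo against a temporary file; in practice, point this at the path from filebeat.yml.
demo = os.path.join(tempfile.mkdtemp(), "logfile.log")
write_test_logs(demo)
```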
2. Logstash configuration (first Logstash)
The first Logstash receives data from Filebeat, preprocesses it, and sends it to Kafka.
logstash.conf:
```conf
input {
  beats {
    port => 5044
  }
}

filter {
  # Add any filters you need, e.g. grok, mutate, etc.
  # grok {
  #   match => { "message" => "%{COMMONAPACHELOG}" }
  # }
}

output {
  kafka {
    bootstrap_servers => "kafka_host:9092"
    topic_id => "your_log_topic"
    # Optional: serialize the whole event as JSON so the second Logstash
    # keeps structured fields instead of receiving plain text.
    codec => json
  }
}
```
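The commented-out `%{COMMONAPACHELOG}` grok pattern extracts structured fields from Apache access-log lines. A rough Python equivalent, handy for checking offline which fields such a filter would produce (the regex is a simplified approximation of the grok definition, not the exact pattern):

```python
import re

# Approximation of %{COMMONAPACHELOG}: client, identity, user, timestamp,
# request line, status code, and response size.
COMMON_APACHE = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] "(?P<verb>\S+) (?P<request>\S+)(?: (?P<httpversion>[^"]+))?" '
    r'(?P<response>\d{3}) (?P<bytes>\d+|-)'
)

line = '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326'
fields = COMMON_APACHE.match(line).groupdict()
```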
3. Kafka configuration
Configure Kafka and ZooKeeper and make sure the Kafka cluster is running properly.
server.properties (Kafka):
```properties
broker.id=0
log.dirs=/path/to/kafka-logs
zookeeper.connect=zookeeper_host:2181
```
zookeeper.properties (ZooKeeper):
```properties
dataDir=/path/to/zookeeper-data
clientPort=2181
```
Start ZooKeeper and Kafka:
```shell
# Start ZooKeeper
bin/zookeeper-server-start.sh config/zookeeper.properties
# Start Kafka
bin/kafka-server-start.sh config/server.properties
```
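Before starting the Logstash instances, it helps to verify that ZooKeeper and Kafka are actually listening. A minimal TCP reachability check (host names are the placeholders from the configs above):

```python
import socket

def port_open(host, port, timeout=2.0):
    # Return True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholders from the configs above; replace with your real hosts:
# port_open("zookeeper_host", 2181)
# port_open("kafka_host", 9092)
```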
4. Logstash configuration (second Logstash)
The second Logstash reads data from Kafka, processes it further, and sends it to Elasticsearch.
logstash.conf:
```conf
input {
  kafka {
    bootstrap_servers => "kafka_host:9092"
    topics => ["your_log_topic"]
    group_id => "logstash_group"
    # Optional: decode the JSON events if the first Logstash writes with codec => json.
    codec => json
  }
}

filter {
  # Add any filters you need, e.g. grok, mutate, etc.
  # grok {
  #   match => { "message" => "%{COMMONAPACHELOG}" }
  # }
}

output {
  elasticsearch {
    hosts => ["elasticsearch_host:9200"]
    index => "your_index_name-%{+YYYY.MM.dd}"
  }
}
```
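The index name `your_index_name-%{+YYYY.MM.dd}` uses Logstash's sprintf date syntax to create one index per UTC day of the event's `@timestamp`. A small sketch of the equivalent computation:

```python
from datetime import datetime, timezone

def daily_index(prefix, ts):
    # Mirror Logstash's "%{+YYYY.MM.dd}" sprintf format: one index per UTC day.
    return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

name = daily_index("your_index_name", datetime(2024, 3, 30, tzinfo=timezone.utc))
```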
5. Elasticsearch configuration
Make sure the Elasticsearch cluster is running properly.
elasticsearch.yml:
```yaml
cluster.name: your_cluster_name
node.name: your_node_name
path.data: /path/to/elasticsearch-data
path.logs: /path/to/elasticsearch-logs
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["elasticsearch_host1", "elasticsearch_host2"]
# Note: the entries here must match the node.name values of the
# master-eligible nodes, not just their hostnames.
cluster.initial_master_nodes: ["elasticsearch_host1", "elasticsearch_host2"]
```
Start Elasticsearch:
```shell
bin/elasticsearch
```
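A quick way to confirm the cluster is usable is the `_cluster/health` API (`GET http://elasticsearch_host:9200/_cluster/health`). A sketch of checking its response, using a hypothetical sample body in place of a live request:

```python
import json

def cluster_ok(health_json):
    # A cluster is usable when status is green or yellow; "red" means
    # some primary shards are unassigned.
    return json.loads(health_json).get("status") in ("green", "yellow")

# In practice, fetch this body from the _cluster/health endpoint;
# the sample below only illustrates the response shape.
sample = '{"cluster_name":"your_cluster_name","status":"yellow","number_of_nodes":1}'
```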
6. Kibana configuration
Kibana is used to visualize the data in Elasticsearch.
kibana.yml:
```yaml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch_host:9200"]
```
Start Kibana:
```shell
bin/kibana
```
Startup order
1. Start ZooKeeper: `bin/zookeeper-server-start.sh config/zookeeper.properties`
2. Start Kafka: `bin/kafka-server-start.sh config/server.properties`
3. Start Elasticsearch: `bin/elasticsearch`
4. Start the first Logstash: `bin/logstash -f /path/to/first/logstash.conf`
5. Start the second Logstash: `bin/logstash -f /path/to/second/logstash.conf`
6. Start Filebeat: `./filebeat -e -c filebeat.yml`
7. Start Kibana: `bin/kibana`
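The ordering above follows the components' dependencies (Kafka needs ZooKeeper, each Logstash needs its upstream, and so on). As a sketch, the same order falls out of a topological sort over an assumed dependency map (the service names below are illustrative labels, not from the original):

```python
def start_order(deps):
    # Simple depth-first topological sort: each service is started
    # only after everything it depends on.
    order, seen = [], set()
    def visit(s):
        if s in seen:
            return
        seen.add(s)
        for d in deps.get(s, []):
            visit(d)
        order.append(s)
    for s in deps:
        visit(s)
    return order

# Dependencies implied by the startup order above (hypothetical labels).
deps = {
    "zookeeper": [],
    "kafka": ["zookeeper"],
    "elasticsearch": [],
    "logstash_ingest": ["kafka"],                    # Filebeat -> Kafka leg
    "logstash_index": ["kafka", "elasticsearch"],    # Kafka -> Elasticsearch leg
    "filebeat": ["logstash_ingest"],
    "kibana": ["elasticsearch"],
}
order = start_order(deps)
```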
With everything configured and started in this order, data flows filebeat -> logstash -> kafka -> logstash -> elasticsearch and can be visualized and analyzed in Kibana.