ELK + Filebeat + Kafka Cluster Deployment


Topology: 2 Elasticsearch nodes, 1 Kibana/Logstash node, 3 Kafka nodes, and 1 Nginx + Filebeat node:

192.168.124.10  es

192.168.124.20  es

192.168.124.30  kibana + logstash

192.168.124.50  kafka

192.168.124.51  kafka

192.168.124.60  kafka

192.168.124.40  nginx + filebeat

Install the Nginx service and create a test page:

vim /usr/local/nginx/html/index.html

Add a line such as "this is nginx", then open the page in a browser to confirm it is served.
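As a quick check without a browser, you can curl the page from any host in the cluster (a sketch, assuming Nginx listens on the default port 80 on 192.168.124.40):

```shell
# Fetch the test page; the response body should contain "this is nginx"
curl -s http://192.168.124.40/

# Confirm the request was written to the access log that Filebeat will ship
tail -n 1 /usr/local/nginx/logs/access.log
```

Generating a few requests this way also gives Filebeat log lines to ship in the later steps.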

Edit the Filebeat configuration file (vim filebeat.yml):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  tags: ["access"]
  fields:
    service_name: 192.168.124.40_nginx
    log_type: nginx
    from: 192.168.124.40
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  tags: ["error"]
  fields:
    service_name: 192.168.124.40_nginx
    log_type: nginx
    from: 192.168.124.40

output.kafka:
 enabled: true
 hosts: ["192.168.124.50:9092","192.168.124.51:9092","192.168.124.60:9092"]
 topic: "nginx"

Note: the access and error logs are split into two inputs with separate tags ("access" and "error") so that the Logstash pipeline below can route them to different indices; the host addresses are those of this cluster (192.168.124.x).
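Before starting Filebeat it is worth validating the configuration and the Kafka output connectivity; Filebeat ships a `test` subcommand for both (run from the Filebeat installation directory):

```shell
# Check that filebeat.yml parses without errors
./filebeat test config -c filebeat.yml

# Check that the configured Kafka brokers are reachable
./filebeat test output -c filebeat.yml
```

Both commands print OK on success; fix any reported errors before running Filebeat for real.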

Run Filebeat in the background:

nohup ./filebeat -e -c filebeat.yml > filebeat.out &
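To confirm that events are actually arriving in Kafka, consume the "nginx" topic from one of the brokers (a sketch, assuming Kafka is installed under /usr/local/kafka; adjust the path to your installation):

```shell
# Read the topic from the beginning; each line should be a JSON event from Filebeat
/usr/local/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 192.168.124.50:9092 \
  --topic nginx \
  --from-beginning
```

Press Ctrl+C to stop the consumer once you see events; if nothing appears, re-check the Filebeat output test and the broker addresses.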

On host 192.168.124.30, create the Logstash configuration:

cd /opt/log
vim kafka.conf

input {
  kafka {
    bootstrap_servers => "192.168.124.50:9092,192.168.124.51:9092,192.168.124.60:9092"
    topics => "nginx"
    type => "nginx_kafka"
    codec => "json"
    # parse JSON-formatted events
    auto_offset_reset => "earliest"
    # consume from the beginning; use "latest" to read only new messages
    decorate_events => true
    # include Kafka metadata in the events passed on to Elasticsearch
  }
}

output {
  if "access" in [tags] {
    elasticsearch {
      hosts => ["192.168.124.10:9200","192.168.124.20:9200"]
      index => "nginx_access-%{+YYYY.MM.dd}"
    }
  }
  if "error" in [tags] {
    elasticsearch {
      hosts => ["192.168.124.10:9200","192.168.124.20:9200"]
      index => "nginx_error-%{+YYYY.MM.dd}"
    }
  }
}
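With the pipeline written, validate it and then start Logstash against it (a sketch; the binary path /usr/share/logstash/bin/logstash is typical for package installs and may differ on your system):

```shell
# Syntax-check the pipeline without starting it
/usr/share/logstash/bin/logstash -f /opt/log/kafka.conf --config.test_and_exit

# Run the pipeline in the background, keeping a log of its output
nohup /usr/share/logstash/bin/logstash -f /opt/log/kafka.conf > logstash.out 2>&1 &
```

The test run should end with "Configuration OK"; once running, Logstash consumes from the Kafka "nginx" topic and writes to the two indices defined above.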


In a browser, open http://192.168.124.30:5601 to log in to Kibana.
Click "Create Index Pattern", add the index patterns "nginx_access-*" and "nginx_error-*", and click "Create";
then click "Discover" to view the charts and log entries.
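If the index patterns do not show up in Kibana, you can confirm the indices exist directly against Elasticsearch using its `_cat/indices` API:

```shell
# List indices on one of the ES nodes; look for nginx_access-* and nginx_error-* entries
curl -s "http://192.168.124.10:9200/_cat/indices?v" | grep nginx
```

An empty result means no events have reached Elasticsearch yet; work backwards through the Logstash, Kafka, and Filebeat checks above.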
