ELK Installation and Configuration

 

Author: hanson
Version: 0.1
Date: 2019-07-04
Notes: draft

 

ELK Overview

ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. The stack here also adds Filebeat, a lightweight log-shipping agent: it uses few resources, which makes it well suited to collecting logs on each server and forwarding them to Logstash, and it is the shipper the Elastic project itself recommends. Kafka is placed in front as a message queue so that bursts of logs that Logstash and Elasticsearch cannot process in time do not cause congestion or log loss.

 

Architecture:

 

Filebeat+kafka+logstash+elasticsearch+kibana

 

Versions:

Filebeat: 5.4, rpm install (the exact version is flexible)
Kafka: kafka_2.11-2.2.0, installed from the release tarball
ZooKeeper: zookeeper-3.4.14, installed from the release tarball
Logstash: logstash-7.1.1, rpm install
Kibana: kibana-7.1.1-x86_64, rpm install

 

Installation steps:

  1. Copy the relevant packages to the corresponding servers

Kafka, Logstash, Elasticsearch, and Kibana are installed on the ELK server; Filebeat only needs to be installed on each client whose logs are to be collected.

  2. Filebeat configuration

vim /etc/filebeat/filebeat.yml

Reference filebeat.yml:

filebeat:
  prospectors:
  - input_type: log
    tail_files: true
    backoff: "1s"
    paths:
      - /usr/local/nginx/logs/access.log
    tags: ["nginx-dev-access"]
  - input_type: log
    tail_files: true
    backoff: "1s"
    paths:
      - /opt/deploy/tomcat-joyhr/logs/catalina.out   # path of the log to collect
    tags: ["xlc-dev-b"]                              # one tag per log source
  - input_type: log
    tail_files: true
    backoff: "1s"
    paths:
      - /opt/deploy/tomcat-cxy/logs/catalina.out
    tags: ["cxy-dev-b"]
output.kafka:
  enabled: true
  hosts: ['172.17.6.105:9092']   # Kafka address and port
  topic: 'elk'                   # Kafka topic name
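For reference, each message Filebeat publishes to Kafka is a JSON document carrying the log line plus metadata. The exact field set below is an assumption based on Filebeat 5.x defaults; what matters is the tags field, which the Logstash filters later route on. A quick shell sketch of pulling the routing tag back out of such a message:

```shell
# Hypothetical example of a JSON document from Filebeat's Kafka output
# (field set assumed from Filebeat 5.x defaults; only "tags" matters for routing):
event='{"@timestamp":"2019-07-04T02:00:00.000Z","message":"hello","tags":["nginx-dev-access"],"input_type":"log"}'

# Extract the first tag, i.e. the value the Logstash conditionals match on
# (requires GNU grep for -P):
tag=$(echo "$event" | grep -oP '"tags":\["\K[^"]+')
echo "$tag"   # nginx-dev-access
```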

 

  3. Logstash configuration

vim /etc/logstash/conf.d/logstash.conf

# input: read events from Kafka into Logstash

input {
  kafka {
    enable_auto_commit => true
    auto_commit_interval_ms => "1000"
    codec => "json"
    bootstrap_servers => "172.17.6.105:9092"
    topics => ["elk"]
  }
}

# route events by tag
filter {
  if "nginx-dev-access" in [tags] {
    grok {
      # 180.154.132.13 - - [06/Dec/2018:11:34:43 +0800] "GET /dest/lib/ztree/js/jquery.ztree.core.min.js HTTP/1.1" 200 31981 "https://39.105.184.195/dest/index.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36"
      match => {
        "message" => '(?<source_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) - [a-zA-Z0-9-]+ \[(?<nginx_time>[^ ]+) \+\d+\] "(?<method>[A-Z]+) (?<request_url>[^ ]+) HTTP/\d\.\d" (?<status>\d+) \d+ "(?<referer>[^"]+)" "(?<agent>[^"]+)"'
      }
    }
    mutate {
      # remove_field => "message"
      remove_field => "@version"
      remove_field => "input_type"
      # keep "tags": the output section below routes on it
      # remove_field => "tags"
      remove_field => "offset"
      remove_field => "beat"
    }
    date {
      # the nginx timestamp carries no milliseconds: 06/Dec/2018:11:34:43
      match => ["nginx_time", "dd/MMM/yyyy:HH:mm:ss"]
      target => "date"
    }
  }

  if "xlc-dev-b" in [tags] {
    date {
      match => ["logdate", "yyyy-MM-dd HH:mm:ss.SSS"]
      target => "@timestamp"
      remove_field => ["logdate"]
    }
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
    grok {
      match => [ "message", "%{NOTSPACE:day} %{NOTSPACE:datetime}  %{NOTSPACE:level} %{GREEDYDATA:msginfo} " ]
      # mutate {
      #   remove_field => "message"
      #   remove_field => "@version"
      #   remove_field => "beat"
      # }
    }

  } else if "xlc-dev-c" in [tags] {
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
  } else if "cxy-dev-c" in [tags] {
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
  } else if "cxy-dev-b" in [tags] {
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
  } else if "cxy-test-b" in [tags] {
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
  } else if "xlc-dev-b" in [tags] {
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
  } else if "xlc-dev-c" in [tags] {
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
  } else if "cxy-test-c" in [tags] {
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
  } else if "xlc-test-quartz" in [tags] {
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
  } else if "xlc-dev-quartz" in [tags] {
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
  } else if "xlc-dev-remind" in [tags] {
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
    # grok {
    #   match => {
    #     "message" => "(?<date>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3})\]\[(?<level>[A-Z]{4,5})\]\[(?<thread>[A-Za-z0-9/-]{4,40})\]\[(?<class>[A-Za-z0-9/.]{4,40})\]\[(?<msg>.*)"
    #   }
    # }
  } else if "xlc-test-remind" in [tags] {
    multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"
      negate => true
      what => "previous"
    }
  }
}

 

# filtered events are written to Elasticsearch, one index per tag per day

output {
  if "nginx-dev-access" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "nginx-dev-access-%{+YYYY.MM.dd}"
    }
  } else if "xlc-dev-b" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "xlc-dev-b-%{+YYYY.MM.dd}"
    }
    stdout {
      codec => rubydebug
    }
  } else if "xlc-dev-c" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "xlc-dev-c-%{+YYYY.MM.dd}"
    }
  } else if "cxy-dev-c" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "cxy-dev-c-%{+YYYY.MM.dd}"
    }
  } else if "cxy-dev-b" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "cxy-dev-b-%{+YYYY.MM.dd}"
    }
  } else if "cxy-test-b" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "cxy-test-b-%{+YYYY.MM.dd}"
    }
  } else if "xlc-test-b" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "xlc-test-b-%{+YYYY.MM.dd}"
    }
  } else if "xlc-test-c" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "xlc-test-c-%{+YYYY.MM.dd}"
    }
  } else if "cxy-test-c" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "cxy-test-c-%{+YYYY.MM.dd}"
    }
  } else if "xlc-test-quartz" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "xlc-test-quartz-%{+YYYY.MM.dd}"
    }
  } else if "xlc-dev-quartz" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "xlc-dev-quartz-%{+YYYY.MM.dd}"
    }
  } else if "xlc-dev-crmq" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "xlc-dev-crmq-%{+YYYY.MM.dd}"
    }
  } else if "xlc-test-crmq" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "xlc-test-crmq-%{+YYYY.MM.dd}"
    }
  } else if "xlc-dev-remind" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "xlc-dev-remind-%{+YYYY.MM.dd}"
    }
  } else if "xlc-test-remind" in [tags] {
    elasticsearch {
      hosts => ["http://172.17.6.105:9200"]
      index => "xlc-test-remind-%{+YYYY.MM.dd}"
    }
  }
}
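The date sprintf at the end of each index name expands to the event's date (from @timestamp, in UTC), so every tag gets one index per day. The equivalent expansion with GNU date, using the date this document was written:

```shell
# What the index-name date suffix expands to for an event dated 2019-07-04
# (GNU date assumed, as on the rpm-based hosts used here):
idx="nginx-dev-access-$(date -u -d '2019-07-04' +%Y.%m.%d)"
echo "$idx"   # nginx-dev-access-2019.07.04
```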

Edit /etc/logstash/logstash.yml:

pipeline.workers: 1

Start Logstash:

nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf &
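Two pieces of the filter section above can be sanity-checked outside Logstash with plain shell (GNU grep): the nginx grok captures against the sample log line, and the multiline pattern that decides which catalina.out lines open a new event. The sample data below is shortened or made up for illustration:

```shell
# 1) nginx grok captures: sample line from the config comment (user agent shortened)
line='180.154.132.13 - - [06/Dec/2018:11:34:43 +0800] "GET /dest/index.html HTTP/1.1" 200 31981 "https://39.105.184.195/dest/index.html" "Mozilla/5.0"'
ip=$(echo "$line" | grep -oP '^\d{1,3}(\.\d{1,3}){3}')   # -> source_ip
status=$(echo "$line" | grep -oP '" \K\d{3}(?= \d)')     # -> status
echo "ip=$ip status=$status"   # ip=180.154.132.13 status=200

# 2) multiline pattern: only lines starting with a YYYY-MM-DD date open a new
#    event, so the (made-up) stack trace stays attached to its ERROR line.
starts=$(printf '%s\n' \
  '2019-07-04 10:00:00.123 ERROR something failed' \
  'java.lang.NullPointerException' \
  '    at com.example.Foo.bar(Foo.java:42)' \
  '2019-07-04 10:00:01.456 INFO recovered' \
  | grep -c '^[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}')
echo "events=$starts"   # events=2
```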

 

For Elasticsearch and elasticsearch-head installation, see this blog post:

https://blog.csdn.net/qq_32154655/article/details/94601081

 

Elasticsearch configuration file:

vim /etc/elasticsearch/elasticsearch.yml

cluster.name: my-elk
node.name: node105
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node105"]
http.cors.enabled: true
http.cors.allow-origin: "*"

Start Elasticsearch:

/etc/init.d/elasticsearch start
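Once started, a quick health check is `curl http://172.17.6.105:9200/_cluster/health`. The check below parses the `status` field from such a response; the JSON is canned here (field names from the real API, values made up) since the point is just the extraction:

```shell
# Canned example of a /_cluster/health response; a live check would replace
# this with: health=$(curl -s http://172.17.6.105:9200/_cluster/health)
health='{"cluster_name":"my-elk","status":"green","number_of_nodes":1}'
state=$(echo "$health" | grep -oP '"status":"\K[^"]+')
echo "$state"   # green (yellow/red would indicate a problem)
```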

 

  4. ZooKeeper and Kafka installation

Kafka package: kafka_2.11-2.2.0.tgz
ZooKeeper package: zookeeper-3.4.14.tar.gz

ZooKeeper setup:

tar -xf zookeeper-3.4.14.tar.gz
cd zookeeper-3.4.14
cp conf/zoo_sample.cfg conf/zoo.cfg
vim conf/zoo.cfg

dataDir=/opt/deploy/zookeeper-3.4.14/data
dataLogDir=/opt/deploy/zookeeper-3.4.14/logs

Only these two paths need to be changed.

Start ZooKeeper:

./bin/zkServer.sh start conf/zoo.cfg

Note: start ZooKeeper before starting Kafka.

Kafka setup:

tar -xf kafka_2.11-2.2.0.tgz
cd kafka_2.11-2.2.0
vim config/server.properties

zookeeper.connect=172.17.6.105:2181

Only the ZooKeeper connection address needs to be changed.

Start Kafka:

./bin/kafka-server-start.sh config/server.properties &

Create the topic:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic elk

 

Pitfall: if you run a cluster, make sure each topic is consumed by exactly one Logstash instance, and give each topic exactly one partition and one replica (both --partitions and --replication-factor set to 1); otherwise log entries end up out of time order.
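The ordering problem comes from Kafka itself: message order is only guaranteed within a single partition, so with more than one partition sequential log lines are spread across partitions and can be consumed out of order. A toy sketch of the effect:

```shell
# Distribute 4 sequential messages round-robin over 2 "partitions":
p0=''; p1=''; i=0
for m in t1 t2 t3 t4; do
  if [ $((i % 2)) -eq 0 ]; then p0="$p0 $m"; else p1="$p1 $m"; fi
  i=$((i + 1))
done

# A consumer that happens to drain partition 1 first sees the lines reordered:
seen=$(echo $p1 $p0)
echo "$seen"   # t2 t4 t1 t3
```

With a single partition there is nothing to interleave, which is why both topic parameters are pinned to 1 above.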

 

Note: for a detailed explanation of Kafka configuration parameters, see https://blog.csdn.net/lizhitao/article/details/25667831

 

  5. Kibana installation

rpm -ivh kibana-7.1.1-x86_64.rpm

Edit the configuration file:

vim /etc/kibana/kibana.yml

elasticsearch.hosts: ["http://172.17.6.105:9200"]   # Elasticsearch to query
i18n.locale: "zh-CN"                                # Chinese UI

 

 
