ELK Log System Installation and Deployment

What is ELK?

ELK is an acronym for three applications: ElasticSearch, Logstash, and Kibana. ElasticSearch (ES for short) is mainly used to store and search data. Logstash is mainly used to write data into ES. Kibana is mainly used to display the data.

ELK system architecture

(architecture diagram: shippers send logs to a Redis broker, an indexer parses them and writes to Elasticsearch, and Kibana displays the results)

ElasticSearch

Elasticsearch is a distributed, real-time, full-text search engine. All operations are performed through a RESTful interface; the underlying implementation is based on the Lucene full-text search engine. Data is stored and indexed as JSON documents, with no schema required up front.
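For example, a document can be indexed and searched over plain HTTP with curl (a minimal sketch: the index name test-index, the type logs, and the field values are made up for illustration, and the calls assume ES is listening on localhost:9200):

    # index a JSON document with id 1 into index "test-index", type "logs"
    curl -XPUT 'http://localhost:9200/test-index/logs/1' -d '{"host": "web01", "message": "hello elk"}'

    # full-text search for it
    curl 'http://localhost:9200/test-index/_search?q=message:hello&pretty'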

  • Terminology comparison between Elasticsearch and a traditional relational database: a database maps to an index, a table to a type, a row to a document, and a column to a field
  • A node is a single running instance of Elasticsearch. A cluster is a group of nodes sharing the same cluster.name; they work together, share data, and provide failover and scaling. A single node can also form a cluster on its own
  • One node in the cluster is elected as the master node; it temporarily manages cluster-level changes such as creating or deleting indices and adding or removing nodes. The master does not take part in document-level changes or searches, so it will not become a bottleneck as traffic grows (a quick check is shown below)
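
A quick way to see the nodes in a running cluster and which one is the elected master (assuming ES is already up on its default port 9200):

    # list the nodes; the master column marks the elected master
    curl 'http://localhost:9200/_cat/nodes?v'

    # overall cluster status (green / yellow / red)
    curl 'http://localhost:9200/_cluster/health?pretty'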
Logstash

Logstash is a very flexible log collection tool. It is not limited to loading data into Elasticsearch: a wide range of inputs, outputs, and filter/transform rules can be configured.

Redis as transport

Redis servers are usually used as a NoSQL database, but here Logstash only uses Redis as a message queue.
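Concretely, with data_type => "list" the Logstash redis output appends each event to a Redis list and the redis input pops events off the other end, so the list behaves as a FIFO queue. The same flow can be simulated by hand with redis-cli (a sketch using a throwaway key demo:log on a local Redis):

    redis-cli rpush demo:log '{"message":"hello"}'   # producer: append an event
    redis-cli llen demo:log                          # how many events are queued
    redis-cli lpop demo:log                          # consumer: take the oldest event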

Kibana

Kibana is a tool for analyzing and displaying the data in real time.

ELK installation and configuration

  • Install a JDK. Version 1.8.0 or newer is required; otherwise Logstash will fail with an error

    • Install with yum

      yum -y install java-1.8.0-openjdk
      
      vim /etc/profile
      JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.91-1.b14.el6.x86_64/jre
      export JAVA_HOME 
      
      source /etc/profile
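      
      Afterwards, confirm the JDK is active (a quick check):
      
      java -version    # should report an OpenJDK 1.8.0 runtime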
  • Install and configure Elasticsearch

    wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.noarch.rpm
    rpm -ivh elasticsearch-1.7.1.noarch.rpm
    
    Start: /etc/init.d/elasticsearch start
    
    Install plugins:
    1. /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
    2. /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
    If this fails with: Failed: SSLException[java.security.ProviderException: java.security.KeyException]; nested: ProviderException[java.security.KeyException]; nested: KeyException;
    Fix: yum upgrade nss
    
    • Configure elasticsearch.yml, and adjust LOG_DIR and DATA_DIR in /etc/init.d/elasticsearch to match

      cluster.name: elk-local
      node.name: node-1
      path.data: /file2/elasticsearch/data
      path.logs: /file2/elasticsearch/logs
      bootstrap.mlockall: true
      network.host: 0.0.0.0
      http.port: 9200
      discovery.zen.ping.unicast.hosts: ["192.168.1.16"]
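
    • Verify that Elasticsearch is up (a quick check against the host/port configured above; the last URL assumes the head plugin installed earlier):

      curl 'http://192.168.1.16:9200/?pretty'                  # basic node and version info
      curl 'http://192.168.1.16:9200/_cluster/health?pretty'   # cluster status
      # the head plugin UI is served at http://192.168.1.16:9200/_plugin/head/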
      
  • Install and configure Logstash

    • Install from the RPM package (download URL below)

      wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.3.4-1.noarch.rpm
      Install: rpm -ivh logstash-2.3.4-1.noarch.rpm
      Start: /etc/init.d/logstash start
      ln -s /opt/logstash/bin/logstash /usr/bin/logstash
    • Logstash configuration

      • The core of an ELK deployment is collecting and parsing the logs, which is exactly the Logstash part, and it is where most of the configuration effort goes. In particular, the grok patterns inside the filter {} block must be written to split out the fields you need from your own log format
      • These two resources help when writing grok patterns: the grok pattern syntax tutorial and grokdebug

      • Location: files ending in .conf under /etc/logstash/conf.d/

      • A configuration has three main sections: input, filter, and output
        (Note: if %{type} is used as the index name, the type must not contain special characters)
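      • The three-section layout can be tried without any config file by running a stdin → stdout pipeline (a minimal sketch; -e passes the configuration on the command line):

        /opt/logstash/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
        # type a line and press Enter: Logstash prints the parsed event as a structured map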
    • Example Logstash configuration

      • Roles: the shipper is 192.168.1.13; the broker, indexer, and search & storage all run on 192.168.1.16
      • Broker: install Redis first and start it
      • Shipper: only collects data and does no processing, so its configuration is simple; additional shipper hosts are configured the same way

        input {
            file {
                path => "/web/nginx/logs/www.log"
                type => "nginx-log"
                start_position => "beginning"
            }
        }
        output {
            if [type] == "nginx-log" {
                redis {
                    host => "192.168.1.16"
                    port => "6379"
                    data_type => "list"
                    key => "nginx:log"
                }
            }
        }
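
        Once the shipper is running, you can confirm events are queuing up on the broker (a quick check with redis-cli against 192.168.1.16; the key matches the output block above):

        redis-cli -h 192.168.1.16 llen nginx:log           # number of queued events
        redis-cli -h 192.168.1.16 lrange nginx:log 0 0     # peek at the oldest event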
      • Indexer, search & storage: consumes the logs sent by the shipper, parses them into the desired format, and outputs them to ES

        input {
            redis {
                host => "192.168.1.16"
                port => 6379
                data_type => "list"
                key => "nginx:log"
                type => "nginx-log"
            }
        }
        filter {
            grok {
                match => {
                    "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\]\ \"(?:%{WORD:method} %{NOTSPACE:request}(?: %{URIPROTO:proto}/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} (%{WORD:x_forword}|-)- (%{NUMBER:request_time}) -- (%{NUMBER:upstream_response_time}) -- %{IPORHOST:domain} -- (%{WORD:upstream_cache_status}|-)"
                }
            }
        }
        output {
            if [type] == "nginx-log" {
                elasticsearch {
                    hosts => ["192.168.1.16:9200"]
                    index => "nginx-%{+YYYY.MM.dd}"
                }
            }
        }
        • Note: when %{type} is used as the index name, the type must not contain special characters
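
        After the indexer has processed some events, the daily index should show up in Elasticsearch (a quick check, assuming ES at 192.168.1.16:9200 as configured above):

        curl 'http://192.168.1.16:9200/_cat/indices?v' | grep nginx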
    • Test the configuration / start

      Test: /opt/logstash/bin/logstash -f /etc/logstash/conf.d/xx.conf -t
      
      Start: service logstash start
  • Install Kibana

    wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
    tar zxvf kibana-4.1.1-linux-x64.tar.gz
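    • Point Kibana at Elasticsearch by editing config/kibana.yml inside the extracted directory (a sketch using the Kibana 4.1 setting names; adjust the URL to your ES host):

      port: 5601
      host: "0.0.0.0"
      elasticsearch_url: "http://192.168.1.16:9200"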
    • Set up the init script (/etc/init.d/kibana)

      
      #!/bin/bash
      ### BEGIN INIT INFO
      # Provides:          kibana
      # Default-Start:     2 3 4 5
      # Default-Stop:      0 1 6
      # Short-Description: Runs kibana daemon
      # Description: Runs the kibana daemon as a non-root user
      ### END INIT INFO
      
      # Process name
      NAME=kibana
      DESC="Kibana4"
      PROG="/etc/init.d/kibana"
      
      # Configure location of Kibana bin
      KIBANA_BIN=/vagrant/elk/kibana-4.1.1-linux-x64/bin    # adjust this path to where Kibana was unpacked
      
      # PID Info
      PID_FOLDER=/var/run/kibana/
      PID_FILE=/var/run/kibana/$NAME.pid
      LOCK_FILE=/var/lock/subsys/$NAME
      PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
      DAEMON=$KIBANA_BIN/$NAME
      
      # Configure User to run daemon process
      DAEMON_USER=root
      
      # Configure logging location
      KIBANA_LOG=/var/log/kibana.log
      
      # Begin Script
      RETVAL=0
      
      if [ `id -u` -ne 0 ]; then
              echo "You need root privileges to run this script"
              exit 1
      fi
      
      # Function library
      . /etc/init.d/functions
      
      start() {
              echo -n "Starting $DESC : "
              pid=`pidofproc -p $PID_FILE kibana`
              if [ -n "$pid" ] ; then
                      echo "Already running."
                      exit 0
              else
                      # Start Daemon
                      if [ ! -d "$PID_FOLDER" ] ; then
                              mkdir $PID_FOLDER
                      fi
                      daemon --user=$DAEMON_USER --pidfile=$PID_FILE $DAEMON 1>"$KIBANA_LOG" 2>&1 &
                      sleep 2
                      # Kibana 4 runs on node, so record the node process id
                      pidofproc node > $PID_FILE
                      RETVAL=$?
                      [ $RETVAL -eq 0 ] && success || failure
                      echo
                      [ $RETVAL = 0 ] && touch $LOCK_FILE
                      return $RETVAL
              fi
      }
      
      reload()
      {
              echo "Reload command is not implemented for this service."
              return $RETVAL
      }
      
      stop() {
              echo -n "Stopping $DESC : "
              killproc -p $PID_FILE $DAEMON
              RETVAL=$?
              echo
              [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
      }
      
      case "$1" in
        start)
              start
              ;;
        stop)
              stop
              ;;
        status)
              status -p $PID_FILE $DAEMON
              RETVAL=$?
              ;;
        restart)
              stop
              start
              ;;
        reload)
              reload
              ;;
        *)
              # Invalid Arguments, print the following message.
              echo "Usage: $0 {start|stop|status|restart}" >&2
              exit 2
              ;;
      esac
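    • Make the script executable and start Kibana (by default Kibana 4 then listens on port 5601):

      chmod +x /etc/init.d/kibana
      service kibana start
      # the UI should now answer on http://<host>:5601/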
    • Add access authentication in front of Kibana (implemented with nginx)

      1. yum install -y httpd  # skip this step if httpd is already installed
      2. Locate htpasswd (whereis htpasswd)
      htpasswd: /usr/bin/htpasswd /usr/share/man/man1/htpasswd.1.gz
      3. Generate the password file
      /usr/bin/htpasswd -c /web/nginx/conf/elk/authdb elk
      New password: enter the password twice when prompted; it is stored in the authdb file
      4. Add the ELK server configuration to nginx: /web/nginx/conf/elk/elk.conf
      server {
              listen          80;
              server_name     www.elk.com;
              charset         utf8;
      
              location / {
                      proxy_pass http://192.168.1.16$request_uri;
                      proxy_set_header   Host   $host;
                      proxy_set_header   X-Real-IP   $remote_addr;
                      proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
                      auth_basic "Authorized users only";
                      auth_basic_user_file /web/nginx/conf/elk/authdb;
               }
      }
      server {
              listen          80;
              server_name     www.es.com;
              charset         utf8;
      
              location / {
                      proxy_pass http://192.168.1.16:9200$request_uri;
                      proxy_set_header   Host   $host;
                      proxy_set_header   X-Real-IP   $remote_addr;
                      proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
                      auth_basic "Authorized users only";
                      auth_basic_user_file /web/nginx/conf/elk/authdb;
               }
      }
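      5. Reload nginx and test the protected site (a quick check; -u uses the basic-auth user created in step 3, the Host header selects the www.elk.com server block, and it assumes the nginx binary is on PATH):
      nginx -s reload
      curl -u elk -H 'Host: www.elk.com' http://192.168.1.16/    # prompts for the password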