ELK Log Platform Deployment Notes

ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana.

Elasticsearch is an open-source, distributed search engine that provides three core capabilities: collecting, analyzing, and storing data. This is where the collected logs end up.

Logstash collects logs, filters them, and forwards them to Elasticsearch. However, Logstash itself is quite resource-hungry, so Elastic added Filebeat, a lightweight log-shipping agent. Filebeat uses few resources, making it well suited to collecting logs on each server and forwarding them to Logstash; it is also the officially recommended tool for this.

Kibana is the presentation layer: a friendly web UI for log analysis that helps you aggregate, analyze, and search important log data.

 

First, install Elasticsearch; the installation process is covered in a separate post (link).
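Once Elasticsearch is running, a quick sanity check (assuming the default port 9200 on the same host):

curl http://localhost:9200    # should return JSON with the cluster name and version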

Install Filebeat

Download the tarball with wget (https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-linux-x86_64.tar.gz) and unpack it.
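A sketch of the steps, assuming /usr/local as the install directory (the rename matches the startup paths used later):

cd /usr/local
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-linux-x86_64.tar.gz
tar -zxvf filebeat-6.7.0-linux-x86_64.tar.gz
mv filebeat-6.7.0-linux-x86_64 filebeat-6.7.0    # rename to match the paths below

Then open filebeat.yml in the extracted directory: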

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  tail_files: true
  backoff: "1s"
  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /usr/local/wxDevProductLog/*/*.txt
    #- c:\programdata\elasticsearch\logs\*
  # Add extra information to every event shipped, so the logs can be grouped
  # and counted downstream.
  tags: ["wxdev"]

  # Ignore log content modified longer ago than the given duration,
  # e.g. 2h (two hours) or 5m (five minutes).
  ignore_older: 1h
  # Close the file handle if the file has not been updated within this period;
  # tune to your needs. (close_inactive replaced the deprecated close_older option.)
  close_inactive: 1h
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options
  # For logs where a single entry spans multiple lines, e.g. error stack traces
  # from various languages.
  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  # Pattern matched by the line that starts a multiline log entry.
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  # Whether to negate the pattern. With negate: true, lines that do NOT match
  # the pattern are treated as continuations. [Recommended: true]
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  # After a match, merge with the preceding (before) or following (after)
  # lines into one log entry.
  #multiline.match: after
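  # A worked example (commented out; the pattern here is an assumption for
  # illustration): treat any line that does NOT start with a date as the
  # continuation of the previous line, so a Java stack trace ships as one event.
  #multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  #multiline.negate: true
  #multiline.match: after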


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
#setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["localhost:5044"]
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

Above I configured

tags: ["wxdev"]

so that once events reach Logstash, the logs can be routed and grouped by this tag.

Start Filebeat

Foreground: /usr/local/filebeat-6.7.0/filebeat -e -c /usr/local/filebeat-6.7.0/filebeat.yml
Background: nohup /usr/local/filebeat-6.7.0/filebeat -e -c /usr/local/filebeat-6.7.0/filebeat.yml >/tmp/filebeat.log 2>&1 &

At this point startup will fail, because Filebeat needs to connect to Logstash, which is not installed yet.
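Filebeat also ships with test subcommands that help here; once Logstash is listening, you can verify both the config file and the output connection:

/usr/local/filebeat-6.7.0/filebeat test config -c /usr/local/filebeat-6.7.0/filebeat.yml    # validate filebeat.yml
/usr/local/filebeat-6.7.0/filebeat test output -c /usr/local/filebeat-6.7.0/filebeat.yml    # check the connection to localhost:5044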

 

Install Logstash

Logstash depends on a Java environment, so install a JDK first (skip this if one is already present). I like to install with wget: open the official downloads page https://www.elastic.co/cn/downloads/logstash, copy the download link, and run wget http://url — done.
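The steps might look like this (a sketch; take the exact link from the downloads page — 6.6.0 here matches the startup paths used below):

java -version    # confirm a JDK is present
cd /usr/local
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.6.0.tar.gz
tar -zxvf logstash-6.6.0.tar.gz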

Here is the Logstash config:

# input
input {
    beats {
        host => "0.0.0.0"
        port => "5044"    # port to listen on for Filebeat
    }
}

# filter
filter {

    grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:requesttime}\s(?<msg>.*)" }
    }

    # Replace the event timestamp (the time of arrival at Logstash) with the
    # time the log line was generated, so older logs are also displayed at
    # their real request time in Kibana.
    date {
        match => ["requesttime", "yyyy-MM-dd HH:mm:ss"]
        target => "@timestamp"
    }
}

# output
output {
    stdout {
        codec => dots {}
    }
    if "eleccar" in [tags] {
        elasticsearch {
            index => "eleccar-%{+YYYY-MM-dd}"
            hosts => [ "http://localhost:9200" ]
        }
    }
    if "wxdev" in [tags] {
        elasticsearch {
            index => "wxdev-%{+YYYY-MM-dd}"
            hosts => [ "http://localhost:9200" ]
        }
    }
}

For the grok filter syntax used above, see this article (link).
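To make the pattern concrete, here is a hypothetical log line and the fields grok extracts from it (requesttime is what the date filter above consumes):

# input line (hypothetical):
#   2019-04-01 10:20:30 order created, id=1001
# fields after grok:
#   requesttime => "2019-04-01 10:20:30"
#   msg         => "order created, id=1001"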

Start Logstash

Foreground: /usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf
Background: nohup /usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf >/tmp/logstash.log 2>&1 &
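Logstash can also syntax-check the config before a real start:

/usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf --config.test_and_exit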

Kibana Installation and Configuration
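As with the other components, download the tarball with wget and unpack it. A sketch (the 6.7.0 version here is an assumption, chosen to match Filebeat above; take the exact link from the official downloads page):

cd /usr/local
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.7.0-linux-x86_64.tar.gz
tar -zxvf kibana-6.7.0-linux-x86_64.tar.gz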

Once unpacked, add the following to config/kibana.yml:

server.port: 5601
server.host: "0.0.0.0"    # allow access from outside the host
#elasticsearch.url: "http://localhost:9200"
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

Start in the background: nohup ./kibana >/tmp/kibana.log 2>&1 &

Kibana has no password by default, so anyone can reach it. If you are on a cloud provider, you can restrict access to specific IPs via security groups. A better option is to put nginx in front of it and require a username and password.
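A minimal nginx sketch with HTTP basic auth (server_name and file paths are assumptions; create the password file with Apache's htpasswd tool):

# /etc/nginx/conf.d/kibana.conf -- reverse proxy in front of Kibana
server {
    listen 80;
    server_name kibana.example.com;           # assumption: your own domain/IP

    auth_basic           "Kibana Login";
    auth_basic_user_file /etc/nginx/htpasswd; # e.g. htpasswd -c /etc/nginx/htpasswd youruser

    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

If you go this route, consider setting server.host back to "127.0.0.1" so Kibana itself is only reachable through nginx.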

Visit http://xxxx:5601

Done.
