ELK

1. ELK Stack Architecture Overview


(figure: ELK Stack architecture diagram)
Elasticsearch official website: click here

2. Elasticsearch


(figure: Elasticsearch index / type / document concepts)
My interpretation: take collecting the logs in /var/log/messages as an example. The file name maps to the index name, and each line of the file is one document. Every log line falls into a known category such as syslog or auth, and that is exactly what the type field captures.
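As a hypothetical illustration of that mapping (the index name, type name, and document content below are invented for this example, using the 6.x-style URL with an explicit type):
# one syslog line stored as document 1 in an index named "messages" with type "syslog"
curl -X PUT "192.168.31.200:9200/messages/syslog/1?pretty" -H 'Content-Type: application/json' -d '{"message":"Sep 15 23:01:02 node1 systemd: Started Elasticsearch."}'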

2.1 Cluster Deployment

  • Before deploying: disable SELinux, stop firewalld, and set up time synchronization on every node (time sync matters a great deal for a logging system)
  • Installation notes: Elasticsearch can be installed in many ways — from an archive on Linux or macOS, from an RPM, with Docker, and even on Windows; see the official documentation for details. This article installs Elasticsearch from the RPM.
2.1.1 Java Installation and Configuration
  • Java from the distribution's yum repositories
# Install with yum
[root@MiWiFi-R4CM-srv yum.repos.d]# yum install java-1.8.0-openjdk.x86_64 java-1.8.0-openjdk-devel.x86_64 java-1.8.0-openjdk-headless.x86_64

# java -version should print output like the following
[root@MiWiFi-R4CM-srv yum.repos.d]# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
  • Download from the official site and configure manually
# Download jdk-8u211-linux-x64.tar.gz from the official site
# Upload it to the server and extract into /opt (to match JAVA_HOME below): tar -zxvf jdk-8u211-linux-x64.tar.gz -C /opt
# Configure environment variables
vim /etc/profile
export JAVA_HOME=/opt/jdk1.8.0_211
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin
# Load the environment variables and verify
source /etc/profile
[root@localhost ~]# java -version
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)
2.1.2 ES Install

Elasticsearch can be installed in several ways: yum, RPM, or compiling from source.
Click here for the official documentation.

2.1.2.1 Installing with yum

Perform the steps below on every node in the cluster.

  • Download and install the public signing key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  • Install (this assumes an Elastic yum repository is configured; see the repo file format in section 3.1):
[root@MiWiFi-R4CM-srv yum.repos.d]# yum install elasticsearch -y
2.1.2.2 Configuration

/etc/sysconfig/elasticsearch — Elasticsearch system (environment) settings; documented in detail in the official docs
/etc/elasticsearch/elasticsearch.yml — by default, Elasticsearch loads its configuration from this file

  • Beginners should leave /etc/sysconfig/elasticsearch unchanged
  • A few simple edits to /etc/elasticsearch/elasticsearch.yml are enough to get ES running
[root@MiWiFi-R4CM-srv plugins]# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml 
cluster.name: ELK-Cluster   # ELK cluster name; set the same value on every node
node.name: node2   # this node's name within the cluster; a hostname or IP works
path.data: /var/lib/elasticsearch   # ES data directory
path.logs: /var/log/elasticsearch   # ES log directory
network.host: 192.168.31.101   # listen address; set to this node's IP
http.port: 9200   # ES HTTP port; 9200 is the default
discovery.zen.ping.unicast.hosts: ["192.168.31.200", "192.168.31.101"]   # all ES nodes in the cluster
discovery.zen.minimum_master_nodes: 2   # the cluster is only healthy once at least two master-eligible nodes are up; before that, API calls are refused

Copy /etc/elasticsearch/elasticsearch.yml to the other nodes; only node.name and network.host need changing before ES can run (see the sketch below).
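A minimal sketch of scripting that copy-and-edit step (the target IP is the other node in this lab; the node name is an assumption):
# copy the config to the other node, then fix its per-node fields
scp /etc/elasticsearch/elasticsearch.yml root@192.168.31.200:/etc/elasticsearch/elasticsearch.yml
ssh root@192.168.31.200 "sed -i -e 's/^node.name:.*/node.name: node1/' -e 's/^network.host:.*/network.host: 192.168.31.200/' /etc/elasticsearch/elasticsearch.yml"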

2.1.3 Startup
[root@MiWiFi-R4CM-srv plugins]# systemctl start elasticsearch
[root@MiWiFi-R4CM-srv plugins]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-09-15 23:01:02 CST; 7s ago
........
# Enable start on boot
[root@MiWiFi-R4CM-srv plugins]# systemctl enable elasticsearch
2.1.4 Verification
  • Query cluster health via the REST API: curl -X GET "192.168.31.200:9200/_cat/health?v&pretty"
  • Another option: curl -X GET "192.168.31.200:9200/_cluster/health?pretty"
  • Query the number and status of the cluster's nodes: curl -X GET "192.168.31.200:9200/_cat/nodes?v"
    At this point, the ES cluster is installed and working.
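For scripted checks, the _cluster/health endpoint can also block until a desired status is reached; wait_for_status and timeout are standard parameters of this API:
# returns when the cluster is green, or after 30s with "timed_out": true
curl -s "192.168.31.200:9200/_cluster/health?wait_for_status=green&timeout=30s&pretty"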

2.2 Data Operations

There is a useful blog post on this topic: https://www.cnblogs.com/Dev0ps/p/9493576.html

  • ES data operations are covered in the official documentation
  • For HTTP POST, GET, PUT, and DELETE semantics, click here
  • Elasticsearch supports RESTful API operations, in the general form: curl -X <METHOD> 'host:port/<index>/<type>/<document id>?<parameters>' -d '<json body>'
2.2.1 Create an Index and View It
[root@MiWiFi-R4CM-srv elasticsearch]# curl -X PUT "192.168.31.200:9200/logs_2019-9-15"
{"acknowledged":true,"shards_acknowledged":true,"index":"logs_2019-9-15"}
# the pretty parameter formats the JSON response for readability
[root@MiWiFi-R4CM-srv elasticsearch]# curl -X PUT "192.168.31.200:9200/logs_2019-9-16?pretty"
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "logs_2019-9-16"
}
# List indices
[root@MiWiFi-R4CM-srv elasticsearch]# curl -X GET "192.168.31.200:9200/_cat/indices?v"
health status index          uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   logs_2019-9-16 v-fcFFimSle_MX60IrRkRw   5   1          0            0        2kb          1.1kb
green  open   logs_2019-9-15 MUNv5X1zT5SEfaZJWFb0jw   5   1          0            0      2.2kb          1.1kb
2.2.2 Create a Document in the Index and Insert Field Data
[root@MiWiFi-R4CM-srv elasticsearch]# curl -X PUT "192.168.31.200:9200/logs_2019-9-15/_doc/1?pretty" -H 'Content-Type:application/json' -d '{"name":"sunwei01"}'
{
  "_index" : "logs_2019-9-15",
  "_type" : "_doc",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}
# View
[root@MiWiFi-R4CM-srv elasticsearch]# curl -X GET "192.168.31.200:9200/logs_2019-9-15/_doc/1?pretty"
{
  "_index" : "logs_2019-9-15",
  "_type" : "_doc",
  "_id" : "1",
  "_version" : 3,
  "found" : true,
  "_source" : {
    "name" : "sunwei01",
    "age" : 23
  }
}
# Delete
[root@MiWiFi-R4CM-srv elasticsearch]# curl -X DELETE "192.168.31.200:9200/logs_2019-9-15/_doc/3?pretty"
{
  "_index" : "logs_2019-9-15",
  "_type" : "_doc",
  "_id" : "3",
  "_version" : 2,
  "result" : "deleted",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "_seq_no" : 1,
  "_primary_term" : 1
}
# View again after deleting
[root@MiWiFi-R4CM-srv elasticsearch]# curl -X GET "192.168.31.200:9200/logs_2019-9-15/_doc/3?pretty"
{
  "_index" : "logs_2019-9-15",
  "_type" : "_doc",
  "_id" : "3",
  "found" : false
}
2.2.2.3 Modify and Update Fields
[root@MiWiFi-R4CM-srv elasticsearch]# curl -X PUT "192.168.31.200:9200/logs_2019-9-15/_doc/1?pretty" -H 'Content-Type:application/json' -d '{"name":"sunwei","age":25}'
{
  "_index" : "logs_2019-9-15",
  "_type" : "_doc",
  "_id" : "1",
  "_version" : 5,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "_seq_no" : 4,
  "_primary_term" : 1
}
[root@MiWiFi-R4CM-srv elasticsearch]# curl -X GET "192.168.31.200:9200/logs_2019-9-15/_doc/1?pretty"
{
  "_index" : "logs_2019-9-15",
  "_type" : "_doc",
  "_id" : "1",
  "_version" : 5,
  "found" : true,
  "_source" : {
    "name" : "sunwei",
    "age" : 25
  }
}
[root@MiWiFi-R4CM-srv elasticsearch]# curl -X POST "192.168.31.200:9200/logs_2019-9-15/_doc/1?pretty" -H 'Content-Type:application/json' -d '{"name":"sunwei","age":26}'
{
  "_index" : "logs_2019-9-15",
  "_type" : "_doc",
  "_id" : "1",
  "_version" : 6,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "_seq_no" : 5,
  "_primary_term" : 1
}
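Both PUT and POST above replace the entire document source. For a partial update that merges only the supplied fields into the existing document, the _update endpoint can be used; a sketch in the 6.x-style URL form used throughout this section:
# only "age" changes; "name" is preserved by the merge
curl -X POST "192.168.31.200:9200/logs_2019-9-15/_doc/1/_update?pretty" -H 'Content-Type: application/json' -d '{"doc":{"age":27}}'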
2.2.2.4 Bulk Operations
[root@MiWiFi-R4CM-srv ~]# curl -X POST "192.168.31.200:9200/_bulk?pretty" -H 'Content-Type: application/json' -d'
> { "index" : { "_index" : "test", "_type" : "_doc", "_id" : "1" } }
> { "field1" : "value1" }
> { "delete" : { "_index" : "test", "_type" : "_doc", "_id" : "2" } }
> { "create" : { "_index" : "test", "_type" : "_doc", "_id" : "3" } }
> { "field1" : "value3" }
> { "update" : {"_id" : "1", "_type" : "_doc", "_index" : "test"} }
> { "doc" : {"field2" : "value2"} }
> '
2.2.2.5 Search Queries

Querying is the most common use of Elasticsearch. Some frequently used query types (a sketch follows the list):

  • match_all — matches all documents; ten hits are returned by default
  • from / size — pagination; size: 1 limits output to a single hit
  • match — query against a specific field
  • bool — combines clauses (must behaves like AND)
  • range — matches numeric or date ranges
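A minimal sketch of these against the index created earlier (the bodies are standard Query DSL; the field names come from the documents above):
# match_all, returning only one hit
curl -X GET "192.168.31.200:9200/logs_2019-9-15/_search?pretty" -H 'Content-Type: application/json' -d '{"query":{"match_all":{}},"from":0,"size":1}'
# match on a field
curl -X GET "192.168.31.200:9200/logs_2019-9-15/_search?pretty" -H 'Content-Type: application/json' -d '{"query":{"match":{"name":"sunwei"}}}'
# bool (AND) combining match and range
curl -X GET "192.168.31.200:9200/logs_2019-9-15/_search?pretty" -H 'Content-Type: application/json' -d '{"query":{"bool":{"must":[{"match":{"name":"sunwei"}},{"range":{"age":{"gte":20}}}]}}}'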

2.3 Managing Elasticsearch Graphically with the Head Plugin

Option 1: if GitHub is unreachable for the steps below, click here to download node and npm.
Option 2 (the approach I took): download the node tarball directly — it bundles both node and npm; click here to download.

2.3.1 Setting Up the Node Environment
# Download the package
wget https://npm.taobao.org/mirrors/node/v10.14.1/node-v10.14.1-linux-x64.tar.gz
# Extract and configure environment variables
[root@MiWiFi-R4CM-srv ~]# tar -zxvf node-v10.14.1-linux-x64.tar.gz -C /usr/local/src
[root@MiWiFi-R4CM-srv ~]# cd /usr/local/src && mv node-v10.14.1-linux-x64  node-v10.14.1
[root@MiWiFi-R4CM-srv ~]# vim /etc/profile   # append the three lines below at the end
NODE_HOME=/usr/local/src/node-v10.14.1
PATH=$NODE_HOME/bin:$PATH
export NODE_HOME PATH
# Verify
[root@es1 src]# node -v
v10.14.1
[root@es1 src]# npm -v
6.4.1
2.3.2 Running elasticsearch-head
  • Step 1
[root@MiWiFi-R4CM-srv ~]# git clone git://github.com/mobz/elasticsearch-head.git
[root@MiWiFi-R4CM-srv ~]# cd elasticsearch-head
[root@MiWiFi-R4CM-srv ~]# npm install grunt --save-dev  && npm install 
The commands above may fail; for example:
[root@es1 elasticsearch-head-master]# npm install
npm WARN deprecated http2@3.3.7: Use the built-in module in node 9.0.0 or newer, instead
npm WARN deprecated json3@3.3.2: Please use the native JSON object instead of JSON 3
npm WARN deprecated json3@3.2.6: Please use the native JSON object instead of JSON 3
npm WARN deprecated phantomjs-prebuilt@2.1.16: this package is now deprecated

> phantomjs-prebuilt@2.1.16 install /root/elasticsearch-head-master/node_modules/phantomjs-prebuilt
> node install.js
PhantomJS not found on PATH
Downloading https://github.com/Medium/phantomjs/releases/download/v2.1.1/phantomjs-2.1.1-linux-x86_64.tar.bz2
Saving to /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2
Receiving...
  [----------------------------------------] 0%
Fix: manually download phantomjs-2.1.1-linux-x86_64.tar.bz2 into /tmp/phantomjs/ (the cache path shown in the log above), then run npm install again.
[root@es1 elasticsearch-head-master]# npm install
npm WARN deprecated http2@3.3.7: Use the built-in module in node 9.0.0 or newer, instead
npm WARN deprecated phantomjs-prebuilt@2.1.16: this package is now deprecated
npm WARN deprecated json3@3.2.6: Please use the native JSON object instead of JSON 3
npm WARN deprecated json3@3.3.2: Please use the native JSON object instead of JSON 3
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression

audited 1768 packages in 31.501s
found 40 vulnerabilities (19 low, 2 moderate, 19 high)
  run `npm audit fix` to fix them, or `npm audit` for details
Fix: run npm audit fix, or npm audit fix --force.

# after npm install succeeds
[root@es1 elasticsearch-head-master]# npm install
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
audited 2067 packages in 4.906s
found 0 vulnerabilities
# Add one line below line 99 of /root/elasticsearch-head/Gruntfile.js (inside the connect server options)
hostname: "*"
# Point the head plugin at your ES node: edit /root/elasticsearch-head/_site/app.js and change head's connection address (the default is http://localhost:9200)
# In elasticsearch.yml, add the following two lines (they allow the plugin to access ES):
http.cors.enabled: true
http.cors.allow-origin: "*"
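After restarting ES (step 2 below), the CORS setting can be spot-checked from the shell; a small sketch (the Origin value is arbitrary):
# the response headers should include Access-Control-Allow-Origin when CORS is on
curl -i -H "Origin: http://example.com" "http://192.168.31.200:9200/"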


  • Step 2
# Restart elasticsearch
[root@MiWiFi-R4CM-srv ~] systemctl restart elasticsearch
# Start elasticsearch-head
[root@MiWiFi-R4CM-srv elasticsearch-head]# npm run start

> elasticsearch-head@0.0.0 start /root/elasticsearch-head
> grunt server

>> Local Npm module "grunt-contrib-jasmine" not found. Is it installed?

Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100

The web UI is then reachable on port 9100, as below:
(screenshot: elasticsearch-head web UI)

3. Logstash

3.1 Installing Logstash

Note: in Logstash input plugins, one line is one event by default. Java stack traces span several lines that all belong to a single event, so multiline merging is needed for them (see the multiline codec example later in this section).

Installing Logstash is very simple: it can be installed from the official yum repository, from an RPM package, or from a source archive. The yum repo file is listed below; this article uses the RPM package downloaded from the official site.

# yum repo file
[root@jenkins2 ~]# cat /etc/yum.repos.d/elasticsearch.repo 
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
  • RPM install
rpm -ivh logstash-7.3.2.rpm

3.2 Common input Plugin Demos

  • The stdin input plugin (standard input)
    Example:
# Configuration file
[root@jenkins2 ~]# cat /etc/logstash/conf.d/test.conf 
input{
  stdin{
  }
}
filter{
}
output{
  stdout{
    codec => rubydebug
  }
}
# Start Logstash with automatic config reload (-r)
[root@jenkins2 ~]# logstash -f /etc/logstash/conf.d/test.conf -r
[INFO ] 2019-09-28 22:59:47.878 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
# Type anything at the prompt; Logstash captures each line as an event and the output plugin prints it back to the screen
{
       "message" => "ddd",
          "host" => "jenkins2.121.1",
      "@version" => "1",
    "@timestamp" => 2019-09-28T14:40:07.835Z
}
dd
{
       "message" => "dd",
          "host" => "jenkins2.121.1",
      "@version" => "1",
    "@timestamp" => 2019-09-28T14:40:13.688Z
}
Note: with automatic reload, edits within the same plugin type work (for example, changing a field inside stdin or inside file), but swapping one plugin type for another is not picked up by hot reload.
  • The file input plugin (read from a file)
    Example:
Because of hot reload, you can edit input_stdin.conf in place and replace its contents with the following:
[root@jenkins2 conf.d]# cat input_file.conf
input{
  file{
    path =>"/var/log/messages"
    tags =>"system log"
    type =>"syslog"
    start_position =>"end"  #此处值可以是,startting 或者是end,当为startting时,会将文件重新输入到logstash中,并输出到屏幕。end是直接将文件最新的写入内容输出至屏幕。
  }
}
filter{
}
output{
  stdout{
    codec => rubydebug
  }
}
  • The tcp input plugin (read from a TCP socket)
    Example:
[root@jenkins2 conf.d]# cat input_tcp.conf 
input{
  tcp{
    port =>12345
    type =>"nc"
}
}
filter{
}
output{
  stdout{
    codec => rubydebug
  }
}
The Logstash host now listens on an extra port, 12345, for TCP connections.
# On another machine, install the nc tool and send a request to the Logstash host: nc <host> <port>
Logstash prints whatever it receives to the screen; see the example below.
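For example, a quick sketch (the IP below is a placeholder for the Logstash host's address):
# each line sent becomes one Logstash event with type "nc"
echo "hello from nc" | nc 192.168.31.183 12345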
  • Common codec plugins
    Example (json codec on stdin):
input{
  stdin{
    codec =>json{
      charset => ["UTF-8"]
    }
  }
}
filter{
}
output{
  stdout{
    codec => rubydebug
  }
}
# Input must be valid JSON or parsing fails:
afafaf
[WARN ] 2019-09-29 00:04:03.923 [[main]<stdin] jsonlines - JSON parse error, original data now in message field {:error=>#<LogStash::Json::ParserError: Unrecognized token 'afafaf': was expecting ('true', 'false' or 'null')
 at [Source: (String)"afafaf"; line: 1, column: 13]>, :data=>"afafaf"}
{
      "@version" => "1",
    "@timestamp" => 2019-09-28T16:04:03.925Z,
          "tags" => [
        [0] "_jsonparsefailure"
    ],
          "host" => "jenkins2.121.1",
       "message" => "afafaf"
}
# The following is correct:
# type this JSON at the prompt: {"name":"sunwei","age":12}
{
      "@version" => "1",
    "@timestamp" => 2019-09-28T16:14:27.987Z,
          "host" => "jenkins2.121.1",
           "age" => 12,
          "name" => "sunwei"
}

# Example: Java logs emit multi-line stack traces; the multiline codec merges them into one event
[root@jenkins2 conf.d]# cat input_file.conf
input{
  stdin{
    codec => multiline {
      pattern => "^\s"
      what => "previous"   #将敲入的字符或字符串与上一行合并
    }
  }
}
filter{
}
output{
  stdout{
    codec => rubydebug
  }
}
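For real Java application logs read from a file, a common variant (a sketch: the path is hypothetical, and the pattern assumes each log record starts with an ISO8601 timestamp) inverts the match with negate, so every line that does not start a new record is appended to the previous event:
input{
  file{
    path => "/var/log/app/app.log"          # hypothetical application log
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"    # a timestamp marks the start of a record
      negate => true                        # lines NOT matching the pattern...
      what => "previous"                    # ...belong to the previous event
    }
  }
}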

3.3 Common filter Plugin Demos

  • The json filter plugin
[root@jenkins2 conf.d]# cat test.conf 
input{
  stdin{
    }
  }
filter{
  json{
    source => "message"
    target => "content"
  }
}
output{
  stdout{
    codec => rubydebug
  }
}
Standard input: {"name":"sunwei","age":12}
Result (the key/value pairs parsed from message are stored under content):
{
    "@timestamp" => 2019-10-01T00:06:37.262Z,
       "message" => "{\"name\":\"sunwei\",\"age\":12}",
       "content" => {
         "age" => 12,
        "name" => "sunwei"
    },
          "host" => "jenkins2.121.1",
      "@version" => "1"
}
# If target is commented out (hot reload did not apply this change for some reason; Logstash had to be restarted), the config becomes:
input{
  stdin{
    }
  }
filter{
  json{
    source => "message"
    #target => "content"
  }
}
output{
  stdout{
    codec => rubydebug
  }
}
Output is as follows (the parsed fields appear at the same level as the built-in fields):
{"name":"sunwei","age":12}
{
          "host" => "jenkins2.121.1",
       "message" => "{\"name\":\"sunwei\",\"age\":12}",
    "@timestamp" => 2019-10-01T00:14:50.016Z,
           "age" => 12,
          "name" => "sunwei",
      "@version" => "1"
}
  • The kv filter plugin (parses key=value pairs; field_split takes a set of separator characters)
input{
  stdin{
    }
  }
filter{
  kv{
    #以0个或者一个&为分隔符
    field_split => "&?"
  }
}
output{
  stdout{
    codec => rubydebug
  }
}
Output:
ping=sunwei123&&&&age1=12
{
      "@version" => "1",
    "@timestamp" => 2019-10-01T00:24:29.426Z,
          "host" => "jenkins2.121.1",
          "age1" => "12",
       "message" => "ping=sunwei123&&&&age1=12",
          "ping" => "sunwei123"
}
ping=sunwei123
{
          "host" => "jenkins2.121.1",
      "@version" => "1",
       "message" => "ping=sunwei123",
    "@timestamp" => 2019-10-01T00:24:12.087Z,
          "ping" => "sunwei123"
}
  • The geoip filter plugin
Configuration:
input{
  stdin{
    }
  }
filter{
  grok{
    match => {
    "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}"
    }
  }
  geoip {
    source => "client"
    database => "/root/GeoLite2-City_20190924/GeoLite2-City.mmdb"
  }
}
output{
  stdout{
    codec => rubydebug
  }
}
Output below: geoip resolves the IP to a country/city and coordinates (latitude/longitude), which the map visualizations later depend on:
114.114.114.114 GET /index.html 234 0.4
{
       "request" => "/index.html",
         "bytes" => "234",
      "@version" => "1",
        "client" => "114.114.114.114",
       "message" => "114.114.114.114 GET /index.html 234 0.4",
    "@timestamp" => 2019-10-01T00:48:15.894Z,
          "host" => "jenkins2.121.1",
        "method" => "GET",
      "duration" => "0.4",
         "geoip" => {
         "country_code2" => "CN",
              "location" => {
            "lat" => 34.7725,
            "lon" => 113.7266
        },
        "continent_code" => "AS",
             "longitude" => 113.7266,
              "timezone" => "Asia/Shanghai",
                    "ip" => "114.114.114.114",
         "country_code3" => "CN",
          "country_name" => "China",
              "latitude" => 34.7725
    }
}
  • The grok filter

grok is essentially pattern matching: Logstash ships a file of named regex patterns for parsing, and you can define your own matching patterns as well.
The built-in pattern file:
rpm -ql logstash|grep grok-patterns
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns

The Logstash filter configuration:
[root@jenkins2 ~]# cat /etc/logstash/conf.d/test.conf 
input{
  stdin{
    }
  }
filter{
  grok{
    # keep the custom pattern file commented out for now
    #patterns_dir => "/opt/patterns"
    match => {
    # the pattern must account for every field of the input line, or parsing fails with _grokparsefailure
    "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}"
    }
  }
}
output{
  stdout{
    codec => rubydebug
  }
}
A successful parse looks like this:
192.168.31.2 GET /index.php 333 0.4
{
       "request" => "/index.php",
    "@timestamp" => 2019-10-01T06:41:52.016Z,
        "client" => "192.168.31.2",
        "method" => "GET",
         "bytes" => "333",
      "duration" => "0.4",
      "@version" => "1",
          "host" => "jenkins2.121.1",
       "message" => "192.168.31.2 GET /index.php 333 0.4"
}
Now add the custom pattern file and try again...
[root@jenkins2 ~]# cat /root/patterns
ID [0-9a-zA-Z]{10,11}
[root@jenkins2 ~]# cat /etc/logstash/conf.d/test.conf 
input{
  stdin{
    }
  }
filter{
  grok{
    # load the custom pattern file created above
    patterns_dir => "/root/patterns"
    match => {
    "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration} %{ID:idd}"  # an ID field appended at the end, captured as idd
    }
  }
}
output{
  stdout{
    codec => rubydebug
  }
}
Since the config now loads the custom pattern file and references it in the match, the input line must also include an ID value:
192.168.31.2 GET /index.php 333 0.4 1111111111A
{
       "request" => "/index.php",
        "client" => "192.168.31.2",
         "bytes" => "333",
    "@timestamp" => 2019-10-01T06:52:48.133Z,
       "message" => "192.168.31.2 GET /index.php 333 0.4 1111111111A",
          "host" => "jenkins2.121.1",
           "idd" => "1111111111A",
      "duration" => "0.4",
        "method" => "GET",
      "@version" => "1"
}
As you can see, the newly added ID value was captured under the field name idd.
A very handy online grok debugger: http://grokdebug.herokuapp.com/

# Next: grok with multiple match patterns
[root@jenkins2 ~]# cat /etc/logstash/conf.d/test.conf 
input{
  stdin{
    }
  }
filter{
  grok {
    patterns_dir => "/root/patterns"
    # note: the multi-pattern syntax differs from the single-pattern form above
    match => [
      "message", "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration} %{ID:idd}",
      "message", "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration} %{TAG:tag}"
    ]
  }
}
output{
  stdout{
    codec => rubydebug
  }
}
Output:
192.168.31.2 GET /index.php 333 0.4 1111111111A
192.168.31.2 GET /index.php 333 0.4 syslog
{
       "request" => "/index.php",
        "method" => "GET",
         "bytes" => "333",
       "message" => "192.168.31.2 GET /index.php 333 0.4 1111111111A",
        "client" => "192.168.31.2",
    "@timestamp" => 2019-10-01T07:44:03.321Z,
      "duration" => "0.4",
           "idd" => "1111111111A",
          "host" => "jenkins2.121.1",
      "@version" => "1"
}

{
       "request" => "/index.php",
        "method" => "GET",
         "bytes" => "333",
       "message" => "192.168.31.2 GET /index.php 333 0.4 syslog",
        "client" => "192.168.31.2",
    "@timestamp" => 2019-10-01T07:44:05.640Z,
      "duration" => "0.4",
           "tag" => "syslog",
          "host" => "jenkins2.121.1",
      "@version" => "1"
}

3.4 output Plugins

  • ES
Example:
output{
	elasticsearch{
			hosts => "localhost:9200"
			index => "log-admin-%{+YYYY.MM.dd}"
	}
}
A fuller configuration:
output {
  # route events with if statements
  if [type] == "system" {
    if [tags][0] == "syslog"{
      # events matching the conditions above go to the elasticsearch output
      elasticsearch {
        hosts => ["http://192.168.31.183:9200"]
        index => "logstash-system-syslog-%{+YYYY.MM.dd}"
      }
      # and, at the same time, to standard output
      stdout { codec=>rubydebug }
    }
    else if [tags][0] == "auth" {
      elasticsearch {
        hosts => ["http://192.168.31.183:9200"]
        index => "logstash-system-auth-%{+YYYY.MM.dd}"
      }
      stdout { codec=>rubydebug }
    }
  }
  else if [type] == "sunweitest"{
    elasticsearch {
      hosts => ["http://192.168.31.183:9200"]
      index => "sunweitest-testlog"
    }
    stdout {codec=>rubydebug}
  }
  else if [type] == "httpd"{
    elasticsearch {
      hosts => ["http://192.168.31.183:9200"]
      index => "logstash-httpd-%{+YYYY.MM.dd}"
    }
    stdout {codec=>rubydebug}
  }
}

4. Kibana

4.1 安装&配置&启动

It is best to keep the Kibana version aligned with the ES version; otherwise the Kibana web UI throws errors on load (which is exactly what happened to me). Also, before starting Kibana, check the ES cluster status and make sure it is green.

  • Download the RPM package
    kibana-6.2.3-x86_64.rpm from the official site
  • Install the RPM
[root@jenkins2 ~]# rpm -ivh kibana-6.2.3-x86_64.rpm 
warning: kibana-6.2.3-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
	package kibana-6.2.3-1.x86_64 is already installed
  • Configure Kibana's connection to ES
[root@jenkins2 ~]# cat /etc/kibana/kibana.yml|grep -v "^#"|grep -v "^$"
# Kibana listen port
server.port: 5601
# listen on all local addresses
server.host: "0.0.0.0"
# the ES address (in Kibana 6.x, elasticsearch.url takes a single URL; point it at any node of the cluster)
elasticsearch.url: "http://192.168.31.183:9200"
  • Start Kibana
    [root@jenkins2 ~]# systemctl start kibana

4.2 A Complete Elasticsearch + Logstash + Kibana Example

The complete Logstash configuration:

input {
 file {
    path => ["/var/log/messages"]
    type => "system"
    tags => ["syslog","sunweitest"]
    start_position => "beginning"    
  }
  file {
    path => ["/etc/httpd/logs/access_log"]
    type => "httpd"
    tags => ["http"]
    start_position => "beginning"    
  }
  file {
    path => ["/var/log/audit/audit.log"]
    type => "system"
    tags => ["auth","sunweitest"]
    start_position => "beginning"    
  } 
  file {
    path => ["/root/sunweitest"]
    type => "sunweitest"
    tags => ["sunweitest"]
    start_position => "end"
  }
}
filter {

} 
output {
  if [type] == "system" {
    if [tags][0] == "syslog"{
      elasticsearch {
        hosts => ["http://192.168.31.183:9200"]
        index => "logstash-system-syslog-%{+YYYY.MM.dd}"
      }
      stdout { codec=>rubydebug }
    }
    else if [tags][0] == "auth" {
      elasticsearch {
        hosts => ["http://192.168.31.183:9200"]
        index => "logstash-system-auth-%{+YYYY.MM.dd}"
      }
      stdout { codec=>rubydebug }
    }
  }
  else if [type] == "sunweitest"{
    elasticsearch {
      hosts => ["http://192.168.31.183:9200"]
      index => "sunweitest-testlog"
    }
    stdout {codec=>rubydebug}
  }
  else if [type] == "httpd"{
    elasticsearch {
      hosts => ["http://192.168.31.183:9200"]
      index => "logstash-httpd-%{+YYYY.MM.dd}"
    }
    stdout {codec=>rubydebug}
  }
}
# Start Logstash
[root@jenkins2 ~]# logstash -f /etc/logstash/conf.d/logtoelastic.conf -r
  • Log in to ES and check whether indices were created
# Indices have been created; ignore the yellow status for now (yellow just means replica shards are unassigned) and troubleshoot it later
[root@elk_lnmp ~]# curl -X GET http://192.168.31.183:9200/_cat/indices?v
health status index                             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   logstash-system-syslog-2019.10.01 LLp2M9mSQVezBM6k2ZKxbQ   5   1          1            0      7.8kb          7.8kb
green  open   .kibana                           tpLcqbEwSPOYiBeeMi1Wag   1   0          6            1     31.3kb         31.3kb
yellow open   logstash-httpd-2019.10.02         c5nd4eQvRiuRM5-FC2iZiQ   5   1         11            0     47.6kb         47.6kb
yellow open   sunweitest-testlog                zVnW3ynaSRC37LIJJ19Vzw   5   1          3            0     21.3kb         21.3kb
yellow open   logstash-system-auth-2019.10.01   5VXULR5JRKmP877KYnBGug   5   1          9            0       41kb           41kb
yellow open   logstash-system-auth-2019.10.02   PHvU3v1MQ_iZ5_8xLi3pug   5   1         18            0     57.8kb         57.8kb
  • Log in to Kibana and take a look
    Web address: http://192.168.31.57:5601

4.3 Setting Up Access Authentication for Kibana with nginx

The product features covered by each Elastic subscription tier can be checked at https://www.elastic.co/cn/subscriptions

server {
        # listen on port 9797 on this host, forwarding to the Kibana node
        listen       192.168.31.183:9797;
        location / {
            # reverse proxy to the real Kibana node; authenticate before forwarding
            proxy_pass http://192.168.31.57:5601;
            # prompt shown to visitors
            auth_basic "please input your username and password...";
            # password file
            auth_basic_user_file /root/passwd.db;
        }
    }
# passwords can be hashed for storage with: openssl passwd -crypt <password>
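A sketch of building that password file (the username is an assumption for illustration; nginx basic auth accepts crypt-hashed entries):
# append a user "kibana" with a crypt-hashed password to /root/passwd.db
printf 'kibana:%s\n' "$(openssl passwd -crypt 123456)" >> /root/passwd.db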

After configuring, browse to the proxy address; the basic-auth prompt appears before Kibana loads.

5. Introducing Redis

The architecture below improves on the earlier one by adding a buffering layer: one Logstash collects the data and pushes it into Redis, and a second Logstash consumes from Redis and writes the events into Elasticsearch. This relieves a great deal of pressure on ES.
(figure: ELK architecture with a Redis buffer between the shipping and indexing Logstash instances)

5.1 Lab Architecture Diagram

5.2 Installation and Deployment

  • logstash-to-redis.conf
input{
  file{
    path =>["/var/log/httpd/access_log"]
    type =>"system"
    tags =>["syslog","test"]
    start_position =>"beginning"
  }
  file{
    path =>["/opt/sunwei"]
    type =>"system"
    tags =>["auth","test"]
    start_position =>"beginning"
  }
}
filter{
}
output{
  redis{
    host => "192.168.31.57:6379"
    password => "123456"
    data_type => "list"
    key => "logstash:redis"
  }
}
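Before starting the consumer, it is worth confirming that events are piling up in Redis; a quick sketch using redis-cli against the list key configured above:
# the list length grows as the shipper pushes events
redis-cli -h 192.168.31.57 -a 123456 LLEN logstash:redis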
  • logstash-from-redis.conf
input{
  redis{
    host => "192.168.31.57"
    port => "6379"
    password => "123456"
    db => "0"
    data_type => "list"
    key => "logstash:redis"
  }
}
filter{

}
output{
  if [type] == "system" {
    if [tags][0] == "syslog" {
      elasticsearch {
        hosts => ["http://192.168.31.147:9200","http://192.168.31.35:9200","http://192.168.31.183:9200"]
        index => "logstash-system-syslog-%{+YYYY.MM.dd}"
      }
      stdout {codec => rubydebug}
    }
    else if [tags][0] == "auth" {
      elasticsearch {
        hosts => ["http://192.168.31.147:9200","http://192.168.31.35:9200","http://192.168.31.183:9200"]
        index => "logstash-system-auth-%{+YYYY.MM.dd}"
      }
      stdout {codec => rubydebug}
    }
  }
}

5.3 Verification
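End to end, the pipeline can be verified with the same tools used earlier (IPs taken from the configs in 5.2): the Redis list should drain once the consumer runs, and the target indices should appear in ES. A sketch:
# queue length should trend toward 0 while the consumer is running
redis-cli -h 192.168.31.57 -a 123456 LLEN logstash:redis
# the syslog/auth indices should show up with growing doc counts
curl -X GET "http://192.168.31.147:9200/_cat/indices?v"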

6. Troubleshooting

Problem: a Filebeat deployed on a production machine hit a bug reading multiline logs, filled the machine's memory, triggered an OOM, and caused the machine to restart.
Fix: put the service process under systemd control; a sketch follows.
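A minimal sketch of what that control can look like, as a systemd drop-in (the unit name matches Filebeat's standard service; the memory cap value is an assumption):
# /etc/systemd/system/filebeat.service.d/limits.conf
[Service]
# cap cgroup memory (use MemoryLimit= on older systemd without cgroup v2)
MemoryMax=200M
# restart automatically if the process dies
Restart=always
RestartSec=5
# apply with: systemctl daemon-reload && systemctl restart filebeat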
