002 - ELKF Deployment, Installation, and Configuration

ELK

1. Preparing the Environment

1.1 Download the installation packages

ELK package download page

Choose Linux x86_64

Download the following packages:

elasticsearch-7.16.3-linux-x86_64.tar.gz

kibana-7.16.3-linux-x86_64.tar.gz

logstash-7.16.3-linux-x86_64.tar.gz

filebeat-7.16.3-linux-x86_64.tar.gz

1.2 rsyslog

Use rsyslog to collect all system logs into a single file.

[appview@db03 ~]$ ip a | grep 'inet 192.1'
    inet 192.168.75.36/24 brd 192.168.75.255 scope global noprefixroute ens33
    
[appview@db03 ~]$ sudo yum -y install rsyslog

[appview@db03 ~]$ rpm -aq |grep rsyslog
rsyslog-8.24.0-57.el7_9.3.x86_64

[appview@db03 ~]$ ls /etc/rsyslog.d/
listen.conf
[appview@db03 ~]$ cat /etc/rsyslog.d/listen.conf 
$SystemLogSocketName /run/systemd/journal/syslog

[root@db03 ~]# vim /etc/rsyslog.conf 
$ModLoad imudp
$UDPServerRun 514

*.*                     /var/log/edon.log


[root@db03 ~]# systemctl restart rsyslog.service 
[root@db03 ~]# netstat -tunlp | grep 514
udp        0      0 0.0.0.0:514             0.0.0.0:*                           2048/rsyslogd       
udp6       0      0 :::514                  :::*                                2048/rsyslogd       

[root@db03 ~]# ll /var/log/edon.log 
-rw-------. 1 root root 779 716 23:47 /var/log/edon.log
[root@db03 ~]# tailf /var/log/edon.log

# Write a test message to the log
[appview@db03 ~]$ logger 'elk测试数据'
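To confirm that rsyslog is writing local messages and also accepting remote ones over UDP 514, a quick check along these lines can be used (a sketch; the -n/-P/-d options are from the util-linux logger, and the search string is the test message sent above):

# Check that the test message reached the catch-all file
grep 'elk测试数据' /var/log/edon.log

# Send a message through the UDP listener as well, then look at the tail of the file
logger -n 127.0.0.1 -P 514 -d 'udp test message'
tail -n 2 /var/log/edon.log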

(screenshots: assets/image-20230716225704063.png, image-20230716225751289.png, image-20230716225830840.png, image-20230716225948395.png, image-20230716230053270.png, image-20230716230137545.png, image-20230716231946508.png)

1.3 Install the JDK

[root@db03 ~]# tar xf jdk-20_linux-x64_bin.tar.gz 
[root@db03 ~]# ll
总用量 187080
-rw-------. 1 root root      1421 1030 2022 anaconda-ks.cfg
drwxr-xr-x. 9 root root       136 717 00:20 jdk-20.0.1
-rw-r--r--. 1 root root 191562615 717 00:05 jdk-20_linux-x64_bin.tar.gz
[root@db03 ~]# mv jdk-20.0.1 /usr/local/src/
[root@db03 ~]# cd /usr/local/src/jdk-20.0.1/
[root@db03 jdk-20.0.1]# cd bin/
[root@db03 bin]# ./java -version
java version "20.0.1" 2023-04-18
Java(TM) SE Runtime Environment (build 20.0.1+9-29)
Java HotSpot(TM) 64-Bit Server VM (build 20.0.1+9-29, mixed mode, sharing)

# Shell history: JDK 11 was installed the same way and is the version referenced in /etc/profile below
79  tar xf jdk-11.0.1_linux-x64_bin.tar.gz 
80  ll
81  mv jdk-11.0.1 /usr/local/src/
82  cd /usr/local/src/
83  ll
84  vim /etc/profile

[root@db03 bin]# vim /etc/profile
# JAVA_HOME=/usr/local/src/jdk-20.0.1
JAVA_HOME=/usr/local/src/jdk-11.0.1
PATH=$JAVA_HOME/bin:$PATH:$HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export PATH JAVA_HOME CLASSPATH CATALINA_HOME
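To apply the new variables in the current shell and confirm which JDK is picked up, something like the following can be run (assumes the /etc/profile entries above):

# Reload the profile and verify the active JDK
source /etc/profile
echo $JAVA_HOME
java -version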

1.4 Install Filebeat

ip:192.168.75.36

[appview@elk01 ~]$ tar xf elasticsearch-7.16.3-linux-x86_64.tar.gz 
[appview@elk01 ~]$ ll
总用量 304032
drwxrwxr-x. 2 appview appview         6 716 18:43 app
drwxr-xr-x. 9 appview appview       155 17 2022 elasticsearch-7.16.3
-rw-rw-r--. 1 appview appview 311327254 716 21:58 elasticsearch-7.16.3-linux-x86_64.tar.gz
[appview@elk01 ~]$ mv elasticsearch-7.16.3 app/
[appview@elk01 ~]$ cd app/elasticsearch-7.16.3/
[appview@elk01 elasticsearch-7.16.3]$ ll
[appview@db03 ~]$ curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.16.3-linux-x86_64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 34.2M  100 34.2M    0     0  6872k      0  0:00:05  0:00:05 --:--:-- 8156k


[appview@db03 ~]$ tar xf filebeat-7.16.3-linux-x86_64.tar.gz 
[appview@db03 ~]$ ll
总用量 35092
drwxrwxr-x. 5 appview appview      212 717 06:44 filebeat-7.16.3-linux-x86_64
-rw-rw-r--. 1 appview appview 35932836 717 06:43 filebeat-7.16.3-linux-x86_64.tar.gz

[appview@db03 ~]$ cd filebeat-7.16.3-linux-x86_64/
[appview@db03 filebeat-7.16.3-linux-x86_64]$ ll
总用量 128052
-rw-r--r--.  1 appview appview   3778847 17 2022 fields.yml
-rwxr-xr-x.  1 appview appview 125167328 17 2022 filebeat
-rw-r--r--.  1 appview appview    166534 17 2022 filebeat.reference.yml
-rw-------.  1 appview appview      8273 17 2022 filebeat.yml
drwxr-xr-x.  3 appview appview        15 17 2022 kibana
-rw-r--r--.  1 appview appview     13675 17 2022 LICENSE.txt
drwxr-xr-x. 76 appview appview      4096 17 2022 module
drwxr-xr-x.  2 appview appview      4096 17 2022 modules.d
-rw-r--r--.  1 appview appview   1964303 17 2022 NOTICE.txt
-rw-r--r--.  1 appview appview       814 17 2022 README.md


[appview@db03 ~]$ mkdir app
[appview@db03 ~]$ mv filebeat-7.16.3-linux-x86_64 app/

[appview@db03 ~]$ cd app/filebeat-7.16.3-linux-x86_64/

# Edit the configuration file
[appview@db03 filebeat-7.16.3-linux-x86_64]$ egrep -v '#|^$' ~/app/filebeat-7.16.3-linux-x86_64/filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/edon.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
output.elasticsearch:
  hosts: ["192.168.75.32:9200"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
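Before starting Filebeat, its built-in test subcommands can be used to verify that the configuration parses and that the Elasticsearch output is reachable (optional, run from the Filebeat directory):

# Validate the configuration file
./filebeat test config -c filebeat.yml

# Check connectivity to the configured output (Elasticsearch at 192.168.75.32:9200)
./filebeat test output -c filebeat.yml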

1.5 Install Elasticsearch

1.5.1 Installation

ip:192.168.75.32

$ mkdir app
$ tar xf elasticsearch-7.16.3-linux-x86_64.tar.gz -C app/
$ cd app/elasticsearch-7.16.3/config/
$ cp elasticsearch.yml{,.bak}

# In elasticsearch.yml, uncomment the following line and remove "node-2":
# cluster.initial_master_nodes: ["node-1", "node-2"]
[appview@elk02 config]$ egrep -v '#|^$' elasticsearch.yml
cluster.name: my-application
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
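As an aside, for a one-node setup like this one, Elasticsearch 7.x also accepts `discovery.type: single-node` in place of `cluster.initial_master_nodes`. A minimal sketch of that alternative elasticsearch.yml:

cluster.name: my-application
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node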

1.5.2 Fixing startup errors

[ERROR][o.e.b.Bootstrap ] [node-1] node validation exception
[2] bootstrap checks failed. You must address the points described in the following [2] lines before starting Elasticsearch.
bootstrap check failure [1] of [2]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
bootstrap check failure [2] of [2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

This error means two bootstrap checks must pass before Elasticsearch will start:

  1. max file descriptors too low: the Elasticsearch process's file descriptor limit is 4096; raise it to at least 65535.

  2. max virtual memory areas (vm.max_map_count) too low: the current value of 65530 must be raised to at least 262144.

To fix both checks:

  1. Raise the maximum number of file descriptors:

    • Open /etc/security/limits.conf:

      sudo vim /etc/security/limits.conf
      
    • Add the following lines at the end of the file (or update the existing entries):

      *    soft    nofile    65536
      *    hard    nofile    65536
      *    soft    nproc     65536
      *    hard    nproc     65536
      
    • Save and close the file.

  2. Raise the maximum number of virtual memory areas:

    • Open /etc/sysctl.conf:

      sudo vim /etc/sysctl.conf
      
    • Add the following line at the end of the file (or update the existing entry):

      vm.max_map_count=655360
      
    • Save and close the file.

  3. Apply the changes:

    • Load the new kernel setting:

      sudo /sbin/sysctl -p
      
    • Restart Elasticsearch. (This guide installs Elasticsearch from the tarball, so there is no systemd unit; stop any running process and start it again from bin/elasticsearch as shown in 1.5.3.)

Note: the limits.conf change only applies to new sessions, so reconnect (log in again) before restarting Elasticsearch.

These steps raise the system limits and kernel settings to what Elasticsearch requires at startup. Once applied, Elasticsearch should start normally; if the checks still fail, re-verify the steps above and the syntax of the edited files.
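After reconnecting, the new limits and the kernel setting can be verified (and vm.max_map_count applied immediately without a reboot) with commands like:

# Verify the per-user limits in the new session
ulimit -n
ulimit -u

# Verify / apply the kernel setting right away
sysctl vm.max_map_count
sudo sysctl -w vm.max_map_count=655360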

1.5.3 Start Elasticsearch

[appview@elk02 ~]$ cd app/elasticsearch-7.16.3/bin/
[appview@elk02 bin]$ ./elasticsearch -d

[appview@elk02 bin]$ netstat -tunlp | grep 92*
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::9200                 :::*                    LISTEN      2539/java           
tcp6       0      0 :::9300                 :::*                    LISTEN      2539/java           


[appview@elk02 bin]$ ps -ef |grep el
appview    2539      1 44 14:21 pts/1    00:00:35 /home/appview/app/elasticsearch-7.16.3/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -XX:+ShowCodeDetailsInExceptionMessages -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j2.formatMsgNoLookups=true -Djava.locale.providers=SPI,COMPAT --add-opens=java.base/java.io=ALL-UNNAMED -XX:+UseG1GC -Djava.io.tmpdir=/tmp/elasticsearch-8404053089288380777 -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Xms1885m -Xmx1885m -XX:MaxDirectMemorySize=988807168 -XX:G1HeapRegionSize=4m -XX:InitiatingHeapOccupancyPercent=30 -XX:G1ReservePercent=15 -Des.path.home=/home/appview/app/elasticsearch-7.16.3 -Des.path.conf=/home/appview/app/elasticsearch-7.16.3/config -Des.distribution.flavor=default -Des.distribution.type=tar -Des.bundled_jdk=true -cp /home/appview/app/elasticsearch-7.16.3/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
appview    2566   2539  0 14:21 pts/1    00:00:00 /home/appview/app/elasticsearch-7.16.3/modules/x-pack-ml/platform/linux-x86_64/bin/controller


[appview@elk02 bin]$ curl -X GET 192.168.75.32:9200
{
  "name" : "node-1",
  "cluster_name" : "my-application",
  "cluster_uuid" : "EUFNxa6yT1ezmyC1ML__aA",
  "version" : {
    "number" : "7.16.3",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "4e6e4eab2297e949ec994e688dad46290d018022",
    "build_date" : "2022-01-06T23:43:02.825887787Z",
    "build_snapshot" : false,
    "lucene_version" : "8.10.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
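Beyond the banner above, the cluster health API gives a quick status overview (a yellow status here just means replica shards cannot be allocated on a single node):

curl -X GET '192.168.75.32:9200/_cluster/health?pretty'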

(screenshot: assets/image-20230717143132398.png)

1.6 Install Kibana

ip:192.168.75.33

[appview@elk03 ~]$ mkdir app
[appview@elk03 ~]$ 
[appview@elk03 ~]$ 
[appview@elk03 ~]$ tar xf kibana-7.16.3-linux-x86_64.tar.gz -C app/
[appview@elk03 ~]$ 
[appview@elk03 ~]$ 
[appview@elk03 ~]$ cd app/kibana-7.16.3-linux-x86_64/config/
[appview@elk03 config]$ 
[appview@elk03 config]$ ls
kibana.yml  node.options
[appview@elk03 config]$ 
[appview@elk03 config]$ cp kibana.yml{,.bak}
[appview@elk03 config]$ ll
总用量 20
-rw-r--r--. 1 appview appview 5243 17 2022 kibana.yml
-rw-r--r--. 1 appview appview 5243 717 14:45 kibana.yml.bak
-rw-r--r--. 1 appview appview  305 17 2022 node.options

[appview@elk03 config]$ egrep -v '#|^$' kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.75.32:9200"]
kibana.index: ".kibana"
i18n.locale: "zh-CN"

[appview@elk03 bin]$ ./kibana
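Once Kibana has finished starting (this can take a minute), it can be checked from the command line before opening a browser; /api/status is Kibana's built-in status endpoint:

# Confirm the port is listening and the status endpoint responds
netstat -tunlp | grep 5601
curl -I http://192.168.75.33:5601/api/status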

(screenshots: assets/image-20230718000943919.png, image-20230718001109927.png)

1.7 Install Apache

ELK log collection flow (for the Apache logs):

  1. Configure the Apache log format so the access log is written as JSON.
  2. Filebeat reads the log and ships the JSON entries to Elasticsearch.
  3. Kibana reads and visualizes the data from Elasticsearch.

(screenshot: assets/image-20230718205237098.png)

1.7.1 View / delete Elasticsearch indices

[appview@elk02 bin]$ curl 127.0.0.1:9200/_cat/indices
green  open .kibana_7.16.3_001                6a3_3bUTQga8KJcT9-HaVA 1 0  658   20  2.4mb  2.4mb
green  open .geoip_databases                  EcLwae0JQuCcK0Imp9t6KQ 1 0   42   11 50.2mb 50.2mb
green  open .apm-custom-link                  j2t5z-oPSNuVd02MzN68aQ 1 0    0    0   226b   226b
yellow open filebeat-7.16.3-2023.07.17-000001 MC80Tul_Qm6Pf0IUIO5XbQ 3 1 6891    0  1.4mb  1.4mb
green  open .apm-agent-configuration          g5fwFyoYTYmXkwrqyt1djA 1 0    0    0   226b   226b
green  open .kibana_task_manager_7.16.3_001   DiWAOc7aSl-saFKB7lTcuQ 1 0   17 1275  2.6mb  2.6mb
green  open .async-search                     3sUUutGXQyq5obOCMHagng 1 0    0    0   249b   249b
green  open .tasks                            nUiOAMr-S-axPRJgUJuW7A 1 0   20    0 52.8kb 52.8kb

# Delete an Elasticsearch index
[appview@elk02 bin]$ curl -XDELETE 127.0.0.1:9200/filebeat-7.16.3-2023.07.17-000001
{"acknowledged":true}[appview@elk02 bin]$ 
[appview@elk02 bin]$ curl 127.0.0.1:9200/_cat/indices
green open .geoip_databases                EcLwae0JQuCcK0Imp9t6KQ 1 0  42   11 50.2mb 50.2mb
green open .kibana_7.16.3_001              6a3_3bUTQga8KJcT9-HaVA 1 0 658   20  2.4mb  2.4mb
green open .apm-custom-link                j2t5z-oPSNuVd02MzN68aQ 1 0   0    0   226b   226b
green open .apm-agent-configuration        g5fwFyoYTYmXkwrqyt1djA 1 0   0    0   226b   226b
green open .kibana_task_manager_7.16.3_001 DiWAOc7aSl-saFKB7lTcuQ 1 0  17 1494  2.6mb  2.6mb
green open .async-search                   3sUUutGXQyq5obOCMHagng 1 0   0    0   249b   249b
green open .tasks                          nUiOAMr-S-axPRJgUJuW7A 1 0  20    0 52.8kb 52.8kb

1.7.2 Install httpd

[appview@db03 filebeat]$ sudo yum -y install httpd
sudo systemctl start httpd
sudo chown -R appview:appview httpd/
./filebeat -e -c filebeat.yml

(screenshots: assets/image-20230718222559657.png, image-20230720214250724.png, image-20230720214447623.png)

1.7.3 Edit the configuration file

# Edit the Apache config: define a JSON LogFormat and write access_log in that format
[appview@db03 ~]$ sudo vim /etc/httpd/conf/httpd.conf

   LogFormat "{ \
                \"time\":\"%{%Y-%m-%d %H:%M:%S}t\", \
                \"client_ip\":\"%a\", \
                \"request\":\"%r\", \
                \"status\":\"%>s\", \
                \"bytes\":\"%b\", \
                \"referer\":\"%{Referer}i\", \
                \"user_agent\":\"%{User-Agent}i\" \
                }" apache_json_format



    CustomLog "logs/access_log" apache_json_format

(screenshot: assets/image-20230721005527923.png)

1.7.4 Restart the service

# Restart for the config change to take effect
[appview@db03 ~]$ sudo systemctl restart httpd.service 
[appview@db03 ~]$ sudo systemctl status httpd.service

[appview@db03 ~]$ tailf /var/log/httpd/access_log 
# The log entries are now written in JSON format
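If jq is available, each new entry can be checked for valid JSON (an optional verification; assumes jq is installed):

# Generate a request, then validate the newest log entry as JSON
curl -s http://127.0.0.1/ > /dev/null
sudo tail -n 1 /var/log/httpd/access_log | jq .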

(screenshots: assets/image-20230721005631604.png, image-20230720222428781.png, image-20230720232604414.png)

Modify the Filebeat configuration

# Edit filebeat.yml
$ vim filebeat.yml

  json.keys_under_root: true
  json.overwrite_keys: true
  
  
filebeat.inputs:
- type: log
  paths:
    - /path/to/your/apache/logfile.log
  json.message_key: message   # message_key is used when merging multi-line JSON logs; if set, the multiline options must be configured as well
  json.keys_under_root: true  # keys_under_root places the decoded JSON fields at the root of the event (default: false)
  json.overwrite_keys: true   # on key name conflicts, overwrite the existing field values
  json.add_error_key: true    # store JSON parsing errors in the error.message field
# Configuration on the .36 server (db03)
[appview@db03 filebeat]$ egrep -v '#|^$' filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log
  json.message_key: message
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["192.168.75.32:9200"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~  
  
# Configuration on the .32 server (elk02)
[appview@elk02 config]$ egrep -v '#|^$' elasticsearch.yml
cluster.name: my-application
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node-1"]  

# Configuration on the .33 server (elk03)
[appview@elk03 config]$ egrep -v '#|^$' kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.75.32:9200"]
kibana.index: ".kibana"
i18n.locale: "zh-CN"

(screenshot: assets/image-20230721003309184.png)

Start Filebeat

# Start Filebeat in the foreground
[appview@db03 filebeat]$ ./filebeat -e -c filebeat.yml

(screenshots: assets/image-20230720232413760.png, image-20230721005814365.png, image-20230721232251783.png, image-20230721232521807.png, image-20230721232514066.png)

1.8 Install NGINX

(The NGINX installation and verification steps are shown in screenshots: assets/image-20230724225612528.png, image-20230724230222954.png, image-20230724231820915.png, image-20230724232359439.png, image-20230725061522043.png, image-20230725061548527.png, image-20230725062518839.png)

1.9 Collecting multiple logs with Filebeat

To collect an additional log, add another input block like the following:

- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log 
  json.message_key: message
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true

Example configuration

[appview@db03 filebeat]$ egrep -v "#|^$" filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log 
  json.message_key: message
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log 
  json.message_key: message
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
setup.template.name: "web_edon_com"
setup.template.pattern: "web_edon_com_nginx_"
setup.ilm.enabled: false
output.elasticsearch:
  hosts: ["192.168.75.32:9200"]
  index: "web_edon_com_nginx_%{+yyyy.MM.dd}"
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

Rename the NGINX log fields

(screenshot: assets/image-20230730000322130.png)

Prefixing the NGINX fields (Ng_*) makes them easy to tell apart in the Kibana display.

    # Define a log_format named json_format that writes each entry as JSON
    log_format json_format '{"timestamp":"$time_iso8601", "Ng_remote_addr":"$remote_addr", "Ng_remote_user":"$remote_user", "Ng_request":"$request", "Ng_status":$status, "Ng_request_time":$request_time, "Ng_body_bytes_sent":$body_bytes_sent, "Ng_http_referer":"$http_referer", "Ng_http_user_agent":"$http_user_agent"}';

    access_log /var/log/nginx/access.log json_format;

# Validate the configuration and reload NGINX
nginx -t
nginx -s reload

[root@db03 ~]# tailf /var/log/nginx/access.log
{"timestamp":"2023-07-30T00:04:35+08:00", "Ng_remote_addr":"192.168.75.1", "Ng_remote_user":"-", "Ng_request":"GET / HTTP/1.1", "Ng_status":304, "Ng_request_time":0.000, "Ng_body_bytes_sent":0, "Ng_http_referer":"-", "Ng_http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36"}
{"timestamp":"2023-07-30T00:04:35+08:00", "Ng_remote_addr":"192.168.75.1", "Ng_remote_user":"-", "Ng_request":"GET / HTTP/1.1", "Ng_status":304, "Ng_request_time":0.000, "Ng_body_bytes_sent":0, "Ng_http_referer":"-", "Ng_http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36"}

1.10 Modify Filebeat to set custom Elasticsearch index names

When Elasticsearch collects logs from multiple servers, Filebeat can route each source into its own custom index name.

# Define the log files to monitor and their settings
filebeat.inputs:
# Input type; add one `- type` block per application/log source
- type: log 
  # Enable this input
  enabled: true
  # Paths of the log files to monitor (multiple paths may be listed)
  paths:
    - /var/log/nginx/access.log 
  # Place the decoded JSON fields at the root of the event
  json.keys_under_root: true
  # On key name conflicts, overwrite the existing field values
  json.overwrite_keys: true  
  # Tag events with their source
  tags: ["nginx"]
  
- type: log 
  enabled: true
  paths:
    - /var/log/httpd/access_log 
  json.keys_under_root: true
  json.overwrite_keys: true  
  tags: ["httpd"]

# Elasticsearch output settings
output.elasticsearch:
  # Elasticsearch host and port
  hosts: ["192.168.75.32:9200"]
  # Route events to different indices based on their tags
  indices:
    # Index name for nginx-tagged events, with a daily date suffix, e.g. `nginx-access-2023.08.01`
    - index: "nginx-access-%{+yyyy.MM.dd}"
      # Use this index when the event's tags contain `nginx`
      when.contains:
        tags: "nginx"
    # Index name for httpd-tagged events, with a daily date suffix, e.g. `httpd-access-2023.08.01`
    - index: "httpd-access-%{+yyyy.MM.dd}"
      # Use this index when the event's tags contain `httpd`
      when.contains:
        tags: "httpd"
        
# Name of the index template
setup.template.name: "test"
# Pattern of indices the template applies to (everything starting with test-)
setup.template.pattern: "test-*"
# Whether Filebeat should load the index template (false: do not load it)
setup.template.enabled: false
# Whether an existing index template may be overwritten (true: allow overwriting)
setup.template.overwrite: true

[appview@db03 filebeat]$ egrep -v "#|^$" filebeat.yml
filebeat.inputs:
- type: log 
  enabled: true
  paths:
    - /var/log/nginx/access.log 
  json.keys_under_root: true
  json.overwrite_keys: true  
  tags: ["nginx"]
  
- type: log 
  enabled: true
  paths:
    - /var/log/httpd/access_log 
  json.keys_under_root: true
  json.overwrite_keys: true  
  tags: ["httpd"]
output.elasticsearch:
  hosts: ["192.168.75.32:9200"]
  indices:
    - index: "test-nginx-access-%{+yyyy.MM.dd}"
      when.contains:
        tags: "nginx"
    - index: "test-httpd-access-%{+yyyy.MM.dd}"
      when.contains:
        tags: "httpd"
setup.template.name: "test"
setup.template.pattern: "test-*"
setup.template.enabled: false
setup.template.overwrite: true

# Restart the Filebeat service
[appview@db03 filebeat]$ ./filebeat -e -c filebeat.yml
[appview@elk02 ~]$ curl 127.0.0.1:9200/_cat/indices | grep test
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   980  100   980    0     0   141k      0 --:--:-- --:--:-- --:--:--  159k
yellow open test-nginx-access-2023.08.01      DOq_Q3kjQBGRY4jx4udzGg 1 1  11      0   81kb   81kb
yellow open test-httpd-access-2023.08.01      wu46jrRbTnO5uBemEQJTvQ 1 1  15      0 58.3kb 58.3kb

(screenshot: assets/image-20230801232609959.png)

1.11 Logstash

(screenshot: assets/image-20230730183704662.png)

Enable the MySQL slow query log

# Parameter notes:
slow_query_log: whether the slow query log is enabled (ON = enabled, OFF = disabled)
slow_query_log_file: where the slow query log is written (the directory must be readable and writable by the MySQL service account; usually the MySQL data directory)
long_query_time: only queries running longer than this many seconds are logged

# Note: the log directory must be writable by the mysql user; create it if it does not exist
[root@c1 data]# egrep -v "#|^$" /application/mysql/my.cnf 
[mysqld]
slow_query_log = 1
slow_query_log_file = "/application/mysql/data/web-slow.log"
log_queries_not_using_indexes = 1
long_query_time = 2
secure_file_priv = ''
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES 
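After restarting MySQL, the slow-log settings can be confirmed from inside MySQL; the values shown should match the my.cnf above:

mysql -uroot -p -e "show variables like 'slow_query%'; show variables like 'long_query_time';"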

[root@c1 mysql]# /etc/init.d/mysqld restart 
[root@c1 ~]# seq 1 19999999 > /tmp/big

[root@c1 ~]# mysql -u root -pedon111
mysql> load data infile '/tmp/big' into table db1.t1;
ERROR 1290 (HY000): The MySQL server is running with the --secure-file-priv option so it cannot execute this statement

# First check the secure_file_priv parameter inside MySQL.
# (The query below returns an empty set because the LIKE pattern uses dashes; the variable name is actually secure_file_priv, with underscores.)
mysql> show variables like 'secure-file-priv';
Empty set (0.00 sec)

Fixing the error:

ERROR 1290 (HY000): The MySQL server is running with the --secure-file-priv option so it cannot execute this statement

# Add the secure_file_priv parameter to my.cnf and restart MySQL
[root@c1 ~]# egrep -v "#|^$" /application/mysql/my.cnf | grep secure_file_priv
secure_file_priv = ''

[root@c1 ~]# /etc/init.d/mysqld restart 
[root@c1 ~]# mysql -u root -pedon111

mysql> load data infile '/tmp/big' into table db1.t1;
Query OK, 19999999 rows affected (25.77 sec)
Records: 19999999  Deleted: 0  Skipped: 0  Warnings: 0

Test the slow query log

# Test the slow query log
mysql> create database db1;
Query OK, 1 row affected (0.00 sec)

mysql> create table db1.t1(id int(10)not null)engine=innodb;
Query OK, 0 rows affected (0.01 sec)

mysql> load data infile '/tmp/big' into table db1.t1;
mysql> select * from db1.t1 where id = '13999';
+-------+
| id    |
+-------+
| 13999 |
| 13999 |
| 13999 |
| 13999 |
| 13999 |
| 13999 |
| 13999 |
| 13999 |
+-------+
8 rows in set (39.79 sec)

[root@c1 data]# tailf /application/mysql/data/web-slow.log 
/application/mysql-5.6.49/bin/mysqld, Version: 5.6.49-log (Source distribution). started with:
Tcp port: 0  Unix socket: (null)
Time                 Id Command    Argument
# Time: 230731 23:01:45
# User@Host: root[root] @ localhost []  Id:     1
# Query_time: 25.378867  Lock_time: 0.001519 Rows_sent: 0  Rows_examined: 0
SET timestamp=1690815705;
load data infile '/tmp/big' into table db1.t1;
# Time: 230731 23:07:21
# User@Host: root[root] @ localhost []  Id:     1
# Query_time: 39.792536  Lock_time: 0.000098 Rows_sent: 8  Rows_examined: 159999992
SET timestamp=1690816041;
select * from db1.t1 where id = '13999';

Download the Logstash package

Server: 192.168.75.31

wget https://artifacts.elastic.co/downloads/logstash/logstash-7.16.3-linux-x86_64.tar.gz

[appview@elk01 download]$ wget https://artifacts.elastic.co/downloads/logstash/logstash-7.16.3-linux-x86_64.tar.gz
[appview@elk01 download]$ tar xf logstash-7.16.3-linux-x86_64.tar.gz -C ../app/
[appview@elk01 download]$ ll ../app/
总用量 0
drwxrwxr-x. 13 appview appview 266 82 07:14 logstash-7.16.3
[appview@elk01 download]$ cd ../app/

(screenshot: assets/image-20230801064726652.png)

Point Filebeat at Logstash

192.168.75.28 (Filebeat) pushes the MySQL slow log to 192.168.75.31 (Logstash).

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /application/mysql-5.6.49/data/web-slow.log
    
  multiline.pattern: "^# User@Host:"
  multiline.negate: true
  multiline.match: after
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.kibana:
output.logstash:
  hosts: ["192.168.75.31:5044"]

Install JDK 8

https://www.oracle.com/java/technologies/downloads/#license-lightbox

export JAVA_HOME=/home/appview/app/jdk1.8.0_381/   # JDK install directory
export JRE_HOME=${JAVA_HOME}/jre                    # JRE install directory
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=$PATH:${JAVA_HOME}/bin:${JRE_HOME}/bin

Create a user

$ ln -s logstash-7.16.3/ logstash
$ sudo groupadd logstash
$ sudo useradd -r -g logstash -d ~/app/logstash -s /sbin/nologin -c "logstash" logstash
$ sudo chown -R logstash:logstash logstash
$ ll
总用量 4
drwxrwxr-x.  8 appview  appview   115 82 21:41 jdk-11.0.1
drwxrwxr-x.  8 appview  appview  4096 82 21:15 jdk1.8.0_381
drwxrwxr-x.  8 appview  appview   115 82 21:35 jdk-9.0.1
lrwxrwxrwx.  1 logstash logstash   16 82 21:25 logstash -> logstash-7.16.3/
drwxrwxr-x. 14 appview  appview   278 82 07:17 logstash-7.16.3

Test Logstash

Logstash can be used straight from the unpacked tarball; no installation step is required.

Run a startup smoke test with ./logstash -e 'input { stdin { } } output { stdout {} }':

[appview@elk01 jdk1.8.0_381]$ ../logstash/bin/logstash  -e 'input { stdin { } } output { stdout {} }'
Using JAVA_HOME defined java: /home/appview/app/jdk1.8.0_381/
WARNING: Using JAVA_HOME while Logstash distribution comes with a bundled JDK.
DEPRECATION: The use of JAVA_HOME is now deprecated and will be removed starting from 8.0. Please configure LS_JAVA_HOME instead.
Sending Logstash logs to /home/appview/app/logstash/logs which is now configured via log4j2.properties
[2023-08-02T21:53:03,661][INFO ][logstash.runner          ] Log4j configuration path used is: /home/appview/app/logstash/config/log4j2.properties
[2023-08-02T21:53:03,679][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.16.3", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 Java HotSpot(TM) 64-Bit Server VM 25.381-b09 on 1.8.0_381-b09 +indy +jit [linux-x86_64]"}
[2023-08-02T21:53:04,044][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2023-08-02T21:53:05,983][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-08-02T21:53:06,563][INFO ][org.reflections.Reflections] Reflections took 70 ms to scan 1 urls, producing 119 keys and 417 values 
[2023-08-02T21:53:08,256][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["config string"], :thread=>"#<Thread:0x68a19606 run>"}
[2023-08-02T21:53:08,939][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.68}
[2023-08-02T21:53:09,027][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2023-08-02T21:53:09,099][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

{
          "host" => "elk01",
       "message" => "",
    "@timestamp" => 2023-08-02T13:53:28.436Z,
      "@version" => "1"
}

{
          "host" => "elk01",
       "message" => "",
    "@timestamp" => 2023-08-02T13:53:29.490Z,
      "@version" => "1"
}

{
          "host" => "elk01",
       "message" => "",
    "@timestamp" => 2023-08-02T13:53:30.619Z,
      "@version" => "1"
}
^C[2023-08-02T21:53:31,832][WARN ][logstash.runner          ] SIGINT received. Shutting down.
[2023-08-02T21:53:32,007][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2023-08-02T21:53:32,954][INFO ][logstash.runner          ] Logstash shut down.

Edit the configuration

stdout {
  codec => rubydebug
}

For testing, this prints the processed events to the screen.

Error log: /home/appview/app/logstash/logs/logstash-plain.log

[appview@elk01 config]$ mkdir conf.d
[appview@elk01 config]$ cp logstash-sample.conf conf.d/
[appview@elk01 config]$ cd conf.d/
[appview@elk01 conf.d]$ cp logstash-sample.conf logstash_to_elasticsearch.conf
[appview@elk01 conf.d]$ egrep -v '#|^&' logstash_to_elasticsearch.conf 

input {
  beats {
    port => 5044
  }
}
 
output {
  elasticsearch {
    hosts => ["http://192.168.75.32:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
  
  stdout {
    codec => rubydebug   # lets you see events in real time when Logstash runs in the foreground
  } 
}
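The pipeline file can be syntax-checked before starting it, using Logstash's --config.test_and_exit flag (also available as -t):

/home/appview/app/logstash/bin/logstash -f /home/appview/app/logstash/config/conf.d/logstash_to_elasticsearch.conf --config.test_and_exit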


[appview@elk01 conf.d]$ # Start Logstash
[appview@elk01 conf.d]$ /home/appview/app/logstash/bin/logstash -f /home/appview/app/logstash/config/conf.d/logstash_to_elasticsearch.conf


[2023-08-02T22:36:17,581][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}

Test Logstash collecting the MySQL slow log

Filebeat multiline settings used to merge each slow-log entry into a single message:
  # Regex: lines that do not start with `# User@Host:` are merged into the previous event
  multiline.pattern: "^# User@Host:"
  # true or false (default false): with true, lines that do NOT match the pattern are merged; with false, matching lines are merged
  multiline.negate: true
  # after or before: append the merged lines after or before the matching line
  multiline.match: after

For comparison: behavior without the multiline settings

MySQL slow log on 192.168.75.28

# Run a slow SQL query and inspect the recorded slow log
[root@c1 filebeat]# mysql -uroot -pedon111 -e "select * from db1.t1 where id = '99';"
[root@c1 ~]# tailf /application/mysql-5.6.49/data/web-slow.log
# Time: 230803 23:29:14
# User@Host: root[root] @ localhost []  Id:     7
# Query_time: 41.498820  Lock_time: 0.000130 Rows_sent: 8  Rows_examined: 159999992
SET timestamp=1691076554;
select * from db1.t1 where id = '99';

Logstash on 192.168.75.31 receiving the MySQL slow log

Without multiline merging, every line of the slow-log entry arrives as a separate event with its own message field:

{
         "agent" => {
                  "id" => "30cf4678-f38c-420d-9365-6e3d12446317",
        "ephemeral_id" => "618cbe62-38ea-4d58-b459-1295158bee6d",
             "version" => "7.16.3",
                "type" => "filebeat",
                "name" => "c1",
            "hostname" => "c1"
    },
         "input" => {
        "type" => "log"
    },
    "@timestamp" => 2023-08-03T15:29:17.123Z,
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
      "@version" => "1",
          "host" => {
        "name" => "c1"
    },
           "ecs" => {
        "version" => "1.12.0"
    },
           "log" => {
          "file" => {
            "path" => "/application/mysql-5.6.49/data/web-slow.log"
        },
        "offset" => 2096
    },
       "message" => "SET timestamp=1691076554;"
}
{
           "log" => {
          "file" => {
            "path" => "/application/mysql-5.6.49/data/web-slow.log"
        },
        "offset" => 2012
    },
         "input" => {
        "type" => "log"
    },
    "@timestamp" => 2023-08-03T15:29:17.123Z,
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
      "@version" => "1",
           "ecs" => {
        "version" => "1.12.0"
    },
          "host" => {
        "name" => "c1"
    },
       "message" => "# Query_time: 41.498820  Lock_time: 0.000130 Rows_sent: 8  Rows_examined: 159999992",
         "agent" => {
                  "id" => "30cf4678-f38c-420d-9365-6e3d12446317",
        "ephemeral_id" => "618cbe62-38ea-4d58-b459-1295158bee6d",
             "version" => "7.16.3",
                "type" => "filebeat",
                "name" => "c1",
            "hostname" => "c1"
    }
}
{
         "agent" => {
                  "id" => "30cf4678-f38c-420d-9365-6e3d12446317",
        "ephemeral_id" => "618cbe62-38ea-4d58-b459-1295158bee6d",
             "version" => "7.16.3",
                "type" => "filebeat",
                "name" => "c1",
            "hostname" => "c1"
    },
         "input" => {
        "type" => "log"
    },
    "@timestamp" => 2023-08-03T15:29:17.123Z,
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
      "@version" => "1",
           "ecs" => {
        "version" => "1.12.0"
    },
          "host" => {
        "name" => "c1"
    },
           "log" => {
          "file" => {
            "path" => "/application/mysql-5.6.49/data/web-slow.log"
        },
        "offset" => 1962
    },
       "message" => "# User@Host: root[root] @ localhost []  Id:     7"
}
{
         "agent" => {
                  "id" => "30cf4678-f38c-420d-9365-6e3d12446317",
        "ephemeral_id" => "618cbe62-38ea-4d58-b459-1295158bee6d",
             "version" => "7.16.3",
                "type" => "filebeat",
                "name" => "c1",
            "hostname" => "c1"
    },
         "input" => {
        "type" => "log"
    },
    "@timestamp" => 2023-08-03T15:29:17.123Z,
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
      "@version" => "1",
           "ecs" => {
        "version" => "1.12.0"
    },
          "host" => {
        "name" => "c1"
    },
           "log" => {
          "file" => {
            "path" => "/application/mysql-5.6.49/data/web-slow.log"
        },
        "offset" => 1938
    },
       "message" => "# Time: 230803 23:29:14"
}
{
           "log" => {
          "file" => {
            "path" => "/application/mysql-5.6.49/data/web-slow.log"
        },
        "offset" => 2122
    },
         "input" => {
        "type" => "log"
    },
    "@timestamp" => 2023-08-03T15:29:17.123Z,
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
      "@version" => "1",
           "ecs" => {
        "version" => "1.12.0"
    },
          "host" => {
        "name" => "c1"
    },
         "agent" => {
                  "id" => "30cf4678-f38c-420d-9365-6e3d12446317",
        "ephemeral_id" => "618cbe62-38ea-4d58-b459-1295158bee6d",
             "version" => "7.16.3",
                "type" => "filebeat",
                "name" => "c1",
            "hostname" => "c1"
    },
       "message" => "select * from db1.t1 where id = '99';"
}

(screenshot: assets/image-20230803230012540.png)

Install Filebeat on the MySQL server to collect the slow log

ip: 192.168.75.28

  1. Install the JDK
  2. Install Filebeat
  3. MySQL slow log: /application/mysql-5.6.49/data/web-slow.log
# Enable slow query logging
[root@c1 ~]# egrep -v "#|^$" /application/mysql/my.cnf
[mysqld]
slow_query_log = 1
slow_query_log_file = "/application/mysql/data/web-slow.log"
log_queries_not_using_indexes = 1
long_query_time = 2
secure_file_priv = ''
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES 

[root@c1 ~]# cd app/
[root@c1 app]# ll
总用量 0
lrwxrwxrwx 1 root root  29 82 23:03 filebeat -> filebeat-7.16.3-linux-x86_64/
drwxr-xr-x 6 root root 248 82 23:36 filebeat-7.16.3-linux-x86_64
lrwxrwxrwx 1 root root  13 82 23:03 jdk -> jdk1.8.0_381/
drwxr-xr-x 8 root root 294 82 23:02 jdk1.8.0_381

# Set the Java environment variables
[root@c1 app]# cat /etc/profile| grep JAVA
export JAVA_HOME=/root/app/jdk/
export JRE_HOME=${JAVA_HOME}/jre                   
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=$PATH:${JAVA_HOME}/bin:${JRE_HOME}/bin

# Configure Filebeat to read the MySQL slow log and ship it to the Logstash server
[root@c1 app]# egrep -v '#|^$' filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /application/mysql-5.6.49/data/web-slow.log
    
  multiline.pattern: "^# User@Host:"
  multiline.negate: true
  multiline.match: after
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.kibana:
output.logstash:
  hosts: ["192.168.75.31:5044"]


# Start Filebeat
[root@c1 filebeat]# ./filebeat -e -c filebeat.yml

# Run a SQL query and watch the Logstash output
[root@c1 data]# mysql -uroot -pedon111 -e "select * from db1.t1 where id = '99';"
Warning: Using a password on the command line interface can be insecure.
+----+
| id |
+----+
| 99 |
| 99 |
| 99 |
| 99 |
| 99 |
| 99 |
| 99 |
| 99 |
+----+

Start Logstash and test

192.168.75.31

# Any query taking longer than two seconds is recorded in the slow log and shipped through the pipeline
[appview@elk01 conf.d]$ /home/appview/app/logstash/bin/logstash -f /home/appview/app/logstash/config/conf.d/logstash_to_elasticsearch.conf 

{
       "message" => "# Time: 230802 23:43:41",
         "input" => {
        "type" => "log"
    },
    "@timestamp" => 2023-08-02T15:43:46.760Z,
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
      "@version" => "1",
           "ecs" => {
        "version" => "1.12.0"
    },
          "host" => {
        "name" => "c1"
    },
         "agent" => {
                  "id" => "30cf4678-f38c-420d-9365-6e3d12446317",
        "ephemeral_id" => "9ce94fb6-f0f7-4f63-ba28-a096197ef7b6",
             "version" => "7.16.3",
                "type" => "filebeat",
                "name" => "c1",
            "hostname" => "c1"
    },
           "log" => {
          "file" => {
            "path" => "/application/mysql-5.6.49/data/web-slow.log"
        },
        "offset" => 1494
    }
}
{
       "message" => "# User@Host: root[root] @ localhost []  Id:     5\n# Query_time: 41.555481  Lock_time: 0.000075 Rows_sent: 8  Rows_examined: 159999992\nSET timestamp=1690991021;\nselect * from db1.t1 where id = '99';",
         "input" => {
        "type" => "log"
    },
    "@timestamp" => 2023-08-02T15:43:46.760Z,
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
      "@version" => "1",
           "ecs" => {
        "version" => "1.12.0"
    },
          "host" => {
        "name" => "c1"
    },
         "agent" => {
                  "id" => "30cf4678-f38c-420d-9365-6e3d12446317",
        "ephemeral_id" => "9ce94fb6-f0f7-4f63-ba28-a096197ef7b6",
             "version" => "7.16.3",
                "type" => "filebeat",
                "name" => "c1",
            "hostname" => "c1"
    },
           "log" => {
          "file" => {
            "path" => "/application/mysql-5.6.49/data/web-slow.log"
        },
        "offset" => 1518,
         "flags" => [
            [0] "multiline"
        ]
    }
}

1.12 Formatting logs with Grok

Approach:

How Logstash should process the data:

  1. "message" => "# Time: 230802 23:43:41" --> drop this line entirely
  2. "message" => "# User@Host: root[root] @ localhost [] Id: 5\n# Query_time: 41.555481 Lock_time: 0.000075 Rows_sent: 8 Rows_examined: 159999992\nSET timestamp=1690991021;\nselect * from db1.t1 where id = '99';" --> parse this entry into JSON fields
# 1. Modify Filebeat on .28
# (Note: the multiline.pattern: "^# User@Host:" line does not appear in the egrep output below because the -v '#' filter strips it; it is still set in filebeat.yml.)
[root@c1 filebeat]# egrep -v "#|^$" filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /application/mysql-5.6.49/data/web-slow.log
    
  multiline.negate: true
  multiline.match: after
    
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.kibana:
output.logstash:
  hosts: ["192.168.75.31:5044"]
  
# 2. Restart Filebeat
[root@c1 filebeat]# ./filebeat -e -c filebeat.yml

Configure the Logstash grok plugin

grok filter: configures Logstash to parse the raw message into structured JSON fields.

filter {
   # Format the message into structured JSON fields
    grok {
           match => [ "message", "(?m)^# User@Host: %{USER:query_user}\[[^\]]+\] @ (?:(?<query_host>\S*) )?\[(?:%{IP:query_ip})?\]\s+Id:\s+%{NUMBER:id:int}\s# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)" ]
       }
# Add a "drop" tag to the time header lines (e.g. "# Time: 181218  9:17:42")
    grok {
        match => { "message" => "# Time: " }
        add_tag => [ "drop" ]
        tag_on_failure => []
    }
# Drop events tagged "drop", i.e. remove the "# Time: 181218  9:17:42" lines from the slow log
    if "drop" in [tags] {
        drop {}
    }
 
# Time conversion
# (Note: this matches the field mysql.slowlog.timestamp, which the grok above does not produce;
#  to use the slow-log time as @timestamp, match on the "timestamp" field instead.)
    date {
        match => ["mysql.slowlog.timestamp", "UNIX", "YYYY-MM-dd HH:mm:ss"]
        target => "@timestamp"
        timezone => "Asia/Shanghai"
    }
 
    ruby {
        code => "event.set('[@metadata][today]', Time.at(event.get('@timestamp').to_i).localtime.strftime('%Y.%m.%d'))"
    }
 
# Remove the raw message field
    mutate {
        remove_field => [ "message" ]
    }
}

Start and test grok

# 3. Update the Logstash config file so grok formats the logs

[appview@elk01 conf.d]$ pwd
/home/appview/app/logstash/config/conf.d

[appview@elk01 conf.d]$ cat logstash_to_elasticsearch.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
 
input {
  beats {
    port => 5044
  }
}

filter {
   # Format the message into structured JSON fields
    grok {
           match => [ "message", "(?m)^# User@Host: %{USER:query_user}\[[^\]]+\] @ (?:(?<query_host>\S*) )?\[(?:%{IP:query_ip})?\]\s+Id:\s+%{NUMBER:id:int}\s# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)" ]
       }
# Add a "drop" tag to the time header lines (e.g. "# Time: 181218  9:17:42")
    grok {
        match => { "message" => "# Time: " }
        add_tag => [ "drop" ]
        tag_on_failure => []
    }
# Drop events tagged "drop", i.e. remove the "# Time: ..." lines from the slow log
    if "drop" in [tags] {
        drop {}
    }
 
# Time conversion
    date {
        match => ["mysql.slowlog.timestamp", "UNIX", "YYYY-MM-dd HH:mm:ss"]
        target => "@timestamp"
        timezone => "Asia/Shanghai"
    }
 
    ruby {
        code => "event.set('[@metadata][today]', Time.at(event.get('@timestamp').to_i).localtime.strftime('%Y.%m.%d'))"
    }
 
# Remove the raw message field
    mutate {
        remove_field => [ "message" ]
    }
}

# Send the events to Elasticsearch
output {
  elasticsearch {
    hosts => ["http://192.168.75.32:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
  # Print events to the console while Logstash runs in the foreground
  stdout {
    codec => rubydebug   
  } 
}

### Sample MySQL slow-log entry
# Time: 230806 20:38:34
# User@Host: root[root] @ localhost []  Id:    49
# Query_time: 4.478010  Lock_time: 0.000057 Rows_sent: 1  Rows_examined: 19999999
SET timestamp=1691325514;
select * from db1.t1 where id = '8667';


# 4. Start the service and check the grok-formatted output
[appview@elk01 conf.d]$ /home/appview/app/logstash/bin/logstash -f /home/appview/app/logstash/config/conf.d/logstash_to_elasticsearch.conf 

{
             "host" => {
        "name" => "c1"
    },
              "ecs" => {
        "version" => "1.12.0"
    },
       "query_user" => "root",
        "rows_sent" => 1,
         "@version" => "1",
       "query_host" => "localhost",
              "log" => {
         "flags" => [
            [0] "multiline"
        ],
          "file" => {
            "path" => "/application/mysql-5.6.49/data/web-slow.log"
        },
        "offset" => 12144
    },
       "@timestamp" => 2023-08-06T13:30:46.116Z,
       "query_time" => 4.550142,
        "lock_time" => 4.2e-05,
        "timestamp" => "1691328641",
            "input" => {
        "type" => "log"
    },
             "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
               "id" => 54,
           "action" => "select",
            "query" => "select * from db1.t1 where id = '688663';",
    "rows_examined" => 19999999,
            "agent" => {
                "name" => "c1",
        "ephemeral_id" => "2834ba7f-349e-4221-b38c-7b964f2441f3",
                  "id" => "30cf4678-f38c-420d-9365-6e3d12446317",
            "hostname" => "c1",
                "type" => "filebeat",
             "version" => "7.16.3"
    }
}

(screenshot: assets/image-20230803234144564.png)

Finalize the Logstash configuration

[appview@elk01 conf.d]$ cat logstash_to_elasticsearch.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
 
input {
  beats {
    port => 5044
  }
}



filter {
 
    # Format the message into structured JSON fields
    grok {
           match => [ "message", "(?m)^# User@Host: %{USER:query_user}\[[^\]]+\] @ (?:(?<query_host>\S*) )?\[(?:%{IP:query_ip})?\]\s+Id:\s+%{NUMBER:id:int}\s# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)" ]
       }
    # Add a "drop" tag to the time header lines (e.g. "# Time: 181218  9:17:42")
    grok {
        match => { "message" => "# Time: " }
        add_tag => [ "drop" ]
        tag_on_failure => []
    }
    # Drop events tagged "drop", i.e. remove the "# Time: ..." lines
    if "drop" in [tags] {
        drop {}
    }
 
    # Time conversion
    date {
        match => ["mysql.slowlog.timestamp", "UNIX", "YYYY-MM-dd HH:mm:ss"]
        target => "@timestamp"
        timezone => "Asia/Shanghai"
    }
 
    ruby {
        code => "event.set('[@metadata][today]', Time.at(event.get('@timestamp').to_i).localtime.strftime('%Y.%m.%d'))"
    }
 
    # Remove the raw message field
    mutate {
        remove_field => [ "message" ]
    }
}


 
output {
  elasticsearch {
    hosts => ["http://192.168.75.32:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
  
#  stdout {
     # Uncomment to print events to the console when running in the foreground
#    codec => rubydebug   
#  }
 
}

Starting the services with nohup

Logstash start command
[appview@elk01 conf.d]$ nohup /home/appview/app/logstash/bin/logstash -f /home/appview/app/logstash/config/conf.d/logstash_to_elasticsearch.conf > ../../logs/out.log &
[1] 12352
[appview@elk01 conf.d]$ nohup: 忽略输入重定向错误到标准输出端

[appview@elk01 conf.d]$ tailf ../../logs/out.log 
Filebeat start command
[root@c1 filebeat]# nohup /root/app/filebeat/filebeat -e -c filebeat.yml > logs/out.log &
[1] 8306
[root@c1 filebeat]# nohup: 忽略输入重定向错误到标准输出端

[root@c1 filebeat]# tailf logs/out.log 
Elasticsearch start command
[appview@elk02 bin]$ nohup ./elasticsearch > ../logs/elasticsearch.log &
[1] 71331
[appview@elk02 bin]$ nohup: 忽略输入重定向错误到标准输出端

[appview@elk02 bin]$ tailf ../logs/elasticsearch.log 

Kibana start command

[appview@elk03 bin]$ nohup ./kibana > ../logs/kibana.log &
[1] 12673
[appview@elk03 bin]$ nohup: 忽略输入重定向错误到标准输出端

[appview@elk03 bin]$ tailf ../logs/kibana.log

[appview@elk03 logs]$ netstat -tunlp |grep 5601
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      12673/./../node/bin 

[appview@elk03 logs]$ ps -ef | grep 12673
appview   12673  12578 21 23:25 pts/0    00:00:27 ./../node/bin/node ./../src/cli/dist

[appview@elk03 logs]$ lsof -i :5601
COMMAND   PID    USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
node    12673 appview   49u  IPv4 1074326      0t0  TCP *:esmagent (LISTEN)
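As an alternative to nohup, each tarball-installed service can be wrapped in a small systemd unit so it starts at boot and restarts on failure. A minimal sketch for Logstash, using the paths and user from this setup (not part of the original deployment; adjust to your environment):

# Create /etc/systemd/system/logstash.service
sudo tee /etc/systemd/system/logstash.service <<'EOF'
[Unit]
Description=Logstash (tarball install)
After=network.target

[Service]
User=appview
Group=appview
ExecStart=/home/appview/app/logstash/bin/logstash -f /home/appview/app/logstash/config/conf.d/logstash_to_elasticsearch.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now logstash

The same pattern works for filebeat, elasticsearch, and kibana by swapping in their ExecStart paths.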

1.13 Logstash summary

Recap of formatting logs with Logstash's grok filter:

  1. Install Logstash.
  2. Problem: each entry in MySQL's web-slow.log spans multiple lines, so it arrives as many separate message events. Fix: enable multiline merging in filebeat.yml so each entry becomes a single event.
  3. Problem: Logstash still receives two kinds of message events (the "# Time:" header line and the merged query block).
  4. Fix: use the grok filter to parse the web-slow.log entries into JSON fields (and drop the "# Time:" lines).
# Sample event:
{
            "agent" => {
                "name" => "c1",
                "type" => "filebeat",
                  "id" => "30cf4678-f38c-420d-9365-6e3d12446317",
            "hostname" => "c1",
             "version" => "7.16.3",
        "ephemeral_id" => "f01f5112-3da8-4954-9663-8870712e300a"
    },
       "query_user" => "root",
       "query_host" => "localhost",
    "rows_examined" => 19999999,
           "action" => "select",
            "input" => {
        "type" => "log"
    },
         "@version" => "1",
       "query_time" => 4.582767,
            "query" => "select * from db1.t1 where id = '8';",
       "@timestamp" => 2023-08-06T22:42:51.387Z,
        "lock_time" => 7.1e-05,
             "host" => {
        "name" => "c1"
    },
        "rows_sent" => 1,
               "id" => 66,
              "log" => {
        "offset" => 14807,
          "file" => {
            "path" => "/application/mysql-5.6.49/data/web-slow.log"
        },
         "flags" => [
            [0] "multiline"
        ]
    },
              "ecs" => {
        "version" => "1.12.0"
    },
             "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
        "timestamp" => "1691361763"
}

Filebeat error

2023-08-07T06:42:52.389+0800    ERROR   [logstash]      logstash/async.go:280   Failed to publish events caused by: write tcp 192.168.75.28:59912->192.168.75.31:5044: write: connection reset by peer
2023-08-07T06:42:53.599+0800    ERROR   [publisher_pipeline_output]     pipeline/output.go:180  failed to publish events: write tcp 192.168.75.28:59912->192.168.75.31:5044: write: connection reset by peer

This "connection reset by peer" error typically appears when Logstash is restarted while Filebeat still holds an open connection; Filebeat retries automatically and recovers once Logstash is listening on 5044 again.

1.14 Displaying in Kibana

Delete the old Elasticsearch filebeat index data so the indices are regenerated:

[appview@elk02 bin]$ curl -XDELETE 127.0.0.1:9200/filebeat*
 
[appview@elk02 bin]$ curl 127.0.0.1:9200/_cat/indices
green  open .kibana_7.16.3_001              6a3_3bUTQga8KJcT9-HaVA 1 0 899    82  2.6mb  2.6mb
yellow open test-nginx-access-2023.08.01    pOwQZrlxSmSLOZ_LFbIbGQ 1 1  28     0   39kb   39kb
green  open .geoip_databases                EcLwae0JQuCcK0Imp9t6KQ 1 0  42    42 39.5mb 39.5mb
green  open .apm-custom-link                _1uKD61fTFC0sGZ5ROLFKw 1 0   0     0   226b   226b
green  open .apm-agent-configuration        VVC_OlhgTIqWXnOuFMcWhg 1 0   0     0   226b   226b
yellow open filebeat-7.16.3-2023.08.06      EE8CierNQ5utfC2Or5J2kQ 1 1   5     0 64.1kb 64.1kb
green  open .kibana_task_manager_7.16.3_001 DiWAOc7aSl-saFKB7lTcuQ 1 0  17 24678   15mb   15mb
green  open .async-search                   gzYI9AfIR2WVkjC3E7jedA 1 0   2     0  3.8kb  3.8kb
yellow open test-httpd-access-2023.08.01    a7Mp4X42RCC8rIJJY6q2hA 1 1 196     0 83.4kb 83.4kb
green  open .tasks                          qKLTFGJOQBGJolrS1IXI1A 1 0   6     0 41.1kb 41.1kb

(screenshot: assets/image-20230807070545983.png)

1.15 Architecture summary

MySQL logs --> collected by Filebeat --> Logstash (relay) receives the logs and converts them to JSON --> Elasticsearch stores and indexes the logs --> Kibana displays and analyzes them

1.16 Message middleware: Kafka

(screenshot: assets/image-20230807072446121.png)
