ELK in Practice, Part 1

1. Summary of the Main Functions of the ELK Components

elasticsearch: responsible for data storage and retrieval.
Elasticsearch in brief:
1. Elasticsearch is written in Java and built on the Apache Lucene full-text search library (https://lucene.apache.org).
2. It is a highly scalable open-source full-text search and analytics engine providing near-real-time (NRT) full-text retrieval.
3. It supports distributed operation for cluster high availability.
4. It is high-performance and handles large-scale business data.
5. Data is stored as JSON documents and read and written through an API (see the sketch after this list).
6. It shards data across hosts and replicates shards, giving cross-host data high availability.
logstash: a data collection and processing component with (near) real-time transport capability. Through plugins it implements log collection, filtering, processing, and output for a variety of scenarios, and it can parse plain-text, JSON, and other log formats; processed events are sent to the elasticsearch cluster for storage.
kibana: reads data from ES for visualization and data management. Kibana provides a web UI for viewing elasticsearch data: it queries data through the elasticsearch API and renders it as front-end visualizations, and for suitably formatted data it can generate tables, bar charts, pie charts, and more.
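The JSON-over-API model from point 5 looks like this in practice (a sketch assuming the secured cluster and the magedu superuser account that section 2 sets up; demo-index is a throwaway name):

# write a JSON document over the REST API
curl -u magedu:123456 -H 'Content-Type: application/json' \
  -X POST 'http://172.18.10.170:9200/demo-index/_doc' -d '{"msg": "hello elk"}'
# full-text search for it
curl -u magedu:123456 'http://172.18.10.170:9200/demo-index/_search?q=msg:hello&pretty'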

2. Deploying an ES Cluster with X-Pack Authentication

# Configure hostname resolution
root@es-node1:~# vim /etc/hosts
172.18.10.170 es-node1
172.18.10.171 es-node2
172.18.10.172 es-node3
# Kernel parameter tuning
root@es-node1:~# vim /etc/sysctl.conf
vm.max_map_count=262144
# Resource limit tuning
root@es-node1:~# vim /etc/security/limits.conf
root soft core unlimited
root hard core unlimited
root soft nproc 1000000
root hard nproc 1000000
root soft nofile 1000000
root hard nofile 1000000
root soft memlock 32000
root hard memlock 32000 
root soft msgqueue 8192000
root hard msgqueue 8192000
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000   
* soft msgqueue 8192000
* hard msgqueue 8192000
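The sysctl change can be loaded without waiting for the reboot below, and the new limits can be checked after logging in again (a quick sanity check, not strictly required):

root@es-node1:~# sysctl -p          # loads vm.max_map_count=262144
root@es-node1:~# ulimit -n -u       # after re-login, expect 1000000 for open files and processes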

# Create an unprivileged user to run elasticsearch
root@es-node1:~# groupadd -g 2888 elasticsearch
root@es-node1:~# useradd -u 2888 -g 2888 -r -m -s /bin/bash elasticsearch
root@es-node1:~# passwd elasticsearch

# Deploy the elasticsearch cluster
root@es-node1:~# mkdir /apps
root@es-node1:~# cd /apps/
root@es-node1:/apps# tar -xvf elasticsearch-8.5.1-linux-x86_64.tar.gz 
root@es-node1:/apps# ln -sv /apps/elasticsearch-8.5.1 /apps/elasticsearch
root@es-node1:/apps# reboot

# X-Pack certificate-signing environment
root@es-node1:~# chown -R elasticsearch.elasticsearch /apps/elasticsearch*
root@es-node1:~# su - elasticsearch
elasticsearch@es-node1:~$ cd /apps/elasticsearch
elasticsearch@es-node1:/apps/elasticsearch$ vim instances.yml
instances:
  - name: "es1.example.com"
    ip:
      - "172.18.10.170"
  - name: "es2.example.com"
    ip:
      - "172.18.10.171"
  - name: "es3.example.com"
    ip:
      - "172.18.10.172"
# Generate the CA (certificate and private key); the default file name is elastic-stack-ca.p12
elasticsearch@es-node1:/apps/elasticsearch$ bin/elasticsearch-certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]:  # accept the default name, just press Enter
Enter password for elastic-stack-ca.p12 : 	# no password

# Generate a node certificate signed by the CA; the default name is elastic-certificates.p12
elasticsearch@es-node1:/apps/elasticsearch$ bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 # no password; press Enter to accept the defaults

# Issue certificates for the elasticsearch cluster hosts
elasticsearch@es-node1:/apps/elasticsearch$ bin/elasticsearch-certutil cert --silent --in instances.yml --out certs.zip --pass magedu123 --ca elastic-stack-ca.p12
elasticsearch@es-node1:/apps/elasticsearch$ ll
total 2256
drwxr-xr-x  9 elasticsearch elasticsearch    4096 Nov  7 22:04 ./
drwxr-xr-x  3 root          root             4096 Nov  7 21:46 ../
drwxr-xr-x  2 elasticsearch elasticsearch    4096 Nov 10  2022 bin/
-rw-------  1 elasticsearch elasticsearch   11581 Nov  7 22:04 certs.zip
drwxr-xr-x  3 elasticsearch elasticsearch    4096 Nov  7 21:46 config/
-rw-------  1 elasticsearch elasticsearch    3596 Nov  7 22:01 elastic-certificates.p12
lrwxrwxrwx  1 elasticsearch elasticsearch      25 Nov  7 21:46 elasticsearch-8.5.1 -> /apps/elasticsearch-8.5.1/
-rw-------  1 elasticsearch elasticsearch    2672 Nov  7 21:59 elastic-stack-ca.p12
-rw-rw-r--  1 elasticsearch elasticsearch     191 Nov  7 21:56 instances.yml
drwxr-xr-x  8 elasticsearch elasticsearch    4096 Nov 10  2022 jdk/
drwxr-xr-x  5 elasticsearch elasticsearch    4096 Nov 10  2022 lib/
-rw-r--r--  1 elasticsearch elasticsearch    3860 Nov 10  2022 LICENSE.txt
drwxr-xr-x  2 elasticsearch elasticsearch    4096 Nov 10  2022 logs/
drwxr-xr-x 67 elasticsearch elasticsearch    4096 Nov 10  2022 modules/
-rw-r--r--  1 elasticsearch elasticsearch 2235851 Nov 10  2022 NOTICE.txt
drwxr-xr-x  2 elasticsearch elasticsearch    4096 Nov 10  2022 plugins/
-rw-r--r--  1 elasticsearch elasticsearch    8107 Nov 10  2022 README.asciidoc

# Distribute the certificates
# Local node (es-node1 certificate)
elasticsearch@es-node1:/apps/elasticsearch$ unzip certs.zip
elasticsearch@es-node1:/apps/elasticsearch$ mkdir config/certs
elasticsearch@es-node1:/apps/elasticsearch$ cp -rp es1.example.com/es1.example.com.p12 config/certs/

# (es-node2 certificate)
root@es-node2:~# su - elasticsearch 
elasticsearch@es-node2:~$ cd /apps/elasticsearch
elasticsearch@es-node2:/apps/elasticsearch$ mkdir config/certs
elasticsearch@es-node1:/apps/elasticsearch$ scp -rp es2.example.com elasticsearch@172.18.10.171:/apps/elasticsearch/config/certs/

# (es-node3 certificate)
root@es-node3:~# su - elasticsearch 
elasticsearch@es-node3:~$ cd /apps/elasticsearch
elasticsearch@es-node3:/apps/elasticsearch$ mkdir config/certs
elasticsearch@es-node1:/apps/elasticsearch$ scp -rp es3.example.com elasticsearch@172.18.10.172:/apps/elasticsearch/config/certs/

# Create the keystore entries (the keystore holds the certificate password, magedu123)
elasticsearch@es-node1:/apps/elasticsearch$ ./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
Enter value for xpack.security.transport.ssl.keystore.secure_password: # password: magedu123
elasticsearch@es-node1:/apps/elasticsearch$ ./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
Enter value for xpack.security.transport.ssl.truststore.secure_password: # password: magedu123
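Before copying the keystore out, you can confirm that both entries were stored; elasticsearch-keystore list should print something like the following (keystore.seed is created automatically):

elasticsearch@es-node1:/apps/elasticsearch$ ./bin/elasticsearch-keystore list
keystore.seed
xpack.security.transport.ssl.keystore.secure_password
xpack.security.transport.ssl.truststore.secure_password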
# Distribute the keystore file to the other nodes
elasticsearch@es-node1:/apps/elasticsearch/config$ scp /apps/elasticsearch/config/elasticsearch.keystore 172.18.10.171:/apps/elasticsearch/config/elasticsearch.keystore
elasticsearch@es-node1:/apps/elasticsearch/config$ scp /apps/elasticsearch/config/elasticsearch.keystore 172.18.10.172:/apps/elasticsearch/config/elasticsearch.keystore

# Distribute node2's certificate
elasticsearch@es-node1:/apps/elasticsearch$ scp -rp es2.example.com/es2.example.com.p12 172.18.10.171:/apps/elasticsearch/config/certs/

# Distribute node3's certificate
elasticsearch@es-node1:/apps/elasticsearch$ scp -rp es3.example.com/es3.example.com.p12 172.18.10.172:/apps/elasticsearch/config/certs/

# Edit the configuration files
# node1
elasticsearch@es-node1:/apps/elasticsearch$ grep -Ev "^#|^$" config/elasticsearch.yml 
cluster.name: magedu-es-cluster
node.name: node1
path.data: /data/esdata
path.logs: /data/eslogs
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["172.18.10.170", "172.18.10.171", "172.18.10.172"]
cluster.initial_master_nodes: ["172.18.10.170", "172.18.10.171", "172.18.10.172"]
action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /apps/elasticsearch/config/certs/es1.example.com.p12
xpack.security.transport.ssl.truststore.path: /apps/elasticsearch/config/certs/es1.example.com.p12
root@es-node1:~# mkdir -p /data/eslogs
root@es-node1:~# mkdir -p /data/esdata
root@es-node1:/# chown -R elasticsearch.elasticsearch /data/*

# node2 configuration file
elasticsearch@es-node2:/apps/elasticsearch$ grep -Ev "^#|^$" config/elasticsearch.yml 
cluster.name: magedu-es-cluster
node.name: node2
path.data: /data/esdata
path.logs: /data/eslogs
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["172.18.10.170", "172.18.10.171", "172.18.10.172"]
cluster.initial_master_nodes: ["172.18.10.170", "172.18.10.171", "172.18.10.172"]
action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /apps/elasticsearch/config/certs/es2.example.com.p12
xpack.security.transport.ssl.truststore.path: /apps/elasticsearch/config/certs/es2.example.com.p12
root@es-node2:~# mkdir -p /data/eslogs
root@es-node2:~# mkdir -p /data/esdata
root@es-node2:/# chown -R elasticsearch.elasticsearch /data/*

# node3 configuration file
elasticsearch@es-node3:/apps/elasticsearch$ grep -Ev "^#|^$" config/elasticsearch.yml 
cluster.name: magedu-es-cluster
node.name: node3
path.data: /data/esdata
path.logs: /data/eslogs
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["172.18.10.170", "172.18.10.171", "172.18.10.172"]
cluster.initial_master_nodes: ["172.18.10.170", "172.18.10.171", "172.18.10.172"]
action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /apps/elasticsearch/config/certs/es3.example.com.p12
xpack.security.transport.ssl.truststore.path: /apps/elasticsearch/config/certs/es3.example.com.p12
root@es-node3:~# mkdir -p /data/eslogs
root@es-node3:~# mkdir -p /data/esdata
root@es-node3:/# chown -R elasticsearch.elasticsearch /data/*

# Configure the systemd service file
root@es-node1:/# vim /lib/systemd/system/elasticsearch.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
RuntimeDirectory=elasticsearch
Environment=ES_HOME=/apps/elasticsearch
Environment=ES_PATH_CONF=/apps/elasticsearch/config
Environment=PID_DIR=/apps/elasticsearch

WorkingDirectory=/apps/elasticsearch
User=elasticsearch
Group=elasticsearch
ExecStart=/apps/elasticsearch/bin/elasticsearch --quiet

# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.

StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target

root@es-node1:/# scp /lib/systemd/system/elasticsearch.service root@172.18.10.171:/lib/systemd/system/elasticsearch.service
root@es-node1:/# scp /lib/systemd/system/elasticsearch.service root@172.18.10.172:/lib/systemd/system/elasticsearch.service

root@es-node1:/# systemctl daemon-reload && systemctl start elasticsearch.service && systemctl enable elasticsearch.service
root@es-node2:~# systemctl daemon-reload && systemctl start elasticsearch.service && systemctl enable elasticsearch.service
root@es-node3:~# systemctl daemon-reload && systemctl start elasticsearch.service && systemctl enable elasticsearch.service

# Set passwords for the built-in accounts in bulk (123456 used here)
elasticsearch@es-node1:/apps/elasticsearch$ bin/elasticsearch-setup-passwords interactive
******************************************************************************
Note: The 'elasticsearch-setup-passwords' tool has been deprecated. This command will be removed in a future release.
******************************************************************************

Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana_system]: 
Reenter password for [kibana_system]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

# Create a superuser account
elasticsearch@es-node1:/apps/elasticsearch$ ./bin/elasticsearch-users useradd magedu -p 123456 -r superuser
# Verify
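Besides the screenshot, the cluster state can be checked from the shell with the superuser account just created:

root@es-node1:~# curl -u magedu:123456 'http://172.18.10.170:9200/_cat/nodes?v'
root@es-node1:~# curl -u magedu:123456 'http://172.18.10.170:9200/_cluster/health?pretty'
# a healthy cluster lists all three nodes and reports "status" : "green"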

(screenshot)

3. Basic Usage of the ES head Plugin
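head is not bundled with Elasticsearch 8, so it has to be deployed separately. A minimal sketch, assuming Docker is available and using the mobz/elasticsearch-head image: the CORS settings below must be added to elasticsearch.yml on every node (plus a restart), and with X-Pack enabled the credentials are passed on head's URL:

# elasticsearch.yml additions on each node
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Type,Content-Length
# run head on any machine that can reach the cluster
root@es-node1:~# docker run -d -p 9100:9100 mobz/elasticsearch-head:5
# then browse to http://<head-host>:9100/?auth_user=magedu&auth_password=123456
# and connect to http://172.18.10.170:9200/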

Connection and overview:

(screenshot)

Data browsing:

(screenshot)

Query:

(screenshot)

4. Deploying Logstash, Writing Its Configuration, and Collecting Multiple Log Files into Different ES Indices

# Collect local log files with logstash
# Install logstash
root@elk-logstash:/apps# dpkg -i logstash-8.5.1-amd64.deb

# Log collection example: watch the syslog file
root@elk-logstash:~# cat /etc/logstash/conf.d/syslog-to-es.conf
input {
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "1"
  }
}

output {
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["172.18.10.170:9200"]
      index => "magedu-systemlog-%{+YYYY.MM.dd}"
      password => "123456"
      user => "magedu" 
    }
  }
}

root@elk-logstash:~# systemctl restart logstash
root@elk-logstash:~# chmod 777 /var/log/syslog
# Verify
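The new index can also be confirmed from the shell before looking at Kibana or head:

root@elk-logstash:~# curl -u magedu:123456 'http://172.18.10.170:9200/_cat/indices/magedu-systemlog-*?v'
# expect one magedu-systemlog-<date> index with a growing docs.count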

(screenshot)

# Also collect the auth.log file
root@elk-logstash:~# vim /etc/logstash/conf.d/syslog-to-es.conf 
input {
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "1"
  }

  file {
    path => "/var/log/auth.log"
    type => "authlog"
    start_position => "beginning"
    stat_interval => "1"
  }
}

output {
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["172.18.10.170:9200"]
      index => "magedu-systemlog-%{+YYYY.MM.dd}"
      password => "123456"
      user => "magedu"
    }
  }

  if [type] == "authlog" {
    elasticsearch {
      hosts => ["172.18.10.170:9200"]
      index => "magedu-authlog-%{+YYYY.MM.dd}"
      password => "123456"
      user => "magedu"
    }
  }
}
# Test the configuration
root@elk-logstash:~# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/syslog-to-es.conf -t
root@elk-logstash:~# chmod 777 /var/log/auth.log
root@elk-logstash:~# systemctl restart logstash
# Verify
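Again, a quick shell-side check, pulling one sample event from the new index:

root@elk-logstash:~# curl -u magedu:123456 'http://172.18.10.170:9200/magedu-authlog-*/_search?size=1&pretty'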

(screenshot)

5. Installing and Using Kibana, heartbeat, and metricbeat

# Install and use Kibana
[root@redis-2 apps]# rpm -ivh kibana-8.5.1-x86_64.rpm 
root@elk-logstash:/apps# grep -Ev "^$|^#" /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://172.18.10.170:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "123456"
i18n.locale: "zh-CN"
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
pid.file: /run/kibana/kibana.pid
[root@redis-2 apps]# systemctl restart kibana.service
[root@redis-2 apps]# systemctl enable kibana.service
[root@redis-2 apps]# lsof -i:5601
COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
node    3012 kibana   19u  IPv4  50629      0t0  TCP localhost:esmagent (LISTEN)
[root@redis-2 apps]# tail -f /var/log/kibana/kibana.log
# Login page: sign in with an ES user

(screenshot)

Index management

(screenshot)

Create a data view

(screenshot)

View the data view

(screenshot)

# Install and use metricbeat
root@elk-logstash:/apps# dpkg -i metricbeat-8.5.1-amd64.deb 
root@elk-logstash:/apps#  grep -v "#" /etc/metricbeat/metricbeat.yml |grep -v "^$"
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.kibana:
  host: "172.18.10.173:5601"
  username: "magedu"
  password: "123456"
output.elasticsearch:
  hosts: ["172.18.10.170:9200"]
  username: "magedu"
  password: "123456"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
root@elk-logstash:/apps# systemctl restart metricbeat.service 
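metricbeat 8 ships to a data stream (metricbeat-8.5.1) by default; one way to confirm data is flowing, sketched with the superuser account:

root@elk-logstash:/apps# curl -u magedu:123456 'http://172.18.10.170:9200/_data_stream?pretty' | grep '"name"'
root@elk-logstash:/apps# curl -u magedu:123456 'http://172.18.10.170:9200/metricbeat-*/_search?size=0&pretty'
# the hits.total.value counter should keep growing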

# Install and use heartbeat
root@elk-logstash:/apps# dpkg -i  heartbeat-8.5.1-amd64.deb 
root@elk-logstash:/apps#  grep -v "#" /etc/heartbeat/heartbeat.yml | grep -v "^$"
heartbeat.config.monitors:
  path: ${path.config}/monitors.d/*.yml
  reload.enabled: false
  reload.period: 5s
heartbeat.monitors:
- type: http
  enabled: true
  id: http-monitor
  name: http-domain-Monitor
  urls: ["http://www.magedu.com","http://www.baidu.com"]
  schedule: '@every 10s'
- type: icmp
  enabled: true
  id: icmp-monitor
  name: icmp-ip-monitor
  schedule: '*/1 * * * * * *'
  hosts: ["172.18.10.170","172.18.10.173"]
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.kibana:
  host: "172.18.10.173:5601"
  setup_kibana_username: "magedu"
  setup_kibana_password: "123456"
output.elasticsearch:
  hosts: ["172.18.10.170:9200"]
  username: "magedu"
  password: "123456"
processors:
  - add_observer_metadata:
root@elk-logstash:/apps# systemctl restart heartbeat-elastic.service 
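heartbeat data can be verified the same way (the monitors also show up in Kibana's Observability → Uptime app):

root@elk-logstash:/apps# curl -u magedu:123456 'http://172.18.10.170:9200/heartbeat-*/_search?size=1&pretty'
# each hit carries monitor.id (http-monitor / icmp-monitor) and monitor.status (up/down)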

(screenshots)

6. Logstash Filters: Parsing Nginx Default-Format Access and Error Logs into JSON with Grok, Writing to Elasticsearch, and Displaying in Kibana

# Install nginx
root@elk-logstash:/apps# tar -xf nginx-1.14.2.tar.gz 
root@elk-logstash:/apps/nginx-1.14.2# ./configure --prefix=/apps/nginx
root@elk-logstash:/apps/nginx-1.14.2# make
root@elk-logstash:/apps/nginx-1.14.2# make install
root@elk-logstash:/apps/nginx# /apps/nginx/sbin/nginx
root@elk-logstash:/var/log/logstash# vim /etc/logstash/conf.d/nginx-log-to-es.conf
input {
  file {
    path => "/apps/nginx/logs/access.log"
    type => "nginx-accesslog"
    stat_interval => "1"
    start_position => "beginning"
  }

  file {
    path => "/apps/nginx/logs/error.log"
    type => "nginx-errorlog"
    stat_interval => "1"
    start_position => "beginning"
  }
}

filter {
  if [type] == "nginx-accesslog" {
    grok {
      match => { "message" => ["%{IPORHOST:clientip} - %{DATA:username} \[%{HTTPDATE:request-time}\] \"%{WORD:request-method} %{DATA:request-uri} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} \"%{DATA:referrer}\" \"%{DATA:useragent}\""] }
      remove_field => "message"
      add_field => { "project" => "magedu" }
    }
    mutate {
      convert => [ "[response_code]", "integer" ]
    }
  }
  if [type] == "nginx-errorlog" {
    grok {
      match => { "message" => ["(?<timestamp>%{YEAR}[./]%{MONTHNUM}[./]%{MONTHDAY} %{TIME}) \[%{LOGLEVEL:loglevel}\] %{POSINT:pid}#%{NUMBER:threadid}\: \*%{NUMBER:connectionid} %{GREEDYDATA:message}, client: %{IPV4:clientip}, server: %{GREEDYDATA:server}, request: \"(?:%{WORD:request-method} %{NOTSPACE:request-uri}(?: HTTP/%{NUMBER:httpversion}))\", host: %{GREEDYDATA:domainname}"]}
      remove_field => "message"
    }
  }
}

output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["172.18.10.170:9200"]
      index => "magedu-nginx-accesslog-%{+yyyy.MM.dd}"
      user => "magedu"
      password => "123456"
  }}

  if [type] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["172.18.10.170:9200"]
      index => "magedu-nginx-errorlog-%{+yyyy.MM.dd}"
      user => "magedu"
      password => "123456"
  }}
}
root@elk-logstash:/var/log/logstash# systemctl restart logstash
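Before relying on the indices, the access-log grok pattern can be exercised interactively with a throwaway stdin pipeline (a test sketch; paste a sample access-log line and inspect the parsed fields on stdout):

root@elk-logstash:~# /usr/share/logstash/bin/logstash -e '
input { stdin {} }
filter { grok { match => { "message" => "%{IPORHOST:clientip} - %{DATA:username} \[%{HTTPDATE:request-time}\] \"%{WORD:request-method} %{DATA:request-uri} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} \"%{DATA:referrer}\" \"%{DATA:useragent}\"" } } }
output { stdout { codec => rubydebug } }'
# sample input line:
# 127.0.0.1 - - [07/Nov/2023:22:00:00 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.81.0"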

(screenshots)

7. Collecting Nginx JSON-Format Access Logs with logstash

# Change the nginx log format
root@elk-logstash:/etc/logstash/conf.d# vim /apps/nginx/conf/nginx.conf
    log_format logstash_json '{ "@timestamp": "$time_local", '
                         '"@fields": { '
                         '"remote_addr": "$remote_addr", '
                         '"remote_user": "$remote_user", '
                         '"body_bytes_sent": "$body_bytes_sent", '
                         '"request_time": "$request_time", '
                         '"status": "$status", '
                         '"request": "$request", '
                         '"request_method": "$request_method", '
                         '"http_referrer": "$http_referer", '
                         '"http_x_forwarded_for": "$http_x_forwarded_for", '
                         '"http_user_agent": "$http_user_agent" } }';

    access_log  logs/access.log  logstash_json;
# Reload nginx
root@elk-logstash:/apps/nginx/conf# /apps/nginx/sbin/nginx -s reload
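After the reload, every request is logged as one JSON object per line, along these lines (illustrative values from a local curl):

root@elk-logstash:~# curl -s http://127.0.0.1/ >/dev/null
root@elk-logstash:~# tail -n1 /apps/nginx/logs/access.log
{ "@timestamp": "07/Nov/2023:22:00:00 +0800", "@fields": { "remote_addr": "127.0.0.1", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.000", "status": "200", "request": "GET / HTTP/1.1", "request_method": "GET", "http_referrer": "-", "http_x_forwarded_for": "-", "http_user_agent": "curl/7.81.0" } }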
root@elk-logstash:/etc/logstash/conf.d# vim /etc/logstash/conf.d/nginx-json-log-to-es.conf 
input {
  file {
    path => "/apps/nginx/logs/access.log"
    start_position => "end"
    type => "nginx-json-accesslog"
    stat_interval => "1"
    codec => json
  }
}


output {
  if [type] == "nginx-json-accesslog" {
    elasticsearch {
      hosts => ["172.18.10.170:9200"]
      index => "nginx-accesslog-json-%{+YYYY.MM.dd}"
      user => "magedu"
      password => "123456"
  }}
}
root@elk-logstash:/etc/logstash/conf.d# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-json-log-to-es.conf -t
root@elk-logstash:/etc/logstash/conf.d# systemctl restart logstash
# Verify
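A shell-side check of the JSON index (one sample hit):

root@elk-logstash:~# curl -u magedu:123456 'http://172.18.10.170:9200/nginx-accesslog-json-*/_search?size=1&pretty'
# each hit should contain the @fields object parsed from the JSON log line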

(screenshot)
