ELK Stack Setup and Usage

1. Environment Preparation

1.1 IP Address Plan

IP Address       Applications                          Hostname
192.168.56.11    jdk, filebeat, es, logstash, kibana   elk01
192.168.56.12    jdk, es                               elk02
192.168.56.13    jdk, es                               elk03

1.2 Time Synchronization

[root@elk01/02/03 ~]# cat /var/spool/cron/root
# time sync
*/5 * * * * ntpdate time1.aliyun.com >/dev/null 2>&1

Note: you can also edit this with crontab -e.

2. ES Installation and Deployment

Elasticsearch requires a Java runtime, so all three Elasticsearch servers need a Java environment installed.

2.1 JDK Installation

[root@elk01/02/03 opt]# pwd
/opt
[root@elk01/02/03 opt]# ll jdk-8u144-linux-x64.tar.gz 
-rw-r--r-- 1 root root 185515842 May 31 09:54 jdk-8u144-linux-x64.tar.gz
[root@elk01/02/03 opt]# tar xf jdk-8u144-linux-x64.tar.gz 
[root@elk01/02/03 opt]# ln -s jdk1.8.0_144 jdk
[root@elk01/02/03 opt]# tail -3 /etc/profile
export JAVA_HOME=/opt/jdk
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
[root@elk01/02/03 opt]# source /etc/profile
[root@elk01/02/03 opt]# java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

2.2 Installing ES

[root@elk01/02/03 opt]# tar xf elasticsearch-6.4.3.tar.gz
[root@elk01/02/03 opt]# ln -s elasticsearch-6.4.3 elasticsearch

Create the Elasticsearch data and log directories
[root@elk01/02/03 opt]# mkdir -p  /data/elasticsearch/{data,log}
Create an unprivileged es user and give it ownership of the install, data, and log directories
[root@elk01/02/03 opt]# useradd -M es
[root@elk01/02/03 opt]# chown -R es:es /data/elasticsearch/ /opt/elasticsearch*
Edit the ES configuration file
[root@elk01/02/03 opt]# grep -Ev "^$|#" /opt/elasticsearch/config/elasticsearch.yml 
cluster.name: my-es						# cluster name; nodes sharing this name belong to the same cluster
node.name: node-1 						# this node's name, unique within the cluster (node-2 on elk02, node-3 on elk03)
path.data: /data/elasticsearch/data     # data path
path.logs: /data/elasticsearch/log		# log path
bootstrap.memory_lock: true				# lock enough memory at startup so data is never swapped out
network.host: 192.168.56.11				# network bind address
http.port: 9200							# HTTP listen port
discovery.zen.ping.unicast.hosts: ["192.168.56.11", "192.168.56.12","192.168.56.13"]	# initial list of master-eligible nodes, used to discover nodes joining the cluster
discovery.zen.minimum_master_nodes: 2   # minimum number of master-eligible nodes required to elect a master (default 1)
http.cors.enabled: true					# whether cross-origin requests are allowed (default false)
http.cors.allow-origin: "*" 			# with CORS enabled, "*" (the default) allows every origin; a regular expression can restrict access to specific sites
node.master: true						# whether this node is eligible to be elected master (default true)
node.data: true							# whether this node stores index data (default true)
http.max_content_length: 200mb

Note: scp this file to elk02 and elk03, then adjust node.name and network.host.
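The minimum_master_nodes setting above follows the usual split-brain quorum rule: floor(N/2) + 1 master-eligible nodes. A quick sanity check (a sketch; the helper name is ours, not an ES API):

```python
def min_master_nodes(master_eligible: int) -> int:
    # quorum rule behind discovery.zen.minimum_master_nodes: floor(N/2) + 1
    return master_eligible // 2 + 1

# for this three-node cluster the setting should be 2
print(min_master_nodes(3))  # → 2
```

With 3 master-eligible nodes, a value of 2 guarantees that at most one network partition can ever elect a master.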

JVM tuning

[root@elk01/02/03 opt]# grep -Ei "xms|xmx" /opt/elasticsearch/config/jvm.options |grep -v '^#'
# in production, usually no more than 32 GB
-Xms1g
-Xmx1g

File descriptor and memory-lock limits

Append per-process limits for the es user to /etc/security/limits.conf:

[root@elk01/02/03 opt]# tail -n 4 /etc/security/limits.conf
es hard nofile 65536
es soft fsize unlimited
es hard memlock unlimited
es soft memlock unlimited

Edit /etc/sysctl.conf

[root@elk01/02/03 opt]# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
[root@elk01/02/03 opt]# sysctl -p
net.ipv4.ip_forward = 1
vm.max_map_count = 262144

Start ES

[root@elk01/02/03 opt]# su es -c "/opt/elasticsearch/bin/elasticsearch -d"
[root@elk01/02/03 opt]# netstat -lntp | grep 9200
Note: if startup fails, check the logs for errors.

2.3 Deploying the ES-Head Plugin

[root@elk01 opt]# yum -y install npm git
[root@elk01 opt]# git clone https://github.com/mobz/elasticsearch-head.git
[root@elk01 opt]# cd elasticsearch-head/
[root@elk01 elasticsearch-head]# npm install grunt --save --registry=https://registry.npm.taobao.org
[root@elk01 elasticsearch-head]# ll
total 48
drwxr-xr-x  2 root root   25 Nov 22 22:03 crx
-rw-r--r--  1 root root  248 Nov 22 22:03 Dockerfile
-rw-r--r--  1 root root  221 Nov 22 22:03 Dockerfile-alpine
-rw-r--r--  1 root root  104 Nov 22 22:03 elasticsearch-head.sublime-project
-rw-r--r--  1 root root 2240 Nov 22 22:03 Gruntfile.js
-rw-r--r--  1 root root 3482 Nov 22 22:03 grunt_fileSets.js
-rw-r--r--  1 root root 1100 Nov 22 22:03 index.html
-rw-r--r--  1 root root  559 Nov 22 22:03 LICENCE
drwxr-xr-x 94 root root 4096 Nov 22 22:07 node_modules
-rw-r--r--  1 root root  933 Nov 22 22:07 package.json
-rw-r--r--  1 root root  100 Nov 22 22:03 plugin-descriptor.properties
drwxr-xr-x  4 root root   53 Nov 22 22:03 proxy
-rw-r--r--  1 root root 7243 Nov 22 22:03 README.textile
drwxr-xr-x  5 root root  182 Nov 22 22:03 _site
drwxr-xr-x  5 root root   49 Nov 22 22:03 src
drwxr-xr-x  4 root root   70 Nov 22 22:03 test

[root@elk01 elasticsearch-head]# npm run start &
[root@elk01 elasticsearch-head]# netstat -lntup|grep 9100
tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      3797/grunt

Modify the Elasticsearch configuration to enable cross-origin access, then restart the Elasticsearch service

# tail -2 /opt/elasticsearch/config/elasticsearch.yml 
http.cors.enabled: true
http.cors.allow-origin: "*"

# netstat -lntp | grep 9200
# kill -9 processid
[root@elk01/02/03 opt]# su es -c "/opt/elasticsearch/bin/elasticsearch -d"

Browser access test

Open http://192.168.56.11:9100 in a browser, enter http://192.168.56.11:9200 in the connect box, and connect; the node marked with the ⭐ icon is the master.

3. Logstash Deployment

Note: for this test setup, Logstash is deployed only on elk01.
[root@elk01 ~]# cd /opt/
[root@elk01 opt]# tar xf logstash-6.4.3.tar.gz 
[root@elk01 opt]# ln -s logstash-6.4.3 logstash
[root@elk01 bin]# tail -1 /etc/profile
export PATH=$JAVA_HOME/bin:/opt/logstash/bin:$PATH
[root@elk01 bin]# source /etc/profile

Create the systemd service file

[root@elk01 opt]# cat /usr/lib/systemd/system/logstash.service 
[Unit]
Description=logstash

[Service]
Type=simple
User=root
Group=root
Environment=JAVA_HOME=/opt/jdk
Environment=LS_HOME=/opt/logstash
Environment=LS_SETTINGS_DIR=/opt/logstash/config/
Environment=LS_PIDFILE=/opt/logstash/logstash.pid
Environment=LS_USER=root
Environment=LS_GROUP=root
Environment=LS_GC_LOG_FILE=/opt/logstash/logs/gc.log
Environment=LS_OPEN_FILES=16384
Environment=LS_NICE=19
Environment=SERVICE_NAME=logstash
Environment=SERVICE_DESCRIPTION=logstash
ExecStart=/opt/logstash/bin/logstash "--path.settings" "/opt/logstash/config/"
Restart=always
WorkingDirectory=/opt/logstash
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target

Start Logstash

[root@elk01 opt]# systemctl start logstash.service && systemctl status logstash.service && systemctl enable logstash.service

# Test writing data to Elasticsearch

[root@elk01 opt]# logstash  -e 'input { stdin{} } output { elasticsearch {hosts => ["192.168.56.11:9200"] index =>  "mytest-%{+YYYY.MM.dd}" }}'
The stdin plugin is now waiting for input:
[2021-11-23T17:11:47,488][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2021-11-23T17:11:47,993][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
this is a test

# Verify the data in the Kibana web UI

4. Kibana Deployment

[root@elk01 opt]# tar xf kibana-6.4.3-linux-x86_64.tar.gz 
[root@elk01 opt]# ln -s kibana-6.4.3-linux-x86_64 kibana
[root@elk01 opt]# grep -Ev "^$|#" /opt/kibana/config/kibana.yml
server.port: 5601
server.host: "192.168.56.11"
elasticsearch.url: "http://192.168.56.11:9200"

Start

[root@elk01 opt]# nohup /opt/kibana/bin/kibana &

# Browse to 192.168.56.11:5601

5. Collecting Logs with Logstash

Note: for Logstash to collect other log files, the user running Logstash must have read permission on the collected log files and write permission on any files it writes.

5.1 Collecting a Single System Log

[root@elk01 config]# pwd
/opt/logstash/config
[root@elk01 config]# cat syslog.conf 
input {
  file {
    path => "/var/log/syslog"        # log file path
    type => "systemlog"              # custom type; used to tell logs apart when collecting and storing several of them
    start_position => "beginning"    # where Logstash starts reading: the default "end" tails the file like tail -F; set "beginning" to import existing data from the start of the file, similar to less +F
    stat_interval => "2"             # how often (in seconds) Logstash checks the watched file for updates; default 1
  }
}

output {
  elasticsearch {
    hosts => ["192.168.56.11:9200"]              # target ES node(s)
    index => "logstash-syslog-%{+YYYY.MM.dd}"    # index name
  }
}

Check the configuration file for syntax errors

[root@elk01 config]# logstash -f syslog.conf -t
Configuration OK

Start

[root@elk01 config]# nohup logstash -f /opt/logstash/config/syslog.conf &

Note: when two different servers write into the same index, their entries can be told apart by the host field.
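The %{+YYYY.MM.dd} suffix in the index name is a date pattern that Logstash expands from each event's timestamp, producing one index per day. Roughly equivalent Python (the helper is illustrative, not part of Logstash):

```python
from datetime import date

def daily_index(prefix: str, day: date) -> str:
    # Logstash's %{+YYYY.MM.dd} corresponds roughly to strftime("%Y.%m.%d")
    return f"{prefix}-{day.strftime('%Y.%m.%d')}"

print(daily_index("logstash-syslog", date(2021, 11, 24)))  # → logstash-syslog-2021.11.24
```

Daily indices keep retention simple: old days can be dropped by deleting whole indices.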

5.2 Collecting Nginx Logs

Define a JSON log format for Nginx

[root@elk01 config]# yum install -y nginx
[root@elk01 config]# cd /etc/nginx/
[root@elk01 nginx]# cp nginx.conf{,.bak}
[root@elk01 nginx]# vim /etc/nginx/nginx.conf
log_format access_json '{"@timestamp":"$time_iso8601",'		# define a JSON log format
 '"host":"$server_addr",'
 '"clientip":"$remote_addr",'
 '"size":$body_bytes_sent,'
 '"responsetime":$request_time,'
 '"upstreamtime":"$upstream_response_time",'
 '"upstreamhost":"$upstream_addr",'
 '"http_host":"$host",'
 '"url":"$uri",'
 '"domain":"$host",'
 '"xff":"$http_x_forwarded_for",'
 '"referer":"$http_referer",'
 '"status":"$status"}';
 access_log /var/log/nginx/access.log access_json; 			# switch access_log to the access_json format defined above
# Check the configuration
[root@elk01 nginx]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# Restart the service
[root@elk01 nginx]# systemctl restart nginx.service
[root@elk01 nginx]# curl -sI http://192.168.56.11
HTTP/1.1 200 OK
Server: nginx/1.20.1
Date: Wed, 24 Nov 2021 07:32:03 GMT
Content-Type: text/html
Content-Length: 4833
Last-Modified: Fri, 16 May 2014 15:12:48 GMT
Connection: keep-alive
ETag: "53762af0-12e1"
Accept-Ranges: bytes
# Open http://192.168.56.11 in a browser
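Before pointing Logstash at the file, it is worth confirming that a line produced by access_json parses as valid JSON. A quick check with Python's json module, using a shortened sample modeled on the format above (the field values are invented):

```python
import json

# shortened sample line modeled on the access_json log_format; values are invented
sample = ('{"@timestamp":"2021-11-24T15:32:03+08:00",'
          '"host":"192.168.56.11","clientip":"192.168.56.1",'
          '"size":4833,"responsetime":0.004,"status":"200"}')

event = json.loads(sample)   # raises ValueError if the format were broken
print(event["clientip"], event["status"])  # → 192.168.56.1 200
```

Note that size and responsetime are deliberately unquoted in the log_format, so they arrive as JSON numbers rather than strings.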

Collect the logs

[root@elk01 nginx]# cd /opt/logstash/config/
[root@elk01 config]# cat nginx-log.conf 
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
    type => "nginx-accesslog"
    codec => json
  }
  file {
    path => "/var/log/messages"
    start_position => "end"
    type => "messages"
  }
  file {
    path => "/var/log/nginx/error.log"
    start_position => "end"
    type => "nginx-error"
  }
}
output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "logstash-nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "messages" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "logstash-messages-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "nginx-error" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "logstash-nginx-error-%{+YYYY.MM.dd}"
    }
  }
}
# Check the config syntax
[root@elk01 config]# logstash -f nginx-log.conf -t
Configuration OK
# Start and test
[root@elk01 config]# nohup logstash -f /opt/logstash/config/nginx-log.conf &
# Check the indices
[root@elk01 config]# curl -s -XGET 192.168.56.11:9200/_cat/indices|grep -E 'nginx|message'
green open logstash-nginx-error-2021.11.24     iWepL4m5QguFd8YeIvojXw 5 1   6 0  54.2kb  24.3kb
green open logstash-nginx-accesslog-2021.11.24 4TxiKyYKRUeNbH8tfFtv3A 5 1  13 0 151.4kb  75.7kb
green open logstash-messages-2021.11.24        n4CelnpUSyKkPYz6DrU8dA 5 1 247 0 374.6kb 180.8kb

5.3 Collecting Tomcat Logs

# Install Tomcat
[root@elk01 config]# cd /usr/local/
[root@elk01 local]# wget --no-check-certificate https://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-9/v9.0.55/bin/apache-tomcat-9.0.55.tar.gz
[root@elk01 local]# tar xf apache-tomcat-9.0.55.tar.gz 
[root@elk01 local]# ln -s /usr/local/apache-tomcat-9.0.55 /usr/local/tomcat
[root@elk01 local]# mkdir /usr/local/tomcat/webapps/webdir
[root@elk01 local]# echo "<h1>welcome to use tomcat</h1>" > /usr/local/tomcat/webapps/webdir/index.html

# Change the access log pattern to JSON
[root@elk01 local]# vim +166 /usr/local/tomcat/conf/server.xml
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>
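The pattern value is XML-escaped because server.xml is XML: every &quot; entity becomes a double quote when Tomcat writes the log line. Unescaping a shortened piece of the pattern shows the JSON template that results (only two fields shown):

```python
import html

# a shortened piece of the server.xml pattern; &quot; entities turn into double quotes
pattern = "{&quot;clientip&quot;:&quot;%h&quot;,&quot;status&quot;:&quot;%s&quot;}"

print(html.unescape(pattern))  # → {"clientip":"%h","status":"%s"}
```

Tomcat then substitutes %h, %s, and the other placeholders per request, yielding the JSON log lines shown below.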
               
# Start Tomcat and generate some log entries with test requests
[root@elk01 local]# /usr/local/tomcat/bin/startup.sh 
Using CATALINA_BASE:   /usr/local/tomcat
Using CATALINA_HOME:   /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME:        /opt/jdk
Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
Using CATALINA_OPTS:   
Tomcat started.
[root@elk01 local]# netstat -lntup|grep 8080
tcp6       0      0 :::8080                 :::*                    LISTEN      12987/java   

# Run a quick load test to confirm the Tomcat log is now JSON
[root@elk01 local]# ab -n 100 -c 10 http://192.168.56.11:8080/webdir/index.html
[root@elk01 local]# tail -2 /usr/local/tomcat/logs/localhost_access_log.2021-11-24.txt 
{"clientip":"192.168.56.11","ClientUser":"-","authenticated":"-","AccessTime":"[24/Nov/2021:16:31:35 +0800]","method":"GET /webdir/index.html HTTP/1.0","status":"200","SendBytes":"31","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.56.11","ClientUser":"-","authenticated":"-","AccessTime":"[24/Nov/2021:16:31:35 +0800]","method":"GET /webdir/index.html HTTP/1.0","status":"200","SendBytes":"31","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}

Edit the Logstash configuration

[root@elk01 local]# cat /opt/logstash/config/tomcat-log.conf
input {
  file {
    path => "/usr/local/tomcat/logs/localhost_access_log*.txt"
    start_position => "beginning"
    type => "tomcat-access-log"
    stat_interval => "2"
    codec => "json"    # parse the JSON log lines into fields
  }
}
output {
  if [type] == "tomcat-access-log" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "logstash-tomcat-access-%{+YYYY.MM.dd}"
    }
  }
}

Validate the config file and start the service

[root@elk01 local]# logstash -f /opt/logstash/config/tomcat-log.conf -t
Configuration OK
[root@elk01 local]# nohup logstash -f /opt/logstash/config/tomcat-log.conf &
# Check in the ElasticSearch-Head plugin that the data is indexed correctly, then create the matching index pattern in Kibana and view the data on the dashboard.
[root@elk01 local]# curl -s -XGET 192.168.56.11:9200/_cat/indices|grep -E 'tomcat'
green open logstash-tomcat-access-2021.11.24   teB0FlV8S56CLNuE3X5lhw 5 1  100 0  88.3kb 44.1kb

5.4 Collecting Java Logs

The ES data logs are themselves ready-made Java logs, so we can use them directly.

[root@elk01 local]# cd /opt/logstash/config/
[root@elk01 config]# cat es-java.conf 
input {
  file {
    path => "/data/elasticsearch/log/my-es.log"
    type => "eslog"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["      # a line starting with [ begins a new event
      negate => true
      what => "previous"    # non-matching lines are appended to the previous event
    }
  }
}
output {
  if [type] == "eslog" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "logstash-es-%{+YYYY.MM.dd}"
    }
  }
}
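The multiline codec treats every line beginning with [ as the start of a new event and, because of negate => true / what => "previous", folds all other lines (e.g. Java stack traces) into the preceding event. A small Python re-implementation of that grouping logic (illustrative only, not the actual codec):

```python
import re

def group_multiline(lines, pattern=r"^\["):
    # negate => true, what => "previous": a line NOT matching the pattern is
    # appended to the previous event; a matching line starts a new event
    events, current = [], []
    for line in lines:
        if re.match(pattern, line):
            if current:
                events.append("\n".join(current))
            current = [line]
        else:
            current.append(line)
    if current:
        events.append("\n".join(current))
    return events

log = ["[2021-11-24T16:00:01] caught exception",
       "java.lang.NullPointerException",
       "    at com.example.Foo.bar(Foo.java:42)",
       "[2021-11-24T16:00:02] next entry"]
print(len(group_multiline(log)))  # → 2
```

The four raw lines collapse into two events, so a whole stack trace is indexed as a single document.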

Validate the config file and start

[root@elk01 config]# logstash -f /opt/logstash/config/es-java.conf -t
Configuration OK
[root@elk01 config]# nohup logstash -f /opt/logstash/config/es-java.conf &
[root@elk01 config]# curl -s -XGET 192.168.56.11:9200/_cat/indices|grep -E 'logstash-es'
green open logstash-es-2021.11.24              B0Uf0GGjS1eFxh01iUm8gQ 5 1   46 0  84.7kb  39.7kb

5.5 Collecting Logs with Logstash and Writing to Redis

Note: elk02 is used as the Redis node.
[root@elk02 ~]# yum install redis -y
[root@elk02 ~]# grep -Ev '^$|#' /etc/redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /var/log/redis/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

[root@elk02 ~]# systemctl restart redis
[root@elk02 ~]# netstat -lntup|grep 6379
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      1810/redis-server 0 

Edit the config file, validate it, and start

[root@elk01 ~]# cat /opt/logstash/config/nginx-redis.conf
input {
 file {
   path => "/var/log/nginx/access.log"
   start_position => "beginning"
   type => "nginx-accesslog"
   codec => json
  }}
output {
  if [type] == "nginx-accesslog" {
   redis {
     data_type => "list"
     host => "192.168.56.12"
     port => "6379"
     db => "0"
     key => "nginx-accesslog"
     password => "123456"
  }}
}

[root@elk01 ~]# logstash -f /opt/logstash/config/nginx-redis.conf -t
Configuration OK
[root@elk01 ~]# nohup logstash -f /opt/logstash/config/nginx-redis.conf &

Connect to the Redis server and verify the key

[root@elk02 ~]# redis-cli -h 192.168.56.12
192.168.56.12:6379> auth 123456
OK
192.168.56.12:6379> keys *
1) "nginx-accesslog"
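With data_type => "list", Logstash pushes each event onto the tail of the Redis list and a consumer later pops from the head, so the key behaves as a FIFO buffer between shipper and indexer. The semantics can be sketched with a deque (a Python stand-in, not the redis client):

```python
from collections import deque

queue = deque()   # stands in for the Redis list "nginx-accesslog"

# producer side (logstash output): append events to the tail
for event in ("event-1", "event-2", "event-3"):
    queue.append(event)

# consumer side (logstash input): pop from the head, preserving order
consumed = [queue.popleft() for _ in range(len(queue))]
print(consumed, len(queue))  # → ['event-1', 'event-2', 'event-3'] 0
```

Once every event is consumed, the list is empty and the key disappears from `keys *`.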

Configure another Logstash server to read the data from Redis

# For this test, install a second Logstash instance on elk03
[root@elk03 ~]# cd /opt/
[root@elk03 opt]# tar xf logstash-6.4.3.tar.gz 
[root@elk03 opt]# ln -s logstash-6.4.3 logstash
[root@elk03 opt]# vim /etc/profile
[root@elk03 opt]# tail -1 /etc/profile
export PATH=$JAVA_HOME/bin:/opt/logstash/bin:$PATH
[root@elk03 opt]# source /etc/profile

# Create the systemd service file
[root@elk03 opt]# cat /usr/lib/systemd/system/logstash.service 
[Unit]
Description=logstash

[Service]
Type=simple
User=root
Group=root
Environment=JAVA_HOME=/opt/jdk
Environment=LS_HOME=/opt/logstash
Environment=LS_SETTINGS_DIR=/opt/logstash/config/
Environment=LS_PIDFILE=/opt/logstash/logstash.pid
Environment=LS_USER=root
Environment=LS_GROUP=root
Environment=LS_GC_LOG_FILE=/opt/logstash/logs/gc.log
Environment=LS_OPEN_FILES=16384
Environment=LS_NICE=19
Environment=SERVICE_NAME=logstash
Environment=SERVICE_DESCRIPTION=logstash
ExecStart=/opt/logstash/bin/logstash "--path.settings" "/opt/logstash/config/"
Restart=always
WorkingDirectory=/opt/logstash
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target

# Start the service
[root@elk03 opt]# systemctl start logstash.service && systemctl status logstash.service && systemctl enable logstash.service

Edit the Logstash configuration

[root@elk03 opt]# cat /opt/logstash/config/redis-key.conf
input {
  redis {
    data_type => "list"
    key => "nginx-accesslog"
    host => "192.168.56.12"
    port => "6379"
    db => "0"
    password => "123456"
    codec => "json"
    type => "nginx-accesslog"
  }}
output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["192.168.56.13:9200"]
      index => "logstash-nginx-accesslog-redis-%{+YYYY.MM.dd}"
    }
  }
}

[root@elk03 opt]# logstash -f /opt/logstash/config/redis-key.conf -t
Configuration OK

[root@elk03 opt]# nohup logstash -f /opt/logstash/config/redis-key.conf &

[root@elk03 opt]# curl -s -XGET 192.168.56.11:9200/_cat/indices|grep -E 'logstash-nginx-accesslog-redis'
green open logstash-nginx-accesslog-redis-2021.11.24 XQI9QWNtT4C0bt2McecdAw 5 1   12 0 107.2kb  53.6kb

Verify in Redis

[root@elk02 ~]# redis-cli -h 192.168.56.12
192.168.56.12:6379> auth 123456
OK
192.168.56.12:6379> keys *
1) "nginx-accesslog"
192.168.56.12:6379> keys *
(empty list or set)				# the data has been consumed
192.168.56.12:6379> 

# Then create the matching index pattern in the Kibana UI and verify the data

6. Collecting Logs with Filebeat

6.1 Collecting Nginx Logs

# Note: this reuses the JSON log format defined earlier

[root@elk01 ~]# cd /opt/
[root@elk01 opt]# tar xf filebeat-6.4.3-linux-x86_64.tar.gz
[root@elk01 opt]# ln -s filebeat-6.4.3-linux-x86_64 filebeat
[root@elk01 opt]# cat /opt/filebeat/nginxlog.yml
filebeat.inputs:
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
  
output.elasticsearch:
  hosts: ["192.168.56.11:9200"]
#  index: "nginx-json-%{[beat.version]}-%{+yyyy.MM.dd}"
  indices:
    - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        tags: "access"
    - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        tags: "error"
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
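The indices/when.contains rules above route each event to an index based on its tags; the first matching rule wins. The selection logic amounts to a small first-match lookup, sketched here (the function and rule names are ours, not Filebeat's):

```python
def pick_index(tags, version="6.4.3", day="2021.11.24"):
    # first-match routing, mirroring the when.contains rules above
    rules = [("access", "nginx-access"), ("error", "nginx-error")]
    for needle, prefix in rules:
        if needle in tags:
            return f"{prefix}-{version}-{day}"
    return None

print(pick_index(["access"]))  # → nginx-access-6.4.3-2021.11.24
```

Events matching no rule would fall back to Filebeat's default index; the None here is just the sketch's stand-in for that case.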
  
# Start
[root@elk01 opt]# nohup /opt/filebeat/filebeat -e -c /opt/filebeat/nginxlog.yml &
[root@elk01 opt]# curl -s -XGET 192.168.56.11:9200/_cat/indices|grep -E "access|error"|grep "6.4.3"
green open nginx-error-6.4.3-2021.11.24              Eg8njumpRZyKQr02CikzhA 5 1    9 0  84.7kb  42.3kb
green open nginx-access-6.4.3-2021.11.24             gv7AzzkfQ-eu3FLu4QXV3w 5 1    8 0 136.6kb  68.3kb

6.2 Collecting Tomcat Logs

# Note: this reuses the JSON log pattern defined earlier

[root@elk01 opt]# cat /opt/filebeat/tomcat-filebeat.yml
filebeat.inputs:
- type: log
  enabled: true 
  paths:
    - /usr/local/tomcat/logs/localhost_access_log*.txt 
  json.keys_under_root: true
  json.overwrite_keys: true

output.elasticsearch:
  hosts: ["192.168.56.11:9200"]
  index: "tomcat-json-%{[beat.version]}-%{+yyyy.MM.dd}"
setup.template.name: "tomcat"
setup.template.pattern: "tomcat-*"
setup.template.enabled: false
setup.template.overwrite: true

# Start
[root@elk01 opt]# nohup /opt/filebeat/filebeat -e -c /opt/filebeat/tomcat-filebeat.yml &

[root@elk01 opt]# curl -s -XGET 192.168.56.11:9200/_cat/indices|grep "tomcat-json"
green open tomcat-json-6.4.3-2021.11.24              Ft47l2ztSB-lpPMRHta4jg 5 1  200 0   239kb 106.4kb

6.3 Collecting Docker Logs

# Docker is assumed to be installed already
[root@elk01 opt]# docker version
[root@elk01 opt]# docker pull nginx
[root@elk01 opt]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED      SIZE
nginx        latest    ea335eea17ab   7 days ago   141MB
[root@elk01 opt]# docker run -itd --name nginx -p 81:80  nginx
cc3e0368b002214c33f433b897c7d307ee876e4b910748d66eb3b22722af8d21
[root@elk01 opt]# docker ps 
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS                               NAMES
cc3e0368b002   nginx     "/docker-entrypoint.…"   14 seconds ago   Up 13 seconds   0.0.0.0:81->80/tcp, :::81->80/tcp   nginx
[root@elk01 opt]# docker inspect nginx|grep -w "Id"
        "Id": "cc3e0368b002214c33f433b897c7d307ee876e4b910748d66eb3b22722af8d21",
        
[root@elk01 opt]# ll /var/lib/docker/containers/cc3e0368b002214c33f433b897c7d307ee876e4b910748d66eb3b22722af8d21/
total 28
-rw-r----- 1 root root 2210 Nov 24 22:17 cc3e0368b002214c33f433b897c7d307ee876e4b910748d66eb3b22722af8d21-json.log
drwx------ 2 root root    6 Nov 24 22:17 checkpoints
-rw------- 1 root root 2898 Nov 24 22:17 config.v2.json
-rw-r--r-- 1 root root 1512 Nov 24 22:17 hostconfig.json
-rw-r--r-- 1 root root   13 Nov 24 22:17 hostname
-rw-r--r-- 1 root root  174 Nov 24 22:17 hosts
drwx--x--- 2 root root    6 Nov 24 22:17 mounts
-rw-r--r-- 1 root root   51 Nov 24 22:17 resolv.conf
-rw-r--r-- 1 root root   71 Nov 24 22:17 resolv.conf.hash

Edit the configuration file

[root@elk01 opt]# cat /opt/filebeat/docker-nginx.yml
filebeat.inputs:
  - type: docker
    containers.ids:
      - 'cc3e0368b002214c33f433b897c7d307ee876e4b910748d66eb3b22722af8d21'
    tags: ["docker-nginx"]
output.elasticsearch:
  hosts: ["192.168.56.11:9200"]
  index: "docker-nginx-%{[beat.version]}-%{+yyyy.MM.dd}"
setup.template.name: "docker"
setup.template.pattern: "docker-*"
setup.template.enabled: false
setup.template.overwrite: true

Start and test

[root@elk01 opt]# nohup /opt/filebeat/filebeat -e -c /opt/filebeat/docker-nginx.yml &
[root@elk01 opt]# curl -s -XGET 192.168.56.11:9200/_cat/indices|grep "docker"
green open docker-nginx-6.4.3-2021.11.24             qKzD8uITSGmdizdyLoDynA 5 1   15 0 115.1kb  57.5kb

# Add the index pattern in the Kibana UI and verify

7. Adding User Authentication to the Elasticsearch Cluster

7.1 Enabling User Authentication

To crack the Platinum-tier features, security must be enabled first, which means installing a certificate on every node. ES 6.4 needs no separate x-pack install, because in 6.4 x-pack is already a built-in component. Enter trial mode first: if you configure the certificates before setting passwords, Kibana cannot connect to Elasticsearch, and setting passwords fails with the following message.

Unexpected response code [403] from calling GET http://192.168.56.11:9200/_xpack/security/_authenticate?pretty
It doesn't look like the X-Pack security feature is available on this Elasticsearch node.
Please check if you have installed a license that allows access to X-Pack Security feature.

ERROR: X-Pack Security is not available.

and opening Kibana shows:

Cannot connect to the Elasticsearch cluster currently configured for Kibana.

To avoid this, start the trial before configuring the x-pack certificates: [Management] → [License Management] → [Start a 30-day trial] → [Start trial] → [Start my trial].

Generate TLS certificates and authentication material

[root@elk01 ~]# cd /opt/elasticsearch
[root@elk01 elasticsearch]# bin/elasticsearch-certutil cert -out config/elastic-certificates.p12 -pass ""
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
       on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
       instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
       the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires a SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.

    * All certificates generated by this tool will be signed by a certificate authority (CA).
    * The tool can automatically generate a new CA for you, or you can provide your own with the
         -ca or -ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files


Certificates written to /opt/elasticsearch-6.4.3/config/elastic-certificates.p12

This file should be properly secured as it contains the private key for 
your instance.

This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

# After the above completes, the certificate elastic-certificates.p12 has been generated under config/, as shown here:
[root@elk01 elasticsearch]# ll config/*.p12 config/*.keystore
-rw------- 1 root root 3448 Dec 16 20:37 config/elastic-certificates.p12
-rw-rw---- 1 es   es    207 Nov 22 20:25 config/elasticsearch.keystore

Copy the certificate from elk01 to the other nodes in turn:

[root@elk01 elasticsearch]# scp /opt/elasticsearch/config/elastic-certificates.p12 192.168.56.12:/opt/elasticsearch/config/
[root@elk01 elasticsearch]# scp /opt/elasticsearch/config/elastic-certificates.p12 192.168.56.13:/opt/elasticsearch/config/                         

Fix the file ownership

[root@elk01/02/03 ~]# chown es:es /opt/elasticsearch/config/elastic-certificates.p12

Update the configuration

Append the following to elasticsearch.yml:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

Restart the ES cluster

Restart node 1 first

[root@elk01 elasticsearch]# ps -ef|grep elasticsearch
es         1912      1  4 20:07 ?        00:01:53 /opt/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.h7cxhuBZ -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/opt/elasticsearch -Des.path.conf=/opt/elasticsearch/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /opt/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
es         1941   1912  0 20:08 ?        00:00:00 /opt/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root       5008   2622  0 20:52 pts/1    00:00:00 grep --color=auto elasticsearch
[root@elk01 elasticsearch]# kill -9 1912
[root@elk01 elasticsearch]# su es -c "/opt/elasticsearch/bin/elasticsearch -d"

Then restart node 2 and node 3 in turn.

Set the Elasticsearch cluster passwords (interactive)

Run the following command on node 1 to set the user passwords. Once set, they are synced to the other nodes automatically.

[root@elk01 elasticsearch]# /opt/elasticsearch/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,kibana,logstash_system,beats_system.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [kibana]: 
Reenter password for [kibana]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [elastic]

Verify access

Accessing Elasticsearch without credentials now fails with an authentication error.

[root@elk01 elasticsearch]# curl -XGET 192.168.56.11:9200/_cluster/health?pretty
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "missing authentication token for REST request [/_cluster/health?pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "missing authentication token for REST request [/_cluster/health?pretty]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}

# Retry with the account elastic and password 123456 (not the real password; recorded only for this write-up), and the request succeeds.
[root@elk01 elasticsearch]# curl -u elastic:123456 -XGET 192.168.56.11:9200/_cluster/health?pretty
{
  "cluster_name" : "my-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 52,
  "active_shards" : 105,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

# The command above passes the password in plain text, which is not recommended; prompt for it instead:
[root@elk01 elasticsearch]# curl -u elastic -XGET 192.168.56.11:9200/_cluster/health?pretty
Enter host password for user 'elastic':

Configure the password in the Kibana application

[root@elk01 elasticsearch]# grep -Ev '^$|#'  /opt/kibana/config/kibana.yml|grep elasticsearch
elasticsearch.url: "http://192.168.56.11:9200"
elasticsearch.username: "elastic"
elasticsearch.password: "123456"

Restart Kibana

[root@elk01 elasticsearch]# netstat -lntup|grep 5601
tcp        0      0 192.168.56.11:5601      0.0.0.0:*               LISTEN      2698/node           
[root@elk01 elasticsearch]# kill -9 2698
[root@elk01 elasticsearch]# nohup /opt/kibana/bin/kibana >/dev/null &

Kibana now requires the username elastic and password 123456 to log in.

7.2 Cracking x-pack

The trial license lasts only 30 days, so we patch x-pack to extend it.

Unlike a manually installed x-pack, the x-pack bundled with Elasticsearch 6.4 lives in modules/x-pack-core, i.e. the file /opt/elasticsearch/modules/x-pack-core/x-pack-core-6.4.3.jar.

Decompile the jar and modify the x-pack source. Two files in the project matter here:

  • org.elasticsearch.license.LicenseVerifier.java
  • org.elasticsearch.xpack.core.XPackBuild.java

Both files have changed somewhat compared with earlier versions, but this does not affect the cracking process. I will not paste the originals; the modified Java files are given below. Create two files with the same names locally and copy the code below into them, and you will have the two modified x-pack source files.

org.elasticsearch.license.LicenseVerifier.java

package org.elasticsearch.license;

import java.nio.*;
import org.elasticsearch.common.bytes.*;
import java.security.*;
import java.util.*;
import org.elasticsearch.common.xcontent.*;
import org.apache.lucene.util.*;
import org.elasticsearch.core.internal.io.*;
import java.io.*;

public class LicenseVerifier
{
    // Patched: skip signature validation and report every license as valid.
    public static boolean verifyLicense(final License license, final byte[] publicKeyData) {
        return true;
    }
    
    public static boolean verifyLicense(final License license) {
        return true;
    }
}

org.elasticsearch.xpack.core.XPackBuild.java

package org.elasticsearch.xpack.core;

import org.elasticsearch.common.io.*;
import java.net.*;
import org.elasticsearch.common.*;
import java.nio.file.*;
import java.io.*;
import java.util.jar.*;

public class XPackBuild
{
    public static final XPackBuild CURRENT;
    private String shortHash;
    private String date;
    
    @SuppressForbidden(reason = "looks up path of xpack.jar directly")
    static Path getElasticsearchCodebase() {
        final URL url = XPackBuild.class.getProtectionDomain().getCodeSource().getLocation();
        try {
            return PathUtils.get(url.toURI());
        }
        catch (URISyntaxException bogus) {
            throw new RuntimeException(bogus);
        }
    }
    
    XPackBuild(final String shortHash, final String date) {
        this.shortHash = shortHash;
        this.date = date;
    }
    
    public String shortHash() {
        return this.shortHash;
    }
    
    public String date() {
        return this.date;
    }
    
    static {
        final Path path = getElasticsearchCodebase();
        String shortHash = null;
        String date = null;
        Label_0157: {
            shortHash = "Unknown";
            date = "Unknown";
        }
        CURRENT = new XPackBuild(shortHash, date);
    }
}

Compile the Java files

[root@elk01 ~]# ll *.java
-rw-r--r-- 1 root root  532 Dec 16 21:13 LicenseVerifier.java
-rw-r--r-- 1 root root 1276 Dec 16 21:13 XPackBuild.java

Compile LicenseVerifier.java

[root@elk01 ~]# javac -cp "/opt/elasticsearch/modules/x-pack-core/*:/opt/elasticsearch/lib/*" LicenseVerifier.java

Compile XPackBuild.java

[root@elk01 ~]# javac -cp "/opt/elasticsearch/modules/x-pack-core/*:/opt/elasticsearch/lib/*" XPackBuild.java 

Check the result

[root@elk01 ~]# ll *.class
-rw-r--r-- 1 root root  410 Dec 16 21:15 LicenseVerifier.class
-rw-r--r-- 1 root root 1512 Dec 16 21:16 XPackBuild.class

Back up the original file

After compiling, the new classes must go into x-pack-core-6.4.3.jar. We build the new jar by unpacking the original, replacing the classes, and repacking.

[root@elk01 elasticsearch]# pwd
/opt/elasticsearch
[root@elk01 elasticsearch]# cd modules/x-pack-core/
[root@elk01 x-pack-core]# ll x-pack-core-6.4.3.jar 
-rw-r--r-- 1 es es 1799091 Oct 31  2018 x-pack-core-6.4.3.jar
[root@elk01 x-pack-core]# cp x-pack-core-6.4.3.jar x-pack-core-6.4.3.jar.bak

Unpack the jar

[root@elk01 x-pack-core]# unzip x-pack-core-6.4.3.jar -d ./x-pack-core-6.4.3

Replace the class files

Copy the freshly compiled (patched) classes over the originals at the same paths:
[root@elk01 x-pack-core]# cp /root/LicenseVerifier.class ./x-pack-core-6.4.3/org/elasticsearch/license/
cp: overwrite ‘./x-pack-core-6.4.3/org/elasticsearch/license/LicenseVerifier.class’? y
[root@elk01 x-pack-core]# cp /root/XPackBuild.class ./x-pack-core-6.4.3/org/elasticsearch/xpack/core/
cp: overwrite ‘./x-pack-core-6.4.3/org/elasticsearch/xpack/core/XPackBuild.class’? y

Repack the jar

[root@elk01 x-pack-core]# jar -cvf x-pack-core-6.4.3.crack.jar -C x-pack-core-6.4.3/ .

Replace the x-pack file

Move the patched jar into place in the Elasticsearch directory:

[root@elk01 x-pack-core]# mv x-pack-core-6.4.3.crack.jar x-pack-core-6.4.3.jar
mv: overwrite ‘x-pack-core-6.4.3.jar’? y

Note that x-pack-core-6.4.3.jar must be replaced on every node in the cluster:

[root@elk01 x-pack-core]# scp x-pack-core-6.4.3.jar 192.168.56.12:/opt/elasticsearch/modules/x-pack-core/
[root@elk01 x-pack-core]# scp x-pack-core-6.4.3.jar 192.168.56.13:/opt/elasticsearch/modules/x-pack-core/  

[root@elk01/02/03 ~]# chown es:es /opt/elasticsearch/modules/x-pack-core/x-pack-core-6.4.3.jar

Restart Elasticsearch.

7.3 Upgrading to Platinum

Request a license
Request a license on the official site: https://license.elastic.co/registration
Newly issued licenses are all Basic, so we edit a few fields in the file to make the software treat it as Platinum. And because we patched the jar that validates certificates, the software can no longer verify whether our certificate is genuine.
The license file arrives in an email sent to the address you registered:

[root@elk01 ~]# cat xue-meng-52f0999f-8ab6-4a59-8328-fb16438fd89d-v5.json 
{"license":{"uid":"52f0999f-8ab6-4a59-8328-fb16438fd89d","type":"basic","issue_date_in_millis":1639612800000,"expiry_date_in_millis":1671235199999,"max_nodes":100,"issued_to":"xue meng (xuemeng)","issuer":"Web Form","signature":"AAAAAwAAAA3SuPRxGqBe6rYeM0UWAAABmC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1PSmkxaktJRVl5MUYvUWh3bHZVUTllbXNPbzBUemtnbWpBbmlWRmRZb25KNFlBR2x0TXc2K2p1Y1VtMG1UQU9TRGZVSGRwaEJGUjE3bXd3LzRqZ05iLzRteWFNekdxRGpIYlFwYkJiNUs0U1hTVlJKNVlXekMrSlVUdFIvV0FNeWdOYnlESDc3MWhlY3hSQmdKSjJ2ZTcvYlBFOHhPQlV3ZHdDQ0tHcG5uOElCaDJ4K1hob29xSG85N0kvTWV3THhlQk9NL01VMFRjNDZpZEVXeUtUMXIyMlIveFpJUkk2WUdveEZaME9XWitGUi9WNTZVQW1FMG1DenhZU0ZmeXlZakVEMjZFT2NvOWxpZGlqVmlHNC8rWVVUYzMwRGVySHpIdURzKzFiRDl4TmM1TUp2VTBOUlJZUlAyV0ZVL2kvVk10L0NsbXNFYVZwT3NSU082dFNNa2prQ0ZsclZ4NTltbU1CVE5lR09Bck93V2J1Y3c9PQAAAQAsQoubzzaiusjhy9UaDjs5Xf/Jz2mK/TN5Ir6Ezo6C10B4CnaYrhs2fWI17j/ENpyw4G4NzJ6blh/RXQVxrHW/Hiu3Lag6ZSjQcq+hy3GAbafpEemPnjkHiljbE05225c0rKcPmnHk/HpCaeRhb2nvXuZFeKsrLJwZ7adhJaGOEoMWMMwgQDXzlrv5it7QS9WylPEtoxwbsdWY0zmA+Bev0X8IJD6C0YSPZByjwB2KkV0EgZqRTQjg93ErGwLnkBVlf09w7PLg7IpnfEojRdQlHRUk8T5+iHKLVV4y39EyC//pEkNWMYznYIuJvd4z96mouMRf1jyeg9hlsn6/92jw","start_date_in_millis":1639612800000}}

The certificate is valid for one year. You can convert the Unix timestamps at http://tool.chinaz.com/Tools/unixtime.aspx. Here I extend it to 20 years.
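GNU date can do the same conversion locally. The license stores milliseconds, so divide by 1000 before feeding the value to date:

```shell
# 2270764800000 ms -> seconds, then to a UTC calendar date (GNU date).
date -u -d @$((2270764800000 / 1000)) +%F
# → 2041-12-16
```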

Replace "type":"basic" with "type":"platinum"    # Basic edition -> Platinum edition
Replace "expiry_date_in_millis":1671235199999 with "expiry_date_in_millis":2270764800000    # 1 year -> 20 years
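The two substitutions can be scripted with sed. This is only a sketch using the example file name and timestamps from this post; adjust both to your own license file:

```shell
# Example license file name from above; use your own file here.
LICENSE=xue-meng-52f0999f-8ab6-4a59-8328-fb16438fd89d-v5.json

# Minimal stand-in payload so the snippet also runs outside this tutorial setup.
[ -f "$LICENSE" ] || printf '%s\n' \
  '{"license":{"type":"basic","expiry_date_in_millis":1671235199999}}' > "$LICENSE"

# basic -> platinum, and push the expiry out to 2041 (2270764800000 ms).
sed -i 's/"type":"basic"/"type":"platinum"/; s/"expiry_date_in_millis":1671235199999/"expiry_date_in_millis":2270764800000/' "$LICENSE"

grep -o '"type":"platinum"' "$LICENSE"
```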

View the current license

[root@elk01 ~]# curl -u elastic:123456 192.168.56.11:9200/_license
{
  "license" : {
    "status" : "active",
    "uid" : "6855a036-2c42-4e87-b2c0-399c95cb009b",
    "type" : "trial",
    "issue_date" : "2021-12-16T12:34:12.940Z",
    "issue_date_in_millis" : 1639658052940,
    "expiry_date" : "2022-01-15T12:34:12.940Z",
    "expiry_date_in_millis" : 1642250052940,
    "max_nodes" : 1000,
    "issued_to" : "my-es",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}

Apply the new license

[root@elk01 ~]# curl -XPUT -u elastic:123456 -H "Content-Type: application/json" 'http://192.168.56.11:9200/_xpack/license?acknowledge=true' -d @xue-meng-52f0999f-8ab6-4a59-8328-fb16438fd89d-v5.json
{"acknowledged":true,"license_status":"valid"}

Verify

[root@elk01 ~]# curl -u elastic:123456 192.168.56.11:9200/_license
{
  "license" : {
    "status" : "active",
    "uid" : "52f0999f-8ab6-4a59-8328-fb16438fd89d",
    "type" : "platinum",
    "issue_date" : "2021-12-16T00:00:00.000Z",
    "issue_date_in_millis" : 1639612800000,
    "expiry_date" : "2041-12-16T00:00:00.000Z",
    "expiry_date_in_millis" : 2270764800000,
    "max_nodes" : 100,
    "issued_to" : "xue meng (xuemeng)",
    "issuer" : "Web Form",
    "start_date_in_millis" : 1639612800000
  }
}

八、Connecting ES to Grafana

Download and start

[root@elk01 ~]# wget https://dl.grafana.com/enterprise/release/grafana-enterprise-7.3.6-1.x86_64.rpm
[root@elk01 ~]# sudo yum install grafana-enterprise-7.3.6-1.x86_64.rpm
[root@elk01 ~]# systemctl start grafana-server.service 
[root@elk01 ~]# systemctl status grafana-server.service 
[root@elk01 ~]# systemctl enable grafana-server.service

List the installed files
[root@elk01 ~]# rpm -ql grafana-enterprise |head
/etc/grafana
/etc/init.d/grafana-server
/etc/sysconfig/grafana-server
/usr/lib/systemd/system/grafana-server.service
/usr/sbin/grafana-cli
/usr/sbin/grafana-server
/usr/share/grafana/VERSION
/usr/share/grafana/bin/grafana-cli
/usr/share/grafana/conf/defaults.ini
/usr/share/grafana/conf/ldap.toml

Verify

Open ip:port in a browser (port 3000 by default). The default username and password are both admin; you will be asked to set a new password on first login.
(screenshot: Grafana initial login page)

Add a data source

Gear icon – Configuration – Add data sources
(screenshot: adding a data source)
Select Elasticsearch as the data source type.
(screenshot: selecting the Elasticsearch data source)
Configure the Elasticsearch data source.
(screenshot: Elasticsearch data source settings)
Create a Dashboard

With the data source in place, create a Dashboard. You can build your own or import one you need; the official site offers many ready-made dashboards.
(screenshot: creating a dashboard)
(screenshots: charting in Grafana)
