Overview:
This ELK log-management stack uses Filebeat to collect application service logs and ship them to Kafka, which buffers the log stream.
Prerequisites:
1. JDK 1.8: several ELK components are written in Java, so a Java runtime is required.
2. Zookeeper-3.4.14: Kafka uses Zookeeper as its registry for cluster metadata.
3. Kafka_2.12: buffers the logs collected by Filebeat so that a surge in log volume cannot overwhelm the system.
4. Filebeat-6.6.0: collects the logs produced by application services and sends them to Kafka, acting as a Kafka producer.
5. Logstash-6.6.0: consumes the log data from Kafka, parses the log content into structured fields, and writes it to Elasticsearch.
6. Elasticsearch-6.6.0: stores the log data synced from Logstash and provides log search.
7. Kibana-6.6.0: visualizes the log data stored in Elasticsearch so developers can inspect application logs.
Installation:
1. Installing and configuring JDK 1.8
1) Download jdk-8u251-linux-x64.tar.gz into the "/home/software" directory.
2) Extract it into "/usr/local": tar -zxvf jdk-8u251-linux-x64.tar.gz -C /usr/local
3) Rename the jdk1.8.0_251 folder under "/usr/local" to jdk1.8: mv jdk1.8.0_251 jdk1.8
4) Configure the environment variables:
vim /etc/profile
Add the following:
export JAVA_HOME=/usr/local/jdk1.8
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
Save and exit with :wq.
Reload the environment variables:
source /etc/profile
5) Verify the configuration:
java -version
java version "1.8.0_251"
Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)
If the output matches the above, the Java environment is configured.
2. Installing and configuring Zookeeper-3.4.14 (cluster mode)
1) Preparation:
Prepare 3 nodes with hostnames configured and system clocks kept in sync; set the hostnames in /etc/hostname and /etc/hosts (here the three nodes are zookeeper-master, zookeeper-slave1, and zookeeper-slave2). Note that the following steps must be carried out on all 3 nodes.
Remember to disable the firewall. Useful firewalld commands:
1. Start the firewall: systemctl start firewalld
2. Stop the firewall: systemctl stop firewalld
3. Restart the firewall: systemctl restart firewalld
4. Check firewall status: systemctl status firewalld
5. Disable the firewall at boot: systemctl disable firewalld
2) Upload Zookeeper to the three server nodes:
Upload zookeeper-3.4.14.tar.gz to the "/home/software" directory on each of the three servers.
3) On each node, extract zookeeper-3.4.14.tar.gz into "/usr/local": tar -zxvf zookeeper-3.4.14.tar.gz -C /usr/local
4) Configure the Zookeeper environment variables:
vim /etc/profile
export ZK_HOME=/usr/local/zookeeper-3.4.14
export PATH=$PATH:$ZK_HOME/bin
Reload the environment variables:
source /etc/profile
5) Edit the Zookeeper configuration file:
cd /usr/local/zookeeper-3.4.14/conf
Rename the sample file zoo_sample.cfg to zoo.cfg:
mv zoo_sample.cfg zoo.cfg
vim zoo.cfg
5-1) Set the data directory:
dataDir=/usr/local/zookeeper-3.4.14/data
5-2) Configure the cluster addresses (the three Zookeeper nodes):
server.0=zookeeper-master:2888:3888
server.1=zookeeper-slave1:2888:3888
server.2=zookeeper-slave2:2888:3888
6) Add the server ID configuration. This takes two steps: first create the directory and file, then write the ID:
6-1) Create the data directory:
mkdir /usr/local/zookeeper-3.4.14/data
6-2) Create the myid file under /usr/local/zookeeper-3.4.14/data:
vim /usr/local/zookeeper-3.4.14/data/myid
Note that the myid content differs per server: set it to 0, 1, and 2 respectively, matching server.0, server.1, and server.2 in zoo.cfg, then save and exit.
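As a convenience, each node's myid can be derived from its own server.N line in zoo.cfg instead of being typed by hand. A minimal sketch (it builds a throwaway copy of the config so it can run anywhere; on a real node, point conf and datadir at the actual paths and substitute $(hostname)):

```shell
# Sketch: derive this node's myid from the server.N entries in zoo.cfg.
# Throwaway files for illustration; on a real node use
# /usr/local/zookeeper-3.4.14/conf/zoo.cfg and /usr/local/zookeeper-3.4.14/data.
conf=$(mktemp); datadir=$(mktemp -d)
cat > "$conf" <<'EOF'
server.0=zookeeper-master:2888:3888
server.1=zookeeper-slave1:2888:3888
server.2=zookeeper-slave2:2888:3888
EOF
host=zookeeper-slave1                      # substitute $(hostname) on a real node
# Pull the N out of "server.N=<host>:..." for this host:
id=$(grep "=${host}:" "$conf" | sed 's/^server\.\([0-9]*\)=.*/\1/')
echo "$id" > "$datadir/myid"
cat "$datadir/myid"
```

Run once per node; the printed ID must match the server.N index for that hostname.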
7) The Zookeeper cluster is now ready. Zookeeper commands:
Run from /usr/local/zookeeper-3.4.14/bin (or from any directory, since the environment variable is set).
Start: zkServer.sh start (run this on all 3 machines, then check the status)
Check status: zkServer.sh status (run on each node; you should see one leader and two followers)
Stop the cluster: zkServer.sh stop
8) Use zkCli.sh to open the Zookeeper client.
Available commands:
List: ls / or ls /zookeeper
Create with a value: create /imooc zookeeper
Read: get /imooc
Update: set /imooc zookeeper1314
Note 1: the cluster keeps its data consistent, so any node shows the same data.
Note 2: znodes come in two types, ephemeral and persistent; since this is an introduction, refer to the documentation for details.
9) Configure Zookeeper to start at boot:
cd /etc/rc.d/init.d/
Create the zookeeper startup script:
touch zookeeper
chmod 777 zookeeper
vim zookeeper
#!/bin/bash
#chkconfig:2345 20 90
#description:zookeeper
#processname:zookeeper
export JAVA_HOME=/usr/local/jdk1.8
export PATH=$JAVA_HOME/bin:$PATH
case $1 in
start) /usr/local/zookeeper-3.4.14/bin/zkServer.sh start;;
stop) /usr/local/zookeeper-3.4.14/bin/zkServer.sh stop;;
status) /usr/local/zookeeper-3.4.14/bin/zkServer.sh status;;
restart) /usr/local/zookeeper-3.4.14/bin/zkServer.sh restart;;
*) echo "require start|stop|status|restart" ;;
esac
Register the script as a service and enable it at boot:
chkconfig --add zookeeper
chkconfig zookeeper on
Verify:
chkconfig --list zookeeper
chkconfig --add zookeeper registers zookeeper as a boot service, and chkconfig --list confirms it is in the list. You can now start and stop Zookeeper with service zookeeper start/stop.
If all of the above works, reboot the Linux server to confirm.
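The init script's case dispatch can be sanity-checked locally with a stand-in function that only echoes what the real script would run (the function name zk is just for this demo):

```shell
# Stand-in for the init script's dispatch: echoes the command instead of running it.
zk() {
  case $1 in
    start|stop|status|restart) echo "/usr/local/zookeeper-3.4.14/bin/zkServer.sh $1" ;;
    *) echo "require start|stop|status|restart" ;;
  esac
}
zk start    # prints the zkServer.sh start invocation
zk bounce   # unknown subcommand prints the usage hint
```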
3. Installing and configuring Kafka_2.12
1) Preparation:
1-1) Download the Kafka package: https://archive.apache.org/dist/kafka/2.1.0/kafka_2.12-2.1.0.tgz
1-2) Upload it to 192.168.137.183
2) Setting up Kafka:
2-1) Upload kafka_2.12-2.1.0.tgz to the "/home/software" directory
2-2) Extract it into "/usr/local": tar -zxvf kafka_2.12-2.1.0.tgz -C /usr/local
2-3) Rename the directory: mv kafka_2.12-2.1.0/ kafka_2.12
2-4) Edit the Kafka configuration file
vim /usr/local/kafka_2.12/config/server.properties
In a cluster, each broker's broker.id must be unique.
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
# Default port
port=9092
# This host's address
host.name=192.168.137.183
# Advertised address
advertised.host.name=192.168.137.183
# Directory for Kafka's log files (which hold the actual message data);
# each partition is stored as log files under this directory
log.dirs=/usr/local/kafka_2.12/kafka-logs
# Default number of partitions for newly created topics
num.partitions=5
# How Kafka coordinates with Zookeeper; Zookeeper's client port defaults
# to 2181, so list the reachable address of every Zookeeper node
zookeeper.connect=192.168.137.183:2181,192.168.137.184:2181,192.168.137.185:2181
2-5) Create the directory where Kafka stores its message data (the log files). Do not skip this step, or startup will fail:
mkdir -p /usr/local/kafka_2.12/kafka-logs
2-6) Kafka is now configured. Start it in the background:
/usr/local/kafka_2.12/bin/kafka-server-start.sh /usr/local/kafka_2.12/config/server.properties &
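A quick local sanity check: read the broker address back out of server.properties; this host:port pair is what Filebeat's output.kafka.hosts must point at later. The sketch below writes a minimal sample file so it can run anywhere; on the server, point props at /usr/local/kafka_2.12/config/server.properties:

```shell
# Sketch: extract the broker host/port from server.properties.
# A minimal sample file is created here for illustration only.
props=$(mktemp)
cat > "$props" <<'EOF'
broker.id=0
port=9092
host.name=192.168.137.183
EOF
host=$(grep '^host.name=' "$props" | cut -d= -f2)
port=$(grep '^port=' "$props" | cut -d= -f2)
echo "${host}:${port}"
```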
3) Setting up the kafka-manager console and verifying
3-1) To install the kafka-manager web console, upload kafka-manager-2.0.0.2.zip to the /home/software directory on 192.168.137.183
3-2) Unzip it: unzip kafka-manager-2.0.0.2.zip -d /usr/local/kafka
3-3) Edit the configuration: vim /usr/local/kafka/kafka-manager-2.0.0.2/conf/application.conf
kafka-manager.zkhosts="192.168.137.183:2181,192.168.137.184:2181,192.168.137.185:2181"
3-4) Start the kafka-manager console:
/usr/local/kafka/kafka-manager-2.0.0.2/bin/kafka-manager &
3-5) Open the console in a browser (the default port is 9000):
http://192.168.137.183:9000/
4. Installing and configuring Filebeat-6.6.0
1) Upload filebeat-6.6.0-linux-x86_64.tar.gz to the "/home/software" directory
2) Extract it into "/usr/local": tar -zxvf filebeat-6.6.0-linux-x86_64.tar.gz -C /usr/local
3) Rename filebeat-6.6.0-linux-x86_64 to filebeat-6.6.0: mv filebeat-6.6.0-linux-x86_64/ filebeat-6.6.0
4) Configure Filebeat: vim /usr/local/filebeat-6.6.0/filebeat.yml
###################### Filebeat Configuration Example #########################
filebeat.prospectors:
- input_type: log
  paths:
    ## app-<service-name>.log; the path is fixed so that log rotation does not cause old files to be re-read
    - /usr/local/logs/app-collector.log
  ## the _type value used when writing to ES
  document_type: "app-log"
  multiline:
    #pattern: '^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})'   # matches lines starting with a timestamp such as 2017-11-15 08:04:23:889
    pattern: '^\['                  # match lines that start with "["
    negate: true                    # lines NOT matching the pattern are continuations
    match: after                    # append them to the end of the previous line
    max_lines: 2000                 # maximum number of lines merged into one event
    timeout: 2s                     # flush the event if no new line arrives within this time
  fields:
    logbiz: collector
    logtopic: app-log-collector     ## used as the Kafka topic, one per service
    env: dev
- input_type: log
  paths:
    - /usr/local/logs/error-collector.log
  document_type: "error-log"
  multiline:
    #pattern: '^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})'   # matches lines starting with a timestamp such as 2017-11-15 08:04:23:889
    pattern: '^\['                  # match lines that start with "["
    negate: true                    # lines NOT matching the pattern are continuations
    match: after                    # append them to the end of the previous line
    max_lines: 2000                 # maximum number of lines merged into one event
    timeout: 2s                     # flush the event if no new line arrives within this time
  fields:
    logbiz: collector
    logtopic: error-log-collector   ## used as the Kafka topic, one per service
    env: dev
output.kafka:
  enabled: true
  hosts: ["192.168.137.183:9092"]
  topic: '%{[fields.logtopic]}'
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1
logging.to_files: true
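The multiline settings above merge stack traces into one event: negate: true plus match: after means every line that does not start with "[" is appended to the previous line. A minimal local illustration of that rule (the awk script and the " | " joiner exist only for this demo):

```shell
# Illustration of the multiline merge rule: lines not starting with '[' are
# appended to the previous event (joined with " | " to keep one line per event).
printf '%s\n' \
  '[2019-01-23T10:00:00.000] [ERROR] boom' \
  'java.lang.NullPointerException' \
  '    at com.example.Foo.bar(Foo.java:42)' \
  '[2019-01-23T10:00:01.000] [INFO] recovered' |
awk '/^\[/ { if (e != "") print e; e = $0; next }
     { e = e " | " $0 }
     END { if (e != "") print e }'
```

Four input lines become two events: the ERROR line with its stack trace attached, and the INFO line.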
5) Check that the configuration is valid:
cd /usr/local/filebeat-6.6.0
./filebeat test config -c filebeat.yml
(In 6.x the old -configtest flag has been replaced by the test config subcommand.)
If the output is "Config OK", the configuration is correct.
6) Start Filebeat:
/usr/local/filebeat-6.6.0/filebeat &
7) Check that Filebeat is running:
ps -ef | grep filebeat
5. Installing and configuring Logstash-6.6.0
1) Upload logstash-6.6.0.tar.gz to the "/home/software" directory
2) Extract it into "/usr/local": tar -zxvf logstash-6.6.0.tar.gz -C /usr/local
3) Configuration files under config/:
Logstash settings: config/logstash.yml
JVM options: config/jvm.options
Logging configuration: config/log4j2.properties
Linux service options: config/startup.options
4) Increasing the number of pipeline worker threads can noticeably improve Logstash throughput:
vim /usr/local/logstash-6.6.0/config/logstash.yml
pipeline.workers: 16
5) Create the pipeline configuration file:
mkdir -p /usr/local/logstash-6.6.0/script/
vim /usr/local/logstash-6.6.0/script/logstash-script.conf
## The multiline plugin can also be applied to other stack-style output, such as Linux kernel logs.
input {
  kafka {
    ## app-log-<service-name>
    topics_pattern => "app-log-.*"
    bootstrap_servers => "192.168.137.183:9092"
    codec => json
    consumer_threads => 1 ## parallel consumer threads; this topic currently has a single partition, so 1 is enough
    decorate_events => true
    #auto_offset_reset => "latest"
    group_id => "app-log-group"
  }
  kafka {
    ## error-log-<service-name>
    topics_pattern => "error-log-.*"
    bootstrap_servers => "192.168.137.183:9092"
    codec => json
    consumer_threads => 1
    decorate_events => true
    #auto_offset_reset => "latest"
    group_id => "error-log-group"
  }
}
filter {
  ## time-zone-aware date string used later in the index name
  ruby {
    code => "event.set('index_time',event.timestamp.time.localtime.strftime('%Y.%m.%d'))"
  }
  ## does [fields][logtopic] (set in filebeat.yml) contain 'app-log'?
  if "app-log" in [fields][logtopic]{
    ## parse the line with a grok expression
    grok {
      match => ["message", "\[%{NOTSPACE:currentDateTime}\] \[%{NOTSPACE:level}\] \[%{NOTSPACE:thread-id}\] \[%{NOTSPACE:class}\] \[%{DATA:hostName}\] \[%{DATA:ip}\] \[%{DATA:applicationName}\] \[%{DATA:location}\] \[%{DATA:messageInfo}\] ## (\'\'|%{QUOTEDSTRING:throwable})"]
    }
  }
  if "error-log" in [fields][logtopic]{
    grok {
      match => ["message", "\[%{NOTSPACE:currentDateTime}\] \[%{NOTSPACE:level}\] \[%{NOTSPACE:thread-id}\] \[%{NOTSPACE:class}\] \[%{DATA:hostName}\] \[%{DATA:ip}\] \[%{DATA:applicationName}\] \[%{DATA:location}\] \[%{DATA:messageInfo}\] ## (\'\'|%{QUOTEDSTRING:throwable})"]
    }
  }
}
## For testing, also print events to the console:
output {
  stdout { codec => rubydebug }
}
## elasticsearch output:
output {
  if "app-log" in [fields][logtopic]{
    ## es output plugin
    elasticsearch {
      # ES address
      hosts => ["192.168.137.197:9200"]
      # credentials
      user => "elastic"
      password => "123456"
      ## index name; index_time appends the date, e.g.:
      ## app-log-collector-2019.01.23
      index => "app-log-%{[fields][logbiz]}-%{index_time}"
      # sniff the cluster nodes (compare http://192.168.137.197:9200/_nodes/http?pretty)
      # so log traffic is load-balanced across the ES cluster
      sniffing => true
      # override Logstash's bundled mapping template
      template_overwrite => true
    }
  }
  if "error-log" in [fields][logtopic]{
    elasticsearch {
      hosts => ["192.168.137.197:9200"]
      user => "elastic"
      password => "123456"
      index => "error-log-%{[fields][logbiz]}-%{index_time}"
      sniffing => true
      template_overwrite => true
    }
  }
}
6) Start Logstash; -f specifies the pipeline configuration file (you can validate it first by adding --config.test_and_exit):
nohup /usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/script/logstash-script.conf &
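The grok pattern in the filter above expects each event as a sequence of bracketed fields. As a rough illustration (the sample values are made up, and since %{NOTSPACE} cannot match spaces, the timestamp must not contain one), an equivalent sed expression pulls out the second bracketed field, which grok captures as %{NOTSPACE:level}:

```shell
# A sample event in the bracketed layout the grok pattern matches
# (all field values here are illustrative).
line="[2019-01-23T10:00:00.000] [ERROR] [thread-1] [com.example.OrderService] [host01] [192.168.137.1] [collector] [OrderService.java:42] [db timeout] ## ''"
# Extract the 2nd bracketed field, i.e. what grok captures as %{NOTSPACE:level}:
level=$(printf '%s\n' "$line" | sed -E 's/^\[[^]]+\] \[([^]]+)\].*/\1/')
echo "$level"
```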
6. Installing and configuring Elasticsearch-6.6.0 (single node)
1) Upload elasticsearch-6.6.0.tar.gz to the "/home/software" directory
2) Extract it into "/usr/local": tar -zxvf elasticsearch-6.6.0.tar.gz -C /usr/local
3) Edit the configuration file:
cd /usr/local/elasticsearch-6.6.0/config
vim elasticsearch.yml
---------------------------------------------- separator line, not part of the config file ------------------------------------------------
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
## cluster name
cluster.name: es_log_cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
## node name; use es-node-2 / es-node-3 on the other nodes
node.name: es-node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
## ES data directory
path.data: /usr/local/elasticsearch-6.6.0/data
#
# Path to log files:
#
## ES log directory
path.logs: /usr/local/elasticsearch-6.6.0/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
## lock the process memory at startup (similar to Oracle's memory pinning, it prevents swapping) so ES runs reliably
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
## set network.host to each node's own IP (the published address)
network.host: 192.168.137.197
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
## split-brain protection, part 1
## initial host list used for discovery when a new node starts;
## the default list is ["127.0.0.1", "[::1]"]
discovery.zen.ping.unicast.hosts: ["192.168.137.197:9300"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
## minimum number of master-eligible nodes, to avoid split brain (set to total master-eligible nodes / 2 + 1)
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
## after a full cluster restart, hold off recovery until N nodes have started
gateway.recover_after_nodes: 1
#
# For more information, consult the gateway module documentation.
#
## how many nodes (master or data) the cluster is expected to have
gateway.expected_nodes: 1
## how long to wait for the expected node count, e.g. once recover_after_nodes
## is met, wait up to 5 minutes for the remaining expected nodes before recovering
gateway.recover_after_time: 5m
## forbid starting more than one node on the same machine
node.max_local_storage_nodes: 1
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
## require explicit index names when deleting indices
action.destructive_requires_name: true
## keep a shard's primary and replica off the same host
cluster.routing.allocation.same_shard.host: true
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/local/elasticsearch-6.6.0/config/elastic-stack-ca.p12
xpack.security.transport.ssl.truststore.path: /usr/local/elasticsearch-6.6.0/config/elastic-stack-ca.p12
---------------------------------------------- separator line, not part of the config file ------------------------------------------------
4) Generate the elastic-stack-ca.p12 certificate:
cd /usr/local/elasticsearch-6.6.0/bin
./elasticsearch-certutil ca
This produces elastic-stack-ca.p12 in the bin directory; move it into /usr/local/elasticsearch-6.6.0/config (in a cluster, copy this file to every node):
mv elastic-stack-ca.p12 ../config
5) Adjust the JVM heap size:
cd /usr/local/elasticsearch-6.6.0/config
vim jvm.options
Change:
-Xms1g
-Xmx1g
to:
-Xms128m
-Xmx128m
6) Edit /etc/security/limits.conf (vim /etc/security/limits.conf) and add:
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
* soft memlock unlimited
* hard memlock unlimited
7) Edit /etc/sysctl.conf (vim /etc/sysctl.conf) and add:
vm.max_map_count=262145
Apply the change: sysctl -p
8) Create a dedicated user and grant it ownership (Elasticsearch refuses to run as root):
useradd esuser
chown -R esuser:esuser /usr/local/elasticsearch-6.6.0
9) Start Elasticsearch:
Switch to the new user:
su esuser
Enter the directory:
cd /usr/local/elasticsearch-6.6.0/bin
Start ES in the background:
./elasticsearch -d
10) Verify:
Open http://192.168.137.197:9200/ and check the response:
{
name: "es-node-1",
cluster_name: "es_log_cluster",
cluster_uuid: "Vel3dCabRg6EYy9liyefCQ",
version: {
number: "6.6.0",
build_flavor: "default",
build_type: "tar",
build_hash: "a9861f4",
build_date: "2019-01-24T11:27:09.439740Z",
build_snapshot: false,
lucene_version: "7.6.0",
minimum_wire_compatibility_version: "5.6.0",
minimum_index_compatibility_version: "5.0.0"
},
tagline: "You Know, for Search"
}
A response similar to the above means Elasticsearch started successfully.
11) Cracking X-Pack:
Note: Elasticsearch ships with the Elastic Stack X-Pack bundle: security, alerting, monitoring, reporting, graph analytics, machine learning, and more. It works out of the box, running agents that collect and monitor indices and metrics from Elasticsearch, Logstash, and Kibana instances, with real-time monitoring through Kibana's visualizations.
11-1) Two class files inside /usr/local/elasticsearch-6.6.0/modules/x-pack-core/x-pack-core-6.6.0.jar need to be replaced
11-2) Overwrite the file of the same name under org/elasticsearch/license in the jar with LicenseVerifier.class
11-3) Overwrite the file of the same name under org/elasticsearch/xpack/core in the jar with XPackBuild.class
11-4) The class files can be downloaded from https://pan.baidu.com/s/1ESqoZW8eieO7Zdgs31hxsQ, password: uqnd
11-5) After replacing the class files, copy the patched jar over the one on the server: /usr/local/elasticsearch-6.6.0/modules/x-pack-core/x-pack-core-6.6.0.jar
12) Resetting the ES cluster password:
1. Add a local account (run on every master node):
cd /usr/local/elasticsearch-6.6.0/bin
./elasticsearch-users useradd tempuser -p tempuser -r superuser
./elasticsearch-users list
Output:
tempuser : superuser
2. Reset the elastic account's password:
curl -XPUT -u tempuser:tempuser http://192.168.137.197:9200/_xpack/security/user/elastic/_password -H "Content-Type: application/json" -d '
{
"password": "123456"
}'
3. Check that the new password works:
curl --user elastic:123456 '192.168.137.197:9200/_cat/health?v'
Output:
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1604650029 08:07:09 es_log_cluster yellow 1 1 25 25 0 0 20 0 - 55.6%
(A yellow status is expected on a single node: replica shards have nowhere to be allocated.)
13) Alternatively, use the bundled tool to set all built-in passwords, here to 123456 (if this fails, install Kibana and apply the license first, then come back and retry):
cd /usr/local/elasticsearch-6.6.0/bin
./elasticsearch-setup-passwords interactive
14) Restart ES (stop it first), starting it the way your deployment requires:
Single node:
cd /usr/local/elasticsearch-6.6.0/bin
./elasticsearch
Then verify the credentials:
curl -u elastic:123456 192.168.137.197:9200
15) Optionally, install the IK Chinese analyzer:
15-1) Download the IK analyzer: https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.6.0/elasticsearch-analysis-ik-6.6.0.zip
15-2) Upload it to the "/home/software" directory
15-3) Create the plugin directory:
mkdir -p /usr/local/elasticsearch-6.6.0/plugins/ik/
15-4) Unzip the package into the directory created above:
unzip -d /usr/local/elasticsearch-6.6.0/plugins/ik/ elasticsearch-analysis-ik-6.6.0.zip
15-5) Restart Elasticsearch to load the IK plugin
Note: if the restart produces permission errors for the "elastic" account, the restart may have reset the account to its default, so rerun the password reset:
curl -XPUT -u tempuser:tempuser http://192.168.137.197:9200/_xpack/security/user/elastic/_password -H "Content-Type: application/json" -d '
{
"password": "123456"
}'
7. Installing and configuring Kibana-6.6.0
1) Upload kibana-6.6.0-linux-x86_64.tar.gz to the "/home/software" directory
2) Extract it into "/usr/local"
3) Rename kibana-6.6.0-linux-x86_64 to kibana-6.6.0: mv kibana-6.6.0-linux-x86_64 kibana-6.6.0
4) Edit the configuration file:
cd /usr/local/kibana-6.6.0/config
vim kibana.yml
server.host: "0.0.0.0"
server.name: "192.168.137.196"
elasticsearch.hosts: ["http://192.168.137.197:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "123456"
5) Start Kibana:
/usr/local/kibana-6.6.0/bin/kibana &
6) Open Kibana:
http://192.168.137.196:5601/app/kibana
7) Apply for a license:
https://license.elastic.co/registration
8) Edit the license you receive. The file name license.json must not change, or the upload will fail:
Changes:
Basic to Platinum: replace "type":"basic" with "type":"platinum"
1 year to 50 years: replace "expiry_date_in_millis":1561420799999 with "expiry_date_in_millis":3107746200000
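The two replacements can also be scripted. The sketch below patches a minimal stand-in license.json (the real file contains more fields; run the same sed against the file you actually received, keeping its name license.json):

```shell
# Patch a stand-in license.json: basic -> platinum, 1 year -> 50 years.
# The file created here is a minimal stand-in for illustration only.
lic=$(mktemp)
cat > "$lic" <<'EOF'
{"license":{"type":"basic","expiry_date_in_millis":1561420799999}}
EOF
sed -i 's/"type":"basic"/"type":"platinum"/; s/"expiry_date_in_millis":1561420799999/"expiry_date_in_millis":3107746200000/' "$lic"
cat "$lic"
```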
9) In the Kibana console, go to Management -> License Management and upload the modified license file
10) Restart Elasticsearch and Kibana
11) Configuring watcher alerting
11-1) In the Kibana console, open "Dev Tools" to get the Console.
First create an index template for the "error-log-*" indices:
## create the template
PUT _template/error-log-
{
"template": "error-log-*",
"order": 0,
"settings": {
"index": {
"refresh_interval": "5s"
}
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"message_field": {
"match_mapping_type": "string",
"path_match": "message",
"mapping": {
"norms": false,
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word"
}
}
},
{
"throwable_field": {
"match_mapping_type": "string",
"path_match": "throwable",
"mapping": {
"norms": false,
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word"
}
}
},
{
"string_fields": {
"match_mapping_type": "string",
"match": "*",
"mapping": {
"norms": false,
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
],
"_all": {
"enabled": false
},
"properties": {
"hostName": {
"type": "keyword"
},
"ip": {
"type": "ip"
},
"level": {
"type": "keyword"
},
"currentDateTime": {
"type": "date"
}
}
}
}
}
Then create the watcher for the "error-log-*" indices:
## create the watcher
PUT _xpack/watcher/watch/error_log_collector_watcher
{
"trigger": {
"schedule": {
"interval": "5s"
}
},
"input": {
"search": {
"request": {
"indices": ["<error-log-collector-{now+8h/d}>"],
"body": {
"size": 0,
"query": {
"bool": {
"must": [
{
"term": {"level": "ERROR"}
}
],
"filter": {
"range": {
"currentDateTime": {
"gt": "now-30s" , "lt": "now"
}
}
}
}
}
}
}
}
},
"condition": {
"compare": {
"ctx.payload.hits.total": {
"gt": 0
}
}
},
"transform": {
"search": {
"request": {
"indices": ["<error-log-collector-{now+8h/d}>"],
"body": {
"size": 1,
"query": {
"bool": {
"must": [
{
"term": {"level": "ERROR"}
}
],
"filter": {
"range": {
"currentDateTime": {
"gt": "now-30s" , "lt": "now"
}
}
}
}
},
"sort": [
{
"currentDateTime": {
"order": "desc"
}
}
]
}
}
}
},
"actions": {
"test_error": {
"webhook" : {
"method" : "POST",
"url" : "http://192.168.137.190:8001/accurateWatch",
"body" : "{\"title\": \"异常错误告警\", \"applicationName\": \"{{#ctx.payload.hits.hits}}{{_source.applicationName}}{{/ctx.payload.hits.hits}}\", \"level\":\"告警级别P1\", \"body\": \"{{#ctx.payload.hits.hits}}{{_source.messageInfo}}{{/ctx.payload.hits.hits}}\", \"executionTime\": \"{{#ctx.payload.hits.hits}}{{_source.currentDateTime}}{{/ctx.payload.hits.hits}}\"}"
}
}
}
}
## Notes:
# "actions" defines what runs once the condition above is met. The "webhook" action posts the body shown to the given URL whenever the watcher finds "ERROR"-level logs within the last 30 seconds. Implement this endpoint in one of your application services to receive the watcher callback; inside it you can notify the responsible developers by email, SMS, WeChat, WeCom, or DingTalk.
## Common watcher APIs:
# inspect the watcher
GET _xpack/watcher/watch/error_log_collector_watcher
# delete the watcher
DELETE _xpack/watcher/watch/error_log_collector_watcher
11-2) Verify that the watcher fires when an "ERROR"-level log appears (the watcher only inspects the last 30 seconds, so trigger a fresh error or you may see nothing).
Trigger an "ERROR"-level log in an application service wired into this ELK setup, then in the Kibana console open Management -> Watcher, click into the watcher, and check that its State shows the alarm icon with "Firing" (possibly after a short delay). That confirms the watcher alerting works.
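One way to produce such a test event is to append a single ERROR line, in the bracketed layout from the Filebeat/Logstash sections, to the collected log file. The sketch below writes to a temporary file so it can run anywhere; on the real host, set logfile to /usr/local/logs/error-collector.log (all bracketed field values are illustrative):

```shell
# Append one ERROR-level line in the bracketed layout Filebeat/Logstash expect.
# Point logfile at /usr/local/logs/error-collector.log on the real host.
logfile=$(mktemp)
now=$(date +%Y-%m-%dT%H:%M:%S.000)
echo "[$now] [ERROR] [thread-1] [com.example.OrderService] [host01] [192.168.137.1] [collector] [OrderService.java:42] [watcher smoke test] ## ''" >> "$logfile"
tail -n 1 "$logfile"
```

Within a few seconds, the line should travel Filebeat -> Kafka -> Logstash -> Elasticsearch and put the watcher into the Firing state.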