Setting Up a Distributed ELK Log Collection System
Getting Started
The ELK version used throughout is 7.9.2.
This article shows how to format logs in Spring Boot and ship them to Logstash, which then forwards them to Elasticsearch. It is a fairly simple ELK setup that works well when log volume is small. For larger volumes there is a more robust option: have Filebeat tail the log files and publish them to a Kafka queue, using Kafka as a buffer so that bursts of logs do not hit Logstash and Elasticsearch directly, and let Logstash consume the log messages from the Kafka queue. That setup is covered in several of my other articles, so take a look if you are interested~
Downloads
Download links for Elasticsearch, Logstash, and Kibana:
Link: https://pan.baidu.com/s/174gYmmSaIvkaApMoTLc0yA
Extraction code: dtfp
Installing Elasticsearch
1. Extract the archive
tar -zxvf linux_elasticsearch-7.9.2-linux-x86_64.tar.gz
2. Enter the extracted directory
cd elasticsearch-7.9.2
3. Create a data directory
mkdir data
4. Edit config/elasticsearch.yml and set the following properties (remove the leading # to uncomment each line first)
cluster.name: my-elasticsearch
node.name: node-1
path.data: /usr/local/elk_7.9.2/elasticsearch/elasticsearch-7.9.2/data
path.logs: /usr/local/elk_7.9.2/elasticsearch/elasticsearch-7.9.2/logs
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
5. Create an es user (Elasticsearch will not run as root)
adduser es
passwd es
6. Change the owner of the Elasticsearch directory to es
chown es /usr/local/elk_7.9.2/elasticsearch/elasticsearch-7.9.2/ -R
7. Edit /etc/security/limits.conf and append:
es soft nofile 65536
es hard nofile 65536
es soft nproc 4096
es hard nproc 4096
8. Edit /etc/security/limits.d/20-nproc.conf and change * to the user name (es):
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
es soft nproc 4096
root soft nproc unlimited
9. Edit /etc/sysctl.conf and append:
vm.max_map_count = 655360
10. Apply the change:
[root@localhost bin]# sysctl -p
vm.max_map_count = 655360
11. Switch to the es user, enter the bin directory, and start Elasticsearch in the background
su es
./elasticsearch -d
12. Visit IP:9200; a response like the following means the installation succeeded
{
  "name" : "node-1",
  "cluster_name" : "my-elasticsearch",
  "cluster_uuid" : "cyln5kOhRvSXL176RxfmPw",
  "version" : {
    "number" : "7.9.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "d34da0ea4a966c4e49417f2da2f244e3e97b4e6e",
    "build_date" : "2020-09-23T00:45:33.626720Z",
    "build_snapshot" : false,
    "lucene_version" : "8.6.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
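If you prefer checking from a script rather than a browser, the sketch below parses that root-endpoint response and pulls out the fields worth verifying. The `summarize_es_info` helper name is hypothetical, and the live call (using this article's example host) is left commented out so the snippet runs without a cluster:

```python
import json
from urllib.request import urlopen  # for querying a live node

def summarize_es_info(body):
    """Extract the fields worth verifying from Elasticsearch's root response."""
    info = json.loads(body)
    return {
        "node": info["name"],
        "cluster": info["cluster_name"],
        "version": info["version"]["number"],
    }

# Against a live node (adjust the address to your host):
# print(summarize_es_info(urlopen("http://192.168.1.106:9200").read()))

# Against the sample response shown above:
sample = """{"name": "node-1", "cluster_name": "my-elasticsearch",
             "version": {"number": "7.9.2"}}"""
print(summarize_es_info(sample))
```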
Installing Kibana
1. Extract Kibana
tar -zxvf linux_kibana-7.9.2-linux-x86_64.tar.gz
2. Enter the extracted directory
cd kibana-7.9.2-linux-x86_64
3. Configure config/kibana.yml, for example:
server.port: 5601
# ...
server.host: "0.0.0.0"
# ...
server.name: "my-kibana"
# ...
elasticsearch.hosts: ["http://192.168.1.106:9200"]
# enable the Chinese-language UI
i18n.locale: "zh-CN"
4. Start Kibana in the background from the bin directory
nohup ./kibana --allow-root &
5. Visit IP:5601; if the Kibana UI loads, the installation succeeded
Installing Logstash
1. Extract the archive as in the previous sections
2. Create a configuration file under config, e.g. xxx.conf, with the following content
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 5000
    codec => json {
      charset => "UTF-8"
    }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.142:9200"] # multiple hosts may be listed, comma-separated
    user => "es"
    password => "es123456"
    index => "xxx-log" # name of the target Elasticsearch index
  }
}
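Before wiring up Spring Boot, the tcp input can be smoke-tested by hand: with `codec => json` it expects newline-delimited JSON over TCP, which is the same framing LogstashTcpSocketAppender uses. A minimal sketch (the `send_log` helper is hypothetical; adjust host and port to your setup):

```python
import json
import socket

def send_log(host, port, record):
    """Send one JSON log event, newline-terminated, to the Logstash tcp input."""
    line = json.dumps(record, ensure_ascii=False) + "\n"
    with socket.create_connection((host, port)) as sock:
        sock.sendall(line.encode("utf-8"))

# Example (assumes Logstash is listening on port 5000):
# send_log("192.168.1.142", 5000, {"level": "INFO", "message": "hello elk"})
```

If the event reaches Elasticsearch, it should show up under the xxx-log index in Kibana.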
3. Start Logstash in the background from the bin directory, pointing it at the config file
nohup ./logstash -f /usr/local/elk_7.9.2/logstash/logstash-7.9.2/config/xxx.conf &
Configuring Logstash in Spring Boot
1. Add the logstash-logback-encoder dependency
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
2. Add the following to logback.xml
<!-- ship logs to logstash -->
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>192.168.1.xx:5000</destination>
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
</appender>
<root level="info">
    <appender-ref ref="logstash" />
</root>
3. The log entries should now appear in Kibana
4. If you do not see your index, open Stack Management in the left-hand menu, go to Index Patterns, and create an index pattern.
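Those UI steps can also be automated through Kibana's Saved Objects API (available in Kibana 7.x). The sketch below only builds the request; the `index_pattern_request` name is hypothetical, the `@timestamp` time field is an assumption based on what Logstash adds to each event, and the actual call is commented out since it needs a running Kibana:

```python
import json
from urllib.request import Request, urlopen

def index_pattern_request(kibana, title):
    """Build the Saved Objects API call that creates an index pattern,
    equivalent to the Stack Management UI steps above."""
    body = json.dumps({"attributes": {"title": title, "timeFieldName": "@timestamp"}})
    return Request(
        kibana + "/api/saved_objects/index-pattern",
        data=body.encode("utf-8"),
        # Kibana rejects API writes without the kbn-xsrf header
        headers={"kbn-xsrf": "true", "Content-Type": "application/json"},
        method="POST",
    )

# Against a live Kibana (adjust the host):
# urlopen(index_pattern_request("http://192.168.1.106:5601", "xxx-log*"))
```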