Before setting up ELK, some preparation is needed.
As the official documentation explains (https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html), Elasticsearch by default uses an mmapfs directory to store its indices. The operating system's default mmap count limit is too low and can cause out-of-memory exceptions, so we raise it with the setting below.
Without it, Elasticsearch fails to start with an error like: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]. (A different startup error, max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536], concerns the open-file limit and is fixed by raising the nofile ulimit, not this setting.)
vi /etc/sysctl.conf
Add this line:
vm.max_map_count=655360
Then apply it:
sysctl -p
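To confirm the new limit is active, you can read it back from /proc. This is an illustrative Python sketch (Linux only); 262144 is the minimum Elasticsearch requires:

```python
# Read the live vm.max_map_count value from the Linux proc filesystem.
def read_max_map_count(path="/proc/sys/vm/max_map_count"):
    with open(path) as f:
        return int(f.read().strip())

if __name__ == "__main__":
    current = read_max_map_count()
    # Elasticsearch requires at least 262144 mmap areas.
    status = "OK" if current >= 262144 else "too low"
    print(f"vm.max_map_count = {current} ({status})")
```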
Directory structure
Create the directories:
mkdir -p /app/elk/elasticsearch/data/ /app/elk/kibana/ /app/elk/logstash/pipeline/
Grant permissions on the data directory, otherwise Elasticsearch cannot start:
chmod 777 /app/elk/elasticsearch/data
Create the files inside each directory:
touch /app/elk/docker-compose.yml /app/elk/elasticsearch/elasticsearch.yml /app/elk/kibana/kibana.yml /app/elk/logstash/pipeline/logstash.conf /app/elk/logstash/logstash.yml
Copy the following contents into the corresponding files; don't skip any. (The paths are absolute so they match the volume mounts in docker-compose.yml.)
elasticsearch.yml
---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0
## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
kibana.yml
## Default Kibana configuration from Kibana base image.
### https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.js
server.name: kibana
server.host: 0.0.0.0
## Display the Kibana UI in Chinese
i18n.locale: "zh-CN"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
#
### X-Pack security credentials
##
elasticsearch.username: elastic
elasticsearch.password: changeme
logstash.conf
Note: I configured multiple log sources here; different systems send to different TCP ports. If you only have one source, delete one of the tcp blocks (and the matching output branch).
input {
  tcp {
    type => "springboot1"
    mode => "server"
    host => "0.0.0.0"
    port => 5000
    codec => json_lines
  }
  tcp {
    type => "springboot2"
    mode => "server"
    host => "0.0.0.0"
    port => 5010
    codec => json_lines
  }
}
output {
  if [type] == "springboot1" {
    elasticsearch {
      hosts => "elasticsearch:9200"
      index => "springboot1-logstash-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "changeme"
    }
  }
  else if [type] == "springboot2" {
    elasticsearch {
      hosts => "elasticsearch:9200"
      index => "springboot2-logstash-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "changeme"
    }
  }
}
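Both inputs use the json_lines codec, so each event must arrive as a single-line JSON object terminated by a newline. A minimal Python sketch of a client that frames and sends an event this way (the host/port match the tcp inputs above; send_event is a hypothetical helper for illustration, not part of Logstash):

```python
import json
import socket

def encode_json_lines(event):
    # json_lines framing: one JSON object per line, newline-terminated.
    return (json.dumps(event) + "\n").encode("utf-8")

def send_event(event, host="192.168.10.128", port=5000):
    # Hypothetical helper: opens a TCP connection to the Logstash tcp input.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(encode_json_lines(event))

# Framing can be checked without any network:
payload = encode_json_lines({"logLevel": "INFO", "message": "hello"})
assert payload.endswith(b"\n")
```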
logstash.yml
## Default Logstash configuration from Logstash base image.
### https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
# If Logstash cannot resolve the elasticsearch service name (e.g. it is not on the docker-compose network), use your host's IP here instead
## X-Pack security credentials
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
docker-compose.yml
version: "3"
services:
  ### Elasticsearch
  elasticsearch:
    image: elasticsearch:7.2.0
    container_name: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      discovery.type: single-node
      ## Elasticsearch password
      ELASTIC_PASSWORD: changeme
      # Cap the JVM heap at 1 GB. This matters: without it my Elasticsearch would not start.
      ES_JAVA_OPTS: "-Xmx1g -Xms1g"
    volumes:
      # Note: to map the container's data out to the host, the local
      # /app/elk/elasticsearch/data directory must have 777 permissions.
      - /app/elk/elasticsearch/data/:/usr/share/elasticsearch/data
      - /app/elk/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    #network_mode: host
  ### Logstash
  logstash:
    image: logstash:7.2.0
    container_name: logstash
    ports:
      - "5000:5000/tcp"
      - "5010:5010/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    #network_mode: host
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    volumes:
      ### Map the host directory /app/elk/logstash/pipeline into the container
      - /app/elk/logstash/pipeline:/usr/share/logstash/pipeline
      - /app/elk/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
    depends_on:
      - elasticsearch
  ### Kibana (its Node.js heap defaults to about 1.4 GB on 64-bit systems, 0.7 GB on 32-bit)
  kibana:
    image: kibana:7.2.0
    container_name: kibana
    ports:
      - "5601:5601"
    volumes:
      ### Map the host file /app/elk/kibana/kibana.yml into the container
      - /app/elk/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    #network_mode: host
    depends_on:
      - elasticsearch
Install and start the images via docker-compose.
If pulls are slow, configure a registry mirror first (for example, the container registry mirror service in the Alibaba Cloud console).
cd /app/elk/
docker-compose up -d
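Once the containers are up, you can verify that Elasticsearch answers with the credentials from this setup. A small Python sketch using only the standard library (the host, port, and elastic/changeme credentials are the ones configured above):

```python
import base64
import json
import urllib.request

def basic_auth_header(user, password):
    # HTTP Basic auth: base64 of "user:password".
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def es_cluster_health(host="192.168.10.128", port=9200,
                      user="elastic", password="changeme"):
    # Query the standard /_cluster/health endpoint with basic auth.
    req = urllib.request.Request(
        f"http://{host}:{port}/_cluster/health",
        headers={"Authorization": basic_auth_header(user, password)},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example (requires the stack to be running):
# print(es_cluster_health()["status"])  # expect "green" or "yellow"
```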
Spring Boot logback.xml configuration
1. Add the corresponding Maven dependency:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.4</version>
</dependency>
2. Configure logback.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Logstash service address -->
        <destination>192.168.10.128:5000</destination>
        <!-- Log output encoding -->
        <encoder charset="UTF-8"
                 class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "logLevel": "%level",
                        "serviceName": "system-user",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "message": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>
For multi-source log collection, only the port in the destination element needs to change per system. The logstash.conf configured earlier serves two systems: one system's logback.xml sends to port 5000, the other to port 5010.
Start the Spring Boot application, and the index appears in Kibana.
Kibana address: http://192.168.10.128:5601/
Username / password: elastic / changeme
After printing some logs from the Java side, they show up in Kibana as well.
Tested and working: after finishing this article I tore the environment down and reinstalled it by following these steps, and it worked again.