Installing the ELK Log Analysis Stack on CentOS 7

1. Elasticsearch Installation

This deployment goes under the /data/elk directory.

cd /data/elk
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.4.1-linux-x86_64.tar.gz

Extract it into the current directory

tar -zxvf elasticsearch-8.4.1-linux-x86_64.tar.gz 

Edit the elasticsearch.yml file

cd /data/elk/elasticsearch-8.4.1/config/
vi elasticsearch.yml

Add the following settings

network.host: 0.0.0.0 #allow remote access
node.name: node-base #node name; this and the next setting must both be present, otherwise the node may start but requests time out or fail with master_not_discovered_exception
cluster.initial_master_nodes: ["node-base"] #names of the initial master-eligible nodes
path.data: /data/elk/elasticsearch/data #note: path.conf is no longer a valid setting; the config directory is chosen with the ES_PATH_CONF environment variable
http.port: 9200 #HTTP port
http.cors.allow-origin: "*" #the settings below enable CORS
http.cors.enabled: true
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
http.cors.allow-credentials: true
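As a quick sanity check, the flat `key: value` lines above can be inspected with a few lines of Python. This is only a sketch for simple one-line entries; a real YAML parser should be used for anything more complex.

```python
# Minimal sanity check for flat "key: value" settings lines like those above.
# A sketch for one-line entries only, not a real YAML parser.
def parse_flat_settings(text):
    settings = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop inline comments
        if not line or ":" not in line:
            continue
        key, value = line.split(":", 1)
        settings[key.strip()] = value.strip()
    return settings

sample = """
network.host: 0.0.0.0 #allow remote access
http.port: 9200 #HTTP port
http.cors.enabled: true
"""
conf = parse_flat_settings(sample)
print(conf["http.port"])  # -> 9200
```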

Elasticsearch refuses to start under the root account, so a dedicated account is needed
Create the new user and grant ownership

groupadd elsearch #add the group
useradd elsearch -g elsearch #add the user and assign the group, then set a password with passwd (useradd -p expects an already-encrypted password)
cd /data/elk
chown -R elsearch:elsearch elasticsearch-8.4.1 #give the new user ownership of the install directory
su elsearch
cd /data/elk/elasticsearch-8.4.1/bin/
./elasticsearch > /data/elk/el.log &

The following errors appeared during startup

[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[3]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

Fix for [1]: raise the current user's soft and hard file-descriptor limits
Switch to the root user

su
vi /etc/security/limits.conf

Add the following lines

* soft nofile 65535
* hard nofile 65535
#an @-prefixed account name can also be used instead of *
@elsearch soft nofile 65535
@elsearch hard nofile 65535
#then run
ulimit -n 65535
ulimit -n 
#prints 65535
ulimit -H -n 65535
ulimit -H -n  
#prints 65535
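The limits that actually apply to a running process can also be read from inside it. A small Python sketch using the standard resource module (Unix only; 65535 is the minimum from the error above):

```python
# Read the file-descriptor limits for the current process: the same
# numbers that `ulimit -n` / `ulimit -H -n` report. Unix only.
import resource

REQUIRED_NOFILE = 65535  # minimum demanded by the Elasticsearch bootstrap check

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")
if soft < REQUIRED_NOFILE:
    print(f"raise the soft nofile limit to at least {REQUIRED_NOFILE}")
```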

Fix for [2]: raise the memory-map limit available to the elasticsearch user
Switch to the root user

su
#temporary change
sysctl -w vm.max_map_count=262144
sysctl -a|grep vm.max_map_count #verify the change
#permanent change
vi /etc/sysctl.conf
#add the following line
vm.max_map_count=262144
#save, then apply with
sysctl -p
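A value from `sysctl` output can be checked against the 262144 minimum with a short Python sketch. The `parse_sysctl_line` helper is hypothetical and handles only the `key = value` format that `sysctl -a` prints.

```python
# Check a vm.max_map_count value against the 262144 minimum from the
# startup error above. A sketch for "key = value" lines, not a sysctl client.
REQUIRED_MAX_MAP_COUNT = 262144

def parse_sysctl_line(line):
    # "vm.max_map_count = 65530" -> 65530
    _key, _sep, value = line.partition("=")
    return int(value.strip())

current = parse_sysctl_line("vm.max_map_count = 65530")
print(current < REQUIRED_MAX_MAP_COUNT)  # -> True, so the limit must be raised
```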

Fix for [3]: the node name and initial master nodes were not configured
Edit the config file

cd /data/elk/elasticsearch-8.4.1/config/
vi elasticsearch.yml
#set the following
node.name: node-1
cluster.initial_master_nodes: ["node-1"]
#then restart
./elasticsearch &

The following output indicates a successful start

✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.

ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  phtF6nPnroGhPCC_n9s9

ℹ️  HTTP CA certificate SHA-256 fingerprint:
  7c6ff0a7ab717859c2d550fc3c06d0128b67791750e08ca70076cc75bd1dafc4

ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjQuMSIsImFkciI6WyIxOTIuMTY4LjEwMS4xMDI6OTIwMCJdLCJmZ3IiOiI3YzZmZjBhN2FiNzE3ODU5YzJkNTUwZmMzYzA2ZDAxMjhiNjc3OTE3NTBlMDhjYTcwMDc2Y2M3NWJkMWRhZmM0Iiwia2V5Ijoid01ySS1ZSUJzaEdRSzlZalE2a1g6YmxFMW1TeXpRZFNxcUZHYWUzcHJldyJ9

ℹ️ Configure other nodes to join this cluster:
• Copy the following enrollment token and start new Elasticsearch nodes with `bin/elasticsearch --enrollment-token <token>` (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjQuMSIsImFkciI6WyIxOTIuMTY4LjEwMS4xMDI6OTIwMCJdLCJmZ3IiOiI3YzZmZjBhN2FiNzE3ODU5YzJkNTUwZmMzYzA2ZDAxMjhiNjc3OTE3NTBlMDhjYTcwMDc2Y2M3NWJkMWRhZmM0Iiwia2V5Ijoid3NySS1ZSUJzaEdRSzlZalE2a2k6Z2FOTzlEZVdSVnE1SGYtSWxLdFRyQSJ9

  If you're running in Docker, copy the enrollment token and run:

To access the cluster over plain HTTP (for testing only), edit elasticsearch.yml again and relax the auto-generated security settings

ingest.geoip.downloader.enabled: false #add this line
xpack.security.enabled: false #was true

xpack.security.http.ssl:
  enabled: false #was true
  keystore.path: certs/http.p12

Restart, then open http://ip:9200
If it is unreachable, check whether port 9200 is open in the firewall

firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --reload #--permanent rules take effect only after a reload

Autostart configuration
Go to the /etc/init.d directory

cd /etc/init.d      [enter the directory]
vi elsearch    [create the ES system startup service file]

Add the following content; ES_HOME is the ES install directory
and su elsearch switches to the run user

#!/bin/bash
#chkconfig: 345 63 37
#description: elasticsearch
#processname: elasticsearch-8.4.1

export ES_HOME=/data/elk/elasticsearch-8.4.1

case $1 in
        start)
                su elsearch<<!
                cd $ES_HOME
                ./bin/elasticsearch -d -p pid
                exit
!
                echo "elasticsearch is started"
                ;;
        stop)
                pid=`cat $ES_HOME/pid`
                kill -9 $pid
                echo "elasticsearch is stopped"
                ;;
        restart)
                pid=`cat $ES_HOME/pid`
                kill -9 $pid
                echo "elasticsearch is stopped"
                sleep 1
                su elsearch<<!
                cd $ES_HOME
                ./bin/elasticsearch -d -p pid
                exit
!
                echo "elasticsearch is started"
        ;;
    *)
        echo "start|stop|restart"
        ;;
esac
exit 0

Make it executable

chmod 755 elsearch 

Add or remove the service and set its startup mode

chkconfig --add elsearch    [add the system service]
chkconfig --del elsearch    [remove the system service]

Start and stop the service

service elsearch start     [start]
service elsearch stop      [stop]
service elsearch restart     [restart]

Enable or disable start on boot

chkconfig elsearch on      [enable]
chkconfig elsearch off       [disable]

2. Kibana Installation

Switch to root and download the package (note: the Kibana version must match the Elasticsearch major version; the 7.5.1 package below pairs with a 7.x cluster)

su
cd /data/elk
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.5.1-linux-x86_64.tar.gz

Extract it into the current directory

tar -zxvf kibana-7.5.1-linux-x86_64.tar.gz 

Edit the kibana.yml file

cd /data/elk/kibana-7.5.1-linux-x86_64/config/
vi kibana.yml

Add the following settings

server.port: 5601
server.host: "0.0.0.0"
server.name: "kibana"
elasticsearch.url: "http://localhost:9200"
kibana.index: ".kibana"

Save, then start Kibana

cd /data/elk/kibana-7.5.1-linux-x86_64/bin
#Kibana refuses to start directly as root, so append --allow-root
./kibana --allow-root > /data/elk/kibana.log &

Startup fails with the error

FATAL  Error: [elasticsearch.url]: definition for this key is missing

Edit the kibana.yml file

elasticsearch.url: "http://localhost:9200"
#replace it with
elasticsearch.hosts: ["http://localhost:9200"]

Open http://ip:5601
If it is unreachable, check whether port 5601 is open in the firewall

3. Logstash Installation

Switch to root and download the package

su
cd /data/elk
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.5.1.tar.gz

Extract it into the current directory

tar -zxvf logstash-7.5.1.tar.gz 

Create a logstash.conf file

cd /data/elk/logstash-7.5.1/bin/
vi logstash.conf

Add the following content

input {
    tcp {
        port => 5044
        codec => json_lines
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
    }
}

Save, then start Logstash

cd /data/elk/logstash-7.5.1/bin/
./logstash -f logstash.conf &

Check the log; if there are no errors, the start succeeded.
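The tcp input with the json_lines codec expects newline-delimited JSON, so the pipeline can be exercised with a short test client. A Python sketch (host and port are the values from logstash.conf above; `send_event` assumes Logstash is already listening):

```python
# Build and send newline-delimited JSON events to the tcp/json_lines
# input defined in logstash.conf. A sketch of a test client.
import json
import socket

def to_json_line(event):
    # json_lines codec: one JSON document per line, terminated by "\n"
    return json.dumps(event) + "\n"

def send_event(event, host="localhost", port=5044):
    # Raises ConnectionRefusedError if Logstash is not listening yet.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(to_json_line(event).encode("utf-8"))

print(to_json_line({"severity": "INFO", "service": "wbx-gateway"}), end="")
```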

4. Project Configuration

Add the logstash-logback-encoder dependency

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.3</version>
</dependency>

Configure the logback-spring.xml file
The -spring suffix is what enables the Spring Boot extension tags such as springProfile
Multi-environment configuration

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<springProfile name="exp">
	<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
	    <destination>192.168.5.19:5044</destination>
	    <queueSize>1048576</queueSize>
	    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
	        <providers>
	            <timestamp>
	                <timeZone>UTC</timeZone>
	            </timestamp>
	            <pattern>
	                <pattern>
	                        {
	                        "severity":"%level",
	                        "service": "wbx-gateway",
	                        "pid": "${PID:-}",
	                        "thread": "%thread",
	                        "class": "%logger{40}",
	                        "rest": "%message->%ex{full}"
	                        }
	                </pattern>
	            </pattern>
	        </providers>
	    </encoder>
	</appender>
</springProfile>
<springProfile name="test">
	<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
	    <destination>192.168.5.19:5044</destination>
	    <queueSize>1048576</queueSize>
	    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
	        <providers>
	            <timestamp>
	                <timeZone>UTC</timeZone>
	            </timestamp>
	            <pattern>
	                <pattern>
	                        {
	                        "severity":"%level",
	                        "service": "wbx-gateway",
	                        "pid": "${PID:-}",
	                        "thread": "%thread",
	                        "class": "%logger{40}",
	                        "rest": "%message->%ex{full}"
	                        }
	                </pattern>
	            </pattern>
	        </providers>
	    </encoder>
	</appender>
</springProfile>
 <property name="pattern" value="%d{yyyy-MM-dd:HH:mm:ss.SSS} [%thread] %-5level  %msg%n"/>
 <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- the string System.out (default) or System.err -->
        <target>System.out</target>
        <!-- formats the logging events -->
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${pattern}</pattern>
        </encoder>
    </appender>

<springProfile name="dev">
    <root level="info">
        <appender-ref ref="STDOUT" /> 
    </root>
</springProfile>
<springProfile name="exp">
    <root level="info">
		 <appender-ref ref="LOGSTASH" /> 
	</root>
</springProfile>
<springProfile name="test">
    <root level="info">
		 <appender-ref ref="LOGSTASH" /> 
	</root>
</springProfile>
</configuration>