Setting up an ELK log stack on CentOS 7 (Elasticsearch 6.7)

Environment
OS: CentOS 7
1. Preparation
1.1 Node plan

IP            cluster.name  node.name  Software
192.168.1.11  es_log        es_1       elasticsearch, logstash, kibana, httpd
192.168.1.12  es_log        es_2       elasticsearch
192.168.1.13  es_log        es_3       elasticsearch, logstash

1.2 Install a Java runtime (JRE)

wget -c -P /tmp https://github.com/AdoptOpenJDK/openjdk8-binaries/releases/download/jdk8u202-b08/OpenJDK8U-jre_x64_linux_hotspot_8u202b08.tar.gz
 
mkdir -p /usr/java;\
tar xf /tmp/OpenJDK8U-jre_x64_linux_hotspot_8u202b08.tar.gz -C /usr/java;\
rm -rf /usr/java/default;\
ln -s /usr/java/jdk8u202-b08-jre /usr/java/default;\
tee -a /etc/profile << 'EOF'
export JAVA_HOME=/usr/java/default
export PATH=$JAVA_HOME/bin:$PATH
EOF
source /etc/profile
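The two exported lines can be sanity-checked in the current shell before logging out and back in (a quick sketch; java itself need not be installed yet for the PATH check):

```shell
# Reproduce the profile additions and confirm PATH now prefers the new JRE.
export JAVA_HOME=/usr/java/default
export PATH=$JAVA_HOME/bin:$PATH
case ":$PATH:" in
  *":/usr/java/default/bin:"*) echo "PATH updated" ;;
  *) echo "PATH missing JRE dir" ;;
esac
```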

1.3 Download Elasticsearch

Note: the GitHub archive (https://github.com/elastic/elasticsearch/archive/v6.7.1.tar.gz) is source code only. The prebuilt tarball comes from the official download site:

wget -c -P /tmp https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.1.tar.gz

2. Install the software
mv elasticsearch-6.7.1.tar.gz /usr/local/
cd /usr/local/
tar -xf elasticsearch-6.7.1.tar.gz
mv elasticsearch-6.7.1 elasticsearch

3. Configuration
3.1 Configure limits

Append the following to /etc/security/limits.conf:

elk soft nofile 65536
elk hard nofile 131072
elk soft nproc 2048
elk hard nproc 4096

Edit /etc/sysctl.conf:
# vim /etc/sysctl.conf

vm.max_map_count=655360

Then apply it:

# sysctl -p

Disable the firewall and SELinux:

# systemctl stop firewalld && systemctl disable firewalld

# sed -i 's/=enforcing/=disabled/g' /etc/selinux/config && setenforce 0


3.2 Enable memory locking
LimitMEMLOCK locks the JVM heap in RAM so it cannot be swapped out.

3.3 Maximum number of processes
LimitNPROC caps the number of processes; this system supports at most 32768.
Check the system-wide maximum: cat /proc/sys/kernel/pid_max

3.4 Maximum number of open files
LimitNOFILE caps open file descriptors; the system default maximum here is 791020.
Check the system-wide maximum: cat /proc/sys/fs/file-max
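The kernel-wide ceilings referenced in 3.3 and 3.4 can be read back on any Linux host (the values vary per machine, so only their presence matters here):

```shell
# Read the kernel-wide ceilings that the per-user limits above must stay under.
max_procs=$(cat /proc/sys/kernel/pid_max)
max_files=$(cat /proc/sys/fs/file-max)
echo "pid_max=${max_procs} file-max=${max_files}"
```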

mkdir -p /etc/systemd/system/elasticsearch.service.d;\
cat > /etc/systemd/system/elasticsearch.service.d/override.conf << 'EOF'
[Service]
Environment=JAVA_HOME=/usr/java/default 
LimitMEMLOCK=infinity
LimitNOFILE=204800
LimitNPROC=4096
EOF
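The drop-in above only takes effect if an elasticsearch.service unit exists, and a tarball install ships none. A minimal unit sketch (assuming an `elasticsearch` user owns the install, with the paths used in this guide) could be placed at /etc/systemd/system/elasticsearch.service:

```ini
[Unit]
Description=Elasticsearch
After=network.target

[Service]
User=elasticsearch
Group=elasticsearch
Environment=JAVA_HOME=/usr/java/default
ExecStart=/usr/local/elasticsearch/bin/elasticsearch
LimitMEMLOCK=infinity
LimitNOFILE=204800
LimitNPROC=4096
Restart=on-failure

[Install]
WantedBy=multi-user.target
```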

3.5 Configure the JVM (optional)
The default heap is 1 GB.

Do not exceed 50% of available RAM: Lucene makes heavy use of the filesystem cache, which is managed by the kernel, and performance suffers if not enough memory is left for it. Moreover, every byte dedicated to the heap is a byte unavailable to other consumers such as doc values.

Do not exceed 32 GB: below roughly 32 GB the JVM can use compressed object pointers, which substantially reduces memory use (4 bytes per pointer instead of 8).

sed -i '/-Xms1g/c\-Xms3g' /usr/local/elasticsearch/config/jvm.options;\
sed -i '/-Xmx1g/c\-Xmx3g' /usr/local/elasticsearch/config/jvm.options
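The two rules above (half of RAM, never more than ~31 GB so compressed pointers stay enabled) can be sketched as a small helper; the 3 GB used in this guide assumes a host with roughly 8 GB of RAM:

```shell
# Hypothetical heap-sizing helper: half of physical RAM, capped at 31 GB.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
heap_gb=$(( total_kb / 1024 / 1024 / 2 ))
[ "$heap_gb" -gt 31 ] && heap_gb=31
[ "$heap_gb" -lt 1 ] && heap_gb=1
# Print the flags to drop into jvm.options.
echo "-Xms${heap_gb}g -Xmx${heap_gb}g"
```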

3.6 Configure elasticsearch.yml
cluster.name
Cluster name. Defaults to elasticsearch; use something explicit, e.g. es_log.

node.name
Node name. Defaults to a randomly chosen name; use something explicit, e.g. es_1, es_2, es_3.

network.host: 0.0.0.0
Bind address.

path.data
Data directory.

path.logs
Log directory.

discovery.zen.ping.unicast.hosts
Seed hosts for node discovery.

3.7 Run on every node

mkdir -p /data/elasticsearch/data /data/elasticsearch/logs;\
chown -R elasticsearch. /data/elasticsearch;\
sed -i '/cluster.name/c\cluster.name: es_log' /usr/local/elasticsearch/config/elasticsearch.yml;\
sed -i '/network.host/c\network.host: 0.0.0.0' /usr/local/elasticsearch/config/elasticsearch.yml;\
sed -i '/path.data/c\path.data: /data/elasticsearch/data' /usr/local/elasticsearch/config/elasticsearch.yml;\
sed -i '/path.logs/c\path.logs: /data/elasticsearch/logs' /usr/local/elasticsearch/config/elasticsearch.yml;\
sed -i '/discovery.zen.ping.unicast.hosts/c\discovery.zen.ping.unicast.hosts: ["192.168.1.11","192.168.1.12","192.168.1.13"]' /usr/local/elasticsearch/config/elasticsearch.yml
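After those edits, the relevant lines of elasticsearch.yml on node 1 should read roughly as follows (node.name is set per node in the next step; bootstrap.memory_lock is not touched by the sed commands above and is shown only as an optional addition that pairs with the memory-lock limits from section 3.2):

```yaml
cluster.name: es_log
node.name: es_1
network.host: 0.0.0.0
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
discovery.zen.ping.unicast.hosts: ["192.168.1.11","192.168.1.12","192.168.1.13"]
bootstrap.memory_lock: true   # optional; pairs with LimitMEMLOCK=infinity
```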

3.8 Run the matching command on each node
# Node 1
sed -i '/node.name/c\node.name: es_1' /usr/local/elasticsearch/config/elasticsearch.yml

# Node 2
sed -i '/node.name/c\node.name: es_2' /usr/local/elasticsearch/config/elasticsearch.yml

# Node 3
sed -i '/node.name/c\node.name: es_3' /usr/local/elasticsearch/config/elasticsearch.yml

3.9 Start the service
systemctl daemon-reload;\
systemctl enable elasticsearch;\
systemctl start elasticsearch;\
systemctl status elasticsearch

Sometimes the service fails to start; the logs under /var/log/elasticsearch/ then show:
Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
In that case add these two lines to /etc/security/limits.conf so the elasticsearch user may lock unlimited memory:
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

3.10 Configure the firewall (skip this if the firewall was disabled above)
firewall-cmd --add-port=9200/tcp --permanent;\
firewall-cmd --add-port=9300/tcp --permanent;\
firewall-cmd --reload

4. Install the Chinese analysis plugin IK (optional)
# List installed plugins
/usr/local/elasticsearch/bin/elasticsearch-plugin list

# Install IK (note: the plugin version must exactly match the Elasticsearch
# version, so for Elasticsearch 6.7.1 pick the matching release rather than
# the v6.7.0 archive shown here)
/usr/local/elasticsearch/bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.0/elasticsearch-analysis-ik-6.7.0.zip
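Once the node is restarted with the plugin loaded, the analyzer can be smoke-tested through the `_analyze` API. A sketch (the endpoint is the node from this guide; the curl call is left commented out since it needs a live cluster, and `ik_smart` is one of the analyzers the plugin registers):

```shell
# Build the _analyze request body and check that it parses as JSON locally.
body='{"analyzer": "ik_smart", "text": "中华人民共和国"}'
echo "$body" | python3 -c 'import json,sys; json.load(sys.stdin)' && echo "body ok"
# curl -H 'Content-Type: application/json' \
#   -X POST 'http://192.168.1.11:9200/_analyze?pretty' -d "$body"
```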

5. Queries
# Node info
curl -X GET 'http://localhost:9200/_nodes'

# Open file descriptor limits
curl -X GET 'http://localhost:9200/_nodes/stats/process?filter_path=**.max_file_descriptors'

# Cluster health
curl -X GET 'http://localhost:9200/_cat/health?v'

# Indices in the cluster
curl -X GET 'http://localhost:9200/_cat/indices?v'

# Disk allocation
curl -X GET 'http://localhost:9200/_cat/allocation?v'

# Cluster nodes
curl -X GET 'http://localhost:9200/_cat/nodes?v'

# Other _cat endpoints
curl -X GET 'http://localhost:9200/_cat'

6. Access test
Opening http://192.168.1.11:9200 shows that node's information.
We can also interact with the cluster in JSON (Elasticsearch 6 requires the Content-Type header on requests with a body):

curl -i -H 'Content-Type: application/json' -XGET 'http://192.168.1.11:9200/_count?pretty' -d '{"query": {"match_all": {}}}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95

{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}

# Test succeeded

The interactions above are not very friendly; a web UI such as the head plugin gives a much nicer view.

Note: the `bin/plugin install mobz/elasticsearch-head` command and the `/_plugin/head/` URL found in older guides only work on Elasticsearch 1.x/2.x. Site plugins were removed in 5.x, so on 6.7 head must run as a standalone app: clone https://github.com/mobz/elasticsearch-head, run `npm install && npm run start`, and it serves on port 9100, connecting to http://192.168.1.11:9200.

The kopf plugin (lmenezes/elasticsearch-kopf) is in the same situation: it targets Elasticsearch 1.x/2.x, and its successor for modern clusters is the standalone cerebro (https://github.com/lmenezes/cerebro).

7. Install Logstash

7.1 Download Logstash
Note: the GitHub archive (https://github.com/elastic/logstash/archive/v6.7.1.tar.gz) is source only; the prebuilt tarball is at:
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.7.1.tar.gz
7.2 Unpack:
tar -xvf logstash-6.7.1.tar.gz
mv logstash-6.7.1 /usr/local/logstash
7.3 Edit the config: cd /usr/local/logstash/config
vi logstash.yml
http.host: "192.168.1.11"  # service bind address
7.4 Test:
/usr/local/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed

# Whatever we type is echoed straight back as output

Press Ctrl+C to exit, then try a more verbose output format (# the rubydebug codec prints each event in full detail):

/usr/local/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'


/usr/local/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.1.11:9200"] } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
abc123
test123
123456

Then, in the head UI, we can see the lines just entered along with their index.

Collecting system logs with Logstash

ln -s /usr/local/logstash/bin/logstash /usr/bin/

vim file.conf

input {
      file {
          path => "/var/log/messages"
          type => "system"
          start_position => "beginning"
      }
}

output {
     elasticsearch {
          hosts => ["192.168.1.11:9200"]
          index => "system-%{+YYYY.MM.dd}"
     }
}
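The `%{+YYYY.MM.dd}` sprintf pattern in the index name produces one index per day. The stamp it generates matches plain `date` output formatted the same way (with the caveat that Logstash renders it from the event's UTC @timestamp, so it can differ from local time near midnight):

```shell
# Preview today's index name as the %{+YYYY.MM.dd} pattern would render it.
stamp=$(date +%Y.%m.%d)
echo "system-${stamp}"
```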

Next, let's collect logs from multiple services; edit file.conf:

input {
    file {
       path => "/var/log/messages"
       type => "system"
       start_position => "beginning"
    }
    file {
       path => "/var/log/httpd/access_log"
       type => "httpd"
       start_position => "beginning"
    }
}

output {
   if [type] == "system" {
     elasticsearch {
         hosts => ["192.168.1.11:9200"]
         index => "system-%{+YYYY.MM.dd}"
     }
   }
   if [type] == "httpd" {
     elasticsearch {
         hosts => ["192.168.1.11:9200"]
         index => "httpd-%{+YYYY.MM.dd}"
     }
   }
}

Open the head UI again and the new indices appear.
The following lets a Java application push its own logs to Elasticsearch through Logstash, listening on port 9601: logstash -f /usr/local/logstash/bin/std_std_es.conf

cat std_std_es.conf 
input {
    tcp {
        port => 9601
        codec => json_lines
    }
}
filter {
    grok {
          match => { "message" => "%{TIMESTAMP_ISO8601:logdate} \[%{NUMBER:threadId}\] %{LOGLEVEL:level}"}
    }
    if [level] in ["DEBUG", "INFO", "WARN"] {
      drop {}
    }
    date {
        match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS"]
      # target => "@timestamp"
        timezone => "Asia/Shanghai"
    }
}

output{
      elasticsearch { 
        hosts => ["192.168.1.11:9200"] 
        index => "springboot-elk-application"
      }
}
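On the Java side, one common way to feed that TCP input is the logstash-logback-encoder library, whose TCP appender emits the json_lines format the input expects. A minimal logback.xml sketch (host and port from this guide; the appender and encoder class names come from that library and are assumptions here, not something the original post specifies):

```xml
<configuration>
  <!-- Hypothetical appender pushing JSON lines to the Logstash tcp input on 9601 -->
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>192.168.1.11:9601</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <!-- ERROR only, matching the filter above that drops DEBUG/INFO/WARN -->
  <root level="ERROR">
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>
```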

8. Install Kibana

8.1 Download Kibana
Note: the GitHub archive (https://github.com/elastic/kibana/archive/v6.7.1.tar.gz) is source only; the prebuilt Linux tarball is at:
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.7.1-linux-x86_64.tar.gz
8.2 Unpack:
tar -xf kibana-6.7.1-linux-x86_64.tar.gz -C /usr/local/
8.3 Rename:
mv /usr/local/kibana-6.7.1-linux-x86_64 /usr/local/kibana
8.4 Edit the config:
vim /usr/local/kibana/config/kibana.yml

server.port: 5601  # listening port
server.host: "0.0.0.0"  # bind address
elasticsearch.url: "http://192.168.1.11:9200"  # Elasticsearch endpoint
kibana.index: ".kibana"  # index Kibana stores its state in


Start Kibana
/usr/local/kibana/bin/kibana

Then open http://192.168.1.11:5601.

That completes the ELK installation; usage will be covered in a follow-up.

Author: 划水的运维