ELK Aggregation Scheme for Production Java Logs (Part 1)

Overview:

  • Developers frequently need to monitor production Tomcat logs, so a log server separate from the production hosts is required
  • The production logs developers monitor and query are generally real-time, or at most from the last three days
  • There are many production Tomcat instances, and high-load projects produce large aggregated logs, so the aggregated logs need periodic cleanup
  • The original logs on the production Tomcat hosts are retained much longer, generally about a month
  • The aggregation system therefore has modest requirements for stability and data safety, so a single-node ELK architecture is recommended
    (architecture diagram omitted)
  • Redis buffers the logs, absorbing business peaks when es cannot ingest fast enough
  • ELK official site: https://www.elastic.co
  • ELK official documentation: https://www.elastic.co/guide/index.html
  • ELK past-releases download page: https://www.elastic.co/downloads/past-releases#

Elasticsearch Installation and Configuration:

  • Clone and deploy a VM, 192.168.77.110, following the "CentOS 7 Lab Machine Template Setup" guide:
# Hostname configuration
HOSTNAME=es1
hostnamectl set-hostname "$HOSTNAME"
echo "$HOSTNAME">/etc/hostname
echo "$(grep -E '127|::1' /etc/hosts)">/etc/hosts
echo "$(ip a|grep "inet "|grep -v 127|awk -F'[ /]' '{print $6}') $HOSTNAME">>/etc/hosts
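The /etc/hosts rewrite above relies on command substitution being expanded before the redirect truncates the file. A dry run against a temp copy shows the effect (sample host entries are illustrative):

```shell
# Simulate the rewrite: keep only loopback lines, then append "IP hostname".
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n::1 localhost\n10.0.0.5 oldname\n' > "$HOSTS"
# $(...) is expanded before the > truncation, so grep still sees the old content
echo "$(grep -E '127|::1' "$HOSTS")" > "$HOSTS"
echo "192.168.77.110 es1" >> "$HOSTS"
cat "$HOSTS"
```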

# Install the Java runtime
yum -y install java-11-openjdk

# Configure the official Elastic yum repository
cd /tmp
cat >/etc/yum.repos.d/elasticsearch.repo<<EOF
[ELK-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
# baseurl=https://artifacts.elastic.co/packages/7.x/yum
# The commented baseurl above is for ELK 7.x, the latest major version at the time of writing
gpgcheck=0
enabled=1
autorefresh=1
type=rpm-md
EOF
# yum install elasticsearch kibana logstash
# Or download the rpm packages directly:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.4.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.8.4-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.8.4.rpm
###wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.2-x86_64.rpm
###wget https://artifacts.elastic.co/downloads/kibana/kibana-7.4.2-x86_64.rpm
###wget https://artifacts.elastic.co/downloads/logstash/logstash-7.4.2.rpm
# A download accelerator such as Thunder (Xunlei) can speed this up; no membership needed, tested and fast

# Configure ulimit for systemd services; takes effect after reboot
cat >>/etc/systemd/system.conf<<EOF
DefaultLimitNOFILE=100000
DefaultLimitNPROC=65535
DefaultLimitMEMLOCK=infinity
EOF
reboot
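The heredoc above appends blindly, so rerunning the setup would duplicate the limit entries. An idempotent append helper (a sketch, shown here against a temp file rather than /etc/systemd/system.conf) avoids that:

```shell
# Append a key=value only if the key is not already present.
CONF=$(mktemp)
set_limit() {  # $1 = key, $2 = value
  grep -q "^$1=" "$CONF" || echo "$1=$2" >> "$CONF"
}
set_limit DefaultLimitNOFILE 100000
set_limit DefaultLimitNOFILE 100000   # second call is a no-op
set_limit DefaultLimitNPROC 65535
set_limit DefaultLimitMEMLOCK infinity
cat "$CONF"   # three unique entries
```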
  • Install and configure elasticsearch
cd /tmp
yum -y localinstall elasticsearch-6.8.4.rpm
cd /etc/elasticsearch
sed -i 's/^path.data/# &/g' elasticsearch.yml
sed -i 's/^path.logs/# &/g' elasticsearch.yml
cat >>elasticsearch.yml<<EOF
cluster.name: vincent-es
node.name: $(hostname)
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
path.data: /elasticsearch/data
path.logs: /elasticsearch/logs
# discovery.zen.ping.unicast.hosts: ["$(hostname)", "XXX", ...]
EOF
mkdir -pv /elasticsearch/{data,logs}
chown -R elasticsearch: /elasticsearch
# In production, mount dedicated storage at this directory

# Tune the JVM: set Xms = Xmx = 50% of physical RAM
MEM=$(free -g|grep Mem|awk '{printf "%d\n",$2/2}')
sed -i "s/-Xms1g/-Xms${MEM}g/g" jvm.options
sed -i "s/-Xmx1g/-Xmx${MEM}g/g" jvm.options
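One edge case in the half-of-RAM rule above: on hosts with less than 2 GB, `free -g` rounds down to 0 and the sed would produce an invalid `-Xms0g`. A floor of 1 GB (an illustrative guard, not from the original) handles it:

```shell
# Half of physical RAM in GB, with a 1 GB floor for small hosts.
half_ram_gb() {  # $1 = total RAM in GB
  mem=$(( $1 / 2 ))
  [ "$mem" -lt 1 ] && mem=1
  echo "$mem"
}
half_ram_gb 8    # prints 4
half_ram_gb 1    # prints 1 (floor applied)
```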

# Start and test
systemctl start elasticsearch
systemctl enable elasticsearch
systemctl status elasticsearch
netstat -lntup|grep 9200
curl http://$(hostname -i):9200
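Beyond the root endpoint, the _cluster/health API reports overall status. The sketch below extracts the status field from a canned sample response (in practice, pipe `curl -s http://127.0.0.1:9200/_cluster/health` instead; a fresh single node typically reports yellow once indices with replicas exist):

```shell
# Sample _cluster/health response (offline stand-in for the live call)
RESPONSE='{"cluster_name":"vincent-es","status":"yellow","number_of_nodes":1}'
# Pull out the status value with sed
STATUS=$(echo "$RESPONSE" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "cluster status: $STATUS"
```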
  • Install the elasticsearch head plugin
cd /tmp
wget https://nodejs.org/dist/v13.1.0/node-v13.1.0-linux-x64.tar.xz
cd /usr/local/
tar -xf /tmp/node-v13.1.0-linux-x64.tar.xz
chown -R root: node-v13.1.0-linux-x64/
echo 'export NODE_HOME=/usr/local/node-v13.1.0-linux-x64'>>/etc/profile
echo 'export PATH=$NODE_HOME/bin:$PATH'>>/etc/profile
source /etc/profile
node -v
npm -v

cd /usr/local
yum -y install git bzip2
git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head/
# Use the Taobao npm mirror to speed up installation
npm config set registry https://registry.npm.taobao.org
cat >>~/.npmrc<<EOF
sass_binary_site = https://npm.taobao.org/mirrors/node-sass/
phantomjs_cdnurl = https://npm.taobao.org/mirrors/phantomjs/
EOF
npm install

cd /etc/elasticsearch/
cat >>elasticsearch.yml<<EOF
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
systemctl restart elasticsearch
cd /usr/local/elasticsearch-head/
npm run start &
# Browse to http://IP:9100/ and, on that page, connect to http://IP:9200

# Configure the head plugin to start on boot
cat >/root/checkOS/elasticsearch-headStart.sh<<EOF
#!/bin/bash
source /etc/profile
cd /usr/local/elasticsearch-head/
/usr/local/node-v13.1.0-linux-x64/bin/npm run start &
EOF
chmod +x /root/checkOS/elasticsearch-headStart.sh
echo '/root/checkOS/elasticsearch-headStart.sh'>>/etc/rc.local
# CentOS 7 only runs rc.local at boot if it is executable
chmod +x /etc/rc.d/rc.local
  • Configure the elasticsearch index cleanup script and its cron job
# Index names must follow the %{+YYYY.MM.dd} pattern
cat >/root/checkOS/elasticsearchCleanIndex.sh<<EOF
#!/bin/bash
source /etc/profile
DT=\$(date +%Y.%m.%d -d'3 day ago')
for index in \$(curl -s -XGET 'http://127.0.0.1:9200/_cat/indices/?v'|awk '{print \$3}'|grep \${DT})
do
  curl -XDELETE "http://127.0.0.1:9200/\${index}"
done
EOF
chmod +x /root/checkOS/elasticsearchCleanIndex.sh
crontab -l>/tmp/crontab.tmp
echo -e '\n# Elasticsearch Clean Index'>>/tmp/crontab.tmp
echo '0 0 * * * /bin/bash /root/checkOS/elasticsearchCleanIndex.sh'>>/tmp/crontab.tmp
crontab /tmp/crontab.tmp
rm -f /tmp/crontab.tmp
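The cleanup script's selection logic can be checked offline: compute the cutoff date exactly as the script does, and confirm only indices carrying that date are selected (the `app-` index names are illustrative):

```shell
# Same cutoff as the cleanup script: the date three days ago
DT=$(date +%Y.%m.%d -d '3 day ago')
TODAY=$(date +%Y.%m.%d)
# Two sample index names: only the old one should survive the grep
MATCHED=$(printf 'app-%s\napp-%s\n' "$DT" "$TODAY" | grep "$DT")
echo "$MATCHED"   # only app-<3 days ago> is selected for deletion
```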

Kibana Installation and Configuration

  • Deploy Kibana directly on the Elasticsearch host
# Install, configure, and start kibana
cd /tmp/
yum -y localinstall kibana-6.8.4-x86_64.rpm
cd /etc/kibana/
cat >>kibana.yml<<EOF
server.host: "0.0.0.0"
server.port: 5601
server.name: "$(hostname)"
elasticsearch.hosts: ["http://$(hostname -i):9200"]
EOF
systemctl start kibana
systemctl enable kibana
systemctl status kibana
netstat -lntup|grep 5601
# Browse to http://192.168.77.110:5601/

Redis Installation and Configuration

  • Clone a VM and deploy redis at 192.168.77.100, following the "Redis 4.0 Single-Instance Installation" guide
  • Deploying the latest 5.x release is recommended: http://download.redis.io/releases/redis-5.0.6.tar.gz

Web Test Machine Simulation

  • Deploy a test machine, 192.168.77.10, following the "CentOS 7 Lab Machine Template Setup" guide
  • Create the directories and simulate continuous writes to catalina.out
HOSTNAME=web
hostnamectl set-hostname "$HOSTNAME"
echo "$HOSTNAME">/etc/hostname
echo "$(grep -E '127|::1' /etc/hosts)">/etc/hosts
echo "$(ip a|grep "inet "|grep -v 127|awk -F'[ /]' '{print $6}') $HOSTNAME">>/etc/hosts
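The write loop below reads from /tmp/catalina.out, which the fresh machine does not have yet. A minimal sample containing a multiline Java stack trace can be seeded first (the contents are illustrative; any indented continuation lines will do):

```shell
# Seed a sample catalina.out: one INFO line plus a stack trace whose
# continuation lines start with whitespace, for the multiline codec later.
cat >/tmp/catalina.out<<'EOF'
08-Nov-2019 10:00:00.000 INFO [main] org.apache.catalina.startup.Catalina.start Server startup
java.lang.NullPointerException: demo
    at com.example.Demo.run(Demo.java:42)
    at com.example.Demo.main(Demo.java:10)
EOF
wc -l /tmp/catalina.out
```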

mkdir -pv /web/tomcat8_8080_pro1/logs
mkdir -pv /web/tomcat8_8081_pro2/logs
mkdir -pv /web/tomcat8_8082_pro3/logs
mkdir -pv /web/tomcat8_8083_pro4/logs
for i in $(seq 10000)
do
  cat /tmp/catalina.out>>/web/tomcat8_8080_pro1/logs/catalina.out
  cat /tmp/catalina.out>>/web/tomcat8_8081_pro2/logs/catalina.out
  cat /tmp/catalina.out>>/web/tomcat8_8082_pro3/logs/catalina.out
  cat /tmp/catalina.out>>/web/tomcat8_8083_pro4/logs/catalina.out
  sleep 5
done &

Logstash Deployment and Configuration

  • Install and configure Logstash on the web test machine to ship data into redis
cd /tmp
yum -y install java-11-openjdk
yum -y localinstall logstash-6.8.4.rpm
echo 'export LS_HOME=/usr/share/logstash'>>/etc/profile
echo 'export PATH=$LS_HOME/bin:$PATH'>>/etc/profile
source /etc/profile
# Lower the logstash JVM heap for the shipper role (512 MB is an example value; adjust as needed)
sed -i 's/^-Xms1g/-Xms512m/g' /etc/logstash/jvm.options
sed -i 's/^-Xmx1g/-Xmx512m/g' /etc/logstash/jvm.options
# Create the config file and start logstash to ship local log files into redis
mkdir /usr/share/logstash/conf
cd /usr/share/logstash/conf
cat >file2redis.conf<<EOF
input {
  file {
    path => "/web/tomcat8_8080_pro1/logs/catalina.out"
    type => "$(hostname -i)-8080-pro1"
    start_position => "beginning"
    discover_interval => 3 
    codec => multiline {
      pattern => "^[^[:blank:]]"
      negate => true
      what => "previous"
    }
  }
  file {
    path => "/web/tomcat8_8081_pro2/logs/catalina.out"
    type => "$(hostname -i)-8081-pro2"
    start_position => "beginning"
    discover_interval => 3 
    codec => multiline {
      pattern => "^[^[:blank:]]"
      negate => true
      what => "previous"
    }
  }
  file {
    path => "/web/tomcat8_8082_pro3/logs/catalina.out"
    type => "$(hostname -i)-8082-pro3"
    start_position => "beginning"
    discover_interval => 3 
    codec => multiline {
      pattern => "^[^[:blank:]]"
      negate => true
      what => "previous"
    }
  }
  file {
    path => "/web/tomcat8_8083_pro4/logs/catalina.out"
    type => "$(hostname -i)-8083-pro4"
    start_position => "beginning"
    discover_interval => 3 
    codec => multiline {
      pattern => "^[^[:blank:]]"
      negate => true
      what => "previous"
    }
  }
}
output {
  if [type] == "$(hostname -i)-8080-pro1" {
    redis {
      host => "redis"
      port => 7000
      data_type => "list"
      key => "$(hostname -i)-8080-pro1"
    }
  }
  if [type] == "$(hostname -i)-8081-pro2" {
    redis {
      host => "redis"
      port => 7000
      data_type => "list"
      key => "$(hostname -i)-8081-pro2"
    }
  }
  if [type] == "$(hostname -i)-8082-pro3" {
    redis {
      host => "redis"
      port => 7000
      data_type => "list"
      key => "$(hostname -i)-8082-pro3"
    }
  }
  if [type] == "$(hostname -i)-8083-pro4" {
    redis {
      host => "redis"
      port => 7000
      data_type => "list"
      key => "$(hostname -i)-8083-pro4"
    }
  }
}
EOF
# The codec sections merge multiline Java stack traces into a single event
echo '192.168.77.100 redis'>>/etc/hosts
/usr/share/logstash/bin/logstash -f file2redis.conf &
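What the multiline codec above does can be approximated with awk (a sketch, not Logstash itself): any line beginning with whitespace, i.e. a stack-trace continuation, is folded into the previous event; the " | " separator just makes the merge visible.

```shell
# 4 input lines -> 3 events: the indented "at ..." line joins the exception line
EVENTS=$(printf 'INFO start\njava.lang.NullPointerException\n\tat com.example.Demo.run(Demo.java:42)\nINFO next\n' |
awk '/^[^ \t]/ { if (buf != "") print buf; buf = $0; next }   # new event
     { buf = buf " | " $0 }                                   # continuation
     END { if (buf != "") print buf }')
echo "$EVENTS"
```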
  • Configure logstash to start on boot:
echo 'source /etc/profile;$LS_HOME/bin/logstash -f $LS_HOME/conf/file2redis.conf &'>>/etc/rc.local
chmod +x /etc/rc.d/rc.local
  • Test from the redis host:
echo 'keys *' |redis-cli -h 192.168.77.100 -p 7000
# 1) "192.168.77.10-8080-pro1"
# 2) "192.168.77.10-8081-pro2"
# 3) "192.168.77.10-8083-pro4"
# 4) "192.168.77.10-8082-pro3"
# Seeing the four keys configured in the output section of file2redis.conf means success
for i in $(echo 'keys *' |redis-cli -h 192.168.77.100 -p 7000)
do
  echo "llen $i"|redis-cli -h 192.168.77.100 -p 7000
done
# Check each key's length to see whether messages are piling up, i.e. whether es is keeping pace
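The llen check above can be turned into a simple backlog alarm. The helper and threshold below are hypothetical (in practice, feed it the live `llen` values from redis-cli):

```shell
# Warn when a key's queue length exceeds a threshold (example value).
THRESHOLD=10000
check_backlog() {  # $1 = key name, $2 = current llen
  if [ "$2" -gt "$THRESHOLD" ]; then
    echo "WARN: key $1 backlog $2 > $THRESHOLD"
  else
    echo "OK: key $1 backlog $2"
  fi
}
check_backlog "192.168.77.10-8080-pro1" 12345   # over threshold -> WARN
check_backlog "192.168.77.10-8081-pro2" 37      # under threshold -> OK
```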
  • Install and configure Logstash on the Redis host to ship data into elasticsearch
cd /tmp
yum -y install java-11-openjdk
yum -y localinstall logstash-6.8.4.rpm
echo 'export LS_HOME=/usr/share/logstash'>>/etc/profile
echo 'export PATH=$LS_HOME/bin:$PATH'>>/etc/profile
source /etc/profile
# Keep the default 1 GB heap (Xms1g/Xmx1g) for the indexing instance;
# edit /etc/logstash/jvm.options to change it

# Create the config file and start logstash to pull from redis and write into elasticsearch
mkdir /usr/share/logstash/conf
cd /usr/share/logstash/conf
cat >redis2es.conf<<EOF
input {
  redis {
    host => "$(hostname -i)"
    port => 7000
    type => "192.168.77.10-8080-pro1"
    data_type => "list"
    key  => "192.168.77.10-8080-pro1"
  }
  redis {
    host => "$(hostname -i)"
    port => 7000
    type => "192.168.77.10-8081-pro2"
    data_type => "list"
    key  => "192.168.77.10-8081-pro2"
  }
  redis {
    host => "$(hostname -i)"
    port => 7000
    type => "192.168.77.10-8082-pro3"
    data_type => "list"
    key  => "192.168.77.10-8082-pro3"
  }
  redis {
    host => "$(hostname -i)"
    port => 7000
    type => "192.168.77.10-8083-pro4"
    data_type => "list"
    key  => "192.168.77.10-8083-pro4"
  }
}
output {
  if [type] == "192.168.77.10-8080-pro1" {
    elasticsearch {
      hosts => "192.168.77.110"
      index => "192.168.77.10-8080-pro1-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "192.168.77.10-8081-pro2" {
    elasticsearch {
      hosts => "192.168.77.110"
      index => "192.168.77.10-8081-pro2-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "192.168.77.10-8082-pro3" {
    elasticsearch {
      hosts => "192.168.77.110"
      index => "192.168.77.10-8082-pro3-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "192.168.77.10-8083-pro4" {
    elasticsearch {
      hosts => "192.168.77.110"
      index => "192.168.77.10-8083-pro4-%{+YYYY.MM.dd}"
    }
  }
}
EOF
# Index names must match what the cleanup script on the elasticsearch host expects
/usr/share/logstash/bin/logstash -f redis2es.conf &
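Since the cleanup cron job greps for a `YYYY.MM.dd` suffix, it is worth sanity-checking that the names produced by `%{+YYYY.MM.dd}` carry one. A quick offline check on a sample name:

```shell
# Example index name as produced by the output section above
INDEX="192.168.77.10-8080-pro1-2019.11.08"
# Does it end in -YYYY.MM.dd, as the cleanup script assumes?
if echo "$INDEX" | grep -Eq -- '-[0-9]{4}\.[0-9]{2}\.[0-9]{2}$'; then
  SUFFIX_OK=yes
else
  SUFFIX_OK=no
fi
echo "date suffix present: $SUFFIX_OK"
```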
  • Configure logstash to start on boot:
echo 'source /etc/profile;$LS_HOME/bin/logstash -f $LS_HOME/conf/redis2es.conf &'>>/etc/rc.local
chmod +x /etc/rc.d/rc.local
  • Verify from the redis host:
for i in $(echo 'keys *' |redis-cli -h 192.168.77.100 -p 7000)
do
  echo "llen $i"|redis-cli -h 192.168.77.100 -p 7000
done
# Watch the key lengths drop to confirm redis data is being written into es
  • Verify from the es host:
curl -s -XGET 'http://127.0.0.1:9200/_cat/indices/?v'|column -t 
# docs.count and store.size growing steadily means redis data is reaching es
# The head plugin page in a browser can confirm the same thing
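The _cat/indices output is whitespace-delimited, so the fields of interest can be picked out positionally. The line below is a canned sample of one row (in practice, feed rows from `curl -s 'http://127.0.0.1:9200/_cat/indices'`); columns are health, status, index, uuid, pri, rep, docs.count, docs.deleted, store.size, pri.store.size:

```shell
# Sample row from _cat/indices (uuid and sizes are made up)
LINE="green open 192.168.77.10-8080-pro1-2019.11.08 AbCdEfGhIjKlMnOpQrSt 5 1 12345 0 2.1mb 2.1mb"
set -- $LINE            # split into positional parameters
INDEX_NAME=$3           # 3rd column: index name
DOC_COUNT=$7            # 7th column: docs.count
echo "index=$INDEX_NAME docs.count=$DOC_COUNT"
```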

Monitoring Logs in Real Time with Kibana

  • Browse to http://192.168.77.110:5601
    (screenshots omitted)
