Production ELK Stack Deployment Guide

Friends interested in Linux are welcome to join QQ group 476794643 to chat online.
Original post: http://blog.51cto.com/zhang789

1、Preface

Our earlier ELK logging system worked well, and developers now want all production logs shipped into it. With logs coming from 300+ machines, a single server clearly cannot cope, so an ELK cluster architecture is required.

2、Topology

(topology diagram screenshot omitted)

Topology notes

1、Filebeat: a lightweight log shipper running on each application host
2、Alibaba Cloud Redis: a self-built Redis is harder to scale than the managed Alibaba Cloud service, so the managed one is used as the buffer
3、Logstash: filters and parses the log events; Logstash is resource-hungry, so it is kept separate from the ES nodes
4、ES: a two-node cluster to store the data
5、Kibana/Nginx: Kibana itself needs little capacity, but it has no built-in access control, so for safety it listens on 127.0.0.1 behind an Nginx reverse proxy

3、Resource Requests

Totals:

Redis: 1 instance (4 GB, single node)
ECS: 4 instances (2 cores / 8 GB)
Domain: log.ops.****.com

4、Initial Setup

4.1、Purchase and initialize Redis

(Some readers suggested running Redis in Docker, but given Docker's performance overhead, and since Alibaba Cloud Redis is not expensive, I went with the managed service.)

1、Choose the 4 GB single-node plan


2、Configure Redis and its access whitelist


The screenshot above shows the Redis connection address, which is reachable only from the internal network. To let every machine write to Redis, I whitelist 0.0.0.0/0 so that all internal addresses can connect.


4.2、Purchase ECS and initialize the system

1、Choosing the servers needs no explanation here
2、There is little else to initialize on an Alibaba Cloud server; the important parts are the security-group settings, then adding the machines to monitoring and the jump host

5、Deploying the ELK Stack

5.1、Deploy the Elasticsearch cluster

(1) Set up a Java environment, version 8; version 7 may produce warning messages

[root@Ops-Elk-ES-01 ~]# yum -y install java-1.8.0
[root@Ops-Elk-ES-01 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

(2) Install Elasticsearch

[root@Ops-Elk-ES-01 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@Ops-Elk-ES-01 ~]# cat /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@Ops-Elk-ES-01 ~]# yum install elasticsearch

(3) Configure the Elasticsearch cluster

[root@Ops-Elk-ES-01 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster        # cluster name, must match on all nodes
node.name: "node-1"             # node name, must be unique per node
path.data: /work/es/data        # data directory
path.logs: /work/es/logs        # log directory
bootstrap.memory_lock: true     # lock JVM memory so it is not swapped (5.x setting name)
network.host: 192.168.8.32      # bind address (this node's IP)
http.port: 9200                 # enable HTTP on port 9200
discovery.zen.ping.unicast.hosts: ["192.168.8.32", "192.168.8.33"] # cluster nodes for discovery
[root@Ops-Elk-ES-01 ~]# mkdir -p /work/es/{data,logs}
[root@Ops-Elk-ES-01 ~]# chown -R elasticsearch:elasticsearch /work/es
[root@Ops-Elk-ES-01 ~]# systemctl start elasticsearch
On ES-02, only node.name and the IP address need to change.
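With memory locking enabled, the service also needs permission to lock RAM, otherwise Elasticsearch logs an "Unable to lock JVM Memory" warning at startup. On systemd hosts the usual fix is a drop-in override like the following (a sketch; the path is the standard systemd drop-in location, verify on your distro):

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

Run `systemctl daemon-reload` and restart Elasticsearch afterwards.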

(4) Check the Elasticsearch cluster status

[root@Ops-Elk-ES-01 ~]# curl -XGET 'http://192.168.8.32:9200/_cat/nodes?v'
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.8.32            3          95   0    0.00    0.02     0.05 mdi       *      node-1
192.168.8.33            3          96   0    0.12    0.09     0.07 mdi       -      node-2

Operations APIs:

1. Cluster health: http://192.168.8.32:9200/_cluster/health?pretty
2. Node status: http://192.168.8.32:9200/_nodes/process?pretty
3. Shard status: http://192.168.8.32:9200/_cat/shards
4. Index shard-store info: http://192.168.8.32:9200/index/_shard_stores?pretty
5. Index stats: http://192.168.8.32:9200/index/_stats?pretty
6. Index metadata: http://192.168.8.32:9200/index?pretty
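In scripts and monitoring, the raw JSON from these endpoints usually needs a little parsing. A minimal sketch (the `es_health_status` helper is ours, not part of ES) that extracts the cluster status field:

```shell
# Hypothetical helper: pull the "status" field (green/yellow/red)
# out of a _cluster/health JSON response read from stdin.
es_health_status() {
  grep -o '"status":"[a-z]*"' | head -n1 | cut -d'"' -f4
}

# In production: curl -s 'http://192.168.8.32:9200/_cluster/health' | es_health_status
# Offline demo with a canned response:
echo '{"cluster_name":"es-cluster","status":"green","number_of_nodes":2}' | es_health_status
# prints: green
```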

5.2、Install Logstash

(1) Set up a Java environment, version 8; version 7 may produce warning messages

[root@Ops-Elk-ES-01 ~]# yum -y install java-1.8.0
[root@Ops-Elk-ES-01 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

(2) Install Logstash

[root@Ops-Elk-Logstash-01 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@Ops-Elk-Logstash-01 ~]# cat /etc/yum.repos.d/logstash.repo
[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@Ops-Elk-Logstash-01 ~]# yum install logstash -y

Installation is done. Logs will be pulled from Redis, filtered with Logstash, and written to Elasticsearch; a Logstash configuration example follows below.
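Before pointing Logstash at Redis, it is worth confirming the install works at all. A minimal pipeline (a sketch; any filename works) that echoes stdin back as structured events:

```
input { stdin {} }
output { stdout { codec => rubydebug } }
```

Save it as, say, /etc/logstash/conf.d/stdin-test.conf and run `/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/stdin-test.conf` (the RPM's default binary path); typed lines should come back as rubydebug-formatted events. Adding `-t` checks configuration syntax without starting the pipeline.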

5.3、Install Kibana/Nginx

(1) Set up a Java environment, version 8; version 7 may produce warning messages

[root@Ops-Elk-Kibana-01 ~]# yum -y install java-1.8.0
[root@Ops-Elk-Kibana-01 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

(2) Install Kibana

[root@Ops-Elk-Kibana-01 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@Ops-Elk-Kibana-01 ~]# cat /etc/yum.repos.d/kibana.repo
[kibana-5.x]
name=Kibana repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@Ops-Elk-Kibana-01 ~]# yum install kibana

(3) Configure Kibana

[root@Ops-Elk-Kibana-01 ~]# grep "^[a-z]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "127.0.0.1"
elasticsearch.url: "http://192.168.8.32:9200"
kibana.index: ".kibana"
[root@Ops-Elk-Kibana-01 ~]# systemctl start kibana

(4) Add an Nginx reverse proxy

[root@Ops-Elk-Kibana-01 ~]# yum -y install nginx httpd-tools
[root@Ops-Elk-Kibana-01 ~]# cd /etc/nginx/conf.d/
[root@Ops-Elk-Kibana-01 conf.d]# touch elk.ops.qq.com.conf
[root@Ops-Elk-Kibana-01 conf.d]# htpasswd -cm /etc/nginx/kibana-user zhanghe
New password:
Re-type new password:
Adding password for user zhanghe
[root@Ops-Elk-Kibana-01 conf.d]# cat elk.ops.qq.com.conf
server {
        listen 80;
        server_name elk.ops.qq.com;
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/kibana-user;

        location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        }
}
[root@Ops-Elk-Kibana-01 conf.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@Ops-Elk-Kibana-01 conf.d]# systemctl start nginx

(5) Access Kibana
(Kibana login and dashboard screenshots omitted)

5.4、Install Filebeat

(1) Install Filebeat

[root@node-01:~]# cat /etc/yum.repos.d/filebeat.repo
[elastic-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@node-01:~]# yum -y install filebeat

A simple end-to-end example follows.

5.5、Example: collecting the Tomcat catalina.out log

Log collection flow:

Filebeat (Tomcat) → Redis → Logstash → Elasticsearch → Kibana

(1) On the two Tomcat machines, configure Filebeat to write to Redis

[root@Tomcat-01:~]# cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/tomcat/apache-tomcat-7.0.78/logs/catalina.out
  document_type: tomcat-01
  multiline.pattern: '^2017-0'
  multiline.negate: true
  multiline.match: after

output.redis:
  hosts: ["r-****.redis.rds.aliyuncs.com:6379"]
  db: 0
  timeout: 5
  key: "tomcat-01"
[root@Tomcat-01:~]# systemctl start filebeat
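The multiline settings above glue Java stack traces to the log line that produced them: any line not starting with "2017-0" is treated as a continuation of the previous event. A rough stand-alone illustration of the same grouping rule with awk (the sample lines are made up):

```shell
# Group continuation lines (those not starting with "2017-0") onto the
# previous event, the way the multiline settings above do:
printf '%s\n' \
  '2017-05-01 10:00:00 ERROR boom' \
  '    at com.example.Foo(Foo.java:1)' \
  '2017-05-01 10:00:01 INFO ok' |
awk '/^2017-0/ {if (ev) print ev; ev=$0; next}
     {ev = ev " | " $0}
     END {if (ev) print ev}'
# prints two events; the stack-trace line stays attached to the ERROR line
```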

(2) On the Logstash machine, create a config file that reads from Redis and writes to ES

[root@Ops-Elk-Logstash-01 conf.d]# cat tomcat.conf
input {
  redis {
    type => "tomcat-01"
    host => "r-****.redis.rds.aliyuncs.com"
    port => "6379"
    db => "0"
    data_type => "list"
    key => "tomcat-01"
  }
  redis {
    type => "tomcat-02"
    host => "r-****.redis.rds.aliyuncs.com"
    port => "6379"
    db => "0"
    data_type => "list"
    key => "tomcat-02"
  }
}

output {
  if [type] == "tomcat-01" {
    elasticsearch {
      hosts => ["es01:9200","es02:9200"]
      index => "tomcat-01-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "tomcat-02" {
    elasticsearch {
      hosts => ["es01:9200","es02:9200"]
      index => "tomcat-02-%{+YYYY.MM.dd}"
    }
  }
}
[root@Ops-Elk-Logstash-01 conf.d]# systemctl restart logstash
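The `%{+YYYY.MM.dd}` suffix in the index names makes Logstash write each day's events to a fresh index, so retention is just deleting old indices by name. A hedged housekeeping sketch (GNU date assumed; the actual DELETE call is shown commented out, pointed at our ES node):

```shell
# Compute the index name for 7 days ago (one index per day):
old=$(date -d '7 days ago' +%Y.%m.%d)
echo "tomcat-01-${old}"
# To actually drop it:
#   curl -XDELETE "http://192.168.8.32:9200/tomcat-01-${old}"
```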

(3) Add the ES index pattern in Kibana

5.6、Example: collecting Nginx access logs

Log collection flow:

Filebeat (Nginx) → Redis → Logstash → Elasticsearch → Kibana

(1) Change the Nginx log format to JSON

Format 1:

log_format access2 '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"url":"$request",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        #'"user_agent":"$http_user_agent",'
        '"status":"$status"}';

Format 2:

log_format  access_log_json  '{"user_ip":"$http_x_real_ip","lan_ip":"$remote_addr","log_time":"$time_iso8601","user_req":"$request","http_code":"$status","body_bytes_sent":"$body_bytes_sent","req_time":"$request_time","user_ua":"$http_user_agent"}';

Apply the log format:

access_log  /var/www/logs/access.log  access2;

Then reload Nginx.
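Since each access-log event is now a single JSON object per line, a log line can be machine-checked before shipping. A quick sanity check (the sample line below is illustrative, with made-up values in the shape the access2 format emits):

```shell
# One illustrative line in the access2 format; python3's json.tool
# exits non-zero if the line is not valid JSON:
line='{"@timestamp":"2017-05-01T10:00:00+08:00","host":"192.168.8.40","clientip":"1.2.3.4","size":612,"responsetime":0.005,"upstreamtime":"0.004","upstreamhost":"127.0.0.1:8080","http_host":"www.example.com","url":"GET / HTTP/1.1","domain":"www.example.com","xff":"-","referer":"-","status":"200"}'
echo "$line" | python3 -m json.tool > /dev/null && echo "valid json"
# prints: valid json
```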
(2) On the Nginx machine, configure Filebeat to write the log to Redis

[root@Nginx-01:~]# cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/www/logs/access.log
  document_type: nginx-01

output.redis:
  hosts: ["r-****.redis.rds.aliyuncs.com:6379"]
  db: 0
  timeout: 5
  key: "nginx-01"
[root@Nginx-01:~]# systemctl start filebeat

Since each JSON access-log event is a single line, the multiline settings used for Tomcat are not needed here.

(3) On the Logstash machine, create a config file that reads from Redis and writes to ES

[root@Ops-Elk-Logstash-01 conf.d]# cat nginx.conf
input {
  redis {
    type => "nginx-01"
    host => "r-****.redis.rds.aliyuncs.com"
    port => "6379"
    db => "0"
    data_type => "list"
    key => "nginx-01"
  }
}

output {
    elasticsearch {
      hosts => ["es01:9200","es02:9200"]
      index => "logstash-nginx-s4-access-01-%{+YYYY.MM.dd}"
    }
}
[root@Ops-Elk-Logstash-01 conf.d]# systemctl restart logstash
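As written, the whole JSON access line lands in ES as one string in the `message` field. To turn the individual fields (clientip, status, responsetime, …) into searchable top-level ES fields, the usual approach is a `json` filter between the `input` and `output` blocks (a sketch, not part of the original setup):

```
filter {
  json {
    source => "message"   # parse the nginx JSON line into top-level fields
  }
}
```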

(4) Add the ES index pattern in Kibana

5.7、Results

Ideally, build charts and dashboards from the logs ELK collects. I won't go into more detail here; if you are interested, join our QQ group to discuss and solve problems together, so that learning the ELK Stack is no longer a struggle.

