ELK Deployment

  1. Base environment setup

Prepare three servers, A, B, and C (their system clocks must be in sync; see the note after this list).

Host A: JDK 1.8, Kafka, ZooKeeper, Logstash

Host B: JDK 1.8, Kafka, ZooKeeper, Elasticsearch

Host C: JDK 1.8, Kafka, ZooKeeper, Kibana

Monitored host: Filebeat
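One simple way to align the clocks on all hosts (assuming they can reach a public NTP server; substitute your own NTP source if the lab is offline):

   yum install -y ntpdate
   ntpdate pool.ntp.org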

1. On all three servers, stop the firewall and disable SELinux

   systemctl stop firewalld

   systemctl disable firewalld

   setenforce 0

   sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

2. On all three servers, set the hostname and add host-name mappings

   vi /etc/hosts
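   For example (the names elk-a/elk-b/elk-c are placeholders; use your own):

   192.168.160.100 elk-a
   192.168.160.101 elk-b
   192.168.160.102 elk-c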

   Reboot all three servers

   reboot

3. Install the JDK on all three servers

   rpm -qa | grep jdk

   rpm -e <old-jdk-package-name>   # remove any pre-installed JDK first

   rpm -ivh jdk-8u121-linux-x64.rpm

   java -version

  2. Install ZooKeeper

1. Unpack the ZooKeeper archive

   tar -zxvf apache-zookeeper-3.7.0-bin.tar.gz -C /usr/local/

   cd /usr/local/

   mv apache-zookeeper-3.7.0-bin/ zookeeper

   cd /usr/local/zookeeper/conf/

2. Edit the configuration file

   mv zoo_sample.cfg zoo.cfg

   vim zoo.cfg

tickTime=2000   # heartbeat interval between servers (ms)
initLimit=10    # max ticks a follower may take to connect and sync with the leader
syncLimit=5     # max ticks allowed between leader/follower heartbeats
dataDir=/usr/local/zookeeper/data   # data directory
clientPort=2181                     # client listening port
server.1=192.168.160.100:2888:3888  # server ID = IP : peer-communication port : leader-election port
server.2=192.168.160.101:2888:3888
server.3=192.168.160.102:2888:3888

3. Create the data directory and the myid file on each node (the number must match the server.N line in zoo.cfg)

   mkdir -p /usr/local/zookeeper/data

   Host A:

   echo '1' > /usr/local/zookeeper/data/myid

   Host B:

   echo '2' > /usr/local/zookeeper/data/myid

   Host C:

   echo '3' > /usr/local/zookeeper/data/myid

4. Start ZooKeeper on all three servers

   /usr/local/zookeeper/bin/zkServer.sh start

Mode: leader indicates the leader node; Mode: follower indicates a follower. A ZooKeeper ensemble normally has exactly one leader and multiple followers: the leader serves client read and write requests, the followers replicate data from the leader, and when the leader goes down the followers elect a new leader from among themselves.
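To check each node's role, run the status command on every server (sample output; exactly one node should report leader):

   /usr/local/zookeeper/bin/zkServer.sh status
   # Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
   # Mode: follower   (Mode: leader on exactly one node)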

  3. Install Kafka

1. Install Kafka on all three hosts

   tar -zxvf kafka_2.12-2.8.0.tgz -C /usr/local

   cd /usr/local/

   mv kafka_2.12-2.8.0 kafka

2. Edit the configuration file

   cd /usr/local/kafka/config/

   vi server.properties

   Host A:

broker.id=0   # unique broker ID, analogous to ZooKeeper's myid
listeners=PLAINTEXT://192.168.160.100:9092
advertised.listeners=PLAINTEXT://192.168.160.100:9092
zookeeper.connect=192.168.160.100:2181,192.168.160.101:2181,192.168.160.102:2181   # every ZooKeeper node, using the clientPort configured earlier

   Host B:

broker.id=1   # unique broker ID
listeners=PLAINTEXT://192.168.160.101:9092
advertised.listeners=PLAINTEXT://192.168.160.101:9092
zookeeper.connect=192.168.160.100:2181,192.168.160.101:2181,192.168.160.102:2181

   Host C:

broker.id=2   # unique broker ID
listeners=PLAINTEXT://192.168.160.102:9092
advertised.listeners=PLAINTEXT://192.168.160.102:9092
zookeeper.connect=192.168.160.100:2181,192.168.160.101:2181,192.168.160.102:2181

3. Start Kafka on all three hosts

   /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
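To confirm that all three brokers registered with ZooKeeper, query the /brokers/ids znode (paths as configured above):

   /usr/local/zookeeper/bin/zkCli.sh -server 192.168.160.100:2181 ls /brokers/ids
   # expected: [0, 1, 2]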

4. Test the Kafka cluster

On host A: create a topic, list the current topics, and start a console producer.

# --replication-factor 2 : keep 2 copies of each partition
# --partitions 3 : split the topic into 3 partitions
# --topic msg : the topic name is msg

/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.160.100:2181 --replication-factor 2 --partitions 3 --topic msg   # create the topic

Created topic msg.

/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.160.100:2181   # list topics

msg

/usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.160.100:9092 --topic msg   # start a console producer

>test

# On host B, consume the messages produced on host A

/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.160.101:9092 --topic msg --from-beginning   # read the topic from the beginning

test

# Describe a topic (partitions, leaders, replicas)

/usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.160.100:2181 --topic msg

# List all topics

/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.160.100:2181

# Show a topic's configuration

/usr/local/kafka/bin/kafka-configs.sh --zookeeper 192.168.160.100:2181,192.168.160.101:2181,192.168.160.102:2181 --describe --entity-type topics --entity-name msg

# Delete a topic

/usr/local/kafka/bin/kafka-topics.sh --delete --topic msg --zookeeper 192.168.160.100:2181,192.168.160.101:2181,192.168.160.102:2181

  4. Install Filebeat (in theory the monitoring-side hosts themselves do not need it)

1. Install Filebeat on all three hosts

      tar -zxvf filebeat-6.5.2-linux-x86_64.tar.gz

      mv filebeat-6.5.2-linux-x86_64 /usr/local/filebeat

2. Edit the configuration file

      vi /usr/local/filebeat/filebeat.yml

      主机A:

filebeat.inputs:

- type: log

  enabled: true

  paths:

    - /usr/local/filebeat/log/*.log

filebeat.config.modules:

  path: ${path.config}/modules.d/*.yml

  reload.enabled: false

setup.template.settings:

  index.number_of_shards: 3

setup.kibana:

output.kafka:

  enabled: true

  hosts: ["192.168.160.100:9092","192.168.160.101:9092","192.168.160.102:9092"]

  topic: msg

processors:

  - add_host_metadata: ~

  - add_cloud_metadata: ~

主机B:

filebeat.inputs:

- type: log

  enabled: true

  paths:

    - /usr/local/filebeat/log/*.log

filebeat.config.modules:

  path: ${path.config}/modules.d/*.yml

  reload.enabled: false

setup.template.settings:

  index.number_of_shards: 3

setup.kibana:

output.kafka:

  enabled: true

  hosts: ["192.168.160.100:9092","192.168.160.101:9092","192.168.160.102:9092"]

  topic: msg

主机C:

filebeat.inputs:

- type: log

  enabled: true

  paths:

    - /usr/local/filebeat/log/*.log

filebeat.config.modules:

  path: ${path.config}/modules.d/*.yml

  reload.enabled: false

setup.template.settings:

  index.number_of_shards: 3

setup.kibana:

output.kafka:

  enabled: true

  hosts: ["192.168.160.100:9092","192.168.160.101:9092","192.168.160.102:9092"]

  topic: msg

processors:

  - add_host_metadata: ~

  - add_cloud_metadata: ~

processors:

  - add_host_metadata: ~

  - add_cloud_metadata: ~
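Before starting, Filebeat can validate its own configuration and the connection to the Kafka output:

      cd /usr/local/filebeat
      ./filebeat test config   # parse and validate filebeat.yml
      ./filebeat test output   # check connectivity to the Kafka brokers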

3. Start Filebeat on all three hosts

      /usr/local/filebeat/filebeat &
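If events do not arrive in Kafka, run Filebeat in the foreground with logging to stderr so errors are visible directly:

      /usr/local/filebeat/filebeat -e -c /usr/local/filebeat/filebeat.yml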

  5. Install Logstash

1. Install Logstash on host A

      tar -zxvf logstash-6.5.2.tar.gz

      mv logstash-6.5.2 /usr/local/logstash

2. Edit the configuration file

      vi /usr/local/logstash/config/logstash-sample.conf

input {
        kafka {
                bootstrap_servers => "192.168.160.100:9092,192.168.160.101:9092,192.168.160.102:9092"
                group_id => "logstash"
                topics => ["msg","nginx002"]   # the topics produced by Filebeat in this guide
                decorate_events => true        # exposes [@metadata][topic], used in the output index name
                consumer_threads => 5
                codec => "json"
                auto_offset_reset => "latest"
        }
}

filter {
        json {
                source => "message"
        }
        mutate {
                remove_field => ["host","prospector","fields","input","log"]
        }
        grok {
                # try the nginx access-log timestamp first, then the application-log format
                match => { "message" => [
                        "%{HTTPDATE:logtime}",
                        "(?<logtime>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}.\d{3}).*\] %{LOGLEVEL:loglevel} "
                ] }
        }
        mutate {
                # un-escape slashes (\/ -> /) in the raw message
                gsub => [ "message", "\\/", "/" ]
        }
        mutate {
                convert => {
                        "usdCnyRate" => "float"
                        "futureIndex" => "float"
                }
        }
        date {
                match => [ "logtime", "YYYY-MM-dd HH:mm:ss.SSS", "dd/MMM/yyyy:HH:mm:ss Z" ]
                target => "@timestamp"
        }
}

output {
        elasticsearch {
                hosts => "192.168.160.101:9200"
                index => "%{[@metadata][topic]}-%{+YYYY-MM-dd}"
        }
}

# Add an nginx access-log pattern for grok

vi /usr/local/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/nginx

NGX %{IPORHOST:client_ip} (%{USER:ident}|-) (%{USER:auth}|-) \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} (%{NOTSPACE:request}|-)(?: HTTP/%{NUMBER:http_version})?|-)" %{NUMBER:status} (?:%{NUMBER:bytes}|-) "(?:%{URI:referrer}|-)" "%{GREEDYDATA:agent}"
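Before launching, the pipeline syntax can be verified without starting the service:

/usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-sample.conf --config.test_and_exit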

3. Start Logstash

/usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-sample.conf &

  6. Install Elasticsearch

1. Install Elasticsearch on host B

      tar -zxvf elasticsearch-6.5.2.tar.gz -C /usr/local/

      cd /usr/local/

      mv elasticsearch-6.5.2/ elasticsearch

2. Edit the configuration files

      vi /usr/local/elasticsearch/config/elasticsearch.yml

cluster.name: test1
node.name: node-1
path.data: /usr/local/elasticsearch/data
path.logs: /usr/local/elasticsearch/logs
network.host: 192.168.160.101
http.port: 9200

vi /etc/sysctl.conf

vm.max_map_count=655360

sysctl -p

vi /etc/security/limits.conf

# Elasticsearch's bootstrap checks require at least 65536 open files and 4096 threads
* soft nofile 65536
* hard nofile 131072
* soft nproc 4096
* hard nproc 4096

3. Create an es user (Elasticsearch refuses to run as root)

      useradd es

      passwd es

      chown -R es:es /usr/local/elasticsearch/

4. Start Elasticsearch as the es user (the binary lives under bin/)

      su - es

      /usr/local/elasticsearch/bin/elasticsearch &
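Once it is up, two quick HTTP checks confirm the node is healthy:

      curl http://192.168.160.101:9200                  # basic node info
      curl http://192.168.160.101:9200/_cat/health?v    # cluster health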

  7. Install Kibana

1. Install Kibana on host C

      tar -zxvf kibana-6.5.2-linux-x86_64.tar.gz -C /usr/local/

      cd /usr/local/

      mv kibana-6.5.2-linux-x86_64/ kibana

2. Edit the configuration file

      vi /usr/local/kibana/config/kibana.yml

server.port: 5601
server.host: "192.168.160.102"
elasticsearch.url: "http://192.168.160.101:9200"

3. Start Kibana

/usr/local/kibana/bin/kibana &
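Kibana should now be reachable in a browser at http://192.168.160.102:5601, where the indices written by Logstash can be added as index patterns. A quick liveness check from the shell:

curl http://192.168.160.102:5601/api/status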

  8. Install nginx on the monitored host

The clock on this host must also be in sync with the cluster.

Add the cluster host mappings:

vi /etc/hosts

1. Install nginx (this zip archive was prepared in advance; install your own nginx if you do not have it)

unzip nginx.zip

mv nginx /usr/local/nginx

2. Start nginx

/usr/local/nginx/sbin/nginx

3. Install Filebeat

tar -zxvf filebeat-6.5.2-linux-x86_64.tar.gz -C /usr/local

mv /usr/local/filebeat-6.5.2-linux-x86_64 /usr/local/filebeat

4. Edit the Filebeat configuration file

vi /usr/local/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    log_topics: nginx002

output.kafka:
  enabled: true
  hosts: ["192.168.160.100:9092","192.168.160.101:9092","192.168.160.102:9092"]
  topic: nginx002

filebeat.config.modules:
  path: /usr/local/filebeat/modules.d/*.yml
  reload.enabled: true

5. Start Filebeat

/usr/local/filebeat/filebeat &
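To verify the pipeline end to end, generate a request against nginx and watch the event travel through Kafka into Elasticsearch (the index name comes from the Logstash output pattern above):

curl http://<nginx-host>/   # append a line to access.log

/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.160.100:9092 --topic nginx002 --from-beginning   # the event should appear in Kafka

curl http://192.168.160.101:9200/_cat/indices?v   # an index named nginx002-YYYY-MM-dd should appear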
