Building an ELK System

I. Basic Architecture

Hostname    IP           Role
kafka1      10.1.1.4     ZooKeeper + Kafka1
kafka2      10.1.1.5     ZooKeeper + Kafka2
kafka3      10.1.1.6     ZooKeeper + Kafka3
logstash    10.1.1.7     Logstash
es1         10.1.1.8     ES1 (data/master)
es2         10.1.1.9     ES2 + Kibana (data/master)
es3         10.1.1.10    ES3 (data/master)

II. Configuring the ZooKeeper Cluster

1. (Run on all three nodes) Extract ZooKeeper to the target directory:

[root@kafka1 ~]# cd /usr/local/src
[root@kafka1 src]# tar -zxf apache-zookeeper-3.5.7-bin.tar.gz -C /usr/local
[root@kafka1 src]# mv /usr/local/apache-zookeeper-3.5.7-bin /usr/local/zookeeper

Create the data and log directories for the ZooKeeper cluster:

[root@kafka1 local]# mkdir -p /data/apache/zookeeper/data/zookeeper
[root@kafka1 local]# mkdir -p /data/apache/zookeeper/datalog/zookeeper

To tell the instances apart, each node needs an ID file, and it must be named myid:

[root@kafka1 local]# echo 0 >/data/apache/zookeeper/data/zookeeper/myid
[root@kafka2 local]# echo 1 >/data/apache/zookeeper/data/zookeeper/myid
[root@kafka3 local]# echo 2 >/data/apache/zookeeper/data/zookeeper/myid

Edit the configuration file. zoo.cfg is identical on all three nodes (a copy shortcut is sketched after the three listings):
zookeeper1:

[root@kafka1 ~]# cd /usr/local/zookeeper/conf/
[root@kafka1 conf]# cp zoo_sample.cfg zoo.cfg
[root@kafka1 conf]# vim zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/data/apache/zookeeper/data/zookeeper/
dataLogDir=/data/apache/zookeeper/datalog/zookeeper/
server.0=10.1.1.4:2888:2889
server.1=10.1.1.5:2888:2889
server.2=10.1.1.6:2888:2889

zookeeper2:

[root@kafka2 ~]# cd /usr/local/zookeeper/conf/
[root@kafka2 conf]# cp zoo_sample.cfg zoo.cfg
[root@kafka2 conf]# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/data/apache/zookeeper/data/zookeeper/
dataLogDir=/data/apache/zookeeper/datalog/zookeeper/
server.0=10.1.1.4:2888:2889
server.1=10.1.1.5:2888:2889
server.2=10.1.1.6:2888:2889

zookeeper3:

[root@kafka3 ~]# cd /usr/local/zookeeper/conf/
[root@kafka3 conf]# cp zoo_sample.cfg zoo.cfg
[root@kafka3 conf]# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/data/apache/zookeeper/data/zookeeper/
dataLogDir=/data/apache/zookeeper/datalog/zookeeper/
server.0=10.1.1.4:2888:2889
server.1=10.1.1.5:2888:2889
server.2=10.1.1.6:2888:2889
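
Since zoo.cfg is identical on all three nodes, you can also edit it once on kafka1 and copy it to the others; a minimal sketch, assuming root SSH access between the hosts:

[root@kafka1 conf]# scp zoo.cfg 10.1.1.5:/usr/local/zookeeper/conf/
[root@kafka1 conf]# scp zoo.cfg 10.1.1.6:/usr/local/zookeeper/conf/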

Then start ZooKeeper on all three nodes:

[root@kafka1 local]# zookeeper/bin/zkServer.sh start
[root@kafka2 local]# zookeeper/bin/zkServer.sh start
[root@kafka3 local]# zookeeper/bin/zkServer.sh start

Check the status of each node in the cluster:

zookeeper1 is a follower:
[root@kafka1 local]# zookeeper/bin/zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

zookeeper2 is the leader:
[root@kafka2 local]# zookeeper/bin/zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader

zookeeper3 is a follower:
[root@kafka3 local]# zookeeper/bin/zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
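
As a quick smoke test, you can also connect with the bundled CLI and list the root znode (on a fresh ensemble it contains only /zookeeper):

[root@kafka1 local]# zookeeper/bin/zkCli.sh -server 10.1.1.4:2181
[zk: 10.1.1.4:2181(CONNECTED) 0] ls /
[zookeeper]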

III. Configuring the Kafka Cluster

1. (Run on all three nodes) Download and extract Kafka to the target directory:

[root@kafka1 ~]# cd /usr/local/src
[root@kafka1 src]# tar -zxf kafka_2.11-2.4.0.tgz -C /usr/local
[root@kafka1 src]# mv /usr/local/kafka_2.11-2.4.0 /usr/local/kafka

2. Edit the configuration file:

Kafka1:

[root@kafka1 local]# cat kafka/config/server.properties 
############################# Server Basics #############################
broker.id=0

############################# Socket Server Settings #############################
listeners=SASL_PLAINTEXT://10.1.1.4:9092

num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600

############################# Log Basics #############################
log.dirs=/usr/local/kafka/logs
num.partitions=3
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

auto.create.topics.enable=false
#default.replication.factor=3
#num.partitions=3

############################# Log Flush Policy #############################

############################# Log Retention Policy #############################
log.retention.hours=6
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
log.cleaner.enable=false
log.cleanup.policy=delete
delete.topic.enable=true

############################# Zookeeper #############################
zookeeper.connect=10.1.1.4:2181,10.1.1.5:2181,10.1.1.6:2181
zookeeper.connection.timeout.ms=600000

############################# Group Coordinator Settings #############################
group.initial.rebalance.delay.ms=0
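
Note: a SASL_PLAINTEXT listener will not start unless SASL is also configured on the broker, and those steps are not shown in this post. A minimal sketch using the PLAIN mechanism follows; the usernames, passwords, and JAAS file path are placeholders, not values from the original setup. If no authentication is needed, use listeners=PLAINTEXT://10.1.1.4:9092 instead.

# Extra server.properties entries (same on every broker):
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

# /usr/local/kafka/config/kafka_server_jaas.conf (placeholder credentials):
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};

# Point the broker JVM at the JAAS file before starting it:
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"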

Configure the Kafka2 node on 10.1.1.5 using the same steps as kafka1; only broker.id and listeners change in the configuration file:

[root@kafka2 local]# cat kafka/config/server.properties
############################# Server Basics #############################
broker.id=1

############################# Socket Server Settings #############################
listeners=SASL_PLAINTEXT://10.1.1.5:9092

num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600

############################# Log Basics #############################
log.dirs=/usr/local/kafka/logs
num.partitions=3
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

auto.create.topics.enable=false
#default.replication.factor=3
#num.partitions=3

############################# Log Flush Policy #############################

############################# Log Retention Policy #############################
log.retention.hours=6
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
log.cleaner.enable=false
log.cleanup.policy=delete
delete.topic.enable=true

############################# Zookeeper #############################
zookeeper.connect=10.1.1.4:2181,10.1.1.5:2181,10.1.1.6:2181
zookeeper.connection.timeout.ms=600000

############################# Group Coordinator Settings #############################
group.initial.rebalance.delay.ms=0

Kafka3:

[root@kafka3 local]# cat kafka/config/server.properties
############################# Server Basics #############################
broker.id=2

############################# Socket Server Settings #############################
listeners=SASL_PLAINTEXT://10.1.1.6:9092

num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600

############################# Log Basics #############################
log.dirs=/usr/local/kafka/logs
num.partitions=3
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

auto.create.topics.enable=false
#default.replication.factor=3
#num.partitions=3

############################# Log Flush Policy #############################

############################# Log Retention Policy #############################
log.retention.hours=6
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
log.cleaner.enable=false
log.cleanup.policy=delete
delete.topic.enable=true

############################# Zookeeper #############################
zookeeper.connect=10.1.1.4:2181,10.1.1.5:2181,10.1.1.6:2181
zookeeper.connection.timeout.ms=600000

############################# Group Coordinator Settings #############################
group.initial.rebalance.delay.ms=0

Start the Kafka cluster by running the following on kafka1, kafka2, and kafka3; if there are no errors, it succeeded:

[root@kafka1 ~]# cd /usr/local/kafka
[root@kafka1 kafka]# bin/kafka-server-start.sh config/server.properties
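
The same script also accepts a -daemon flag to run the broker in the background:

[root@kafka1 kafka]# bin/kafka-server-start.sh -daemon config/server.properties

Since auto.create.topics.enable=false above, the topic that Logstash will consume must be created by hand. A sketch creating a "nginx" topic with 3 partitions and 3 replicas (with SASL_PLAINTEXT listeners you would also pass --command-config pointing at a client properties file holding your SASL settings):

[root@kafka1 kafka]# bin/kafka-topics.sh --create --bootstrap-server 10.1.1.4:9092 --topic nginx --partitions 3 --replication-factor 3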

IV. Installing Logstash

1. Install the JDK and Logstash

[root@logstash ~]# rpm -ivh jdk-8u241-linux-x64.rpm 
[root@logstash ~]# tar -zxf logstash-7.6.2.tar.gz -C /usr/local

2. Edit the Logstash configuration file:

[root@logstash ~]# cd /usr/local/logstash-7.6.2
[root@logstash logstash-7.6.2]# vim config/logstash.yml
node.name: logstash_filebeat
pipeline.id: logstash_filebeat_p
pipeline.workers: 3 
pipeline.batch.size: 5000
pipeline.batch.delay: 200
http.host: "0.0.0.0"
http.port: 19600-19700

[root@logstash logstash-7.6.2]# vim config/jvm.options
Set the minimum and maximum heap sizes (-Xms/-Xmx) to half of the machine's physical memory.
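
For example, on a host with 8 GB of RAM the heap lines would look like this (illustrative values, not from the original post):

-Xms4g
-Xmx4g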

3. Create the Logstash pipeline file:

[root@logstash ~]# cat /etc/logstash/conf.d/kafka-es.conf 
input {
    # Consume JSON events from the Kafka "nginx" topic
    kafka {
        codec => "json"
        topics_pattern => "nginx"
        bootstrap_servers => "10.1.1.4:9092,10.1.1.5:9092,10.1.1.6:9092"
        auto_offset_reset => "latest"
    }
}

filter {
    # Widely used snippet intended to render @timestamp in local time
    ruby {
        code => "event.timestamp.time.localtime"
    }
}

output {
    # Write events to the ES cluster under the "nginx" index
    elasticsearch {
        hosts => ["10.1.1.8:9200","10.1.1.9:9200","10.1.1.10:9200"]
        index => "nginx"
    }
}
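
Before starting, the pipeline syntax can be validated without actually running it, using Logstash's built-in config check:

[root@logstash logstash-7.6.2]# bin/logstash -f /etc/logstash/conf.d/kafka-es.conf --config.test_and_exit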

Start Logstash. The tarball install does not ship a systemd unit, so unless you have created one yourself, start the binary directly and point it at the pipeline file:

[root@logstash ~]# /usr/local/logstash-7.6.2/bin/logstash -f /etc/logstash/conf.d/kafka-es.conf

V. Configuring the ES Cluster

Perform the following on es1, es2, and es3.
Create an elk user to start the ES cluster: Elasticsearch refuses to run as root, so we create a user and group, set a password, and add the user to sudoers:

[root@es1 ~]# useradd elk
[root@es1 ~]# passwd elk
[root@es1 ~]# vim /etc/sudoers
root    ALL=(ALL)       ALL
elk     ALL=(ALL)       ALL

Create directories on all three servers: /opt/zip for archives, /opt/soft for installed software, /var/es/data for ES data, and /var/es/logs for ES logs:

[root@es1 ~]# mkdir -p /opt/soft /opt/zip
[root@es1 ~]# mkdir -p /var/es/{data,logs}
[root@es1 ~]# chown -R elk:elk /var/es/{data,logs}

Install the JDK and ES:

[root@es1 ~]# rpm -ivh jdk-8u241-linux-x64.rpm 
[root@es1 ~]# tar -xvf /opt/zip/elasticsearch-7.6.0-linux-x86_64.tar.gz -C /opt/soft/
[root@es1 ~]# chown -R elk:elk /opt/soft/elasticsearch-7.6.0

Edit the system configuration files on all three servers.
Raise the thread limit: Elasticsearch executes a request by breaking it into stages and handing those stages to different thread pool executors, and different task types use different pools, so Elasticsearch needs to be able to create a large number of threads. The max-threads check ensures the process may create enough threads under normal use, at least 4096:

[root@es1 ~]# vim /etc/security/limits.conf
*                soft    nofile          65536
*                hard    nofile          65536
*                soft    nproc           4096
*                hard    nproc           4096
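
The limits take effect on the next login; you can confirm them for the elk user like so:

[root@es1 ~]# su - elk -c 'ulimit -n -u'
open files                      (-n) 65536
max user processes              (-u) 4096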

Increase the virtual-memory map count:

[root@es1 ~]# vim /etc/sysctl.conf
vm.max_map_count=262144

Apply the new setting:

[root@es1 ~]# sysctl -p
vm.max_map_count = 262144

For reference, a single-node ES configuration file would be:

path.data: /var/es/data
path.logs: /var/es/logs
network.host: 10.1.1.8
http.port: 9200
discovery.type: single-node

The ES cluster configuration is as follows.
ES1 node:

[root@es1 ~]# cat /opt/soft/elasticsearch-7.6.0/config/elasticsearch.yml
cluster.name: my-app
node.name: es1
node.master: true
node.data: true
path.data: /var/es/data
path.logs: /var/es/logs
network.host: 10.1.1.8
http.port: 9200
transport.tcp.port: 9300
cluster.initial_master_nodes: ["es1"]
discovery.zen.ping.unicast.hosts: ["10.1.1.8:9300","10.1.1.9:9300", "10.1.1.10:9300"]
discovery.zen.minimum_master_nodes: 2

ES2 node:

[root@es2 ~]# cat /opt/soft/elasticsearch-7.6.0/config/elasticsearch.yml
cluster.name: my-app
node.name: es2
node.master: true
node.data: true
path.data: /var/es/data
path.logs: /var/es/logs
network.host: 10.1.1.9
http.port: 9200
transport.tcp.port: 9300
cluster.initial_master_nodes: ["es1"]
discovery.zen.ping.unicast.hosts: ["10.1.1.8:9300","10.1.1.9:9300", "10.1.1.10:9300"]
discovery.zen.minimum_master_nodes: 2 

ES3 node:

[root@es3 ~]# cat /opt/soft/elasticsearch-7.6.0/config/elasticsearch.yml
cluster.name: my-app
node.name: es3
node.master: true
node.data: true
path.data: /var/es/data
path.logs: /var/es/logs
network.host: 10.1.1.10
http.port: 9200
transport.tcp.port: 9300
cluster.initial_master_nodes: ["es1"]
discovery.zen.ping.unicast.hosts: ["10.1.1.8:9300","10.1.1.9:9300", "10.1.1.10:9300"]
discovery.zen.minimum_master_nodes: 2

Start ES on each of the three nodes; it must be started as the non-root user. (Note: the discovery.zen.* settings above are legacy names; ES 7.x accepts discovery.zen.ping.unicast.hosts as a deprecated alias for discovery.seed_hosts and ignores discovery.zen.minimum_master_nodes.)

[root@es1 ~]# su - elk
[elk@es1 ~]$ cd /opt/soft/elasticsearch-7.6.0/
[elk@es1 elasticsearch-7.6.0]$ bin/elasticsearch
# If the foreground startup looks clean, you can instead run it as a daemon:
[elk@es1 elasticsearch-7.6.0]$ bin/elasticsearch -d

Once startup completes, open http://10.1.1.8:9200/_cat/nodes?v in a browser and you will see:

ip          heap.percent ram.percent cpu  load_1m load_5m load_15m node.role master name
10.1.1.8           14          94     0    0.06    0.03     0.05   dilm       *      es1
10.1.1.9           12          97     3    0.14    0.09     0.11   dilm       -      es2
10.1.1.10          12          93     0    0.32    0.08     0.07   dilm       -      es3
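
The same check works from the command line via the cluster health API; a healthy three-node cluster reports "status" : "green" and "number_of_nodes" : 3:

[root@es1 ~]# curl http://10.1.1.8:9200/_cluster/health?pretty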

At this point the ES cluster is up.

The cluster above runs one node per server. If you only have a single server and need multiple nodes, the process is analogous to single-machine multi-instance deployments of ZooKeeper, MySQL, or Redis.

VI. Installing Kibana

Install the JDK (skip this if it was already installed during the ES setup) and Kibana on ES2:

[root@es2 ~]# rpm -ivh jdk-8u241-linux-x64.rpm 
[root@es2 ~]# tar -zxf /opt/zip/kibana-7.6.0-linux-x86_64.tar.gz -C /opt/soft/

Edit the configuration:

[root@es2 soft]# cat kibana-7.6.0-linux-x86_64/config/kibana.yml | grep -v "#"
server.port: 5601
server.host: "10.1.1.9"
server.name: "es2"
elasticsearch.hosts: ["http://10.1.1.8:9200","http://10.1.1.9:9200","http://10.1.1.10:9200"]
i18n.locale: "zh-CN"

Start Kibana. Like Elasticsearch, Kibana refuses to run as root, so hand the directory to the elk user and start it as elk:

[root@es2 soft]# chown -R elk:elk /opt/soft/kibana-7.6.0-linux-x86_64
[root@es2 soft]# su - elk
[elk@es2 ~]$ cd /opt/soft/kibana-7.6.0-linux-x86_64/
[elk@es2 kibana-7.6.0-linux-x86_64]$ bin/kibana
# If there are no errors on startup, you can run it in the background instead:
#[elk@es2 kibana-7.6.0-linux-x86_64]$ nohup bin/kibana &
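
To verify without a browser, Kibana exposes a status API that returns JSON once the server is ready:

[elk@es2 ~]$ curl -s http://10.1.1.9:5601/api/status | head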

VII. Using the Kibana Web UI

Once Kibana has started, you can open http://10.1.1.9:5601:
