How to Deploy a Logging Stack: Elasticsearch, Logstash, Kibana, and Kafka


I. Base Environment

CPU: Intel® Celeron® CPU G550 @ 2.60GHz × 2
Memory: 8 GB
OS: CentOS 7-1804
JDK: 1.8.0_181-amd64

II. Installing Elasticsearch

We use the Install Elasticsearch with RPM method (see the official installation guide).

1. Import the Elasticsearch PGP key

Download and install the public signing key

 rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

2. Install from the RPM repository

Create a file called elasticsearch.repo in the /etc/yum.repos.d/ directory for RedHat-based distributions, containing:

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

And your repository is ready for use. You can now install Elasticsearch with one of the following commands:

sudo yum install elasticsearch 

(Figure: installation output and installed file paths)

3. Edit the configuration so it is reachable from other machines

nano /etc/elasticsearch/elasticsearch.yml

Find the line:

#network.host : "localhost"

Remove the # comment marker and set the host to 0.0.0.0:

network.host: 0.0.0.0

Note that once Elasticsearch binds to a non-loopback address it enforces its production bootstrap checks at startup, so check the logs in /var/log/elasticsearch/ if the node then fails to come up.


4. Running Elasticsearch with systemd

To configure Elasticsearch to start automatically when the system boots up, run the following commands:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service

Elasticsearch can be started and stopped as follows:

sudo systemctl start elasticsearch.service
sudo systemctl stop elasticsearch.service

These commands provide no feedback as to whether Elasticsearch was started successfully or not. Instead, this information will be written in the log files located in /var/log/elasticsearch/.

By default the Elasticsearch service doesn’t log information in the systemd journal. To enable journalctl logging, the --quiet option must be removed from the ExecStart command line in the elasticsearch.service file.

5. Check that it is running

You can test that your Elasticsearch node is running by sending an HTTP request to port 9200 on localhost (using curl):

curl -X GET "localhost:9200/"

which should give you a response something like this:

{
  "name" : "Cp8oag6",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
  "version" : {
    "number" : "6.4.2",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "f27399d",
    "build_date" : "2016-03-30T09:51:41.449Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "1.2.3",
    "minimum_index_compatibility_version" : "1.2.3"
  },
  "tagline" : "You Know, for Search"
}
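If you want to check the response programmatically rather than by eye, here is a small sketch that parses the JSON body (pasted from the example response above, trimmed to the fields checked; with the node running you would fetch it from localhost:9200 instead):

```python
import json

# Sample response body from `curl -X GET "localhost:9200/"`,
# trimmed to the fields we inspect below
body = '''
{
  "name": "Cp8oag6",
  "cluster_name": "elasticsearch",
  "version": {"number": "6.4.2", "lucene_version": "7.4.0"},
  "tagline": "You Know, for Search"
}
'''

info = json.loads(body)
print(info["cluster_name"], info["version"]["number"])  # elasticsearch 6.4.2
```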

III. Installing Kibana

1. Installing from the RPM

Create a file called kibana.repo in the /etc/yum.repos.d/ directory, containing:

[kibana-6.x]
name=Kibana repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

And your repository is ready for use. You can now install Kibana with one of the following commands:

sudo yum install kibana 

(Figure: installed file paths)

2. Edit the configuration so it is reachable from other machines

Open the configuration file:

nano /etc/kibana/kibana.yml

Find the line:

#server.host: "localhost"

and change it to:

server.host: "0.0.0.0"


3. Running Kibana with systemd

To configure Kibana to start automatically when the system boots up, run the following commands:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service

Kibana can be started and stopped as follows:

sudo systemctl start kibana.service
sudo systemctl stop kibana.service

These commands provide no feedback as to whether Kibana was started successfully or not. Instead, this information will be written in the log files located in /var/log/kibana/.

4. Accessing Kibana

Kibana is a web application that you access through port 5601. All you need to do is point your web browser at the machine where Kibana is running and specify the port number. For example,
localhost:5601 or http://YOURDOMAIN.com:5601.

When you access Kibana, the Discover page loads by default with the default index pattern selected. The time filter is set to the last 15 minutes and the search query is set to match-all (*).

If you don’t see any documents, try setting the time filter to a wider time range. If you still don’t see any results, it’s possible that you don’t have any documents.

Checking Kibana Status

You can reach the Kibana server’s status page by navigating to localhost:5601/status. The status page displays information about the server’s resource usage and lists the installed plugins.


IV. Installing Logstash

1. Installing from the RPM

Add the following in your /etc/yum.repos.d/ directory in a file with a .repo suffix, for example logstash.repo:

[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

And your repository is ready for use. You can install it with:

sudo yum install logstash

(Figure: installed file paths)

2. Running Logstash by Using Systemd

Distributions like Debian Jessie, Ubuntu 15.10+, and many of the SUSE derivatives use systemd and the systemctl command to start and stop services. Logstash places the systemd unit files in /etc/systemd/system for both deb and rpm. After installing the package, you can start up or stop Logstash with:

sudo systemctl start logstash.service
sudo systemctl stop logstash.service

To configure Logstash to start automatically when the system boots up, run the following commands:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable logstash.service

3. Verify the installation: stashing your first event

First, let’s test your Logstash installation by running the most basic Logstash pipeline.

A Logstash pipeline has two required elements, input and output, and one optional element, filter. The input plugins consume data from a source, the filter plugins modify the data as you specify, and the output plugins write the data to a destination.

(Figure: a Logstash pipeline — inputs, filters, outputs)

To test your Logstash installation, run the most basic Logstash pipeline. For example:

cd /usr/share/logstash
bin/logstash -e  'input {stdin{}} output {stdout{}}' --path.settings '/etc/logstash/'
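The input → filter → output flow above can be sketched as a toy pipeline in Python. This is purely illustrative — not Logstash's actual implementation — but it mirrors the json filter and the _jsonparsefailure tag used in the pipeline configuration later in this article:

```python
import json

def input_stdin(lines):
    # input plugin: consume raw events from a source
    for line in lines:
        yield {"message": line}

def filter_json(events):
    # filter plugin: parse the "message" field as JSON, tagging failures
    for event in events:
        try:
            event.update(json.loads(event["message"]))
        except ValueError:
            event.setdefault("tags", []).append("_jsonparsefailure")
        yield event

def output_stdout(events):
    # output plugin: pass events that parsed cleanly to a destination
    return [e for e in events if "_jsonparsefailure" not in e.get("tags", [])]

events = output_stdout(filter_json(input_stdin(['{"log": "hello"}', 'not json'])))
print(events)
```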



V. Installing ZooKeeper

1. Download

Go to the download page:
https://www.apache.org/dyn/closer.cgi/zookeeper/
It recommends the best mirror for your network. The best mirror for me was:
http://mirrors.shu.edu.cn/apache/zookeeper/
which gives the download URL:
https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.4-beta/zookeeper-3.5.4-beta.tar.gz
Download it with:

wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.4-beta/zookeeper-3.5.4-beta.tar.gz

Unpack it:

tar -zxvf zookeeper-3.5.4-beta.tar.gz

Move the unpacked directory to /usr/share/:

 sudo mv zookeeper-3.5.4-beta /usr/share/

2. Edit the configuration file

ZooKeeper ships with a sample configuration; copy it and edit the copy:

cd /usr/share/zookeeper-3.5.4-beta/conf
cp zoo_sample.cfg zoo.cfg
nano zoo.cfg

The main setting to change is the dataDir parameter.

This configuration runs ZooKeeper in standalone mode, which is fine for a development environment.
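For reference, a minimal standalone zoo.cfg looks like the fragment below (the dataDir path is an example — point it at a directory that exists and is writable):

```properties
# milliseconds per tick, ZooKeeper's basic time unit
tickTime=2000
# where snapshots and transaction logs are stored -- the key setting to change
dataDir=/var/lib/zookeeper
# port clients connect to
clientPort=2181
```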

3. Start it

cd /usr/share/zookeeper-3.5.4-beta/bin
./zkServer.sh start


VI. Installing Kafka

1. Download

Download page:
https://www.apache.org/dyn/closer.cgi?path=/kafka/2.0.0/kafka_2.11-2.0.0.tgz
Pick the nearest mirror and download:

wget http://mirrors.hust.edu.cn/apache/kafka/2.0.0/kafka_2.11-2.0.0.tgz

Unpack it and move it to /usr/share/:

tar -zxvf kafka_2.11-2.0.0.tgz 
sudo mv kafka_2.11-2.0.0 /usr/share

2. Start it as a daemon

cd /usr/share/kafka_2.11-2.0.0
bin/kafka-server-start.sh -daemon config/server.properties

Note the -daemon flag, which runs the broker in the background.

3. Create a topic named logger-channel

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic logger-channel

Check that it was created:

bin/kafka-topics.sh --list --zookeeper localhost:2181

Test producing and consuming with two terminals: one produces messages, the other consumes them.

1. Producer terminal

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic logger-channel
>message1
>message2

2. Consumer terminal

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic logger-channel --from-beginning

The consumer prints message1 and message2 in order.

You can also keep both terminals open: text typed into the producer terminal shows up in the consumer terminal almost immediately.
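The behavior is easy to picture: the broker appends each message to the topic's log, and --from-beginning tells the consumer to start at offset 0 instead of at the tail. A toy sketch of that offset semantics (illustrative only, ignoring partitions and persistence):

```python
class ToyTopic:
    """A single-partition topic modeled as an append-only log."""

    def __init__(self):
        self.log = []

    def produce(self, message):
        # the broker appends; existing messages are never modified
        self.log.append(message)

    def consume(self, offset=0):
        # --from-beginning => offset 0; a "latest" consumer would
        # start at len(self.log) and see only new messages
        return self.log[offset:]

topic = ToyTopic()
topic.produce("message1")
topic.produce("message2")
print(topic.consume(offset=0))  # ['message1', 'message2']
```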


VII. Connecting Kafka and Logstash


All Logstash pipeline configuration files must be placed in the /etc/logstash/conf.d directory:

cd /etc/logstash/
sudo cp logstash-sample.conf conf.d/first-pipeline.conf
nano conf.d/first-pipeline.conf

Change the configuration file to:

# Logstash pipeline: Kafka -> Logstash -> Elasticsearch.
input {
  kafka {
    id => "my_plugin_id"
    bootstrap_servers => "localhost:9092"
    topics => ["logger-channel"]
    auto_offset_reset => "latest"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  if "_jsonparsefailure" not in [tags] {
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["localhost:9200"]
    }
  }
}

Edit logstash.yml and set config.reload.automatic: true so that Logstash picks up configuration changes without a restart.
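The relevant lines in /etc/logstash/logstash.yml (config.reload.interval is optional; 3s is its default):

```yaml
config.reload.automatic: true
config.reload.interval: 3s
```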

Restart the Logstash service:

sudo systemctl stop logstash.service
sudo systemctl start logstash.service

VIII. Logstash and Elasticsearch

By default, Logstash creates indices in Elasticsearch named logstash-*, where * is a date in yyyy.MM.dd format. With the filter configured above, the properties of the incoming JSON become document fields in Elasticsearch. For example, given a log entry like:

{
	"key1":"po",
	"key2":"poice",
	"key3":"",
	"key4":"",
	"key5":"",
	"log":"出错了,赶紧解决问题",
	"logType":2,
	"project":"qusu-core-service",
	"source":"qusu_logger_service"
}

JSON properties such as logType and log are stored in Elasticsearch and can be used as search criteria.
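As an illustration, the sketch below builds the daily index name that Logstash's elasticsearch output uses by default and a record in the format shown above (field names taken from the sample; the date and values are examples):

```python
import datetime
import json

def default_index_name(day):
    # Logstash's default index pattern is logstash-%{+YYYY.MM.dd}
    return "logstash-" + day.strftime("%Y.%m.%d")

# A record shaped like the sample log entry above
record = {
    "log": "something went wrong",
    "logType": 2,
    "project": "qusu-core-service",
    "source": "qusu_logger_service",
}

print(default_index_name(datetime.date(2018, 10, 1)))  # logstash-2018.10.01
print(json.dumps(record))
```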


IX. Deploy a Spring Boot microservice that receives JSON-format logs

The microservice is already packaged as a jar named logger-service-0.0.1-SNAPSHOT.jar.

Put the jar in /opt/logservice/.

Start it with:

java -jar /opt/logservice/logger-service-0.0.1-SNAPSHOT.jar --spring.profiles.active=default

If it started successfully, you can run the following from a terminal:

curl -G http://localhost:8081/actuator/

A response like the following proves the service is up:

{
  "_links": {
    "self": {"href": "http://localhost:8081/actuator", "templated": false},
    "acm": {"href": "http://localhost:8081/actuator/acm", "templated": false},
    "health": {"href": "http://localhost:8081/actuator/health", "templated": false},
    "info": {"href": "http://localhost:8081/actuator/info", "templated": false},
    "refresh": {"href": "http://localhost:8081/actuator/refresh", "templated": false}
  }
}
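A quick programmatic check against a response like the one above (using the JSON body shown, trimmed to three links; with the service running you would fetch it from http://localhost:8081/actuator instead):

```python
import json

# Response body from GET /actuator, as shown above
body = """
{"_links": {
  "self":   {"href": "http://localhost:8081/actuator",        "templated": false},
  "health": {"href": "http://localhost:8081/actuator/health", "templated": false},
  "info":   {"href": "http://localhost:8081/actuator/info",   "templated": false}
}}
"""

links = json.loads(body)["_links"]
# the health endpoint is what you would poll in practice
print(sorted(links))  # ['health', 'info', 'self']
```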

X. Summary: component start order and commands after an OS reboot

1. ZooKeeper

cd /usr/share/zookeeper-3.5.4-beta/bin
./zkServer.sh start

2. Kafka

cd /usr/share/kafka_2.11-2.0.0
bin/kafka-server-start.sh -daemon config/server.properties

3. Elasticsearch, Logstash, Kibana

These three are configured to start automatically at boot. If any of them failed to start, start it manually:

sudo systemctl start elasticsearch.service
sudo systemctl start kibana.service
sudo systemctl start logstash.service

Check whether a service is healthy with:

systemctl status XXXX.service

Query the logs with:

# View the logs for a unit
  sudo journalctl -u XXX.service
  sudo journalctl -u XXX.service --since today
# Follow the latest logs for a unit in real time
  sudo journalctl -u XXX.service -f

4. The API service

Run it in the background:

nohup java -jar qusu-logger-service-0.0.1-SNAPSHOT.jar --spring.profiles.active=default >/dev/null 2>&1 &

XI. References

  1. 聂晨: SpringBoot + Kafka + ELK distributed log collection