Single-node ELK setup, testing, and data backup

ELK is short for Elasticsearch, Logstash, and Kibana.

Purpose: centralized collection of system logs.

Installation:

Install JDK 1.8, then Elasticsearch, Logstash, and Kibana (ideally all on the same version); each ships as an archive that works as soon as it is extracted.
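As a concrete sketch (assuming version 5.5.0, taken from the paths used later in this guide, and the standard artifacts.elastic.co download layout), the installation boils down to:

```shell
# Download and extract the three components; adjust the version and the
# platform suffix (the Kibana archive is platform-specific) to your system.
cd /usr/local/cellar
curl -LO https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.0.tar.gz
curl -LO https://artifacts.elastic.co/downloads/logstash/logstash-5.5.0.tar.gz
curl -LO https://artifacts.elastic.co/downloads/kibana/kibana-5.5.0-darwin-x86_64.tar.gz
for f in ./*-5.5.0*.tar.gz; do tar -xzf "$f"; done
```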

Edit the configuration files:

Elasticsearch (config/elasticsearch.yml) after the changes:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
transport.tcp.port: 9300
# Snapshot repository path, used for backing up data
path.repo: ["/usr/local/cellar/elasticsearch-5.5.0/backup/agwms_backup"]

Add a new Logstash pipeline configuration file:

input {
  tcp {
    mode => "server"
    host => "127.0.0.1"
    port => 4560
    codec => json_lines
  }
}
filter {
  mutate {
    rename => { "[host][name]" => "host" }
  }
}
output {
  elasticsearch {
    action => "index"
    hosts  => "localhost:9200"   # may be an array listing several hosts
    index  => "xxx"              # index name
    user     => "elastic"        # string values must be quoted
    password => "changeme"
  }
}
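With this pipeline running, the TCP input can be exercised before wiring up any application: pipe one JSON line to port 4560 (a sketch; the field names are arbitrary and the index name xxx comes from the output block above).

```shell
# The json_lines codec expects newline-delimited JSON, which echo provides.
echo '{"message":"hello elk","level":"INFO"}' | nc 127.0.0.1 4560

# A moment later, the event should be searchable in Elasticsearch:
curl -s 'http://localhost:9200/xxx/_search?q=message:hello&pretty'
```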

Startup:

Elasticsearch: bin/elasticsearch       (default port: 9200)

Logstash: bin/logstash -f /usr/local/cellar/logstash/config/logstash-my.conf     (the config file added above; default monitoring port: 9600)

Kibana: bin/kibana  (default port: 5601)
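Whether all three services came up can be checked against their default ports (assuming the defaults above; Logstash exposes a monitoring API on 9600):

```shell
curl -s 'http://localhost:9200/_cluster/health?pretty'            # Elasticsearch
curl -s 'http://localhost:9600/?pretty'                           # Logstash monitoring API
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:5601/' # Kibana, expect 200
```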

 

Testing:

1. Add the dependency to the Spring Boot project:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.11</version>
</dependency>

 

2. Add a logback.xml configuration file under the resources directory:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>127.0.0.1:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder charset="UTF-8"> <!-- the encoder may specify a charset, which matters when logging Chinese text -->
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="STDOUT" />
    </root>

</configuration>

3. In the Spring Boot project, write log data with logger.info("...").

 

Checking the result:

1. Start the three ELK services and the Spring Boot project.

2. In the left-hand menu at localhost:5601:

Management ---> Index Patterns ---> Create Index Pattern ---> enter xxx ---> Next step ---> select @timestamp ---> the data then appears under Discover in the menu
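If nothing appears in Discover, it is worth checking from the command line whether the xxx index was created at all (a sketch using the _cat API):

```shell
# List all indices; the xxx index written by Logstash should appear here.
curl -s 'http://localhost:9200/_cat/indices?v'
```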

 

 

Data backup

1. The snapshot repository path has already been configured in the Elasticsearch config file (path.repo).

2. Open Dev Tools from the left-hand menu at localhost:5601.

3. Before backing up, create a repository to hold the data. Supported repository types are shared filesystem, Amazon S3, HDFS, and Azure Cloud; for the fs type, the location must lie inside one of the paths listed in path.repo.

PUT /_snapshot/backup
{
  "type": "fs",
  "settings": {
    "location": "/usr/local/cellar/elasticsearch-5.5.0/backup/agwms_backup"
  }
}

4. Back up all indices:

PUT /_snapshot/backup/xxx      (xxx is the snapshot name)

5. Back up a single index:

PUT /_snapshot/backup/backup
{
  "indices": "xxx"
}

6. Check whether the backup succeeded:

GET /_snapshot/backup/backup
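The same snapshot APIs are reachable with curl outside Dev Tools, and a snapshot can later be brought back with the _restore endpoint (a sketch using the standard snapshot API):

```shell
# List every snapshot registered in the "backup" repository:
curl -s 'http://localhost:9200/_snapshot/backup/_all?pretty'

# Restore the snapshot named "backup"; close or delete the target index
# first, since an open index with the same name blocks the restore.
curl -s -XPOST 'http://localhost:9200/_snapshot/backup/backup/_restore'
```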

 
