ELK (0) -- ELK Setup and Application

 

 

I. ELK Setup and Application

Environment preparation

Following http://www.cnblogs.com/xushuyi/articles/7098566.html, clone a virtual machine from the master host (192.168.56.105).

Linux servers prepared:

1. 192.168.56.105 - elasticsearch, logstash, Kibana (abandoned: installing logstash on this VM had problems)

2. 10.28.37.65 - elasticsearch, logstash, Kibana

Installation

https://www.elastic.co/downloads

Place the downloaded tar archives under /opt/myinstall:

 

 

1. Install elasticsearch

https://www.elastic.co/downloads/elasticsearch

https://www.elastic.co/guide/cn/elasticsearch/guide/current/running-elasticsearch.html

Before installing, check the Java environment (Java 1.8 or later is required).
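As a small sketch (the helper name and parsing approach are mine, not from the original post), the major version can be pulled out of the `java -version` banner so a script can check the 1.8+ requirement:

```shell
# parse_java_major: extract the major Java version from a `java -version`
# banner line, mapping legacy "1.x" numbering (1.8 -> 8) to the modern scheme.
parse_java_major() {
  echo "$1" | awk -F '"' '{print $2}' | awk -F '.' '{ if ($1 == 1) print $2; else print $1 }'
}

parse_java_major 'java version "1.8.0_151"'    # prints 8
parse_java_major 'openjdk version "11.0.2"'    # prints 11

# On the server itself you would feed it the real banner:
#   parse_java_major "$(java -version 2>&1 | head -n 1)"
```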

 

 

Run: tar -zxvf elasticsearch-6.1.2.tar.gz

Edit elasticsearch.yml and jvm.options:

# ======================== Elasticsearch Configuration =========================

#

# NOTE: Elasticsearch comes with reasonable defaults for most settings.

#       Before you set out to tweak and tune the configuration, make sure you

#       understand what are you trying to accomplish and the consequences.

#

# The primary way of configuring a node is via this file. This template lists

# the most important settings you may want to configure for a production cluster.

#

# Please consult the documentation for further information on configuration options:

# https://www.elastic.co/guide/en/elasticsearch/reference/index.html#

# ---------------------------------- Cluster -----------------------------------

#

# Use a descriptive name for your cluster:

Find cluster.name in the config file, uncomment it, and set the cluster name:

cluster.name: elasticsearch

#

# ------------------------------------ Node ------------------------------------

#

# Use a descriptive name for the node:

Find node.name, uncomment it, and set the node name:

node.name: elk-1

#

# Add custom attributes to the node:

#

#node.attr.rack: r1

#

# ----------------------------------- Paths ------------------------------------

#

# Path to directory where to store the data (separate multiple locations by comma):

Change the path where data is stored:

path.data: /data/es-data

#

# Path to log files:

#

Change the log path:

path.logs: /var/log/elasticsearch/

#

# ----------------------------------- Memory -----------------------------------

#

# Lock the memory on startup:

Lock memory on startup so it cannot be swapped out:

bootstrap.memory_lock: true

#

# Make sure that the heap size is set to about half the memory available

# on the system and that the owner of the process is allowed to use this

# limit.

#

# Elasticsearch performs poorly when the system is swapping the memory.

#

# ---------------------------------- Network -----------------------------------

#

# Set the bind address to a specific IP (IPv4 or IPv6):

Network address to listen on:

network.host: 192.168.56.105

#

# Set a custom port for HTTP:

Port to listen on:

http.port: 9200

#

# For more information, consult the network module documentation.

#

# --------------------------------- Discovery ----------------------------------

#

# Pass an initial list of hosts to perform discovery when new node is started:

# The default list of hosts is ["127.0.0.1", "[::1]"]

#

#discovery.zen.ping.unicast.hosts: ["host1", "host2"]

#

# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):

#

#discovery.zen.minimum_master_nodes:

#

# For more information, consult the zen discovery module documentation.

#

# ---------------------------------- Gateway -----------------------------------

#

# Block initial recovery after a full cluster restart until N nodes are started:

#

#gateway.recover_after_nodes: 3

#

# For more information, consult the gateway module documentation.

#

# ---------------------------------- Various -----------------------------------

#

# Require explicit names when deleting indices:

#

#action.destructive_requires_name: true

Add new parameters so the head plugin can access es (for 5.x+; add them manually if they are missing - they solve the cross-origin access problem):

http.cors.enabled: true

http.cors.allow-origin: "*"

bootstrap.system_call_filter: false
## JVM configuration

 

################################################################

## IMPORTANT: JVM heap size

################################################################

##

## You should always set the min and max JVM heap

## size to the same value. For example, to set

## the heap to 4 GB, set:

##

## -Xms4g

## -Xmx4g

##

## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html

## for more information

##

################################################################

 

# Xms represents the initial size of total heap space

# Xmx represents the maximum size of total heap space

-Xms512m

-Xmx512m

 

################################################################

## Expert settings

################################################################

##

## All settings below this section are considered

## expert settings. Don't tamper with them unless
## you understand what you are doing

##

Create the elasticsearch data directory and change its owner and group:

# mkdir -p /data/es-data   (custom directory for storing data)

Create the elasticsearch log directory and change its owner and group:

mkdir -p /var/log/elasticsearch

 

Create the group:

#groupadd elsearch

Create the user:

#useradd elsearch -g elsearch -p elasticsearch

Give the elsearch user ownership:

#cd /opt/

#ls

#chown -R elsearch:elsearch elasticsearch-6.1.2

#chown -R elsearch:elsearch /data/es-data

#chown -R elsearch:elsearch /var/log/elasticsearch/

Switch to the elsearch user, then start:

su elsearch

cd /opt/elasticsearch-6.1.2

./bin/elasticsearch

Start ElasticSearch in the background with:

./bin/elasticsearch -d

Verify it started correctly:

ps -ef | grep elasticsearch

curl http://192.168.56.105:9200/

 

 

Problems encountered while starting elasticsearch are summarized below:

Startup problem 1

Solution:

http://blog.csdn.net/lahand/article/details/78954112

 

 

Startup problem 2

Solution:

[root@elk /]# chown -R elsearch:elsearch /data/es-data

[root@elk /]# chown -R elsearch:elsearch /var/log/elasticsearch/

 

 

Startup problem 3

Solution:

A few parameters need to be changed, otherwise startup fails.

vim /etc/security/limits.conf

Append the following at the end (elsearch is the startup user; * can also be used instead):

elsearch soft nofile 65536

elsearch hard nofile 65536

elsearch soft nproc 4096

elsearch hard nproc 4096

elsearch soft memlock unlimited

elsearch hard memlock unlimited
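After logging in again as elsearch, the new limits can be double-checked; a quick sketch (my addition - the expected values apply to the server configured above):

```shell
# Max open file descriptors; should print 65536 on the configured server.
ulimit -n
# Max user processes; should print 4096 on the configured server.
ulimit -u
```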

 

One more parameter to change:

vim /etc/security/limits.d/90-nproc.conf

Change the 1024 in it to 2048 (ES requires at least 2048):

*          soft    nproc     2048

 

Note one more problem (the following appeared in the log and also caused startup to fail; it took a long time to figure out):

[2017-06-14T19:19:01,641][INFO ][o.e.b.BootstrapChecks    ] [elk-1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks

[2017-06-14T19:19:01,658][ERROR][o.e.b.Bootstrap          ] [elk-1] node validation exception

[1] bootstrap checks failed

[1]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk    

Solution: add one more parameter to the configuration file (the exact purpose of this parameter is still unclear to me):

vim /etc/elasticsearch/elasticsearch.yml

bootstrap.system_call_filter: false

 

Startup problem 4

Solution:

Reference: http://blog.csdn.net/jiankunking/article/details/65448030

Switch to the root user and edit sysctl.conf:

#vi /etc/sysctl.conf

Add the following setting:

vm.max_map_count=655360

Run the following command to apply it:

sysctl -p

Then restart elasticsearch and it will start successfully.
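A quick check (my addition, not in the original post) that the kernel actually picked up the new value:

```shell
# Read the live kernel setting; on the configured server this should print 655360.
cat /proc/sys/vm/max_map_count
```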

 

Startup problem 5: java.lang.IllegalStateException: failed to obtain node locks

Solution:

Reference: http://www.zhimengzhe.com/linux/339880.html

Starting multiple elasticsearch instances on the same node (Linux system) fails with a "failed to obtain node locks" error.

Fix: in elasticsearch.yml set, for example, node.max_local_storage_nodes: 3

Any value of node.max_local_storage_nodes greater than 1 will do.

Startup problem 6: unable to load JNA native support library, native methods will be disabled.

Solution: downgrade jna-4.4.0.jar to jna-4.2.2.jar

 

 

Interacting with elasticsearch:

curl -i -XGET '192.168.56.105:9200/_count?pretty'
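The response is plain JSON; as a small sketch (the sample body below is assumed for illustration, not captured from this cluster), the document count can be extracted with standard tools:

```shell
# Sample /_count?pretty response (assumed shape, for illustration only).
response='{
  "count" : 3,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  }
}'

# Extract the top-level "count" field with grep (no jq dependency).
count=$(printf '%s' "$response" | grep -o '"count" *: *[0-9]*' | head -n 1 | grep -o '[0-9]*$')
echo "$count"   # prints 3
```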

 

 

2. Installing and using plugins

Reference tutorial: https://www.cnblogs.com/xiwang/p/5854967.html

elasticsearch-head is a web front end for an elasticsearch cluster. Its source is hosted on github.com at https://github.com/mobz/elasticsearch-head. It is a great tool for learning elasticsearch.

There are two ways to use head: as an elasticsearch plugin, or as a standalone webapp. Here it is used as an elasticsearch plugin.

2.1 Offline installation

Check whether the Linux server has the unzip command. My server did not, and yum could not go online to download and install unzip, so I downloaded and installed it manually.

Download unzip-6.0-16.el7.x86_64.rpm and place it under /opt.

Run the install commands:

[root@elk opt]# chmod +x unzip-6.0-16.el7.x86_64.rpm

[root@elk opt]# rpm -ivh unzip-6.0-16.el7.x86_64.rpm

Download via https://github.com/mobz/elasticsearch-head/archive/master.zip

https://github.com/mobz/elasticsearch-head#running-with-built-in-server

Installation reference: https://www.bysocket.com/?p=1744

Since installing elasticsearch-head also requires git, npm, etc., it is not installed for now.

3. IK analysis (word segmentation) plugin

 

4. Install logstash

Reference: https://www.cnblogs.com/Orgliny/p/5579832.html

Run: tar -zxvf logstash-6.1.2.tar.gz

 

 

Start logstash from the bin directory:

4.1 Test logstash from the command line

./logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'

 

 

Type hello world
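The screenshot of the output is missing here; with the rubydebug codec the printed event looks roughly like this (host and timestamp are illustrative):

```
{
      "@version" => "1",
          "host" => "elk",
    "@timestamp" => 2018-01-30T12:00:00.000Z,
       "message" => "hello world"
}
```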

 

 

4.2 Test logstash with a config file, printing events to the console

Create logstash-simple.conf under config:
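The screenshot of the file is missing; a minimal logstash-simple.conf consistent with this test (stdin input, rubydebug output to the console) would be:

```
input { stdin { } }
output {
  stdout { codec => rubydebug }
}
```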

 

 

Start: ./logstash -f ../config/logstash-simple.conf

Type hello world

 

 

4.3 Test logstash with a config file, pushing the data to elasticsearch

Create logstash.conf under config:

input { stdin { } }

output {

  elasticsearch { hosts => ["10.28.37.65:9200"] }

  stdout { codec => rubydebug }

}

 

 

Start: ./logstash -f ../config/logstash.conf

Type hello world

 

 

Verify the data was really pushed to elasticsearch.

Open in a browser: http://10.28.37.65:9200/_search?q=hello%20world

 

 

logstash is now verified OK; move on to the next step, installing and configuring kibana.

5. Install kibana

Extract: tar -zxvf kibana-6.1.2-linux-x86_64.tar.gz

 

 

Go into config and edit the kibana.yml file:

#vi kibana.yml
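The screenshot of the edit is missing; the settings typically changed here are the following (the values are assumed to match this environment):

```
server.port: 5601
server.host: "10.28.37.65"
elasticsearch.url: "http://10.28.37.65:9200"
```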

 

 

Check whether port 5601 is in use:

#netstat -tunlp | grep 5601

 

 

Run kibana:

#./kibana

 

 

Open in a browser: http://10.28.37.65:5601

The index pattern * has already been created.

 

 

You can see the hello world pushed to elasticsearch earlier.

 

 

Start kibana in the background:

Command: nohup ./kibana >kibana.out &

 

 

At this point elasticsearch, logstash, and kibana are all installed.

Simulate logstash pushing data to elasticsearch, then view it through kibana.

Push hello through logstash.

Then refresh kibana and check.

 

 

6. Pushing to logstash from a Java program

First switch logstash to run in the background.

Start command: nohup ./logstash -f ../config/logstash.conf >logstash.out &

Create an elk service based on a Spring Cloud project.

 

 

Add logstash to the pom file:

<!-- logstash -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.11</version>
</dependency>

 

Full logback.xml configuration; pay particular attention to the LOGSTASH appender.

Log output path: /opt/myinstall/logs/elk/

 

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>

    <!-- You can override this to have a custom pattern -->
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <!--
      Appenders:
           FILEERROR - error level, files named log-error-xxx.log
           FILEWARN - warn level, files named log-warn-xxx.log
           FILEINFO - info level, files named log-info-xxx.log
           FILEDEBUG - debug level, files named log-debug-xxx.log
           stdout - prints log output to the console, for development and testing
    -->
    <contextName>SpringBootDemo</contextName>
    <property name="LOG_PATH" value="/opt/myinstall/logs"/>
    <!-- application log subdirectory -->
    <property name="APPDIR" value="elk"/>

    <!-- converter classes required for colored logs -->
    <conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter"/>
    <conversionRule conversionWord="wex"
                    converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter"/>
    <conversionRule conversionWord="wEx"
                    converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter"/>
    <!-- colored console log pattern -->
    <property name="CONSOLE_LOG_PATTERN"
              value="${CONSOLE_LOG_PATTERN:-%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}"/>
    <!-- file log pattern -->
    <property name="FILE_LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n"/>

    <!--Appender to log to file in a JSON format-->
    <appender name="LOGSTASH" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/${APPDIR}/logstash_info.json</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_PATH}/${APPDIR}/logstash-info-%d{yyyy-MM-dd}.%i.json</fileNamePattern>
            <maxHistory>7</maxHistory>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>50MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "parent": "%X{X-B3-ParentSpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <logger name="org.springframework" level="warn"/>
    <logger name="org.hibernate" level="warn"/>

    <!-- In production, set this level appropriately so log files do not grow too large or hurt performance -->
    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>

 

 

package com.sinosoft.service.impl;

import com.sinosoft.common.RequestInfo;
import com.sinosoft.common.ResponseInfo;
import com.sinosoft.dto.AccountDTO;
import com.sinosoft.service.ElkService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

import java.util.HashMap;
import java.util.Map;

/**
 * Created by xushuyi on 2018/1/30.
 */
@Service
public class ElkServiceImpl implements ElkService {

    /**
     * Logger
     */
    private static final Logger LOGGER = LoggerFactory.getLogger(ElkServiceImpl.class);

    /**
     * Push data through logstash to elasticsearch
     *
     * @param message the message
     * @return res
     */
    @Override
    public ResponseInfo<AccountDTO> pushData(RequestInfo<String> message) {
        try {
            Map resMap = new HashMap();
            resMap.put("message", message.getQuery());
            ResponseInfo responseInfo = new ResponseInfo(true, "success", resMap);
            LOGGER.info(responseInfo.toString());
            return responseInfo;
        } catch (Exception e) {
            LOGGER.error("Test push failed, cause: " + e);
            return null;
        }
    }
}

 

 

package com.sinosoft.service;

import com.sinosoft.common.RequestInfo;
import com.sinosoft.common.ResponseInfo;
import com.sinosoft.dto.AccountDTO;

/**
 * Created by xushuyi on 2018/1/30.
 */
public interface ElkService {
    ResponseInfo<AccountDTO> pushData(RequestInfo<String> message);
}

 

 


Then package it and deploy it to the environment: elk-1.0-SNAPSHOT.jar

 

 

Start the elk service:

nohup java -jar elk-1.0-SNAPSHOT.jar >elk.out &

Test the interface through swagger-ui.html

 

 

Adjust the logstash.conf configuration for Logstash:

cd /opt/myinstall/logstash-6.1.2/config/

vi logstash.conf
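The adjusted configuration is shown only as a screenshot in the original; a sketch consistent with the logback setup above (reading the JSON files the LOGSTASH appender writes - the path, codec, and start_position here are assumed) would be:

```
input {
  file {
    path => "/opt/myinstall/logs/elk/logstash_info.json"
    codec => "json"
    start_position => "beginning"
  }
}
output {
  elasticsearch { hosts => ["10.28.37.65:9200"] }
  stdout { codec => rubydebug }
}
```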

 

 

Restart logstash: nohup ./logstash -f ../config/logstash.conf >logstash.out &

 

 

7. Install Kafka

http://mirrors.hust.edu.cn/apache/kafka/1.0.0/kafka_2.12-1.0.0.tgz

There is no need to make the setup that complicated; Kafka can be skipped for now, since logstash is already in place.

 

 

Reposted from: https://www.cnblogs.com/xushuyi/articles/8335605.html
