ZooKeeper + Kafka + Logstash + Elasticsearch + Kibana

Background

1. Kafka is used as a buffer in front of Logstash because a single Logstash instance does not cope well once the log volume grows.

2. To trace a request across the whole call chain, a single ID has to be carried through every Feign hop. Logback's MDC stores this unique identifier, and the Feign call chain passes it along in a request header; here it is named TID.

Download links:

ZK+Kafka

https://mirrors.bfsu.edu.cn/apache/kafka/2.7.0/kafka_2.13-2.7.0.tgz

https://mirrors.bfsu.edu.cn/apache/zookeeper/zookeeper-3.7.0/apache-zookeeper-3.7.0-bin.tar.gz

ELK

https://artifacts.elastic.co/downloads/kibana/kibana-7.12.0-windows-x86_64.zip

https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.12.0-windows-x86_64.zip

https://artifacts.elastic.co/downloads/logstash/logstash-7.12.0-windows-x86_64.zip

 

Add the corresponding code in a Spring MVC interceptor:

@Component
@Slf4j
public class ContextInterceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        RequestContext context = RequestContext.getCurrentContext();
        context.reset();
        log.debug("traceId:" + MDC.get("traceId"));
        // Resolve the trace ID: prefer the MDC value, then the incoming header,
        // then a request parameter, and finally generate a new UUID.
        String requestId = MDC.get("traceId");
        requestId = StringUtils.isEmpty(requestId) ? request.getHeader(RequestContext.REQUEST_ID) : requestId;
        requestId = StringUtils.isEmpty(requestId) ? request.getParameter(RequestContext.REQUEST_ID) : requestId;
        requestId = StringUtils.isEmpty(requestId) ? UUIDUtil.uuid() : requestId;
        // Store it under TID so logback's %X{TID} can print it.
        MDC.put("TID", requestId);
        return true;
    }
}
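The background notes that the TID also has to ride along on Feign calls in a request header, which is not shown above. A minimal sketch of a Feign RequestInterceptor that does this could look like the following (the class name is made up, and RequestContext.REQUEST_ID is assumed to be the same project-specific constant that the interceptor above reads the incoming header from):

import feign.RequestInterceptor;
import feign.RequestTemplate;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

@Component
public class FeignTraceIdInterceptor implements RequestInterceptor {

    @Override
    public void apply(RequestTemplate template) {
        // Read the trace ID stored in MDC by ContextInterceptor and forward it
        // on the outgoing Feign request, so the next service's interceptor can find it.
        // RequestContext.REQUEST_ID is the project-specific header-name constant used above.
        String tid = MDC.get("TID");
        if (tid != null) {
            template.header(RequestContext.REQUEST_ID, tid);
        }
    }
}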

Configure the log configuration file, logback-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- springProfile selects configuration by environment: the nodes under the profile whose name matches spring.profiles.active are enabled -->
    <springProfile name="local">
        <!-- Configuration enabled when the "local" profile is active -->
        <springProperty scope="context" name="module" source="spring.application.name"
                        defaultValue="undefined"/>
        <!-- springProperty reads values from the Spring Environment, i.e. from application.yml here -->
        <springProperty scope="context" name="bootstrapServers" source="spring.kafka.bootstrap-servers"
                        defaultValue="127.0.0.1:9092"/>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <!-- encoders are assigned the type
                 ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
            <encoder>
                <pattern>%boldYellow(${module})|%d|%highlight(%-5level)|%X{TID}|%cyan(%logger{15}) - %msg %n</pattern>
            </encoder>
        </appender>
        <!-- Kafka appender configuration -->
        <appender name="kafka" class="com.github.danielwegener.logback.kafka.KafkaAppender">
            <encoder>
                <pattern>${module}|%d|%-5level|%X{TID}|%logger{15} - %msg</pattern>
            </encoder>
            <topic>test</topic>
            <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
            <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>

            <!-- Optional parameter to use a fixed partition -->
            <!-- <partition>0</partition> -->

            <!-- Optional parameter to include log timestamps into the kafka message -->
            <!-- <appendTimestamp>true</appendTimestamp> -->

            <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
            <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
            <!-- bootstrap.servers is the only mandatory producerConfig -->
            <producerConfig>bootstrap.servers=${bootstrapServers}</producerConfig>

            <!-- Fall back to the console appender if Kafka is unavailable -->
            <appender-ref ref="STDOUT"/>

        </appender>
        <!-- Loggers for specific packages in the project -->
        <!--<logger name="org.springframework.test" level="INFO" >
            <appender-ref ref="kafka" />
        </logger>-->
        <logger name="com.springcloudsite" level="INFO" >
            <appender-ref ref="kafka" />
        </logger>
        <root level="info">
            <appender-ref ref="STDOUT" />
        </root>
    </springProfile>
</configuration>
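Note that com.github.danielwegener.logback.kafka.KafkaAppender is not part of logback itself; it comes from the third-party logback-kafka-appender library (together with the kafka-clients it uses), which presumably has to be on the application's classpath before this configuration can load.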

Pattern reference

    pattern: the logback layout pattern

    %boldYellow(${module}): the module name, in yellow

    %d: date and time

    %highlight(%-5level): highlighted log level, e.g. INFO, ERROR, TRACE

    %X{TID}: the trace ID used to follow a request

    %cyan(%logger{15}): abbreviated logger (class) path

    %msg %n: the log message itself, followed by a newline

The printed output looks like the following:
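As a rough illustration (the module name, timestamp, TID and logger below are made-up values), a log call in a class under com.springcloudsite such as

log.info("order {} created", orderId);

// would be printed by the STDOUT pattern above roughly as:
// demo-service|2021-04-28 10:15:32,481|INFO |b3f1c2e0d4a54f6e|c.s.o.OrderService - order 1001 created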

 

Configure ZK + Kafka

1. Install the JDK

1.1 Download the JDK from http://www.oracle.com/technetwork/java/javase/downloads/index.html
1.2 After installation, add the following environment variables (right-click "My Computer" -> "Advanced system settings" -> "Environment Variables"):

  • JAVA_HOME: C:\Program Files\Java\jdk1.8.0_171 (the JDK install path)
  • Path: append "; %JAVA_HOME%\bin" to the existing value

1.3 Open cmd and run "java -version" to check the Java version on the system:

2. Install ZooKeeper

Kafka depends on ZooKeeper, so ZooKeeper must be installed and running before Kafka. (The steps below use zookeeper-3.4.13 and kafka_2.11-2.0.0 in the example paths; adjust them to the versions you actually downloaded.)

2.1 Download the release from: http://zookeeper.apache.org/releases.html

2.2 Extract the archive

2.3 Open zookeeper-3.4.13\conf and rename zoo_sample.cfg to zoo.cfg

2.4 Open zoo.cfg in a text editor

2.5 Change the value of dataDir to "./zookeeper-3.4.13/data"

2.6 Add the following system variables:

  • ZOOKEEPER_HOME: C:\Users\localadmin\CODE\zookeeper-3.4.13 (the ZooKeeper directory)
  • Path: append ";%ZOOKEEPER_HOME%\bin;" to the existing value

2.7 Run ZooKeeper: open cmd and execute zkserver

Leave this cmd window open.
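To double-check that ZooKeeper is up: it listens on port 2181 by default (clientPort in zoo.cfg), and running zkCli.cmd -server 127.0.0.1:2181 from the same bin directory should be able to connect to it.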

3. Install and run Kafka

3.1 Download the release from: http://kafka.apache.org/downloads.html

3.2 Extract the archive

3.3 Open kafka_2.11-2.0.0\config

3.4 Open server.properties in a text editor

3.5 Change the value of log.dirs to "./logs"

3.6 Open cmd

3.7 Change into the Kafka directory: cd C:\Users\localadmin\CODE\kafka_2.11-2.0.0 (the Kafka directory)

3.8 Run:  .\bin\windows\kafka-server-start.bat .\config\server.properties

Leave this cmd window open.
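The broker listens on localhost:9092 by default. A quick check from another cmd window is to list the existing topics, for example: .\bin\windows\kafka-topics.bat --list --zookeeper localhost:2181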

4. Create a topic

4.1 Open cmd and change into C:\Users\localadmin\CODE\kafka_2.11-2.0.0\bin\windows

4.2 Create a topic: kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

5. Start a producer:

cd C:\Users\localadmin\CODE\kafka_2.11-2.0.0\bin\windows
kafka-console-producer.bat --broker-list localhost:9092 --topic test

6. Start a consumer:

cd C:\Users\localadmin\CODE\kafka_2.11-2.0.0\bin\windows
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning

7. Test:
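Type a few lines into the producer window and confirm they appear in the consumer window; that verifies the local broker end to end. Once the Spring Boot service with the Kafka appender above is running, its log lines should likewise show up on the test topic in the same consumer window.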

Configure the ELK stack

kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://localhost:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

Then go to the Kibana bin directory and start it, either by double-clicking kibana.bat or from a cmd window.

The startup output looks like this:

Configure elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
cluster.name: "docker-cluster"
node.name: "node-1"
node.master: true
network.host: 0.0.0.0
 
#xpack.license.self_generated.type: trial
#xpack.security.enabled: true
#xpack.monitoring.collection.enabled: true 

#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Start elasticsearch.bat in the bin directory.

The startup output looks like this:

Configure logstash.conf

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["test"]
    group_id => "test"
  }
}

filter {

    mutate {
        split => { "message" => "|" }
    }

    if [message][0] {
        mutate {
            add_field => {
                "apiname" => "%{[message][0]}"
            }
        }
    }

    if [message][1] {
        mutate {
            add_field => {
                "current_time" => "%{[message][1]}"
            }
        }
    }

    if [message][2] {
        mutate {
            add_field => {
                "current_level" => "%{[message][2]}"
            }
        }
    }

    if [message][3] {
        mutate {
            add_field => {
                "traceid" => "%{[message][3]}"
            }
        }
    }

}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    #index => "local-purchase-order | %{+YYYY-MM-dd}"
    index => "logstash-%{+YYYY-MM-dd}"
    #template_name => "logstash"
    #template_overwrite => true
    #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
  stdout {
    codec => rubydebug
  }
}
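As a worked example of the filter above (the values are made up but follow the logback Kafka pattern): a message such as demo-service|2021-04-28 10:15:32,481|INFO |b3f1c2e0d4a54f6e|c.s.o.OrderService - order 1001 created is split on |, so [message][0] (demo-service) becomes the apiname field, [message][1] becomes current_time, [message][2] becomes current_level, and [message][3] becomes traceid; the trailing logger/message part stays in the message array.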


Configure logstash.yml

#/usr/share/logstash/config/logstash.yml
#jvm.options  log4j2.properties  logstash-sample.conf  logstash.yml  pipelines.yml  startup.options
http.host: "0.0.0.0"
# [ "http://elasticsearch:9200" ]
xpack.monitoring.elasticsearch.hosts: ${ELASTICSEARCH_URL}
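${ELASTICSEARCH_URL} is a placeholder carried over from a container-style setup; for this local Windows setup you would presumably either define an ELASTICSEARCH_URL environment variable or replace the placeholder with the literal address used in logstash.conf, e.g. ["http://localhost:9200"].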

Start Logstash from the command line

Change into the bin directory:

D:\app\elk\logstash\bin

Run: logstash -f D:\app\elk\logstash\config\logstash.conf

Finally, open the following addresses:

http://localhost:9600/

http://localhost:9200/

http://localhost:5601/

to verify Logstash, Elasticsearch, and Kibana respectively.
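If all three respond, the pipeline is up. In Kibana you can then create an index pattern matching logstash-* (for example via Stack Management > Index Patterns in 7.12) and browse the application logs in Discover, filtering on the traceid field to follow a single request across services.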

 

 

 
