Normal log output when logback connects to Kafka

A successful run produces log output like the following.

The client first connects to the address configured in logback.xml: bootstrap.servers=10.57.137.131:9092
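For reference, a minimal logback.xml that would produce the startup lines below might look like this. The appender class, encoder, topic name (`logs`, taken from the metadata request in the log), logger names, levels, and the 30-second scan period are all read from the log output; the exact pattern strings are assumptions:

```xml
<configuration scan="true" scanPeriod="30 seconds">
  <appender name="KafkaAppender"
            class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder class="com.github.danielwegener.logback.kafka.encoding.LayoutKafkaMessageEncoder">
      <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>%d %-5level %logger{36} - %msg</pattern>
      </layout>
    </encoder>
    <topic>logs</topic>
    <producerConfig>bootstrap.servers=10.57.137.131:9092</producerConfig>
  </appender>

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d %-5level %logger - %msg%n</pattern>
    </encoder>
  </appender>

  <logger name="LogbackIntegrationITxxx" level="INFO" additivity="false">
    <appender-ref ref="KafkaAppender"/>
  </logger>

  <root level="DEBUG">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```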

09:46:59,953 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
09:46:59,953 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
09:46:59,954 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/F:/study_src/test_log/target/scala-2.11/classes/logback.xml]
09:47:00,106 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - Setting ReconfigureOnChangeFilter scanning period to 30 seconds
09:47:00,106 |-INFO in ReconfigureOnChangeFilter{invocationCounter=0} - Will scan for changes in [[F:\study_src\test_log\target\scala-2.11\classes\logback.xml]] every 30 seconds. 
09:47:00,106 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - Adding ReconfigureOnChangeFilter as a turbo filter
09:47:00,119 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [com.github.danielwegener.logback.kafka.KafkaAppender]
09:47:00,127 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [KafkaAppender]
09:47:00,265 |-INFO in com.github.danielwegener.logback.kafka.encoding.LayoutKafkaMessageEncoder@396f6598 - No charset specified for PatternLayoutKafkaEncoder. Using default UTF8 encoding.
09:47:00,277 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [LogbackIntegrationITxxx] to INFO
09:47:00,277 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting additivity of logger [LogbackIntegrationITxxx] to false
09:47:00,278 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [KafkaAppender] to Logger[LogbackIntegrationITxxx]
09:47:00,278 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
09:47:00,281 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
09:47:00,286 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
09:47:00,321 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to DEBUG
09:47:00,322 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
09:47:00,322 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
09:47:00,323 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@7a765367 - Registering current configuration as safe fallback point
----  false
0
09:47:00.414 [main] INFO  o.a.k.c.producer.ProducerConfig - ProducerConfig values: 
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [10.57.137.131:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id = 
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = 1
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 0
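The non-default entries in the ProducerConfig dump above can be reproduced in plain Java. This is a minimal sketch using only java.util.Properties; constructing the actual KafkaProducer is omitted because it needs the kafka-clients jar and a live broker, but these are exactly the keys it would consume:

```java
import java.util.Properties;

public class ProducerProps {
    /**
     * Rebuilds the producer configuration reported in the ProducerConfig
     * dump above. Only the explicitly set keys appear here; every other
     * value falls back to the defaults shown in the log.
     */
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.57.137.131:9092");
        props.put("acks", "1");      // wait for the leader's ack only
        props.put("retries", "0");   // no automatic resend on failure
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("bootstrap.servers"));
        // prints: 10.57.137.131:9092
    }
}
```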


09:47:00.438 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time
09:47:00.569 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records
09:47:00.573 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, centos7, 9092)], partitions = [])
09:47:00.687 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name connections-closed:client-id-producer-1
09:47:00.688 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name connections-created:client-id-producer-1
09:47:00.688 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:client-id-producer-1
09:47:00.689 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:client-id-producer-1
09:47:00.691 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-received:client-id-producer-1
09:47:00.692 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name select-time:client-id-producer-1
09:47:00.692 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name io-time:client-id-producer-1
09:47:00.699 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name batch-size
09:47:00.699 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name compression-rate
09:47:00.701 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name queue-time
09:47:00.702 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name request-time
09:47:00.702 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time
09:47:00.702 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name records-per-request
09:47:00.703 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name record-retries
09:47:00.703 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name errors
09:47:00.703 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name record-size-max
09:47:00.706 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.producer.internals.Sender - Starting Kafka producer I/O thread.
09:47:00.713 [main] INFO  o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.9.0.0
09:47:00.713 [main] INFO  o.a.kafka.common.utils.AppInfoParser - Kafka commitId : fc7243c2af4b2b4a
09:47:00.715 [main] DEBUG o.a.k.clients.producer.KafkaProducer - Kafka producer started
09:47:00.717 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Initialize connection to node -1 for sending metadata request
09:47:00.718 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Initiating connection to node -1 at centos7:9092.
09:47:00.724 [kafka-producer-network-thread | producer-1] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent
09:47:00.725 [kafka-producer-network-thread | producer-1] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received
09:47:00.726 [kafka-producer-network-thread | producer-1] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node--1.latency
09:47:00.726 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Completed connection to node -1
09:47:00.858 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Sending metadata request ClientRequest(expectResponse=true, callback=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1}, body={topics=[logs]}), isInitiatedByNetworkClient, createdTimeMs=1459216020829, sendTimeMs=0) to node -1
09:47:00.875 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 2 to Cluster(nodes = [Node(0, centos77, 9092)], partitions = [Partition(topic = logs, partition = 0, leader = 0, replicas = [0,], isr = [0,]])
09:47:00.892 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Adding node 0 to nodes ever seen
09:47:00.921 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at centos77:9092.


Reading the connection sequence from the log:

First, the producer connects to the bootstrap address 10.57.137.131:9092.

Then it connects to the host.name returned in the first metadata response, centos7.

Finally, it connects to advertised.host.name = centos77.

In this setup all three addresses resolve to the same IP. If you are debugging with the client and broker on different machines, adjust the Kafka server configuration accordingly, and do not use localhost.
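The broker-side settings involved live in server.properties. A sketch with the hostnames seen in this log (for Kafka 0.9 the relevant keys are host.name and advertised.host.name):

```properties
# Hostname the broker binds to.
host.name=centos7
# Hostname published to clients in cluster metadata; producers and
# consumers on other machines connect to THIS address, so it must be
# resolvable from the client side -- never localhost.
advertised.host.name=centos77
port=9092
```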


The official documentation describes it like this: the producer connects to any alive node and requests metadata about the partition leaders; it then sends messages directly to the leader broker for the partition.

The producer connects to any of the alive nodes and requests metadata about the
leaders for the partitions of a topic. This allows the producer to put the message
directly to the lead broker for the partition.

