Kafka Log Management

After Kafka starts, it generates a large volume of application logs, which can quickly fill up the disk. Managing and cleaning up Kafka's logs is therefore necessary.


log4j.properties

This is the configuration file for Kafka's application logging, located at $KAFKA_HOME/config/log4j.properties.

By default, the log path in this file is $KAFKA_HOME/logs. You can change it to a larger data disk; in my setup it is /data/kafka/logs.


Note: changing this configuration file alone does not take effect. You also need to edit the script $KAFKA_HOME/bin/kafka-run-class.sh and add the following line:

LOG_DIR="/data/kafka/logs"
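As an alternative to hard-coding the path in the script, a minimal sketch (path taken from this article, adjust to your environment) is to export LOG_DIR before starting the broker; kafka-run-class.sh only falls back to $KAFKA_HOME/logs when LOG_DIR is unset:

```shell
# Point Kafka's application logs at a larger data disk.
# /data/kafka/logs is the example path used in this article.
export LOG_DIR="/data/kafka/logs"

# kafka-run-class.sh only defaults to $KAFKA_HOME/logs when
# LOG_DIR is empty, so exporting it here is sufficient.
echo "LOG_DIR=$LOG_DIR"
```

Exporting the variable in your service unit or startup wrapper keeps the stock scripts unmodified, which survives Kafka upgrades.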


log4j.properties configuration:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
 
kafka.logs.dir=/data/kafka/logs
 
log4j.rootLogger=INFO, default
 
log4j.appender.default=org.apache.log4j.RollingFileAppender
log4j.appender.default.File=${kafka.logs.dir}/default.log
log4j.appender.default.MaxBackupIndex=10
log4j.appender.default.MaxFileSize=100MB
log4j.appender.default.layout=org.apache.log4j.PatternLayout
log4j.appender.default.layout.ConversionPattern=[%d] %p %m (%c)%n
 
 
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.MaxBackupIndex=10
log4j.appender.kafkaAppender.MaxFileSize=100MB
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
 
 
log4j.appender.stateChangeAppender=org.apache.log4j.RollingFileAppender
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.MaxBackupIndex=10
log4j.appender.stateChangeAppender.MaxFileSize=100MB
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
 
 
log4j.appender.requestAppender=org.apache.log4j.RollingFileAppender
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.MaxBackupIndex=10
log4j.appender.requestAppender.MaxFileSize=100MB
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
 
 
log4j.appender.cleanerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.MaxBackupIndex=10
log4j.appender.cleanerAppender.MaxFileSize=100MB
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
 
 
log4j.appender.controllerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.MaxBackupIndex=10
log4j.appender.controllerAppender.MaxFileSize=100MB
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
 
# Turn on all our debugging info
#log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG, kafkaAppender
#log4j.logger.kafka.client.ClientUtils=DEBUG, kafkaAppender
#log4j.logger.kafka.perf=DEBUG, kafkaAppender
#log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG, kafkaAppender
#log4j.logger.org.I0Itec.zkclient.ZkClient=DEBUG
 
log4j.logger.kafka=INFO, kafkaAppender
log4j.additivity.kafka=false
 
log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false
 
#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
 
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false
 
log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false
 
log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false
 
log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false
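With RollingFileAppender, each log name keeps the active file plus at most MaxBackupIndex rotated backups, each capped at MaxFileSize, so the configuration above bounds the disk footprint. A quick back-of-the-envelope check (appender names and sizes taken from the config above):

```python
# Each RollingFileAppender keeps the active file plus up to
# MaxBackupIndex rotated backups, each capped at MaxFileSize.
max_file_size_mb = 100
max_backup_index = 10

# Log files defined by the config above.
appenders = ["default", "server", "state-change",
             "kafka-request", "log-cleaner", "controller"]

per_log_mb = max_file_size_mb * (max_backup_index + 1)  # 1100 MB per log name
total_mb = per_log_mb * len(appenders)                  # 6600 MB worst case

print(f"worst case per log: {per_log_mb} MB")
print(f"worst case total:   {total_mb} MB (~{total_mb / 1024:.1f} GB)")
```

So with this rolling configuration, application logs are bounded at roughly 6.6 GB in the worst case, instead of growing without limit as with the stock daily-rolling setup.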










This article was reposted from the 51CTO blog of 曾哥最爱; original link: http://blog.51cto.com/zengestudy/2054225. Please contact the original author before republishing.
