# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2
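# (Illustrative: the other brokers in this cluster would each need their own
# unique id, e.g. broker.id=0 and broker.id=1 -- assumed values; any unique
# integers work.)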
############################# Socket Server Settings #############################
# The port the socket server listens on
port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=localhost
host.name=10.100.6.177
# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=
# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=
# The number of threads handling network requests
num.network.threads=2
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=2
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
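# (Note: depending on the Kafka version, these flush settings can also be
# overridden per topic via topic-level configuration; check the docs of your
# release for the exact keys, e.g. flush.messages / flush.ms -- assumed names.)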
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever either of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=536870912
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=60000
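# (Worked example with the values above: since log.retention.bytes is left
# commented out, only the time-based policy applies -- segments roll at
# 536870912 bytes = 512 MB and are deleted once older than 168 hours = 7 days.)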
# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false
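# (If set to true, individual topics can then opt in to compaction, e.g. via
# the topic-level cleanup.policy=compact setting in versions that support it.)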
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
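# e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002/kafka" would keep all
# Kafka znodes under /kafka (illustrative chroot path).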
zookeeper.connect=10.100.6.147:2181,10.100.6.176:2181,10.100.6.177:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
Testing
Start the Kafka server:
bin/kafka-server-start.sh config/server.properties &
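To verify the broker is reachable, a quick smoke test can be run against the addresses configured above (a sketch; the topic name "test" is arbitrary, and flag names vary slightly between Kafka versions):
bin/kafka-topics.sh --create --zookeeper 10.100.6.147:2181 --replication-factor 1 --partitions 2 --topic test
bin/kafka-console-producer.sh --broker-list 10.100.6.177:9092 --topic test
bin/kafka-console-consumer.sh --zookeeper 10.100.6.147:2181 --topic test --from-beginning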
Stop the Kafka server:
bin/kafka-server-stop.sh
Stop the Zookeeper server:
bin/zookeeper-server-stop.sh
Check the running processes with jps:
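jps
If everything is up, the broker typically appears as a process named Kafka and ZooKeeper as QuorumPeerMain (typical names; they can differ by packaging).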
Summary
In general, interviews follow a pattern: the first round covers fundamentals, the second architecture, and the third you as a person.
Finally, I have collected and organized some materials, including interview questions (with answers), books, and videos, which I hope will also help readers aiming for a position at a major company.