Kafka standalone configuration (single node, not a cluster)
01. Extract the archive:
(base) root@LAPTOP-P1LA53KS:/mnt/e# tar zxvf kafka_2.11-2.4.0.tgz
(base) root@LAPTOP-P1LA53KS:/mnt/e# pwd
/mnt/e
(base) root@LAPTOP-P1LA53KS:/mnt/e# ls
'$RECYCLE.BIN' Scala-2.11.0 conda-env jdk1.8-linux-64位 scala-2.11.0.tgz 配置Jars
DeliveryOptimization 'System Volume Information' docker-17.03.0-ce.tgz kafka_2.11-2.4.0 spark-2.4.5-bin-hadoop2.7
Gadaite WindowsApps hadoop-2.7.7 kafka_2.11-2.4.0.tgz spark-2.4.5-bin-hadoop2.7.tgz
Miniconda3-latest-Linux-x86_64.sh WpSystem hadoop-2.7.7.tar.gz postgis.tar zookeeper-3.4.14
'Program Files' ZookeeperData javawin postgres.tar zookeeper-3.4.14.tar.gz
02. Configure the environment variables, then activate them
Add the following:
#kafka
export KAFKA_HOME=/mnt/e/kafka_2.11-2.4.0
export PATH=$KAFKA_HOME/bin:$PATH
The commands are as follows:
(base) root@LAPTOP-P1LA53KS:/mnt/e/kafka_2.11-2.4.0# pwd
/mnt/e/kafka_2.11-2.4.0
(base) root@LAPTOP-P1LA53KS:/mnt/e/kafka_2.11-2.4.0# vim /etc/profile
(base) root@LAPTOP-P1LA53KS:/mnt/e/kafka_2.11-2.4.0# source /etc/profile
root@LAPTOP-P1LA53KS:/mnt/e/kafka_2.11-2.4.0# vim /root/.bashrc
root@LAPTOP-P1LA53KS:/mnt/e/kafka_2.11-2.4.0# source /root/.bashrc
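To confirm the variables took effect after sourcing, a quick check (a minimal sketch; the path matches the install location above):

```shell
# The two lines added to /etc/profile and /root/.bashrc:
export KAFKA_HOME=/mnt/e/kafka_2.11-2.4.0
export PATH=$KAFKA_HOME/bin:$PATH

# Verify: should print the Kafka install directory.
echo "$KAFKA_HOME"
# prints: /mnt/e/kafka_2.11-2.4.0
```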
03. Configure server.properties and create the Kafka data directory
(base) root@LAPTOP-P1LA53KS:/mnt/e# mkdir KafkaData
(base) root@LAPTOP-P1LA53KS:/mnt/e# cd KafkaData/
(base) root@LAPTOP-P1LA53KS:/mnt/e/KafkaData# pwd
/mnt/e/KafkaData
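The session above does not show the actual edits to server.properties, so the fragment below is a sketch of the typical single-broker settings, under the assumption that broker.id 0 and localhost:2181 (the stock defaults) are kept and that log.dirs points at the KafkaLogs directory created in the next step (which matches the topic listing in step 06):

```properties
# Minimal single-broker settings (assumptions; the original session
# does not show the actual edits made to config/server.properties):
broker.id=0
listeners=PLAINTEXT://localhost:9092
log.dirs=/mnt/e/KafkaLogs
zookeeper.connect=localhost:2181
```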
04. Create the Kafka log directory
(base) root@LAPTOP-P1LA53KS:/mnt/e# mkdir KafkaLogs
05. Start Kafka in the background (daemon mode); ZooKeeper must already be running, since Kafka 2.4 depends on it
(SSpark) root@LAPTOP-P1LA53KS:/mnt/e/kafka_2.11-2.4.0/bin# ls
connect-distributed.sh kafka-consumer-perf-test.sh kafka-reassign-partitions.sh trogdor.sh
connect-mirror-maker.sh kafka-delegation-tokens.sh kafka-replica-verification.sh windows
connect-standalone.sh kafka-delete-records.sh kafka-run-class.sh zookeeper-security-migration.sh
kafka-acls.sh kafka-dump-log.sh kafka-server-start.sh zookeeper-server-start.sh
kafka-broker-api-versions.sh kafka-leader-election.sh kafka-server-stop.sh zookeeper-server-stop.sh
kafka-configs.sh kafka-log-dirs.sh kafka-streams-application-reset.sh zookeeper-shell.sh
kafka-console-consumer.sh kafka-mirror-maker.sh kafka-topics.sh
kafka-console-producer.sh kafka-preferred-replica-election.sh kafka-verifiable-consumer.sh
kafka-consumer-groups.sh kafka-producer-perf-test.sh kafka-verifiable-producer.sh
(SSpark) root@LAPTOP-P1LA53KS:/mnt/e/kafka_2.11-2.4.0/bin# ./kafka-server-start.sh -daemon ../config/server.properties
06. Create a topic
# ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
The topic data is stored under the KafkaLogs directory created above:
(SSpark) root@LAPTOP-P1LA53KS:/mnt/e/KafkaLogs# ls -al
total 0
drwxrwxrwx 1 root root 4096 Jan 19 16:15 .
drwxrwxrwx 1 root root 4096 Jan 18 23:25 ..
-rwxrwxrwx 1 root root 0 Jan 19 13:01 .lock
-rwxrwxrwx 1 root root 0 Jan 19 13:01 cleaner-offset-checkpoint
-rwxrwxrwx 1 root root 4 Jan 19 16:14 log-start-offset-checkpoint
-rwxrwxrwx 1 root root 88 Jan 19 13:34 meta.properties
drwxrwxrwx 1 root root 4096 Jan 19 15:54 my_favorite_topic-0
-rwxrwxrwx 1 root root 35 Jan 19 16:14 recovery-point-offset-checkpoint
-rwxrwxrwx 1 root root 35 Jan 19 16:15 replication-offset-checkpoint
drwxrwxrwx 1 root root 4096 Jan 19 13:35 test-0
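The per-partition directories in the listing (test-0, my_favorite_topic-0) follow Kafka's <topic>-<partition> naming scheme under log.dirs. A small sketch of how the names come out for a topic with N partitions:

```shell
# Kafka stores each partition as a directory named <topic>-<partition>.
# For the "test" topic created above with 1 partition this yields test-0.
topic=test
partitions=1
for p in $(seq 0 $((partitions - 1))); do
  echo "${topic}-${p}"
done
# prints: test-0
```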