ZooKeeper fails to start: kafka-run-class.sh: line 342: exec: java: not found

The error:

[root@erlang logs]# systemctl status zookeeper
● zookeeper.service
Loaded: loaded (/usr/lib/systemd/system/zookeeper.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since 四 2021-11-25 17:42:01 CST; 2s ago
Process: 16706 ExecStop=/usr/local/kafka_2.13-3.0.0/bin/zookeeper-server-stop.sh (code=exited,   status=1/FAILURE)
Process: 16399 ExecStart=/usr/local/kafka_2.13-3.0.0/bin/zookeeper-server-start.sh   /usr/local/kafka_2.13-3.0.0/config/zookeeper.properties (code=exited, status=127)
Main PID: 16399 (code=exited, status=127)

11月 25 17:42:01 erlang systemd[1]: Started zookeeper.service.
11月 25 17:42:01 erlang systemd[1]: Starting zookeeper.service...
11月 25 17:42:01 erlang zookeeper-server-start.sh[16399]: /usr/local/kafka_2.13-3.0.0/bin/kafka-run-class.sh: 第 342 行:exec: java: 未找到
11月 25 17:42:01 erlang systemd[1]: zookeeper.service: main process exited, code=exited, status=127/n/a
11月 25 17:42:01 erlang zookeeper-server-stop.sh[16706]: No zookeeper server to stop
11月 25 17:42:01 erlang systemd[1]: zookeeper.service: control process exited, code=exited status=1
11月 25 17:42:01 erlang systemd[1]: Unit zookeeper.service entered failed state.
11月 25 17:42:01 erlang systemd[1]: zookeeper.service failed.
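
If the status output is not enough, the full log for the unit can be pulled from the journal (plain journalctl usage, nothing specific to this setup):

# journalctl -u zookeeper -n 50 --no-pager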

Troubleshooting:

1. First, open the file mentioned in the error.

# vim /usr/local/kafka_2.13-3.0.0/bin/kafka-run-class.sh
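
If you just want to see that single line without opening an editor, sed can print it directly (same path as in the error message):

# sed -n '342p' /usr/local/kafka_2.13-3.0.0/bin/kafka-run-class.sh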

2. Turn on line numbers with :set nu and jump to line 342 from the error message.

342   exec "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@"
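
For context, a few lines above this, kafka-run-class.sh decides which java binary to run; the relevant logic looks roughly like this (paraphrased from the script):

# Which java to use (as in kafka-run-class.sh)
if [ -z "$JAVA_HOME" ]; then
  JAVA="java"
else
  JAVA="$JAVA_HOME/bin/java"
fi

So "exec: java: not found" suggests that the process running the script had neither JAVA_HOME set nor a java binary anywhere on its PATH.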

Seeing this, the most obvious interpretation is that the java command could not be found.

3. Check whether java runs

[root@erlang]# java -version
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)

So the java command clearly exists, and the environment variables are already configured in /etc/profile:

export JAVA_HOME=/usr/local/jdk1.8.0_171
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin
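
It is easy to double-check that an interactive shell really does pick these up; the first command should resolve to /usr/local/jdk1.8.0_171/bin/java and the second should show a PATH containing /usr/local/jdk1.8.0_171/bin:

# which java
# echo $PATH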

4. A fix found on Baidu (which did not work)
The original post said roughly this:
The script apparently needs the java command, so add that environment variable to the profile.
I followed the advice; after the change, my server's environment variables looked like this:

export JAVA_HOME=/usr/local/jdk1.8.0_171
export JAVA=/usr/local/jdk1.8.0_171/bin/java
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin

Running systemctl start zookeeper again produced exactly the same error.
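
In hindsight the reason is that systemd never sources /etc/profile: a unit started by systemd inherits the manager's own minimal environment (on CentOS 7 the PATH is typically just /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin, which systemctl show-environment will confirm), so variables exported only in the login-shell profile never reach zookeeper-server-start.sh. Instead of (or alongside) the symlink fix in step 5 below, the environment can be set in the unit itself; a minimal sketch, assuming the zookeeper.service unit shown in the status output above: run systemctl edit zookeeper and add

[Service]
Environment="JAVA_HOME=/usr/local/jdk1.8.0_171"
Environment="PATH=/usr/local/jdk1.8.0_171/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"

then reload and restart:

# systemctl daemon-reload
# systemctl restart zookeeper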

5. The final move (this solved it)
Symlink the java binary into /usr/bin and /usr/sbin:

# ln -s /usr/local/jdk1.8.0_171/bin/java /usr/bin/java
# ln -s /usr/local/jdk1.8.0_171/bin/java /usr/sbin/java
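
A quick sanity check that the links resolve to a working binary (same JDK path as above):

# ls -l /usr/bin/java /usr/sbin/java
# /usr/bin/java -version

On RHEL/CentOS, the alternatives tool is a slightly tidier way to register a manually unpacked JDK, if preferred:

# alternatives --install /usr/bin/java java /usr/local/jdk1.8.0_171/bin/java 1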

Run systemctl start zookeeper again, and this time it starts successfully.

6. Verify the result.

[root@erlang logs]# systemctl status zookeeper
● zookeeper.service
 Loaded: loaded (/usr/lib/systemd/system/zookeeper.service; enabled; vendor preset: disabled)
 Active: active (running) since 四 2021-11-25 17:43:43 CST; 2s ago
 Process: 16706 ExecStop=/usr/local/kafka_2.13-3.0.0/bin/zookeeper-server-stop.sh (code=exited, status=1/FAILURE)
 Main PID: 16726 (java)
 Memory: 70.8M
 CGroup: /system.slice/zookeeper.service
       └─16726 java -Xmx512M -Xms512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true -X...

11月 25 17:43:44 erlang zookeeper-server-start.sh[16726]: [2021-11-25 17:43:44,660] INFO Reading snapshot /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileSnap)
11月 25 17:43:44 erlang zookeeper-server-start.sh[16726]: [2021-11-25 17:43:44,662] INFO The digest value is empty in snapshot (org.apache.zookeeper.server.DataTree)
11月 25 17:43:44 erlang zookeeper-server-start.sh[16726]: [2021-11-25 17:43:44,715] INFO 139 txns loaded in 49 ms (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
11月 25 17:43:44 erlang zookeeper-server-start.sh[16726]: [2021-11-25 17:43:44,715] INFO Snapshot loaded in 60 ms, highest zxid is 0x8b, digest is 295484105282 (org.apache.zookeeper.server.ZKDatabase)
11月 25 17:43:44 erlang zookeeper-server-start.sh[16726]: [2021-11-25 17:43:44,716] INFO Snapshotting: 0x8b to /tmp/zookeeper/version-2/snapshot.8b (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
11月 25 17:43:44 erlang zookeeper-server-start.sh[16726]: [2021-11-25 17:43:44,720] INFO Snapshot taken in 4 ms (org.apache.zookeeper.server.ZooKeeperServer)
11月 25 17:43:44 erlang zookeeper-server-start.sh[16726]: [2021-11-25 17:43:44,731] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
11月 25 17:43:44 erlang zookeeper-server-start.sh[16726]: [2021-11-25 17:43:44,731] INFO zookeeper.request_throttler.shutdownTimeout = 10000 (org.apache.zookeeper.server.RequestThrottler)
11月 25 17:43:44 erlang zookeeper-server-start.sh[16726]: [2021-11-25 17:43:44,745] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server....inerManager)
11月 25 17:43:44 erlang zookeeper-server-start.sh[16726]: [2021-11-25 17:43:44,746] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
Hint: Some lines were ellipsized, use -l to show in full.
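
Beyond the unit status, it is worth confirming that ZooKeeper is actually listening and answering on its client port (2181 is the default clientPort in config/zookeeper.properties; adjust if yours differs, and note that nc may need to be installed separately):

# ss -lntp | grep 2181
# echo srvr | nc localhost 2181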