kafka_2.11 Installation and Configuration (Detailed Walkthrough)

Preparation

First, download the release tarball for this version from the official site and upload it to any host in the cluster (I uploaded it to master).

1 Installation and Configuration

1.1 Extract the archive

I put it under /usr/hadoop; after extraction you get a new kafka_2.11-0.10.1.0 directory.
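A sketch of the extraction, assuming the uploaded tarball carries the official release name kafka_2.11-0.10.1.0.tgz and sits in /usr/hadoop:

[root@master hadoop]# cd /usr/hadoop
[root@master hadoop]# tar -zxvf kafka_2.11-0.10.1.0.tgz
...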

1.2 Configure environment variables

Add the following configuration to /etc/profile:
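A minimal sketch of the addition, assuming the install path from 1.1 (the variable name KAFKA_HOME is just a common convention, nothing Kafka itself mandates):

# appended to /etc/profile
export KAFKA_HOME=/usr/hadoop/kafka_2.11-0.10.1.0
export PATH=$PATH:$KAFKA_HOME/bin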

Make the environment variables take effect:

[root@master hadoop]# source /etc/profile
[root@master hadoop]#
1.3 Copy to the other cluster nodes

Now send our /etc/profile to the slave1 and slave2 nodes:

[root@master hadoop]# scp /etc/profile root@slave1:/etc/
...
[root@master hadoop]# scp /etc/profile root@slave2:/etc/
...

Send the entire kafka directory to slave1 and slave2 as well:

[root@master hadoop]# scp -r kafka_2.11-0.10.1.0 root@slave1:/usr/hadoop
...
[root@master hadoop]# scp -r kafka_2.11-0.10.1.0 root@slave2:/usr/hadoop
...
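One easy-to-forget step: copying /etc/profile does not reload it on the slaves. Run this once on each slave (or simply open a fresh login shell there):

[root@slave1 ~]# source /etc/profile
[root@slave2 ~]# source /etc/profile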
1.4 Configure server.properties

A single-node kafka actually works without touching this file; you can follow the official quickstart as-is. But we are building a cluster, so the config file needs changes in three or four places.

1.4.1 Open the listener port
############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092     # uncomment this line
1.4.2 Modify zookeeper.connect
############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=master:2181,slave1:2181,slave2:2181   # change this to point at the zookeeper cluster we built

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
1.4.3 Set the broker ID
############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0     # the broker's unique identifier; IDs must not repeat within the cluster

# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true
1.4.4 Modify the log directory
############################# Log Basics #############################

# A comma seperated list of directories under which to store log files
log.dirs=/usr/hadoop/kafka_2.11-0.10.1.0/kafka-logs-server  # best to make this setting distinctive per broker as well

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

That completes one server.properties. If you want to run multiple broker nodes on the same machine, just prepare several copies of server.properties, keeping broker.id, the listener port, and log.dirs unique in each.
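A sketch of what the per-broker differences might look like, assuming two extra copies named server1.properties and server2.properties (the names match the start commands below; the ports and directory names here are my own picks):

# server1.properties
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/usr/hadoop/kafka_2.11-0.10.1.0/kafka-logs-server1

# server2.properties
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/usr/hadoop/kafka_2.11-0.10.1.0/kafka-logs-server2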

The official docs put it this way:

The broker.id property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine and we want to keep the brokers from all trying to register on the same port or overwrite each other's data.
1.5 Start the servers

Before starting Kafka we need to start the zookeeper cluster first. Haha, a quick plug: you can use the script I wrote earlier to start it with "one click".

Running jps, you will see two Kafka java processes; that is because I have already started two kafka servers on this node.

We can start a server with the following commands:

[root@master config]# pwd
/usr/hadoop/kafka_2.11-0.10.1.0/config

[root@master config]# kafka-server-start.sh server1.properties &
...
[root@master config]# kafka-server-start.sh server2.properties &
...

We have to pass the path of a server.properties file here, so it is simplest to cd into the config directory and use the commands above to start the kafka service. The trailing "&" runs the process in the background so the prompt comes back, though its log output still streams to the console unless you redirect it.
Then jps will show our kafka processes.
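If you want a genuinely quiet console, kafka-server-start.sh also accepts a -daemon flag that detaches the broker and writes its output to files under the logs/ directory instead; a sketch:

[root@master config]# kafka-server-start.sh -daemon server1.properties
[root@master config]# jps
...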

1.6 Stop the servers

We can go into the bin directory and see what commands or scripts are available for shutting down:

[root@master bin]# pwd
/usr/hadoop/kafka_2.11-0.10.1.0/bin
[root@master bin]# ll
total 116
-rwxr-xr-x 1 root root 1335 Oct  5 03:30 connect-distributed.sh
-rwxr-xr-x 1 root root 1332 Oct  5 03:30 connect-standalone.sh
-rwxr-xr-x 1 root root  861 Oct  5 03:30 kafka-acls.sh
-rwxr-xr-x 1 root root  864 Oct  5 03:30 kafka-configs.sh
-rwxr-xr-x 1 root root  945 Oct  5 03:30 kafka-console-consumer.sh
-rwxr-xr-x 1 root root  944 Oct  5 03:30 kafka-console-producer.sh
-rwxr-xr-x 1 root root  871 Oct  5 03:30 kafka-consumer-groups.sh
-rwxr-xr-x 1 root root  872 Oct  5 03:30 kafka-consumer-offset-checker.sh
-rwxr-xr-x 1 root root  948 Oct  5 03:30 kafka-consumer-perf-test.sh
-rwxr-xr-x 1 root root  862 Oct  5 03:30 kafka-mirror-maker.sh
-rwxr-xr-x 1 root root  886 Oct  5 03:30 kafka-preferred-replica-election.sh
-rwxr-xr-x 1 root root  959 Oct  5 03:30 kafka-producer-perf-test.sh
-rwxr-xr-x 1 root root  874 Oct  5 03:30 kafka-reassign-partitions.sh
-rwxr-xr-x 1 root root  868 Oct  5 03:30 kafka-replay-log-producer.sh
-rwxr-xr-x 1 root root  874 Oct  5 03:30 kafka-replica-verification.sh
-rwxr-xr-x 1 root root 6901 Oct  5 03:30 kafka-run-class.sh
-rwxr-xr-x 1 root root 1376 Oct  5 03:30 kafka-server-start.sh
-rwxr-xr-x 1 root root  975 Oct  5 03:30 kafka-server-stop.sh
-rwxr-xr-x 1 root root  870 Oct  5 03:30 kafka-simple-consumer-shell.sh
-rwxr-xr-x 1 root root  945 Oct  5 03:30 kafka-streams-application-reset.sh
-rwxr-xr-x 1 root root  863 Oct  5 03:30 kafka-topics.sh
-rwxr-xr-x 1 root root  958 Oct  5 03:30 kafka-verifiable-consumer.sh
-rwxr-xr-x 1 root root  958 Oct  5 03:30 kafka-verifiable-producer.sh
drwxr-xr-x 2 root root 4096 Oct  5 03:30 windows
-rwxr-xr-x 1 root root  867 Oct  5 03:30 zookeeper-security-migration.sh
-rwxr-xr-x 1 root root 1393 Oct  5 03:30 zookeeper-server-start.sh
-rwxr-xr-x 1 root root  978 Oct  5 03:30 zookeeper-server-stop.sh
-rwxr-xr-x 1 root root  968 Oct  5 03:30 zookeeper-shell.sh
[root@master bin]# 

One of them, "kafka-server-stop.sh", looks like it is for stopping the service, so let's run it:

[root@master bin]# kafka-server-stop.sh 
No kafka server to stop

[root@master bin]# 

What? No server to stop? Let's look at how the script is actually written, and whether we passed the wrong arguments or something:

[root@master bin]# cat kafka-server-stop.sh 
#!/bin/sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
# 
#    http://www.apache.org/licenses/LICENSE-2.0
# 
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')

if [ -z "$PIDS" ]; then
  echo "No kafka server to stop"
  exit 1
else 
  kill -s TERM $PIDS
fi

[root@master bin]# 

OK, so this approach clearly isn't working for us (one likely culprit: on some systems ps truncates very long command lines, and Kafka's huge classpath can push the kafka.Kafka class name past the cutoff, so the script's grep never matches). Fine, let's just kill -9:

[root@master kafka_2.11-0.10.1.0]# jps
3448 Kafka
2136 NodeManager
3033 QuorumPeerMain
1772 DataNode
5757 Jps
3711 Kafka
[root@master kafka_2.11-0.10.1.0]# kill -9 3448
...
[1]-  Killed                  kafka-server-start.sh server4.properties
...

Quick and dirty, but it works.
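That said, the stop script shown above does nothing more than send SIGTERM, so once you have a PID from jps you can trigger the same clean shutdown by hand; unlike kill -9, this lets the broker run its shutdown hook. A sketch using the other Kafka PID from the jps output above:

[root@master kafka_2.11-0.10.1.0]# kill -s TERM 3711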

See the next post: Basic Usage of kafka_2.11.

Copyright notice: This is an original post by the author and may not be reposted without permission. https://blog.csdn.net/M_SIGNALs/article/details/53201595