Three hosts are used, with the following hostnames and IP addresses:
192.168.154.158 Slave1
192.168.154.159 Slave2
192.168.154.160 Slave3
File configuration:
(Note: editing these configuration files requires the root user; otherwise the changes will fail.)
vim /etc/sysconfig/network
Set the hostname: HOSTNAME=Slave1
vim /etc/hosts
Configure the hosts file with the hostname-to-IP mappings
Make the same changes on Slave2 and Slave3 (using HOSTNAME=Slave2 and HOSTNAME=Slave3 respectively)
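Based on the host list at the top, the mapping entries appended to /etc/hosts on each machine would be:

```
192.168.154.158 Slave1
192.168.154.159 Slave2
192.168.154.160 Slave3
```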
Download ZooKeeper:
Download URL: http://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
Install ZooKeeper:
Copy zookeeper-3.4.6.tar.gz from the USB drive to the /home/hadoop directory
cd /home/hadoop
Enter the /home/hadoop directory
tar -zxvf zookeeper-3.4.6.tar.gz -C /usr/local
Extract to the target directory /usr/local
cd /usr/local
Enter the /usr/local directory
mv zookeeper-3.4.6 zookeeper
Rename zookeeper-3.4.6 to zookeeper
Configure ZooKeeper:
cd /usr/local/zookeeper
Enter the zookeeper directory
cp -rf conf/zoo_sample.cfg conf/zoo.cfg
Copy the sample configuration to create zoo.cfg
cd /usr/local/zookeeper/conf
Enter the conf directory
vim zoo.cfg
Edit zoo.cfg
The modified contents are as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# this directory must be created beforehand (see below)
dataDir=/usr/local/zookeeper/zkdata
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# zookeeper cluster
server.1=Slave1:2888:3888
server.2=Slave2:2888:3888
server.3=Slave3:2888:3888
For reference, the original lines (replaced above) were:
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
The comment means that /tmp should not be used as the data directory, so the dataDir value needs to be changed.
Create the directory required above:
mkdir /usr/local/zookeeper/zkdata
(In the zoo.cfg shown above, the dataDir line was modified and the three server.N lines at the end were newly added.)
Download Kafka:
Download URL: http://apache.fayea.com/kafka/0.8.2.1/kafka_2.10-0.8.2.1.tgz
Install Kafka:
Copy kafka_2.10-0.8.2.1.tgz from the USB drive to the /home/hadoop directory
cd /home/hadoop
Enter the /home/hadoop directory
tar -zxvf kafka_2.10-0.8.2.1.tgz -C /usr/local
Extract to the target directory /usr/local
cd /usr/local
Enter the /usr/local directory
mv kafka_2.10-0.8.2.1 kafka
Rename kafka_2.10-0.8.2.1 to kafka
Set ownership:
chown -R hadoop:hadoop /usr/local/kafka
chown -R hadoop:hadoop /usr/local/zookeeper
This gives the hadoop user on Slave1 ownership of (and thus read/write access to) /usr/local/kafka and /usr/local/zookeeper.
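Note that these steps install Kafka but do not show its broker configuration. As a sketch (property names from the Kafka 0.8.x release; the values here are assumptions for this cluster), config/server.properties on each broker would need at least:

```
# /usr/local/kafka/config/server.properties (sketch)
broker.id=1                  # must be unique: 1 on Slave1, 2 on Slave2, 3 on Slave3
port=9092                    # default client port
log.dirs=/tmp/kafka-logs     # default; a non-/tmp path is better in practice
zookeeper.connect=Slave1:2181,Slave2:2181,Slave3:2181
```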
Copy ZooKeeper and Kafka to Slave2 and Slave3:
cd /usr/local
Enter the directory
sudo tar -zcf ./kafka.tar.gz ./kafka
sudo tar -zcf ./zookeeper.tar.gz ./zookeeper
Create compressed archives
scp ./kafka.tar.gz Slave2:/home/hadoop
scp ./kafka.tar.gz Slave3:/home/hadoop
scp ./zookeeper.tar.gz Slave2:/home/hadoop
scp ./zookeeper.tar.gz Slave3:/home/hadoop
Copy the archives to the Slave2 and Slave3 hosts
On Slave2 and Slave3:
sudo tar -zxf /home/hadoop/kafka.tar.gz -C /usr/local
sudo tar -zxf /home/hadoop/zookeeper.tar.gz -C /usr/local
Extract under /usr/local
chown -R hadoop:hadoop /usr/local/kafka
chown -R hadoop:hadoop /usr/local/zookeeper
This gives the hadoop user ownership of (and thus read/write access to) /usr/local/kafka and /usr/local/zookeeper.
Edit the environment configuration:
vim /etc/profile
Append the following at the end of the file:
export KAFKA_HOME=/usr/local/kafka
export ZK_HOME=/usr/local/zookeeper
Run on every machine:
source /etc/profile
to make the configuration take effect
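As an optional addition not in the original steps, the bin directories could also be put on the PATH, so zkServer.sh and the Kafka scripts can be run without changing into /usr/local/*/bin first:

```shell
# Optional /etc/profile additions (assumed paths from the steps above)
export KAFKA_HOME=/usr/local/kafka
export ZK_HOME=/usr/local/zookeeper
# make the ZooKeeper and Kafka scripts available everywhere
export PATH=$PATH:$KAFKA_HOME/bin:$ZK_HOME/bin
```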
Generate the myid file on each machine:
This is done in the /usr/local/zookeeper/zkdata directory
Slave1:
echo "1" > /usr/local/zookeeper/zkdata/myid
Slave2:
echo "2" > /usr/local/zookeeper/zkdata/myid
Slave3:
echo "3" > /usr/local/zookeeper/zkdata/myid
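The three commands above follow one pattern, so the id could also be derived from the hostname. This is a hypothetical helper, assuming the hostnames keep the SlaveN form used throughout this guide:

```shell
# Derive each server's ZooKeeper id from its hostname (SlaveN -> N)
host=Slave2                  # in practice: host=$(hostname)
id=${host#Slave}             # parameter expansion strips the "Slave" prefix
echo "$id"                   # -> 2
# on a real node this would be: echo "$id" > /usr/local/zookeeper/zkdata/myid
```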
Disable the firewall:
service iptables stop
(This stops iptables for the current session only; to keep it disabled across reboots, chkconfig iptables off is also needed.)
Start ZooKeeper:
cd /usr/local/zookeeper/bin
Enter the bin directory
sh zkServer.sh start
Start the server (run this on all three hosts)
sh zkServer.sh status
Check the status
The output on the three hosts is as follows:
[root@Slave1 bin]# sh zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@Slave2 bin]# sh zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@Slave3 bin]# sh zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower