Zookeeper + Kafka Cluster Installation


1.1 Zookeeper

1.1.1  Passwordless SSH

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

ssh-keygen -t rsa -P ''

 

==============================================

ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa

 

chmod 700 -R /root/.ssh

chmod 600  /root/.ssh/authorized_keys

cat  /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

===========================================

scp /root/.ssh/authorized_keys root@172.16.10.148:/root/.ssh/authorized_keys

 


ssh 192.168.0.2
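The key generation and authorized_keys copying above can be wrapped in a small helper so the public key is pushed to every node in one pass. This is a sketch, assuming root login and the zk1/zk2/zk3 hostnames configured in the hosts section below; ssh-copy-id appends the key remotely and fixes the permissions that the chmod commands above set by hand.

```shell
# push_keys: distribute the local public key to every cluster node.
# Assumes ~/.ssh/id_rsa.pub already exists (see ssh-keygen above).
push_keys() {
  local node
  for node in zk1 zk2 zk3; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "root@$node"
  done
}
# run once from the first node:
# push_keys
```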

1.1.2  Configure /etc/hosts on every machine as follows:

vi /etc/hosts

127.0.0.1               localhost.localdomain localhost

::1             localhost6.localdomain6 localhost6

# zookeeper hostnames:

172.16.10.141               zk1

172.16.10.147               zk2

172.16.10.148               zk3

 

1.1.3  Configure /etc/sysconfig/network on every machine as follows:

hostname zk2

vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=zk2

1.1.4  Install the JDK, ZooKeeper, and Kafka on every machine; configure as follows

 

tar -zxvf /root/zookeeper-3.4.8.tar.gz

 

tar -zxvf /root/kafka_2.10-0.9.0.1.tgz

 

 

1.1.4.1 Set environment variables

vi /etc/profile

 

# jdk, zookeeper, kafka

export KAFKA_HOME=/root/kafka_2.10

export ZK_HOME=/root/zookeeper-3.4.8

export JAVA_HOME=/usr/java/jdk1.7.0_71

export PATH=$JAVA_HOME/bin:$PATH

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$KAFKA_HOME/bin:$ZK_HOME/bin:$PATH

export KAFKA_HEAP_OPTS="-Xmx1G -Xms512M"

source /etc/profile
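After sourcing /etc/profile it is worth confirming that each *_HOME variable actually points at an existing directory before going further. A minimal check (variable names taken from the profile snippet above; the helper itself is not part of the original notes):

```shell
# check_homes: print ok/missing for each expected *_HOME directory;
# returns non-zero if any of them does not exist.
check_homes() {
  local v dir rc=0
  for v in JAVA_HOME ZK_HOME KAFKA_HOME; do
    dir=${!v}   # bash indirect expansion: the value of the variable named by $v
    if [ -n "$dir" ] && [ -d "$dir" ]; then
      echo "$v=$dir ok"
    else
      echo "$v missing: '$dir'" >&2
      rc=1
    fi
  done
  return $rc
}
```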

 

1.1.4.2 Configuration changes

1.1.4.2.1  zoo.cfg

 

 

cd $ZK_HOME/conf

 

cp zoo_sample.cfg zoo.cfg

 

 

$ vi zoo.cfg

 

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/root/zookeeper-3.4.8/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable autopurge feature
#autopurge.purgeInterval=1

server.1=zk1:2888:3888

server.2=zk2:2888:3888

server.3=zk3:2888:3888

 

1.1.4.2.2  Configuring an observer

In every server's configuration file, append ":observer" to the line of any server that should run in observer mode, for example:

peerType=observer

server.4=zk3:2888:3888:observer

1.1.4.2.3  Generate myid on every machine:

 

mkdir -p /root/zookeeper-3.4.8/data

 

 

#zk1:
echo "1" > /root/zookeeper-3.4.8/data/myid
#zk2:
echo "2" > /root/zookeeper-3.4.8/data/myid
#zk3:
echo "3" > /root/zookeeper-3.4.8/data/myid
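The three echo commands above can be folded into one helper that derives the id from the hostname, so the same script runs unchanged on every node. A sketch; the zk1..zk3 names and the data directory come from the hosts and zoo.cfg sections above:

```shell
# write_myid HOST DIR: write the ZooKeeper server id matching HOST
# (per the server.N lines in zoo.cfg) into DIR/myid.
write_myid() {
  local host="$1" dir="$2" id
  case "$host" in
    zk1) id=1 ;;
    zk2) id=2 ;;
    zk3) id=3 ;;
    *) echo "unknown host: $host" >&2; return 1 ;;
  esac
  mkdir -p "$dir"
  echo "$id" > "$dir/myid"
}
# on each node:
# write_myid "$(hostname -s)" /root/zookeeper-3.4.8/data
```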

 

1.1.4.3 Copy the directories to the other servers

 

scp /root/zookeeper-3.4.8/conf/zoo.cfg root@172.16.10.147:/root/zookeeper-3.4.8/conf/

scp /root/zookeeper-3.4.8/conf/zoo.cfg root@172.16.10.148:/root/zookeeper-3.4.8/conf/

 

 

scp -r /root/kafka_2.10 root@172.16.10.147:/root/kafka_2.10
scp -r /root/kafka_2.10 root@172.16.10.148:/root/kafka_2.10

 

 

scp -r /root/zookeeper-3.4.8 root@172.16.10.147:/root/zookeeper-3.4.8
scp -r /root/zookeeper-3.4.8 root@172.16.10.148:/root/zookeeper-3.4.8

 

 

scp /root/zookeeper-3.4.8/data/myid2 root@172.16.10.147:/root/zookeeper-3.4.8/data/myid
scp /root/zookeeper-3.4.8/data/myid3 root@172.16.10.148:/root/zookeeper-3.4.8/data/myid

 

 

 


 

scp /root/kafka_2.10/config/server2.properties root@172.16.10.147:/root/kafka_2.10/config/server.properties

 

scp /root/kafka_2.10/config/server3.properties root@172.16.10.148:/root/kafka_2.10/config/server.properties

 

 

scp /root/kafka_2.10/config/producer.properties root@172.16.10.147:/root/kafka_2.10/config/producer.properties

 

scp /root/kafka_2.10/config/producer.properties root@172.16.10.148:/root/kafka_2.10/config/producer.properties

 

 

scp /root/kafka_2.10/config/consumer.properties root@172.16.10.147:/root/kafka_2.10/config/consumer.properties

scp /root/kafka_2.10/config/consumer.properties root@172.16.10.148:/root/kafka_2.10/config/consumer.properties

 

 


 

1.1.4.4 Firewall: enabling, disabling, and configuring the Linux firewall

On a freshly installed Linux system, Apache may be unreachable, MySQL may refuse remote connections, SSH logins may fail, and FTP may not connect; all of these are often caused by the firewall.
Newcomers may not master firewall rules right away, so the simplest approach is to disable the firewall first and configure proper rules later.

How to disable the firewall:

1. Permanent (survives reboot)

Enable: chkconfig iptables on

Disable: chkconfig iptables off

2. Immediate (reverts after reboot)

Start: service iptables start

Stop: service iptables stop

Note that other Linux services can be started and stopped with the same kind of commands.

Alternatively, keep the firewall enabled and open only the required ports
by editing /etc/sysconfig/iptables and adding:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2181 -j ACCEPT

Port 80 is HTTP, 21 is FTP, 22 is SSH, and 23 is Telnet; all of these use TCP.

1.1.4.5 Connect to the nodes

cd $ZK_HOME

bin/zkCli.sh -server 172.16.10.147:2181

ls /

ls /brokers

ls /brokers/topics

 

cd $KAFKA_HOME

 

On 147, open a terminal and send messages to Kafka (zk2 acting as the producer):

cd $KAFKA_HOME

bin/kafka-console-producer.sh --broker-list zk1:9092,zk2:9092,zk3:9092 --topic test448

1231
1231fsfasff

23412313123

 

On 148, open a terminal and consume the messages (zk3 acting as the consumer):

cd $KAFKA_HOME


bin/kafka-console-consumer.sh --zookeeper zk1:2181 --topic test448 --from-beginning

 

 

cd $KAFKA_HOME

bin/kafka-console-consumer.sh --zookeeper zk1:2181,zk2:2181,zk3:2181 --topic test448 --from-beginning

 

 

cd $KAFKA_HOME

bin/kafka-topics.sh --create --zookeeper zk1:2181 --replication-factor 3 --partitions 2 --topic test448

 

 

 

cd $ZK_HOME

bin/zkServer.sh stop

cd $KAFKA_HOME

bin/zookeeper-server-stop.sh  &

cd $KAFKA_HOME

bin/kafka-server-stop.sh

 

1.1.4.6 Start ZooKeeper on every machine:

 

Run in the background:
nohup /opt/kafka_2.10-0.8.2.1/bin/kafka-server-start.sh /opt/kafka_2.10-0.8.2.1/config/server.properties &   (the & lets you return to the command line)

nohup /opt/kafka_2.10-0.8.2.1/bin/kafka-server-start.sh /opt/kafka_2.10-0.8.2.1/config/server.properties > /opt/myout.file 2>&1 &

--------------------------

cd $ZK_HOME

bin/zkServer.sh stop

cd $KAFKA_HOME

bin/zookeeper-server-stop.sh  &

bin/kafka-server-stop.sh

 

 

 

cd /root/kafka_2.10/zookeeper/version-2

rm -rf *

 

cd /root/kafka_2.10/kafka-logs

rm -rf *

cd /root/kafka_2.10/logs

rm -rf *.*

 

netstat -apn | grep 9092

netstat -apn | grep 2181

kill -9 $(netstat -nlp | grep :2181 | awk '{print $7}' | awk -F"/" '{ print $1 }')

kill -9 $(netstat -nlp | grep :9092 | awk '{print $7}' | awk -F"/" '{ print $1 }')

 

 

 

cd /root/zookeeper-3.4.8

-----------------------------------

cd $ZK_HOME

bin/zkServer.sh stop

cd $KAFKA_HOME

bin/zookeeper-server-stop.sh  &

 

cd $ZK_HOME

 

bin/zkServer.sh start

#tailf zookeeper.out

zkServer.sh status

cd $KAFKA_HOME

bin/kafka-server-stop.sh

bin/kafka-server-start.sh config/server.properties &

------------------------------------

cd $ZK_HOME

bin/zkServer.sh stop

 

cd $KAFKA_HOME

bin/zookeeper-server-stop.sh  &

bin/kafka-server-stop.sh

bin/zookeeper-server-start.sh config/zookeeper.properties &

bin/kafka-server-start.sh config/server.properties &

1.1.4.7 Check the status:

cd $ZK_HOME

zkServer.sh status

 

1.1.4.8 Stop the ZooKeeper server:

cd $KAFKA_HOME

bin/kafka-server-stop.sh

cd $ZK_HOME

bin/zkServer.sh stop

1.2    Kafka cluster

1.2.1  Preparation:

1.2.1.1  Three machines

IP addresses: 192.168.0.10, 192.168.0.11, 192.168.0.12

rpm -ivh /root/jdk-7u71-linux-x64.rpm

scp /root/jdk-7u71-linux-x64.rpm root@172.16.10.147:/root/

scp -r /usr/java/jdk1.7.0_71 root@172.16.10.147:/usr/java/jdk1.7.0_71

1.2.1.2 Download a stable Kafka release

tar -xzf kafka_2.9.2-0.8.1.1.tgz

1.2.1.3 Build the Kafka broker cluster
1.2.1.3.1  server.properties

Go to the /root/kafka_2.10/config directory and edit:

broker.id=1

port=9092

host.name=zk1

advertised.host.name=zk1

zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

#zookeeper.connect=172.16.10.141:2181,172.16.10.147:2181,172.16.10.148:2181

log.dirs=/root/kafka_2.10/kafka-logs
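Only broker.id (and the host name lines) differ between nodes, so instead of maintaining separate server2.properties/server3.properties copies, the per-host value can be patched in place. A hypothetical helper, assuming GNU sed:

```shell
# set_broker_id FILE ID: set broker.id in a Kafka server.properties,
# replacing an existing line or appending one if absent.
set_broker_id() {
  local file="$1" id="$2"
  if grep -q '^broker\.id=' "$file"; then
    sed -i "s/^broker\.id=.*/broker.id=$id/" "$file"
  else
    echo "broker.id=$id" >> "$file"
  fi
}
# e.g. on zk2:
# set_broker_id /root/kafka_2.10/config/server.properties 2
```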

 

1.2.1.3.2  meta.properties

In /root/kafka_2.10/kafka-logs/:

#

#Thu May 12 17:25:25 CST 2016

version=0

broker.id=1

 

1.2.1.3.3  Edit the producer configuration, producer.properties

metadata.broker.list=zk1:9092,zk2:9092,zk3:9092

producer.type=async

 

1.2.1.3.4  Edit the consumer configuration, consumer.properties

zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

 

1.2.1.4 Killing processes
1.2.1.4.1  Kill specific processes

netstat -apn | grep 2181

netstat -apn | grep 9092

ps -ef | grep 2.10

netstat -apn | grep 9092

netstat -apn | grep 2181

 

netstat -nlp | grep :2181 | awk '{print $7}'

netstat -nlp | grep :2181 | awk '{print $7}' | awk -F"/" '{ print $1 }'

 

kill -9 $(netstat -nlp | grep :2181 | awk '{print $7}' | awk -F"/" '{ print $1 }')

kill -9 $(netstat -nlp | grep :9092 | awk '{print $7}' | awk -F"/" '{ print $1 }')

kill -9 $(netstat -tlnp | grep 9092 | awk '{print $7}' | awk -F '/' '{print $1}')

 

kill -9 $(lsof -i:2181 |awk '{print $2}' | tail -n 2)

kill -9 $(lsof -i:9092 |awk '{print $2}' | tail -n 2)
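The netstat pipelines above repeat for each port, so they can be wrapped in one function. The sketch below reuses the same grep/awk extraction but reads the listing on stdin, which also lets the parsing be checked without a live broker (this helper is not part of the original notes):

```shell
# pids_on_port PORT: read `netstat -nlp` output on stdin and print the
# PIDs from the PID/Program column for sockets on PORT.
pids_on_port() {
  grep ":$1 " | awk '{print $7}' | awk -F'/' '{print $1}'
}
# usage:
# netstat -nlp 2>/dev/null | pids_on_port 2181 | xargs -r kill -9
```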

 

1.2.1.4.2  Kill processes in batch

# Collect the matching process IDs and kill each one in a loop
for pid in $(ps -ef | grep curl | grep -v grep | awk '{print $2}');
do
   echo $pid
   kill -9 $pid
done

 

1.2.1.5 Start Kafka on every server:

 

cd $ZK_HOME

bin/zkServer.sh stop

 

cd $KAFKA_HOME

bin/zookeeper-server-stop.sh  &

bin/kafka-server-stop.sh

bin/zookeeper-server-start.sh config/zookeeper.properties &

bin/kafka-server-start.sh config/server.properties &

 

1.2.1.6 Stop Kafka on every server:

cd $KAFKA_HOME

bin/kafka-server-stop.sh

 


1.2.2  Test the cluster

1.2.2.1 Create a topic

cd $KAFKA_HOME

bin/kafka-topics.sh --create --zookeeper 172.16.10.141:2181,172.16.10.147:2181,172.16.10.148:2181 --replication-factor 3 --partitions 2 --topic test447

1.2.2.2 Describe the created topic

cd $KAFKA_HOME

bin/kafka-topics.sh --describe --zookeeper 172.16.10.141:2181,172.16.10.147:2181,172.16.10.148:2181 --topic test447

 

topic: my-replicated-topic  partition: 0  leader: 2  replicas: 2,0,1  isr: 2,0,1
topic: test  partition: 0  leader: 0  replicas: 0  isr: 0

partition: a topic can be split into multiple partitions, and its messages are stored across them; the goal is to increase parallelism

leader: handles the reads and writes for a partition; any broker can become the leader of a given partition

replicas: the brokers that hold a copy of this partition, whether or not the broker is alive

isr: the replicas that are currently alive (in-sync replicas)
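Using the fields just described, a quick health check is to compare the Replicas and Isr lists in the --describe output: if Isr is shorter, the partition is under-replicated. A small parser sketch (the capitalized field labels are assumed to match kafka-topics.sh --describe output):

```shell
# under_replicated: read `kafka-topics.sh --describe` output on stdin
# and print every partition line whose Isr list is shorter than its
# Replicas list.
under_replicated() {
  awk '{
    rep = ""; isr = "";
    for (i = 1; i <= NF; i++) {
      if ($i == "Replicas:") rep = $(i + 1);
      if ($i == "Isr:")      isr = $(i + 1);
    }
    # split() returns the number of comma-separated entries
    if (rep != "" && split(rep, a, ",") > split(isr, b, ",")) print;
  }'
}
```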

1.2.2.3 List the topics

 

bin/kafka-topics.sh --list --zookeeper 172.16.10.141:2181,172.16.10.147:2181,172.16.10.148:2181

 

 

1.2.2.4 Check the cluster:

bin/kafka-topics.sh --describe --zookeeper 172.16.10.141:2181,172.16.10.147:2181,172.16.10.148:2181 --topic test447

1.2.2.5 Producer: send messages:

cd $KAFKA_HOME

bin/kafka-console-producer.sh --broker-list 172.16.10.141:9092,172.16.10.147:9092,172.16.10.148:9092 --topic test447

1.2.2.6 Consumer: receive messages:

 

cd $KAFKA_HOME

bin/kafka-console-consumer.sh --zookeeper 172.16.10.141:2181,172.16.10.147:2181,172.16.10.148:2181 --topic test447 --from-beginning

 

 

1.2.3  Calling Kafka from Java

1.2.3.1  Produce data from the Java side
1.2.3.2  Consume data from the Kafka cluster

 

 
