HBase 1.2.5 + Hadoop 2.7.3 + ZooKeeper 3.4.6 Distributed Cluster Setup

1. Environment preparation

Three RedHat 6.8 machines:

10.9.44.60 master

10.9.44.61 slave1

10.9.44.62 slave2

Add these IP-to-hostname mappings to /etc/hosts on all three machines.

Packages: HBase 1.2.5, Hadoop 2.7.3, ZooKeeper 3.4.6, jdk-8u91-linux-x64.rpm

* Passwordless SSH login must work in both directions between the cluster nodes, and from each node to itself.
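
One way to set this up, as a rough sketch (assuming OpenSSH with the default key paths and the root account used throughout this guide):

# run on each of the three nodes
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa                 # generate a key pair without a passphrase
ssh-copy-id root@master                                  # push the public key to every node,
ssh-copy-id root@slave1                                  # including the local machine itself
ssh-copy-id root@slave2
ssh master date && ssh slave1 date && ssh slave2 date    # verify no password prompt appears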

2. Install the JDK on all three machines and add environment variables to /etc/profile

# rpm -ivh jdk-8u91-linux-x64.rpm
export JAVA_HOME=/usr/java/jdk1.8.0_91  # verify this path matches your JDK install
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/data/hadoop-2.7.3
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
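
After editing /etc/profile, reload it and do a quick sanity check (a minimal check, assuming the rpm installed under /usr/java as above):

# source /etc/profile
# java -version        # should report 1.8.0_91
# echo $HADOOP_HOME    # should print /data/hadoop-2.7.3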

3. Hadoop cluster installation (install on one machine, then copy to the other two)

Extract to the installation directory:

# tar -xzvf hadoop-2.7.3.tar.gz -C /data/

Edit the configuration files

① /data/hadoop-2.7.3/etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>

② /data/hadoop-2.7.3/etc/hadoop/hadoop-env.sh

Confirm the JDK path:

export JAVA_HOME=/usr/java/jdk1.8.0_91

③ /data/hadoop-2.7.3/etc/hadoop/hdfs-site.xml

Create the Hadoop data and name directories, then edit the file:

mkdir -p /data/hadoop-2.7.3/hadoop/data
mkdir -p /data/hadoop-2.7.3/hadoop/name
<configuration>
   <property>
        <name>dfs.name.dir</name>
        <value>/data/hadoop-2.7.3/hadoop/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/data/hadoop-2.7.3/hadoop/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <!-- only two DataNodes (slave1, slave2) in this cluster, so replication cannot exceed 2 -->
        <value>2</value>
    </property>
</configuration>

④ /data/hadoop-2.7.3/etc/hadoop/mapred-site.xml

cp mapred-site.xml.template mapred-site.xml

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
    </property>
    <!-- Hadoop 2.x runs MapReduce on YARN; without this, jobs default to local mode -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

⑤ Edit the slaves file

 

# cat /data/hadoop-2.7.3/etc/hadoop/slaves
slave1
slave2

The configuration and environment are identical on all three machines, so copy the installation directly:

scp -r /data/hadoop-2.7.3/ slave1:/data/
scp -r /data/hadoop-2.7.3/ slave2:/data/

Start Hadoop (run from /data/hadoop-2.7.3 on master)

./bin/hadoop namenode -format    # format the NameNode before the first start
./sbin/start-all.sh              # start HDFS and YARN
# jps                            # use jps to check the running Java processes
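
With the layout above, jps would typically show roughly the following daemons on each node (indicative, not exhaustive):

# on master:           NameNode, SecondaryNameNode, ResourceManager
# on slave1 / slave2:  DataNode, NodeManager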

4. ZooKeeper cluster installation (install on one machine, then copy to the other two)

① Extract to the installation directory:

# tar -xzvf zookeeper-3.4.6.tar.gz -C /data/
# cd /data/zookeeper-3.4.6/conf/
# cp zoo_sample.cfg zoo.cfg
# mkdir /data/zookeeper-3.4.6/zkdata
# mkdir /data/zookeeper-3.4.6/logs
[root@master conf]# grep -v ^# zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper-3.4.6/zkdata
dataLogDir=/data/zookeeper-3.4.6/logs
clientPort=2181
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888

 

② In the zkdata directory created above, create a myid file on each node. Its content must match the server ID in zoo.cfg: 1 means server.1, 2 means server.2, and so on.
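
For example, following the same copy-then-customize pattern used for Hadoop (a sketch; the myid values simply mirror the server.N entries in zoo.cfg above):

scp -r /data/zookeeper-3.4.6/ slave1:/data/
scp -r /data/zookeeper-3.4.6/ slave2:/data/

echo 1 > /data/zookeeper-3.4.6/zkdata/myid    # on master  (server.1)
echo 2 > /data/zookeeper-3.4.6/zkdata/myid    # on slave1  (server.2)
echo 3 > /data/zookeeper-3.4.6/zkdata/myid    # on slave2  (server.3)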

Start ZooKeeper by running ./bin/zkServer.sh start on every ZooKeeper node.

// Errors like the following may appear during startup:

2019-01-13 17:56:06,208 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2019-01-13 17:56:06,209 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  2, proposed zxid=0x0
2019-01-13 17:56:06,214 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2019-01-13 17:56:06,215 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2019-01-13 17:56:06,215 [myid:2] - WARN  [WorkerSender[myid=2]:QuorumCnxManager@382] - Cannot open channel to 3 at election address slave2/10.9.44.62:3888
java.net.ConnectException: Connection refused
  at java.net.PlainSocketImpl.socketConnect(Native Method)
  at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
  at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
  at java.net.Socket.connect(Socket.java:589)
  at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
  at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
  at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
  at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
  at java.lang.Thread.run(Thread.java:745)
2019-01-13 17:56:06,216 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)

........................

① Check the firewall, the server.1=master:2888:3888 entries in zoo.cfg, and the /etc/hosts file.

② When a ZooKeeper node starts, it tries to connect to the other nodes in the ensemble. Nodes started first cannot reach the ones that are not up yet, so the connection exceptions at the beginning of the log above can be ignored.
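
Once all three nodes are up, the election can be confirmed by checking each node's role; one node should report leader and the other two follower:

# /data/zookeeper-3.4.6/bin/zkServer.sh status    # prints "Mode: leader" or "Mode: follower"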

5. HBase cluster installation
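
As with Hadoop and ZooKeeper, extract the package to /data/ first (a sketch; the tarball name below assumes the standard binary distribution):

# tar -xzvf hbase-1.2.5-bin.tar.gz -C /data/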

Add the following to /data/hbase-1.2.5/conf/hbase-env.sh:

export JAVA_HOME=/usr/java/jdk1.8.0_91
export HBASE_CLASSPATH=/data/hadoop-2.7.3/etc/hadoop/
export HBASE_MANAGES_ZK=false

Edit /data/hbase-1.2.5/conf/hbase-site.xml:

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>master</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master,slave1,slave2</value>
    </property>
    <property>
        <name>zookeeper.session.timeout</name>
        <value>60000000</value>
    </property>
    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>
</configuration>

Edit /data/hbase-1.2.5/conf/regionservers:

# cat /data/hbase-1.2.5/conf/regionservers 
slave1
slave2

Sync the HBase installation to the other nodes:

scp -r /data/hbase-1.2.5/ slave1:/data

scp -r /data/hbase-1.2.5/ slave2:/data

6. Start the cluster

Start ZooKeeper (on every node):

/data/zookeeper-3.4.6/bin/zkServer.sh start

Start Hadoop (on master):

/data/hadoop-2.7.3/sbin/start-all.sh

Start HBase (on master):

/data/hbase-1.2.5/bin/start-hbase.sh

---
Verification (replace IP with the master's address):
YARN (ResourceManager UI):
http://IP:8088/cluster/cluster
HBase (Master UI):
http://IP:16010/master-status
HDFS (NameNode UI):
http://IP:50070/dfshealth.html#tab-overview
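
A quick command-line check is also possible (a sketch using standard Hadoop/HBase tooling; the expected process names are indicative):

# jps                                             # master should now also show HMaster; slaves show HRegionServer
# /data/hadoop-2.7.3/bin/hdfs dfsadmin -report    # lists the live DataNodes
# /data/hbase-1.2.5/bin/hbase shell
hbase(main):001:0> status                         # reports masters, region servers, dead servers, and average load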

 

***  https://www.code007.net
