HBase Fully Distributed Deployment

Preface

For the fully distributed Hadoop deployment, I have already documented the detailed steps at https://blog.csdn.net/zisefeizhu/article/details/84317520

This post builds on that cluster and continues with a distributed HBase deployment.

For the underlying principles, see this excellent write-up: https://www.cnblogs.com/edisonchou/p/4440107.html

The environment is identical to the Hadoop deployment linked above; anything already covered there is not repeated here.

Software used in this walkthrough: zookeeper-3.4.9 and hbase-1.2.4

Baidu Netdisk share:

Link: https://pan.baidu.com/s/1KBfPO9D1UpTlwX5STVryVw
Extraction code: 0xrp

Starting the deployment

Pre-deployment jps environment check

[hadoop@hadoop01 ~]$ jps

1445 Jps

[hadoop@hadoop01 ~]$ /home/hadoop/hadoop/sbin/start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh

Starting namenodes on [hadoop01]

hadoop01: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-hadoop01.out

hadoop02: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-hadoop02.out

hadoop03: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-hadoop03.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-hadoop01.out

starting yarn daemons

starting resourcemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-resourcemanager-hadoop01.out

hadoop02: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-hadoop02.out

hadoop03: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-hadoop03.out

[hadoop@hadoop01 ~]$ jps

1940 ResourceManager

1782 SecondaryNameNode

2199 Jps

1584 NameNode

[hadoop@hadoop02 ~]$ jps

1726 Jps

1568 NodeManager

1455 DataNode

[hadoop@hadoop03 ~]$ jps

1750 Jps

1593 NodeManager

1480 DataNode

ZooKeeper installation and configuration

Upload the ZooKeeper tarball to hadoop01:

[hadoop@hadoop01 ~]$ rz

[hadoop@hadoop01 ~]$ ls |grep zookeeper

zookeeper-3.4.9.tar.gz

Extract the archive:

tar xf zookeeper-3.4.9.tar.gz

Create a symlink:

[hadoop@hadoop01 ~]$ ln -s zookeeper-3.4.9 zookeeper

Configure environment variables:

[hadoop@hadoop01 ~]$ vim ~/.bash_profile

PATH=$PATH:$HOME/.local/bin:$HOME/bin

#PATH=$PATH:$HOME/bin:/home/hadoop/java/bin

#java

export JAVA_HOME=/home/hadoop/java

export PATH=$JAVA_HOME/bin:$PATH

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

#hadoop

export HADOOP_HOME=/home/hadoop/hadoop

export PATH=$PATH:$HADOOP_HOME/bin

export PATH

#zookeeper

export ZOOKEEPER=/home/hadoop/zookeeper

export PATH=$PATH:$ZOOKEEPER/bin

Apply the changes:

source ~/.bash_profile

[hadoop@hadoop01 ~]$ zk

zkCleanup.sh zkCli.cmd zkCli.sh zkEnv.cmd zkEnv.sh zkServer.cmd zkServer.sh

In the ZooKeeper conf directory, create a new zoo.cfg:

[hadoop@hadoop01 conf]$ cp zoo_sample.cfg zoo.cfg

Edit zoo.cfg:

dataDir=/home/hadoop/zookeeper/data

Add the quorum members to zoo.cfg:

server.1=hadoop01:2888:3888

server.2=hadoop02:2888:3888

server.3=hadoop03:2888:3888
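Putting the dataDir change and the server entries together, zoo.cfg should end up looking like this (the remaining values are the zoo_sample.cfg defaults):

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/zookeeper/data
clientPort=2181
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
```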

On hadoop01, create a data directory under /home/hadoop/zookeeper, and create a myid file inside it.

[hadoop@hadoop01 zookeeper]$ mkdir data

[hadoop@hadoop01 zookeeper]$ cd data/

[hadoop@hadoop01 data]$ touch myid

Copy the relevant configuration from hadoop01 to hadoop02 and hadoop03 [hadoop02 shown as the example]:

[hadoop@hadoop01 conf]$ scp -r /home/hadoop/zookeeper hadoop@hadoop02:/home/hadoop/

[hadoop@hadoop01 conf]$ scp ~/.bash_profile hadoop@hadoop02:~/.

Make the environment variables take effect on hadoop02 and hadoop03:

source ~/.bash_profile

In /home/hadoop/zookeeper/data/myid on hadoop01, hadoop02, and hadoop03, write 1, 2, and 3 respectively.

Note: each number must match that host's server.N id in zoo.cfg.
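A small loop can stamp the right myid on every node in one pass. This is just a sketch, assuming the data directory already exists on each host and passwordless SSH works (it does, since start-all.sh reaches all nodes); the ssh line is commented out so you can dry-run it first:

```shell
# Map each host to its server.N id from zoo.cfg and write the matching myid.
id=1
for h in hadoop01 hadoop02 hadoop03; do
  echo "server.$id -> $h: myid=$id"
  # ssh "$h" "echo $id > /home/hadoop/zookeeper/data/myid"
  id=$((id + 1))
done
```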

Start ZooKeeper on each of the three servers, check its status, and verify with jps [hadoop01 shown as the example]:

[hadoop@hadoop01 ~]$ zkServer.sh start

ZooKeeper JMX enabled by default

Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[hadoop@hadoop01 ~]$ zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg

Mode: leader

[hadoop@hadoop01 ~]$ jps

2475 QuorumPeerMain

1940 ResourceManager

1782 SecondaryNameNode

2539 Jps

1584 NameNode

ZooKeeper setup is complete!
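To confirm the whole ensemble is healthy, not just the local node, you can probe ZooKeeper's four-letter command ruok on the client port (2181); a live server answers imok. A small helper, assuming nc (netcat) is installed on the box you run it from:

```shell
# Ask every quorum member "ruok" on its client port; a healthy server says "imok".
check_quorum() {
  for h in hadoop01 hadoop02 hadoop03; do
    resp=$(echo ruok | nc -w 2 "$h" 2181)
    echo "$h: ${resp:-no response}"
  done
}
```

Running zkServer.sh status on each node should also show exactly one leader and two followers across the quorum.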

HBase installation and configuration

[Again, configure on hadoop01 and then copy to the other nodes]

[hadoop@hadoop01 ~]$ rz

[hadoop@hadoop01 ~]$ tar xf hbase-1.2.4-bin.tar.gz

[hadoop@hadoop01 ~]$ ln -s hbase-1.2.4 hbase

[hadoop@hadoop01 ~]$ ll | grep hbase

lrwxrwxrwx. 1 hadoop hadoop 11 Nov 29 16:07 hbase -> hbase-1.2.4

drwxrwxr-x. 7 hadoop hadoop 160 Nov 29 16:07 hbase-1.2.4

-rw-r--r--. 1 hadoop hadoop 104497899 Nov 29 14:47 hbase-1.2.4-bin.tar.gz

Extend the environment variables:

[hadoop@hadoop01 ~]$ vim ~/.bash_profile

#hbase

export HBASE_HOME=/home/hadoop/hbase

export PATH=$PATH:$HBASE_HOME/bin

Apply the changes:

[hadoop@hadoop01 ~]$ source ~/.bash_profile

Copy it to the worker servers hadoop02 and hadoop03 [hadoop02 shown as the example]:

[hadoop@hadoop01 conf]$ scp ~/.bash_profile hadoop@hadoop02:~/.

Make the environment variables take effect on hadoop02 and hadoop03:

source ~/.bash_profile

Verify the environment variables took effect:

[hadoop@hadoop01 ~]$ hbase

hbase hbase-common.sh hbase-daemon.sh hbase-jruby

hbase-cleanup.sh hbase-config.sh hbase-daemons.sh

Enter HBase's conf directory and modify three files: hbase-env.sh, hbase-site.xml, and regionservers.

hbase-env.sh:

#export JAVA_HOME=/usr/java/jdk1.6.0/

export JAVA_HOME=/home/hadoop/java

# Extra Java CLASSPATH elements. Optional.

# export HBASE_CLASSPATH=

# Seconds to sleep between slave commands. Unset by default. This

# can be useful in large clusters, where, e.g., slave rsyncs can

# otherwise arrive faster than the master can service them.

# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of Zookeeper or not.

export HBASE_MANAGES_ZK=false

hbase-site.xml:

[hadoop@hadoop01 conf]$ cat hbase-site.xml

<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop01,hadoop02,hadoop03</value>
    <description>Comma-separated list of servers in the ZooKeeper quorum.</description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/hbase/zookeeperdata</value>
    <description>Property from ZooKeeper config zoo.cfg.
      The directory where the snapshot is stored.
    </description>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hadoop/hbase/tmpdata</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop01:9000/hbase</value>
    <description>The directory shared by RegionServers.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed ZooKeeper
      true: fully-distributed with unmanaged ZooKeeper quorum (see hbase-env.sh)
    </description>
  </property>
</configuration>
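One setting worth double-checking: hbase.rootdir must use exactly the same scheme and authority as fs.defaultFS in Hadoop's core-site.xml (hdfs://hadoop01:9000 here), or the HMaster will fail at startup. A quick grep/sed sanity check, assuming the <name> and <value> elements sit on adjacent lines as in a normal config file:

```shell
# Pull hbase.rootdir out of hbase-site.xml and print it for comparison
# against fs.defaultFS in core-site.xml.
conf=/home/hadoop/hbase/conf/hbase-site.xml
rootdir=$(grep -A1 '<name>hbase.rootdir</name>' "$conf" 2>/dev/null \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "hbase.rootdir = $rootdir"
```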

In the regionservers file, list each server's IP or hostname [hostnames used here]:

hadoop01

hadoop02

hadoop03

Copy the entire hbase directory tree to the hadoop02 and hadoop03 servers.
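The copy mirrors what was done for ZooKeeper; a sketch, wrapped in a function, that pushes the unpacked tree and recreates the symlink convention on each worker:

```shell
# Push the configured HBase tree to the worker nodes and recreate
# the hbase -> hbase-1.2.4 symlink used on hadoop01.
push_hbase() {
  for h in hadoop02 hadoop03; do
    scp -r /home/hadoop/hbase-1.2.4 hadoop@"$h":/home/hadoop/
    ssh hadoop@"$h" "ln -sfn hbase-1.2.4 /home/hadoop/hbase"
  done
}
```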

[hadoop@hadoop01 hbase]$ jps

2475 QuorumPeerMain

1940 ResourceManager

1782 SecondaryNameNode

2859 Jps

1584 NameNode

[hadoop@hadoop01 hbase]$ start-hbase.sh

starting master, logging to /home/hadoop/hbase/logs/hbase-hadoop-master-hadoop01.out

hadoop03: starting regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-hadoop03.out

hadoop02: starting regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-hadoop02.out

hadoop01: starting regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-hadoop01.out

[hadoop@hadoop01 hbase]$ jps

3130 HRegionServer

2475 QuorumPeerMain

1940 ResourceManager

2990 HMaster

3178 Jps

1782 SecondaryNameNode

1584 NameNode

[hadoop@hadoop03 ~]$ jps

1922 QuorumPeerMain

1593 NodeManager

2361 Jps

2194 HRegionServer

1480 DataNode

HBase setup is complete!
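With every daemon up, a quick smoke test from the HBase shell proves that reads and writes actually flow end to end. The table and column family names below ('smoke', 'f') are arbitrary throwaway values:

```shell
# Create a throwaway table, write one cell, read it back, then clean up.
smoke_test() {
  /home/hadoop/hbase/bin/hbase shell <<'EOF'
status
create 'smoke', 'f'
put 'smoke', 'r1', 'f:c1', 'hello'
get 'smoke', 'r1'
disable 'smoke'
drop 'smoke'
EOF
}
```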

Friendly reminder

Startup order:

hadoop-hdfs ---> hadoop-yarn ---> zookeeper ---> hbase
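That ordering (and its reverse for shutdown) is easy to get wrong by hand; here is a sketch of a wrapper script encoding both, using the paths from this post:

```shell
#!/bin/sh
# Start the stack in dependency order: HDFS -> YARN -> ZooKeeper -> HBase.
start_stack() {
  /home/hadoop/hadoop/sbin/start-dfs.sh
  /home/hadoop/hadoop/sbin/start-yarn.sh
  for h in hadoop01 hadoop02 hadoop03; do
    ssh hadoop@"$h" "/home/hadoop/zookeeper/bin/zkServer.sh start"
  done
  /home/hadoop/hbase/bin/start-hbase.sh
}

# Stop in the reverse order so nothing loses its dependencies while running.
stop_stack() {
  /home/hadoop/hbase/bin/stop-hbase.sh
  for h in hadoop01 hadoop02 hadoop03; do
    ssh hadoop@"$h" "/home/hadoop/zookeeper/bin/zkServer.sh stop"
  done
  /home/hadoop/hadoop/sbin/stop-yarn.sh
  /home/hadoop/hadoop/sbin/stop-dfs.sh
}
```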

----- zisefeizhu
----- 2018/11/29 16:40

 

 
