1. Set up ZooKeeper
Download the latest ZooKeeper release and upload it to the servers.
Configure apache-zookeeper-3.6.3-bin/conf/zoo.cfg:
tickTime=2000
dataDir=/home/zxhy/hadoop-3.3.3/data/zookeeper
clientPort=2181
initLimit=10
syncLimit=4
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
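Note that initLimit and syncLimit are measured in ticks, not milliseconds, which is easy to misread. A quick sanity check of the effective timeouts with the values from this zoo.cfg:

```shell
# initLimit and syncLimit are multiples of tickTime (values from zoo.cfg above).
tickTime=2000    # ms per tick
initLimit=10     # ticks a follower may take to connect and sync with the leader
syncLimit=4      # ticks a follower may lag behind the leader before being dropped
echo "follower init timeout: $(( tickTime * initLimit )) ms"   # 20000 ms
echo "follower sync timeout: $(( tickTime * syncLimit )) ms"   # 8000 ms
```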
Create a myid file under the configured dataDir on each of the three nodes.
mkdir zookeeper
cd zookeeper
touch myid
echo 1 >> myid
cat myid
Set the myid of the second and third nodes to 2 and 3 in the same way:
mkdir zookeeper
cd zookeeper
touch myid
echo 2 >> myid
cat myid
mkdir zookeeper
cd zookeeper
touch myid
echo 3 >> myid
cat myid
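The number in each myid must match the N in the corresponding server.N line of zoo.cfg, or the ensemble will not form. A small local sketch of that mapping (the per-host directories here are only illustrative; on the real cluster each file lives in that node's dataDir):

```shell
# Simulate the three nodes' dataDirs locally to show the id mapping:
# server.1=hadoop01 -> myid contains 1, server.2=hadoop02 -> 2, server.3=hadoop03 -> 3
base=$(mktemp -d)
for i in 1 2 3; do
    mkdir -p "${base}/hadoop0${i}/zookeeper"
    echo "${i}" > "${base}/hadoop0${i}/zookeeper/myid"
done
cat "${base}/hadoop03/zookeeper/myid"   # prints 3
```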
2. Modify the configuration files
Change into HBase's conf directory and list it: it contains the configuration files hbase-env.sh, hbase-site.xml, and regionservers.
Configure hbase-env.sh:
export HBASE_MANAGES_ZK=false
export JAVA_HOME=/opt/jdk1.8.0_202
Configure hbase-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop01:19000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
</property>
</configuration>
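One thing worth double-checking: hbase.rootdir must point at the same host and port as fs.defaultFS in Hadoop's core-site.xml (on a live cluster, `hdfs getconf -confKey fs.defaultFS` prints it), or HBase cannot reach HDFS. A minimal sketch of the comparison, with the defaultFS value assumed from this guide's earlier Hadoop setup:

```shell
# Assumed value of fs.defaultFS from core-site.xml; on the cluster verify with:
#   hdfs getconf -confKey fs.defaultFS
defaultfs="hdfs://hadoop01:19000"
rootdir="hdfs://hadoop01:19000/hbase"   # value from hbase-site.xml above
case "${rootdir}" in
    "${defaultfs}"/*) result="match" ;;
    *)                result="MISMATCH" ;;
esac
echo "${result}"   # prints match
```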
Configure regionservers with one trusted hostname per line:
hadoop01
hadoop02
hadoop03
Configure a backup master by adding a backup-masters file (HBase looks for conf/backup-masters, so the filename must match exactly):
touch backup-masters
echo hadoop02 > backup-masters
3. Configure environment variables
Add the environment variables:
# Edit the file; it was created during the Hadoop setup, so the new entries can be appended directly
vim /etc/profile.d/hadoop.sh
export HBASE_HOME=/home/zxhy/hadoop-3.3.3/app/hbase-2.4.13
export PATH=$PATH:$HBASE_HOME/bin
export ZK_HOME=/home/zxhy/hadoop-3.3.3/app/apache-zookeeper-3.6.3-bin
export PATH=$PATH:$ZK_HOME/bin
# Reload the profile
source /etc/profile
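A quick way to confirm the new variables took effect in the current shell (paths as in the snippet above; here the exports are repeated inline so the check is self-contained):

```shell
# After sourcing the profile, both bin directories should appear on PATH.
export HBASE_HOME=/home/zxhy/hadoop-3.3.3/app/hbase-2.4.13
export ZK_HOME=/home/zxhy/hadoop-3.3.3/app/apache-zookeeper-3.6.3-bin
export PATH=$PATH:$HBASE_HOME/bin:$ZK_HOME/bin
echo "$PATH" | tr ':' '\n' | grep -c 'hbase-2.4.13/bin'
```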
4. Start ZooKeeper
On each of the three nodes, change into ZooKeeper's bin directory and start the service with ./zkServer.sh start.
Then check each node's state with zkServer.sh status.
# Start the service
./zkServer.sh start
# Check the status
./zkServer.sh status
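In a healthy three-node ensemble, the status command reports "Mode: leader" on exactly one node and "Mode: follower" on the other two. Rather than logging into each machine, the check can be scripted from any node (a sketch, assuming passwordless ssh and the ZK_HOME variable from the profile above):

```shell
# Ask every ensemble member for its role in one pass.
for host in hadoop01 hadoop02 hadoop03; do
    echo "--- ${host} ---"
    ssh "${host}" "${ZK_HOME}/bin/zkServer.sh status"
done
```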
5. Start HBase
On the master node, change into hbase/bin and run ./start-hbase.sh:
./start-hbase.sh
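A quick way to verify the daemons actually came up (a sketch, assuming passwordless ssh): an HMaster should be running on hadoop01, a backup HMaster on hadoop02, and an HRegionServer on every node listed in regionservers.

```shell
# List the HBase JVMs on each node with jps.
for host in hadoop01 hadoop02 hadoop03; do
    echo "--- ${host} ---"
    ssh "${host}" "jps | grep -E 'HMaster|HRegionServer'"
done
```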
Add the ZooKeeper and HBase start/stop steps to the overall cluster control script; the complete script is:
#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit 1
fi
this="${BASH_SOURCE-$0}"
bin=$(cd -P -- "$(dirname -- "${this}")" >/dev/null && pwd -P)
if [[ -n "${HADOOP_HOME}" ]]; then
    HADOOP_HOME_DIR="${HADOOP_HOME}"
else
    HADOOP_HOME_DIR="${bin}/../"
fi
case $1 in
"start")
    echo " =================== Starting the Hadoop cluster ==================="
    echo " --------------- starting hdfs ---------------"
    ssh hadoop01 "${HADOOP_HOME_DIR}/sbin/start-dfs.sh"
    echo " --------------- starting yarn ---------------"
    ssh hadoop01 "${HADOOP_HOME_DIR}/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh hadoop01 "${HADOOP_HOME_DIR}/bin/mapred --daemon start historyserver"
    echo " --------------- starting zookeeper ---------------"
    for host in hadoop01 hadoop02 hadoop03
    do
        echo "starting zookeeper on ${host}"
        ssh "${host}" "${ZK_HOME}/bin/zkServer.sh start"
    done
    echo " --------------- starting hbase ---------------"
    ssh hadoop01 "${HBASE_HOME}/bin/start-hbase.sh"
    ;;
"stop")
    echo " =================== Stopping the Hadoop cluster ==================="
    echo " --------------- stopping hbase ---------------"
    ssh hadoop01 "${HBASE_HOME}/bin/stop-hbase.sh"
    echo " --------------- stopping zookeeper ---------------"
    for host in hadoop01 hadoop02 hadoop03
    do
        echo "stopping zookeeper on ${host}"
        ssh "${host}" "${ZK_HOME}/bin/zkServer.sh stop"
    done
    echo " --------------- stopping historyserver ---------------"
    ssh hadoop01 "${HADOOP_HOME_DIR}/bin/mapred --daemon stop historyserver"
    echo " --------------- stopping yarn ---------------"
    ssh hadoop01 "${HADOOP_HOME_DIR}/sbin/stop-yarn.sh"
    echo " --------------- stopping hdfs ---------------"
    ssh hadoop01 "${HADOOP_HOME_DIR}/sbin/stop-dfs.sh"
    ;;
*)
    echo "Input Args Error..."
    ;;
esac
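Assuming the script above is saved as cluster.sh (a filename chosen here for illustration), it is used like this:

```shell
chmod +x cluster.sh
./cluster.sh start   # hdfs -> yarn -> historyserver -> zookeeper -> hbase
./cluster.sh stop    # the same services, shut down in reverse order
```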
6. Troubleshooting notes
Problem 1: HBase's jars conflict with Hadoop's jars
1. HBase fails at startup with:
java.lang.IllegalArgumentException: object is not an instance of declaring class
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:69)
2. After trying various fixes, the root cause turned out to be a conflict between the jars shipped with HBase and the cluster's Hadoop jars, which kept the service from starting.
Fix:
cd /home/hadoop/app/hbase/conf
vi hbase-env.sh
export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true" # uncomment this line
Problem 2: a node's clock is too far out of sync with the master
A regionserver fails at startup with:
org.apache.hadoop.hbase.ClockOutOfSyncException: Server hadoop03,16020,1658940643229 has been rejected; Reported time is too far out of sync with master. Time difference of 28724032ms > max allowed of 30000ms
Fix: synchronize each node's clock with the master node.
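One way to do that, assuming hadoop01 runs an NTP server the other nodes can reach (otherwise point ntpdate at a public pool server), is a one-shot sync on each regionserver host; running ntpd or chronyd permanently is the more robust fix:

```shell
# One-shot clock sync of the other nodes against the master (sketch,
# assuming passwordless ssh and root privileges for hwclock).
for host in hadoop02 hadoop03; do
    ssh "${host}" "ntpdate hadoop01 && hwclock --systohc"
done
```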