1. Software list
Hadoop 2.6.0, ZooKeeper 3.4.6, and HBase 1.2.0
2. Machine environment
No. | Hostname | IP | Installed software |
1 | d-hdp-client | 192.1.131.199 | Hadoop, HBase (client only; runs no daemons) |
2 | d-hdp-01 | 192.1.131.201 | Hadoop NameNode, ZooKeeper server-1, HBase HMaster |
3 | d-hdp-02 | 192.1.131.202 | Hadoop DataNode, ZooKeeper server-2, HBase HRegionServer |
4 | d-hdp-03 | 192.1.131.203 | Hadoop DataNode, ZooKeeper server-3, HBase HRegionServer |
3. Hadoop installation
See "Hadoop编程入门学习笔记-1 安装运行Hadoop" (Hadoop Programming Notes 1: Installing and Running Hadoop). Deploy one extra machine as a client, used to submit Hadoop MapReduce jobs and to access HBase. Copy the installation from d-hdp-01 onto this machine, then add one property to its hdfs-site.xml:
<property>
<name>hadoop.job.ugi</name>
<value>hadoop,supergroup</value>
</property>
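One way to add the property is to splice it in just before the closing `</configuration>` tag. A minimal sketch, assuming GNU sed; the stand-in file created below substitutes for the client's real hdfs-site.xml:

```shell
# Stand-in for the client's hdfs-site.xml (replace with the real path).
site=./hdfs-site.xml
printf '<configuration>\n</configuration>\n' > "$site"

# Insert the hadoop.job.ugi property before the closing tag.
snippet='<property>\n<name>hadoop.job.ugi</name>\n<value>hadoop,supergroup</value>\n</property>'
sed -i "s|</configuration>|${snippet}\n</configuration>|" "$site"

grep -q 'hadoop.job.ugi' "$site" && echo "property added"
```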
4. ZooKeeper installation
Extract zookeeper-3.4.6.tar.gz under /home/hadoop/cloud:
tar -xvf zookeeper-3.4.6.tar.gz
mv zookeeper-3.4.6 zookeeper
cd zookeeper
mkdir tmp
mkdir logs
cd conf
cp zoo_sample.cfg zoo.cfg
In the conf directory, edit zoo.cfg and append the following:
dataDir=/home/hadoop/cloud/zookeeper/tmp
dataLogDir=/home/hadoop/cloud/zookeeper/logs
server.1=d-hdp-01:2388:3888
server.2=d-hdp-02:2388:3888
server.3=d-hdp-03:2388:3888
cd ../tmp
touch myid
Edit myid so it contains a single line with this server's id:
echo 1 > myid
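Each quorum member must have its own id in dataDir/myid, matching its server.N line in zoo.cfg. A small sketch of the pattern, using local demo directories to stand in for /home/hadoop/cloud/zookeeper/tmp on the three machines:

```shell
# One myid per server, matching server.1 / server.2 / server.3 in zoo.cfg.
# Local demo directories stand in for each machine's dataDir.
for id in 1 2 3; do
  dir="demo/server-$id/tmp"
  mkdir -p "$dir"
  echo "$id" > "$dir/myid"
done
cat demo/server-2/tmp/myid   # -> 2
```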
Edit the .bashrc file in the home directory and add:
#add for hadoop
export JAVA_HOME="/usr/lib/jvm/java-7-openjdk-amd64/jre"
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/lib:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
export HADOOP_HOME=/home/hadoop/cloud/hadoop
export PATH=.:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export ZOOKEEPER_HOME=/home/hadoop/cloud/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf
cd /home/hadoop/cloud
tar cvf zookeeper.tar zookeeper
scp zookeeper.tar hadoop@d-hdp-02:~/cloud
scp zookeeper.tar hadoop@d-hdp-03:~/cloud
SSH into d-hdp-02 and d-hdp-03, extract the tar file, and change the id in tmp/myid to 2 and 3 respectively. For example, on d-hdp-02:
ssh d-hdp-02
cd ~/cloud && tar -xvf zookeeper.tar
echo 2 > zookeeper/tmp/myid
When that is done, start ZooKeeper on each of the three machines:
zkServer.sh start
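If the quorum fails to form, a quick sanity check of zoo.cfg is a good first step: it should have a dataDir and exactly three server.N entries. A hedged sketch; the heredoc recreates the fragment from above in a stand-in file, so point cfg at the real file on each machine:

```shell
# Stand-in zoo.cfg with the fragment appended earlier (use the real file).
cfg=./zoo.cfg
cat > "$cfg" <<'EOF'
dataDir=/home/hadoop/cloud/zookeeper/tmp
dataLogDir=/home/hadoop/cloud/zookeeper/logs
server.1=d-hdp-01:2388:3888
server.2=d-hdp-02:2388:3888
server.3=d-hdp-03:2388:3888
EOF

# Expect exactly three quorum members and a dataDir setting.
grep -c '^server\.' "$cfg"   # -> 3
grep -q '^dataDir=' "$cfg" && echo "dataDir set"
```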
5. HBase installation
Extract hbase-1.2.0-bin.tar.gz under /home/hadoop/cloud:
cd /home/hadoop/cloud
tar -xvf hbase-1.2.0-bin.tar.gz
mv hbase-1.2.0 hbase
cd hbase/conf
Edit hbase-env.sh and add:
export HADOOP_HOME=/home/hadoop/cloud/hadoop
export HBASE_HOME=/home/hadoop/cloud/hbase
export JAVA_HOME="/usr/lib/jvm/java-7-openjdk-amd64/jre"
export HBASE_CLASSPATH=/home/hadoop/cloud/hbase/conf
export HBASE_LOG_DIR=${HBASE_HOME}/logs
export HBASE_MANAGES_ZK=false
Edit hbase-site.xml:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://d-hdp-01:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.master</name>
<value>d-hdp-01:60000</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hadoop/cloud/zookeeper/tmp</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>d-hdp-01,d-hdp-02,d-hdp-03</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
</configuration>
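hbase.rootdir must point at the same NameNode address as Hadoop's fs.defaultFS (hdfs://d-hdp-01:9000 in this cluster), or HBase will try to write to the wrong filesystem. A hedged sketch that extracts the value with sed and checks it; the stand-in file substitutes for the real hbase-site.xml:

```shell
# Stand-in for hbase-site.xml (use the real file in hbase/conf).
site=./hbase-site.xml
cat > "$site" <<'EOF'
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://d-hdp-01:9000/hbase</value>
</property>
</configuration>
EOF

# Pull the <value> on the line after the hbase.rootdir <name> line.
rootdir=$(sed -n '/hbase.rootdir/{n;s|.*<value>\(.*\)</value>.*|\1|p;}' "$site")
echo "$rootdir"   # -> hdfs://d-hdp-01:9000/hbase

# The scheme, host, and port must match Hadoop's fs.defaultFS.
case "$rootdir" in
  hdfs://d-hdp-01:9000/*) echo "matches fs.defaultFS" ;;
  *) echo "WARNING: rootdir does not match fs.defaultFS" ;;
esac
```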
Edit regionservers so it lists the region server hosts:
d-hdp-02
d-hdp-03
Copy and distribute the files:
cd ~/cloud
tar cvf hbase.tar hbase
scp hbase.tar hadoop@d-hdp-02:~/cloud
scp hbase.tar hadoop@d-hdp-03:~/cloud
cd ~
Edit .bashrc (on all three machines) so it contains:
#add for hadoop
export JAVA_HOME="/usr/lib/jvm/java-7-openjdk-amd64/jre"
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/lib:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
export HADOOP_HOME=/home/hadoop/cloud/hadoop
export PATH=.:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export ZOOKEEPER_HOME=/home/hadoop/cloud/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf
export HBASE_HOME=/home/hadoop/cloud/hbase
export PATH=$PATH:$HBASE_HOME/bin
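After sourcing .bashrc, a quick check that each *_HOME directory actually exists can catch path typos early. The paths are restated here so the snippet is self-contained; on a machine without these installs it simply reports them as missing:

```shell
# Restated from .bashrc so the check is self-contained.
JAVA_HOME="/usr/lib/jvm/java-7-openjdk-amd64/jre"
HADOOP_HOME=/home/hadoop/cloud/hadoop
ZOOKEEPER_HOME=/home/hadoop/cloud/zookeeper
HBASE_HOME=/home/hadoop/cloud/hbase

# Build one ok:/missing: entry per directory.
report=""
for d in "$JAVA_HOME" "$HADOOP_HOME" "$ZOOKEEPER_HOME" "$HBASE_HOME"; do
  if [ -d "$d" ]; then report="$report ok:$d"; else report="$report missing:$d"; fi
done
echo "$report"
```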
Log in to d-hdp-02 and d-hdp-03 and extract hbase.tar:
cd ~/cloud
tar -xvf hbase.tar
Start HBase on d-hdp-01:
start-hbase.sh
Finally, check the running processes on each machine (for example with jps) to verify the result.