5.2 Configure environment variables: vi /etc/profile (all machines)
export JAVA_HOME=/usr/java/default
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/lib:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
export HADOOP_HOME=/home/cbcloud/hadoop
export HADOOP_CONF_DIR=/home/cbcloud/hdconf
export PATH=$PATH:$HADOOP_HOME/bin
Move Hadoop's configuration files out of the source tree so that future Hadoop upgrades are easier:
mv the files in hadoop's conf directory into /home/cbcloud/hdconf
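A concrete version of that step, demonstrated on a throwaway copy of the layout so it can be tried safely; on hd203 the real paths are /home/cbcloud/hadoop/conf and /home/cbcloud/hdconf as above.

```shell
# Build a disposable copy of the layout under a temp dir.
base=$(mktemp -d)
mkdir -p "$base/hadoop/conf" "$base/hdconf"
touch "$base/hadoop/conf/core-site.xml" "$base/hadoop/conf/hdfs-site.xml"

# The actual step: move every conf file into the external config dir.
mv "$base/hadoop/conf/"* "$base/hdconf/"
ls "$base/hdconf"
```

With HADOOP_CONF_DIR pointing at /home/cbcloud/hdconf, the daemons pick up the relocated files, and replacing the Hadoop distribution later does not touch your configuration.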
5.3 Edit the Hadoop configuration file core-site.xml
Add:
<property>
  <name>fs.default.name</name>
  <value>hdfs://hd203:9000</value>
</property>
<property>
  <name>fs.checkpoint.dir</name>
  <value>/home/cbcloud/hdtmp/dfs/namesecondary</value>
  <description>Determines where on the local filesystem the DFS secondary
  name node should store the temporary images to merge.
  If this is a comma-delimited list of directories then the image is
  replicated in all of the directories for redundancy.</description>
</property>
<property>
  <name>fs.checkpoint.period</name>
  <value>60</value>
  <description>The number of seconds between two periodic checkpoints.</description>
</property>
5.4 Edit hdfs-site.xml
Add:
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/home/cbcloud/hddata</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/cbcloud/hdtmp/</value>
</property>
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <value>10485760</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/home/cbcloud/hdconf/excludes</value>
  <final>true</final>
</property>
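One detail worth checking: the file named by dfs.hosts.exclude should exist before the namenode starts (an empty file is fine), otherwise the namenode may fail when it tries to read it. A minimal sketch, assuming the hdconf path from this guide:

```shell
# Create an empty excludes file at the path referenced by dfs.hosts.exclude.
mkdir -p /home/cbcloud/hdconf
touch /home/cbcloud/hdconf/excludes
```

Hostnames of datanodes to decommission are later added to this file, one per line, followed by hadoop dfsadmin -refreshNodes.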
5.5 Edit mapred-site.xml
Add:
<property>
  <name>mapred.job.tracker</name>
  <value>hd203:9001</value>
</property>
5.6 Edit hadoop-env.sh
export JAVA_HOME=/usr/java/default
5.7 Edit masters. This file specifies the secondary namenode machine; add:
hd202
Edit slaves and add:
hd204
hd205
hd206
5.8 Copy hd203's hadoop and hdconf directories to all machines
# scp -r /home/cbcloud/hadoop cbcloud@hd204:/home/cbcloud
# scp -r /home/cbcloud/hdconf cbcloud@hd204:/home/cbcloud
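The two scp commands above have to be repeated for every target node; a loop keeps that manageable. A dry-run sketch (echo prints each command instead of running it; remove the echo on the real cluster, where the host list matches this guide):

```shell
# Dry run: print the scp commands that would distribute hadoop and hdconf
# from hd203 to each of the other nodes.
for h in hd204 hd205 hd206 hd202; do
  echo "scp -r /home/cbcloud/hadoop cbcloud@$h:/home/cbcloud"
  echo "scp -r /home/cbcloud/hdconf cbcloud@$h:/home/cbcloud"
done
```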
After copying, format the Hadoop filesystem on hd203 by running
hadoop namenode -format
Start the cluster:
start-all.sh
To check a datanode machine in the cluster, run jps:
5764 Jps
18142 DataNode
18290 TaskTracker
Output like the above means the daemons started correctly.
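Eyeballing jps output gets tedious across several nodes. A hedged helper (not part of the original guide) that scans a jps listing for the daemon names shown above:

```shell
# Check that the expected datanode daemons appear in a jps listing.
# Takes the jps output as a string argument.
check_daemons() {
  out="$1"
  for d in DataNode TaskTracker; do
    case "$out" in
      *"$d"*) echo "$d: running" ;;
      *)      echo "$d: MISSING" ;;
    esac
  done
}

# Fed with the sample output from the text:
check_daemons "5764 Jps
18142 DataNode
18290 TaskTracker"
```

On a live node you would call it as check_daemons "$(jps)"; on the master you would look for NameNode, SecondaryNameNode, and JobTracker instead.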
Web UI:
http://hd203:50070/dfshealth.jsp
Note: your local PC's hosts file must also contain these entries:
192.168.0.203 hd203
192.168.0.204 hd204
192.168.0.205 hd205
192.168.0.206 hd206
192.168.0.202 hd202
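The hosts entries above can be appended in one shot. Shown here against a temporary file so it is safe to try; on a real PC the target is /etc/hosts (and requires root):

```shell
# Append the cluster name/IP entries to a hosts file.
HOSTS_FILE=$(mktemp)   # substitute /etc/hosts on a real machine
cat >> "$HOSTS_FILE" <<'EOF'
192.168.0.202 hd202
192.168.0.203 hd203
192.168.0.204 hd204
192.168.0.205 hd205
192.168.0.206 hd206
EOF
grep -c '^192\.168\.0\.' "$HOSTS_FILE"
```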
The web UI shows cluster status, job status, and so on. Hadoop installation is now complete.
6 Install ZooKeeper (hd203)
tar zxvf zookeeper-3.3.3-cdh3u0.tar.gz -C /home/cbcloud
On hd204-hd206:
mkdir /home/cbcloud/zookeeperdata
chown -R cbcloud:cbcloud /home/cbcloud/zookeeperdata
chown -R cbcloud:cbcloud /home/cbcloud/zookeeper-3.3.3-cdh3u0
Edit /home/cbcloud/zookeeper-3.3.3-cdh3u0/conf/zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/home/cbcloud/zookeeperdata
# the port at which the clients will connect
clientPort=2181
server.1=hd204:2888:3888
server.2=hd205:2888:3888
server.3=hd206:2888:3888
scp hd203's zookeeper directory to hd204, hd205, and hd206:
# scp -r /home/cbcloud/zookeeper-3.3.3-cdh3u0/ cbcloud@hd205:/home/cbcloud/
In /home/cbcloud/zookeeperdata on hd204-hd206, touch a myid file whose content is 1, 2, and 3 respectively, matching the server.N numbers in zoo.cfg, then chown cbcloud:cbcloud myid.
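A dry run of the myid assignment (ids 1-3 follow the server.N lines in zoo.cfg). Written under a temp dir here so it can be tested locally; on the cluster each node writes its own /home/cbcloud/zookeeperdata/myid:

```shell
# Simulate the three nodes' data dirs under one temp root.
root=$(mktemp -d)
id=1
for h in hd204 hd205 hd206; do
  mkdir -p "$root/$h/zookeeperdata"
  echo "$id" > "$root/$h/zookeeperdata/myid"   # id must match server.$id in zoo.cfg
  id=$((id + 1))
done
cat "$root/hd206/zookeeperdata/myid"
```

A mismatched myid is a common cause of a quorum that never forms, so it is worth double-checking the mapping against zoo.cfg.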
Start ZooKeeper. In the bin directory on hd204-hd206, run:
# zkServer.sh start
After starting, check the status with
# zkServer.sh status
Note: on CentOS 5.6 this command fails with
Error contacting service. It is probably not running.
Inspecting the script shows the cause: it runs
echo stat | nc -q 1 localhost
but this version of nc has no -q option. Edit the script to remove "-q 1" and it works.
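A sketch of that edit, demonstrated on a copy of the offending line so it can be tried safely; on the cluster you would run the same sed against bin/zkServer.sh itself (back it up first):

```shell
# Reproduce the problematic invocation in a scratch file.
f=$(mktemp)
echo 'STAT=`echo stat | nc -q 1 localhost $clientPort 2> /dev/null`' > "$f"

# Strip the "-q 1" flag that the CentOS 5.6 nc does not support.
sed -i 's/nc -q 1/nc/' "$f"
cat "$f"
```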
Alternatively, check the status with
echo stat | nc localhost 2181
7 Install HBase
7.1 Create directories (all machines)
mkdir /home/cbcloud/hbconf
chown -R cbcloud:cbcloud /home/cbcloud/hbconf
tar zxvf hbase-0.90.1-cdh3u0.tar.gz -C /home/cbcloud
cd /home/cbcloud
mv hbase-0.90.1-cdh3u0 hbase
chown -R cbcloud:cbcloud hbase/
7.2 Configure environment variables
vi /etc/profile (all machines) and append the following:
export HBASE_CONF_DIR=/home/cbcloud/hbconf
export HBASE_HOME=/home/cbcloud/hbase
Move HBase's configuration files out of the source tree so that future HBase upgrades are easier:
mv the files in hbase's conf directory into /home/cbcloud/hbconf
7.3 Edit hbase-env.sh
export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
export JAVA_HOME=/usr/java/default
export HBASE_MANAGES_ZK=false
export HBASE_HOME=/home/cbcloud/hbase
export HADOOP_HOME=/home/cbcloud/hadoop
7.4 Edit hbase-site.xml
Add:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hd203:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>hd203:60000</value>
</property>
<property>
  <name>hbase.master.port</name>
  <value>60000</value>
  <description>The port master should bind to.</description>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hd204,hd205,hd206</value>
</property>
7.5 Edit regionservers
Add:
hd204
hd205
hd206
scp hd203's hbase and hbconf directories to hd204-hd206 and hd202:
# scp -r /home/cbcloud/hbase/ cbcloud@hd204:/home/cbcloud
# scp -r /home/cbcloud/hbconf/ cbcloud@hd204:/home/cbcloud