Installing the service
Single-node HDFS: NameNode, SecondaryNameNode, DataNode
Environment preparation:
Node: 10.1.253.178 (hostname: cdh1)
Create a user: liulu
Set up passwordless SSH from the node to itself (run ssh localhost and answer yes once, so no prompt is needed later)
Disable the firewall on the node
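Passwordless SSH to localhost matters because start-dfs.sh connects over ssh to launch each daemon. A minimal sketch using standard OpenSSH commands (not from the original notes; run as the liulu user):

```shell
# Create the user's .ssh directory with the permissions sshd requires.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate an RSA key pair only if this user does not already have one
# (-N "" means an empty passphrase, so ssh never prompts for one).
[ -f ~/.ssh/id_rsa ] || ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
# Authorize the key for logins to this same node.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# The first `ssh localhost` still asks about the host key; answer yes once.
```

After this, `ssh localhost` should open a shell without asking for a password.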
Packages:
jdk1.6.0_31.zip
hadoop-2.0.0-cdh4.2.1.tar.gz
Deployment steps:
1. Upload the packages and extract them
/home/liulu/app/hadoop-2.0.0-cdh4.2.1
/home/liulu/app/jdk1.6.0_31
2. Edit the following files
/home/liulu/.bash_profile:
export JAVA_HOME=/home/liulu/app/jdk1.6.0_31
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
(Note: run source ~/.bash_profile to apply the changes)
HDFS file 1: /home/liulu/app/hadoop-2.0.0-cdh4.2.1/etc/hadoop/core-site.xml:
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost</value>
</property>
HDFS file 2: /home/liulu/app/hadoop-2.0.0-cdh4.2.1/etc/hadoop/hdfs-site.xml:
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/liulu/data/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/liulu/data/datanode</value>
</property>
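The <property> snippets above are fragments: in both core-site.xml and hdfs-site.xml they must sit inside the file's root <configuration> element. As a sketch, the complete hdfs-site.xml built from the values above would be:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- Where the NameNode keeps its fsimage and edit logs -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/liulu/data/namenode</value>
  </property>
  <!-- Where the DataNode stores block data -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/liulu/data/datanode</value>
  </property>
</configuration>
```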
HDFS file 3: /home/liulu/app/hadoop-2.0.0-cdh4.2.1/etc/hadoop/hadoop-env.sh:
export JAVA_HOME=/home/liulu/app/jdk1.6.0_31
HDFS file 4: /home/liulu/app/hadoop-2.0.0-cdh4.2.1/etc/hadoop/slaves:
localhost
Note: the directory specified by dfs.namenode.name.dir must be created in advance.
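Before formatting, the local directories referenced in hdfs-site.xml have to exist. A minimal sketch (as the liulu user, whose $HOME is /home/liulu):

```shell
# Pre-create the local storage directories referenced by
# dfs.namenode.name.dir and dfs.datanode.data.dir.
mkdir -p "$HOME/data/namenode" "$HOME/data/datanode"
# Confirm they exist and are owned by the current user.
ls -ld "$HOME/data/namenode" "$HOME/data/datanode"
```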
3. Format the NameNode of the HDFS filesystem
cd /home/liulu/app/hadoop-2.0.0-cdh4.2.1/bin
./hdfs namenode -format
Note: after formatting, if the directory specified by dfs.datanode.data.dir still holds data from an old cluster, that data must be deleted.
4. Start the HDFS cluster
cd /home/liulu/app/hadoop-2.0.0-cdh4.2.1/sbin
./start-dfs.sh
5. Check the HDFS cluster
Check the daemon processes (NameNode, SecondaryNameNode, DataNode):
[liulu@cdh1 sbin]$ jps -m
14556 Jps -m
14310 SecondaryNameNode
14106 DataNode
13976 NameNode
Check the NameNode (port 50070) and SecondaryNameNode (port 50090) web pages:
http://10.1.253.178:50070/dfshealth.jsp
http://10.1.253.178:50090/status.jsp
6. HDFS operations
cd /home/liulu/app/hadoop-2.0.0-cdh4.2.1/bin
./hdfs dfs -mkdir /test
./hdfs dfs -put ~/testfile /test
7. Stop HDFS
cd /home/liulu/app/hadoop-2.0.0-cdh4.2.1/sbin
./stop-dfs.sh