********************* Standalone version
1. Environment: a VMware virtual machine running 64-bit Linux, hosting a single-node Hadoop 2.2.0.
2. Create a hadoop user and group for installing and running Hadoop 2.2.0.
3. Download hadoop-2.2.0.tar.gz (from the Hadoop website) and jdk-6u45-linux-x64.bin (from the Oracle website).
4. Upload both to the virtual machine, then unpack and install them as the hadoop user.
5. Configure as follows:
5.1 Add the following to /etc/hosts:
192.168.153.150 nn01
-- and remove the 127.0.0.1 entry, so that nn01 resolves to the machine's real IP rather than the loopback address
5.2 Add the following to the hadoop user's environment variables: PATH=$PATH:$HOME/bin
export JAVA_HOME=/jdk1.6.0_45
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export HADOOP_HOME=/hadoop/hadoop-2.2.0
export PATH=$PATH:$HADOOP_HOME
# Hadoop
export HADOOP_PREFIX="/hadoop/hadoop-2.2.0"
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export HADOOP_YARN_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX
5.3 Reload the profile so the variables take effect:
source .bash_profile
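To confirm the variables are picked up, both of these should print version information (a quick sanity check, not part of the original steps):
java -version
hadoop version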
5.4 Edit the Hadoop configuration files under hadoop-2.2.0/etc/hadoop. In core-site.xml, add inside <configuration>:
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.153.150:9000</value>
<final>true</final>
</property>
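Note: fs.default.name is the deprecated 1.x name for this key; Hadoop 2.x still honors it, but the current equivalent is fs.defaultFS:
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.153.150:9000</value>
</property>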
5.5 Edit hadoop-env.sh and add:
export JAVA_HOME=/jdk1.6.0_45
5.6 Edit hdfs-site.xml and add:
<configuration>
<property>
<name>dfs.datanode.data.dir</name>
<value>/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
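The name and data directories configured above should exist and be writable by the hadoop user before formatting; for example:
mkdir -p /hadoop/dfs/name /hadoop/dfs/data
chown -R hadoop:hadoop /hadoop/dfs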
5.7 Edit mapred-site.xml (in Hadoop 2.2.0 this file does not exist by default; create it from the bundled template as shown after the listing):
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>file:/hadoop/mapred/system</value>
<final>true</final>
</property>
<property>
<name>mapred.local.dir</name>
<value>file:/hadoop/mapred/local</value>
<final>true</final>
</property>
</configuration>
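Creating mapred-site.xml from the bundled template, with the install path used above:
cd /hadoop/hadoop-2.2.0/etc/hadoop
cp mapred-site.xml.template mapred-site.xml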
5.8 Edit yarn-site.xml:
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>192.168.153.150:8088</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>192.168.153.150</value>
<description>hostname of RM</description>
</property>
</configuration>
5.9 Edit yarn-env.sh and add:
export JAVA_HOME=/jdk1.6.0_45
6.0 With configuration complete, format the namenode:
hdfs namenode -format
Watch the output for error messages while formatting.
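To confirm the format succeeded, the name directory configured in hdfs-site.xml should now contain a current/ directory with a VERSION file and an initial fsimage:
ls /hadoop/dfs/name/current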
6.1 After formatting, start the namenode, datanode, ResourceManager, and NodeManager:
$ hadoop-daemon.sh start namenode
$ hadoop-daemon.sh start datanode
$ yarn-daemon.sh start resourcemanager
$ yarn-daemon.sh start nodemanager
Finally, check the running processes with jps:
[hadoop@nn01 sbin]$ jps
4097 Jps
3371 ResourceManager
3134 NameNode
3224 DataNode
3607 NodeManager
All of the above processes should be present; if one is missing, check its log under $HADOOP_PREFIX/logs.
6.2 Open the HDFS management page at http://192.168.153.150:50070 (50070 is the default NameNode web UI port).
6.3 Open the YARN page at http://192.168.153.150:8088 (the webapp address configured in yarn-site.xml above).
6.4 Test that HDFS works:
hadoop fs -ls /
hadoop fs -mkdir /home          -- create the /home directory
hadoop fs -put /hadoop/hadoop-2.2.0/LICENSE.txt /home/hadoop/          -- upload a file (create /home/hadoop first if it does not exist)
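To verify the upload, list the directory and read the file back:
hadoop fs -ls /home/hadoop
hadoop fs -cat /home/hadoop/LICENSE.txt | head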
Notes:
1. It is better to start the daemons one at a time than with ./start-all.sh or start-yarn.sh.
2. 14/04/27 04:32:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
This warning means the native libraries shipped with Hadoop do not match the server platform. It does not affect operation; rebuilding the native libraries for your platform makes it disappear.
********************* Cluster version
1. Add two more virtual machines to serve as datanode nodes.
2. Copy the standalone Hadoop directory to each new machine, keeping the path identical.
3. Configure /etc/hosts on the new machines.
4. Configure the slaves file on all three machines, listing the datanode hostnames (see the sketch below).
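Assuming the datanode hostnames shown in the report below (dn01 and dn03), /etc/hosts on every machine and etc/hadoop/slaves would look like:
# /etc/hosts
192.168.153.150 nn01
192.168.153.151 dn01
192.168.153.153 dn03
# etc/hadoop/slaves
dn01
dn03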
5. Set up passwordless SSH login:
echo "" > .ssh/authorized_keys
ssh-keygen
cat id_rsa.pub >> authorized_keys
Append every machine's public key to the authorized_keys on every other machine, so that each machine holds the keys of all the others.
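sshd rejects authorized_keys files with loose permissions, so tighten them on each machine; ssh-copy-id can also append a key and fix permissions in one step (hostnames assumed from the cluster above):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh-copy-id hadoop@dn01
ssh-copy-id hadoop@dn03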
6. Test with ssh that passwordless login works between all machines.
7. Start HDFS on the namenode:
./start-dfs.sh
The jps output on the namenode is then:
2050 NameNode
2309 Jps
2207 SecondaryNameNode
The datanode processes:
3817 DataNode
3905 Jps
8. Start the YARN processes:
./start-yarn.sh
The jps output on the namenode is then:
2050 NameNode
2416 ResourceManager
2207 SecondaryNameNode
2482 Jps
The ResourceManager process has been added.
The datanode processes:
[hadoop@dn01 hadoop]$ jps
3971 Jps
3817 DataNode
3941 NodeManager
9. Check the state of the machines:
./hdfs dfsadmin -report
Configured Capacity: 39187111936 (36.50 GB)
Present Capacity: 34221793280 (31.87 GB)
DFS Remaining: 34221744128 (31.87 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)
Live datanodes:
Name: 192.168.153.153:50010 (dn03)
Hostname: dn03
Decommission Status : Normal
Configured Capacity: 19593555968 (18.25 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2482696192 (2.31 GB)
DFS Remaining: 17110835200 (15.94 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.33%
Last contact: Sun Apr 27 07:22:14 EDT 2014
Name: 192.168.153.151:50010 (dn01)
Hostname: dn01
Decommission Status : Normal
Configured Capacity: 19593555968 (18.25 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2482622464 (2.31 GB)
DFS Remaining: 17110908928 (15.94 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.33%
Last contact: Sun Apr 27 07:22:13 EDT 2014
10. Run ./start-balancer.sh so the datanodes' space is used evenly.
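start-balancer.sh accepts a threshold (the tolerated deviation, in percent, from average disk utilization); for example:
./start-balancer.sh -threshold 5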
11. Test the cluster by uploading a 190 MB file:
hadoop fs -put hadoop-2.2.0.tar.gz /home/hadoop/
Disk usage on the datanodes then grows, showing that the file has been distributed across the datanode machines.
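fsck can confirm where the blocks landed; it reports each block of the file and the datanodes holding its replicas:
./hdfs fsck /home/hadoop/hadoop-2.2.0.tar.gz -files -blocks -locations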