I. Preparation (environment: Ubuntu 12.04)
Assume we have three machines: one master and two slaves. Do all of the following steps on the master first, then sync the results to the slaves with scp.
1. Create the hadoop user and set up passwordless SSH trust from the master to both slaves (a sketch of the SSH key setup is shown after the commands below).
adduser hadoop
usermod hadoop -G sudo -a    # add hadoop to the sudo group
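A minimal sketch of the trust setup (run as the hadoop user on the master; node1 and node2 are the slave host names configured in step 4):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    # generate a key pair with an empty passphrase
ssh-copy-id hadoop@node1                    # install the public key on node1
ssh-copy-id hadoop@node2                    # install the public key on node2
ssh node1 hostname                          # verify that passwordless login works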
2. Download & extract the Hadoop 2.3.0 tarball
Pick a mirror here and download the Hadoop 2.3.0 tarball:
http://www.apache.org/dyn/closer.cgi/hadoop/common/
Here we extract it to /home/hadoop/hadoop.
The directory layout looks like this:
hadoop@master:~/hadoop$ pwd
/home/hadoop/hadoop
hadoop@master:~/hadoop$ ls
bin etc include lib libexec LICENSE.txt logs NOTICE.txt README.txt sbin share
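For example, using the Apache archive as the mirror (any mirror from the list above works the same way):

cd /home/hadoop
wget http://archive.apache.org/dist/hadoop/core/hadoop-2.3.0/hadoop-2.3.0.tar.gz
tar -xzf hadoop-2.3.0.tar.gz
mv hadoop-2.3.0 hadoop    # so the tree ends up at /home/hadoop/hadoop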
3. Download the JDK
Download the JDK directly from here:
http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
Pick the build that matches your OS; it only needs to be extracted to a directory of your choice, nothing has to be installed or run.
Here we extract it to /usr/local/jdk.
hadoop@master:/usr/local/jdk$ pwd
/usr/local/jdk
hadoop@master:/usr/local/jdk$ ls
bin db jre LICENSE README.html src.zip THIRDPARTYLICENSEREADME.txt
COPYRIGHT include lib man release THIRDPARTYLICENSEREADME-JAVAFX.txt
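One way to get it there (the tarball name below, jdk-7u51-linux-x64.tar.gz, is only an example; use whatever file you actually downloaded):

sudo mkdir -p /usr/local/jdk
sudo tar -xzf jdk-7u51-linux-x64.tar.gz -C /usr/local/jdk --strip-components=1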
4. Edit the configuration files
/etc/hostname:
On the master this file contains just master; on the two slaves it contains node1 and node2 respectively.
/etc/hosts (adjust the IP addresses to your own environment):
127.0.0.1 localhost
192.168.204.128 master
192.168.204.129 node1
192.168.204.130 node2
/etc/profile (append at the end):
#hadoop
export JAVA_HOME=/usr/local/jdk
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_ROOT=/home/hadoop
export HADOOP_HOME=$HADOOP_ROOT/hadoop
export PATH=$HADOOP_ROOT/hadoop/bin:$HADOOP_ROOT/hadoop/sbin:$PATH
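After appending these lines, reload the profile and check that both java and hadoop are on the PATH:

source /etc/profile
java -version      # should print the JDK 7 version string
hadoop version     # should print Hadoop 2.3.0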
Open the Hadoop configuration directory (/home/hadoop/hadoop/etc/hadoop).
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/data1/hadoop-nn</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master/</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.hosts.exclude</name>
<value>/home/hadoop/hadoop/etc/hadoop/dfs.exclude</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/data1/hadoop-dn,/data2/hadoop-dn</value>
</property>
</configuration>
As the configuration above shows, the namenode stores its metadata under /data1/hadoop-nn,
and the datanodes store their blocks under /data1/hadoop-dn and /data2/hadoop-dn,
so make sure these directories exist and have disks with free space mounted on them.
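A quick sketch for creating them (run on every node; assumes /data1 and /data2 are already mounted):

sudo mkdir -p /data1/hadoop-nn /data1/hadoop-dn /data2/hadoop-dn
sudo chown -R hadoop:hadoop /data1/hadoop-nn /data1/hadoop-dn /data2/hadoop-dn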
slaves:
node1
node2
5. Sync the configuration files to the slaves
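A possible way to do the sync (a sketch, assuming the layout above; /etc/hosts, /etc/hostname and /etc/profile still have to be adjusted on each slave with root privileges):

for host in node1 node2; do
    rsync -a /home/hadoop/hadoop/ $host:/home/hadoop/hadoop/    # the Hadoop tree, including etc/hadoop configs
    rsync -a /usr/local/jdk/ $host:/tmp/jdk/                    # stage the JDK; move it to /usr/local/jdk with sudo on the slave
done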
II. Start HDFS
1. On the namenode, run:
hadoop-daemon.sh --script hdfs start namenode
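Note: on a brand-new cluster the namenode metadata directory (/data1/hadoop-nn above) has to be formatted once before the namenode is started for the first time, otherwise it will not come up:

hdfs namenode -format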
2. On each of the two datanodes, run:
hadoop-daemon.sh --script hdfs start datanode
III. Test HDFS
On any of the servers, run:
hadoop fs -ls /
hadoop fs -put ./test /
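Here ./test is just any small local file created beforehand. If HDFS is healthy, the put succeeds and the file is visible from any node; a few extra checks:

hadoop fs -ls /            # /test should now be listed
hadoop fs -cat /test       # prints the file contents back
hdfs dfsadmin -report      # should report two live datanodes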
Download URL for all Hadoop releases, old and new: http://archive.apache.org/dist/hadoop/core/