Prerequisites:
Install VMware on Windows 7 and set up an Ubuntu virtual machine,
since Linux is the only platform Hadoop supports for production use.
Installing JDK 1.7
1. Open a terminal and create a new directory, /usr/local/java, by running: sudo mkdir /usr/local/java
2. Move the downloaded JDK archive into /usr/local/java, cd into that directory, and unpack it: sudo tar -zxvf jdk-7u75-linux-i586.gz
3. Configure the JDK environment variables: open the file with vim ~/.bashrc
and add:
export JAVA_HOME=/usr/local/java/jdk1.7.0_75
export PATH=${JAVA_HOME}/bin:$PATH
Run source ~/.bashrc to make the settings take effect immediately.
4. Verify the installation by running javac (e.g. javac -version).
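The edit in step 3 can also be scripted so that re-running it does not duplicate entries in ~/.bashrc. A sketch, assuming the JDK was unpacked to /usr/local/java/jdk1.7.0_75 as above (BASHRC is overridable so the snippet can be tried against a scratch file):

```shell
# Append the JDK variables to ~/.bashrc only if they are not already there.
BASHRC="${BASHRC:-$HOME/.bashrc}"
JAVA_HOME_LINE='export JAVA_HOME=/usr/local/java/jdk1.7.0_75'
PATH_LINE='export PATH=${JAVA_HOME}/bin:$PATH'
# -x matches the whole line, -F matches it literally (no regex surprises).
grep -qxF "$JAVA_HOME_LINE" "$BASHRC" 2>/dev/null || \
  printf '%s\n%s\n' "$JAVA_HOME_LINE" "$PATH_LINE" >> "$BASHRC"
```

The grep guard is what makes the script safe to run twice: a plain `>>` would append a second copy of both lines on every run.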
Installing SSH and configuring passwordless login
1. Install SSH: sudo apt-get install ssh
2. Start the SSH service: sudo /etc/init.d/ssh start
3. Set up passwordless login: run ssh-keygen -t rsa and press Enter through every prompt. This generates id_rsa and id_rsa.pub under /home/hadoop/.ssh. cd into /home/hadoop/.ssh and run: cat id_rsa.pub >> authorized_keys
4. Passwordless login is now configured; verify it with: ssh localhost
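One detail worth adding to step 3: sshd rejects keys when ~/.ssh or authorized_keys are group- or world-writable, so it pays to set the permissions explicitly. A sketch of the whole sequence, run against a temporary directory so it can be tried without touching your real keys (use /home/hadoop/.ssh in practice; assumes ssh-keygen is installed):

```shell
SSH_DIR="$(mktemp -d)"                           # stand-in for ~/.ssh
ssh-keygen -t rsa -N "" -f "$SSH_DIR/id_rsa" -q  # empty passphrase, no prompts
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"                   # sshd (with StrictModes) requires
chmod 600 "$SSH_DIR/authorized_keys"   # these permissions to accept the key
```

If `ssh localhost` still asks for a password after step 4, wrong permissions on these two paths are the most common cause.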
Installing Hadoop
1. Create the installation directory: /usr/local/hadoop
2. Move the downloaded hadoop-2.6.0 archive into this directory and unpack it.
3. Edit ~/.bashrc, i.e. run: vim ~/.bashrc
and add:
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export PATH=$HADOOP_HOME/bin:$PATH
Then edit the hosts file (root required): sudo vim /etc/hosts
changing its content to:
127.0.0.1 localhost
#127.0.1.1 master
192.168.213.128 master
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
4. Edit the configuration file hadoop-env.sh (under $HADOOP_HOME/etc/hadoop/)
and add:
export JAVA_HOME=/usr/local/java/jdk1.7.0_75
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin
5. Edit the configuration file core-site.xml (note: fs.default.name is the old 1.x property name; Hadoop 2.x still honours it but the current name is fs.defaultFS)
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
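A quick way to confirm the value actually landed in the file is to extract it with grep/sed. A sketch; the config is inlined via a here-doc so the snippet is self-contained, but in practice you would point CONF at $HADOOP_HOME/etc/hadoop/core-site.xml:

```shell
# Sanity-check core-site.xml: extract the default filesystem URI.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
EOF
# Grab the <value> on the line following the fs.default.name <name> element.
FS_URI=$(grep -A1 '<name>fs.default.name</name>' "$CONF" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "$FS_URI"
```

A typo here (wrong hostname or port) is a frequent reason the NameNode later refuses connections, so checking before formatting saves a debugging round-trip.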
6. Edit the configuration file hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/hdfs/data</value>
</property>
</configuration>
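The two storage directories named above are not created automatically with the right ownership, so it is common practice to create them up front before formatting. A sketch; HDFS_ROOT defaults to the path used in the config and is overridable for a dry run:

```shell
# Create the NameNode and DataNode directories from hdfs-site.xml.
HDFS_ROOT="${HDFS_ROOT:-/usr/local/hadoop/hdfs}"
mkdir -p "$HDFS_ROOT/name" "$HDFS_ROOT/data"
# If these were created as root, hand them to the user that runs Hadoop, e.g.:
# sudo chown -R hadoop:hadoop "$HDFS_ROOT"
```

If the DataNode later fails to start with a permission error on dfs.datanode.data.dir, the chown step above is the usual fix.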
7. Edit the configuration file mapred-site.xml (if it does not exist, copy it from mapred-site.xml.template in the same directory)
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
</property>
</configuration>
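One caveat on this step: mapred.job.tracker is a Hadoop 1.x property, and 2.6.0 has no JobTracker, so the value above is effectively ignored. If you want MapReduce jobs to actually run on this cluster, the 2.x way is to hand them to YARN:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

This also requires the YARN daemons (ResourceManager/NodeManager), which start-all.sh in step 9 brings up alongside HDFS.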
8. Edit the slaves file (also under $HADOOP_HOME/etc/hadoop/), whose content should be:
master
At this point configuration is complete. Format the filesystem by running: hadoop namenode -format (in 2.x this form is deprecated; hdfs namenode -format is preferred)
9. Start Hadoop by running: ./start-all.sh (in 2.6.0 the start scripts live under $HADOOP_HOME/sbin/, not bin/)
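Whether start-up succeeded can be checked with jps, which ships with the JDK and lists the running Java processes. On a working single-node setup you should see at least the HDFS daemons (NameNode, DataNode, SecondaryNameNode). A sketch with a guard so it degrades gracefully when jps is not on the PATH:

```shell
# List Hadoop's Java daemons; fall back to a hint if jps is unavailable.
if command -v jps >/dev/null 2>&1; then
  OUT="$(jps)"
else
  OUT="jps not found; ensure \$JAVA_HOME/bin is on PATH"
fi
echo "$OUT"
```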