The virtual machine's network is set to Host-only mode, which corresponds to the host's VMnet1 adapter.
The VM's IP is configured as shown below:
On the Windows host, configure the VMnet1 adapter:
The Linux system's IP address must therefore lie between 192.168.122.128 and 192.168.122.254, the range configured in the first step, as shown below:
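The range check above can be sketched as a small shell helper; `in_range` is a hypothetical function for illustration, not part of any tool:

```shell
# Sketch: check whether a candidate VM IP falls inside the host-only
# DHCP range 192.168.122.128-192.168.122.254 configured above.
in_range() {
  ip="$1"
  case "$ip" in
    192.168.122.*) last="${ip##*.}" ;;   # keep only the final octet
    *) return 1 ;;                       # wrong subnet entirely
  esac
  [ "$last" -ge 128 ] && [ "$last" -le 254 ]
}

in_range 192.168.122.130 && echo "192.168.122.130 is usable"
in_range 192.168.122.100 || echo "192.168.122.100 is outside the guest range"
```

Note that the host's own VMnet1 address (192.168.122.100 in the ping test below) deliberately sits outside this guest range.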
Then ping between the VM and the host in both directions; if both pings succeed, the network is usable:
[root@Master ~]# ping 192.168.122.100
PING 192.168.122.100 (192.168.122.100) 56(84) bytes of data.
64 bytes from 192.168.122.100: icmp_seq=1 ttl=64 time=0.445 ms
64 bytes from 192.168.122.100: icmp_seq=2 ttl=64 time=0.328 ms
64 bytes from 192.168.122.100: icmp_seq=3 ttl=64 time=0.349 ms
From the host, ping the VM: ping 192.168.122.130
Set the CentOS hostname to Master (# vim /etc/sysconfig/network):
NETWORKING=yes
HOSTNAME=Master
Next, map the IP address to the hostname (# vim /etc/hosts):
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.130 Master
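The hostname-to-IP mapping can be sanity-checked with a short script; `check_mapping` is a hypothetical helper, and the sample file below mirrors the /etc/hosts entries above:

```shell
# Sketch: verify that a hosts-format file maps a hostname to the expected IP.
# $1 = hosts file, $2 = hostname, $3 = expected IP
check_mapping() {
  awk -v h="$2" -v ip="$3" \
    '$1==ip { for (i=2; i<=NF; i++) if ($i==h) found=1 } END { exit !found }' "$1"
}

# Sample copy of the entries above (in practice, point at /etc/hosts itself).
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost localhost.localdomain
192.168.122.130 Master
EOF

check_mapping /tmp/hosts.sample Master 192.168.122.130 && echo "mapping OK"
```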
We will reboot the system later, after the Java environment variables and the rest are set, so that these edits take effect (# reboot).
Next, install the JDK and set its environment variables (# vim /etc/profile):
export JAVA_HOME="/home/jdk1.7.0_55"
export CLASSPATH="$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH"
export PATH=".:$JAVA_HOME/bin:$PATH"
Reload the file: # source /etc/profile
Verify that the JDK is installed correctly: # java -version
Then install Hadoop 2.5 and configure its environment variables in /etc/profile as well:
export JAVA_HOME="/home/jdk1.7.0_55"
export HADOOP_HOME="/home/jzz/hadoop-2.5.0"
export CLASSPATH="$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH"
export PATH=".:$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH"
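After sourcing /etc/profile, a quick check that both bin directories actually landed on PATH can catch typos early; the paths below are the ones assumed by the profile entries above:

```shell
# Sketch: confirm the JDK and Hadoop bin directories are on PATH.
export JAVA_HOME="/home/jdk1.7.0_55"
export HADOOP_HOME="/home/jzz/hadoop-2.5.0"
export PATH=".:$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH"

for dir in "$JAVA_HOME/bin" "$HADOOP_HOME/bin"; do
  case ":$PATH:" in
    *":$dir:"*) echo "on PATH: $dir" ;;
    *)          echo "missing from PATH: $dir" ;;
  esac
done
```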
Edit the five configuration files under etc/hadoop in the Hadoop 2.5 installation directory:
1.hadoop-env.sh:
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/home/jdk1.7.0_55
2.hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
3.core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://Master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/jzz/hadoop-2.5.0/tmp/hadoop_tmp</value>
</property>
</configuration>
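The directory named by hadoop.tmp.dir above does not exist by default, so it is worth creating up front, before the NameNode is formatted. A minimal sketch (the /tmp fallback is only for illustration when the configured path is not writable):

```shell
# Create the hadoop.tmp.dir configured in core-site.xml.
HADOOP_TMP="/home/jzz/hadoop-2.5.0/tmp/hadoop_tmp"
mkdir -p "$HADOOP_TMP" 2>/dev/null \
  || { HADOOP_TMP="/tmp/hadoop_tmp"; mkdir -p "$HADOOP_TMP"; }

test -d "$HADOOP_TMP" && echo "tmp dir ready: $HADOOP_TMP"
```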
4.mapred-site.xml (in the unpacked directory, rename mapred-site.xml.template to mapred-site.xml first):
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
5.yarn-site.xml:
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
Finally, reboot the system (# reboot).
Format the NameNode:
[root@Master bin]# hdfs namenode -format
Start the Hadoop services (start-all.sh is deprecated in Hadoop 2.x; running start-dfs.sh followed by start-yarn.sh does the same):
[root@Master sbin]# start-all.sh
Check the daemon processes:
[root@Master hadoop-2.5.0]# jps
5948 SecondaryNameNode
5724 DataNode
5641 NameNode
10826 Jps
6292 NodeManager
6189 ResourceManager
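The jps check above can be automated; the sketch below runs against a sample copy of the output shown, but in practice you would set JPS_OUT="$(jps)":

```shell
# Sketch: verify that all five expected Hadoop daemons appear in jps output.
JPS_OUT="5948 SecondaryNameNode
5724 DataNode
5641 NameNode
6292 NodeManager
6189 ResourceManager"

missing=0
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  echo "$JPS_OUT" | grep -qw "$d" || { echo "missing daemon: $d"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all daemons running"
```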
Access the Hadoop web UI (the NameNode status page):
http://192.168.122.130:50070