On every VM, edit /usr/local/hadoop/etc/hadoop/core-site.xml:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>
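For context, this property must sit inside the file's <configuration> root element. A minimal sketch of the complete file (on a multi-node cluster the NameNode's hostname, here assumed to be `master`, takes the place of `localhost`):

```xml
<?xml version="1.0"?>
<configuration>
    <!-- Every node points at the NameNode running on master -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>
```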
Format the NameNode (run once, on master):
[root@master hadoop]# hdfs namenode -format
Start the NameNode and DataNode daemons individually:
[root@master hadoop]# hadoop-daemon.sh start namenode
[root@slave1 hadoop]# hadoop-daemon.sh start datanode
Starting the whole cluster at once
On master, list the worker hostnames in /usr/local/hadoop/etc/hadoop/slaves:
slave1
slave2
slave3
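The slaves file is just one hostname per line, so it can be generated from the shell. A sketch (written to /tmp here for illustration; the real path is /usr/local/hadoop/etc/hadoop/slaves):

```shell
# Write the three worker hostnames, one per line
printf '%s\n' slave1 slave2 slave3 > /tmp/slaves
cat /tmp/slaves
```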
Then run:
[root@master hadoop]# start-dfs.sh
Passwordless SSH login
cd ~/.ssh
[root@master .ssh]# ssh-keygen -t rsa
[root@master .ssh]# ssh-copy-id slave1
You can now ssh to slave1 without a password; master also needs a copy of its own key so that start-dfs.sh can reach it:
[root@master .ssh]# ssh-copy-id master
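The per-host copies above can be collapsed into a loop. A sketch, printed as a dry run via `echo` so the commands are visible; drop the `echo` to actually distribute the key (hostnames are the ones used in this guide):

```shell
# Dry run: show the ssh-copy-id command for every node in the cluster
for h in master slave1 slave2 slave3; do
  echo ssh-copy-id "$h"
done
```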
Monitoring
[root@master hadoop]# hdfs dfsadmin -report | more
or visit the NameNode web UI at http://192.168.56.81:50070
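The report from `hdfs dfsadmin -report` lists each live DataNode on a `Name:` line, so standard text tools can summarize it. A sketch that counts workers; the here-document stands in for real report output on a 3-worker cluster (the slave IPs shown are assumptions):

```shell
# Count DataNodes by counting "Name:" lines in a sample report
grep -c '^Name:' <<'EOF'
Name: 192.168.56.82:50010 (slave1)
Name: 192.168.56.83:50010 (slave2)
Name: 192.168.56.84:50010 (slave3)
EOF
```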
Stopping
[root@master hadoop]# hadoop-daemon.sh stop namenode
[root@slave1 hadoop]# hadoop-daemon.sh stop datanode
or:
[root@master hadoop]# stop-dfs.sh