In the previous post we set up four hosts to simulate a Hadoop working environment.
For details, see: http://blog.csdn.net/sinat_27902055/article/details/52998442
Now we add three more virtual machines to the cluster by updating /etc/hosts and Hadoop's slaves file on each node.
| Host | IP address | Installed software | Processes | Account/password |
| --- | --- | --- | --- | --- |
| master | 192.168.40.100 | hadoop/jdk | NameNode, SecondaryNameNode | hadoop/hadoop |
| weekend001 | 192.168.40.101 | hadoop/jdk | DataNode | hadoop/hadoop |
| weekend002 | 192.168.40.102 | hadoop/jdk | DataNode | hadoop/hadoop |
| weekend003 | 192.168.40.103 | hadoop/jdk | DataNode | hadoop/hadoop |
| weekend004 | 192.168.40.104 | hadoop/jdk | DataNode, ZooKeeper | hadoop/hadoop |
| weekend005 | 192.168.40.105 | hadoop/jdk | DataNode, ZooKeeper | hadoop/hadoop |
| weekend006 | 192.168.40.106 | hadoop/jdk | DataNode, ZooKeeper | hadoop/hadoop |
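Concretely, the /etc/hosts and slaves edits described above might look like the following sketch. The hostnames and IPs come from the table; the slaves file path assumes a typical Hadoop 2.x layout under $HADOOP_HOME, so adjust it for your install:

```shell
# Append the three new hosts to /etc/hosts on EVERY node
# (IP/hostname pairs taken from the table above)
cat >> /etc/hosts <<'EOF'
192.168.40.104 weekend004
192.168.40.105 weekend005
192.168.40.106 weekend006
EOF

# Register the new DataNodes in the slaves file on the master
# (assumed path: $HADOOP_HOME/etc/hadoop/slaves in Hadoop 2.x)
cat >> "$HADOOP_HOME/etc/hadoop/slaves" <<'EOF'
weekend004
weekend005
weekend006
EOF
```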
After the configuration is in place, start the Hadoop cluster. Do not format the NameNode again: reformatting generates a new cluster ID, and the existing DataNodes will then fail to start because their stored ID no longer matches.
On each newly added node, start the DataNode daemon so it registers with the NameNode:
sh hadoop-daemon.sh start datanode
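If a new DataNode still does not appear, you can also ask the NameNode to re-read its include/exclude lists and then check registration. A sketch using the standard hdfs dfsadmin tool, run as the hadoop user on the master:

```shell
# Tell the NameNode to re-read dfs.hosts / dfs.hosts.exclude
hdfs dfsadmin -refreshNodes

# Confirm the new DataNodes are registered and reporting capacity
hdfs dfsadmin -report
```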
Then run the HDFS balancer (it can be started from any node) to redistribute blocks onto the new DataNodes:
start-balancer.sh
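The balancer also accepts a threshold, the allowed deviation (in percent) of each DataNode's disk usage from the cluster average; a sketch:

```shell
# Rebalance until every DataNode's utilization is within 10% of the cluster average
start-balancer.sh -threshold 10
```

A smaller threshold gives a more even distribution but makes the balancer run longer.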
Run jps again and open the NameNode web UI on port 50070 to confirm that all the daemons are up.
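A quick verification sketch (jps ships with the JDK; port 50070 is the default NameNode HTTP UI in Hadoop 1.x/2.x, and the `master` hostname is the one from the table above):

```shell
# On the master: expect NameNode and SecondaryNameNode in the output
jps

# On each weekend00x node: expect DataNode
# (plus QuorumPeerMain on the nodes running ZooKeeper)
jps

# Query live-DataNode stats from the NameNode's JMX endpoint
curl -s 'http://master:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState'
```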
If you get a "nodemanager ... stop it first" message even though no NodeManager process is actually running,
run stop-all.sh first, then run start-all.sh again.