•Stop the (old) cluster: stop-dfs.sh
•1. Every server needs:
–the JDK installed
–/etc/hosts entries (host aliases, so nodes can reach each other by name)
–clocks synchronized
–environment variables configured
–passwordless SSH login
•The control node distributes its own id_dsa.pub to the other nodes via scp
•cat ~/node1.pub >> ~/.ssh/authorized_keys
–mkdir /opt/sxt
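The preparation steps above can be sketched as one shell session run from the control node (node01). The hostnames, the /opt/sxt directory, and the node1.pub filename come from the notes; the IP addresses and the four-node layout are illustrative assumptions.

```shell
# Run on node01 (control node). IPs below are example values - adjust
# to your cluster. JDK install and clock sync are assumed done already.
mkdir -p /opt/sxt

# Host aliases so nodes can reach each other by name (example IPs):
cat >> /etc/hosts <<'EOF'
192.168.1.101 node01
192.168.1.102 node02
192.168.1.103 node03
192.168.1.104 node04
EOF

# Generate a DSA key pair once on node01...
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys   # node01 -> node01

# ...then copy the public key to every other node and append it to
# that node's authorized_keys (prompts for a password this one time):
for n in node02 node03 node04; do
  scp ~/.ssh/id_dsa.pub "$n":~/node1.pub
  ssh "$n" 'cat ~/node1.pub >> ~/.ssh/authorized_keys'
done
```

After this, `ssh node02` (etc.) from node01 should log in without a password, which start-dfs.sh relies on to launch daemons on the worker nodes.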
•core-site.xml
• <property>
• <name>fs.defaultFS</name>
• <value>hdfs://node01:9000</value>
• </property>
• <property>
• <name>hadoop.tmp.dir</name>
• <value>/var/sxt/hadoop/full</value>
• </property>
•hdfs-site.xml
• <property>
• <name>dfs.replication</name>
• <value>3</value>
• </property>
• <property>
• <name>dfs.namenode.secondary.http-address</name>
• <value>node02:50090</value>
• </property>
•slaves
• node02
• node03
• node04
•4. Confirm that any previously running Hadoop processes have stopped
–jps
•5. hdfs namenode -format (on node01 only)
•6,start-dfs.sh
•7. Verify with jps on every node; then check the NameNode web UI at node01:50070
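Steps 4 through 7 can be sketched as the following sequence on node01. The four-node loop is an assumption based on the slaves file above; port 50070 is the Hadoop 2.x NameNode web UI default, matching the notes.

```shell
# 4. Make sure no old Hadoop JVMs are still running on any node:
for n in node01 node02 node03 node04; do ssh "$n" jps; done

# 5. Format the NameNode - on node01 only, and only once; a reformat
#    wipes the metadata stored under hadoop.tmp.dir (/var/sxt/hadoop/full):
hdfs namenode -format

# 6. Start HDFS. Per the config above this launches the NameNode on
#    node01, DataNodes on node02-node04 (the slaves file), and the
#    SecondaryNameNode on node02:
start-dfs.sh

# 7. Verify: jps on each node should show the expected daemons, then
#    browse the NameNode web UI at http://node01:50070
for n in node01 node02 node03 node04; do ssh "$n" jps; done
```

If a DataNode is missing from jps output, its log under the Hadoop logs directory on that node is the first place to look.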