1. Set the same date on every node (the cluster heartbeat depends on synchronized clocks): date -s "yyyy-mm-dd HH:mm:ss"
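A minimal sketch of the timestamp layout that date -s expects (actually setting the clock needs root, so only the formatting is demonstrated here):

```shell
# Produce a timestamp in the yyyy-mm-dd HH:mm:ss layout that date -s accepts.
fmt='%Y-%m-%d %H:%M:%S'
now=$(date +"$fmt")
echo "$now"
# On each node, as root, the same moment would then be set with:
#   date -s "$now"
```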
2. Check the hostname and host aliases: cat /etc/sysconfig/network; cat /etc/hosts
3. Run ssh localhost once to generate the .ssh directory.
4. The Linux scp command copies files and directories between Linux hosts. From inside the .ssh directory on node01, distribute the public key: scp id_dsa.pub node04:`pwd`/node01.pub
Note: `pwd` is wrapped in backticks (the key above Tab), not single quotes.
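What the copy in step 4 is for, sketched locally: before passwordless login works, node04 must append the received node01.pub to its ~/.ssh/authorized_keys. A temp directory stands in for node04's .ssh directory here, and the key line is a placeholder:

```shell
# Simulate node04's side of the key exchange in a temp directory.
dir=$(mktemp -d)
echo "ssh-dss AAAA...placeholder node01" > "$dir/node01.pub"  # stand-in for the copied key
cat "$dir/node01.pub" >> "$dir/authorized_keys"               # the append node04 must run
grep -c "node01" "$dir/authorized_keys"                       # 1 -> key installed
```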
5. Edit Hadoop's core-site.xml and set the data storage location hadoop.tmp.dir:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node01:9001</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/sxt/hadoop/full</value>
    </property>
</configuration>
Edit hdfs-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node02:50091</value>
    </property>
</configuration>
6. Configure the Hadoop environment variables in /etc/profile.
7. Format the NameNode: hdfs namenode -format
Start the cluster: start-dfs.sh
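After start-dfs.sh, a quick way to confirm the daemons came up is to run jps (which ships with the JDK) on every node. The helper below is a sketch assumed by this note, not part of Hadoop; the hostnames match the nodes used above:

```shell
# Run one command on every node and label each node's output.
NODES="node01 node02 node03 node04"
each_node() {
    for h in $NODES; do
        echo "== $h =="
        ssh "$h" "$@"
    done
}
# After start-dfs.sh:  each_node jps
# node01 should list NameNode, node02 SecondaryNameNode (per
# dfs.namenode.secondary.http-address above), and DataNode appears
# on whichever hosts the slaves file names.
```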
8. Create a home directory (the default path also works): hdfs dfs -mkdir -p /user/root
9. Put a file into the cluster, overriding the block size on the command line:
hdfs dfs -D dfs.blocksize=1048576 -put 1.text
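Why 1048576: dfs.blocksize is given in bytes, and 1048576 bytes is exactly 1 MiB, far below the HDFS 2.x default of 128 MiB, so even a small file is split into several blocks, which makes block placement easy to observe. The arithmetic:

```shell
# dfs.blocksize is in bytes: 1024 * 1024 = 1048576 = 1 MiB.
blocksize=$((1024 * 1024))
echo "$blocksize"   # 1048576
# A file larger than 1 MiB uploaded with this flag therefore
# spans multiple blocks in HDFS.
```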
ss -nal (verify which ports the daemons are listening on)