Before deploying, let's first open the environment and take a look:
the NameNode and SecondaryNameNode both start with hadoop001, but the DataNode does not.
1. Check the IP address and configure the internal IP ------ so that all three processes start with hadoop001
The command to check the IP address on Linux is: ifconfig
As shown in the figure --- the arrow points to the current Linux system's internal IP.
(The equivalent command on Windows is: ipconfig.)
Switch to the root user to configure the internal IP mapping.
Inside the red box is the internal IP; hadoop001 is the machine name.
Advantage: if the internal IP changes later, you only swap in the new IP here --- there is no need to reconfigure the three daemons.
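The mapping above is made in /etc/hosts as root; a minimal sketch, assuming a placeholder internal IP of 192.168.1.100 (substitute the address shown by ifconfig):

```
# /etc/hosts (edit as root)
# 192.168.1.100 is a placeholder -- use your own internal IP from ifconfig
192.168.1.100   hadoop001
```

With this line in place, every Hadoop config file can refer to hadoop001 instead of a hard-coded IP, which is exactly why an IP change only touches this one entry.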
2. Start the three HDFS processes with the machine name hadoop001
2.1 Start the NameNode as hadoop001:
[hadoop@hadoop001 hadoop]$ vi core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop001:9000</value>
</property>
</configuration>
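To double-check which hostname the NameNode will start with, you can pull the host straight out of the fs.defaultFS value; a minimal grep/sed sketch, run from the directory holding core-site.xml (this is just one extraction approach, not an official Hadoop command):

```shell
# Extract the hostname from the fs.defaultFS value in core-site.xml.
# Expects a value of the form hdfs://<host>:<port>.
grep -A1 'fs.defaultFS' core-site.xml \
  | grep '<value>' \
  | sed -E 's|.*hdfs://([^:]+):.*|\1|'
# → hadoop001
```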
The circled part of the figure shows that the parameter fs.defaultFS is configured with hadoop001 ----- so the NameNode starts as hadoop001.
2.2 Start the DataNode as hadoop001:
[hadoop@hadoop001 hadoop]$ vi workers
# In Hadoop 3.x this is the workers file (in 2.x it is named slaves).
# Change localhost to hadoop001.
2.3 Start the SecondaryNameNode as hadoop001:
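Instead of editing by hand, the localhost-to-hadoop001 change can be scripted; a minimal sketch (the -i.bak suffix just keeps a backup copy as a precaution):

```shell
# Replace the default localhost entry with the machine name in workers
# (Hadoop 3.x; for 2.x the file is named slaves instead).
sed -i.bak 's/^localhost$/hadoop001/' workers
cat workers
# → hadoop001
```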
The default configuration values can be looked up on the official site (press Ctrl+F, type secondary, and hit Enter to jump to them): https://hadoop.apache.org/docs/r3.2.2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
[hadoop@hadoop001 hadoop]$ vi hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop001:9868</value>
</property>
<property>
<name>dfs.namenode.secondary.https-address</name>
<value>hadoop001:9869</value>
</property>
</configuration>
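Once both files are edited, HDFS has to be restarted before the daemons pick up the hadoop001 name; a sketch of the check, assuming the $HADOOP_HOME/sbin scripts are on the PATH:

```
# Restart HDFS so the new hostname settings take effect.
stop-dfs.sh
start-dfs.sh
jps   # expect NameNode, DataNode, SecondaryNameNode (plus Jps)
```

If everything is wired up, the start-dfs.sh output should now name hadoop001 for the NameNode and SecondaryNameNode lines instead of localhost or an IP.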
3. pid files
3.1 Location: each time a process is started, the pid
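On the location: by default Hadoop writes each daemon's pid file to /tmp, named hadoop-<user>-<daemon>.pid; the directory is controlled by HADOOP_PID_DIR in hadoop-env.sh. A quick look, assuming the defaults:

```
# List the pid files written at daemon startup (default HADOOP_PID_DIR=/tmp).
ls /tmp/hadoop-*.pid
# e.g. /tmp/hadoop-hadoop-namenode.pid
```

Because /tmp may be cleaned up by the OS, production clusters usually point HADOOP_PID_DIR at a persistent directory.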