At the end of my post "Compiled Hadoop pseudo-distributed installation and startup", the startup parameters of each process were displayed as shown in the figure below.
Now I want NameNode, DataNode, and SecondaryNameNode to all start under the hostname. The concrete steps are as follows.
1. Add the following properties to the configuration file hdfs-site.xml
[hadoop@hadoop001 hadoop-2.8.3]$ vim /opt/software/hadoop-2.8.3/etc/hadoop/hdfs-site.xml
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>192.168.226.138:50090</value>
</property>
<property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>192.168.226.138:50091</value>
</property>
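Since the goal here is to see the hostname rather than the IP, the same two properties can also be written with the hostname directly. This is a variant sketch, assuming hadoop001 resolves to 192.168.226.138 (for example via an /etc/hosts entry):

```xml
<!-- variant: use the hostname instead of the IP address;
     assumes hadoop001 -> 192.168.226.138 is resolvable on this machine -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop001:50090</value>
</property>
<property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>hadoop001:50091</value>
</property>
```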
2. Edit the configuration file slaves
[hadoop@hadoop001 hadoop-2.8.3]$ vim /opt/software/hadoop-2.8.3/etc/hadoop/slaves
Change localhost in this file to your own hostname. My hostname is hadoop001, so I changed localhost to hadoop001.
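The edit above can also be done non-interactively with sed instead of vim. A minimal sketch, demonstrated on a scratch copy here; in practice point SLAVES at /opt/software/hadoop-2.8.3/etc/hadoop/slaves:

```shell
# scratch copy standing in for etc/hadoop/slaves
SLAVES=$(mktemp)
echo localhost > "$SLAVES"

# replace the whole-line entry "localhost" with the hostname (hadoop001 here)
sed -i 's/^localhost$/hadoop001/' "$SLAVES"
cat "$SLAVES"   # prints: hadoop001
```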
3. Then restart HDFS
[hadoop@hadoop001 hadoop-2.8.3]$ pwd
/opt/software/hadoop-2.8.3
[hadoop@hadoop001 hadoop-2.8.3]$ sbin/start-dfs.sh
Starting namenodes on [hadoop001]
hadoop001: namenode running as process 17411. Stop it first.
hadoop001: datanode running as process 17550. Stop it first.
Starting secondary namenodes [hadoop001]
hadoop001: secondarynamenode running as process 17722. Stop it first.
[hadoop@hadoop001 hadoop-2.8.3]$ jps
17411 NameNode
18250 Jps
17722 SecondaryNameNode
17550 DataNode
[hadoop@hadoop001 hadoop-2.8.3]$
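Note the "Stop it first" lines in the output: start-dfs.sh does not restart daemons that are already running, which is why jps reports the same PIDs as before. To make the daemons actually pick up the new configuration, stop them before starting again; a minimal sketch from the same directory:

```shell
sbin/stop-dfs.sh    # stop NameNode, DataNode and SecondaryNameNode
sbin/start-dfs.sh   # start them again, picking up the new configuration
jps                 # verify the three daemons are running (with new PIDs)
```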
At this point you will see that
NameNode, DataNode, and SecondaryNameNode are all started under the hostname.