Note: in Hadoop 3.x the HDFS NameNode web UI port is 9870, not 50070 as in Hadoop 2.x.
1. Passwordless SSH login
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
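The permission bits matter here: sshd silently ignores keys in an authorized_keys file that is group- or world-writable, and the symptom is a password prompt despite the key being in place. A small sketch to verify the mode (the check_perms helper is hypothetical, and it assumes GNU coreutils stat as found on Linux):

```shell
# Hypothetical helper: warn if a file's permission bits are not 0600.
# Assumes GNU coreutils stat; on BSD/macOS use `stat -f '%Lp'` instead.
check_perms() {
  local f="$1"
  local mode
  mode=$(stat -c '%a' "$f")
  if [ "$mode" = "600" ]; then
    echo "ok: $f is 0600"
  else
    echo "warn: $f has mode $mode, expected 600"
  fi
}

# Only check if the file exists (it won't until the steps above are done).
if [ -f ~/.ssh/authorized_keys ]; then
  check_perms ~/.ssh/authorized_keys
fi
```

Passwordless login can then be confirmed with `ssh -o BatchMode=yes localhost true`, which fails instead of prompting if the key is not being picked up.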
2. Install the JDK.
3. Edit hadoop-env.sh and set JAVA_HOME to the JDK installation path.
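For step 3, the lines below sketch what goes into etc/hadoop/hadoop-env.sh; the JDK path shown is an assumption and must be replaced with the real one on your machine:

```shell
# Example addition to etc/hadoop/hadoop-env.sh.
# The hard-coded path is an assumed location; adjust for your JDK install.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

# Alternative, derived from the java binary on PATH (works when `java`
# is a real binary or a resolvable symlink):
# export JAVA_HOME=$(dirname "$(dirname "$(readlink -f "$(which java)")")")
```

Hadoop 3.x refuses to start if JAVA_HOME is unset in its environment, so setting it explicitly in hadoop-env.sh is the reliable option.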
4. Edit core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://singleton:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/hadoop/hadooptmp/local</value>
  </property>
</configuration>
A second example, from a different machine (hostname hadoop110):
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop110:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/lijx/app/hadoop-2.7.2/data/</value>
  </property>
</configuration>
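A stray character in any *-site.xml makes every Hadoop command fail at startup, so it is worth checking well-formedness before going further. A minimal sketch, which writes a copy under /tmp (a scratch path for illustration only) and parses it with Python's standard-library parser, since xmllint may not be installed:

```shell
# Write an example core-site.xml and verify it is well-formed XML.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://singleton:9000</value>
  </property>
</configuration>
EOF

python3 -c "import xml.dom.minidom; xml.dom.minidom.parse('/tmp/core-site.xml'); print('well-formed')"
```

Once the daemons are up, the effective value can be read back with `hdfs getconf -confKey fs.defaultFS`.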
5. Edit hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <!-- 50090 was the Hadoop 2.x default; in Hadoop 3.x the default secondary NameNode web port is 9868 -->
    <name>dfs.namenode.secondary.http-address</name>
    <value>singleton:50090</value>
  </property>
</configuration>
6. Edit yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop102</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
7. Format the NameNode once, on first startup only (hdfs namenode -format), then start the daemons:
$ start-dfs.sh
$ start-yarn.sh
Check with jps that NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager are all running, then test with the wordcount and pi examples, e.g.:
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 10 100
The hosts on which DataNodes (and NodeManagers) are started are listed in the workers file ($HADOOP_HOME/etc/hadoop/workers; this file was named slaves in Hadoop 2.x).
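For a single-node setup the workers file contains just localhost. A sketch (CONF_DIR stands in for the real $HADOOP_HOME/etc/hadoop directory and defaults to a scratch path for illustration):

```shell
# Write a one-host workers file. CONF_DIR is a stand-in for the real
# $HADOOP_HOME/etc/hadoop directory.
CONF_DIR="${CONF_DIR:-/tmp/hadoop-conf-example}"
mkdir -p "$CONF_DIR"
printf '%s\n' localhost > "$CONF_DIR/workers"
cat "$CONF_DIR/workers"   # prints: localhost
```

In a multi-node cluster the same file lists one worker hostname per line, and start-dfs.sh/start-yarn.sh ssh to each of them, which is why the passwordless login from step 1 is required.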
Running the start scripts as root aborts with an error message ending in "but there is no HDFS_NAMENODE_USER defined. Aborting operation." To fix this:
1. For start-dfs.sh and stop-dfs.sh, add the following parameters at the top of the file:
#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs   # deprecated in Hadoop 3.x; HDFS_DATANODE_SECURE_USER is the replacement
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
2. For start-yarn.sh and stop-yarn.sh, add the following parameters at the top of the file:
#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
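Instead of patching the start/stop scripts (edits there are lost on upgrade), the same variables can be exported once in etc/hadoop/hadoop-env.sh, which the scripts source at startup; a sketch:

```shell
# Equivalent settings placed in etc/hadoop/hadoop-env.sh instead of
# editing the start/stop scripts directly.
# Running the daemons as root is only advisable on a throwaway test box.
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```

On a shared machine, point each variable at a dedicated service account (e.g. hdfs, yarn) rather than root.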
3. Test: with the single-node setup running, upload a file using a 1 MB block size:
$ hdfs dfs -D dfs.blocksize=1048576 -put test.txt /user/root
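With dfs.blocksize=1048576 the file is split into ceil(size / 1 MB) blocks, which `hdfs fsck /user/root/test.txt -files -blocks` will list. The ceiling division can be sketched in plain shell arithmetic (blocks_for is a hypothetical helper, not part of Hadoop):

```shell
# Number of HDFS blocks for a file of `size` bytes split into `bs`-byte
# blocks: ceiling division via integer arithmetic.
blocks_for() {
  local size=$1 bs=$2
  echo $(( (size + bs - 1) / bs ))
}

blocks_for 3145728 1048576   # 3 MB file, 1 MB blocks -> prints 3
blocks_for 3145729 1048576   # one byte more -> prints 4
```

Note that dfs.namenode.fs-limits.min-block-size defaults to 1048576, so 1 MB is the smallest block size the NameNode accepts without lowering that limit.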