Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
Configure the IP-to-hostname mapping
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.199.100 standalone
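To confirm a mapping actually resolves, you can query the hosts database with getent. It is shown against localhost here so the command works anywhere; on the VM, `getent hosts standalone` should print 192.168.199.100.

```shell
# Look up a name in /etc/hosts (and other NSS sources).
# On the VM, replace localhost with standalone to verify the new entry.
getent hosts localhost
```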
Configure passwordless SSH
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
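The chmod matters: with the default StrictModes, sshd rejects an authorized_keys file that is accessible to anyone other than its owner. A quick sketch of checking the mode, done on a scratch file so it never touches your real keys:

```shell
# Scratch-file sketch: verify a 0600 mode the same way you would for
# ~/.ssh/authorized_keys, without modifying the real file.
f=$(mktemp)
chmod 0600 "$f"
stat -c '%a' "$f"   # prints 600 on Linux
rm -f "$f"
```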
After rebooting the VM, check whether passwordless login works:
ssh localhost
If you see prompts like the following, it failed (you are still being asked for a password):
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is SHA256:MJxZUIDNbbnlfxCU+l2usvsIsbc6/NTJ06j/TO4g8G0.
ECDSA key fingerprint is MD5:d1:8f:94:dd:80:e2:cf:6b:a7:45:74:e3:6b:2f:f2:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
root@localhost's password:
Last login: Sun Jun 7 12:39:03 2020 from 192.168.199.206
On success you get only this one line:
Last login: Sun Jun 7 12:39:03 2020 from 192.168.199.206
Extract Hadoop
tar -zxvf ./hadoop-3.2.1.tar.gz
mv ./hadoop-3.2.1 /usr/local/bigdata
Add Hadoop to the environment variables
vim /etc/profile
#hadoop
export HADOOP_HOME=/usr/local/bigdata/hadoop-3.2.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
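The export lines only append directories to PATH; nothing is validated until you run a command. A quick way to confirm both new entries landed (paths taken from this guide):

```shell
# Append the Hadoop directories exactly as /etc/profile does,
# then list the tail of PATH to confirm both entries are present.
HADOOP_HOME=/usr/local/bigdata/hadoop-3.2.1
PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
echo "$PATH" | tr ':' '\n' | tail -n 2   # prints the bin and sbin paths
```

Note that `source /etc/profile` only affects the current shell; new login shells pick the change up automatically.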
Edit the configuration files
cd /usr/local/bigdata/hadoop-3.2.1/etc/hadoop/
vim hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8
export HADOOP_HOME=/usr/local/bigdata/hadoop-3.2.1
vim core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
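Optionally, many guides also pin hadoop.tmp.dir inside the same `<configuration>` block: its default lives under /tmp, which can be cleared on reboot and would invalidate the formatted NameNode. The directory path below is an assumption; adjust it to your layout:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <!-- Illustrative path; any stable directory outside /tmp works -->
  <value>/usr/local/bigdata/hadoop-3.2.1/tmp</value>
</property>
```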
vim hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<!-- Hadoop 3.x changed the NameNode web UI port from 50070 to 9870 -->
<property>
<name>dfs.namenode.http-address</name>
<value>localhost:9870</value>
</property>
</configuration>
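Similarly, dfs.namenode.name.dir and dfs.datanode.data.dir can be pointed at stable directories so NameNode metadata and block data survive /tmp cleanup. The paths here are illustrative assumptions, not part of the original setup:

```xml
<property>
  <name>dfs.namenode.name.dir</name>
  <!-- Illustrative path for NameNode metadata -->
  <value>file:///usr/local/bigdata/hadoop-3.2.1/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- Illustrative path for DataNode block storage -->
  <value>file:///usr/local/bigdata/hadoop-3.2.1/dfs/data</value>
</property>
```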
vim mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<final>true</final>
<description>The runtime framework for executing MapReduce jobs</description>
</property>
</configuration>
vim yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<final>true</final>
</property>
</configuration>
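If YARN later shows the same loopback-binding problem described below for port 9000, yarn.resourcemanager.hostname controls where the ResourceManager binds and advertises itself. This is an optional addition, using the hostname mapped in /etc/hosts earlier:

```xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>standalone</value>
</property>
```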
Set the users in the startup scripts
Because I am running everything as root, edit sbin/start-dfs.sh and sbin/stop-dfs.sh and add the following at the top of each file:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=root
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
Edit sbin/start-yarn.sh and sbin/stop-yarn.sh and add the following at the top of each file:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=root
YARN_NODEMANAGER_USER=root
Format the NameNode (first start only; reformatting later destroys existing HDFS metadata)
hdfs namenode -format
Start all daemons
start-all.sh
Verify
jps
You should see output similar to this:
4132 NodeManager
3494 DataNode
3998 ResourceManager
4494 Jps
3359 NameNode
3727 SecondaryNameNode
Access the web UI. Hadoop 3.x changed the port from 50070 to 9870; we follow the official default:
http://192.168.199.100:9870/explorer.html#/
I created a directory with an hdfs command and everything looked fine, but port 9000 could not be reached from outside the VM. After a lot of searching without a fix, I went back over the installation steps and checked the listening ports with netstat -lntp, and suspected the localhost setting was the culprit: port 9000 was initially listening on 127.0.0.1:9000, so it only accepted connections from the VM itself. After the configuration changes below, it listened on 192.168.199.100:9000.
vim core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://standalone:9000</value>
</property>
</configuration>
vim hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<!-- Hadoop 3.x changed the NameNode web UI port from 50070 to 9870 -->
<property>
<name>dfs.namenode.http-address</name>
<value>standalone:9870</value>
</property>
</configuration>
Edit workers
vim workers
Change localhost to standalone.
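The localhost-to-standalone edit in workers can also be done non-interactively with sed. Sketched here on a scratch file so it is safe to run anywhere; the real file is etc/hadoop/workers.

```shell
# Demonstrate the edit on a scratch file standing in for etc/hadoop/workers.
f=$(mktemp)
printf 'localhost\n' > "$f"
sed -i 's/^localhost$/standalone/' "$f"
cat "$f"    # prints: standalone
rm -f "$f"
```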