1. Install a new virtual machine
You can follow my earlier article to set up the virtual machine; choose its memory, CPU count, and disk size according to your own hardware.
Here is the link to my earlier VM installation article:
虚拟机centOS安装_难以言喻wyy的博客-CSDN博客
2. Enable remote access on the new virtual machine
2.1 Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
2.2 Edit the network configuration file
Change to the network-scripts directory:
cd /etc/sysconfig/network-scripts/
Edit the interface configuration:
vi ifcfg-ens33
Add the following lines; replace IPADDR and the entries below it with addresses that match your own network:
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.61.143
GATEWAY=192.168.61.2
NETMASK=255.255.255.0
DNS1=8.8.8.8
Restart the network service:
systemctl restart network
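The settings above can also be sketched as a small generator, useful for checking what the file should contain before editing it (the `gen_ifcfg` helper is just this article's example addresses wrapped in a function, not a standard tool):

```shell
# Hypothetical helper: print a static-IP ifcfg fragment for the values
# used in this article. Adjust the addresses for your own network.
gen_ifcfg() {
    local ip="$1" gw="$2"
    cat <<EOF
BOOTPROTO=static
ONBOOT=yes
IPADDR=${ip}
GATEWAY=${gw}
NETMASK=255.255.255.0
DNS1=8.8.8.8
EOF
}

gen_ifcfg 192.168.61.143 192.168.61.2
```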
3. Connect with Xshell and finish the configuration
3.1 Install vim
yum install -y vim
3.2 Passwordless login: generate an SSH key pair
[root@localhost ~]# ssh-keygen -t rsa -P ""
[root@localhost .ssh]# pwd
/root/.ssh
[root@localhost .ssh]# cat ./id_rsa.pub >> authorized_keys
3.3 Copy the public key to enable passwordless remote login
[root@localhost .ssh]# ssh-copy-id -i ./id_rsa.pub -p22 root@192.168.61.146
3.4 Log in to the remote host
[root@localhost .ssh]# ssh -p22 root@192.168.61.146
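Note that the `cat >> authorized_keys` step above appends blindly, so running it twice leaves duplicate keys. A hedged sketch of an idempotent variant (the `append_key` helper is hypothetical, not part of OpenSSH):

```shell
# Append a public key to an authorized_keys file only if it is not
# already there, and keep the 600 permissions sshd insists on.
append_key() {
    local key="$1" auth="$2"
    touch "$auth"
    chmod 600 "$auth"
    # -x matches the whole line, -F treats the key as a fixed string
    grep -qxF "$key" "$auth" || echo "$key" >> "$auth"
}
```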
3.5 Install ntpdate to synchronize the time
[root@localhost .ssh]# yum install -y ntpdate
3.6 Synchronize the time on a schedule
[root@localhost .ssh]# crontab -e
*/5 * * * * /usr/sbin/ntpdate -u time.windows.com
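The crontab entry has five schedule fields (minute, hour, day of month, month, weekday) followed by the command; `*/5` in the first field means "every 5 minutes". A small sketch that labels the fields (the `cron_fields` helper is made up for illustration):

```shell
# Split a crontab entry into its five labelled schedule fields.
cron_fields() {
    echo "$1" | awk '{print "minute=" $1; print "hour=" $2; print "day=" $3;
                      print "month=" $4; print "weekday=" $5}'
}

cron_fields '*/5 * * * * /usr/sbin/ntpdate -u time.windows.com'
```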
3.7 Manage the cron service
[root@localhost .ssh]# service crond start/stop/restart/reload/status
3.8 Change the hostname
[root@localhost .ssh]# vim /etc/hostname
[root@localhost .ssh]# hostnamectl set-hostname gree143
3.9 Ping the server
[root@localhost .ssh]# ping 192.168.61.143
[root@localhost .ssh]# ping gree143
3.10 Configure the hosts file
[root@localhost .ssh]# vim /etc/hosts
192.168.61.143 gree143
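A hedged sketch of the same edit as an idempotent helper (hypothetical; taking the file as a parameter lets you try it on a scratch copy before touching /etc/hosts):

```shell
# Append "ip name" to a hosts-style file unless the name is already mapped.
add_host() {
    local file="$1" ip="$2" name="$3"
    touch "$file"
    # -w matches the hostname as a whole word, so "gree143" does not
    # accidentally match an existing "gree1430" entry
    grep -qw "$name" "$file" || echo "$ip $name" >> "$file"
}
```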
3.11 Extract the archives into the target directory
[root@localhost install]# tar -zxvf ./jdk-8u321-linux-x64.tar.gz -C ../soft/
[root@localhost install]# tar -zxvf ./hadoop-3.1.3.tar.gz -C ../soft/
3.12 Rename the directories
[root@localhost install]# cd /opt/soft
[root@localhost soft]# mv jdk1.8.0_321/ jdk180
[root@localhost soft]# mv hadoop-3.1.3/ hadoop313
3.13 Configure the JDK environment variables
[root@localhost soft]# vim /etc/profile
# JAVA_HOME
export JAVA_HOME=/opt/soft/jdk180
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
3.14 Reload the profile
[root@localhost soft]# source /etc/profile
Then run javac to confirm the compiler is found. If java -version prints output like the following, the configuration succeeded:
[root@localhost soft]# java -version
java version "1.8.0_321"
Java(TM) SE Runtime Environment (build 1.8.0_321-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.321-b07, mixed mode)
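Beyond `java -version`, a quick way to confirm the profile edit actually put `$JAVA_HOME/bin` on the PATH is a pure-shell check (a sketch; the `path_has` helper is made up and needs no Java installed):

```shell
# Return success iff the given directory is one of the PATH components.
path_has() {
    case ":$PATH:" in
        *":$1:"*) return 0 ;;
        *)        return 1 ;;
    esac
}

path_has /opt/soft/jdk180/bin && echo "JAVA_HOME/bin is on PATH" \
                              || echo "JAVA_HOME/bin is NOT on PATH"
```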
4. Configure the Hadoop environment files and install Hadoop
4.1 Change to the Hadoop configuration directory
cd /opt/soft/hadoop313/etc/hadoop
4.2 Edit the configuration files
[root@gree2 hadoop]# vim ./core-site.xml
Note: the full contents of each file to be configured are pasted below.
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://gree143:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/soft/hadoop313/data</value>
<description>Local Hadoop temporary directory on the NameNode</description>
</property>
<property>
<name>hadoop.http.staticuser.user</name>
<value>root</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
<description>Read/write buffer size: 128 KB</description>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
</configuration>
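To spot-check a value after editing, a rough grep/sed extractor can help (a sketch, not a real XML parser; it assumes the one-property-per-line `<name>`/`<value>` layout shown above, and `xmllint` would be the robust choice):

```shell
# Print the <value> line that follows a given <name> in a *-site.xml file.
get_prop() {
    local file="$1" prop="$2"
    grep -A1 "<name>${prop}</name>" "$file" |
        sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# e.g. get_prop ./core-site.xml fs.defaultFS
# prints hdfs://gree143:9000 for the file above
```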
[root@gree2 hadoop]# vim ./hadoop-env.sh
In vim, :set nu displays line numbers; add the following line:
export JAVA_HOME=/opt/soft/jdk180
[root@gree2 hadoop]# vim ./hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Number of replicas kept for each block</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/soft/hadoop313/data/dfs/name</value>
<description>Directory on the NameNode that stores the HDFS namespace metadata</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/soft/hadoop313/data/dfs/data</value>
<description>Physical storage directory for data blocks on the DataNode</description>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
<description>Disable permission checking</description>
</property>
</configuration>
[root@gree2 hadoop]# vim ./mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>Job execution framework: local, classic, or yarn</description>
<final>true</final>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>/opt/soft/hadoop313/etc/hadoop:/opt/soft/hadoop313/share/hadoop/common/lib/*:/opt/soft/hadoop313/share/hadoop/common/*:/opt/soft/hadoop313/share/hadoop/hdfs/*:/opt/soft/hadoop313/share/hadoop/hdfs/lib/*:/opt/soft/hadoop313/share/hadoop/mapreduce/*:/opt/soft/hadoop313/share/hadoop/mapreduce/lib/*:/opt/soft/hadoop313/share/hadoop/yarn/*:/opt/soft/hadoop313/share/hadoop/yarn/lib/*</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>gree143:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>gree143:19888</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>1024</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>1024</value>
</property>
</configuration>
[root@gree2 hadoop]# vim ./yarn-site.xml
<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.connect.retry-interval.ms</name>
<value>20000</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<property>
<name>yarn.nodemanager.localizer.address</name>
<value>gree143:8040</value>
</property>
<property>
<name>yarn.nodemanager.address</name>
<value>gree143:8050</value>
</property>
<property>
<name>yarn.nodemanager.webapp.address</name>
<value>gree143:8042</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/opt/soft/hadoop313/yarndata/yarn</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/opt/soft/hadoop313/yarndata/log</value>
</property>
</configuration>
4.3 Configure the Hadoop environment variables:
[root@localhost soft]# vim /etc/profile
Place this below the JAVA_HOME configuration:
# HADOOP_HOME
export HADOOP_HOME=/opt/soft/hadoop313
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
Reload the profile:
source /etc/profile
With the configuration above complete, you can format the NameNode and start Hadoop.
[root@gree2 dfs]# echo $HADOOP_HOME
[root@gree2 dfs]# hdfs namenode -format
Start Hadoop:
[root@gree2 dfs]# start-all.sh
jps shows how many daemons are running; with this configuration there should be six entries: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, and Jps itself.
If everything looks right, stop the services:
[root@gree2 dfs]# stop-all.sh
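The jps check above can be sketched as a script (the daemon list matches this single-node configuration; the `count_daemons` helper is made up for illustration):

```shell
# Count how many of the expected Hadoop daemons appear in a jps listing.
expected="NameNode DataNode SecondaryNameNode ResourceManager NodeManager"

count_daemons() {
    local listing="$1" n=0 d
    for d in $expected; do
        # -w: whole-word match, so "NameNode" does not match
        # inside "SecondaryNameNode"
        if echo "$listing" | grep -qw "$d"; then
            n=$((n+1))
        fi
    done
    echo "$n"
}

# On the running cluster: count_daemons "$(jps)" should print 5,
# with jps itself making the sixth line of the listing.
```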