Installing the JDK
Install JDK 1.8
First install lrzsz, a file-transfer utility for uploading files to the VM from the terminal:
yum install lrzsz
(1) Upload jdk-8u271-linux-i586.tar.gz to the virtual machine
[root@namenode ~]# cd /usr/local/
[root@namenode local]# mkdir java
[root@namenode local]# cd /usr/local/java/
(2) Extract jdk-8u271-linux-i586.tar.gz
[root@namenode java]# tar -zxvf jdk-8u271-linux-i586.tar.gz
[root@namenode java]# vim /etc/profile
Jump to the end of the file (press Shift+G in vim) and append the following two lines:
export JAVA_HOME=/usr/local/java/jdk1.8.0_271
export PATH=$JAVA_HOME/bin:$PATH
[root@namenode java]# source /etc/profile
(3) Verify the installation:
[root@namenode java]# java -version
[root@namenode java]# javac -version
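If the installation succeeded, both commands report the version; for example, java -version should print a first line like:
java version "1.8.0_271"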
If you see the error /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory, the cause is that the i586 JDK is a 32-bit build and the 64-bit OS lacks the 32-bit C library; install it with:
yum install glibc.i686
= JDK installation complete =
Creating Users
-
Create the user
[root@namenode ~]# useradd hadoop
-
Set the user's password
[root@namenode ~]# passwd hadoop
-
Configure a static IP
[root@namenode ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens32
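A minimal sketch of a static configuration for this file (the IP matches the namenode address used later in this guide; the gateway and DNS values are assumptions you must adapt to your own network):
TYPE=Ethernet
BOOTPROTO=static
NAME=ens32
DEVICE=ens32
ONBOOT=yes
IPADDR=192.168.37.101
NETMASK=255.255.255.0
GATEWAY=192.168.37.2
DNS1=192.168.37.2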
Reboot the VM so the network change takes effect
[root@namenode ~]# reboot
-
Set the hostname
[root@namenode ~]# hostnamectl set-hostname [your-hostname]
-
Edit the hosts file (every machine in the cluster needs these name-to-IP mappings)
[root@namenode ~]# vim /etc/hosts
127.0.0.1 localhost
::1 localhost
192.168.37.101 namenode
192.168.37.102 datanode1
192.168.37.103 datanode2
192.168.37.104 datanode3
-
Grant the hadoop user sudo privileges
[root@namenode ~]# vim /etc/sudoers
Below the existing line root ALL=(ALL) ALL, add:
hadoop ALL=(ALL) ALL
-
Configure passwordless SSH login
-
Log in as the hadoop user
[root@namenode ~]# su hadoop
[hadoop@namenode ~]$ ssh-keygen -t rsa
The generated key pair is stored in the ~/.ssh directory.
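If you prefer to skip the interactive prompts, the same key pair can be generated non-interactively with standard OpenSSH flags (empty passphrase, default file location):
[hadoop@namenode ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa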
-
From the namenode machine, copy the generated public key into the .ssh directory under the hadoop user's home directory on each machine you want to log in to:
[hadoop@namenode ~]$ scp ~/.ssh/id_rsa.pub hadoop@namenode:/home/hadoop/.ssh/authorized_keys
[hadoop@namenode ~]$ scp ~/.ssh/id_rsa.pub hadoop@datanode1:/home/hadoop/.ssh/authorized_keys
[hadoop@namenode ~]$ scp ~/.ssh/id_rsa.pub hadoop@datanode2:/home/hadoop/.ssh/authorized_keys
[hadoop@namenode ~]$ scp ~/.ssh/id_rsa.pub hadoop@datanode3:/home/hadoop/.ssh/authorized_keys
-
Copy every VM's public key into every VM's authorized_keys file
[hadoop@namenode ~]$ vim ~/.ssh/id_rsa.pub
Copy each public key into the authorized_keys files so that, in the end, every VM's authorized_keys file contains the public keys of all four VMs.
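Doing this by hand on four machines is error-prone; here is a sketch of one way to automate it from namenode, assuming every host has already run ssh-keygen (you will be asked for passwords until the keys are in place):
for host in namenode datanode1 datanode2 datanode3; do
    ssh hadoop@$host 'cat ~/.ssh/id_rsa.pub'
done > /tmp/all_keys
for host in namenode datanode1 datanode2 datanode3; do
    scp /tmp/all_keys hadoop@$host:/home/hadoop/.ssh/authorized_keys
    ssh hadoop@$host 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
done
Note that sshd refuses key authentication if ~/.ssh or authorized_keys is group- or world-writable, hence the chmod calls.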
-
From the namenode machine, check each host in turn to confirm that you can log in without a password. Use exit to log out, then try the next one:
ssh localhost
ssh hadoop@namenode
ssh hadoop@datanode1
ssh hadoop@datanode2
ssh hadoop@datanode3
-
Installing Hadoop
Extract the archive
[root@namenode hadoop]# tar -zxvf hadoop-2.10.1.tar.gz
Enter the installation directory
[root@namenode ~]# cd /usr/local/hadoop
Grant ownership to the hadoop user and create a version-independent symlink (the rest of this guide refers to the /usr/local/hadoop/hadoop path)
[root@namenode hadoop]# chown -R hadoop:hadoop ./hadoop-2.10.1
[root@namenode hadoop]# ln -s /usr/local/hadoop/hadoop-2.10.1 /usr/local/hadoop/hadoop
Check the Hadoop version
[root@namenode ~]# /usr/local/hadoop/hadoop/bin/hadoop version
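The first line of the output should name the release:
Hadoop 2.10.1
(the remaining lines show build metadata such as the source revision and compile time).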
Configure the environment file on every machine
[root@namenode ~]# vim /etc/profile
Add one line (both bin and sbin live under the symlinked /usr/local/hadoop/hadoop directory):
export PATH=$PATH:/usr/local/hadoop/hadoop/bin:/usr/local/hadoop/hadoop/sbin
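Reload the profile so the new PATH takes effect in the current shell:
[root@namenode ~]# source /etc/profile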
Hadoop configuration files
-
Log in to each virtual machine as the hadoop user and create the following directories:
[hadoop@namenode ~]$ mkdir /home/hadoop/name
[hadoop@namenode ~]$ mkdir /home/hadoop/data
[hadoop@namenode ~]$ mkdir /home/hadoop/temp
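Equivalently, all three directories can be created in one command with bash brace expansion:
[hadoop@namenode ~]$ mkdir -p /home/hadoop/{name,data,temp}
-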
On the namenode host, enter the Hadoop configuration directory
[root@namenode ~]# cd /usr/local/hadoop/hadoop/etc/hadoop/
-
Edit hadoop-env.sh and add the line:
export JAVA_HOME=/usr/local/java/jdk1.8.0_271
-
Edit yarn-env.sh and add the line:
export JAVA_HOME=/usr/local/java/jdk1.8.0_271
-
Edit the slaves file
The slaves file lists all DataNode hosts, one per line; if you have more nodes, add them here.
datanode1
datanode2
datanode3
-
Configure the three files core-site.xml, hdfs-site.xml, and yarn-site.xml as follows.
core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/temp</value>
        <description>A base for other temporary directories.</description>
    </property>
</configuration>
hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>namenode:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>namenode:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>namenode:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>namenode:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>namenode:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>namenode:8088</value>
    </property>
</configuration>
-
Copy the Hadoop directory to the other DataNode machines so that every node has the same layout
[root@namenode ~]# scp -r /usr/local/hadoop root@datanode1:/usr/local/
[root@namenode ~]# scp -r /usr/local/hadoop root@datanode2:/usr/local/
[root@namenode ~]# scp -r /usr/local/hadoop root@datanode3:/usr/local/
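Equivalently, as a loop:
for host in datanode1 datanode2 datanode3; do
    scp -r /usr/local/hadoop root@$host:/usr/local/
done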
On each DataNode, change the owner of the copied files:
[root@datanode1 ~]# cd /usr/local
[root@datanode1 local]# chown -R hadoop:hadoop ./hadoop
[root@datanode2 ~]# cd /usr/local
[root@datanode2 local]# chown -R hadoop:hadoop ./hadoop
[root@datanode3 ~]# cd /usr/local
[root@datanode3 local]# chown -R hadoop:hadoop ./hadoop
-
Initialize Hadoop (format the NameNode)
[hadoop@namenode ~]$ cd /usr/local/hadoop/hadoop
[hadoop@namenode hadoop]$ bin/hdfs namenode -format
(Run this as the hadoop user so the files created under /home/hadoop/name stay owned by hadoop.)
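On success, the log output contains a line similar to:
INFO common.Storage: Storage directory /home/hadoop/name has been successfully formatted.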
Starting Hadoop
Start command
[hadoop@namenode ~]$ /usr/local/hadoop/hadoop/sbin/start-all.sh
Because the slaves file lists the DataNodes, running start-all.sh on the namenode also starts the DataNode and NodeManager daemons on the other machines over SSH; it does not need to be run on each DataNode separately.
Use jps on the master and on each node to confirm the daemons started.
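With this configuration, jps on namenode should list NameNode, SecondaryNameNode, and ResourceManager, while jps on each DataNode should list DataNode and NodeManager (plus Jps itself; the PIDs will differ).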
Monitoring Hadoop
-
Command to check the cluster status
[hadoop@namenode ~]$ /usr/local/hadoop/hadoop/bin/hdfs dfsadmin -report
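The report shows capacity and usage for the cluster and for each node; with all three DataNodes up, it should contain a line similar to:
Live datanodes (3):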
-
Web access
-
Disable the firewall (firewalld blocks the web UI and the ports the nodes use to talk to each other)
[hadoop@namenode ~]$ sudo systemctl stop firewalld.service
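To keep it from coming back after a reboot, also disable it, and repeat both commands on every DataNode:
[hadoop@namenode ~]$ sudo systemctl disable firewalld.service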
-
Open the NameNode web UI in a browser (namenode's IP per the hosts file above):
http://192.168.37.101:50070/
-