-------------------------- The virtual machines must be installed by the user beforehand; the configuration steps follow -----------------------------------
I. Linux System Configuration
1. Configure clock synchronization (as root)
1.1 Configure automatic clock synchronization (master + slave)
[root@master hsn]$ crontab -e
crontab -e opens the entry in a vi-style editor: press i to enter insert mode, type (or paste) the line below exactly as shown (the asterisks are separated by spaces), then press Esc and type :wq to save and exit.
0 1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
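The five asterisk-separated fields in the entry above are the cron schedule: minute, hour, day of month, month, and weekday. A purely illustrative breakdown, safe to run anywhere:

```shell
# Split the crontab entry into its schedule fields; "0 1 * * *" fires at 01:00 every day.
entry='0 1 * * * /usr/sbin/ntpdate cn.pool.ntp.org'
echo "$entry" | awk '{print "minute="$1, "hour="$2, "day="$3, "month="$4, "weekday="$5}'
# prints: minute=0 hour=1 day=* month=* weekday=*
```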
1.2 Synchronize the time manually (master + slave)
[root@master hsn]$ /usr/sbin/ntpdate cn.pool.ntp.org
2. Configure the hostname (master + slave, as root)
2.1 Modify the hostname
[root@master hsn]$ vi /etc/hostname
Set the file content to "master", then save and exit.
2.2 Reboot the virtual machine
[root@master hsn]$ reboot
After the reboot:
[hsn@master ~]$hostname
If it prints master, the hostname is configured. Set up slave the same way, using slave as the name.
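On CentOS 7 the same result can be had in one command with hostnamectl, which rewrites /etc/hostname for you; a sketch (the temp-file lines below only demonstrate what gets written, so the demo is safe to run anywhere):

```shell
# One-command alternative (run as root on each node, with that node's own name):
#   hostnamectl set-hostname master    # on master
#   hostnamectl set-hostname slave     # on slave
# hostnamectl ultimately writes the name into /etc/hostname, shown here on a temp file:
echo "master" > /tmp/hostname.demo
cat /tmp/hostname.demo
# prints: master
```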
3. Configure the network (master + slave, as a regular user), using master as the example
3.1 Check the network status by typing ifconfig; you should see something like:
[hsn@master ~]$ ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:0c:29:f3:77:03 txqueuelen 1000 (Ethernet)
RX packets 75 bytes 4500 (4.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 232 bytes 18520 (18.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 232 bytes 18520 (18.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:f6:9a:95 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Here ens33 is the interface whose configuration file we need to edit.
3.2 Locate and open the ifcfg-ens33 file
[hsn@master ~]$ cd /etc/sysconfig/network-scripts/
[hsn@master network-scripts]$ ls
ifcfg-ens33 ifdown-isdn ifup ifup-plip ifup-tunnel
ifcfg-lo ifdown-post ifup-aliases ifup-plusb ifup-wireless
ifdown ifdown-ppp ifup-bnep ifup-post init.ipv6-global
ifdown-bnep ifdown-routes ifup-eth ifup-ppp network-functions
ifdown-eth ifdown-sit ifup-ib ifup-routes network-functions-ipv6
ifdown-ib ifdown-Team ifup-ippp ifup-sit
ifdown-ippp ifdown-TeamPort ifup-ipv6 ifup-Team
ifdown-ipv6 ifdown-tunnel ifup-isdn ifup-TeamPort
[hsn@master network-scripts]$
The first file listed, ifcfg-ens33, is the one to modify; open it as root:
[hsn@master network-scripts]$ su
Password:
[root@master network-scripts]# vi ifcfg-ens33
-------File contents-------
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=2ff250dd-35c2-49bf-85f4-76eba1681eac
DEVICE=ens33
ONBOOT=no
Change the following two lines:
BOOTPROTO=static
ONBOOT=yes
-------Add these lines (substitute your own addresses; GATEWAY must lie in the same subnet as IPADDR, and on a VMware NAT network it is typically the .2 address)-------
IPADDR=192.168.220.138
NETMASK=255.255.255.0
GATEWAY=192.168.220.2
DNS1=218.85.157.4
------Save and exit, then restart the network service------
[root@master network-scripts]# service network restart
Restarting network (via systemctl): [ OK ] // shown as a green "OK"
After the restart, run ifconfig again; the ens33 entry should now show:
inet 192.168.220.138 netmask 255.255.255.0 broadcast 192.168.220.255
Ping each virtual machine from the other; if both pings succeed, the network is configured.
4. Turn off the firewall (master + slave, as a regular user), using master as the example
4.1 Check the firewall status
[hsn@master ~]$ service firewalld status
4.2 Stop the firewall and disable it at boot
----------Stop the firewall (you will be prompted for the password once)-------
[hsn@master ~]$ service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
---------Disable the firewall at boot (you will be prompted for the password twice)---------
[hsn@master ~]$ systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
5. Configure the hosts list (master + slave, as root)
5.1 The commands are as follows:
[hsn@master ~]$ su
Password:
[root@master hsn]# vi /etc/hosts
------Add these lines to the file------
192.168.220.138 master
192.168.220.139 slave
Note:
Here the master node maps to 192.168.220.138 and slave to 192.168.220.139; when doing this yourself, replace them with the actual IP addresses of your master and slave.
To verify the configuration, run ping master or ping slave.
----Output like the following means it worked----
[root@master hsn]# ping slave
PING slave (192.168.220.139) 56(84) bytes of data.
64 bytes from slave (192.168.220.139): icmp_seq=1 ttl=64 time=0.439 ms
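The hosts file simply maps plain "IP hostname" pairs; a small sketch against a temp file (the addresses are the example values above, substitute your own):

```shell
# Build a demo hosts fragment and list the name-to-address mappings it defines.
cat > /tmp/hosts.demo <<'EOF'
192.168.220.138 master
192.168.220.139 slave
EOF
awk '{print $2, "->", $1}' /tmp/hosts.demo
# prints:
#   master -> 192.168.220.138
#   slave -> 192.168.220.139
```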
6. Install the JDK (master + slave, as root)
6.1 Copy the JDK archive into the virtual machine
SSH Secure Shell Client 3.2.9 (or any similar SFTP tool) is used to transfer files between the Windows host and the Linux guests.
[hsn@master ~]$ ls ---list files---
Public Templates Videos Pictures Documents Downloads Music Desktop
[hsn@master ~]$ ls ---list again after the transfer---
jdk-8u101-linux-x64.tar.gz Public Templates Videos Pictures Documents Downloads Music Desktop
6.2 Create the new directory /usr/java and move the archive into it
[root@master hsn]# mkdir /usr/java
[root@master hsn]# mv jdk-8u101-linux-x64.tar.gz /usr/java
[root@master hsn]# cd /usr/java
[root@master java]# ls
jdk-8u101-linux-x64.tar.gz
6.3 Extract the JDK archive
[root@master java]# tar -xvf /usr/java/jdk-8u101-linux-x64.tar.gz
6.4 Edit the environment variables
[root@master java]# vi /root/.bash_profile
---Append the following to the file---
export JAVA_HOME=/usr/java/jdk1.8.0_101
export HADOOP_HOME=/home/hadoop/hadoop-2.6.2
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH:$HOME/bin
6.5 Apply the changes
[root@master java]# source /root/.bash_profile
6.6 Test the configuration
[root@master java]# java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
7. Configure passwordless SSH login (master + slave, as root)
7.1 Edit the ssh configuration file
[root@master ~]# vi /etc/ssh/sshd_config
-----Uncomment (remove the leading #) the lines below. Older guides also list a third line, RSAAuthentication yes, but that option was removed from recent OpenSSH releases, so it is normal not to find it.-----
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
After saving, run systemctl restart sshd so the change takes effect.
7.2 Generate a key pair on master and on slave (press Enter at each prompt to accept the defaults)
[root@master ~]# ssh-keygen -t rsa
7.3 Merge the public keys generated on each node into the authorized_keys file on the master node, then send that file to the other nodes with scp, so all nodes can ssh to one another without a password. The commands are:
----Merge the public key files----
[root@master ~]# cd .ssh
[root@master .ssh]# cat id_rsa.pub >>authorized_keys
----Append slave's public key (cat runs on slave; its output is appended locally)----
[root@master .ssh]# ssh root@slave cat ~/.ssh/id_rsa.pub >> authorized_keys
The authenticity of host 'slave (192.168.220.139)' can't be established.
ECDSA key fingerprint is SHA256:qKWUbUl+QsOAI+uWyvT74ahhoQD0vQR77kSX856DMDs.
ECDSA key fingerprint is MD5:2a:35:d5:9e:e4:0f:e7:85:a5:6c:4d:2f:c8:91:72:ab.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave,192.168.220.139' (ECDSA) to the list of known hosts.
root@slave's password:
----Send master's merged key file to slave----
[root@master .ssh]# scp authorized_keys root@slave:/root/.ssh/authorized_keys
root@slave's password:
authorized_keys 100% 785 571.2KB/s 00:00
----Test the passwordless login----
[root@master .ssh]# ssh slave
Last login: Fri Apr 5 18:52:59 2019 ----login succeeded----
[root@slave ~]# exit
logout
Connection to slave closed.
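The merge in 7.3 is nothing more than concatenating public-key lines into one file with strict permissions; a sketch with stand-in keys (real id_rsa.pub contents would replace the dummy strings). The ssh-copy-id helper that ships with OpenSSH automates the same append-and-chmod sequence if you prefer one command per node.

```shell
# Dummy stand-ins for each node's id_rsa.pub (a real key is one long line per file).
mkdir -p /tmp/sshdemo
echo 'ssh-rsa AAAAB3...master root@master' > /tmp/sshdemo/master.pub
echo 'ssh-rsa AAAAB3...slave root@slave'   > /tmp/sshdemo/slave.pub
# The merge: every authorized key is simply one line of authorized_keys.
cat /tmp/sshdemo/master.pub /tmp/sshdemo/slave.pub > /tmp/sshdemo/authorized_keys
chmod 600 /tmp/sshdemo/authorized_keys   # sshd ignores key files with loose permissions
wc -l < /tmp/sshdemo/authorized_keys
# prints: 2
```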
II. Hadoop Configuration and Deployment
1. Download hadoop-2.6.2.tar.gz (https://hadoop.apache.org/release/2.6.2.html)
1.1 Upload it to the /home/soft directory
[root@master home]# cd /home/soft
[root@master soft]# ls
hadoop-2.6.2.tar.gz
1.2 Extract the archive into /home/hadoop with tar
[root@master soft]# mkdir /home/hadoop
[root@master soft]# mv hadoop-2.6.2.tar.gz /home/hadoop
[root@master soft]# cd /home/hadoop
[root@master hadoop]# tar -xvf hadoop-2.6.2.tar.gz
[root@master hadoop]# ls
hadoop-2.6.2 hadoop-2.6.2.tar.gz
[root@master hadoop]# mkdir hdfs
[root@master hadoop]# mkdir tmp
[root@master hadoop]# cd hdfs/
[root@master hdfs]# mkdir name
[root@master hdfs]# mkdir data
[root@master hdfs]# ls
data name
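As an aside, the four mkdir calls above can be collapsed into one, since mkdir -p creates any missing parent directories; a sketch against /tmp so it is safe to try:

```shell
# Equivalent of the hdfs/name, hdfs/data and tmp layout in a single command.
mkdir -p /tmp/hadooplayout/hdfs/name /tmp/hadooplayout/hdfs/data /tmp/hadooplayout/tmp
ls /tmp/hadooplayout/hdfs
# prints: data  name
```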
1.3 Edit the main configuration files
1.3.1 First change into the /home/hadoop/hadoop-2.6.2/etc/hadoop/ directory
[root@master hdfs]# cd /home/hadoop/hadoop-2.6.2/etc/hadoop/
[root@master hadoop]# ls
capacity-scheduler.xml kms-env.sh
configuration.xsl kms-log4j.properties
container-executor.cfg kms-site.xml
core-site.xml log4j.properties
hadoop-env.cmd mapred-env.cmd
hadoop-env.sh mapred-env.sh
hadoop-metrics2.properties mapred-queues.xml.template
hadoop-metrics.properties mapred-site.xml.template
hadoop-policy.xml slaves
hdfs-site.xml ssl-client.xml.example
httpfs-env.sh ssl-server.xml.example
httpfs-log4j.properties yarn-env.cmd
httpfs-signature.secret yarn-env.sh
httpfs-site.xml yarn-site.xml
kms-acls.xml
1.3.2 Edit the slaves file and add the slave node
[root@master hadoop]# vi slaves
---File contents---
slave
1.3.3 Edit the core-site.xml file
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
</property>
</configuration>
1.3.4 Edit the hdfs-site.xml file (with only one DataNode, dfs.replication below can also be set to 1; a value of 2 leaves blocks under-replicated until a second slave joins)
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
1.3.5 Set JAVA_HOME in hadoop-env.sh, mapred-env.sh, and yarn-env.sh so the cluster can find the JDK; in each file the line is:
export JAVA_HOME=/usr/java/jdk1.8.0_101
1.3.6 Configure the mapred-site.xml file (it must first be copied from the template)
Copy the file:
[root@master hadoop]#cp mapred-site.xml.template mapred-site.xml
[root@master hadoop]# vi mapred-site.xml
-------mapred-site.xml contents (note: dfs.permissions is an HDFS property and only takes effect if placed in hdfs-site.xml; it is inert here)-------
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
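One file the steps above do not touch is yarn-site.xml. On a two-node cluster the NodeManager on slave has to be told where the ResourceManager runs, and the MapReduce shuffle service must be enabled; the two properties below are the standard Hadoop 2.x keys for that. This sketch writes the fragment to a temp file for review; on the real system the properties go inside the configuration element of /home/hadoop/hadoop-2.6.2/etc/hadoop/yarn-site.xml.

```shell
# Minimal yarn-site.xml additions commonly needed for a multi-node Hadoop 2.x cluster.
cat > /tmp/yarn-site.demo.xml <<'EOF'
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
EOF
grep -c '<property>' /tmp/yarn-site.demo.xml
# prints: 2
```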
1.4 Send the configured Hadoop directory from the master node to the slave node
----scp, already used in section 7, handles this; the same /home/hadoop path is assumed on slave----
[root@master ~]# scp -r /home/hadoop root@slave:/home/
III. Starting the Processes
1. Format the file system
[root@master ~]# hadoop namenode -format ----format command----
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
19/05/01 10:50:39 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master/192.168.220.138
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.2
.........
.........
19/05/01 10:51:10 INFO util.ExitUtil: Exiting with status 0
----"Exiting with status 0" means the format succeeded----
19/05/01 10:51:10 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.220.138
************************************************************/
2. Start the cluster
[root@master ~]# cd /home/hadoop/hadoop-2.6.2/sbin/
[root@master sbin]# ls
distribute-exclude.sh start-all.cmd stop-balancer.sh
hadoop-daemon.sh start-all.sh stop-dfs.cmd
hadoop-daemons.sh start-balancer.sh stop-dfs.sh
hdfs-config.cmd start-dfs.cmd stop-secure-dns.sh
hdfs-config.sh start-dfs.sh stop-yarn.cmd
httpfs.sh start-secure-dns.sh stop-yarn.sh
kms.sh start-yarn.cmd yarn-daemon.sh
mr-jobhistory-daemon.sh start-yarn.sh yarn-daemons.sh
refresh-namenodes.sh stop-all.cmd
slaves.sh stop-all.sh
[root@master sbin]# ./start-all.sh
(start-all.sh is deprecated in Hadoop 2.x; running ./start-dfs.sh followed by ./start-yarn.sh starts the same daemons without the deprecation warning.)
3. Check that everything is running
[root@master sbin]# jps
9314 NameNode
9605 ResourceManager
9918 Jps
9471 SecondaryNameNode
----These four processes mean the master node is running normally; on the slave node, jps should show DataNode and NodeManager----