Big Data: Environment Setup

According to the official description, Hadoop is a distributed system infrastructure developed by the Apache Software Foundation. It lets users develop distributed programs without understanding the underlying distributed internals, harnessing the power of a cluster for data storage and computation. The core of the Hadoop framework is HDFS and MapReduce: HDFS provides distributed storage for massive data, and MapReduce handles the processing.

Environment

Component    Version           Download
jdk          Oracle JDK 1.8    Oracle website
hadoop       3.2.1
zookeeper    3.5.5
Cluster Setup

First create a virtual machine; VMware is used here. In the VM, configure the basics: hostname, IP address, name resolution, and so on.
1) Change the hostname

hostnamectl set-hostname hadoop01

2) Configure a static IP

vi /etc/sysconfig/network-scripts/ifcfg-ens33 

Edit the file so it reads:

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
#BOOTPROTO="dhcp"
BOOTPROTO="static" # static IP
IPADDR="192.168.229.101" # IP address
NETMASK="255.255.255.0" # netmask
GATEWAY="192.168.229.2" # gateway; must match the network of the VMware Virtual Ethernet Adapter for VMnet8 (see the figure below)
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="d7f5210b-8217-4f4e-a71e-b07d840fb89d"
DEVICE="ens33"
ONBOOT="yes"

(Figure: VMware virtual network adapter settings)

3) Configure resolv.conf
This file configures DNS resolution; set nameserver so the system can find a DNS server.

vi /etc/resolv.conf 

Set:

nameserver 192.168.229.2 # may be overwritten after the network service restarts; re-edit if so

4) Configure hosts
hosts is the Linux file responsible for mapping IP addresses to hostnames; configuring it lets you reach other cluster nodes quickly by name.

vi /etc/hosts

Append:

192.168.229.101 hadoop01
192.168.229.102 hadoop02
192.168.229.201 hadoop03
192.168.229.202 hadoop04
192.168.229.203 hadoop05

Disable the firewall on CentOS 7
1. Check the firewall status

firewall-cmd --state

2. Stop the firewall

systemctl stop firewalld.service

3. Disable the firewall at boot

systemctl disable firewalld.service 

5) Reboot, then verify that the IP address is correct and that the network is reachable.
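A quick sanity check, for example:

ip addr show ens33       # confirm the static address took effect
ping -c 3 192.168.229.2  # confirm the gateway is reachable
ping -c 3 www.baidu.com  # confirm DNS resolution works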
6) Install the JDK: extract the downloaded archive to the target location and configure global environment variables.
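For example, assuming the archive is named jdk-8u221-linux-x64.tar.gz (the exact filename depends on your download; the extracted directory must match JAVA_HOME below):

tar -zxvf jdk-8u221-linux-x64.tar.gz -C /usr/share/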

 vi /etc/profile

Append:

#java----start
export JAVA_HOME=/usr/share/jdk1.8.0_221
export PATH=$PATH:$JAVA_HOME/bin
#java----end

Apply the environment variables

source /etc/profile
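Then verify the installation:

java -version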

7) Upload hadoop-3.2.1.tar.gz and extract it.
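For example, assuming the archive was uploaded to the home directory and ~/BD is the install location used throughout this post:

tar -zxvf ~/hadoop-3.2.1.tar.gz -C ~/BD/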
Configure global environment variables (again in /etc/profile):

#HADOOP
export HADOOP_HOME=/home/hadoop/BD/hadoop-3.2.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:/home/hadoop/BD/bin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
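
Reload the profile and confirm Hadoop is on the PATH:

source /etc/profile
hadoop version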

8) Edit the Hadoop core configuration files
core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<!-- ZooKeeper quorum used for HA coordination -->
	<property>
		<name>ha.zookeeper.quorum</name>
		<value>hadoop03:2181,hadoop04:2181,hadoop05:2181</value>
	</property>
	<!-- Hadoop temporary directory -->
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/home/hadoop/BD/hadoop-3.2.1/tmp</value>
	</property>
	<!-- Default filesystem: points at the HA nameservice defined in hdfs-site.xml -->
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://mycluster</value>
	</property>

</configuration>

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<!-- Replication factor -->
	<property>
		<name>dfs.replication</name>
		<value>3</value>
	</property>
	<!-- Logical name of the HA nameservice -->
	<property>
		<name>dfs.nameservices</name>
		<value>mycluster</value>
	</property>
	<!-- NameNode IDs within the nameservice -->
	<property>
		<name>dfs.ha.namenodes.mycluster</name>
		<value>nn1,nn2</value>
	</property>
	<!-- RPC address and port of the first NameNode -->
	<property>
		<name>dfs.namenode.rpc-address.mycluster.nn1</name>
		<value>hadoop01:9000</value>
	</property>
	<!-- RPC address and port of the second NameNode -->
	<property>
		<name>dfs.namenode.rpc-address.mycluster.nn2</name>
		<value>hadoop02:9000</value>
	</property>
	<!-- Web UI address and port of the first NameNode -->
	<property>
		<name>dfs.namenode.http-address.mycluster.nn1</name>
		<value>hadoop01:50070</value>
	</property>
	<!-- Web UI address and port of the second NameNode -->
	<property>
		<name>dfs.namenode.http-address.mycluster.nn2</name>
		<value>hadoop02:50070</value>
	</property>
	<!-- Shared edits directory on the JournalNode quorum -->
	<property>
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://hadoop03:8485;hadoop04:8485;hadoop05:8485/QJCluster</value>
	</property>
	<!-- Local path where each JournalNode stores its edits (the actual QJournal data directory) -->
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/home/hadoop/BD/hadoop-3.2.1/QJEditsData</value>
	</property>
	<!-- Enable automatic failover -->
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
	<!-- Proxy provider clients use to locate the active NameNode -->
	<property>
		<name>dfs.client.failover.proxy.provider.mycluster</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<!-- Fencing method ensuring only one NameNode is active -->
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>shell(/bin/true)</value>
	</property>
	<!-- SSH private key used for fencing -->
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/hadoop/.ssh/id_rsa</value>
	</property>
	<!-- Fencing SSH connect timeout (ms) -->
	<property>
		<name>dfs.ha.fencing.ssh.connect-timeout</name>
		<value>30000</value>
	</property>
</configuration>

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<property>
		<name>yarn.app.mapreduce.am.env</name>
		<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
	</property>
	<property>
		<name>mapreduce.map.env</name>
		 <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
	</property>
	<property>
		<name>mapreduce.reduce.env</name>
		 <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
	</property>

</configuration>

workers
This file lists the nodes that store data; the master reads it to balance work across them.
Delete:

localhost

Insert:

hadoop03
hadoop04
hadoop05

yarn-site.xml

<?xml version="1.0"?>
<configuration>

<!-- Site specific YARN configuration properties -->
	<!-- ResourceManager host -->
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>hadoop01</value>
	</property>
	<!-- Shuffle service required by MapReduce -->
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
</configuration>

Clone the virtual machine
Change the IP address and hostname on each clone to match the hosts table above.
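For example, on the second machine (repeat with the corresponding values on the others; depending on how VMware clones the NIC, you may also need to remove the UUID line from ifcfg-ens33):

hostnamectl set-hostname hadoop02
sed -i 's/192.168.229.101/192.168.229.102/' /etc/sysconfig/network-scripts/ifcfg-ens33
systemctl restart network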

9) Set up passwordless SSH
Generate a key pair: run the command below on hadoop01 and hadoop02 so the master nodes can log in to the other nodes without a password.

ssh-keygen

Then copy the public key to every node:

ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03
ssh-copy-id hadoop04
ssh-copy-id hadoop05
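
Verify that the login is passwordless, for example:

ssh hadoop03 hostname   # should print hadoop03 without asking for a password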

At this point the Hadoop configuration is complete, but ZooKeeper has not been configured yet, so the cluster cannot be started. Since the steps above already allow SSH logins to the virtual machines, it is worth picking a convenient terminal emulator to connect with before installing ZooKeeper. See the link below for more on Windows Terminal.
Windows Terminal
The public keys of hadoop01 and hadoop02 have already been distributed to all servers. Copying hadoop01's private key to a designated directory on the Windows host lets Windows Terminal log in to all servers without a password.
A quick way to open Windows Terminal: press the Win key, type wt, and press Enter.

Next, install ZooKeeper
ZooKeeper is a distributed, open-source coordination service for distributed applications, providing consistency services such as configuration management, naming, distributed synchronization, and group membership. It is deployed as a cluster, typically of 3 to 5 nodes. In this setup its election mechanism drives NameNode failover, guaranteeing that of the two NameNodes one is in the active state and the other is standby.

Install ZooKeeper on hadoop03, hadoop04, and hadoop05.

1. Extract ZooKeeper to the target directory.
2. Edit the configuration file. In ZooKeeper's conf directory, copy a working config from the template:

cp zoo_sample.cfg zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# snapshot directory; you must create it yourself
dataDir=/home/hadoop/BD/apache-zookeeper-3.5.5-bin/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# server ID, host, leader-follower communication port, election port
server.1=hadoop03:2888:3888
server.2=hadoop04:2888:3888
server.3=hadoop05:2888:3888

Create the directory /home/hadoop/BD/apache-zookeeper-3.5.5-bin/tmp/zookeeper and, inside it, a myid file containing 1.
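For example:

mkdir -p /home/hadoop/BD/apache-zookeeper-3.5.5-bin/tmp/zookeeper
echo 1 > /home/hadoop/BD/apache-zookeeper-3.5.5-bin/tmp/zookeeper/myid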
Copy the ZooKeeper directory to the other servers:

scp -r ~/BD/apache-zookeeper-3.5.5-bin hadoop04:~/BD/
scp -r ~/BD/apache-zookeeper-3.5.5-bin hadoop05:~/BD/

Change myid on the other servers: 2 on hadoop04 and 3 on hadoop05, matching the server.N entries in zoo.cfg.
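For example, from hadoop01 (whose public key is already on every node):

ssh hadoop04 'echo 2 > /home/hadoop/BD/apache-zookeeper-3.5.5-bin/tmp/zookeeper/myid'
ssh hadoop05 'echo 3 > /home/hadoop/BD/apache-zookeeper-3.5.5-bin/tmp/zookeeper/myid'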
On each ZooKeeper node, edit .bashrc and append:

export PATH=$PATH:/home/hadoop/BD/apache-zookeeper-3.5.5-bin/bin

With that, the ZooKeeper configuration is complete and Hadoop can be started.
First start the ZooKeeper service on the three ZooKeeper servers by running zkServer.sh start on each. Doing this by hand on three servers at every startup is tedious, so create a script on hadoop01 that starts all three with a single command:

for i in {3..5};
do
echo "正在启动服务${i}"
ssh hadoop0${i} "zkServer.sh start"
done

Name the script zkServerStart, make it executable, and put its directory on the PATH.
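For example, assuming the script is saved in ~/BD/bin, which step 7 already added to the PATH:

chmod +x ~/BD/bin/zkServerStart
zkServerStart
for i in {3..5}; do ssh hadoop0${i} "zkServer.sh status"; done   # one node should report leader, the others follower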
Once ZooKeeper is up, run start-all.sh on hadoop01 to start Hadoop.
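As a quick check, run jps on each node. Given the configuration above, hadoop01 should show NameNode, DFSZKFailoverController, and ResourceManager; hadoop02 should show NameNode and DFSZKFailoverController; and hadoop03-05 should show DataNode, JournalNode, NodeManager, and QuorumPeerMain.

jps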
Done. The next post will cover HDFS.
