Hadoop and Spark Cluster Setup

(one master node, three slave nodes)
Software versions used:

os:     CentOS 7
hadoop: hadoop-2.9.2.tar.gz
jdk:    jdk-8u131-linux-x64.rpm
spark:  spark-2.4.7-bin-hadoop2.7.tgz

1. Install Java

Upload the JDK rpm package to every node and install it with rpm:

rpm -ivh jdk-8u131-linux-x64.rpm

Add the Java environment variables:

[root@master ~]# vi /etc/profile
Append the following lines:
JAVA_HOME=/usr/java/jdk1.8.0_131
CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib
PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
export PATH CLASSPATH JAVA_HOME

Check whether the installation succeeded:

[root@master ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

Output like the above means the JDK was installed successfully.
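
The edits to /etc/profile only take effect in a new login shell. A minimal check, run on each node, to confirm the variables are loaded:

source /etc/profile      # re-read the profile in the current shell
echo $JAVA_HOME          # expected: /usr/java/jdk1.8.0_131
java -version            # should print 1.8.0_131 as above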

2. User setup

2.1 Create a hadoop user

Perform this step on every node:

[root@master ~]# useradd -m hadoop -s /bin/bash
[root@master ~]# passwd hadoop
Changing password for user hadoop.
New password: 
BAD PASSWORD: The password fails the dictionary check - it is too simplistic/systematic
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@master ~]# su - hadoop
[hadoop@localhost ~]$ 

Give the hadoop user administrator (sudo) privileges:

[root@localhost ~]# visudo

Find the line root ALL=(ALL) ALL (it should be around line 98; press ESC, then type :98 and press Enter to jump straight to it), then add a new line right below it, using a tab as the separator: hadoop ALL=(ALL) ALL
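
A quick, optional check that the sudoers entry took effect (it assumes the hadoop user and password created above):

[root@master ~]# su - hadoop
[hadoop@master ~]$ sudo whoami      # enter the hadoop user's password when prompted
root                                # "root" confirms sudo privileges are in place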

2.2 Passwordless SSH

First, add the host entries on all four machines (every node gets the same entries):

[root@master ~]# vi /etc/hosts
Add the following lines (adjust the IPs to match your machines):
192.168.1.224 master
192.168.1.225 slave1
192.168.1.226 slave2
192.168.1.227 slave3
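
Before moving on, it is worth checking that every hostname resolves and is reachable; a minimal sketch, run from any node:

for h in master slave1 slave2 slave3; do
    ping -c 1 $h > /dev/null && echo "$h reachable" || echo "$h NOT reachable"
done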

Set up passwordless SSH:

[root@master ~]# su - hadoop
[hadoop@master ~]$ ssh localhost    # if ~/.ssh does not exist yet, run ssh localhost once first to create it
[hadoop@master ~]$ cd ~/.ssh 
[hadoop@master .ssh]$ rm ./id_rsa*    # remove any previously generated keys (if present)
[hadoop@master .ssh]$ ssh-keygen -t rsa  # just press Enter at every prompt

Let the master node SSH into itself without a password; on the master node run:
[hadoop@master .ssh]$ cat ./id_rsa.pub >> ./authorized_keys
Afterwards run ssh master to verify (you may need to type yes; once it succeeds, run exit to return to the original terminal). Then transfer the public key from the master node to slave1, slave2 and slave3:
[hadoop@master .ssh]$ scp ~/.ssh/id_rsa.pub hadoop@slave1:/home/hadoop/
[hadoop@master .ssh]$ scp ~/.ssh/id_rsa.pub hadoop@slave2:/home/hadoop/
[hadoop@master .ssh]$ scp ~/.ssh/id_rsa.pub hadoop@slave3:/home/hadoop/
Next, on slave1, slave2 and slave3, append the public key to the authorized keys (run the following on each of the three nodes):
[hadoop@slave1 ~]$ mkdir -p ~/.ssh
[hadoop@slave1 ~]$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
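
A quick way to confirm passwordless login from the master to every slave (a small sketch assuming the hostnames above):

[hadoop@master ~]$ for h in slave1 slave2 slave3; do ssh $h hostname; done   # should print each slave's hostname without asking for a password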

Troubleshooting passwordless login failures

Configuration issues:

  1. Check whether the AuthorizedKeysFile option is enabled in /etc/ssh/sshd_config
  2. Check that the file referenced by AuthorizedKeysFile exists and has the expected contents

Directory permission issues (note: the permissions must be fixed on every node):

sudo chmod 700 ~/.ssh
sudo chmod 600 ~/.ssh/authorized_keys 

3. Synchronize the cluster clocks (skip this step if the clocks are already consistent across the cluster)

Install the ntp service on every node:

[hadoop@master ~]$ sudo yum install -y ntp
## run the following on every node at the same time
[hadoop@master ~]$ sudo ntpdate us.pool.ntp.org
5 Oct 18:19:41 ntpdate[2997]: step time server 138.68.46.177 offset -6.006070 sec
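
A one-off ntpdate only fixes the clock once; to keep the clocks in sync, one option is to enable the ntpd service on every node (a sketch, assuming the default CentOS 7 ntp configuration):

[hadoop@master ~]$ sudo systemctl enable ntpd    # start ntpd at boot
[hadoop@master ~]$ sudo systemctl start ntpd     # start it now and keep the clock synchronized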

4. Install Hadoop

Upload hadoop-2.9.2.tar.gz to the home directory (~) of the hadoop user on the master node.

Here Hadoop is installed under /usr/local:

[hadoop@master ~]$ sudo tar -zxf ~/hadoop-2.9.2.tar.gz -C /usr/local   # extract into /usr/local
[hadoop@master ~]$ cd /usr/local/
[hadoop@master local]$ sudo mv ./hadoop-2.9.2/ ./hadoop        # rename the directory to hadoop
[hadoop@master local]$ sudo chown -R hadoop:hadoop ./hadoop    # change ownership to the hadoop user

Hadoop is ready to use once extracted. Run the following command to check that Hadoop works; on success it prints the Hadoop version information:

[hadoop@master local]$ cd /usr/local/hadoop
[hadoop@master hadoop]$ ./bin/hadoop version
Hadoop 2.9.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2020-11-17T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.9.2.jar

4.1 Hadoop standalone configuration (non-distributed)

Note: set up Hadoop completely on one node first, then copy it to the other nodes.

Hadoop's default mode is non-distributed (standalone) and it runs without any further configuration. Below is a simple usage example.

We run the grep example: all files in the input folder are taken as input, words matching the regular expression dfs[a-z.]+ are selected and counted, and the result is written to the output folder.

[hadoop@master hadoop]$ cd /usr/local/hadoop
[hadoop@master hadoop]$ mkdir ./input
[hadoop@master hadoop]$ cp ./etc/hadoop/*.xml ./input   # use the configuration files as input
[hadoop@master hadoop]$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep ./input ./output 'dfs[a-z.]+'
[hadoop@master hadoop]$ cat ./output/*       # view the result
1       dfsadmin
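
Hadoop will not overwrite an existing output directory; to rerun the example, delete the previous result first:

[hadoop@master hadoop]$ rm -r ./output      # otherwise the job fails because the output directory already exists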

4.2 Hadoop cluster configuration

Edit /usr/local/hadoop/etc/hadoop/slaves. Three worker nodes are configured here; the master node acts only as the master and not as a worker.

[hadoop@master hadoop]$ vi slaves    # its contents are:
slave1
slave2
slave3

Edit core-site.xml:

<configuration>
      <property>
          <name>hadoop.tmp.dir</name>
          <value>/usr/local/hadoop/tmp</value>
          <description>Abase for other temporary directories.</description>
      </property>
      <property>
          <name>fs.defaultFS</name>
          <value>hdfs://master:9000</value>
      </property>
</configuration>

Edit hdfs-site.xml; dfs.replication is normally set to 3:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>

Edit mapred-site.xml (it must be renamed first; the default file is named mapred-site.xml.template):

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Edit yarn-site.xml:

<configuration>
  <!-- Site specific YARN configuration properties -->
      <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
      </property>
      <property>
          <name>yarn.resourcemanager.hostname</name>
          <value>master</value>
      </property>
       <property>
          <name>yarn.nodemanager.pmem-check-enabled</name>
          <value>false</value>
      </property>
      <property>
          <name>yarn.nodemanager.vmem-check-enabled</name>
          <value>false</value>
      </property>
</configuration>

After configuring, copy the /usr/local/hadoop folder from the master node to the other nodes. On the master node run:

If you have previously run in pseudo-distributed mode, it is recommended to delete the old temporary files before switching to cluster mode.
[hadoop@master ~]$ cd /usr/local
[hadoop@master local]$ sudo rm -r ./hadoop/tmp      # delete the Hadoop temporary files
[hadoop@master local]$ sudo rm -r ./hadoop/logs/*   # delete the log files
[hadoop@master local]$ tar -zcf ~/hadoop.tar.gz ./hadoop   # compress first, then copy
[hadoop@master local]$ cd ~
[hadoop@master ~]$ scp ./hadoop.tar.gz slave1:/home/hadoop
[hadoop@master ~]$ scp ./hadoop.tar.gz slave2:/home/hadoop
[hadoop@master ~]$ scp ./hadoop.tar.gz slave3:/home/hadoop

On each slave node, do the following.

On slave1:

[hadoop@slave1 ~]$ sudo rm -r /usr/local/hadoop       # remove the old installation (if present)
[hadoop@slave1 ~]$ sudo tar -zxf ~/hadoop.tar.gz -C /usr/local
[hadoop@slave1 ~]$ sudo chown -R hadoop /usr/local/hadoop

On slave2:

[hadoop@slave2 ~]$ sudo rm -r /usr/local/hadoop       # remove the old installation (if present)
[hadoop@slave2 ~]$ sudo tar -zxf ~/hadoop.tar.gz -C /usr/local
[hadoop@slave2 ~]$ sudo chown -R hadoop /usr/local/hadoop

On slave3:

[hadoop@slave3 ~]$ sudo rm -r /usr/local/hadoop       # remove the old installation (if present)
[hadoop@slave3 ~]$ sudo tar -zxf ~/hadoop.tar.gz -C /usr/local
[hadoop@slave3 ~]$ sudo chown -R hadoop /usr/local/hadoop
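
As a quick sanity check, you can confirm from the master that each slave now has a working Hadoop installation (a sketch relying on the passwordless SSH set up earlier):

[hadoop@master ~]$ for h in slave1 slave2 slave3; do ssh $h /usr/local/hadoop/bin/hadoop version | head -n 1; done   # each line should read "Hadoop 2.9.2"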

For the first start-up, the NameNode must be formatted on the master node:

[hadoop@master ~]$ hdfs namenode -format 

Note: before running this command, add the following environment variables first:

[root@master ~]# vi /etc/profile
Append the following:
export HADOOP_HOME=/usr/local/hadoop 
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
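
A short check, after reloading the profile, that the Hadoop commands are on the PATH and that the cluster configuration is being picked up (hdfs getconf is a standard Hadoop utility):

[hadoop@master ~]$ source /etc/profile
[hadoop@master ~]$ which hdfs                              # expected: /usr/local/hadoop/bin/hdfs
[hadoop@master ~]$ hdfs getconf -confKey fs.defaultFS      # expected: hdfs://master:9000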

5. Start Hadoop

Before starting, turn off the firewall.

On CentOS 6.x the firewall can be stopped with:
sudo service iptables stop   # stop the firewall service
sudo chkconfig iptables off  # disable it at boot so it does not need to be stopped manually

On CentOS 7 the service is called firewalld instead, so use:
systemctl stop firewalld.service    # stop firewalld
systemctl disable firewalld.service # disable firewalld at boot

Start Hadoop (this only needs to be run on the master node).

Note: edit /usr/local/hadoop/etc/hadoop/hadoop-env.sh and change
export JAVA_HOME=${JAVA_HOME} to export JAVA_HOME=/usr/java/jdk1.8.0_131/. This change must be made on every node.
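
One way to apply that edit everywhere without logging into each node by hand is a small sed loop over SSH (a sketch; it assumes the hostnames above and that the file still contains the default export JAVA_HOME=${JAVA_HOME} line):

for h in master slave1 slave2 slave3; do
    ssh $h "sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.8.0_131/|' /usr/local/hadoop/etc/hadoop/hadoop-env.sh"
done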

On the master node, switch to the hadoop user.

# start Hadoop from /usr/local/hadoop/sbin
[hadoop@master sbin]$ ./start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-master.out
slave3: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-slave3.out
slave1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-master.out
slave1: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-slave1.out
slave2: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-slave2.out
slave3: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-slave3.out

Use jps to check that each node started correctly:

[hadoop@master hadoop]$ jps
6194 ResourceManager
5717 NameNode
5960 SecondaryNameNode
6573 Jps    
[hadoop@slave1 hadoop]$ jps
4888 Jps
4508 DataNode
4637 NodeManager
[hadoop@slave2 hadoop]$ jps
3841 DataNode
3970 NodeManager
4220 Jps    
[hadoop@slave3 hadoop]$ jps
4032 NodeManager
4282 Jps
3903 DataNode
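
You can also confirm from the master that all three DataNodes have registered with HDFS (dfsadmin -report is a standard HDFS command; the exact output will vary):

[hadoop@master hadoop]$ hdfs dfsadmin -report | grep "Live datanodes"    # expect a line like: Live datanodes (3):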

Open the Hadoop web UI
In a browser, go to http://<master IP>:50070 (here: http://192.168.1.224:50070).

6. Stop Hadoop

Note: switch to the hadoop user first.

Stopping the Hadoop cluster is also done on the master node, by running ./stop-all.sh:

[hadoop@master sbin]$ ./stop-all.sh 
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [master]
master: stopping namenode
slave1: stopping datanode
slave3: stopping datanode
slave2: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
slave1: stopping nodemanager
slave2: stopping nodemanager
slave3: stopping nodemanager
slave1: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
slave2: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
slave3: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop

At this point the Hadoop cluster setup is complete.

Part 2: Spark Cluster Setup

1. Install Spark

Upload spark-2.4.7-bin-hadoop2.7.tgz to the server and run the following commands:

[hadoop@master ~]$ sudo tar -zxf ~/spark-2.4.7-bin-hadoop2.7.tgz -C /usr/local/
[hadoop@master ~]$ cd /usr/local
[hadoop@master local]$ sudo mv spark-2.4.7-bin-hadoop2.7/ spark
[hadoop@master local]$ sudo chown -R hadoop:hadoop ./spark

2. Configure Spark

  1. In /usr/local/spark/conf, first run:

    cp spark-env.sh.template spark-env.sh
    
  2. Add the following to spark-env.sh:

    export JAVA_HOME=/usr/java/jdk1.8.0_131
    export SPARK_MASTER_IP=192.168.1.224
    export SPARK_MASTER_PORT=7077
    export HADOOP_HOME=/usr/local/hadoop/
    export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
    
  3. In /usr/local/spark/conf, list the worker nodes in the slaves file. Here there are 4 worker nodes because the master is included as well, so the master node acts both as the manager and as a compute node. Before adding the entries, first run:

    cp slaves.template slaves
    

    Then add the following:

    master
    slave1
    slave2
    slave3
    
  4. After configuring, copy the /usr/local/spark folder from the master host to each node. On the master host run:

    [hadoop@master ~]$ cd /usr/local/
    [hadoop@master local]$ sudo tar -zcf spark.master.tar.gz ./spark
    [hadoop@master local]$ scp ./spark.master.tar.gz slave1:/home/hadoop
    [hadoop@master local]$ scp ./spark.master.tar.gz slave2:/home/hadoop
    [hadoop@master local]$ scp ./spark.master.tar.gz slave3:/home/hadoop
    
  5. On slave1, slave2 and slave3, perform the same steps:

    [hadoop@slave1 ~]$ sudo tar -zxf ~/spark.master.tar.gz -C /usr/local
    [hadoop@slave1 ~]$ cd /usr/local/
    [hadoop@slave1 local]$ sudo chown -R hadoop:hadoop /usr/local/spark
    

3. Start the Spark cluster

Before starting the Spark cluster, the Hadoop cluster must be started first. Run the following commands on the master node.

Note: switch to the hadoop user.

Start the Hadoop cluster:

[hadoop@master ~]$ cd /usr/local/hadoop/sbin
[hadoop@master sbin]$ ./start-all.sh

Start the Spark cluster:

[hadoop@master ~]$ cd /usr/local/spark/sbin
[hadoop@master sbin]$ ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
slave2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.out
slave3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave3.out
slave1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.out
master: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-master.out
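
To verify that the standalone cluster actually accepts jobs, you can submit the bundled SparkPi example to the master (a sketch; the examples jar name shown matches the default Scala 2.11 build of Spark 2.4.7 and may differ in your distribution):

[hadoop@master ~]$ /usr/local/spark/bin/spark-submit \
    --master spark://master:7077 \
    --class org.apache.spark.examples.SparkPi \
    /usr/local/spark/examples/jars/spark-examples_2.11-2.4.7.jar 100
# look for a line like "Pi is roughly 3.14..." in the output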

View the Spark cluster information

Open http://192.168.1.224:8080 in a browser.

4. Stop the Spark cluster

Note: switch to the hadoop user.

[hadoop@master spark]$ ./sbin/stop-all.sh
[hadoop@master spark]$ cd ../
[hadoop@master local]$ cd hadoop
[hadoop@master hadoop]$ ./sbin/stop-all.sh 