Setting Up a Hadoop 2.7.3 + Spark 2.1.0 Cluster on CentOS 7 (1 NN + 2 DN)

Environment

Hostname                     IP               Processes
nn.hadoop.data.example.net   172.16.156.220   NameNode, Master, ResourceManager, SecondaryNameNode, JobHistoryServer
dn1.hadoop.data.example.net  172.16.156.221   NodeManager, DataNode, Worker
dn2.hadoop.data.example.net  172.16.156.222   NodeManager, DataNode, Worker
Install the following packages with yum (some of them may not be needed):
yum install pcre-devel openssl openssl-devel openssh-clients htop gcc zlib lrzsz zip unzip vim telnet-server ncurses wget net-tools
Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
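Optionally double-check that it is really off (not in the original steps, just a quick sanity check):

systemctl is-active firewalld    # prints "inactive" once stopped
firewall-cmd --state             # prints "not running"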
Configure the hosts file
vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.156.220  nn.hadoop.data.example.net
172.16.156.221  dn1.hadoop.data.example.net
172.16.156.222  dn2.hadoop.data.example.net
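A quick way to confirm that each node resolves the others (run on every machine once the hosts file is in place):

for h in nn.hadoop.data.example.net dn1.hadoop.data.example.net dn2.hadoop.data.example.net; do
    ping -c 1 $h
done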

Install the JDK and Scala

0. Create the directories
mkdir -p /app/java
mkdir -p /app/scala
1. Download

Download the JDK

wget http://download.oracle.com/otn-pub/java/jdk/8u121-b13/e9e7ea248e2c4826b92b3f075a80e441/jdk-8u121-linux-x64.tar.gz

If the link has expired, download the JDK from Oracle's site and upload it to the server.
Download Scala

wget http://downloads.lightbend.com/scala/2.12.1/scala-2.12.1.tgz
2. Move & extract
mv jdk-8u121-linux-x64.tar.gz /app/java
cd /app/java && tar -zxvf jdk-8u121-linux-x64.tar.gz
mv scala-2.12.1.tgz /app/scala
cd /app/scala && tar -zxvf scala-2.12.1.tgz
3. Set permissions (run the chown after creating the hadoop user in the next step)
chmod -R 775 /app/
chown -R hadoop /app/
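A quick sanity check that both unpacked correctly (full paths, since PATH is only set in the /etc/profile block at the end of this post):

/app/java/jdk1.8.0_121/bin/java -version
/app/scala/scala-2.12.1/bin/scala -version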

Create the hadoop user

useradd hadoop
passwd hadoop

Unless otherwise noted, everything from here on is done as the hadoop user.

Passwordless SSH login

Generate the key pair ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub (hit Enter through the prompts to accept the defaults and an empty passphrase):

ssh-keygen -t rsa 

Copy the public key to each machine (including this one):

ssh-copy-id -i nn.hadoop.data.example.net
ssh-copy-id -i dn1.hadoop.data.example.net
ssh-copy-id -i dn2.hadoop.data.example.net
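If the keys were distributed correctly, each of these should print the remote hostname without prompting for a password:

for h in nn.hadoop.data.example.net dn1.hadoop.data.example.net dn2.hadoop.data.example.net; do
    ssh $h hostname
done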

Install Hadoop

0. Create the directories
mkdir -p /app/hadoop/data
mkdir -p /app/hadoop/name
mkdir -p /app/hadoop/tmp
1. Download Hadoop (the closer.cgi link returns a mirror-selection page rather than the tarball itself; the archive URL below downloads directly)
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
2. Move & extract
mv hadoop-2.7.3.tar.gz /app/hadoop
cd /app/hadoop && tar -zxvf hadoop-2.7.3.tar.gz
3. Edit the configuration files

/etc/profile (requires root)

export HADOOP_HOME=/app/hadoop/hadoop-2.7.3
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

slaves (this and the following config files are under $HADOOP_HOME/etc/hadoop)

dn1.hadoop.data.example.net
dn2.hadoop.data.example.net

hadoop-env.sh

# export JAVA_HOME=${JAVA_HOME}
change it to
export JAVA_HOME=/app/java/jdk1.8.0_121/

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://nn.hadoop.data.example.net:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop/tmp</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>nn.hadoop.data.example.net:50090</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/app/hadoop/name</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/app/hadoop/data</value>
    </property>
</configuration>

mapred-site.xml (first create it from the template)

cp mapred-site.xml.template mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>nn.hadoop.data.example.net:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>nn.hadoop.data.example.net:19888</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>nn.hadoop.data.example.net</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
         <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
         <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
4. Format the NameNode (the hadoop namenode form is deprecated in 2.x)
hdfs namenode -format
5. Copy files to the other machines

Copy /app/hadoop (including data, name, tmp, and the configured Hadoop itself) to the other machines, as sketched below.
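A minimal sketch, assuming /app on the DataNodes is already writable by the hadoop user (scp comes with the openssh-clients package installed earlier):

for h in dn1.hadoop.data.example.net dn2.hadoop.data.example.net; do
    scp -r /app/hadoop $h:/app/
done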

6. Start HDFS
start-dfs.sh
7. Start YARN
start-yarn.sh
8. Start the JobHistory server
mr-jobhistory-daemon.sh start historyserver
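If everything came up, jps on the NameNode should list NameNode, SecondaryNameNode, ResourceManager, and JobHistoryServer, and jps on each DataNode should list DataNode and NodeManager. The web UIs are on the standard Hadoop 2.x ports:

jps
# HDFS NameNode UI:  http://nn.hadoop.data.example.net:50070
# YARN RM UI:        http://nn.hadoop.data.example.net:8088
# JobHistory UI:     http://nn.hadoop.data.example.net:19888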

Install Spark 2

0. Create the directory
mkdir -p /app/spark
1. Download Spark 2 (as with Hadoop, closer.lua is a mirror picker; the archive URL downloads directly)
wget https://archive.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz
2. Move & extract
mv spark-2.1.0-bin-hadoop2.7.tgz /app/spark
cd /app/spark && tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz
3. Edit the configuration files

/etc/profile (requires root)

export SPARK_HOME=/app/spark/spark-2.1.0-bin-hadoop2.7
export PATH="$SPARK_HOME/bin:$PATH"

spark-env.sh (in $SPARK_HOME/conf)

cp spark-env.sh.template spark-env.sh
export SCALA_HOME=/app/scala/scala-2.12.1
export JAVA_HOME=/app/java/jdk1.8.0_121
export SPARK_MASTER_IP=nn.hadoop.data.example.net
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/app/hadoop/hadoop-2.7.3/etc/hadoop

(SPARK_MASTER_IP is deprecated in Spark 2.x in favor of SPARK_MASTER_HOST but is still honored. Note also that the pre-built spark-2.1.0-bin-hadoop2.7 package bundles Scala 2.11; the Scala 2.12.1 installed earlier is a standalone toolchain, not the version Spark itself runs on.)

slaves (also in $SPARK_HOME/conf)

dn1.hadoop.data.example.net
dn2.hadoop.data.example.net
4. Copy files to the other machines

Copy /app/spark to the other machines, as sketched below.
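Same approach as the Hadoop copy step:

for h in dn1.hadoop.data.example.net dn2.hadoop.data.example.net; do
    scp -r /app/spark $h:/app/
done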

5. Start Spark
/app/spark/spark-2.1.0-bin-hadoop2.7/sbin/start-all.sh
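To verify, open the master web UI at http://nn.hadoop.data.example.net:8080 (both Workers should be listed), and optionally submit the bundled SparkPi example against the standalone master (7077 is the default master port; the examples jar path is the one shipped in the pre-built 2.1.0 package):

/app/spark/spark-2.1.0-bin-hadoop2.7/bin/spark-submit \
    --master spark://nn.hadoop.data.example.net:7077 \
    --class org.apache.spark.examples.SparkPi \
    /app/spark/spark-2.1.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.0.jar 10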

Installation complete ^_^


Since the environment variables were edited several times along the way, here is the complete set; you could just as well paste it into /etc/profile in one go before starting, then apply it with source /etc/profile:
export JAVA_HOME=/app/java/jdk1.8.0_121
export SCALA_HOME=/app/scala/scala-2.12.1
export PATH=$JAVA_HOME/bin:$SCALA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export HADOOP_HOME=/app/hadoop/hadoop-2.7.3
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

export SPARK_HOME=/app/spark/spark-2.1.0-bin-hadoop2.7
export PATH="$SPARK_HOME/bin:$PATH"

References

CentOS 6.5 Hadoop 2.7.3 cluster environment setup
http://blog.csdn.net/mxxlevel/article/details/52653086
The Way of Spark (advanced series), Spark from beginner to master, Part 1: Building a Spark 1.5.0 cluster
https://yq.aliyun.com/articles/60309?spm=5176.8251999.569296.66.0H8Bal
Hadoop 2.7.3 + Spark 2.1.0 fully distributed environment, complete setup walkthrough
http://www.cnblogs.com/purstar/p/6293605.html
