Building a Cluster with CentOS 7 + JDK 1.8 + Scala 2.11 + Hadoop 2.8 + Spark 2.2 + ZooKeeper 3.4
This is my first blog post, documenting my own Spark cluster setup process. The reference materials I used are listed below:
https://blog.csdn.net/weixin_42292787/article/details/89309645
https://blog.csdn.net/qazwsxpcm/article/details/78937820
1. Cluster Planning
Hostname | spark1 | spark2 | spark3 | spark4 |
---|---|---|---|---|
IP | 192.168.80.151 | 192.168.80.152 | 192.168.80.153 | 192.168.80.154 |
Node type | Hadoop NameNode, Spark master | Hadoop DataNode, Spark slave | Hadoop DataNode, Spark slave | Hadoop DataNode, Spark slave |
JAVA_HOME | /usr/local/java/jdk1.8.0_221 | /usr/local/java/jdk1.8.0_221 | /usr/local/java/jdk1.8.0_221 | /usr/local/java/jdk1.8.0_221 |
SCALA_HOME | /usr/local/scala/scala-2.11.12 | /usr/local/scala/scala-2.11.12 | /usr/local/scala/scala-2.11.12 | /usr/local/scala/scala-2.11.12 |
HADOOP_HOME | /usr/local/hadoop/hadoop-2.8.2 | /usr/local/hadoop/hadoop-2.8.2 | /usr/local/hadoop/hadoop-2.8.2 | /usr/local/hadoop/hadoop-2.8.2 |
SPARK_HOME | /usr/local/spark/spark-2.2.0-bin-hadoop2.7 | /usr/local/spark/spark-2.2.0-bin-hadoop2.7 | /usr/local/spark/spark-2.2.0-bin-hadoop2.7 | /usr/local/spark/spark-2.2.0-bin-hadoop2.7 |
ZK_HOME | /usr/local/zookeeper/zookeeper3.4 | /usr/local/zookeeper/zookeeper3.4 | /usr/local/zookeeper/zookeeper3.4 | /usr/local/zookeeper/zookeeper3.4 |
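The `*_HOME` paths in the table above are typically wired up through environment variables on every node. A minimal sketch of the corresponding `/etc/profile` additions, assuming the install paths from the table (adjust if your directories differ):

```shell
# Environment variables matching the cluster plan above.
# Append to /etc/profile on every node, then run: source /etc/profile
export JAVA_HOME=/usr/local/java/jdk1.8.0_221
export SCALA_HOME=/usr/local/scala/scala-2.11.12
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.8.2
export SPARK_HOME=/usr/local/spark/spark-2.2.0-bin-hadoop2.7
export ZK_HOME=/usr/local/zookeeper/zookeeper3.4

# Put all the tool binaries on the PATH
export PATH=$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$ZK_HOME/bin
```

After sourcing the file, `java -version` and `scala -version` should confirm the setup on each node.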
2. Installation Preparation
2.1 Installation Packages
Download link: https://pan.baidu.com/s/1z-qGJfZRA1VPVnmst9I77w
Extraction code: rjzp
2.2 Installing a New Virtual Machine
Most installation steps need no elaboration: accept the defaults and click Next. The network configuration, however, deserves a closer look.
2.2.1 Edit the network configuration with the following command
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
First, set BOOTPROTO to static and ONBOOT to yes.
For the IP address, subnet mask, and gateway, use values consistent with your host machine's network.
On your Windows host, open cmd and run ipconfig to find these values.
Also note that the static IP address you choose must fall within your subnet's valid address range.
Set the DNS server to 114.114.114.114.
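Putting the settings above together, the edited file might look like the sketch below. The interface name, IP, and gateway are illustrative examples based on the cluster plan (spark1) and will differ on your machine:

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp0s3 -- example for node spark1
# Values below are illustrative; match them to your own host network.
TYPE=Ethernet
BOOTPROTO=static        # fixed IP instead of DHCP
ONBOOT=yes              # bring the interface up at boot
NAME=enp0s3
DEVICE=enp0s3
IPADDR=192.168.80.151   # from the cluster plan above
NETMASK=255.255.255.0
GATEWAY=192.168.80.2    # example value; take yours from ipconfig on the host
DNS1=114.114.114.114
```

After saving, restart the network service with `systemctl restart network`, then verify the settings took effect with `ip addr` and by pinging the gateway.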