I. Spark overview: Spark can run on Hadoop YARN, Mesos, its own standalone cluster manager, or in the cloud (these serve as the resource management layer), and it can read data from HDFS as well as from sources such as Cassandra and HBase.
II. Environment setup:
The best way to learn Spark is to read the official documentation.
Spark itself is just a general-purpose computation framework with many built-in operators; in MapReduce, by contrast, you have to code things like sort rules into the map phase yourself.
Spark turns a job into an execution DAG (directed acyclic graph); on top of core Spark there are higher-level libraries such as Spark SQL, MLlib, and GraphX (see the short operator/DAG sketch after these notes).
Spark can run on Windows as well as on Unix-like systems.
Spark 2.0 dropped support for Java 7 and Python 2.6; use newer versions.
Spark does not have to depend on Hadoop.
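To make the operator/DAG point concrete, here is a minimal Scala sketch of the kind of code run later in spark-shell (it assumes only the shell's built-in SparkContext, sc): transformations merely describe the DAG, and an action triggers execution.
val nums = sc.parallelize(1 to 10)      // distributed dataset (RDD)
val evens = nums.filter(_ % 2 == 0)     // transformation: recorded in the DAG, not executed yet
val squares = evens.map(n => n * n)     // another transformation, extending the DAG
val total = squares.reduce(_ + _)       // action: triggers execution of the whole DAG; total == 220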
Hostname | IP address | Spark role |
master.hadoop | 192.168.1.2 | master |
slave1.hadoop | 192.168.1.3 | worker |
slave2.hadoop | 192.168.1.4 | worker |
1. Scala must be installed before installing Spark.
Download Scala 2.11.7:
wget https://downloads.lightbend.com/scala/2.11.7/scala-2.11.7.tgz
2. Extract the Scala archive and configure the environment variables.
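A typical extraction (assuming the archive was downloaded to the current directory and is unpacked into /apps, matching the SCALA_HOME below):
tar -zxvf scala-2.11.7.tgz -C /apps/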
vi /etc/profile
export SCALA_HOME=/apps/scala-2.11.7
export PATH=$PATH:$SCALA_HOME/bin
source /etc/profile
3. Download spark-2.2.0-bin-hadoop2.7.tgz:
wget https://d3kbcqa49mib13.cloudfront.net/spark-2.2.0-bin-hadoop2.7.tgz
4. Extract the archive and configure the environment variables.
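A plausible sequence for this step (the tarball unpacks into spark-2.2.0-bin-hadoop2.7, so it is assumed here to be renamed to match the SPARK_HOME below; remember to run source /etc/profile afterwards, as in the Scala step):
tar -zxvf spark-2.2.0-bin-hadoop2.7.tgz -C /apps/
mv /apps/spark-2.2.0-bin-hadoop2.7 /apps/spark-2.2.0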
vi /etc/profile
# add the following lines
export SPARK_HOME=/apps/spark-2.2.0
export PATH=$PATH:$SPARK_HOME/bin
5. Configure Spark.
First, go to the configuration directory:
[root@master apps]# cd spark-2.2.0/conf/
[root@master conf]# ll
total 32
-rw-r--r--. 1 500 500  996 Jul  1 2017 docker.properties.template
-rw-r--r--. 1 500 500 1105 Jul  1 2017 fairscheduler.xml.template
-rw-r--r--. 1 500 500 2025 Jul  1 2017 log4j.properties.template
-rw-r--r--. 1 500 500 7313 Jul  1 2017 metrics.properties.template
-rw-r--r--. 1 500 500  865 Jul  1 2017 slaves.template
-rw-r--r--. 1 500 500 1292 Jul  1 2017 spark-defaults.conf.template
-rwxr-xr-x. 1 500 500 3699 Jul  1 2017 spark-env.sh.template
Then rename spark-env.sh.template and slaves.template to spark-env.sh and slaves respectively:
[root@master conf]# mv spark-env.sh.template spark-env.sh
[root@master conf]# mv slaves.template slaves
Add the following configuration to spark-env.sh:
[root@master conf]# vi spark-env.sh
## add the following settings
#- JAVA_HOME: Java installation directory
export JAVA_HOME=/apps/jdk1.8.0_171
#- SCALA_HOME: Scala installation directory
export SCALA_HOME=/apps/scala-2.11.7
#- HADOOP_HOME: Hadoop installation directory
#export HADOOP_HOME=/apps/hadoop-2.8.0/
#- HADOOP_CONF_DIR: directory holding the Hadoop cluster's configuration files
#export HADOOP_CONF_DIR=/apps/hadoop-2.8.0/etc/hadoop
#- SPARK_MASTER_IP: IP address (or hostname) of the Spark cluster's Master node
export SPARK_MASTER_IP=master.hadoop
#- SPARK_WORKER_MEMORY: maximum total memory each worker node may allocate to executors
export SPARK_WORKER_MEMORY=512m
#- SPARK_WORKER_CORES: number of CPU cores each worker node uses
export SPARK_WORKER_CORES=2
#- SPARK_WORKER_INSTANCES: number of worker instances to start on each machine
export SPARK_WORKER_INSTANCES=1
# Master port
export SPARK_MASTER_PORT=7077
Since we are not integrating with Hadoop yet, the Hadoop-related settings are left commented out. Spark can now be started on a single machine:
[root@master spark-2.2.0]# bin/spark-shell
Output:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/08/20 21:24:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/08/20 21:24:42 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
18/08/20 21:25:13 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
18/08/20 21:25:14 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
18/08/20 21:25:17 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://192.168.1.2:4041
Spark context available as 'sc' (master = local[*], app id = local-1534771483747).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.2.0
/_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.
scala>
scala>
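As a quick sanity check, a minimal expression you can type at the prompt (it sums the numbers 1 to 100 and should print 5050):
scala> sc.parallelize(1 to 100).reduce(_ + _)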
6. Cluster setup
[root@master conf]# vi slaves
Add the following:
# A Spark Worker will be started on each of the machines listed below.
#master.hadoop
slave1.hadoop
slave2.hadoop
In slaves, list the IP addresses or hostnames of the other machines (if you use hostnames, the hostname-to-IP mappings must be set up in /etc/hosts).
Then distribute the Spark installation to the other machines (for example with scp, as sketched below) and configure the Spark environment variables in /etc/profile on each of them.
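A sketch of the distribution step (hostnames and paths follow the table above; each slave is assumed to already have an /apps directory, and the same SPARK_HOME/PATH exports still need to be added to /etc/profile on each slave and sourced):
scp -r /apps/spark-2.2.0 root@slave1.hadoop:/apps/
scp -r /apps/spark-2.2.0 root@slave2.hadoop:/apps/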
7. Start the cluster
[root@master spark-2.2.0]# sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /apps/spark-2.2.0/logs/spark-root-org.apache.spark.deploy.master.Master-1-master.hadoop.out
slave2.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /apps/spark-2.2.0/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave2.hadoop.out
slave1.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /apps/spark-2.2.0/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave1.hadoop.out
master.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /apps/spark-2.2.0/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.hadoop.out
Use the jps command to check that the daemons started (one Master, two Workers).
8. Check the cluster status through the web UI (the master serves a web UI on port 8080).
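For example, open the standalone Master's web UI in a browser (default port 8080, as noted above):
http://master.hadoop:8080/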