This post is aimed at newcomers to Spark and to recommender systems (myself included).
Setting up the Spark environment
Spark runs on Java 8+, Python 2.7+/3.4+ and R 3.1+. For the Scala API, Spark 2.3.1 uses Scala 2.11. You will need to use a compatible Scala version (2.11.x).
Using Spark in your own project
Scala and Java users can include Spark in their projects using its Maven coordinates, and Python users can also install Spark from PyPI.
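For the version used in this post, the Maven coordinates would be org.apache.spark:spark-core_2.11:2.3.1. A minimal sketch of the PyPI route, assuming Python and pip are already installed:
# PySpark for this Spark version is published on PyPI (assumes pip is available)
pip install pyspark==2.3.1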
Running Spark locally
Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). It's easy to run locally on one machine; all you need is to have Java installed on your system PATH, or the JAVA_HOME environment variable pointing to a Java installation.
Install JDK 8
Setting JAVA_HOME on macOS: https://www.mkyong.com/java/how-to-set-java_home-environment-variable-on-mac-os-x/
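Once the JDK is installed, a quick sanity check (the second command is macOS-specific):
# Print the installed Java version; it should report 1.8.x
java -version
# macOS only: print the home directory of the installed JDK 8
/usr/libexec/java_home -v 1.8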
Install Spark
Download Spark: https://spark.apache.org/downloads.html
Extract the archive: spark-2.3.1-bin-hadoop2.7.tgz
Environment variables
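A sketch of the extraction step, assuming the tarball was downloaded to the current directory:
# Unpack the Spark binary distribution into ./spark-2.3.1-bin-hadoop2.7
tar -xzf spark-2.3.1-bin-hadoop2.7.tgz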
export JAVA_HOME=$(/usr/libexec/java_home)
export SPARK_HOME={YOUR_SPARK_HOME}
export PATH=$SPARK_HOME/bin:$PATH
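As a concrete example, if the archive was extracted to the home directory (the path below is only an assumption; adjust it to your install):
# Put these lines in ~/.bash_profile, then reload with: source ~/.bash_profile
export SPARK_HOME=~/spark-2.3.1-bin-hadoop2.7
export PATH=$SPARK_HOME/bin:$PATH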
Test with a Spark demo
run-example SparkPi 10
If everything is set up correctly, you should see log output like the following:
2018-09-01 13:30:09 INFO TaskSchedulerImpl:54 - Removed TaskSet 0.0, whose tasks have all completed, from pool
2018-09-01 13:30:09 INFO DAGScheduler:54 - ResultStage 0 (reduce at SparkPi.scala:38) finished in 0.785 s
2018-09-01 13:30:09 INFO DAGScheduler:54 - Job 0 finished: reduce at SparkPi.scala:38, took 0.888303 s
Pi is roughly 3.142015142015142
2018-09-01 13:30:09 INFO AbstractConnector:318 -
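Beyond the bundled example, the interactive shell offers another quick smoke test; both run-example and spark-shell live in $SPARK_HOME/bin, which is now on the PATH:
# Launch the Scala REPL with a preconfigured SparkContext (sc); exit with :quit
spark-shell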