Spark, a Cluster Computing System: Installation

"Spark: Lightning-Fast Cluster Computing" is the tagline on Spark's official logo, which makes you eager to find out whether it really computes that fast.

Spark was developed by UC Berkeley's AMPLab. It is a system similar to Hadoop MapReduce, but much faster at both reading and writing. As everyone knows, the Hadoop framework carries heavy I/O overhead: MapReduce computations on Hadoop are largely disk-based. Spark was designed from the start for workloads that iterate over data in memory many times, a pattern common to many machine learning and data mining algorithms. So Spark supports in-memory cluster computing (how it handles data sets larger than memory is something I still need to investigate), and it also provides Shark, a data warehouse system fully compatible with Hive but claimed to be over 100x faster.

One of the great things about UC Berkeley is how much of its research gets contributed back to the open source community. Spark is likewise an open source project, so let's try installing it.

My test environment is Ubuntu 12.04:

1. Download Spark

The latest version at the time of writing is spark-0.7.2. Download address: http://spark-project.org/downloads/

After downloading, extract it into your home directory and set it aside for later. (This feels a bit like prepping ingredients for a recipe.)
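
From a terminal, the download-and-extract step might look like this (the tarball name below is an assumption; use whatever filename the downloads page actually gives you):

cd ~
tar -xzf spark-0.7.2.tgz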

2. Download Scala

Scala is a programming language that blends object-oriented and functional styles, and Spark requires Scala to run. Spark 0.7.2 requires Scala 2.9.3 (the README in the Spark directory says 2.9.2, presumably just not updated in time). Download address: http://www.scala-lang.org/downloads

After extracting Scala, move it into the /usr/local/share/ directory, then define SCALA_HOME and add it to the system PATH. I did this in ~/.profile, appending at the end:

#set scala path
export SCALA_HOME=/usr/local/share/scala-2.9.3
export PATH=$PATH:$SCALA_HOME/bin
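
For reference, the extraction and a quick sanity check might look like this (the tarball name is an assumption; match it to what you actually downloaded, and expect scala -version to report 2.9.3 if PATH is set correctly):

sudo tar -xzf scala-2.9.3.tgz -C /usr/local/share
source ~/.profile
scala -version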

Go into the Spark directory you extracted in step 1, enter the conf subdirectory, and run:

cp spark-env.sh.template spark-env.sh

Then edit spark-env.sh and point SCALA_HOME at your Scala installation:

export SCALA_HOME=/usr/local/share/scala-2.9.3

3. Install sbt

sbt, the Simple Build Tool, is a build tool for compiling Scala and Java. Download address: http://www.scala-sbt.org/release/docs/Getting-Started/Setup.html

On Ubuntu, download the DEB package; double-click it, enter the administrator password, and the installation completes automatically.
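
If you prefer the terminal, installing the package with dpkg should work too (the filename below is a placeholder for whatever you downloaded):

sudo dpkg -i sbt.deb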

Spark can also be built with Maven; if you're interested, see: http://spark-project.org/docs/latest/building-with-maven.html

4. Build Spark

Go into the extracted Spark directory and run:

sbt package

The first build takes quite a while; in my environment the reported total build time was 1069 s.

During the build you may see some errors, but they are basically all about twitter4j (a Java library for the Twitter API that some of the code apparently pulls in). You can ignore them; they don't affect normal use.

5. Run an example program

Under the Spark directory, the examples directory contains some examples written in Java or Scala. To check that the build went through correctly, let's try running one.

Note: before running the examples, you need to edit the run script in the Spark root directory: on lines 144 and 146, change "spark-examples-" to "spark-examples_". Take a look at the jar file names under examples/target/scala-2.9.3 and you'll see why this is necessary. Without the fix, running an example fails with an error like the one below (a one-line sed fix follows the trace):

yn@ubuntu:~/spark-0.7.2$ ./run spark.examples.SparkPi local[2]
13/06/21 09:57:06 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
13/06/21 09:57:06 INFO spark.SparkEnv: Registering BlockManagerMaster
13/06/21 09:57:06 INFO storage.MemoryStore: MemoryStore started with capacity 323.9 MB.
13/06/21 09:57:06 INFO storage.DiskStore: Created local directory at /tmp/spark-local-20130621095706-c588
13/06/21 09:57:06 INFO network.ConnectionManager: Bound socket to port 33533 with id = ConnectionManagerId(ubuntu,33533)
13/06/21 09:57:06 INFO storage.BlockManagerMaster: Trying to register BlockManager
13/06/21 09:57:06 INFO storage.BlockManagerMasterActor$BlockManagerInfo: Registering block manager ubuntu:33533 with 323.9 MB RAM
13/06/21 09:57:06 INFO storage.BlockManagerMaster: Registered BlockManager
13/06/21 09:57:06 INFO server.Server: jetty-7.6.8.v20121106
13/06/21 09:57:06 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:55751
13/06/21 09:57:06 INFO broadcast.HttpBroadcast: Broadcast server started at http://192.168.1.100:55751
13/06/21 09:57:06 INFO spark.SparkEnv: Registering MapOutputTracker
13/06/21 09:57:06 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-f068aacf-fe52-4db6-bff4-a2aacead476f
13/06/21 09:57:06 INFO server.Server: jetty-7.6.8.v20121106
13/06/21 09:57:06 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:35958
13/06/21 09:57:07 INFO io.IoWorker: IoWorker thread 'spray-io-worker-0' started
13/06/21 09:57:07 INFO server.HttpServer: akka://spark/user/BlockManagerHTTPServer started on /0.0.0.0:48242
13/06/21 09:57:07 INFO storage.BlockManagerUI: Started BlockManager web UI at http://ubuntu:48242
Exception in thread "main" java.lang.NullPointerException
    at java.net.URI$Parser.parse(URI.java:3019)
    at java.net.URI.<init>(URI.java:595)
    at spark.SparkContext.addJar(SparkContext.scala:511)
    at spark.SparkContext$$anonfun$2.apply(SparkContext.scala:102)
    at spark.SparkContext$$anonfun$2.apply(SparkContext.scala:102)
    at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
    at scala.collection.immutable.List.foreach(List.scala:76)
    at spark.SparkContext.<init>(SparkContext.scala:102)
    at spark.examples.SparkPi$.main(SparkPi.scala:14)
    at spark.examples.SparkPi.main(SparkPi.scala)
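
A quick way to apply the fix is a sed one-liner (a sketch; it assumes "spark-examples-" appears only on the lines that need changing, so back up the script first):

sed -i 's/spark-examples-/spark-examples_/g' run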

Now, from the Spark root directory, run:

./run spark.examples.SparkPi local[2]

If you see output like the following, the build basically succeeded:

yn@ubuntu:~/spark-0.7.2$ ./run spark.examples.SparkPi local[2]
13/06/21 10:01:19 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
13/06/21 10:01:19 INFO spark.SparkEnv: Registering BlockManagerMaster
13/06/21 10:01:19 INFO storage.MemoryStore: MemoryStore started with capacity 323.9 MB.
13/06/21 10:01:19 INFO storage.DiskStore: Created local directory at /tmp/spark-local-20130621100119-7b12
13/06/21 10:01:19 INFO network.ConnectionManager: Bound socket to port 60838 with id = ConnectionManagerId(ubuntu,60838)
13/06/21 10:01:19 INFO storage.BlockManagerMaster: Trying to register BlockManager
13/06/21 10:01:19 INFO storage.BlockManagerMasterActor$BlockManagerInfo: Registering block manager ubuntu:60838 with 323.9 MB RAM
13/06/21 10:01:19 INFO storage.BlockManagerMaster: Registered BlockManager
13/06/21 10:01:19 INFO server.Server: jetty-7.6.8.v20121106
13/06/21 10:01:19 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:54966
13/06/21 10:01:19 INFO broadcast.HttpBroadcast: Broadcast server started at http://192.168.1.100:54966
13/06/21 10:01:19 INFO spark.SparkEnv: Registering MapOutputTracker
13/06/21 10:01:19 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-fae55966-53aa-4f14-9dfc-334239949905
13/06/21 10:01:19 INFO server.Server: jetty-7.6.8.v20121106
13/06/21 10:01:19 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:40454
13/06/21 10:01:19 INFO io.IoWorker: IoWorker thread 'spray-io-worker-0' started
13/06/21 10:01:19 INFO server.HttpServer: akka://spark/user/BlockManagerHTTPServer started on /0.0.0.0:53206
13/06/21 10:01:19 INFO storage.BlockManagerUI: Started BlockManager web UI at http://ubuntu:53206
13/06/21 10:01:19 INFO spark.SparkContext: Added JAR /home/yn/spark-0.7.2/examples/target/scala-2.9.3/spark-examples_2.9.3-0.7.2.jar at http://192.168.1.100:40454/jars/spark-examples_2.9.3-0.7.2.jar with timestamp 1371780079811
13/06/21 10:01:20 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:22
13/06/21 10:01:20 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:22) with 2 output partitions (allowLocal=false)
13/06/21 10:01:20 INFO scheduler.DAGScheduler: Final stage: Stage 0 (map at SparkPi.scala:18)
13/06/21 10:01:20 INFO scheduler.DAGScheduler: Parents of final stage: List()
13/06/21 10:01:20 INFO scheduler.DAGScheduler: Missing parents: List()
13/06/21 10:01:20 INFO scheduler.DAGScheduler: Submitting Stage 0 (MappedRDD[1] at map at SparkPi.scala:18), which has no missing parents
13/06/21 10:01:20 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 0 (MappedRDD[1] at map at SparkPi.scala:18)
13/06/21 10:01:20 INFO local.LocalScheduler: Running ResultTask(0, 0)
13/06/21 10:01:20 INFO local.LocalScheduler: Running ResultTask(0, 1)
13/06/21 10:01:20 INFO local.LocalScheduler: Size of task 1 is 1343 bytes
13/06/21 10:01:20 INFO local.LocalScheduler: Size of task 0 is 1343 bytes
13/06/21 10:01:20 INFO local.LocalScheduler: Fetching http://192.168.1.100:40454/jars/spark-examples_2.9.3-0.7.2.jar with timestamp 1371780079811
13/06/21 10:01:20 INFO spark.Utils: Fetching http://192.168.1.100:40454/jars/spark-examples_2.9.3-0.7.2.jar to /tmp/fetchFileTemp4213276189938108169.tmp
13/06/21 10:01:20 INFO local.LocalScheduler: Adding file:/tmp/spark-cdc3e2f9-e2ea-45ee-99bc-c2cd324f9ee1/spark-examples_2.9.3-0.7.2.jar to class loader
13/06/21 10:01:20 INFO local.LocalScheduler: Finished ResultTask(0, 1)
13/06/21 10:01:20 INFO local.LocalScheduler: Finished ResultTask(0, 0)
13/06/21 10:01:20 INFO scheduler.DAGScheduler: Completed ResultTask(0, 1)
13/06/21 10:01:20 INFO scheduler.DAGScheduler: Completed ResultTask(0, 0)
13/06/21 10:01:20 INFO scheduler.DAGScheduler: Stage 0 (map at SparkPi.scala:18) finished in 0.285 s
13/06/21 10:01:20 INFO spark.SparkContext: Job finished: reduce at SparkPi.scala:22, took 0.326348109 s
Pi is roughly 3.1431
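
For a sense of what this example is doing, here is a minimal Monte Carlo estimate of Pi written against the 0.7.x API (my own sketch, not the shipped SparkPi source; the object name MiniPi is made up, and note the old spark.SparkContext package used by 0.7.x):

import spark.SparkContext

object MiniPi {
  def main(args: Array[String]) {
    // "local[2]" = run locally with 2 threads; the second argument is the job name
    val sc = new SparkContext("local[2]", "MiniPi")
    val n = 100000
    // Sample n random points in the square [-1, 1)^2 across 2 partitions
    // and count how many land inside the unit circle.
    val inside = sc.parallelize(1 to n, 2).map { _ =>
      val x = math.random * 2 - 1
      val y = math.random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    // The circle-to-square area ratio is Pi/4, so scale the hit rate by 4.
    println("Pi is roughly " + 4.0 * inside / n)
    sc.stop()
  }
}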

6. A note on the Hadoop version

Spark is compatible with Hadoop HDFS, but to work correctly the Spark build must match the Hadoop version it talks to. Spark 0.7.2 defaults to Hadoop 1.0.4. To build against a different Hadoop version, change HADOOP_VERSION in project/SparkBuild.scala under the root directory (see the sketch at the end of this section), then, from the Spark root directory, re-run:

sbt clean compile

or

sbt clean package

I'm not entirely sure whether these two commands are equivalent; as far as I know, sbt's compile only compiles the classes, while package also builds the jars, which would make package the safer bet. If anyone knows better, please don't hesitate to share.
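
For reference, the line to edit in project/SparkBuild.scala looks roughly like this (the exact surrounding code may differ in your copy, and the alternative value is only an illustration):

// project/SparkBuild.scala -- the Hadoop version Spark is built against
val HADOOP_VERSION = "1.0.4"  // e.g. set this to your cluster's version before rebuilding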
