
Installing and Configuring Spark 1.6.1 and Scala 2.11.8

Original post, 2016-05-31 21:05:28

First, before installing Spark you need to install and configure JDK, Scala, ssh, and Hadoop. My earlier posts have detailed walkthroughs for setting up each of these; see them if you need to.
Hadoop installation and configuration

One more note: Hadoop, HBase, Hive, and Spark all have to be version-compatible with one another, otherwise you will run into a lot of unnecessary extra steps. Compatibility information is available on each project's official site. This article uses JDK 1.7 + Hadoop 2.6.4 + Scala 2.11.8 + Spark 1.6.1.

Since Spark itself is written in Scala, Scala must be installed before you can use Spark. Without further ado, let's get started.

Installing and configuring Scala

  1. Extract scala-2.11.8.tgz

hadoop@master:/software/spark-1.6.1-bin-hadoop2.6$ cd ~
hadoop@master:~$ cd Downloads/
hadoop@master:~/Downloads$ ls
apache-hive-2.0.0-bin.tar.gz  hadoop-2.6.4.tar.gz  hbase-1.2.1-bin.tar.gz
jdk-7u80-linux-x64.tar.gz     scala-2.11.8.tgz     spark-1.6.1-bin-hadoop2.6.tgz
zookeeper-3.5.0-alpha.tar.gz
hadoop@master:~/Downloads$ cd /software/
hadoop@master:/software$ tar -zxvf ~/Downloads/scala-2.11.8.tgz

2. Configure environment variables

hadoop@master:/software$ sudo gedit /etc/profile

Add the following:
export SCALA_HOME=/software/scala-2.11.8
export PATH=$SCALA_HOME/bin:$PATH

hadoop@master:/software$ source /etc/profile
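To confirm that the PATH change took effect, you can check the version from any directory (a quick sanity check; the line below is what Scala 2.11.8 prints):

hadoop@master:/software$ scala -version
Scala code runner version 2.11.8 -- Copyright 2002-2016, LAMP/EPFL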

3. Launch and verify

hadoop@master:/software$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_80).
Type in expressions for evaluation. Or try :help.

scala> 8*8
res0: Int = 64
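A couple more expressions confirm the REPL is healthy (sample session; the res counters continue from above):

scala> val greeting = "hello, scala"
greeting: String = hello, scala

scala> greeting.split(", ").length
res1: Int = 2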

scala> :quit

Installing Spark

  1. Extract spark-1.6.1-bin-hadoop2.6.tgz

hadoop@master:/software$ tar -zxvf ~/Downloads/spark-1.6.1-bin-hadoop2.6.tgz

2. Configure environment variables

hadoop@master:/software$ sudo gedit /etc/profile

Add the following:
export SPARK_HOME=/software/spark-1.6.1-bin-hadoop2.6
export PATH=$SPARK_HOME/bin:$PATH

hadoop@master:/software$ source /etc/profile
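A quick way to confirm the new PATH is to ask spark-submit (which ships in $SPARK_HOME/bin) for its version; output abbreviated here:

hadoop@master:/software$ spark-submit --version
(prints the Spark welcome banner with "version 1.6.1")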

3. Modify spark-env.sh

hadoop@master:~$ cd /software/spark-1.6.1-bin-hadoop2.6/conf/
hadoop@master:/software/spark-1.6.1-bin-hadoop2.6/conf$ ls
Docker.properties.template   log4j.properties.template    slaves.template
fairscheduler.xml.template   metrics.properties.template  spark-defaults.conf.template
spark-env.sh.template
hadoop@master:/software/spark-1.6.1-bin-hadoop2.6/conf$ cp spark-env.sh.template spark-env.sh
hadoop@master:/software/spark-1.6.1-bin-hadoop2.6/conf$ sudo gedit spark-env.sh

Add the following (the paths match this machine's layout):

export SCALA_HOME=/software/scala-2.11.8
export JAVA_HOME=/software/jdk1.7.0_80

# hostname the standalone master binds to, and memory per worker
export SPARK_MASTER_IP=master
export SPARK_WORKER_MEMORY=512m
# standalone master URL; the master's default RPC port is 7077
export MASTER=spark://master:7077

Modify slaves:
hadoop@master:/software/spark-1.6.1-bin-hadoop2.6/conf$ cp slaves.template slaves
hadoop@master:/software/spark-1.6.1-bin-hadoop2.6/conf$ sudo gedit slaves

Change localhost to master, as shown below.
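The slaves file simply lists one worker hostname per line, so for this single-node setup it ends up containing just (sketch; use your own worker hostnames on a real cluster):

# conf/slaves - one worker host per line
master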

4. Start Spark

hadoop@master:/software/spark-1.6.1-bin-hadoop2.6$ sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to
/software/spark-1.6.1-bin-hadoop2.6/logs/spark-tg-org.apache.spark.deploy.master.Master-1-master.out
master: starting org.apache.spark.deploy.worker.Worker, logging to
/software/spark-1.6.1-bin-hadoop2.6/logs/spark-tg-org.apache.spark.deploy.worker.Worker-1-master.out

Run jps to check the processes: Worker and Master have been added.
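The process list should look roughly like this (PIDs will differ, and the Hadoop daemons also appear if HDFS/YARN are running):

hadoop@master:/software/spark-1.6.1-bin-hadoop2.6$ jps
2917 Master
3063 Worker
3135 Jps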

5. Launch spark-shell

hadoop@master:/software/spark-1.6.1-bin-hadoop2.6$ bin/spark-shell
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties
To adjust logging level use sc.setLogLevel("INFO")
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_80)
Type in expressions to have them evaluated.
Type :help for more information.
Spark context available as sc.
16/05/31 05:58:36 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/05/31 05:58:37 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/05/31 05:58:45 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/05/31 05:58:46 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
16/05/31 05:58:50 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/05/31 05:58:51 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/05/31 05:58:57 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/05/31 05:58:58 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
SQL context available as sqlContext.

scala>

Note that the REPL reports Scala 2.10.5 rather than the 2.11.8 installed above: the prebuilt spark-1.6.1-bin-hadoop2.6 package bundles its own Scala, built against 2.10, so this is expected; a Scala 2.11 build of Spark 1.6.1 has to be compiled separately.
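Since the banner says the Spark context is available as sc, a minimal smoke test is to run a small RDD job directly in the shell (sample session; the ordering of the result array may vary):

scala> val lines = sc.parallelize(Seq("hello spark", "hello scala"))

scala> lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).collect()
res0: Array[(String, Int)] = Array((scala,1), (hello,2), (spark,1))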

You can also check things from a browser: the standalone master's web UI is at http://<master-ip>:8080, and while spark-shell is running, the application UI is at http://<master-ip>:4040 (spark://master:7077 is the RPC address that workers and spark-shell connect to, not a web page).

Copyright notice: This post is the blogger's original work; reproduction without the blogger's permission is prohibited.
