My Spark Learning Notes (1)

I set up a Spark development environment on Windows 10, following this blog post; my notes are below.

After installing and configuring the JDK, verify it as follows:

C:\Users\jinjiankang>java -version
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)

After installing and configuring Scala, verify it as follows:

C:\Users\jinjiankang>scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152).
Type in expressions for evaluation. Or try :help.

scala>
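Any expression typed at the prompt will confirm the installation works, for example:

scala> 1 + 1
res0: Int = 2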

After installing and configuring Spark, verify it as follows:

C:\Users\jinjiankang>spark-shell
(output omitted)
... file:/D:/ProgramFiles/spark-2.2.0-bin-hadoop2.7 ...
19/11/27 13:30:53 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://10.13.24.159:4040
Spark context available as 'sc' (master = local[*], app id = local-1574832647347).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
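Since spark-shell pre-creates sc and spark (as the startup log above says), a quick sanity check can be typed straight at the prompt. A minimal example; the res values are simply what these expressions evaluate to:

scala> sc.parallelize(1 to 100).reduce(_ + _)
res0: Int = 5050

scala> spark.range(5).count()
res1: Long = 5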

Running spark-submit with no arguments prints its usage:

C:\Users\jinjiankang>spark-submit
Usage: spark-submit [options] <app jar | python file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
  --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                              resolving the dependencies provided in --packages to avoid
                              dependency conflicts.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor. File paths of these files
                              in executors can be accessed via SparkFiles.get(fileName).

  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.

  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).

  --proxy-user NAME           User to impersonate when submitting the application.
                              This argument does not work with --principal / --keytab.

  --help, -h                  Show this help message and exit.
  --verbose, -v               Print additional debug output.
  --version,                  Print the version of current Spark.

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM          Cores for driver (Default: 1).

 Spark standalone or Mesos with cluster deploy mode only:
  --supervise                 If given, restarts the driver on failure.
  --kill SUBMISSION_ID        If given, kills the driver specified.
  --status SUBMISSION_ID      If given, requests the status of the driver specified.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 Spark standalone and YARN only:
  --executor-cores NUM        Number of cores per executor. (Default: 1 in YARN mode,
                              or all available cores on the worker in standalone mode)

 YARN-only:
  --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                              (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
                              If dynamic allocation is enabled, the initial number of
                              executors will be at least NUM.
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.
  --principal PRINCIPAL       Principal to be used to login to KDC, while running on
                              secure HDFS.
  --keytab KEYTAB             The full path to the file that contains the keytab for the
                              principal specified above. This keytab will be copied to
                              the node running the Application Master via the Secure
                              Distributed Cache, for renewing the login tickets and the
                              delegation tokens periodically.
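
To make a few of these options concrete, here is a hypothetical invocation (the class and jar names are borrowed from the HelloWorld example later in these notes):

spark-submit --master local[*] --name hello --driver-memory 1G --class com.jjk.Hello ScalaHelloWorld.jar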

The relevant Home environment variables:

C:\Users\jinjiankang>echo %JAVA_HOME%
D:\ProgramFiles\Java\jdk1.8.0_152

C:\Users\jinjiankang>echo %SCALA_HOME%
D:\ProgramFiles\scala

C:\Users\jinjiankang>echo %SPARK_HOME%
D:\ProgramFiles\spark-2.2.0-bin-hadoop2.7

C:\Users\jinjiankang>echo %HADOOP_HOME%
D:\ProgramFiles\hadoop-2.7.7
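
Note: for java, scala, spark-shell, and spark-submit to run from any directory as in the sessions above, the corresponding bin directories (%JAVA_HOME%\bin, %SCALA_HOME%\bin, %SPARK_HOME%\bin, and %HADOOP_HOME%\bin) are assumed to be on PATH as well.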

Hello World (local Spark environment)

  1. Install the Scala plugin in IDEA.
  2. Create a new Scala project and a Scala file.
    File-New-Project…-Scala-IDEA
    Project name: ScalaHelloWorld
    JDK: 1.8
    Scala SDK: scala-sdk-2.11.8

Create a package: com.jjk
Create a file named Hello, of type Object, with the following content:
package com.jjk

object Hello {
  def main(args: Array[String]): Unit = {
    println("Hello World")
  }
}

  3. Export the artifact.
    File-Project Structure-Artifacts (details omitted; see this blog post).
    Build-Build Artifacts, which generates the artifact ScalaHelloWorld.jar.
  4. Run it from the local command line via spark-submit:
    D:\Dump\ScalaDB\ScalaHelloWorld\out\artifacts\ScalaHelloWorld_jar>spark-submit --class com.jjk.Hello ScalaHelloWorld.jar
    Hello World
  5. Following the same steps, create a Java project and run it from the local command line:
    D:\Dump\ScalaDB\JavaHelloWorld\out\artifacts\JavaHelloWorld_jar>spark-submit --class com.jjk.Hello JavaHelloWorld.jar
    Hello World
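
Note that the HelloWorld above never touches the Spark API; spark-submit simply runs its main method. A minimal sketch of an actual Spark job under the same project setup might look like the following (the object name SparkHello is my own placeholder, not part of the steps above):

package com.jjk

import org.apache.spark.sql.SparkSession

object SparkHello {
  def main(args: Array[String]): Unit = {
    // local[*] matches the single-machine setup in these notes;
    // when submitting to a real cluster, pass --master instead.
    val spark = SparkSession.builder()
      .appName("SparkHello")
      .master("local[*]")
      .getOrCreate()
    // Distribute 1..100 across local threads and sum it: prints sum = 5050.
    val sum = spark.sparkContext.parallelize(1 to 100).reduce(_ + _)
    println(s"sum = $sum")
    spark.stop()
  }
}

Packaged into the same artifact, it would be submitted with, for example: spark-submit --class com.jjk.SparkHello ScalaHelloWorld.jar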

Hello World (Spark cluster environment on Linux)

TODO

For readers without a local Linux environment, Tencent Cloud can be used to set up a pseudo-distributed Hadoop environment.
