How to Submit Spark Code Directly from the IDEA IDE to Run on a Remote YARN Cluster

Overview

Pain points of the usual ways to develop and submit Spark code:
(1) Running locally in the IDEA IDE:
    setMaster("local[*]")
    Pain point: the cluster is only simulated inside the local JVM; the local load is high, and some problems cannot be fully debugged.
(2) The spark-submit command:
    setMaster("yarn") or setMaster("local[*]")
    Package the code into a jar -> upload it to the Hadoop host -> invoke spark-submit
    Pain point: tedious; every change means copying the jar to the Hadoop host again and calling spark-submit.

Is there a one-step approach where you simply hit Run in IDEA, the job is submitted to YARN, and the results come back to IDEA's console? After some research, I got it working! (Write-ups of this method are hard to find online.)

Overall Architecture

The IDEA host acts as the Driver and communicates with YARN directly (yarn-client mode: the Driver runs in the IDEA JVM, while the ApplicationMaster and the Executors run in containers on the YARN cluster).

Technical Details

  1. Put the Spark and Hadoop jars on an HDFS filesystem that the local IDEA
    development environment can access. The jars come from the following locations
    (not necessarily all of them are needed):
     $SPARK_HOME/jars
     $HADOOP_HOME/share/hadoop
    Upload them to HDFS under hdfs:///usr/sparkjars/ (example commands below)
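
    For example, on a cluster node, something like the following works (a sketch; the
    paths assume the defaults above, and the Hadoop jars can be uploaded the same way
    if needed):

    hdfs dfs -mkdir -p /usr/sparkjars
    hdfs dfs -put $SPARK_HOME/jars/*.jar /usr/sparkjars/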

  2. The code must include the following key parameters:
    .setMaster("yarn") // run on the YARN cluster
    .set("yarn.resourcemanager.hostname", "hadoop02") // hostname of the ResourceManager
    .set("spark.driver.host", "10.194.208.109") // the IDEA IDE development host acts as the Driver
    .set("spark.yarn.jars", "hdfs:///usr/sparkjars/*") // jars needed to run Spark on YARN; must be at a location YARN can access (HDFS)
    // path of the application jar after packaging; other dependency jars can be added here, comma-separated
    .setJars(List("target/Spark-demo-1.0-SNAPSHOT.jar")) // relative path of the packaged application jar

  3. Copy the following three configuration files from $HADOOP_HOME/etc/hadoop on the
     cluster into the resources directory of the IDEA Maven project (a quick way to
     verify they are picked up is sketched after the list):

  • core-site.xml
  • hdfs-site.xml
  • yarn-site.xml

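A quick way to verify that the three files are actually picked up from the classpath (a minimal sketch; CheckHadoopConf is a hypothetical helper, not part of the project):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object CheckHadoopConf {
  def main(args: Array[String]): Unit = {
    // new Configuration() loads core-site.xml (and friends) from the classpath,
    // so fs.defaultFS should point at the remote cluster, not the local filesystem
    val conf = new Configuration()
    println("fs.defaultFS = " + conf.get("fs.defaultFS"))
    val fs = FileSystem.get(conf)
    println("/usr/sparkjars exists: " + fs.exists(new Path("/usr/sparkjars")))
  }
}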
  4. Besides the jars used for development, pom.xml also needs the following dependency
     (judging from the output, my guess is that it spins up a local web service used for
     uploading files):

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-server-web-proxy</artifactId>
  <version>${hadoop.version}</version>
</dependency>
  5. Complete Scala code:
package org.example

import org.apache.spark.sql.SparkSession
import org.apache.spark.SparkConf

object SimpleApp_on_yarn {
  def main(args: Array[String]) {
    val logFile = "hdfs://10.194.216.100:9000/test/README.md"

    System.setProperty("HADOOP_USER_NAME", "root") // work around HDFS directory permission issues
    val spark_on_yarn_Conf = new SparkConf()
      .setAppName("WordCount")
      // .set("mapreduce.app-submission.cross-platform", "true")
      // 设置yarn-client模式提交
      .setMaster("yarn")
      // 设置resourcemanager的ip
      .set("yarn.resourcemanager.hostname", "hadoop02")

      // number of executors
      //.set("spark.executor.instances", "1")
      // .set("spark.executor.cores", "4")
      // executor / driver memory
      //.set("spark.executor.memory", "600m")
      //.set("spark.driver.memory", "600m")


      // the driver's IP address; must be the local machine's real IP, not localhost
      .set("spark.driver.host", "10.194.208.109") // the development host acts as the Driver
      .set("spark.driver.port", "5555")
      // must be a location YARN can access - HDFS
      .set("spark.yarn.jars", "hdfs:///usr/sparkjars/*")
      // path of the application jar after packaging; other dependency jars can be added here, comma-separated
      .setJars(List("target/Spark-demo-1.0-SNAPSHOT.jar"))


    val spark = SparkSession.builder.config(spark_on_yarn_Conf).getOrCreate()

    val csinfo = spark.sparkContext.getConf.getAll
    println("*" * 50)
    csinfo.foreach(println)
    println("*" * 50)

    val logData = spark.read.textFile(logFile)

    val rows = logData.count()
    println()
    println("******************** Results ********************")
    println(s"****  Total number of lines in the file: $rows  ****")
    println("*" * 50)
    println()
    spark.close()

  }
}
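
Note: .setJars() points at target/Spark-demo-1.0-SNAPSHOT.jar, and that jar is what gets shipped to the cluster, so it must be re-packaged before each run from IDEA, or the executors will receive stale code. For example:

mvn -DskipTests package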

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>org.example</groupId>
  <artifactId>Spark-demo</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>Spark-demo</name>


  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <scala.version>2.12</scala.version>
    <spark.version>3.3.4</spark.version>
    <hadoop.version>2.10.2</hadoop.version>
  </properties>

  <repositories>
    <repository>
      <id>central_maven</id>
      <name>central maven</name>
      <url>https://repo1.maven.org/maven2</url>
    </repository>
    <repository>
      <id>nexus-aliyun</id>
      <name>Nexus aliyun</name>
      <url>https://maven.aliyun.com/nexus/content/groups/public</url>
    </repository>
    <repository>
      <id>centralMaven</id>
      <name>central maven</name>
      <url>https://mvnrepository.com/</url>
    </repository>
    <repository>
      <id>cloudera</id>
      <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
    <repository>
      <id>apache.snapshots</id>
      <name>Apache Development Snapshot Repository</name>
      <url>https://repository.apache.org/content/repositories/snapshots/</url>
      <releases>
        <enabled>false</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
  </repositories>

  <dependencies>


    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.12</artifactId>
      <version>${spark.version}</version>
      <!-- <scope>provided</scope> -->
    </dependency>

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-yarn-server-web-proxy</artifactId>
      <version>${hadoop.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs-client</artifactId>
      <version>${hadoop.version}</version>
    </dependency>




    <!-- needed when the Spark job requires external cluster support -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-yarn_2.12</artifactId>
      <version>${spark.version}</version>
      <!-- <scope>provided</scope> -->
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>



</project>

The full run output:
D:\programs\jdk1.8.0_202\bin\java.exe “-javaagent:D:\Program Files\IDEACEdition20230301\lib\idea_rt.jar=58112:D:\Program Files\IDEACEdition20230301\bin” -Dfile.encoding=UTF-8 -classpath D:\programs\jdk1.8.0_202\jre\lib\charsets.jar;D:\programs\jdk1.8.0_202\jre\lib\deploy.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\access-bridge-64.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\cldrdata.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\dnsns.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\jaccess.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\jfxrt.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\localedata.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\nashorn.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\sunec.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\sunjce_provider.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\sunmscapi.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\sunpkcs11.jar;D:\programs\jdk1.8.0_202\jre\lib\ext\zipfs.jar;D:\programs\jdk1.8.0_202\jre\lib\javaws.jar;D:\programs\jdk1.8.0_202\jre\lib\jce.jar;D:\programs\jdk1.8.0_202\jre\lib\jfr.jar;D:\programs\jdk1.8.0_202\jre\lib\jfxswt.jar;D:\programs\jdk1.8.0_202\jre\lib\jsse.jar;D:\programs\jdk1.8.0_202\jre\lib\management-agent.jar;D:\programs\jdk1.8.0_202\jre\lib\plugin.jar;D:\programs\jdk1.8.0_202\jre\lib\resources.jar;D:\programs\jdk1.8.0_202\jre\lib\rt.jar;E:\workspace\idea_projects\Spark-demo\Spark-demo\target\classes;E:\workspace\maven_repository\org\apache\spark\spark-sql_2.12\3.3.4\spark-sql_2.12-3.3.4.jar;E:\workspace\maven_repository\org\rocksdb\rocksdbjni\6.20.3\rocksdbjni-6.20.3.jar;E:\workspace\maven_repository\com\univocity\univocity-parsers\2.9.1\univocity-parsers-2.9.1.jar;E:\workspace\maven_repository\org\apache\spark\spark-sketch_2.12\3.3.4\spark-sketch_2.12-3.3.4.jar;E:\workspace\maven_repository\org\apache\spark\spark-core_2.12\3.3.4\spark-core_2.12-3.3.4.jar;E:\workspace\maven_repository\org\apache\avro\avro\1.11.0\avro-1.11.0.jar;E:\workspace\maven_repository\org\apache\avro\avro-mapred\1.11.0\avro-mapred-1.11.0.jar;E:\workspace\maven_repository\org\apache\avro\avro-ipc\1.11.0\avro-ipc-1.11.0.jar;E:\workspace\maven_repository\org\tukaani\xz\1.9\xz-1.9.jar;E:\workspace\maven_repository\com\twitter\chill_2.12\0.10.0\chill_2.12-0.10.0.jar;E:\workspace\maven_repository\com\esotericsoftware\kryo-shaded\4.0.2\kryo-shaded-4.0.2.jar;E:\workspace\maven_repository\com\esotericsoftware\minlog\1.3.0\minlog-1.3.0.jar;E:\workspace\maven_repository\org\objenesis\objenesis\2.5.1\objenesis-2.5.1.jar;E:\workspace\maven_repository\com\twitter\chill-java\0.10.0\chill-java-0.10.0.jar;E:\workspace\maven_repository\org\apache\spark\spark-launcher_2.12\3.3.4\spark-launcher_2.12-3.3.4.jar;E:\workspace\maven_repository\org\apache\spark\spark-kvstore_2.12\3.3.4\spark-kvstore_2.12-3.3.4.jar;E:\workspace\maven_repository\org\apache\spark\spark-network-common_2.12\3.3.4\spark-network-common_2.12-3.3.4.jar;E:\workspace\maven_repository\com\google\crypto\tink\tink\1.6.1\tink-1.6.1.jar;E:\workspace\maven_repository\com\google\code\gson\gson\2.8.6\gson-2.8.6.jar;E:\workspace\maven_repository\org\apache\spark\spark-network-shuffle_2.12\3.3.4\spark-network-shuffle_2.12-3.3.4.jar;E:\workspace\maven_repository\org\apache\spark\spark-unsafe_2.12\3.3.4\spark-unsafe_2.12-3.3.4.jar;E:\workspace\maven_repository\javax\activation\activation\1.1.1\activation-1.1.1.jar;E:\workspace\maven_repository\org\apache\curator\curator-recipes\2.13.0\curator-recipes-2.13.0.jar;E:\workspace\maven_repository\org\apache\curator\curator-framework\2.13.0\curator-framework-2.13.0.jar;E:\workspace\maven_repository\org\apache\curator\c
urator-client\2.13.0\curator-client-2.13.0.jar;E:\workspace\maven_repository\org\apache\zookeeper\zookeeper\3.6.2\zookeeper-3.6.2.jar;E:\workspace\maven_repository\org\apache\zookeeper\zookeeper-jute\3.6.2\zookeeper-jute-3.6.2.jar;E:\workspace\maven_repository\org\apache\yetus\audience-annotations\0.5.0\audience-annotations-0.5.0.jar;E:\workspace\maven_repository\jakarta\servlet\jakarta.servlet-api\4.0.3\jakarta.servlet-api-4.0.3.jar;E:\workspace\maven_repository\commons-codec\commons-codec\1.15\commons-codec-1.15.jar;E:\workspace\maven_repository\org\apache\commons\commons-lang3\3.12.0\commons-lang3-3.12.0.jar;E:\workspace\maven_repository\org\apache\commons\commons-math3\3.6.1\commons-math3-3.6.1.jar;E:\workspace\maven_repository\org\apache\commons\commons-text\1.10.0\commons-text-1.10.0.jar;E:\workspace\maven_repository\commons-io\commons-io\2.11.0\commons-io-2.11.0.jar;E:\workspace\maven_repository\commons-collections\commons-collections\3.2.2\commons-collections-3.2.2.jar;E:\workspace\maven_repository\org\apache\commons\commons-collections4\4.4\commons-collections4-4.4.jar;E:\workspace\maven_repository\com\google\code\findbugs\jsr305\3.0.0\jsr305-3.0.0.jar;E:\workspace\maven_repository\org\slf4j\slf4j-api\1.7.32\slf4j-api-1.7.32.jar;E:\workspace\maven_repository\org\slf4j\jul-to-slf4j\1.7.32\jul-to-slf4j-1.7.32.jar;E:\workspace\maven_repository\org\slf4j\jcl-over-slf4j\1.7.32\jcl-over-slf4j-1.7.32.jar;E:\workspace\maven_repository\org\apache\logging\log4j\log4j-slf4j-impl\2.17.2\log4j-slf4j-impl-2.17.2.jar;E:\workspace\maven_repository\org\apache\logging\log4j\log4j-api\2.17.2\log4j-api-2.17.2.jar;E:\workspace\maven_repository\org\apache\logging\log4j\log4j-core\2.17.2\log4j-core-2.17.2.jar;E:\workspace\maven_repository\org\apache\logging\log4j\log4j-1.2-api\2.17.2\log4j-1.2-api-2.17.2.jar;E:\workspace\maven_repository\com\ning\compress-lzf\1.1\compress-lzf-1.1.jar;E:\workspace\maven_repository\org\xerial\snappy\snappy-java\1.1.8.4\snappy-java-1.1.8.4.jar;E:\workspace\maven_repository\org\lz4\lz4-java\1.8.0\lz4-java-1.8.0.jar;E:\workspace\maven_repository\com\github\luben\zstd-jni\1.5.2-1\zstd-jni-1.5.2-1.jar;E:\workspace\maven_repository\org\roaringbitmap\RoaringBitmap\0.9.25\RoaringBitmap-0.9.25.jar;E:\workspace\maven_repository\org\roaringbitmap\shims\0.9.25\shims-0.9.25.jar;E:\workspace\maven_repository\org\scala-lang\modules\scala-xml_2.12\1.2.0\scala-xml_2.12-1.2.0.jar;E:\workspace\maven_repository\org\scala-lang\scala-library\2.12.15\scala-library-2.12.15.jar;E:\workspace\maven_repository\org\scala-lang\scala-reflect\2.12.15\scala-reflect-2.12.15.jar;E:\workspace\maven_repository\org\json4s\json4s-jackson_2.12\3.7.0-M11\json4s-jackson_2.12-3.7.0-M11.jar;E:\workspace\maven_repository\org\json4s\json4s-core_2.12\3.7.0-M11\json4s-core_2.12-3.7.0-M11.jar;E:\workspace\maven_repository\org\json4s\json4s-ast_2.12\3.7.0-M11\json4s-ast_2.12-3.7.0-M11.jar;E:\workspace\maven_repository\org\json4s\json4s-scalap_2.12\3.7.0-M11\json4s-scalap_2.12-3.7.0-M11.jar;E:\workspace\maven_repository\org\glassfish\jersey\core\jersey-client\2.36\jersey-client-2.36.jar;E:\workspace\maven_repository\jakarta\ws\rs\jakarta.ws.rs-api\2.1.6\jakarta.ws.rs-api-2.1.6.jar;E:\workspace\maven_repository\org\glassfish\hk2\external\jakarta.inject\2.6.1\jakarta.inject-2.6.1.jar;E:\workspace\maven_repository\org\glassfish\jersey\core\jersey-common\2.36\jersey-common-2.36.jar;E:\workspace\maven_repository\jakarta\annotation\jakarta.annotation-api\1.3.5\jakarta.annotation-api-1.3.5.jar;E:\workspace\maven_repository\org\gl
assfish\hk2\osgi-resource-locator\1.0.3\osgi-resource-locator-1.0.3.jar;E:\workspace\maven_repository\org\glassfish\jersey\core\jersey-server\2.36\jersey-server-2.36.jar;E:\workspace\maven_repository\jakarta\validation\jakarta.validation-api\2.0.2\jakarta.validation-api-2.0.2.jar;E:\workspace\maven_repository\org\glassfish\jersey\containers\jersey-container-servlet\2.36\jersey-container-servlet-2.36.jar;E:\workspace\maven_repository\org\glassfish\jersey\containers\jersey-container-servlet-core\2.36\jersey-container-servlet-core-2.36.jar;E:\workspace\maven_repository\org\glassfish\jersey\inject\jersey-hk2\2.36\jersey-hk2-2.36.jar;E:\workspace\maven_repository\org\glassfish\hk2\hk2-locator\2.6.1\hk2-locator-2.6.1.jar;E:\workspace\maven_repository\org\glassfish\hk2\external\aopalliance-repackaged\2.6.1\aopalliance-repackaged-2.6.1.jar;E:\workspace\maven_repository\org\glassfish\hk2\hk2-api\2.6.1\hk2-api-2.6.1.jar;E:\workspace\maven_repository\org\glassfish\hk2\hk2-utils\2.6.1\hk2-utils-2.6.1.jar;E:\workspace\maven_repository\org\javassist\javassist\3.25.0-GA\javassist-3.25.0-GA.jar;E:\workspace\maven_repository\io\netty\netty-all\4.1.74.Final\netty-all-4.1.74.Final.jar;E:\workspace\maven_repository\io\netty\netty-buffer\4.1.74.Final\netty-buffer-4.1.74.Final.jar;E:\workspace\maven_repository\io\netty\netty-codec\4.1.74.Final\netty-codec-4.1.74.Final.jar;E:\workspace\maven_repository\io\netty\netty-common\4.1.74.Final\netty-common-4.1.74.Final.jar;E:\workspace\maven_repository\io\netty\netty-handler\4.1.74.Final\netty-handler-4.1.74.Final.jar;E:\workspace\maven_repository\io\netty\netty-tcnative-classes\2.0.48.Final\netty-tcnative-classes-2.0.48.Final.jar;E:\workspace\maven_repository\io\netty\netty-resolver\4.1.74.Final\netty-resolver-4.1.74.Final.jar;E:\workspace\maven_repository\io\netty\netty-transport\4.1.74.Final\netty-transport-4.1.74.Final.jar;E:\workspace\maven_repository\io\netty\netty-transport-classes-epoll\4.1.74.Final\netty-transport-classes-epoll-4.1.74.Final.jar;E:\workspace\maven_repository\io\netty\netty-transport-native-unix-common\4.1.74.Final\netty-transport-native-unix-common-4.1.74.Final.jar;E:\workspace\maven_repository\io\netty\netty-transport-classes-kqueue\4.1.74.Final\netty-transport-classes-kqueue-4.1.74.Final.jar;E:\workspace\maven_repository\io\netty\netty-transport-native-epoll\4.1.74.Final\netty-transport-native-epoll-4.1.74.Final-linux-x86_64.jar;E:\workspace\maven_repository\io\netty\netty-transport-native-epoll\4.1.74.Final\netty-transport-native-epoll-4.1.74.Final-linux-aarch_64.jar;E:\workspace\maven_repository\io\netty\netty-transport-native-kqueue\4.1.74.Final\netty-transport-native-kqueue-4.1.74.Final-osx-x86_64.jar;E:\workspace\maven_repository\io\netty\netty-transport-native-kqueue\4.1.74.Final\netty-transport-native-kqueue-4.1.74.Final-osx-aarch_64.jar;E:\workspace\maven_repository\com\clearspring\analytics\stream\2.9.6\stream-2.9.6.jar;E:\workspace\maven_repository\io\dropwizard\metrics\metrics-core\4.2.7\metrics-core-4.2.7.jar;E:\workspace\maven_repository\io\dropwizard\metrics\metrics-jvm\4.2.7\metrics-jvm-4.2.7.jar;E:\workspace\maven_repository\io\dropwizard\metrics\metrics-json\4.2.7\metrics-json-4.2.7.jar;E:\workspace\maven_repository\io\dropwizard\metrics\metrics-graphite\4.2.7\metrics-graphite-4.2.7.jar;E:\workspace\maven_repository\io\dropwizard\metrics\metrics-jmx\4.2.7\metrics-jmx-4.2.7.jar;E:\workspace\maven_repository\com\fasterxml\jackson\module\jackson-module-scala_2.12\2.13.4\jackson-module-scala_2.12-2.13.4.jar;E:\workspace\maven_repo
sitory\com\thoughtworks\paranamer\paranamer\2.8\paranamer-2.8.jar;E:\workspace\maven_repository\org\apache\ivy\ivy\2.5.1\ivy-2.5.1.jar;E:\workspace\maven_repository\oro\oro\2.0.8\oro-2.0.8.jar;E:\workspace\maven_repository\net\razorvine\pickle\1.2\pickle-1.2.jar;E:\workspace\maven_repository\net\sf\py4j\py4j\0.10.9.5\py4j-0.10.9.5.jar;E:\workspace\maven_repository\org\apache\commons\commons-crypto\1.1.0\commons-crypto-1.1.0.jar;E:\workspace\maven_repository\org\apache\spark\spark-catalyst_2.12\3.3.4\spark-catalyst_2.12-3.3.4.jar;E:\workspace\maven_repository\org\scala-lang\modules\scala-parser-combinators_2.12\1.1.2\scala-parser-combinators_2.12-1.1.2.jar;E:\workspace\maven_repository\org\codehaus\janino\janino\3.0.16\janino-3.0.16.jar;E:\workspace\maven_repository\org\codehaus\janino\commons-compiler\3.0.16\commons-compiler-3.0.16.jar;E:\workspace\maven_repository\org\antlr\antlr4-runtime\4.8\antlr4-runtime-4.8.jar;E:\workspace\maven_repository\org\apache\arrow\arrow-vector\7.0.0\arrow-vector-7.0.0.jar;E:\workspace\maven_repository\org\apache\arrow\arrow-format\7.0.0\arrow-format-7.0.0.jar;E:\workspace\maven_repository\org\apache\arrow\arrow-memory-core\7.0.0\arrow-memory-core-7.0.0.jar;E:\workspace\maven_repository\com\google\flatbuffers\flatbuffers-java\1.12.0\flatbuffers-java-1.12.0.jar;E:\workspace\maven_repository\org\apache\arrow\arrow-memory-netty\7.0.0\arrow-memory-netty-7.0.0.jar;E:\workspace\maven_repository\org\apache\spark\spark-tags_2.12\3.3.4\spark-tags_2.12-3.3.4.jar;E:\workspace\maven_repository\org\apache\orc\orc-core\1.7.10\orc-core-1.7.10.jar;E:\workspace\maven_repository\org\apache\orc\orc-shims\1.7.10\orc-shims-1.7.10.jar;E:\workspace\maven_repository\com\google\protobuf\protobuf-java\2.5.0\protobuf-java-2.5.0.jar;E:\workspace\maven_repository\io\airlift\aircompressor\0.21\aircompressor-0.21.jar;E:\workspace\maven_repository\org\jetbrains\annotations\17.0.0\annotations-17.0.0.jar;E:\workspace\maven_repository\org\threeten\threeten-extra\1.5.0\threeten-extra-1.5.0.jar;E:\workspace\maven_repository\org\apache\orc\orc-mapreduce\1.7.10\orc-mapreduce-1.7.10.jar;E:\workspace\maven_repository\org\apache\hive\hive-storage-api\2.7.2\hive-storage-api-2.7.2.jar;E:\workspace\maven_repository\org\apache\parquet\parquet-column\1.12.2\parquet-column-1.12.2.jar;E:\workspace\maven_repository\org\apache\parquet\parquet-common\1.12.2\parquet-common-1.12.2.jar;E:\workspace\maven_repository\org\apache\parquet\parquet-encoding\1.12.2\parquet-encoding-1.12.2.jar;E:\workspace\maven_repository\org\apache\parquet\parquet-hadoop\1.12.2\parquet-hadoop-1.12.2.jar;E:\workspace\maven_repository\org\apache\parquet\parquet-format-structures\1.12.2\parquet-format-structures-1.12.2.jar;E:\workspace\maven_repository\org\apache\parquet\parquet-jackson\1.12.2\parquet-jackson-1.12.2.jar;E:\workspace\maven_repository\com\fasterxml\jackson\core\jackson-databind\2.13.4.2\jackson-databind-2.13.4.2.jar;E:\workspace\maven_repository\com\fasterxml\jackson\core\jackson-annotations\2.13.4\jackson-annotations-2.13.4.jar;E:\workspace\maven_repository\com\fasterxml\jackson\core\jackson-core\2.13.4\jackson-core-2.13.4.jar;E:\workspace\maven_repository\org\apache\xbean\xbean-asm9-shaded\4.20\xbean-asm9-shaded-4.20.jar;E:\workspace\maven_repository\org\spark-project\spark\unused\1.0.0\unused-1.0.0.jar;E:\workspace\maven_repository\org\apache\hadoop\hadoop-yarn-server-web-proxy\2.10.2\hadoop-yarn-server-web-proxy-2.10.2.jar;E:\workspace\maven_repository\javax\servlet\servlet-api\2.5\servlet-api-2.5.jar;E:\workspace\maven_r
epository\org\apache\hadoop\hadoop-yarn-server-common\2.10.2\hadoop-yarn-server-common-2.10.2.jar;E:\workspace\maven_repository\org\apache\hadoop\hadoop-yarn-registry\2.10.2\hadoop-yarn-registry-2.10.2.jar;E:\workspace\maven_repository\org\apache\hadoop\hadoop-common\2.10.2\hadoop-common-2.10.2.jar;E:\workspace\maven_repository\xmlenc\xmlenc\0.52\xmlenc-0.52.jar;E:\workspace\maven_repository\org\apache\httpcomponents\httpclient\4.5.13\httpclient-4.5.13.jar;E:\workspace\maven_repository\org\apache\httpcomponents\httpcore\4.4.13\httpcore-4.4.13.jar;E:\workspace\maven_repository\commons-net\commons-net\3.1\commons-net-3.1.jar;E:\workspace\maven_repository\org\mortbay\jetty\jetty-sslengine\6.1.26\jetty-sslengine-6.1.26.jar;E:\workspace\maven_repository\javax\servlet\jsp\jsp-api\2.1\jsp-api-2.1.jar;E:\workspace\maven_repository\net\java\dev\jets3t\jets3t\0.9.0\jets3t-0.9.0.jar;E:\workspace\maven_repository\com\jamesmurty\utils\java-xmlbuilder\0.4\java-xmlbuilder-0.4.jar;E:\workspace\maven_repository\commons-configuration\commons-configuration\1.6\commons-configuration-1.6.jar;E:\workspace\maven_repository\commons-digester\commons-digester\1.8\commons-digester-1.8.jar;E:\workspace\maven_repository\commons-beanutils\commons-beanutils\1.9.4\commons-beanutils-1.9.4.jar;E:\workspace\maven_repository\org\slf4j\slf4j-reload4j\1.7.36\slf4j-reload4j-1.7.36.jar;E:\workspace\maven_repository\org\apache\hadoop\hadoop-auth\2.10.2\hadoop-auth-2.10.2.jar;E:\workspace\maven_repository\com\nimbusds\nimbus-jose-jwt\7.9\nimbus-jose-jwt-7.9.jar;E:\workspace\maven_repository\com\github\stephenc\jcip\jcip-annotations\1.0-1\jcip-annotations-1.0-1.jar;E:\workspace\maven_repository\net\minidev\json-smart\2.3\json-smart-2.3.jar;E:\workspace\maven_repository\net\minidev\accessors-smart\1.2\accessors-smart-1.2.jar;E:\workspace\maven_repository\org\ow2\asm\asm\5.0.4\asm-5.0.4.jar;E:\workspace\maven_repository\org\apache\directory\server\apacheds-kerberos-codec\2.0.0-M15\apacheds-kerberos-codec-2.0.0-M15.jar;E:\workspace\maven_repository\org\apache\directory\server\apacheds-i18n\2.0.0-M15\apacheds-i18n-2.0.0-M15.jar;E:\workspace\maven_repository\org\apache\directory\api\api-asn1-api\1.0.0-M20\api-asn1-api-1.0.0-M20.jar;E:\workspace\maven_repository\org\apache\directory\api\api-util\1.0.0-M20\api-util-1.0.0-M20.jar;E:\workspace\maven_repository\com\jcraft\jsch\0.1.55\jsch-0.1.55.jar;E:\workspace\maven_repository\org\apache\htrace\htrace-core4\4.1.0-incubating\htrace-core4-4.1.0-incubating.jar;E:\workspace\maven_repository\org\codehaus\woodstox\stax2-api\4.2.1\stax2-api-4.2.1.jar;E:\workspace\maven_repository\com\fasterxml\woodstox\woodstox-core\5.3.0\woodstox-core-5.3.0.jar;E:\workspace\maven_repository\org\apache\hadoop\hadoop-annotations\2.10.2\hadoop-annotations-2.10.2.jar;D:\programs\jdk1.8.0_202\lib\tools.jar;E:\workspace\maven_repository\org\fusesource\leveldbjni\leveldbjni-all\1.8\leveldbjni-all-1.8.jar;E:\workspace\maven_repository\org\apache\geronimo\specs\geronimo-jcache_1.0_spec\1.0-alpha-1\geronimo-jcache_1.0_spec-1.0-alpha-1.jar;E:\workspace\maven_repository\org\ehcache\ehcache\3.3.1\ehcache-3.3.1.jar;E:\workspace\maven_repository\com\zaxxer\HikariCP-java7\2.4.12\HikariCP-java7-2.4.12.jar;E:\workspace\maven_repository\com\microsoft\sqlserver\mssql-jdbc\6.2.1.jre7\mssql-jdbc-6.2.1.jre7.jar;E:\workspace\maven_repository\org\apache\hadoop\hadoop-yarn-common\2.10.2\hadoop-yarn-common-2.10.2.jar;E:\workspace\maven_repository\javax\xml\bind\jaxb-api\2.2.2\jaxb-api-2.2.2.jar;E:\workspace\maven_repository\javax\xml\strea
m\stax-api\1.0-2\stax-api-1.0-2.jar;E:\workspace\maven_repository\org\apache\commons\commons-compress\1.21\commons-compress-1.21.jar;E:\workspace\maven_repository\commons-lang\commons-lang\2.6\commons-lang-2.6.jar;E:\workspace\maven_repository\org\mortbay\jetty\jetty-util\6.1.26\jetty-util-6.1.26.jar;E:\workspace\maven_repository\com\sun\jersey\jersey-core\1.9\jersey-core-1.9.jar;E:\workspace\maven_repository\com\sun\jersey\jersey-client\1.9\jersey-client-1.9.jar;E:\workspace\maven_repository\org\codehaus\jackson\jackson-core-asl\1.9.13\jackson-core-asl-1.9.13.jar;E:\workspace\maven_repository\org\codehaus\jackson\jackson-mapper-asl\1.9.13\jackson-mapper-asl-1.9.13.jar;E:\workspace\maven_repository\org\codehaus\jackson\jackson-jaxrs\1.9.13\jackson-jaxrs-1.9.13.jar;E:\workspace\maven_repository\org\codehaus\jackson\jackson-xc\1.9.13\jackson-xc-1.9.13.jar;E:\workspace\maven_repository\commons-cli\commons-cli\1.2\commons-cli-1.2.jar;E:\workspace\maven_repository\com\google\inject\extensions\guice-servlet\3.0\guice-servlet-3.0.jar;E:\workspace\maven_repository\com\google\inject\guice\3.0\guice-3.0.jar;E:\workspace\maven_repository\javax\inject\javax.inject\1\javax.inject-1.jar;E:\workspace\maven_repository\aopalliance\aopalliance\1.0\aopalliance-1.0.jar;E:\workspace\maven_repository\com\sun\jersey\jersey-server\1.9\jersey-server-1.9.jar;E:\workspace\maven_repository\asm\asm\3.1\asm-3.1.jar;E:\workspace\maven_repository\com\sun\jersey\jersey-json\1.9\jersey-json-1.9.jar;E:\workspace\maven_repository\org\codehaus\jettison\jettison\1.1\jettison-1.1.jar;E:\workspace\maven_repository\com\sun\xml\bind\jaxb-impl\2.2.3-1\jaxb-impl-2.2.3-1.jar;E:\workspace\maven_repository\com\sun\jersey\contribs\jersey-guice\1.9\jersey-guice-1.9.jar;E:\workspace\maven_repository\ch\qos\reload4j\reload4j\1.2.18.3\reload4j-1.2.18.3.jar;E:\workspace\maven_repository\org\apache\hadoop\hadoop-yarn-api\2.10.2\hadoop-yarn-api-2.10.2.jar;E:\workspace\maven_repository\com\google\guava\guava\11.0.2\guava-11.0.2.jar;E:\workspace\maven_repository\commons-logging\commons-logging\1.1.3\commons-logging-1.1.3.jar;E:\workspace\maven_repository\org\mortbay\jetty\jetty\6.1.26\jetty-6.1.26.jar;E:\workspace\maven_repository\org\apache\hadoop\hadoop-hdfs-client\2.10.2\hadoop-hdfs-client-2.10.2.jar;E:\workspace\maven_repository\com\squareup\okhttp\okhttp\2.7.5\okhttp-2.7.5.jar;E:\workspace\maven_repository\com\squareup\okio\okio\1.6.0\okio-1.6.0.jar;E:\workspace\maven_repository\org\apache\spark\spark-yarn_2.12\3.3.4\spark-yarn_2.12-3.3.4.jar;E:\workspace\maven_repository\org\apache\hadoop\hadoop-client-api\3.3.2\hadoop-client-api-3.3.2.jar;E:\workspace\maven_repository\org\apache\hadoop\hadoop-client-runtime\3.3.2\hadoop-client-runtime-3.3.2.jar;D:\programs\scala21215\lib\scala-library.jar;D:\programs\scala21215\lib\scala-parser-combinators_2.12-1.0.7.jar;D:\programs\scala21215\lib\scala-reflect.jar;D:\programs\scala21215\lib\scala-swing_2.12-2.0.3.jar;D:\programs\scala21215\lib\scala-xml_2.12-1.0.6.jar org.example.SimpleApp_on_yarn
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/E:/workspace/maven_repository/org/apache/logging/log4j/log4j-slf4j-impl/2.17.2/log4j-slf4j-impl-2.17.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/E:/workspace/maven_repository/org/slf4j/slf4j-reload4j/1.7.36/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Using Spark's default log4j profile: org/apache/spark/log4j2-defaults.properties
24/05/20 15:08:00 INFO SparkContext: Running Spark version 3.3.4
24/05/20 15:08:00 INFO ResourceUtils: ==============================================================
24/05/20 15:08:00 INFO ResourceUtils: No custom resources configured for spark.driver.
24/05/20 15:08:00 INFO ResourceUtils: ==============================================================
24/05/20 15:08:00 INFO SparkContext: Submitted application: WordCount
24/05/20 15:08:00 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
24/05/20 15:08:00 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
24/05/20 15:08:00 INFO ResourceProfileManager: Added ResourceProfile id: 0
24/05/20 15:08:00 INFO SecurityManager: Changing view acls to: S0085449,root
24/05/20 15:08:00 INFO SecurityManager: Changing modify acls to: S0085449,root
24/05/20 15:08:00 INFO SecurityManager: Changing view acls groups to:
24/05/20 15:08:00 INFO SecurityManager: Changing modify acls groups to:
24/05/20 15:08:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(S0085449, root); groups with view permissions: Set(); users with modify permissions: Set(S0085449, root); groups with modify permissions: Set()
24/05/20 15:08:01 INFO Utils: Successfully started service 'sparkDriver' on port 5555.
24/05/20 15:08:01 INFO SparkEnv: Registering MapOutputTracker
24/05/20 15:08:01 INFO SparkEnv: Registering BlockManagerMaster
24/05/20 15:08:01 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
24/05/20 15:08:01 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
24/05/20 15:08:01 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
24/05/20 15:08:01 INFO DiskBlockManager: Created local directory at C:\Users\S0085449\AppData\Local\Temp\blockmgr-d151d098-1c07-4679-be02-8b4b4347e7f6
24/05/20 15:08:01 INFO MemoryStore: MemoryStore started with capacity 1979.1 MiB
24/05/20 15:08:01 INFO SparkEnv: Registering OutputCommitCoordinator
24/05/20 15:08:01 INFO Utils: Successfully started service 'SparkUI' on port 4040.
24/05/20 15:08:01 INFO SparkContext: Added JAR target/Spark-demo-1.0-SNAPSHOT.jar at spark://10.194.208.109:5555/jars/Spark-demo-1.0-SNAPSHOT.jar with timestamp 1716188880457
24/05/20 15:08:01 INFO RMProxy: Connecting to ResourceManager at hadoop02/10.194.216.101:8032
24/05/20 15:08:02 INFO Configuration: resource-types.xml not found
24/05/20 15:08:02 INFO ResourceUtils: Unable to find 'resource-types.xml'.
24/05/20 15:08:02 INFO ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
24/05/20 15:08:02 INFO ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
24/05/20 15:08:02 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
24/05/20 15:08:02 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
24/05/20 15:08:02 INFO Client: Setting up container launch context for our AM
24/05/20 15:08:02 INFO Client: Setting up the launch environment for our AM container
24/05/20 15:08:02 INFO Client: Preparing resources for our AM container
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/JLargeArrays-1.5.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/JTransforms-3.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/RoaringBitmap-0.9.25.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/aircompressor-0.21.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/algebra_2.12-2.0.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/annotations-17.0.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/antlr4-runtime-4.8.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/aopalliance-repackaged-2.6.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/arpack-2.2.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/arpack_combined_all-0.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/arrow-format-7.0.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/arrow-memory-core-7.0.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/arrow-memory-netty-7.0.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/arrow-vector-7.0.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/audience-annotations-0.5.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/automaton-1.11-8.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/avro-1.11.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/avro-ipc-1.11.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/avro-mapred-1.11.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/blas-2.2.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/breeze-macros_2.12-1.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/breeze_2.12-1.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/cats-kernel_2.12-2.1.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/chill-java-0.10.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/chill_2.12-0.10.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-codec-1.15.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-collections-3.2.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-collections4-4.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-compiler-3.0.16.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-compress-1.21.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-configuration-1.6.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-crypto-1.1.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-io-2.11.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-lang-2.6.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-lang3-3.12.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-math3-3.6.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/commons-text-1.10.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/compress-lzf-1.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/core-1.1.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/flatbuffers-java-1.12.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/generex-1.0.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/gson-2.8.6.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/guava-11.0.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hadoop-auth-2.10.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hadoop-common-2.10.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hadoop-hdfs-client-2.10.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hadoop-mapreduce-client-core-2.10.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hadoop-yarn-api-2.10.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hadoop-yarn-client-2.10.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hadoop-yarn-common-2.10.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hadoop-yarn-server-web-proxy-2.10.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hive-storage-api-2.7.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hk2-api-2.6.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hk2-locator-2.6.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/hk2-utils-2.6.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/htrace-core4-4.1.0-incubating.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/istack-commons-runtime-3.0.8.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/ivy-2.5.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jackson-annotations-2.13.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jackson-core-2.13.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jackson-databind-2.13.4.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jackson-dataformat-yaml-2.13.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jackson-datatype-jsr310-2.13.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jackson-mapper-asl-1.9.13.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jackson-module-scala_2.12-2.13.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jakarta.annotation-api-1.3.5.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jakarta.inject-2.6.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jakarta.servlet-api-4.0.3.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jakarta.validation-api-2.0.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jakarta.ws.rs-api-2.1.6.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jakarta.xml.bind-api-2.3.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/janino-3.0.16.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/javassist-3.25.0-GA.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jaxb-runtime-2.3.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jcl-over-slf4j-1.7.32.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jersey-client-2.36.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jersey-common-2.36.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jersey-container-servlet-2.36.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jersey-container-servlet-core-2.36.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jersey-hk2-2.36.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jersey-server-2.36.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/json4s-ast_2.12-3.7.0-M11.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/json4s-core_2.12-3.7.0-M11.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/json4s-jackson_2.12-3.7.0-M11.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/json4s-scalap_2.12-3.7.0-M11.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jsr305-3.0.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/jul-to-slf4j-1.7.32.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kryo-shaded-4.0.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-client-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-admissionregistration-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-apiextensions-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-apps-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-autoscaling-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-batch-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-certificates-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-common-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-coordination-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-core-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-discovery-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-events-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-extensions-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-flowcontrol-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-metrics-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-networking-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-node-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-policy-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-rbac-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-scheduling-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/kubernetes-model-storageclass-5.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/lapack-2.2.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/leveldbjni-all-1.8.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/log4j-api-2.17.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/log4j-core-2.17.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/log4j-slf4j-impl-2.17.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/logging-interceptor-3.12.12.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/lz4-java-1.8.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/mesos-1.4.3-shaded-protobuf.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/metrics-core-4.2.7.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/metrics-graphite-4.2.7.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/metrics-jmx-4.2.7.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/metrics-json-4.2.7.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/metrics-jvm-4.2.7.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/minlog-1.3.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-all-4.1.74.Final.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-buffer-4.1.74.Final.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-codec-4.1.74.Final.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-common-4.1.74.Final.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-handler-4.1.74.Final.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-resolver-4.1.74.Final.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-tcnative-classes-2.0.48.Final.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-transport-4.1.74.Final.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-transport-classes-epoll-4.1.74.Final.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-transport-classes-kqueue-4.1.74.Final.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-transport-native-epoll-4.1.74.Final-linux-aarch_64.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-transport-native-epoll-4.1.74.Final-linux-x86_64.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-transport-native-kqueue-4.1.74.Final-osx-aarch_64.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-transport-native-kqueue-4.1.74.Final-osx-x86_64.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/netty-transport-native-unix-common-4.1.74.Final.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/objenesis-3.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/okhttp-3.12.12.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/okio-1.14.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/opencsv-2.3.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/orc-core-1.7.10.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/orc-mapreduce-1.7.10.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/orc-shims-1.7.10.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/oro-2.0.8.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/osgi-resource-locator-1.0.3.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/paranamer-2.8.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/parquet-column-1.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/parquet-common-1.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/parquet-encoding-1.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/parquet-format-structures-1.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/parquet-hadoop-1.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/parquet-jackson-1.12.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/pickle-1.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/protobuf-java-2.5.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/py4j-0.10.9.5.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/reload4j-1.2.18.3.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/rocksdbjni-6.20.3.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/scala-collection-compat_2.12-2.1.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/scala-compiler-2.12.15.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/scala-library-2.12.15.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/scala-parser-combinators_2.12-1.1.2.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/scala-reflect-2.12.15.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/scala-xml_2.12-1.2.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/shapeless_2.12-2.3.7.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/shims-0.9.25.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/slf4j-api-1.7.32.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/snakeyaml-1.31.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/snappy-java-1.1.8.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-catalyst_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-core_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-graphx_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-kubernetes_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-kvstore_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-launcher_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-mesos_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-mllib-local_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-mllib_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-network-common_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-network-shuffle_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-repl_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-sketch_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-sql_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-streaming_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-tags_2.12-3.3.4-tests.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-tags_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-unsafe_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spark-yarn_2.12-3.3.4.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spire-macros_2.12-0.17.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spire-platform_2.12-0.17.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spire-util_2.12-0.17.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/spire_2.12-0.17.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/stax2-api-4.2.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/stream-2.9.6.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/threeten-extra-1.5.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/tink-1.6.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/univocity-parsers-2.9.1.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/woodstox-core-5.3.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/xbean-asm9-shaded-4.20.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/xz-1.9.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/zjsonpatch-0.3.0.jar
24/05/20 15:08:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop01:9000/usr/sparkjars/zstd-jni-1.5.2-1.jar
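
All of these "Not copying" lines are expected: because `spark.yarn.jars` points at `hdfs:///usr/sparkjars/*`, the YARN client resolves the Spark runtime jars directly from HDFS instead of re-uploading them from the IDEA host on every run, which is exactly what keeps submission fast. If the jars have not been staged yet, here is a minimal one-time upload sketch (the `UploadSparkJars` object name and the reliance on a `SPARK_HOME` environment variable are illustrative assumptions; any equivalent copy into `hdfs:///usr/sparkjars/` works):

```scala
import java.io.File

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// One-time staging of the Spark jars to HDFS, so later submissions can skip
// uploading them (which is why the log above says "Not copying").
// Assumes core-site.xml / hdfs-site.xml are on the classpath (step 3),
// so fs.defaultFS points at the cluster.
object UploadSparkJars {
  def main(args: Array[String]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "root") // same permission workaround as the main app
    val fs = FileSystem.get(new Configuration())
    val target = new Path("/usr/sparkjars")
    if (!fs.exists(target)) fs.mkdirs(target)

    // %SPARK_HOME%/jars on the local machine; SPARK_HOME must be set in the environment
    val jarsDir = new File(sys.env("SPARK_HOME"), "jars")
    jarsDir.listFiles().filter(_.getName.endsWith(".jar")).foreach { jar =>
      fs.copyFromLocalFile(new Path(jar.getAbsolutePath), target)
    }
    fs.close()
  }
}
```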

24/05/20 15:08:02 INFO Client: Uploading resource file:/C:/Users/S0085449/AppData/Local/Temp/spark-a4fcde9f-3c05-4896-ab52-695bb521ad3c/__spark_conf__2109594566421860317.zip -> hdfs://hadoop01:9000/user/root/.sparkStaging/application_1716185220041_0029/__spark_conf__.zip
24/05/20 15:08:03 INFO SecurityManager: Changing view acls to: S0085449,root
24/05/20 15:08:03 INFO SecurityManager: Changing modify acls to: S0085449,root
24/05/20 15:08:03 INFO SecurityManager: Changing view acls groups to:
24/05/20 15:08:03 INFO SecurityManager: Changing modify acls groups to:
24/05/20 15:08:03 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(S0085449, root); groups with view permissions: Set(); users with modify permissions: Set(S0085449, root); groups with modify permissions: Set()
24/05/20 15:08:03 INFO Client: Submitting application application_1716185220041_0029 to ResourceManager
24/05/20 15:08:03 INFO YarnClientImpl: Submitted application application_1716185220041_0029
24/05/20 15:08:04 INFO Client: Application report for application_1716185220041_0029 (state: ACCEPTED)
24/05/20 15:08:04 INFO Client:
client token: N/A
diagnostics: AM container is launched, waiting for AM container to Register with RM
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1716188859168
final status: UNDEFINED
tracking URL: http://hadoop02:8088/proxy/application_1716185220041_0029/
user: root
24/05/20 15:08:05 INFO Client: Application report for application_1716185220041_0029 (state: ACCEPTED)
24/05/20 15:08:06 INFO Client: Application report for application_1716185220041_0029 (state: ACCEPTED)
24/05/20 15:08:07 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> hadoop02, PROXY_URI_BASES -> http://hadoop02:8088/proxy/application_1716185220041_0029), /proxy/application_1716185220041_0029
24/05/20 15:08:07 INFO Client: Application report for application_1716185220041_0029 (state: RUNNING)
24/05/20 15:08:07 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 10.194.216.102
ApplicationMaster RPC port: -1
queue: default
start time: 1716188859168
final status: UNDEFINED
tracking URL: http://hadoop02:8088/proxy/application_1716185220041_0029/
user: root
24/05/20 15:08:07 INFO YarnClientSchedulerBackend: Application application_1716185220041_0029 has started running.
24/05/20 15:08:07 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 58200.
24/05/20 15:08:07 INFO NettyBlockTransferService: Server created on 10.194.208.109:58200
24/05/20 15:08:07 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
24/05/20 15:08:07 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.194.208.109, 58200, None)
24/05/20 15:08:07 INFO BlockManagerMasterEndpoint: Registering block manager 10.194.208.109:58200 with 1979.1 MiB RAM, BlockManagerId(driver, 10.194.208.109, 58200, None)
24/05/20 15:08:07 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.194.208.109, 58200, None)
24/05/20 15:08:07 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.194.208.109, 58200, None)
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /jobs: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /jobs/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /jobs/job: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /jobs/job/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /stages: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /stages/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /stages/stage: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /stages/stage/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /stages/pool: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /stages/pool/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /storage: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /storage/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /storage/rdd: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /storage/rdd/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /environment: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /environment/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /executors: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /executors/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /executors/threadDump: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /executors/threadDump/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /static: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /api: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /jobs/job/kill: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /stages/stage/kill: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO ServerInfo: Adding filter to /metrics/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:07 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
24/05/20 15:08:12 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.194.216.100:35340) with ID 2, ResourceProfileId 0
24/05/20 15:08:12 INFO BlockManagerMasterEndpoint: Registering block manager hadoop01:41291 with 366.3 MiB RAM, BlockManagerId(2, hadoop01, 41291, None)
24/05/20 15:08:12 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.194.216.102:42756) with ID 1, ResourceProfileId 0
24/05/20 15:08:12 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
24/05/20 15:08:12 INFO BlockManagerMasterEndpoint: Registering block manager hadoop03:40469 with 366.3 MiB RAM, BlockManagerId(1, hadoop03, 40469, None)
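
Two executors registered here, most likely because no explicit executor settings were supplied and Spark's default executor count on YARN is 2. To pin the resources yourself, note that the correct key is `spark.executor.instances` (plural); a hedged sketch of the extra conf lines follows (the values are placeholders, not recommendations):

```scala
import org.apache.spark.SparkConf

// Illustrative executor sizing; note the key is "spark.executor.instances".
val sizedConf = new SparkConf()
  .set("spark.executor.instances", "2") // how many executors YARN should start
  .set("spark.executor.cores", "2")     // cores per executor
  .set("spark.executor.memory", "1g")   // heap per executor
```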


(spark.jars,target/Spark-demo-1.0-SNAPSHOT.jar)
(spark.driver.host,10.194.208.109)
(spark.executor.id,driver)
(yarn.resourcemanager.hostname,hadoop02)
(spark.driver.port,5555)
(spark.driver.appUIAddress,http://10.194.208.109:4040)
(spark.master,yarn)
(spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_URI_BASES,http://hadoop02:8088/proxy/application_1716185220041_0029)
(spark.driver.extraJavaOptions,-XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED)
(spark.yarn.jars,hdfs:///usr/sparkjars/*)
(spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_HOSTS,hadoop02)
(spark.app.name,WordCount)
(spark.executor.extraJavaOptions,-XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED)
(spark.app.id,application_1716185220041_0029)
(spark.ui.filters,org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter)
(spark.app.initial.jar.urls,spark://10.194.208.109:5555/jars/Spark-demo-1.0-SNAPSHOT.jar)
(spark.app.startTime,1716188880457)


24/05/20 15:08:12 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir.
24/05/20 15:08:12 INFO SharedState: Warehouse path is 'file:/E:/workspace/idea_projects/Spark-demo/Spark-demo/spark-warehouse'.
24/05/20 15:08:12 INFO ServerInfo: Adding filter to /SQL: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:12 INFO ServerInfo: Adding filter to /SQL/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:12 INFO ServerInfo: Adding filter to /SQL/execution: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:12 INFO ServerInfo: Adding filter to /SQL/execution/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:12 INFO ServerInfo: Adding filter to /static/sql: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/05/20 15:08:13 INFO InMemoryFileIndex: It took 24 ms to list leaf files for 1 paths.
24/05/20 15:08:13 WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped
24/05/20 15:08:14 INFO FileSourceStrategy: Pushed Filters:
24/05/20 15:08:14 INFO FileSourceStrategy: Post-Scan Filters:
24/05/20 15:08:14 INFO FileSourceStrategy: Output Data Schema: struct<>
24/05/20 15:08:15 INFO CodeGenerator: Code generated in 144.971801 ms
24/05/20 15:08:15 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 315.1 KiB, free 1978.8 MiB)
24/05/20 15:08:15 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 27.9 KiB, free 1978.8 MiB)
24/05/20 15:08:15 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.194.208.109:58200 (size: 27.9 KiB, free: 1979.1 MiB)
24/05/20 15:08:15 INFO SparkContext: Created broadcast 0 from count at SimpleApp_on_yarn.scala:58
24/05/20 15:08:15 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
24/05/20 15:08:15 INFO DAGScheduler: Registering RDD 3 (count at SimpleApp_on_yarn.scala:58) as input to shuffle 0
24/05/20 15:08:15 INFO DAGScheduler: Got map stage job 0 (count at SimpleApp_on_yarn.scala:58) with 1 output partitions
24/05/20 15:08:15 INFO DAGScheduler: Final stage: ShuffleMapStage 0 (count at SimpleApp_on_yarn.scala:58)
24/05/20 15:08:15 INFO DAGScheduler: Parents of final stage: List()
24/05/20 15:08:15 INFO DAGScheduler: Missing parents: List()
24/05/20 15:08:15 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at count at SimpleApp_on_yarn.scala:58), which has no missing parents
24/05/20 15:08:15 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 14.0 KiB, free 1978.8 MiB)
24/05/20 15:08:15 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 6.9 KiB, free 1978.7 MiB)
24/05/20 15:08:15 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.194.208.109:58200 (size: 6.9 KiB, free: 1979.1 MiB)
24/05/20 15:08:15 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1509
24/05/20 15:08:15 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at count at SimpleApp_on_yarn.scala:58) (first 15 tasks are for partitions Vector(0))
24/05/20 15:08:15 INFO YarnScheduler: Adding task set 0.0 with 1 tasks resource profile 0
24/05/20 15:08:15 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (hadoop01, executor 2, partition 0, NODE_LOCAL, 4912 bytes) taskResourceAssignments Map()
24/05/20 15:08:15 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on hadoop01:41291 (size: 6.9 KiB, free: 366.3 MiB)
24/05/20 15:08:16 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on hadoop01:41291 (size: 27.9 KiB, free: 366.3 MiB)
24/05/20 15:08:17 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2046 ms on hadoop01 (executor 2) (1/1)
24/05/20 15:08:17 INFO YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
24/05/20 15:08:17 INFO DAGScheduler: ShuffleMapStage 0 (count at SimpleApp_on_yarn.scala:58) finished in 2.143 s
24/05/20 15:08:17 INFO DAGScheduler: looking for newly runnable stages
24/05/20 15:08:17 INFO DAGScheduler: running: Set()
24/05/20 15:08:17 INFO DAGScheduler: waiting: Set()
24/05/20 15:08:17 INFO DAGScheduler: failed: Set()
24/05/20 15:08:17 INFO CodeGenerator: Code generated in 10.135599 ms
24/05/20 15:08:17 INFO SparkContext: Starting job: count at SimpleApp_on_yarn.scala:58
24/05/20 15:08:17 INFO DAGScheduler: Got job 1 (count at SimpleApp_on_yarn.scala:58) with 1 output partitions
24/05/20 15:08:17 INFO DAGScheduler: Final stage: ResultStage 2 (count at SimpleApp_on_yarn.scala:58)
24/05/20 15:08:17 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1)
24/05/20 15:08:17 INFO DAGScheduler: Missing parents: List()
24/05/20 15:08:17 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[6] at count at SimpleApp_on_yarn.scala:58), which has no missing parents
24/05/20 15:08:17 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 11.1 KiB, free 1978.7 MiB)
24/05/20 15:08:17 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 5.5 KiB, free 1978.7 MiB)
24/05/20 15:08:17 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.194.208.109:58200 (size: 5.5 KiB, free: 1979.1 MiB)
24/05/20 15:08:17 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1509
24/05/20 15:08:17 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[6] at count at SimpleApp_on_yarn.scala:58) (first 15 tasks are for partitions Vector(0))
24/05/20 15:08:17 INFO YarnScheduler: Adding task set 2.0 with 1 tasks resource profile 0
24/05/20 15:08:17 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 1) (hadoop01, executor 2, partition 0, NODE_LOCAL, 4464 bytes) taskResourceAssignments Map()
24/05/20 15:08:17 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on hadoop01:41291 (size: 5.5 KiB, free: 366.3 MiB)
24/05/20 15:08:17 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 10.194.216.100:35340
24/05/20 15:08:17 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 1) in 313 ms on hadoop01 (executor 2) (1/1)
24/05/20 15:08:17 INFO YarnScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool
24/05/20 15:08:17 INFO DAGScheduler: ResultStage 2 (count at SimpleApp_on_yarn.scala:58) finished in 0.322 s
24/05/20 15:08:17 INFO DAGScheduler: Job 1 is finished. Cancelling potential speculative or zombie tasks for this job
24/05/20 15:08:17 INFO YarnScheduler: Killing all running tasks in stage 2: Stage finished
24/05/20 15:08:17 INFO DAGScheduler: Job 1 finished: count at SimpleApp_on_yarn.scala:58, took 0.332740 s

********************结果如下******************
****  文 件 的 总 行 数 是: 124 行 ****
**************************************************

24/05/20 15:08:17 INFO SparkUI: Stopped Spark web UI at http://10.194.208.109:4040
24/05/20 15:08:17 INFO YarnClientSchedulerBackend: Interrupting monitor thread
24/05/20 15:08:18 INFO YarnClientSchedulerBackend: Shutting down all executors
24/05/20 15:08:18 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
24/05/20 15:08:18 INFO YarnClientSchedulerBackend: YARN client scheduler backend Stopped
24/05/20 15:08:18 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
24/05/20 15:08:18 INFO MemoryStore: MemoryStore cleared
24/05/20 15:08:18 INFO BlockManager: BlockManager stopped
24/05/20 15:08:18 INFO BlockManagerMaster: BlockManagerMaster stopped
24/05/20 15:08:18 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
24/05/20 15:08:18 INFO SparkContext: Successfully stopped SparkContext
24/05/20 15:08:18 INFO ShutdownHookManager: Shutdown hook called
24/05/20 15:08:18 INFO ShutdownHookManager: Deleting directory C:\Users\S0085449\AppData\Local\Temp\spark-a4fcde9f-3c05-4896-ab52-695bb521ad3c

Process finished with exit code 0
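
With exit code 0 the whole round trip is done: submitted from IDEA, executed on YARN, results printed back in the IDEA console. If you want to double-check the application's final status on the cluster without leaving the IDE, a sketch using the `YarnClient` API is below (assumes yarn-site.xml is on the classpath as in step 3; the object name and the hard-coded application id are illustrative):

```scala
import org.apache.hadoop.yarn.api.records.ApplicationId
import org.apache.hadoop.yarn.client.api.YarnClient
import org.apache.hadoop.yarn.conf.YarnConfiguration

// Query the report of a finished application from the IDEA host.
object CheckAppStatus {
  def main(args: Array[String]): Unit = {
    val yarn = YarnClient.createYarnClient()
    yarn.init(new YarnConfiguration()) // picks up yarn-site.xml from the classpath
    yarn.start()
    val appId  = ApplicationId.fromString("application_1716185220041_0029")
    val report = yarn.getApplicationReport(appId)
    println(s"${report.getApplicationId}: state=${report.getYarnApplicationState}, " +
      s"final=${report.getFinalApplicationStatus}, tracking=${report.getTrackingUrl}")
    yarn.stop()
  }
}
```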

— The End —
