Spark on YARN in a CDH cluster fails with a "Required executor memory" error

  1. Run on a YARN cluster

    spark-submit \
    --class com.hnb.data.UserKeyOpLog \
    --master yarn \
    --deploy-mode cluster \
    --executor-memory 128M \
    --num-executors 2 \
    lib/original-dataceter-spark.jar  \
    args(1) \
    args(2)  \
    args(3)
    

    Error:

    Exception in thread "main" java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
    
    [root@node00 spark]# spark-submit   --class com.hnb.data.UserKeyOpLog   --master yarn   --deploy-mode cluster   --executor-memory 128M   --num-executors 2 lib/original-dataceter-spark.jar  /kafka-source/user_key_op_topic/201811/07  /kafka-source/user_key_op_topic/out1  /kafka-source/user_key_op_topic/out2
    18/11/09 12:44:04 INFO client.RMProxy: Connecting to ResourceManager at node00/172.16.10.190:8032
    18/11/09 12:44:04 INFO yarn.Client: Requesting a new application from cluster with 3 NodeManagers
    18/11/09 12:44:04 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (1040 MB per container)
    Exception in thread "main" java.lang.IllegalArgumentException: Required AM memory (1024+384 MB) is above the max threshold (1040 MB) of this cluster! Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.
       at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:299)
       at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:139)
       at org.apache.spark.deploy.yarn.Client.run(Client.scala:1023)
       at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1083)
       at org.apache.spark.deploy.yarn.Client.main(Client.scala)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:730)
       at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
       at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
       at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
       at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    [root@node00 spark]#
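
    Where do those numbers come from? With --deploy-mode cluster, the YARN ApplicationMaster container also hosts the Spark driver, so the AM request is the driver memory (1024 MB by default) plus a 384 MB overhead, i.e. 1408 MB, which is above the 1040 MB per-container maximum this cluster reports. Raising the YARN limits (the fix used below) is one option; a possible alternative, assuming the driver of this job can get by with less memory, is to shrink the request itself:

    # Sketch of an alternative fix (not the one used below): request a smaller
    # driver/AM container. 512 MB + 384 MB overhead = 896 MB, which fits under
    # the 1040 MB cap. Whether 512 MB is actually enough for this job is an
    # assumption that would need to be verified.
    spark-submit \
    --class com.hnb.data.UserKeyOpLog \
    --master yarn \
    --deploy-mode cluster \
    --driver-memory 512M \
    --executor-memory 128M \
    --num-executors 2 \
    lib/original-dataceter-spark.jar \
    /kafka-source/user_key_op_topic/201811/07 \
    /kafka-source/user_key_op_topic/out1 \
    /kafka-source/user_key_op_topic/out2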
    

    Solution:
    Change the following YARN configuration settings.

    1. yarn.scheduler.maximum-allocation-mb

    2. yarn.nodemanager.resource.memory-mb
      In the YARN configuration, set both of the above properties to 2 GB (2048 MB), restart YARN, and rerun the job (a yarn-site.xml sketch of the same change is shown below).
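
      On a CDH cluster these values are normally changed through the management UI; for reference, a minimal yarn-site.xml sketch of the same two settings (assuming the file were maintained by hand, which a CDH deployment usually does not do) would look like this:

      <!-- Allow container requests of up to 2048 MB, so the 1408 MB AM request
           (1024 MB driver + 384 MB overhead) can be satisfied. -->
      <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2048</value>
      </property>
      <!-- Memory each NodeManager makes available for containers. -->
      <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
      </property>

      With the new limits in place, the resubmitted job is accepted and finishes successfully: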

      [root@node00 spark]# spark-submit --class com.hnb.data.UserKeyOpLog --master yarn --deploy-mode cluster --executor-memory 128M --num-executors 2 lib/original-dataceter-spark.jar  /kafka-source/user_key_op_topic/201811/07  /kafka-source/user_key_op_topic/out1  /kafka-source/user_key_op_topic/out2
      18/11/09 12:59:03 INFO client.RMProxy: Connecting to ResourceManager at node00/172.16.10.190:8032
      18/11/09 12:59:03 INFO yarn.Client: Requesting a new application from cluster with 3 NodeManagers
      18/11/09 12:59:03 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (2048 MB per container)
      18/11/09 12:59:03 INFO yarn.Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
      18/11/09 12:59:03 INFO yarn.Client: Setting up container launch context for our AM
      18/11/09 12:59:03 INFO yarn.Client: Setting up the launch environment for our AM container
      18/11/09 12:59:03 INFO yarn.Client: Preparing resources for our AM container
      18/11/09 12:59:04 INFO yarn.Client: Uploading resource file:/opt/cloudera/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/spark/lib/original-dataceter-spark.jar -> hdfs://nameservice1/user/root/.sparkStaging/application_1541739040266_0001/original-dataceter-spark.jar
      18/11/09 12:59:04 INFO yarn.Client: Uploading resource file:/tmp/spark-403b6665-d14c-4919-80cf-e6e2ddaf0835/__spark_conf__512169842143839514.zip -> hdfs://nameservice1/user/root/.sparkStaging/application_1541739040266_0001/__spark_conf__512169842143839514.zip
      18/11/09 12:59:04 INFO spark.SecurityManager: Changing view acls to: root
      18/11/09 12:59:04 INFO spark.SecurityManager: Changing modify acls to: root
      18/11/09 12:59:04 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
      18/11/09 12:59:04 INFO yarn.Client: Submitting application 1 to ResourceManager
      18/11/09 12:59:05 INFO impl.YarnClientImpl: Submitted application application_1541739040266_0001
      18/11/09 12:59:06 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:06 INFO yarn.Client: 
      	 client token: N/A
      	 diagnostics: N/A
      	 ApplicationMaster host: N/A
      	 ApplicationMaster RPC port: -1
      	 queue: root.users.root
      	 start time: 1541739544990
      	 final status: UNDEFINED
      	 tracking URL: http://node00:8088/proxy/application_1541739040266_0001/
      	 user: root
      18/11/09 12:59:07 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:08 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:09 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:10 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:11 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:12 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:13 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:14 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:15 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:16 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:17 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:18 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:19 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:20 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:21 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:22 INFO yarn.Client: Application report for application_1541739040266_0001 (state: ACCEPTED)
      18/11/09 12:59:23 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:23 INFO yarn.Client: 
      	 client token: N/A
      	 diagnostics: N/A
      	 ApplicationMaster host: 172.16.10.192
      	 ApplicationMaster RPC port: 0
      	 queue: root.users.root
      	 start time: 1541739544990
      	 final status: UNDEFINED
      	 tracking URL: http://node00:8088/proxy/application_1541739040266_0001/
      	 user: root
      18/11/09 12:59:24 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:25 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:26 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:27 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:28 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:29 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:30 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:31 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:32 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:33 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:34 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:35 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:36 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:37 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:38 INFO yarn.Client: Application report for application_1541739040266_0001 (state: RUNNING)
      18/11/09 12:59:39 INFO yarn.Client: Application report for application_1541739040266_0001 (state: FINISHED)
      18/11/09 12:59:39 INFO yarn.Client: 
      	 client token: N/A
      	 diagnostics: N/A
      	 ApplicationMaster host: 172.16.10.192
      	 ApplicationMaster RPC port: 0
      	 queue: root.users.root
      	 start time: 1541739544990
      	 final status: SUCCEEDED
      	 tracking URL: http://node00:8088/proxy/application_1541739040266_0001/
      	 user: root
      18/11/09 12:59:39 INFO util.ShutdownHookManager: Shutdown hook called
      18/11/09 12:59:39 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-403b6665-d14c-4919-80cf-e6e2ddaf0835
      [root@node00 spark]# 
      
  2. Submit in local mode

    spark-submit \
    --master local \
    --class  org.apache.spark.examples.SparkPi \
    lib/spark-examples.jar
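
    The plain local master runs the driver and a single worker thread in one JVM. A common variant (standard spark-submit syntax, not specific to this cluster) uses all available cores and passes the slice count explicitly, matching the run captured below:

    # Sketch: run SparkPi on all local cores; the trailing 10 is the number of
    # partitions (slices), the same argument used in the captured run.
    spark-submit \
    --master "local[*]" \
    --class org.apache.spark.examples.SparkPi \
    lib/spark-examples.jar \
    10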
    
    [root@cdh01 ~]# spark-submit --master local --class org.apache.spark.examples.SparkPi /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/lib/spark-examples.jar 10
    18/10/29 14:39:08 INFO spark.SparkContext: Running Spark version 1.6.0
    18/10/29 14:39:09 INFO spark.SecurityManager: Changing view acls to: root
    18/10/29 14:39:09 INFO spark.SecurityManager: Changing modify acls to: root
    18/10/29 14:39:09 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
    18/10/29 14:39:09 INFO util.Utils: Successfully started service 'sparkDriver' on port 55692.
    18/10/29 14:39:09 INFO slf4j.Slf4jLogger: Slf4jLogger started
    18/10/29 14:39:09 INFO Remoting: Starting remoting
    18/10/29 14:39:10 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.50.202:43516]
    18/10/29 14:39:10 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriverActorSystem@192.168.50.202:43516]
    18/10/29 14:39:10 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 43516.
    18/10/29 14:39:10 INFO spark.SparkEnv: Registering MapOutputTracker
    18/10/29 14:39:10 INFO spark.SparkEnv: Registering BlockManagerMaster
    18/10/29 14:39:10 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-2bf97eb7-1a7e-4df7-b221-4e603dc3a55f
    18/10/29 14:39:10 INFO storage.MemoryStore: MemoryStore started with capacity 530.0 MB
    18/10/29 14:39:10 INFO spark.SparkEnv: Registering OutputCommitCoordinator
    18/10/29 14:39:10 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
    18/10/29 14:39:10 INFO ui.SparkUI: Started SparkUI at http://192.168.50.202:4040
    18/10/29 14:39:10 INFO spark.SparkContext: Added JAR file:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/lib/spark-examples.jar at spark://192.168.50.202:55692/jars/spark-examples.jar with timestamp 1540795150401
    18/10/29 14:39:10 INFO executor.Executor: Starting executor ID driver on host localhost
    18/10/29 14:39:10 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53969.
    18/10/29 14:39:10 INFO netty.NettyBlockTransferService: Server created on 53969
    18/10/29 14:39:10 INFO storage.BlockManager: external shuffle service port = 7337
    18/10/29 14:39:10 INFO storage.BlockManagerMaster: Trying to register BlockManager
    18/10/29 14:39:10 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:53969 with 530.0 MB RAM, BlockManagerId(driver, localhost, 53969)
    18/10/29 14:39:10 INFO storage.BlockManagerMaster: Registered BlockManager
    18/10/29 14:39:11 INFO scheduler.EventLoggingListener: Logging events to hdfs://cdh01:8020/user/spark/applicationHistory/local-1540795150435
    18/10/29 14:39:11 INFO spark.SparkContext: Registered listener com.cloudera.spark.lineage.ClouderaNavigatorListener
    18/10/29 14:39:11 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:36
    18/10/29 14:39:11 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 10 output partitions
    18/10/29 14:39:11 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:36)
    18/10/29 14:39:11 INFO scheduler.DAGScheduler: Parents of final stage: List()
    18/10/29 14:39:11 INFO scheduler.DAGScheduler: Missing parents: List()
    18/10/29 14:39:11 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32), which has no missing parents
    18/10/29 14:39:12 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1904.0 B, free 530.0 MB)
    18/10/29 14:39:12 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1202.0 B, free 530.0 MB)
    18/10/29 14:39:12 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:53969 (size: 1202.0 B, free: 530.0 MB)
    18/10/29 14:39:12 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1004
    18/10/29 14:39:12 INFO scheduler.DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
    18/10/29 14:39:12 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 10 tasks
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 2036 bytes)
    18/10/29 14:39:12 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
    18/10/29 14:39:12 INFO executor.Executor: Fetching spark://192.168.50.202:55692/jars/spark-examples.jar with timestamp 1540795150401
    18/10/29 14:39:12 INFO spark.ExecutorAllocationManager: New executor driver has registered (new total is 1)
    18/10/29 14:39:12 INFO util.Utils: Fetching spark://192.168.50.202:55692/jars/spark-examples.jar to /tmp/spark-e7873ccb-d141-4347-abcd-1b263d364be3/userFiles-89bc4061-62e5-41b0-b1c2-cecbc4d3af73/fetchFileTemp4804387182541284155.tmp
    18/10/29 14:39:12 INFO executor.Executor: Adding file:/tmp/spark-e7873ccb-d141-4347-abcd-1b263d364be3/userFiles-89bc4061-62e5-41b0-b1c2-cecbc4d3af73/spark-examples.jar to class loader
    18/10/29 14:39:12 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 877 bytes result sent to driver
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, executor driver, partition 1, PROCESS_LOCAL, 2038 bytes)
    18/10/29 14:39:12 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID 1)
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 342 ms on localhost (executor driver) (1/10)
    18/10/29 14:39:12 INFO executor.Executor: Finished task 1.0 in stage 0.0 (TID 1). 877 bytes result sent to driver
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, localhost, executor driver, partition 2, PROCESS_LOCAL, 2038 bytes)
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 51 ms on localhost (executor driver) (2/10)
    18/10/29 14:39:12 INFO executor.Executor: Running task 2.0 in stage 0.0 (TID 2)
    18/10/29 14:39:12 INFO executor.Executor: Finished task 2.0 in stage 0.0 (TID 2). 877 bytes result sent to driver
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, localhost, executor driver, partition 3, PROCESS_LOCAL, 2038 bytes)
    18/10/29 14:39:12 INFO executor.Executor: Running task 3.0 in stage 0.0 (TID 3)
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 39 ms on localhost (executor driver) (3/10)
    18/10/29 14:39:12 INFO executor.Executor: Finished task 3.0 in stage 0.0 (TID 3). 877 bytes result sent to driver
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, localhost, executor driver, partition 4, PROCESS_LOCAL, 2038 bytes)
    18/10/29 14:39:12 INFO executor.Executor: Running task 4.0 in stage 0.0 (TID 4)
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 42 ms on localhost (executor driver) (4/10)
    18/10/29 14:39:12 INFO executor.Executor: Finished task 4.0 in stage 0.0 (TID 4). 877 bytes result sent to driver
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, localhost, executor driver, partition 5, PROCESS_LOCAL, 2038 bytes)
    18/10/29 14:39:12 INFO executor.Executor: Running task 5.0 in stage 0.0 (TID 5)
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 37 ms on localhost (executor driver) (5/10)
    18/10/29 14:39:12 INFO executor.Executor: Finished task 5.0 in stage 0.0 (TID 5). 877 bytes result sent to driver
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, localhost, executor driver, partition 6, PROCESS_LOCAL, 2038 bytes)
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 71 ms on localhost (executor driver) (6/10)
    18/10/29 14:39:12 INFO executor.Executor: Running task 6.0 in stage 0.0 (TID 6)
    18/10/29 14:39:12 INFO executor.Executor: Finished task 6.0 in stage 0.0 (TID 6). 877 bytes result sent to driver
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, localhost, executor driver, partition 7, PROCESS_LOCAL, 2038 bytes)
    18/10/29 14:39:12 INFO executor.Executor: Running task 7.0 in stage 0.0 (TID 7)
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 32 ms on localhost (executor driver) (7/10)
    18/10/29 14:39:12 INFO executor.Executor: Finished task 7.0 in stage 0.0 (TID 7). 877 bytes result sent to driver
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, localhost, executor driver, partition 8, PROCESS_LOCAL, 2038 bytes)
    18/10/29 14:39:12 INFO executor.Executor: Running task 8.0 in stage 0.0 (TID 8)
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 28 ms on localhost (executor driver) (8/10)
    18/10/29 14:39:12 INFO executor.Executor: Finished task 8.0 in stage 0.0 (TID 8). 877 bytes result sent to driver
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, localhost, executor driver, partition 9, PROCESS_LOCAL, 2038 bytes)
    18/10/29 14:39:12 INFO executor.Executor: Running task 9.0 in stage 0.0 (TID 9)
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 27 ms on localhost (executor driver) (9/10)
    18/10/29 14:39:12 INFO executor.Executor: Finished task 9.0 in stage 0.0 (TID 9). 877 bytes result sent to driver
    18/10/29 14:39:12 INFO scheduler.TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 24 ms on localhost (executor driver) (10/10)
    18/10/29 14:39:12 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:36) finished in 0.628 s
    18/10/29 14:39:12 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:36, took 1.046294 s
    18/10/29 14:39:12 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
    Pi is roughly 3.141903141903142
    18/10/29 14:39:13 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.50.202:4040
    18/10/29 14:39:13 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
    18/10/29 14:39:13 INFO storage.MemoryStore: MemoryStore cleared
    18/10/29 14:39:13 INFO storage.BlockManager: BlockManager stopped
    18/10/29 14:39:13 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
    18/10/29 14:39:13 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
    18/10/29 14:39:13 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
    18/10/29 14:39:13 INFO spark.SparkContext: Successfully stopped SparkContext
    18/10/29 14:39:13 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
    18/10/29 14:39:13 INFO util.ShutdownHookManager: Shutdown hook called
    18/10/29 14:39:13 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-e7873ccb-d141-4347-abcd-1b263d364be3
    18/10/29 14:39:13 INFO Remoting: Remoting shut down
    18/10/29 14:39:13 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
    