Spark: verifying that Spark runs correctly on a CDH cluster

While testing Spark on the CDH cluster, the job hit an out-of-memory problem: the executor's required memory exceeded the cluster's maximum allocation. The error message stated that the executor needed 1.4 GB of memory, which exceeded the cluster's 1 GB maximum threshold. The fix was to set 'yarn.scheduler.maximum-allocation-mb' and 'yarn.nodemanager.resource.memory-mb' to 2 GB in the YARN configuration and restart the YARN service.
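On CDH these two properties are normally changed through the Cloudera Manager UI, but the equivalent entries in `yarn-site.xml` would look like the fragment below (a sketch only; 2048 MB matches the 2 GB values described above):

```xml
<!-- yarn-site.xml: raise YARN container limits so the 1.4 GB executor fits -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
```

After changing these values, the YARN service (ResourceManager and NodeManagers) must be restarted for them to take effect.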

1. Local mode

[root@cdh01 ~]#  spark-submit --master local --class  org.apache.spark.examples.SparkPi /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/lib/spark-examples.jar 10
18/10/29 14:39:08 INFO spark.SparkContext: Running Spark version 1.6.0
18/10/29 14:39:09 INFO spark.SecurityManager: Changing view acls to: root
18/10/29 14:39:09 INFO spark.SecurityManager: Changing modify acls to: root
18/10/29 14:39:09 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/10/29 14:39:09 INFO util.Utils: Successfully started service 'sparkDriver' on port 55692.
18/10/29 14:39:09 INFO slf4j.Slf4jLogger: Slf4jLogger started
18/10/29 14:39:09 INFO Remoting: Starting remoting
18/10/29 14:39:10 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.50.202:43516]
18/10/29 14:39:10 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriverActorSystem@192.168.50.202:43516]
18/10/29 14:39:10 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 43516.
18/10/29 14:39:10 INFO spark.SparkEnv: Registering MapOutputTracker
18/10/29 14:39:10 INFO spark.SparkEnv: Registering BlockManagerMaster
18/10/29 14:39:10 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-2bf97eb7-1a7e-4df7-b221-4e603dc3a55f
18/10/29 14:39:10 INFO storage.MemoryStore: MemoryStore started with capacity 530.0 MB
18/10/29 14:39:10 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/10/29 14:39:10 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
18/10/29 14:39:10 INFO ui.SparkUI: Started SparkUI at http://192.168.50.202:4040
18/10/29 14:39:10 INFO spark.SparkContext: Added JAR file:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/lib/spark-examples.jar at spark://192.168.50.202:55692/jars/spark-examples.jar with timestamp 1540795150401
18/10/29 14:39:10 INFO executor.Executor: Starting executor ID driver on host localhost
18/10/29 14:39:10 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53969.
18/10/29 14:39:10 INFO netty.NettyBlockTransferService: Server created on 53969
18/10/29 14:39:10 INFO storage.BlockManager: external shuffle service port = 7337
18/10/29 14:39:10 INFO storage.BlockManagerMaster: Trying to register BlockManager
18/10/29 14:39:10 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:53969 with 530.0 MB RAM, BlockManagerId(driver, localhost, 53969)
18/10/29 14:39:10 INFO storage.BlockManagerMaster: Registered BlockManager
18/10/29 14:39:11 INFO scheduler.EventLoggingListener: Logging events to hdfs://cdh01:8020/user/spark/applicationHistory/local-1540795150435
18/10/29 14:39:11 INFO spark.SparkContext: Registered listener com.cloudera.spark.lineage.ClouderaNavigatorListener
18/10/29 14:39:11 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:36
18/10/29 14:39:11 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 10 output partitions
18/10/29 14:39:11 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:36)
18/10/29 14:39:11 INFO scheduler.DAGScheduler: Parents of final stage: List()
18/10/29 14:39:11 INFO scheduler.DAGScheduler: Missing parents: List()
18/10/29 14:39:11 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32), which has no missing parents
18/10/29 14:39:12 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1904.0 B, free 530.0 MB)
18/10/29 14:39:12 INFO storage... (remaining log output truncated)
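The SparkPi example being submitted above estimates π by Monte Carlo sampling: it throws random points into the unit square and counts how many land inside the inscribed quarter circle. As a rough illustration of what the job computes, here is a minimal plain-Python sketch of the same calculation (no Spark required; the function name, sample count, and seed are illustrative, not from the example's source):

```python
import random

def estimate_pi(num_samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling points in the unit square and
    counting the fraction that fall inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Area of quarter circle / area of square = pi/4
    return 4.0 * inside / num_samples

print(estimate_pi(100_000))
```

In the real SparkPi job, the `10` argument passed to `spark-submit` is the number of partitions: the sampling loop is split across that many tasks and the counts are combined with `reduce`.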