spark-submit cannot find the MySQL driver



Posted 2018-05-22 17:57


This error appeared once the job was submitted to the cluster; in local mode there was no problem. Passing the required jar files via the --jars and --driver-class-path options did not solve it either. The logs showed that the failure always occurred on the hadoop38 node, so the suspicion is that this node cannot fetch the shared jar files of the submitted job. There has been no time to track down the root cause, so the problem was solved by brute force: stop the NodeManager on that node and resubmit. After that the job succeeds, even without the --jars and --driver-class-path options. Recording this for now; why the NodeManager on that node cannot fetch the resource files once it is added back remains to be investigated.
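For reference, the submission looked roughly like the following. This is a reconstruction, not the exact command from the post: the main class and jar names are taken from the YARN logs below, while the master/deploy-mode flags and the local paths are assumptions.

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.hadoop_user.init.P_DatasInit \
  --jars mysql-connector-java-5.1.33.jar,druid-1.0.13.jar,druid-1.0.23.jar,gson-2.7.jar \
  --driver-class-path mysql-connector-java-5.1.33.jar \
  jsonParseBySpark-0.0.1-SNAPSHOT.jar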

Partial log excerpts follow:

Did not find registered driver with class com.mysql.jdbc.Driver

Container: container_e11_1506782856952_0012_02_000001 on hadoop38_8041
===================================================================================
LogType:stderr
Log Upload Time:Mon May 21 17:27:16 +0800 2018
LogLength:68856
Log Contents:

18/05/21 17:27:07 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
18/05/21 17:27:08 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1506782856952_0012_000002
18/05/21 17:27:08 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop_user
18/05/21 17:27:08 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop_user
18/05/21 17:27:08 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop_user); users with modify permissions: Set(yarn, hadoop_user)
18/05/21 17:27:08 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
18/05/21 17:27:08 INFO yarn.ApplicationMaster: Waiting for spark context initialization
18/05/21 17:27:08 INFO yarn.ApplicationMaster: Waiting for spark context initialization ...
18/05/21 17:27:09 INFO spark.SparkContext: Running Spark version 1.6.0
18/05/21 17:27:09 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop_user
18/05/21 17:27:09 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop_user
18/05/21 17:27:09 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop_user); users with modify permissions: Set(yarn, hadoop_user)
18/05/21 17:27:09 INFO util.Utils: Successfully started service 'sparkDriver' on port 14752.
18/05/21 17:27:09 INFO slf4j.Slf4jLogger: Slf4jLogger started
18/05/21 17:27:09 INFO Remoting: Starting remoting
18/05/21 17:27:09 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@hadoop38:12325]
18/05/21 17:27:09 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriverActorSystem@hadoop38:12325]
18/05/21 17:27:09 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 12325.
18/05/21 17:27:09 INFO spark.SparkEnv: Registering MapOutputTracker
18/05/21 17:27:09 INFO spark.SparkEnv: Registering BlockManagerMaster
18/05/21 17:27:09 INFO storage.DiskBlockManager: Created local directory at /data/yarn/nm/usercache/hadoop_user/appcache/application_1506782856952_0012/blockmgr-35013f02-3d24-42c8-9be0-5e08e89d2b41
18/05/21 17:27:09 INFO storage.MemoryStore: MemoryStore started with capacity 491.7 MB
18/05/21 17:27:09 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/05/21 17:27:09 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
18/05/21 17:27:10 INFO util.Utils: Successfully started service 'SparkUI' on port 35226.
18/05/21 17:27:10 INFO ui.SparkUI: Started SparkUI at http://hadoop38:35226
18/05/21 17:27:10 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
18/05/21 17:27:10 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33946.
18/05/21 17:27:10 INFO netty.NettyBlockTransferService: Server created on 33946
18/05/21 17:27:10 INFO storage.BlockManager: external shuffle service port = 7337
18/05/21 17:27:10 INFO storage.BlockManagerMaster: Trying to register BlockManager
18/05/21 17:27:10 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop38:33946 with 491.7 MB RAM, BlockManagerId(driver, hadoop38, 33946)
18/05/21 17:27:10 INFO storage.BlockManagerMaster: Registered BlockManager
18/05/21 17:27:10 INFO scheduler.EventLoggingListener: Logging events to hdfs://hadoop42:8020/user/spark/applicationHistory/application_1506782856952_0012_2
18/05/21 17:27:10 WARN spark.SparkContext: Dynamic Allocation and num executors both set, thus dynamic allocation disabled.
18/05/21 17:27:10 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@hadoop38:14752)
18/05/21 17:27:10 INFO yarn.YarnRMClient: Registering the ApplicationMaster
18/05/21 17:27:10 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm171
18/05/21 17:27:10 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 2 cores and 6758 MB memory including 614 MB overhead
18/05/21 17:27:10 INFO yarn.YarnAllocator: Container request (host: Any, capability: )
18/05/21 17:27:10 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals

18/05/21 17:27:10 INFO impl.AMRMClientImpl: Received new token for : hadoop43:8041
18/05/21 17:27:10 INFO yarn.YarnAllocator: Launching container container_e11_1506782856952_0012_02_000002 for on host hadoop43
18/05/21 17:27:10 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@hadoop38:14752, executorHostname: hadoop43
18/05/21 17:27:10 INFO yarn.ExecutorRunnable: Starting Executor Container
18/05/21 17:27:10 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
18/05/21 17:27:10 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
18/05/21 17:27:10 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
18/05/21 17:27:10 INFO yarn.ExecutorRunnable: Preparing Local resources
18/05/21 17:27:10 INFO yarn.ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: 'hdfs' host: 'hadoop42' port: 8020 file: '/user/hadoop_user/jsonParseBySpark-0.0.1-SNAPSHOT.jar' } size: 374671 timestamp: 1526893648896 type: FILE visibility: PUBLIC, mysql-connector-java-5.1.33.jar -> resource { scheme: 'hdfs' host: 'hadoop42' port: 8020 file: '/user/hadoop_user/.sparkStaging/application_1506782856952_0012/mysql-connector-java-5.1.33.jar' } size: 959984 timestamp: 1526894813671 type: FILE visibility: PRIVATE, druid-1.0.13.jar -> resource { scheme: 'hdfs' host: 'hadoop42' port: 8020 file: '/user/hadoop_user/.sparkStaging/application_1506782856952_0012/druid-1.0.13.jar' } size: 1929981 timestamp: 1526894813590 type: FILE visibility: PRIVATE, druid-1.0.23.jar -> resource { scheme: 'hdfs' host: 'hadoop42' port: 8020 file: '/user/hadoop_user/.sparkStaging/application_1506782856952_0012/druid-1.0.23.jar' } size: 2113249 timestamp: 1526894813623 type: FILE visibility: PRIVATE, __spark_conf__ -> resource { scheme: 'hdfs' host: 'hadoop42' port: 8020 file: '/user/hadoop_user/.sparkStaging/application_1506782856952_0012/__spark_conf__5472283492530888899.zip' } size: 32034 timestamp: 1526894813735 type: ARCHIVE visibility: PRIVATE, gson-2.7.jar -> resource { scheme: 'hdfs' host: 'hadoop42' port: 8020 file: '/user/hadoop_user/.sparkStaging/application_1506782856952_0012/gson-2.7.jar' } size: 231952 timestamp: 1526894813647 type: FILE visibility: PRIVATE)

18/05/21 17:27:10 INFO yarn.ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{HADOOP_COMMON_HOME}}/../../../CDH-5.8.0-1.cdh5.8.0.p0.42/lib/spark/lib/spark-assembly.jar...
    SPARK_YARN_CACHE_ARCHIVES -> hdfs://hadoop42:8020/user/hadoop_user/.sparkStaging/application_1506782856952_0012/__spark_conf__5472283492530888899.zip#__spark_conf__
    SPARK_LOG_URL_STDERR -> http://hadoop43:8042/node/containerlogs/container_e11_1506782856952_0012_02_000002/hadoop_user/stderr?start=-4096
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 374671,1929981,2113249,231952,959984
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1506782856952_0012
    SPARK_DIST_CLASSPATH -> /data/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/jars/ST4-4.0.4.jar:/data/...
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PUBLIC,PRIVATE,PRIVATE,PRIVATE,PRIVATE
    SPARK_YARN_CACHE_ARCHIVES_FILE_SIZES -> 32034
    SPARK_USER -> hadoop_user
    SPARK_YARN_MODE -> true
    SPARK_YARN_CACHE_ARCHIVES_TIME_STAMPS -> 1526894813735
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1526893648896,1526894813590,1526894813623,1526894813647,1526894813671
    SPARK_LOG_URL_STDOUT -> http://hadoop43:8042/node/containerlogs/container_e11_1506782856952_0012_02_000002/hadoop_user/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://hadoop42:8020/user/hadoop_user/jsonParseBySpark-0.0.1-SNAPSHOT.jar#__app__.jar,hdfs://hadoop42:8020/user/hadoop_user/.sparkStaging/application_1506782856952_0012/druid-1.0.13.jar#druid-1.0.13.jar,hdfs://hadoop42:8020/user/hadoop_user/.sparkStaging/application_1506782856952_0012/druid-1.0.23.jar#druid-1.0.23.jar,hdfs://hadoop42:8020/user/hadoop_user/.sparkStaging/application_1506782856952_0012/gson-2.7.jar#gson-2.7.jar,hdfs://hadoop42:8020/user/hadoop_user/.sparkStaging/application_1506782856952_0012/mysql-connector-java-5.1.33.jar#mysql-connector-java-5.1.33.jar
    SPARK_YARN_CACHE_ARCHIVES_VISIBILITIES -> PRIVATE
  command:
    LD_LIBRARY_PATH='{{HADOOP_COMMON_HOME}}/../../../CDH-5.8.0-1.cdh5.8.0.p0.42/lib/hadoop/lib/native:$LD_LIBRARY_PATH' {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms6144m -Xmx6144m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=14752' '-Dspark.authenticate=false' '-Dspark.shuffle.service.port=7337' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=<LOG_DIR> -XX:MaxPermSize=256m org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@hadoop38:14752 --executor-id 1 --hostname hadoop43 --cores 2 --app-id application_1506782856952_0012 --user-class-path file:$PWD/__app__.jar --user-class-path file:$PWD/druid-1.0.13.jar --user-class-path file:$PWD/druid-1.0.23.jar --user-class-path file:$PWD/gson-2.7.jar --user-class-path file:$PWD/mysql-connector-java-5.1.33.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================

18/05/21 17:27:10 INFO impl.ContainerManagementProtocolProxy: Opening proxy : hadoop43:8041
18/05/21 17:27:14 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (hadoop43:45315) with ID 1
18/05/21 17:27:14 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop43:26493 with 3.1 GB RAM, BlockManagerId(1, hadoop43, 26493)
18/05/21 17:27:14 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
18/05/21 17:27:14 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
18/05/21 17:27:14 ERROR yarn.ApplicationMaster: User class threw exception: java.sql.SQLException: No suitable driver
java.sql.SQLException: No suitable driver
    at ...
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createConnectionFactory(JdbcUtils.scala:49)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:120)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
    at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:222)
    at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:146)
    at com.hadoop_user.init.P_DatasInit$.main(P_DatasInit.scala:42)
    at com.hadoop_user.init.P_DatasInit.main(P_DatasInit.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
    at ...
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)

LogType:stdout
Log Upload Time:Mon May 21 17:27:16 +0800 2018
LogLength:0
Log Contents:

Container: container_e11_1506782856952_0012_01_000002 on hadoop38_8041
===================================================================================
LogType:stderr
Log Upload Time:Mon May 21 17:27:16 +0800 2018
LogLength:67024
Log Contents:

18/05/21 17:26:59 INFO executor.CoarseGrainedExecutorBackend: Started daemon with process name: 7566@hadoop38
18/05/21 17:26:59 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
18/05/21 17:27:00 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop_user
18/05/21 17:27:00 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop_user
18/05/21 17:27:00 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop_user); users with modify permissions: Set(yarn, hadoop_user)
18/05/21 17:27:00 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop_user
18/05/21 17:27:00 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop_user
18/05/21 17:27:00 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop_user); users with modify permissions: Set(yarn, hadoop_user)
18/05/21 17:27:00 INFO slf4j.Slf4jLogger: Slf4jLogger started
18/05/21 17:27:00 INFO Remoting: Starting remoting
18/05/21 17:27:00 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutorActorSystem@hadoop38:12007]
18/05/21 17:27:00 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkExecutorActorSystem@hadoop38:12007]
18/05/21 17:27:00 INFO util.Utils: Successfully started service 'sparkExecutorActorSystem' on port 12007.
18/05/21 17:27:01 INFO storage.DiskBlockManager: Created local directory at /data/yarn/nm/usercache/hadoop_user/appcache/application_1506782856952_0012/blockmgr-9c6025ba-690f-4465-9232-8d1243ddbd15
18/05/21 17:27:01 INFO storage.MemoryStore: MemoryStore started with capacity 3.1 GB
18/05/21 17:27:01 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler@hadoop43:28027
18/05/21 17:27:01 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver
18/05/21 17:27:01 INFO executor.Executor: Starting executor ID 1 on host hadoop38
18/05/21 17:27:01 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 28851.
18/05/21 17:27:01 INFO netty.NettyBlockTransferService: Server created on 28851
18/05/21 17:27:01 INFO storage.BlockManager: external shuffle service port = 7337
18/05/21 17:27:01 INFO storage.BlockManagerMaster: Trying to register BlockManager
18/05/21 17:27:01 INFO storage.BlockManagerMaster: Registered BlockManager
18/05/21 17:27:01 INFO storage.BlockManager: Registering executor with local external shuffle service.
18/05/21 17:27:04 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 0
18/05/21 17:27:04 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1
18/05/21 17:27:04 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID 1)
18/05/21 17:27:04 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
18/05/21 17:27:04 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 0
18/05/21 17:27:04 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 26.5 KB, free 26.5 KB)
18/05/21 17:27:04 INFO broadcast.TorrentBroadcast: Reading broadcast variable 0 took 112 ms
18/05/21 17:27:04 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 101.3 KB, free 127.7 KB)
18/05/21 17:27:05 INFO spark.CacheManager: Partition rdd_3_0 not found, computing it
18/05/21 17:27:05 INFO spark.CacheManager: Another thread is loading rdd_3_0, waiting for it to finish...
18/05/21 17:27:05 INFO spark.CacheManager: Finished waiting for rdd_3_0

18/05/21 17:27:05 INFO jdbc.JDBCRDD: closed connection
18/05/21 17:27:05 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.IllegalStateException: Did not find registered driver with class com.mysql.jdbc.Driver
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$2$$anonfun$3.apply(JdbcUtils.scala:58)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$2$$anonfun$3.apply(JdbcUtils.scala:58)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$2.apply(JdbcUtils.scala:57)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$2.apply(JdbcUtils.scala:52)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.<init>(JDBCRDD.scala:347)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:339)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at ...
    at ...
    at ...
18/05/21 17:27:05 INFO spark.CacheManager: Whoever was loading rdd_3_0 failed; we'll try it ourselves
18/05/21 17:27:05 INFO spark.CacheManager: Partition rdd_3_0 not found, computing it
18/05/21 17:27:05 INFO jdbc.JDBCRDD: closed connection
18/05/21 17:27:05 ERROR executor.Executor: Exception in task 1.0 in stage 0.0 (TID 1)
java.lang.IllegalStateException: Did not find registered driver
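A note on the exception itself: in Spark 1.6, JdbcUtils.createConnectionFactory first checks for a user-supplied "driver" property and only falls back to java.sql.DriverManager otherwise, which is exactly the code path in both stack traces above ("No suitable driver" on the driver side, "Did not find registered driver" on the executors). Independent of the NodeManager workaround described here, a commonly suggested mitigation is to name the driver class explicitly when reading over JDBC. A minimal sketch against the Spark 1.6 API; the URL, table, and credentials are placeholders, not values from the original job:

import java.util.Properties

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object JdbcReadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("JdbcReadSketch"))
    val sqlContext = new SQLContext(sc)

    val props = new Properties()
    props.setProperty("user", "dbuser")         // placeholder credentials
    props.setProperty("password", "dbpassword") // placeholder credentials
    // With "driver" set, Spark registers the class through its own
    // DriverRegistry instead of relying on DriverManager discovery,
    // which is sensitive to which classloader loaded the connector jar.
    props.setProperty("driver", "com.mysql.jdbc.Driver")

    // Goes through the same DataFrameReader.jdbc path that appears
    // in the stack traces above.
    val df = sqlContext.read.jdbc("jdbc:mysql://dbhost:3306/mydb", "my_table", props)
    df.show()

    sc.stop()
  }
}

This addresses the usual classloader cause of these two errors; it would not by itself fix a node that never receives the connector jar at all, which is what the hadoop38 behaviour above points to.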
