Spark job submitted to YARN fails with "No suitable driver"

When submitting a Spark job to YARN, it failed with a "No suitable driver" error because no JDBC driver was configured. Adding a "driver" option set to "com.mysql.jdbc.Driver" in the code resolved the problem. It is a reminder of how important it is to specify the JDBC driver explicitly when configuring a Spark data source.

1. The error message

java.sql.SQLException: No suitable driver
    at java.sql.DriverManager.getDriver(DriverManager.java:315)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:306)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
    at com.dataexa.cp.base.datasource.DataBaseToDF.convert(DataBaseToDF.scala:22)
    at com.dataexa.cp.base.datasource.DataSourceReader$$anonfun$getResult$1.apply(DataSourceReader.scala:63)
    at com.dataexa.cp.base.datasource.DataSourceReader$$anonfun$getResult$1.apply(DataSourceReader.scala:56)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.MapLike$DefaultKeySet.foreach(MapLike.scala:174)
    at com.dataexa.cp.base.datasource.DataSourceReader.getResult(DataSourceReader.scala:56)
    at com.dataexa.cp.base.datasource.DataSourceReader$.main(DataSourceReader.scala:125)
    at com.dataexa.cp.base.datasource.DataSourceReader.main(DataSourceReader.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
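The top frame of the trace shows where this goes wrong: when no "driver" option is given, Spark's `JDBCOptions` falls back to `DriverManager.getDriver(url)`, which only matches driver classes that have already been registered. A minimal standalone sketch of that lookup (no Spark needed; the URL below is a placeholder, not the one from the original job):

```scala
import java.sql.{DriverManager, SQLException}

// Ask DriverManager for a driver matching a MySQL JDBC URL. If the MySQL
// driver class was never loaded, no driver is registered for this URL and
// getDriver throws the same "No suitable driver" SQLException as the trace.
val url = "jdbc:mysql://localhost:3306/test" // placeholder URL
val result =
  try {
    DriverManager.getDriver(url).getClass.getName
  } catch {
    case e: SQLException => e.getMessage
  }
println(result)
```

Specifying the "driver" option makes Spark load that class by name first, which registers it with `DriverManager` before the lookup happens.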

2. The part of the code that triggers the error

case class DataBaseToDF(sparkSession: SparkSession) {

  def convert(dataBase: DataBase): DataFrame = {
    val dataFrame = sparkSession.read.format(dataBase.getDbType)
      .options(Map(
        "url" -> dataBase.getUrl,
        "inferschema" -> "true",
        "dbtable" -> dataBase.getTableName,
        "user" -> dataBase.getUsername,
        "password" -> dataBase.getPassword))
      // .option("inferschema", "true")
      // .option("url", dataBase.getUrl)
      // .option("dbtable", dataBase.getTableName)
      // .option("user", dataBase.getUsername)
      // .option("password", dataBase.getPassword)
      .load()
    dataFrame
  }
}

The error was thrown while Spark was reading from a MySQL database.

Adding the "driver" option to the code solved the problem. A colleague had said it wasn't needed when writing this; that advice was a real trap!

case class DataBaseToDF(sparkSession: SparkSession) {

  def convert(dataBase: DataBase): DataFrame = {
    val dataFrame = sparkSession.read.format(dataBase.getDbType)
      .options(Map(
        "url" -> dataBase.getUrl,
        "inferschema" -> "true",
        "driver" -> "com.mysql.jdbc.Driver", // the added option
        "dbtable" -> dataBase.getTableName,
        "user" -> dataBase.getUsername,
        "password" -> dataBase.getPassword))
      // .option("inferschema", "true")
      // .option("url", dataBase.getUrl)
      // .option("dbtable", dataBase.getTableName)
      // .option("user", dataBase.getUsername)
      // .option("password", dataBase.getPassword)
      .load()
    dataFrame
  }
}
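The same fix can also be expressed through the `DataFrameReader.jdbc(url, table, properties)` overload, where a "driver" key in the connection `Properties` plays the role of the option above. A sketch with placeholder connection details (not from the original post; only the `Properties` part runs without a live SparkSession):

```scala
import java.util.Properties

// Placeholder connection details for illustration.
val props = new Properties()
props.setProperty("user", "root")
props.setProperty("password", "secret")
props.setProperty("driver", "com.mysql.jdbc.Driver") // the decisive entry
// With a real SparkSession this would become:
// val df = sparkSession.read.jdbc("jdbc:mysql://host:3306/db", "my_table", props)
println(props.getProperty("driver"))
```

Either way, the MySQL connector jar itself must still be on the classpath of the driver and executors, for example by passing it to spark-submit via `--jars`. Note that Connector/J 8.x renamed the class to `com.mysql.cj.jdbc.Driver`; `com.mysql.jdbc.Driver` matches the 5.x series used here.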


Source: https://blog.csdn.net/love_zy0216/article/details/89212035
