java.sql.SQLException: No suitable driver -- spark-shell fails to load MySQL data over JDBC, unresolved (but the Properties approach works)

When Spark reads data through the jdbc source format,
the MySQL driver jar has to be on the classpath, so copy it into Spark's jars directory.
Mind whether this is a single-node Spark or a cluster:
on a cluster, the jar has to be copied to every node.
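A quick way to confirm the driver actually made it onto the driver-side classpath is to load the class by name inside spark-shell (a minimal sketch; the class name matches the Connector/J 5.x jar used in this post):

// throws ClassNotFoundException if the driver jar is not on the classpath
Class.forName("com.mysql.jdbc.Driver")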

There are a few ways to load the driver jar.

1. Pass --jars when launching spark-shell

[root@hadoop01 spark-2.2.0-bin-hadoop2.7]#
bin/spark-shell --jars mysql-connector-java-5.1.7-bin.jar --driver-class-path mysql-connector-java-5.1.7-bin.jar (full paths are required)

bin/spark-shell --jars /usr/local/spark-2.2.0-bin-hadoop2.7/mysql-connector-java-5.1.7-bin.jar --driver-class-path /usr/local/spark-2.2.0-bin-hadoop2.7/mysql-connector-java-5.1.7-bin.jar
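Once the shell is up, you can check that the jar was registered (a sketch; SparkContext.listJars in Spark 2.x lists the jars added via --jars or addJar):

// prints the jars Spark will ship to executors; the MySQL connector should appear here
sc.listJars().foreach(println)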
2. Configure it through option()

val jdbcDF = spark.read.format("jdbc")
.option("driver","com.mysql.jdbc.Driver")
.option("url", "jdbc:mysql//hadoop01:3306/test")
.option("dbtable", "u")
.option("user","root")
.option("password","root").load()

But in the end this didn't help either.
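To see what "No suitable driver" actually means here, you can list the JDBC drivers registered in the current JVM (a minimal sketch, runnable in spark-shell; it only shows drivers visible to the caller's classloader):

import java.sql.DriverManager
// enumerate every JDBC driver currently registered with DriverManager
val drivers = DriverManager.getDrivers()
while (drivers.hasMoreElements) println(drivers.nextElement().getClass.getName)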


The failing command:

scala> val jdbcDF = spark.read.format("jdbc").option("url", "jdbc:mysql//hadoop01:3306/test").option("dbtable", "u").option("user","root").option("password","root").load()

The error: java.sql.SQLException: No suitable driver

java.sql.SQLException: No suitable driver
  at java.sql.DriverManager.getDriver(DriverManager.java:315)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:306)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
  ... 48 elided
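The top frame shows the lookup fails in plain java.sql.DriverManager on the driver side, before Spark does anything JDBC-specific, so you can reproduce it without Spark (a sketch; getDriver asks each registered driver whether it accepts the URL and throws when none does):

import java.sql.DriverManager
// the same lookup Spark performs, with the URL exactly as passed above;
// throws "No suitable driver" when no registered driver accepts the URL
DriverManager.getDriver("jdbc:mysql//hadoop01:3306/test")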

Then I copied the MySQL jar into Spark's jars directory,
but the next launch hit exactly the same error.

Specifying .option("driver","com.mysql.jdbc.Driver") didn't make the driver findable either; that variant throws a NullPointerException instead. Time for another approach:
val jdbcDF = spark.read.format("jdbc")
.option("driver","com.mysql.jdbc.Driver")
.option("url", "jdbc:mysql//hadoop01:3306/test")
.option("dbtable", "u")
.option("user","root")
.option("password","root").load()

So, second launch: spark-shell --jars

[root@hadoop01 spark-2.2.0-bin-hadoop2.7]# 
bin/spark-shell --jars mysql-connector-java-5.1.7-bin.jar

The error:

java.io.FileNotFoundException:
 Jar /usr/local/spark-2.2.0-bin-hadoop2.7/mysql-connector-java-5.1.7-bin.jar 
 not found

spark-shell went looking for the jar under the Spark root directory (the working directory), not under jars: a relative path after --jars is resolved against wherever the shell is launched from.

So I copied the MySQL driver jar into the Spark root directory as well.

Launched again: bin/spark-shell --jars mysql-connector-java-5.1.7-bin.jar

Success.

Reading the data again still failed with the no-driver error.

It was only while shutting Spark down that I noticed something:

note this is node 112, while I had started spark-shell on node 111.

scala> 19/11/19 09:33:58 ERROR TaskSchedulerImpl: Lost executor 0 on 192.168.37.112: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.

So:
because I had started the Spark cluster before launching spark-shell, the shell wasn't necessarily talking to the local Spark,
but to executors on other nodes,
and those other nodes had no MySQL driver.
That's why it kept failing, no matter what I did on node 111.
What a trap.
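Before copying jars around blindly, there is a way to check whether the executors are really the ones missing the driver: probe the class from inside a job (a minimal sketch; it runs one lookup per partition on whichever executors pick up the tasks):

import scala.util.Try
// each task reports whether com.mysql.jdbc.Driver is loadable on its executor
sc.parallelize(1 to 4, 4)
  .map(_ => Try(Class.forName("com.mysql.jdbc.Driver")).isSuccess)
  .collect()
  .foreach(println)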

After fixing that, it still errors out!!!! Unresolved.

There are two ways for Spark SQL to read over JDBC. The first fails no matter what I change, and I have no idea why???

val jdbcDF = spark.read.format("jdbc").option("driver","com.mysql.jdbc.Driver").option("url", "jdbc:mysql//hadoop01:3306/test").option("dbtable", "u").option("user","root").option("password","root").load()
val jdbcDF = spark.read.format("jdbc").option("url", "jdbc:mysql//hadoop01:3306/test").option("dbtable", "u").option("user","root").option("password","root").load()



val jdbcDF = spark.read.format("jdbc")
.option("url", "jdbc:mysql//hadoop01:3306/test")
.option("dbtable", "u")
.option("user","root")
.option("password","root")
.load()

But the Properties-based configuration works!!


val connectionProperties = new java.util.Properties()
connectionProperties.put("user", "root")
connectionProperties.put("password", "root")
val jdbcDF2 = spark.read.jdbc("jdbc:mysql://hadoop01:3306/test", "u", connectionProperties)
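Putting the two side by side suggests why only this one succeeds: the Properties call spells the URL jdbc:mysql://hadoop01:3306/test, while every failing option() call spells it jdbc:mysql//hadoop01:3306/test, with the colon after mysql missing. The MySQL driver's acceptsURL only recognizes URLs beginning with jdbc:mysql:, so the malformed scheme alone produces "No suitable driver" regardless of which jars are installed (and, once the driver class is forced, the null connection behind the NPE). A sketch of the option() form with the URL corrected; untested in this session, but it should resolve the driver exactly as the Properties call does:

val jdbcDF = spark.read.format("jdbc")
  .option("url", "jdbc:mysql://hadoop01:3306/test")  // note the "://"; the failing calls had "jdbc:mysql//"
  .option("dbtable", "u")
  .option("user", "root")
  .option("password", "root")
  .load()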

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val jdbcDF = spark.read.format("jdbc").option("url", "jdbc:mysql//hadoop01:3306/test").option("dbtable", "u").option("user","root").option("password","root").load()
java.sql.SQLException: No suitable driver
  at java.sql.DriverManager.getDriver(DriverManager.java:315)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:306)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
  ... 48 elided

scala> val jdbcDF = spark.read.format("jdbc").option("driver","com.mysql.jdbc.Driver").option("url", "jdbc:mysql//hadoop01:3306/test").option("dbtable", "u").option("user","root").option("password","root").load()
java.lang.NullPointerException
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:72)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:113)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:47)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:306)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
  ... 48 elided

scala> val connectionProperties = new java.util.Properties()
connectionProperties: java.util.Properties = {}

scala> connectionProperties.put("user", "root")
res0: Object = null

scala> connectionProperties.put("password", "root")
res1: Object = null

scala> val jdbcDF2 = spark.read.jdbc("jdbc:mysql://hadoop01:3306/test", "u", connectionProperties)
jdbcDF2: org.apache.spark.sql.DataFrame = [id: int, name: string]

scala> val jdbcDF = spark.read.format("jdbc")
jdbcDF: org.apache.spark.sql.DataFrameReader = org.apache.spark.sql.DataFrameReader@399ac1a3

scala> .option("url", "jdbc:mysql//hadoop01:3306/test")
res2: org.apache.spark.sql.DataFrameReader = org.apache.spark.sql.DataFrameReader@399ac1a3

scala> .option("dbtable", "u")
res3: org.apache.spark.sql.DataFrameReader = org.apache.spark.sql.DataFrameReader@399ac1a3

scala> .option("user","root")
res4: org.apache.spark.sql.DataFrameReader = org.apache.spark.sql.DataFrameReader@399ac1a3

scala> .option("password","root")
res5: org.apache.spark.sql.DataFrameReader = org.apache.spark.sql.DataFrameReader@399ac1a3

scala> .load()
java.sql.SQLException: No suitable driver
  at java.sql.DriverManager.getDriver(DriverManager.java:315)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:306)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
  ... 48 elided

scala>