org.apache.spark.SparkException: Unable to create database default as failed to create its directory

While debugging Spark locally in IDEA on Windows 10, the following error appears:

Exception in thread "main" org.apache.spark.SparkException: Unable to create database default as failed to create its directory file:/D:/My%2520Works/dev/pro/git/dhe/spark-warehouse
	at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.liftedTree1$1(InMemoryCatalog.scala:115)
	at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.doCreateDatabase(InMemoryCatalog.scala:109)
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalog.createDatabase(ExternalCatalog.scala:69)
	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:117)
	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
	at org.apache.spark.sql.internal.BaseSessionStateBuilder.catalog$lzycompute(BaseSessionStateBuilder.scala:133)
	at org.apache.spark.sql.internal.BaseSessionStateBuilder.catalog(BaseSessionStateBuilder.scala:131)
	at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anon$1.<init>(BaseSessionStateBuilder.scala:157)
	at org.apache.spark.sql.internal.BaseSessionStateBuilder.analyzer(BaseSessionStateBuilder.scala:157)
	at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
	at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
	at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
	at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
	at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:432)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:233)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
	at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:254)
	at TestRule$.main(TestRule.scala:25)
	at TestRule.main(TestRule.scala)
Caused by: ExitCodeException exitCode=-1073741515: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
	at org.apache.hadoop.util.Shell.run(Shell.java:901)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:840)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:522)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:562)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:534)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:561)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:534)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:561)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:534)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:561)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:534)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:561)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:534)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:561)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:534)
	at org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:705)
	at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.liftedTree1$1(InMemoryCatalog.scala:112)
	... 23 more
19/03/28 10:22:17 INFO SparkContext: Invoking stop() from shutdown hook
19/03/28 10:22:17 INFO SparkUI: Stopped Spark web UI at http://DESKTOP-UGQ3VF9:4040
19/03/28 10:22:17 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/03/28 10:22:17 INFO MemoryStore: MemoryStore cleared
19/03/28 10:22:17 INFO BlockManager: BlockManager stopped
19/03/28 10:22:17 INFO BlockManagerMaster: BlockManagerMaster stopped
19/03/28 10:22:17 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/03/28 10:22:17 INFO SparkContext: Successfully stopped SparkContext
19/03/28 10:22:17 INFO ShutdownHookManager: Shutdown hook called
19/03/28 10:22:17 INFO ShutdownHookManager: Deleting directory C:\Users\gu\AppData\Local\Temp\spark-606e2938-e589-4e1e-a106-a163f6b205dc

The code uses Spark to read a MySQL data source:

import java.util.Properties

import org.apache.spark.sql.SparkSession

object TestRule {

  def main(args: Array[String]): Unit = {

    val session = SparkSession.builder().master("local").appName("TestMysql").getOrCreate()

    val url = "jdbc:mysql://10.12.52.133:3306/test"
    val table = "vps_key_account_user_tkl_test"
    val properties = new Properties()
    properties.setProperty("user", "bdpdoop")
    properties.setProperty("password", "q1w2e3r4ys")

    // Pass in the MySQL URL, the table name, and a Properties object
    // holding the database user name and password.
    val df = session.read.jdbc(url, table, properties)
    df.createOrReplaceTempView(table)
    session.sql(s"select * from ${table}").show()
  }
}
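
For the JDBC read to work at all, the MySQL driver has to be on the classpath. A hedged example for an sbt build (the artifact version is an assumption; pick one that matches your MySQL server):

libraryDependencies += "mysql" % "mysql-connector-java" % "5.1.47"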

Solution: per the error message, the directory D:\My%20Works\dev\pro\git\dhe\spark-warehouse does not exist on drive D:. You can create the directory manually, or simply rerun the program a few times until the spark-warehouse directory gets created. (Note the %20, and the %2520 in the stack trace: they are URL-encodings of the space in the "My Works" folder name, which is likely what trips up directory creation.)
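
A sturdier workaround (not from the original post) is to avoid the space in the path altogether by pointing spark.sql.warehouse.dir at a space-free directory when building the SparkSession. A minimal sketch, assuming D:/tmp/spark-warehouse is a writable local path (any space-free directory will do):

import org.apache.spark.sql.SparkSession

// Set an explicit warehouse directory on a path without spaces so that
// Hadoop's local filesystem can create it. The exact path below is an
// assumption; substitute any writable local directory.
val session = SparkSession.builder()
  .master("local")
  .appName("TestMysql")
  .config("spark.sql.warehouse.dir", "file:///D:/tmp/spark-warehouse")
  .getOrCreate()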
