Running Spark code from a local machine against a standalone cluster threw a pile of errors during testing
Environment: Windows 10
The code is as follows:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName(this.getClass.getName)
  .master("spark://master:7077")  // standalone cluster master
  // read Hive Parquet tables through Hive's SerDe instead of Spark's native reader
  .config("spark.sql.hive.convertMetastoreParquet", false)
  .config("hive.metastore.uris", "thrift://192.168.**.**:9083")
  .config("spark.sql.warehouse.dir", "hdfs://192.168.**.**:9000/user/hive/warehouse")
  // write Parquet in the legacy format so older Hive readers can consume it
  .config("spark.sql.parquet.writeLegacyFormat", true)
  .config("spark.executor.memory", "512M")
  .config("spark.executor.cores", "1")
  .config("spark.executor.instances", "2")  // request two executors
  .enableHiveSupport()
  .getOrCreate()
spark.sql("show databases").show()
The following error was reported:
19/08/21 10:39:42 WARN Hive: Failed to access metastore. This class should not accessed in runtime.
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Solution:
Install Hadoop and Spark in the Windows 10 environment: download the matching versions from the official sites, extract them, set the environment variables (typically HADOOP_HOME and SPARK_HOME, with their bin directories on PATH), and restart.
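On Windows, Hadoop also needs its native launcher winutils.exe under %HADOOP_HOME%\bin, or Hive client initialization tends to keep failing. Below is a minimal Scala sanity check to run before building the session; the check itself is only an illustration, and it assumes the conventional HADOOP_HOME layout:

import java.nio.file.{Files, Paths}

// Verify the Hadoop environment configured above is actually visible to the
// JVM. HADOOP_HOME and the winutils.exe location follow the standard Windows
// convention; adjust if your layout differs.
val hadoopHome = sys.env.getOrElse("HADOOP_HOME",
  sys.error("HADOOP_HOME is not set; set it and restart the IDE/shell"))
val winutils = Paths.get(hadoopHome, "bin", "winutils.exe")
require(Files.exists(winutils), s"winutils.exe not found at $winutils")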
After that, the following error appeared:
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable.
Current permissions are: rwxr-xr-x
Solution:
This error is really about insufficient permissions on the local /tmp/hive directory that Spark reads, not an HDFS one (the message says HDFS, but even after granting 777 to all users on the HDFS /tmp/hive, the problem persisted). Delete the pre-existing local /tmp/hive and rerun the code; the directory is recreated automatically. (Playing with Spark on Windows 10 means running into a pile of bugs...)
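The manual cleanup can also be scripted. A minimal sketch, assuming the local scratch dir resolves to \tmp\hive on the drive the job is launched from (C:\ here):

import java.io.File

// Recursively delete the stale local scratch dir; Spark recreates it with
// usable permissions on the next run. C:\tmp\hive is an assumption; adjust
// it to the drive you actually launch from.
def deleteRecursively(f: File): Unit = {
  Option(f.listFiles()).foreach(_.foreach(deleteRecursively))
  f.delete()
}
val scratch = new File("""C:\tmp\hive""")
if (scratch.exists()) deleteRecursively(scratch)

A commonly cited alternative is to relax the permissions with Hadoop's own tool instead of deleting the directory: winutils.exe chmod 777 \tmp\hive.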
After that, the problem was solved.