spark-thrift-server reports "Wrong FS"

The error

Executing a DROP TABLE statement through spark-thrift-server fails with the error below.
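
The statement can be issued through beeline against the Thrift server; a sketch, with placeholder host/port and the table name taken from the error message:

# Drop the table via the Thrift server's JDBC endpoint (placeholders: host/port)
beeline -u jdbc:hive2://localhost:10000 -e "DROP TABLE meta_data.dt_segment;"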

Error: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Wrong FS: hdfs://RMSS02ETL:9000/user/hive/warehouse/meta_data.db/dt_segment, expected: hdfs://hadoopmaster) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:361) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:263) 
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:78) 
    at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:62) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:263) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:258) 
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:272) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Wrong FS: hdfs://RMSS02ETL:9000/user/hive/warehouse/meta_data.db/dt_segment, expected: hdfs://hadoopmaster)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:112)
    at org.apache.spark.sql.hive.HiveExternalCatalog.dropTable(HiveExternalCatalog.scala:517) 
    at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.dropTable(ExternalCatalogWithListener.scala:104) 
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.dropTable(SessionCatalog.scala:778) 
    at org.apache.spark.sql.execution.command.DropTableCommand.run(ddl.scala:248) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79) 
    at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103) 
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163) 
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772) 
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) 
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685) 
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228) 
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99) 
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772) 
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96) 
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:615) 
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772) 
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:610) 
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:325)
    ... 16 more

Root cause

  • Hadoop runs in HA mode with two namenodes. The --conf spark.sql.warehouse.dir address configured for spark-thrift-server pointed at one physical namenode (hdfs://RMSS02ETL:9000) and must be changed to the nameservice address (see the check after this list).
  • hive-metastore itself is configured with the nameservice address, so the two sides disagree: databases and tables created through the misconfigured Thrift server were registered with the raw namenode URI. Creating and querying them still works because that URI resolves, but DROP TABLE fails because Hive validates the table location against the expected default filesystem (hdfs://hadoopmaster) and raises Wrong FS on the mismatch.
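
A quick way to confirm what the cluster's client configuration expects, using the stock HDFS CLI on the spark-thrift-server host:

# Both keys should resolve to the HA nameservice, hdfs://hadoopmaster
hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey dfs.nameservices
# List the physical namenode hosts behind the nameservice
hdfs getconf -namenodes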

Inspect the Hive metadata

  • hive.dbs - database metadata, one row per Hive database (includes DB_LOCATION_URI)
  • hive.sds - storage descriptors for tables and partitions (includes each LOCATION)
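
These tables live in the metastore's backing RDBMS, not in HDFS. A minimal sketch for opening a session to run the queries below, assuming a MySQL metastore database named hive (host and credentials are placeholders):

# Connect to the metastore database (placeholders: host, user)
mysql -h metastore-host -u hive -p hive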

Check the default HDFS location

 select * from hive.dbs where NAME='default'; 

The default database's location uses the nameservice address:

+-------+-----------------------+-----------------------------------------+---------+------------+------------+ 
| DB_ID | DESC                  | DB_LOCATION_URI                         | NAME    | OWNER_NAME | OWNER_TYPE |
+-------+-----------------------+-----------------------------------------+---------+------------+------------+ 
| 1     | Default Hive database | hdfs://hadoopmaster/user/hive/warehouse | default | public     | ROLE       | 
+-------+-----------------------+-----------------------------------------+---------+------------+------------+ 

Find the incorrect HDFS locations

select * from hive.dbs where DB_LOCATION_URI like '%RMSS02ETL%'; 

The bad entries use a physical namenode address instead:

+-------+------+--------------------------------------------------------+-----------+------------+------------+ 
| DB_ID | DESC | DB_LOCATION_URI                                        | NAME      | OWNER_NAME | OWNER_TYPE |
+-------+------+--------------------------------------------------------+-----------+------------+------------+ 
|    12 |      | hdfs://RMSS02ETL:9000/user/hive/warehouse/meta_data.db | meta_data | hive       | USER       |
+-------+------+--------------------------------------------------------+-----------+------------+------------+

Inspect the Hive table metadata

select * from hive.sds where LOCATION like '%RMSS02ETL%'\G

The LOCATION field likewise points at the physical namenode:

                    SD_ID: 3768
                    CD_ID: 378
             INPUT_FORMAT: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
            IS_COMPRESSED:
IS_STOREDASSUBDIRECTORIES:
                 LOCATION: hdfs://RMSS02ETL:9000/user/hive/warehouse/meta_data.db/dt_segment
              NUM_BUCKETS: -1
            OUTPUT_FORMAT: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
                 SERDE_ID: 3768

Fix the spark-thrift-server configuration

Change the --conf spark.sql.warehouse.dir parameter to the nameservice address, then restart spark-thrift-server so the change takes effect.
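
A sketch of the restart, assuming a standalone Thrift server launched from $SPARK_HOME (any other launch options you normally pass are elided here):

# Restart with the warehouse dir pointing at the HA nameservice
$SPARK_HOME/sbin/stop-thriftserver.sh
$SPARK_HOME/sbin/start-thriftserver.sh \
  --conf spark.sql.warehouse.dir=hdfs://hadoopmaster/user/hive/warehouse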

Fix the Hive metadata
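
Before editing the metastore in place, back up the two tables that are about to change; a sketch assuming a MySQL metastore (credentials are placeholders):

# Dump only the dbs and sds tables from the hive metastore database
mysqldump -u hive -p hive dbs sds > hive_meta_backup.sql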

Fix the database-level metadata

update hive.dbs set DB_LOCATION_URI=REPLACE(DB_LOCATION_URI,'RMSS02ETL:9000','hadoopmaster'); 

Fix the table-level metadata

update hive.sds set LOCATION=REPLACE(LOCATION,'RMSS02ETL:9000','hadoopmaster');
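
After both updates, the following counts should come back as 0 (same MySQL metastore assumption as above):

# Verify no metadata still points at the raw namenode address
mysql -u hive -p hive -e "
  SELECT COUNT(*) FROM dbs WHERE DB_LOCATION_URI LIKE '%RMSS02ETL%';
  SELECT COUNT(*) FROM sds WHERE LOCATION LIKE '%RMSS02ETL%';"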

Finally, retry the DROP TABLE (e.g. the beeline command from the top); it now succeeds.
