Spark code that uses Hive cannot find the database when run on YARN

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/nm-local-dir/usercache/root/filecache/19/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/11/22 10:58:40 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
17/11/22 10:58:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/22 10:58:42 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1511318642565_0002_000001
17/11/22 10:58:43 INFO spark.SecurityManager: Changing view acls to: root
17/11/22 10:58:43 INFO spark.SecurityManager: Changing modify acls to: root
17/11/22 10:58:43 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
17/11/22 10:58:44 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
17/11/22 10:58:44 INFO yarn.ApplicationMaster: Waiting for spark context initialization
17/11/22 10:58:44 INFO yarn.ApplicationMaster: Waiting for spark context initialization ...
17/11/22 10:58:44 INFO spark.SparkContext: Running Spark version 1.6.0
17/11/22 10:58:44 INFO spark.SecurityManager: Changing view acls to: root
17/11/22 10:58:44 INFO spark.SecurityManager: Changing modify acls to: root
17/11/22 10:58:44 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
17/11/22 10:58:44 INFO util.Utils: Successfully started service 'sparkDriver' on port 36191.
17/11/22 10:58:44 INFO slf4j.Slf4jLogger: Slf4jLogger started
17/11/22 10:58:45 INFO Remoting: Starting remoting
17/11/22 10:58:45 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.174.24:43438]
17/11/22 10:58:45 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 43438.
17/11/22 10:58:45 INFO spark.SparkEnv: Registering MapOutputTracker
17/11/22 10:58:45 INFO spark.SparkEnv: Registering BlockManagerMaster
17/11/22 10:58:45 INFO storage.DiskBlockManager: Created local directory at /opt/hadoop/nm-local-dir/usercache/root/appcache/application_1511318642565_0002/blockmgr-5fc2424c-0be7-442a-b358-aec990606e96
17/11/22 10:58:45 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB
17/11/22 10:58:45 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/11/22 10:58:45 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
17/11/22 10:58:45 INFO server.Server: jetty-8.y.z-SNAPSHOT
17/11/22 10:58:45 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:50918
17/11/22 10:58:45 INFO util.Utils: Successfully started service 'SparkUI' on port 50918.
17/11/22 10:58:45 INFO ui.SparkUI: Started SparkUI at http://192.168.174.24:50918
17/11/22 10:58:45 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
17/11/22 10:58:45 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50119.
17/11/22 10:58:45 INFO netty.NettyBlockTransferService: Server created on 50119
17/11/22 10:58:45 INFO storage.BlockManagerMaster: Trying to register BlockManager
17/11/22 10:58:46 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.174.24:50119 with 517.4 MB RAM, BlockManagerId(driver, 192.168.174.24, 50119)
17/11/22 10:58:46 INFO storage.BlockManagerMaster: Registered BlockManager
17/11/22 10:58:46 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@192.168.174.24:36191)
17/11/22 10:58:46 INFO yarn.YarnRMClient: Registering the ApplicationMaster
17/11/22 10:58:46 INFO yarn.YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
17/11/22 10:58:46 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)

org.apache.spark.sql.execution.QueryExecutionException: FAILED: SemanticException [Error 10072]: Database does not exist: spark)
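
This SemanticException is the crux: in yarn-cluster mode the driver runs inside the cluster, and if hive-site.xml never reaches that container, Spark silently falls back to a local Derby metastore that contains only the default database, so any reference to the spark database fails. Below is a minimal Scala sketch of the kind of job that reproduces this; the database name spark comes from the error above, while the table name person is hypothetical:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object HiveOnYarn {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HiveOnYarn"))
    // HiveContext resolves databases through whatever metastore the driver's
    // classpath advertises; without hive-site.xml that is an empty local Derby one.
    val hiveContext = new HiveContext(sc)
    hiveContext.sql("use spark")                     // throws SemanticException [Error 10072] here
    hiveContext.sql("select * from person").show()   // "person" is a hypothetical table
    sc.stop()
  }
}
```

A common fix on Spark 1.6 is to ship the Hive configuration with the submission, for example `spark-submit --master yarn-cluster --files /usr/local/hive/conf/hive-site.xml ...` (the path is an assumption; use your own $HIVE_HOME/conf), or to copy hive-site.xml into $SPARK_HOME/conf on the machine that runs the driver.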

17/11/22 11:01:16 INFO spark.SparkContext: Invoking stop() from shutdown hook
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static/sql,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
17/11/22 11:01:17 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.174.27:33830
17/11/22 11:01:17 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
17/11/22 11:01:17 INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down
17/11/22 11:01:17 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/11/22 11:01:17 INFO storage.MemoryStore: MemoryStore cleared
17/11/22 11:01:17 INFO storage.BlockManager: BlockManager stopped
17/11/22 11:01:17 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/11/22 11:01:18 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!

### Answer 1:

If SparkSQL cannot find a Hive database or Hive table when connecting to Hive, the cause is usually one of the following:

1. The Hive Metastore is not running or the connection to it failed. The Metastore is Hive's metadata service; without it, no Hive database or table can be resolved. Check the Metastore logs, or connect to it with the Hive command-line tool, to narrow the problem down.
2. SparkSQL is misconfigured. Connecting to Hive requires the right SparkSQL parameters, such as hive.metastore.uris and hive.exec.dynamic.partition.mode. Wrong values also produce "database or table not found" errors. Check the SparkSQL configuration files, or set the parameters in code.
3. The Hive database or table simply does not exist. If the Metastore is up and the configuration is correct but lookups still fail, verify with the Hive command line or a tool like Hue that the objects exist, and create them first if they do not.

In short, a missing Hive database or table can have several causes, and they need to be ruled out one by one.

### Answer 2:

When SparkSQL connects to Hive, a failed lookup of a Hive database or table usually comes down to one of these situations:

1. The Hive environment variables are not configured. SparkSQL needs them set correctly before it connects; if they are wrong, lookups fail. On Linux, add the following to .bashrc or .bash_profile:

```bash
export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$PATH
```

2. Hive's metadata was not loaded correctly. Run `show databases;` and `show tables;` on the Hive command line to check whether the metadata loads. If it does not, try repairing it with `MSCK REPAIR TABLE tablename;`.
3. The Hive and SparkSQL versions do not match. Mismatched versions also break lookups. Check the detailed error in the SparkSQL log to judge whether the versions line up, and update Hive or SparkSQL until they do.

In short, when connecting SparkSQL to Hive, make sure the Hive environment variables are configured, the metadata loads correctly, and the Hive and SparkSQL versions match; that avoids the "Hive database or table not found" problem.

### Answer 3:

SparkSQL is the Spark component that provides a SQL-based API for processing structured data, and pairing it with Hive extends SparkSQL's reach further. But a connection to Hive sometimes fails to find the Hive database or table. The usual possibilities:

1. Hive Metastore problems. Hive relies on the Metastore to store its metadata (databases, tables, columns, partitions, and so on), and SparkSQL depends on it too. If the Metastore is down or misconfigured, SparkSQL cannot reach the Hive database or table. Check the Metastore's status and configuration and make sure it is available.
2. Incompatible Hive and SparkSQL versions. Some version combinations simply do not work together. Check compatibility first; if the versions disagree, update one side or downgrade SparkSQL until they are compatible.
3. Configuration problems. Connecting to Hive requires parameters such as the Metastore address, user name, and password. If any of these is wrong or missing, the connection to the Hive database or table fails. Check the parameter settings in the configuration files.
4. Hive is busy. If Hive is in the middle of heavy computation, its databases or tables may be temporarily unreachable; wait for the Hive jobs to finish and try again.

In short, when SparkSQL cannot reach a Hive database or table, work through the problems above, and any others that may apply, until you find the root cause and fix it.
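
To make answers 1 and 2 concrete, here is a minimal sketch, assuming a remote metastore at thrift://metastore-host:9083 (host and port are placeholders), that sets the metastore URI in code and then lists what the driver can actually see:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object MetastoreCheck {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("MetastoreCheck"))
    val hiveContext = new HiveContext(sc)

    // Point SparkSQL at the remote metastore explicitly instead of relying on
    // hive-site.xml being present in the container (host/port are assumptions).
    hiveContext.setConf("hive.metastore.uris", "thrift://metastore-host:9083")

    // If this prints only "default", the driver is still talking to a local
    // Derby metastore and the setting above did not take effect.
    hiveContext.sql("show databases").collect().foreach(println)

    sc.stop()
  }
}
```

Depending on the Spark build, setting hive.metastore.uris after the HiveContext is created may come too late; in that case pass it at submit time instead, for example with `--conf spark.hadoop.hive.metastore.uris=thrift://metastore-host:9083`.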
