Spark Reading from and Writing to a NameNode HA Cluster

Project scenario:

I recently ran into this problem while writing data from a compute cluster to a separate storage cluster.

Newer Hadoop versions let you run multiple NameNodes as an HA pair, but that raises a question for our programs: how do we point at the active NameNode of the storage cluster and connect to its HDFS?
The usual approach is to drop the storage cluster's core-site.xml and hdfs-site.xml into the resources folder, but that did not work when I tried it.
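
For reference, what the HDFS client needs from those files is the nameservice definition that maps the logical cluster name to its NameNodes. A minimal sketch, assuming the nameservice is called hr-hadoop and using placeholder NameNode addresses, of the equivalent settings passed as spark.hadoop.*-prefixed Spark config keys (which Spark copies into the Hadoop Configuration) when building the SparkSession:

import org.apache.spark.sql.SparkSession

// Sketch only: "hr-hadoop", "nn1"/"nn2" and the xxx:8020 addresses are placeholders
// that must match the storage cluster's own hdfs-site.xml.
val session: SparkSession = SparkSession.builder()
  .appName("cross-cluster-write")
  .config("spark.hadoop.dfs.nameservices", "hr-hadoop")
  .config("spark.hadoop.dfs.ha.namenodes.hr-hadoop", "nn1,nn2")
  .config("spark.hadoop.dfs.namenode.rpc-address.hr-hadoop.nn1", "xxx:8020")
  .config("spark.hadoop.dfs.namenode.rpc-address.hr-hadoop.nn2", "xxx:8020")
  .config("spark.hadoop.dfs.client.failover.proxy.provider.hr-hadoop",
    "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
  .getOrCreate()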


Problem description

Problem:

At runtime, the program failed to resolve the cluster name and could not find the target cluster:

java.lang.IllegalArgumentException: java.net.UnknownHostException: hr-hadoop
	at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:445)
	at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:132)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:353)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:287)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:177)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:87)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:120)
	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:607)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:602)
	at cn.huorong.run.SampleTaskScanSinkHbase.storeHive(SampleTaskScanSinkHbase.scala:89)
	at cn.huorong.run.SampleTaskScanSinkHbase.$anonfun$sink$1(SampleTaskScanSinkHbase.scala:40)
	at cn.huorong.run.SampleTaskScanSinkHbase.$anonfun$sink$1$adapted(SampleTaskScanSinkHbase.scala:22)
	at org.apache.spark.streaming.dstream.DStream.$anonfun$foreachRDD$2(DStream.scala:629)
	at org.apache.spark.streaming.dstream.DStream.$anonfun$foreachRDD$2$adapted(DStream.scala:629)
	at org.apache.spark.streaming.dstream.ForEachDStream.$anonfun$generateJob$2(ForEachDStream.scala:51)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:417)
	at org.apache.spark.streaming.dstream.ForEachDStream.$anonfun$generateJob$1(ForEachDStream.scala:51)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.$anonfun$run$1(JobScheduler.scala:256)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: hr-hadoop

Root cause:

I had put the storage cluster's core-site.xml and hdfs-site.xml under the resources folder, but they were not picked up, so the client had no mapping for the nameservice hr-hadoop and the logical name could not be resolved.


Solution:

Then it occurred to me that SparkContext exposes a .hadoopConfiguration object through which Hadoop properties can be set, so maybe I could switch clusters in code. Time to try it.

  /**
   * @Author: lzx
   * @Description: Point the Hadoop client at the storage cluster's HA nameservice.
   * @Date: 2022/5/27
   * @Param session: the built SparkSession
   * @Param nameSpace: nameservice ID of the target cluster
   * @Param nn1: ID of the first NameNode
   * @Param nn1Addr: RPC address of nn1 (host:port)
   * @Param nn2: ID of the second NameNode
   * @Param nn2Addr: RPC address of nn2 (host:port)
   * @return: void
   **/
  def changeHDFSConf(session: SparkSession, nameSpace: String, nn1: String, nn1Addr: String, nn2: String, nn2Addr: String): Unit = {

    val sc: SparkContext = session.sparkContext
    // Make the target nameservice the default filesystem.
    sc.hadoopConfiguration.set("fs.defaultFS", s"hdfs://$nameSpace")
    // Declare the nameservice and its two NameNodes.
    sc.hadoopConfiguration.set("dfs.nameservices", nameSpace)
    sc.hadoopConfiguration.set(s"dfs.ha.namenodes.$nameSpace", s"$nn1,$nn2")
    sc.hadoopConfiguration.set(s"dfs.namenode.rpc-address.$nameSpace.$nn1", nn1Addr)
    sc.hadoopConfiguration.set(s"dfs.namenode.rpc-address.$nameSpace.$nn2", nn2Addr)
    // Let the client fail over to whichever NameNode is currently active.
    sc.hadoopConfiguration.set(s"dfs.client.failover.proxy.provider.$nameSpace",
      "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
  }

Call it, with arguments taken from your cluster's core-site.xml and hdfs-site.xml:

// TODO replace with the target HDFS cluster's addresses
    ChangeHDFSUtil.changeHDFSConf(session, "hr-hadoop", "nn1", "xxx:8020", "nn2", "xxx:8020")
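
Putting it together, a minimal end-to-end sketch (the application name and the /tmp/ha_write_check output path are placeholders, and the NameNode addresses are the same xxx:8020 placeholders as above):

import org.apache.spark.sql.SparkSession

object HaWriteCheck {
  def main(args: Array[String]): Unit = {
    val session = SparkSession.builder()
      .appName("ha-write-check") // placeholder app name
      .getOrCreate()

    // Switch the shared Hadoop configuration to the storage cluster first ...
    ChangeHDFSUtil.changeHDFSConf(session, "hr-hadoop", "nn1", "xxx:8020", "nn2", "xxx:8020")

    // ... then writes that reference the nameservice resolve through the
    // ConfiguredFailoverProxyProvider instead of failing with UnknownHostException.
    session.range(10).write.mode("overwrite").parquet("hdfs://hr-hadoop/tmp/ha_write_check")
  }
}

Note that sc.hadoopConfiguration is shared by the whole SparkContext, so the switch affects every subsequent Hadoop filesystem access in the job.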