Reading and writing HDFS from Spark in Hadoop high-availability (HA) mode

// Build a SparkSession that carries the HDFS HA client settings itself,
// reusing the SparkConf of an existing RDD context (`rdds`), so the job
// can resolve the logical nameservice "cluster1" even when hdfs-site.xml
// is not on the classpath.
SparkSession spark = SparkSession.builder()
        .config(rdds.context().getConf())
        .config("spark.sql.warehouse.dir", "/app/spark-warehouse")
        .config("dfs.nameservices", "cluster1")
        .config("dfs.ha.namenodes.cluster1", "nn1,nn2")
        .config("dfs.namenode.rpc-address.cluster1.nn1", "192.168.6.64:8020")
        .config("dfs.namenode.rpc-address.cluster1.nn2", "192.168.6.66:8020")
        .config("dfs.client.failover.proxy.provider.cluster1",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
        .getOrCreate();
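The point of these settings is that the client addresses HDFS by the logical nameservice name ("cluster1") instead of a concrete NameNode host, so failover between nn1 and nn2 is transparent. A minimal, Spark-free sketch of how the HA keys fit together (the `haConf` helper and class name are hypothetical, for illustration; the key names, hosts, and ports are taken from the snippet above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HaConfSketch {
    // Hypothetical helper: assembles the same HDFS HA keys that the
    // SparkSession builder above sets, keyed by the nameservice name.
    static Map<String, String> haConf(String nameservice,
                                      String nn1Addr, String nn2Addr) {
        Map<String, String> conf = new LinkedHashMap<>();
        conf.put("dfs.nameservices", nameservice);
        conf.put("dfs.ha.namenodes." + nameservice, "nn1,nn2");
        conf.put("dfs.namenode.rpc-address." + nameservice + ".nn1", nn1Addr);
        conf.put("dfs.namenode.rpc-address." + nameservice + ".nn2", nn2Addr);
        // The proxy provider is what lets the client try each NameNode
        // in turn until it finds the active one.
        conf.put("dfs.client.failover.proxy.provider." + nameservice,
                 "org.apache.hadoop.hdfs.server.namenode.ha."
                 + "ConfiguredFailoverProxyProvider");
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> conf =
                haConf("cluster1", "192.168.6.64:8020", "192.168.6.66:8020");
        // Paths reference the nameservice, not a specific NameNode host:
        String path = "hdfs://cluster1/app/spark-warehouse";
        System.out.println(path);
        System.out.println(conf.get("dfs.ha.namenodes.cluster1"));
    }
}
```

With a session built this way, reads and writes can use `hdfs://cluster1/...` paths directly, and the ConfiguredFailoverProxyProvider retries against the standby NameNode if the active one goes down.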