In spark-shell I loaded an HDFS file with var rdd = sc.textFile("hdfs://yuncluster/aqi.data"), where yuncluster is the HA nameservice name of the cluster.
When I then ran rdd.count(), it kept failing because the cluster name could not be resolved; an explicit address like hdfs://ip:port/aqi.data worked fine.
(The cluster runs NameNode HA, so I did not want to hard-code a host.)
It later worked with hdfs:/aqi.data, and sc.hadoopFile("/aqi.data") was another workaround that caused no problems.
For the logical nameservice to resolve, hdfs-site.xml and core-site.xml have to be copied into Spark's conf directory beforehand.
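A quick way to verify that those configs were actually picked up (a minimal sketch for spark-shell; it assumes the default sc and the /aqi.data path from above):

import org.apache.hadoop.fs.{FileSystem, Path}

// If hdfs-site.xml/core-site.xml were loaded, the logical nameservice
// resolves instead of failing with UnknownHostException: yuncluster
val fs = FileSystem.get(sc.hadoopConfiguration)
println(fs.getUri)                        // expect hdfs://yuncluster
println(fs.exists(new Path("/aqi.data"))) // true if the file is visible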
Connecting Spark to multiple HDFS clusters
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("Spark Word Count")
val sc = new SparkContext(conf)
// Point the Hadoop config at cluster1 before reading the input
sc.hadoopConfiguration.addResource("cluster1/core-site.xml")
sc.hadoopConfiguration.addResource("cluster1/hdfs-site.xml")
// Load the data and run the word count
val input = sc.textFile(args(0)).flatMap(_.split(" ")).map(x => (x, 1)).reduceByKey(_ + _)
// Switch the Hadoop config to cluster2 before writing the output
sc.hadoopConfiguration.addResource("cluster2/core-site.xml")
sc.hadoopConfiguration.addResource("cluster2/hdfs-site.xml")
input.saveAsTextFile(args(1))
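Why mutating sc.hadoopConfiguration between the read and the write works here deserves a note: as far as I can tell, Spark snapshots and broadcasts the Hadoop configuration when the RDD is created, so the later addResource calls only affect the write. Since a resource added later overrides earlier ones for the same key, you can confirm which cluster each step targets by printing fs.defaultFS (a sketch, assuming the same classpath resources as above):

// Later resources win, so fs.defaultFS should flip from cluster1 to cluster2
sc.hadoopConfiguration.addResource("cluster1/core-site.xml")
println(sc.hadoopConfiguration.get("fs.defaultFS")) // expect hdfs://cluster1
sc.hadoopConfiguration.addResource("cluster2/core-site.xml")
println(sc.hadoopConfiguration.get("fs.defaultFS")) // expect hdfs://cluster2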
Or:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("Spark Word Count")
val sc = new SparkContext(conf)
// Describe cluster1's HA nameservice directly on the Hadoop configuration
sc.hadoopConfiguration.set("fs.defaultFS", "hdfs://cluster1")
sc.hadoopConfiguration.set("dfs.nameservices", "cluster1")
sc.hadoopConfiguration.set("dfs.ha.namenodes.cluster1", "nn1,nn2")
sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster1.nn1", "namenode001:8020")
sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster1.nn2", "namenode002:8020")
sc.hadoopConfiguration.set("dfs.client.failover.proxy.provider.cluster1", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
val wc = sc.textFile(args(0)).flatMap(_.split(" ")).map(x => (x, 1)).reduceByKey(_ + _)
// Re-point the configuration at cluster2's nameservice before writing the result
sc.hadoopConfiguration.set("fs.defaultFS", "hdfs://cluster2")
sc.hadoopConfiguration.set("dfs.nameservices", "cluster2")
sc.hadoopConfiguration.set("dfs.ha.namenodes.cluster2", "nn3,nn4")
sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster2.nn3", "namenode003:8020")
sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster2.nn4", "namenode004:8020")
sc.hadoopConfiguration.set("dfs.client.failover.proxy.provider.cluster2", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
wc.saveAsTextFile(args(1))
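A variation that avoids re-setting fs.defaultFS mid-job: declare both nameservices in one configuration up front and address each cluster with fully qualified URIs. This is a sketch under my own assumptions (it reuses the hostnames from the example above; the input and output paths are hypothetical), not something the two snippets above require:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("Spark Word Count"))
val hc = sc.hadoopConfiguration
// Both logical nameservices are known to the same client configuration
hc.set("dfs.nameservices", "cluster1,cluster2")
hc.set("dfs.ha.namenodes.cluster1", "nn1,nn2")
hc.set("dfs.namenode.rpc-address.cluster1.nn1", "namenode001:8020")
hc.set("dfs.namenode.rpc-address.cluster1.nn2", "namenode002:8020")
hc.set("dfs.client.failover.proxy.provider.cluster1", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
hc.set("dfs.ha.namenodes.cluster2", "nn3,nn4")
hc.set("dfs.namenode.rpc-address.cluster2.nn3", "namenode003:8020")
hc.set("dfs.namenode.rpc-address.cluster2.nn4", "namenode004:8020")
hc.set("dfs.client.failover.proxy.provider.cluster2", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
// Fully qualified paths pick the cluster explicitly, so fs.defaultFS
// never has to change between the read and the write
val wc = sc.textFile("hdfs://cluster1/aqi.data")
  .flatMap(_.split(" ")).map(x => (x, 1)).reduceByKey(_ + _)
wc.saveAsTextFile("hdfs://cluster2/aqi-counts") // hypothetical output path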