Original post: http://blog.csdn.net/oMrApollo/article/details/69566846
Error analysis
The error reported is:
Exception in thread "main" java.io.IOException: com.mongodb.hadoop.splitter.SplitFailedException: Unable to calculate input splits: not authorized on certificate to execute command { splitVector: "certificate.certificate.access_log_test", keyPattern: { _id: 1 }, min: {}, max: {}, force: false, maxChunkSize: 8 }
at com.mongodb.hadoop.MongoInputFormat.getSplits(MongoInputFormat.java:62)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:113)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1143)
at AccessLogTest$.main(AccessLogTest.scala:53)
at AccessLogTest.main(AccessLogTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: com.mongodb.hadoop.splitter.SplitFailedException: Unable to calculate input splits: not authorized on certificate to execute command { splitVector: "certificate.certificate.access_log_test", keyPattern: { _id: 1 }, min: {}, max: {}, force: false, maxChunkSize: 8 }
at com.mongodb.hadoop.splitter.StandaloneMongoSplitter.calculateSplits(StandaloneMongoSplitter.java:165)
at com.mongodb.hadoop.MongoInputFormat.getSplits(MongoInputFormat.java:60)
... 14 more
Root cause
not authorized on certificate to execute command { splitVector: "certificate.certificate.access_log_test", keyPattern: { _id: 1 }, min: {}, max: {}, force: false, maxChunkSize: 8 }
The message shows this is a splitVector permission problem: when reading a non-sharded collection, Spark (via the mongo-hadoop connector) runs the splitVector command to calculate input splits, and by default that command is restricted to administrative users. mongo.input.split.create_input_splits defaults to true, meaning the connector splits the data into multiple InputSplits (based on cluster size and CPU core count) so that Hadoop can process them in parallel; Hadoop assigns one InputSplit to each mapper.
Solutions
Option 1
If the data volume is small, you can forgo Spark's parallelism and set mongo.input.split.create_input_splits to false:
config.set("mongo.input.split.create_input_splits", "false")
This does resolve the error, but it gives up parallelism: when set to false there is only one InputSplit, so the entire collection is handed to a single mapper, severely reducing parallel processing.
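For reference, a minimal sketch of how such a job is typically wired up with the mongo-hadoop connector. This is an assumption-laden sketch, not the original author's code: the bracketed connection values are placeholders, and mongo-hadoop-core plus spark-core are assumed to be on the classpath.

```scala
import com.mongodb.hadoop.MongoInputFormat
import org.apache.hadoop.conf.Configuration
import org.apache.spark.{SparkConf, SparkContext}
import org.bson.BSONObject

object AccessLogTest {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("AccessLogTest"))

    val config = new Configuration()
    config.set("mongo.input.uri",
      "mongodb://[username]:[password]@[ip]:[port]/[dbname].[collectionName]")
    // Read the whole collection as a single InputSplit: no splitVector
    // command is issued, so no extra privilege is needed, but only one
    // mapper will read the data.
    config.set("mongo.input.split.create_input_splits", "false")

    val rdd = sc.newAPIHadoopRDD(config, classOf[MongoInputFormat],
      classOf[Object], classOf[BSONObject])
    println(rdd.count())
  }
}
```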
Option 2
If the user lacks the splitVector privilege, grant it. The steps are as follows:
./mongo [ip]:[port] --authenticationDatabase admin -u [username] -p [password]
where [ip] is the server address, [port] the port, [username] the user name, and [password] the password.
Once connected to that user's database, create a new role:
db.createRole({
    role: "hadoopSplitVector",
    privileges: [{
        resource: { db: "[dbname]", collection: "[collectionName]" },
        actions: ["splitVector"]
    }],
    roles: []
})
[dbname]: the database name
[collectionName]: the collection name
If you get this error instead:
Error: Roles on the 'test' database cannot be granted privileges that target other databases or the cluster :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.createRole@src/mongo/shell/db.js:1553:1
@(shell):1:1
it means the user you are logged in as cannot be granted privileges that target other databases; in other words, you connected to the wrong database. Reconnect as a user of the database whose privileges you want to modify.
On success, the shell prints:
{
    "role" : "hadoopSplitVector",
    "privileges" : [
        {
            "resource" : {
                "db" : "[dbname]",
                "collection" : "[collectionName]"
            },
            "actions" : [
                "splitVector"
            ]
        }
    ],
    "roles" : [ ]
}
This confirms the role was created; now simply grant it to the user in question:
db.updateUser("[username]",{roles: [{role:"readWrite",db:"[dbname]"},{role:"hadoopSplitVector", db:"[dbname]"}]})
Once this is configured, the error is resolved.
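To double-check that the grant took effect, the standard mongo shell helper db.getUser can be used (the bracketed names are the same placeholders as above; this runs against a live server, so there is no standalone output to show):

```javascript
// Run in the mongo shell while connected to [dbname].
// The "roles" array should list both "readWrite" and "hadoopSplitVector".
db.getUser("[username]")
```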
Connecting from Scala:
val config = new Configuration()
config.set("mongo.input.uri", "mongodb://[username]:[password]@[ip1]:[port1],[ip2]:[port2]/[dbname].[collectionName]?readPreference=secondary")
Error when deploying to the server
ERROR SparkContext: Error initializing SparkContext.
java.net.UnknownHostException: <hostname>: <hostname>: Name or service not known
······
Caused by: java.net.UnknownHostException: <hostname>: Name or service not known
Fix it by adding a hosts mapping:
vim /etc/hosts
127.0.0.1 <hostname> localhost
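To find out which name the JVM is failing to resolve, and to confirm the fix afterwards, something like the following helps (a sketch assuming a Linux host where getent is available):

```shell
# Print the machine's hostname; this is the name the JVM tries to resolve.
hostname
# After adding the /etc/hosts entry, verify that the name now resolves.
# (The fallback message keeps this safe to run before the fix is applied.)
getent hosts "$(hostname)" || echo "hostname not resolvable yet"
```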
If you have questions or spot a mistake, please leave a comment. Thank you.