Spark 2.x reading and writing data to MongoDB. For the read/write configuration parameters, see https://docs.mongodb.com/spark-connector/current/configuration/#cache-configuration
When reading from MongoDB, specify a partition field and partition size to improve read efficiency. When only a subset of the dataset is needed, filter it with the Dataset/SQL API: the Mongo Connector turns the filter into an aggregation pipeline that runs on the MongoDB side, so only the matching documents are sent back to Spark for further processing (see the filter sketch after the read example below).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName(this.getClass.getName.stripSuffix("$"))
  .getOrCreate()

val inputUri = "mongodb://test:pwd123456@192.168.0.1:27017/test.articles"

val df = spark.read.format("com.mongodb.spark.sql")
  .options(Map(
    "spark.mongodb.input.uri" -> inputUri,
    // paginate the collection into partitions of roughly equal size
    "spark.mongodb.input.partitioner" -> "MongoPaginateBySizePartitioner",
    // field to partition on; _id is always indexed
    "spark.mongodb.input.partitionerOptions.partitionKey" -> "_id",
    // partition size in MB; the original snippet was truncated here,
    // partitionSizeMB (default 64) is this partitioner's documented size option
    "spark.mongodb.input.partitionerOptions.partitionSizeMB" -> "64"))
  .load()