When the incoming stream data spikes abnormally, you can cap the maximum receiving rate. If the cluster does not have enough resources to process the received data quickly enough, the receivers can be limited to a maximum number of records per second; Spark Streaming computes this rate limit and adjusts it dynamically as conditions change. Backpressure is enabled by setting spark.streaming.backpressure.enabled=true:
$SPARK_HOME/bin/spark-submit \
--master yarn --deploy-mode cluster \
--num-executors 1 --executor-memory 1g --executor-cores 1 \
--conf spark.streaming.backpressure.enabled=true \
--conf spark.streaming.backpressure.initialRate=1000 \
--conf spark.streaming.stopGracefullyOnShutdown=true \
--name xxx_project_name \
--class xxxMainClass \
--jars /opt/cloudera/parcels/CDH/lib/hive/lib/hbase-client.jar,/opt/cloudera/parcels/CDH/lib/hive/lib/hbase-hadoop2-compat.jar,/opt/cloudera/parcels/CDH/lib/hive/lib/hbase-hadoop-compat.jar,/opt/cloudera/parcels/CDH/lib/hive/lib/hbase-common.jar,/opt/cloudera/parcels/CDH/lib/hive/lib/hbase-server.jar,/opt/cloudera/parcels/CDH/lib/hive/lib/hbase-protocol.jar,/opt/cloudera/parcels/CDH/lib/hive/lib/hive-hbase-handler-1.1.0-cdh5.11.2.jar,/opt/cloudera/parcels/CDH/lib/hbase/lib/htrace-core-3.2.0-incubating.jar,\
/opt/cloudera/parcels/CDH/lib/hbase/lib/metrics-core-2.2.0.jar,/opt/cloudera/parcels/CDH/lib/hbase/lib/protobuf-java-2.5.0.jar \
/xxx/xxx.jar
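The same three settings passed via --conf above can also be set programmatically when building the application's SparkConf. A minimal sketch in Scala, assuming a Spark Streaming application with a 5-second batch interval (the app name and interval are illustrative, not from the original command):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("xxx_project_name")
  // Let the receiving rate adapt to how fast batches are actually processed
  .set("spark.streaming.backpressure.enabled", "true")
  // Initial rate (records/sec per receiver) used before the first
  // backpressure feedback cycle has run
  .set("spark.streaming.backpressure.initialRate", "1000")
  // On shutdown, finish processing in-flight batches instead of
  // killing them immediately
  .set("spark.streaming.stopGracefullyOnShutdown", "true")

// Illustrative batch interval; choose one that fits your workload
val ssc = new StreamingContext(conf, Seconds(5))
```

Note that values set on the spark-submit command line with --conf take effect the same way; setting them in code merely bakes the defaults into the application itself.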