Background
A wrong spark.driver.host caused a connection timeout, and the job was killed with "failed 2 times due to AM Container".
My cluster is deployed like this: a CDH cluster that we cannot operate on from the inside, because our service runs on machines outside the cluster. Jobs are therefore submitted from an external gateway via
[deploy@cdh ~]$ export HADOOP_USER_NAME=shulan_admin; spark2-submit \
  --class com.dtwave.cheetah.node.spark.structured.streaming.StructureStreamingExecutor \
  --name sparksql_1 \
  --master yarn \
  --deploy-mode client \
  --queue root.dev \
  /opt/workspace_pro/cheetah-node/libs/spark-structured-streaming-1.0.0-SNAPSHOT.jar
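In yarn/client mode the driver runs on the submitting machine, and the YARN ApplicationMaster must connect back to it; if the gateway advertises an address the cluster nodes cannot reach, the AM times out and retries until the job fails. A minimal sketch of a submit command that pins the driver address explicitly, assuming a hypothetical gateway IP of 192.168.1.100 that is routable from the cluster nodes (spark.driver.host and spark.driver.bindAddress are standard Spark configuration keys):

```shell
# Sketch only: 192.168.1.100 is a placeholder for the gateway's
# cluster-reachable IP; replace it with your actual address.
export HADOOP_USER_NAME=shulan_admin
spark2-submit \
  --class com.dtwave.cheetah.node.spark.structured.streaming.StructureStreamingExecutor \
  --name sparksql_1 \
  --master yarn \
  --deploy-mode client \
  --queue root.dev \
  --conf spark.driver.host=192.168.1.100 \
  --conf spark.driver.bindAddress=0.0.0.0 \
  /opt/workspace_pro/cheetah-node/libs/spark-structured-streaming-1.0.0-SNAPSHOT.jar
```

Setting spark.driver.bindAddress separately lets the driver listen on all local interfaces while still advertising the reachable address to the cluster, which matters on multi-homed gateway hosts.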
deploy-mode=