Spark on K8s demo

Debugging walkthrough for Spark on K8s.

Reference blog: https://www.cnblogs.com/moonlight-lin/p/13296909.html

./docker-image-tool.sh -t my_spark build

bin/docker-image-tool.sh -r 192.168.183.60:5000 -t spark_not_modify build
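As an aside, docker-image-tool.sh can also push what it builds; a minimal sketch, assuming the 192.168.183.60:5000 registry set up later in this post is already reachable:

# build the image tagged for the private registry, then push it
bin/docker-image-tool.sh -r 192.168.183.60:5000 -t my_spark build
bin/docker-image-tool.sh -r 192.168.183.60:5000 -t my_spark push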

Part of the log during the build:

  • mkdir -p /opt/spark
  • mkdir -p /opt/spark/examples
  • mkdir -p /opt/spark/work-dir
  • touch /opt/spark/RELEASE
  • rm /bin/sh
  • ln -sv /bin/bash /bin/sh
    ‘/bin/sh’ -> ‘/bin/bash’
  • echo auth required pam_wheel.so use_uid
  • chgrp root /etc/passwd
  • chmod ug+rw /etc/passwd
  • rm -rf /var/cache/apt/archives

Successfully built 60194e3848df
Successfully tagged spark:my_spark

Submit command

local:// here refers to a path inside the pod's container.

bin/spark-submit \
  --master k8s://https://192.168.183.50:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=spark:my_spark \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar

Log printed after submission:
pod name: spark-pi-8aca328576bfeb7e-driver
namespace: default
labels: spark-app-selector -> spark-bd93d0c89e94407e9a1f4c1af89929fd, spark-role -> driver
pod uid: 393856b3-14ed-4e98-942f-415dd8ffe451
creation time: 2023-01-03T08:29:29Z
service account name: default
volumes: hadoop-properties, spark-local-dir-1, spark-conf-volume-driver, default-token-h9dqb
node name: hadoop102
start time: 2023-01-03T08:29:29Z
phase: Pending
container status:
container name: spark-kubernetes-driver
container image: spark:my_spark
container state: waiting
pending reason: ErrImagePull
23/01/03 16:29:50 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-bd93d0c89e94407e9a1f4c1af89929fd (phase: Pending)
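When the driver sits in Pending like this, kubectl is the quickest way to see why; a minimal sketch (pod name taken from the log above):

# events usually spell out the image pull failure
kubectl describe pod spark-pi-8aca328576bfeb7e-driver

# watch all Spark driver pods
kubectl get pods -l spark-role=driver -w

# driver log, once the container actually starts
kubectl logs -f spark-pi-8aca328576bfeb7e-driver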

The container image pull failed (ErrImagePull).

Because I built the image only on the k8s master node, the other nodes don't have it, so the worker nodes cannot pull the spark image. The plan: install a local Docker registry, push the image into it, and then try again.
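(For a quick test a registry isn't strictly required; the image can be copied to each node by hand. A sketch, using the hadoop102 node from the log above and assuming root ssh access:)

# export the image, copy it to a worker node, and load it there
docker save spark:my_spark -o spark_my_spark.tar
scp spark_my_spark.tar root@hadoop102:/tmp/
ssh root@hadoop102 docker load -i /tmp/spark_my_spark.tar

A registry avoids repeating this for every node and every rebuild, so that is the route taken below.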

Setting up the local image registry:

1. First pull the registry image:
docker pull registry

2. Add the private registry address to /etc/docker/daemon.json and restart Docker. The first entry is the private registry; the second is an Alibaba Cloud mirror. (JSON does not allow inline comments, so they are noted here instead.)

vim /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.183.60:5000"],
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}

systemctl daemon-reload
systemctl restart docker

3. Run the registry container:
docker run -itd -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --name registry registry:latest

-itd: open a pseudo-terminal in the container for interaction and run it in the background
-v: bind-mount the host's /data/registry to /var/lib/registry in the container (the directory where the registry container stores image data), so the data is persisted
-p: map the port, so hitting port 5000 on the host reaches the registry container's service
--restart=always: restart policy, always restart the container when it exits
--name registry: name the created container "registry"
registry:latest: the image pulled above

Docker's container restart policies are:
no: the default, never restart the container on exit
on-failure: restart only when the container exits abnormally (non-zero exit status)
on-failure:3: restart on abnormal exit, at most 3 times
always: always restart the container on exit
unless-stopped: always restart the container on exit, except containers that were already stopped when the Docker daemon started
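For example (a sketch; busybox is just an arbitrary test image, not part of this setup):

# restart at most 3 times on non-zero exit
docker run -d --restart=on-failure:3 --name flaky-test busybox sh -c 'exit 1'

# change the policy of an existing container without recreating it
docker update --restart=unless-stopped registry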

4. Tag the image for the private registry:
docker tag spark:my_spark 192.168.183.60:5000/spark:my_spark

5. Push it to the private registry:
docker push 192.168.183.60:5000/spark:my_spark

This fails:
[root@hadoop101 spark-3.2.1]# docker push 192.168.183.60:5000/spark:my_spark
The push refers to repository [192.168.183.60:5000/spark]
Get "https://192.168.183.60:5000/v2/": http: server gave HTTP response to HTTPS client

Cause: /etc/docker/daemon.json was misconfigured.

Fix daemon.json as follows.

Old content:

{"graph": "/opt/docker"}

New content:

{
  "insecure-registries": ["192.168.183.60:5000"],
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
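Note that this replaces the old graph (Docker data-root) setting; if /opt/docker should remain the storage location, keep that key alongside the two new ones. After editing, restart Docker and confirm it picked up the insecure registry:

systemctl daemon-reload
systemctl restart docker

# the address should appear under "Insecure Registries"
docker info | grep -A 2 'Insecure Registries'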

6. Inspect the private registry

1) List the repositories in the registry:
curl http://<ip>:<port>/v2/_catalog

curl http://192.168.183.60:5000/v2/_catalog
{"repositories":["spark/spark"]}

2) Get the tag list of a repository:
curl http://<ip>:<port>/v2/<repository>/tags/list

curl http://192.168.183.60:5000/v2/spark/spark/tags/list
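A small sketch that walks the catalog and prints the tag list for every repository (assumes jq is installed; the response shape is fixed by the Registry HTTP API v2):

REGISTRY=http://192.168.183.60:5000
for repo in $(curl -s $REGISTRY/v2/_catalog | jq -r '.repositories[]'); do
  echo "== $repo =="
  curl -s $REGISTRY/v2/$repo/tags/list
  echo
done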

Resubmit, this time pointing at the image in the private registry:

bin/spark-submit \
  --master k8s://https://192.168.183.50:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=1 \
  --conf spark.kubernetes.container.image=192.168.183.60:5000/spark/spark:my_spark \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar

The driver log shows the following error:
23/01/03 09:50:37 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
23/01/03 09:50:39 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: External scheduler cannot be instantiated
    at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2973)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:581)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.base/java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/spark-pi-ee4a6285770a14dd-driver. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "spark-pi-ee4a6285770a14dd-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default".
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:639)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:576)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:543)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:504)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:471)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:453)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:947)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:221)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:187)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:86)
    at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$driverPod$1(ExecutorPodsAllocator.scala:79)
    at scala.Option.map(Option.scala:230)
    at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.<init>(ExecutorPodsAllocator.scala:78)
    at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:118)
    at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2973)
    ... 19 more
23/01/03 09:50:39 INFO SparkUI: Stopped Spark web UI at http://spark-pi-ee4a6285770a14dd-driver-svc.default.svc:4040
23/01/03 09:50:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/01/03 09:50:39 INFO MemoryStore: MemoryStore cleared
23/01/03 09:50:39 INFO BlockManager: BlockManager stopped
23/01/03 09:50:39 INFO BlockManagerMaster: BlockManagerMaster stopped
23/01/03 09:50:39 WARN MetricsSystem: Stopping a MetricsSystem that is not running
23/01/03 09:50:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/01/03 09:50:39 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: External scheduler cannot be instantiated
    (same stack trace and Caused by as above)
23/01/03 09:55:39 INFO ShutdownHookManager: Shutdown hook called
23/01/03 09:55:39 INFO ShutdownHookManager: Deleting directory /tmp/spark-f220516c-c11f-4ef3-baf8-4608422d8f86
23/01/03 09:55:39 INFO ShutdownHookManager: Deleting directory /var/data/spark-773fc03f-9d44-46d3-9e7b-b147088e91a8/spark-0ab3f7e4-7d2c-4193-ad31-b8859333e065

It still fails with a permission error (reported on the host and inside the container): the default service account has no access, so a K8S service account with the right permissions has to be configured.

Prepare a role.yaml file:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: spark-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-role-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: spark
  namespace: default
roleRef:
  kind: Role
  name: spark-role
  apiGroup: rbac.authorization.k8s.io

See also https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/manifest/spark-rbac.yaml

Apply the file and check the result:

sudo kubectl apply -f role.yaml

sudo kubectl get role
sudo kubectl get role spark-role -o yaml
sudo kubectl get rolebinding
sudo kubectl get rolebinding spark-role-binding -o yaml
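Before resubmitting, the new permissions can be sanity-checked with kubectl's built-in impersonation:

# both should print "yes" once the Role/RoleBinding are in place
kubectl auth can-i get pods --as=system:serviceaccount:default:spark -n default
kubectl auth can-i create services --as=system:serviceaccount:default:spark -n default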

Added --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark to the submit command.

Resubmit:

bin/spark-submit \
  --master k8s://https://192.168.183.50:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.container.image=192.168.183.60:5000/spark/spark:my_spark \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar

  • CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
  • exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=10.244.2.118 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.examples.SparkPi local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/opt/spark/jars/spark-unsafe_2.12-3.2.1.jar) to constructor java.nio.DirectByteBuffer(long,int)
    WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    23/01/03 11:11:13 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    Using Spark’s default log4j profile: org/apache/spark/log4j-defaults.properties
    23/01/03 11:11:13 INFO SparkContext: Running Spark version 3.2.1
    23/01/03 11:11:13 INFO ResourceUtils: ==============================================================
    23/01/03 11:11:13 INFO ResourceUtils: No custom resources configured for spark.driver.
    23/01/03 11:11:13 INFO ResourceUtils: ==============================================================
    23/01/03 11:11:13 INFO SparkContext: Submitted application: Spark Pi
    23/01/03 11:11:13 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
    23/01/03 11:11:13 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
    23/01/03 11:11:13 INFO ResourceProfileManager: Added ResourceProfile id: 0
    23/01/03 11:11:13 INFO SecurityManager: Changing view acls to: 185,root
    23/01/03 11:11:13 INFO SecurityManager: Changing modify acls to: 185,root
    23/01/03 11:11:13 INFO SecurityManager: Changing view acls groups to:
    23/01/03 11:11:13 INFO SecurityManager: Changing modify acls groups to:
    23/01/03 11:11:13 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(185, root); groups with view permissions: Set(); users with modify permissions: Set(185, root); groups with modify permissions: Set()
    23/01/03 11:11:13 INFO Utils: Successfully started service ‘sparkDriver’ on port 7078.
    23/01/03 11:11:13 INFO SparkEnv: Registering MapOutputTracker
    23/01/03 11:11:13 INFO SparkEnv: Registering BlockManagerMaster
    23/01/03 11:11:13 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
    23/01/03 11:11:13 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
    23/01/03 11:11:13 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
    23/01/03 11:11:14 INFO DiskBlockManager: Created local directory at /var/data/spark-eb05dcf0-aefc-4b8f-af2e-144f59c7e234/blockmgr-dea9176e-efcf-49ba-a0f8-50c993f85ef6
    23/01/03 11:11:14 INFO MemoryStore: MemoryStore started with capacity 413.9 MiB
    23/01/03 11:11:14 INFO SparkEnv: Registering OutputCommitCoordinator
    23/01/03 11:11:14 INFO Utils: Successfully started service ‘SparkUI’ on port 4040.
    23/01/03 11:11:14 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://spark-pi-df8121857753f04f-driver-svc.default.svc:4040
    23/01/03 11:11:14 INFO SparkContext: Added JAR local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar at file:/opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar with timestamp 1672744273215
    23/01/03 11:11:14 INFO SparkContext: The JAR local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar at file:/opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar has been added already. Overwriting of added jar is not supported in the current version.
    23/01/03 11:11:14 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
    23/01/03 11:11:16 INFO ExecutorPodsAllocator: Going to request 3 executors from Kubernetes for ResourceProfile Id: 0, target: 3, known: 0, sharedSlotFromPendingPods: 2147483647.
    23/01/03 11:11:16 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh
    23/01/03 11:11:16 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh
    23/01/03 11:11:16 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
    23/01/03 11:11:16 ERROR SparkContext: Error initializing SparkContext.
    io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default.svc/api/v1/namespaces/default/configmaps. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. configmaps is forbidden: User "system:serviceaccount:default:spark" cannot create resource "configmaps" in API group "" in the namespace "default".
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:639)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:576)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:543)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:504)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:292)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:893)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:372)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:86)
    at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.setUpExecutorConfigMap(KubernetesClusterSchedulerBackend.scala:80)
    at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.start(KubernetesClusterSchedulerBackend.scala:103)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:581)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.base/java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    23/01/03 11:11:16 INFO SparkUI: Stopped Spark web UI at http://spark-pi-df8121857753f04f-driver-svc.default.svc:4040
    23/01/03 11:11:16 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
    23/01/03 11:11:16 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
    23/01/03 11:11:16 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh
    23/01/03 11:11:16 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
    23/01/03 11:11:16 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh
    23/01/03 11:11:16 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
    23/01/03 11:11:16 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed.
    23/01/03 11:11:16 ERROR Utils: Uncaught exception in thread main
    io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/persistentvolumeclaims?labelSelector=spark-app-selector%3Dspark-cf3f6b3f2aef4f4ea7f977066e2be9af. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. persistentvolumeclaims is forbidden: User "system:serviceaccount:default:spark" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "default".
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:639)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:576)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:543)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:504)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:487)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.listRequestHelper(BaseOperation.java:163)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:672)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.deleteList(BaseOperation.java:786)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.delete(BaseOperation.java:704)
    at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.$anonfun$stop$6(KubernetesClusterSchedulerBackend.scala:138)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1442)
    at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.stop(KubernetesClusterSchedulerBackend.scala:139)
    at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:927)
    at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2567)
    at org.apache.spark.SparkContext.$anonfun$stop$12(SparkContext.scala:2086)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1442)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:2086)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:677)
    (remaining frames identical to the trace above)
    23/01/03 11:11:16 ERROR Utils: Uncaught exception in thread main
    io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/configmaps?labelSelector=spark-app-selector%3Dspark-cf3f6b3f2aef4f4ea7f977066e2be9af%2Cspark-role%3Dexecutor. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. configmaps is forbidden: User "system:serviceaccount:default:spark" cannot list resource "configmaps" in API group "" in the namespace "default".
    at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.$anonfun$stop$8(KubernetesClusterSchedulerBackend.scala:155)
    at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.stop(KubernetesClusterSchedulerBackend.scala:156)
    (remaining frames identical to the trace above)
    23/01/03 11:11:16 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
    23/01/03 11:11:16 INFO MemoryStore: MemoryStore cleared
    23/01/03 11:11:16 INFO BlockManager: BlockManager stopped
    23/01/03 11:11:16 INFO BlockManagerMaster: BlockManagerMaster stopped
    23/01/03 11:11:16 WARN MetricsSystem: Stopping a MetricsSystem that is not running
    23/01/03 11:11:16 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
    23/01/03 11:11:16 INFO SparkContext: Successfully stopped SparkContext
    Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default.svc/api/v1/namespaces/default/configmaps. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. configmaps is forbidden: User "system:serviceaccount:default:spark" cannot create resource "configmaps" in API group "" in the namespace "default".
    (stack trace identical to the ERROR SparkContext trace above)
    23/01/03 11:11:16 INFO ShutdownHookManager: Shutdown hook called
    23/01/03 11:11:16 INFO ShutdownHookManager: Deleting directory /tmp/spark-11f14be3-edd1-46d3-ad14-02b0863b471c
    23/01/03 11:11:16 INFO ShutdownHookManager: Deleting directory /var/data/spark-eb05dcf0-aefc-4b8f-af2e-144f59c7e234/spark-f71c7f9f-575f-4e66-82f4-d52dfbb7da6f

Delete the serviceaccount, role, and rolebinding, then recreate the serviceaccount following the official docs:

kubectl create serviceaccount spark

kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
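
Before resubmitting, it is worth verifying the binding with kubectl's built-in impersonation check; the edit ClusterRole granted above includes configmaps, so the expected answer is yes (a quick sanity-check sketch):

kubectl auth can-i create configmaps --as=system:serviceaccount:default:spark -n default
# expected output: yes

kubectl get serviceaccount spark -n default
kubectl get clusterrolebinding spark-role -o wide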

The next day, the error in the logs had changed:
23/01/04 10:30:01 ERROR SparkContext: Error initializing SparkContext.
java.io.IOException: Incomplete HDFS URI, no host: hdfs:///user/spark/applicationHistory
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:143)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1938)
    at org.apache.spark.deploy.history.EventLogFileWriter.<init>(EventLogFileWriters.scala:60)
    at org.apache.spark.deploy.history.SingleEventLogFileWriter.<init>(EventLogFileWriters.scala:213)
    at org.apache.spark.deploy.history.EventLogFileWriter$.apply(EventLogFileWriters.scala:181)
    at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:66)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:609)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.base/java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/01/04 10:30:01 INFO SparkUI: Stopped Spark web UI at http://spark-pi-b62034857c548d5f-driver-svc.default.svc:4040
23/01/04 10:30:01 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
23/01/04 10:30:01 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
23/01/04 10:30:02 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed.
23/01/04 10:30:02 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/01/04 10:30:02 INFO MemoryStore: MemoryStore cleared
23/01/04 10:30:02 INFO BlockManager: BlockManager stopped
23/01/04 10:30:02 INFO BlockManagerMaster: BlockManagerMaster stopped
23/01/04 10:30:02 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/01/04 10:30:02 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.io.IOException: Incomplete HDFS URI, no host: hdfs:///user/spark/applicationHistory
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:143)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1938)
    at org.apache.spark.deploy.history.EventLogFileWriter.<init>(EventLogFileWriters.scala:60)
    at org.apache.spark.deploy.history.SingleEventLogFileWriter.<init>(EventLogFileWriters.scala:213)
    at org.apache.spark.deploy.history.EventLogFileWriter$.apply(EventLogFileWriters.scala:181)
    at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:66)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:609)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.base/java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/01/04 10:30:02 INFO ShutdownHookManager: Shutdown hook called
23/01/04 10:30:02 INFO ShutdownHookManager: Deleting directory /var/data/spark-675d5456-cebb-4cb1-9270-c0c5ce33ea55/spark-fa1cd25a-ef1c-437a-9b2f-e006fcc2d790
23/01/04 10:30:02 INFO ShutdownHookManager: Deleting directory /tmp/spark-daa6e6c5-3506-49eb-ae1a-59432da6fa89
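
The message itself points at the root cause: hdfs:///user/spark/applicationHistory has a scheme but no host, so the HDFS client does not know which NameNode to contact. This path presumably comes from the event-log settings baked into the image; a hedged sketch of the broken versus fixed spark-defaults.conf entries (values illustrative, the actual NameNode address is confirmed further below):

# spark-defaults.conf (assumed)
spark.eventLog.enabled  true
# broken: no host in the URI, triggers "Incomplete HDFS URI, no host"
# spark.eventLog.dir    hdfs:///user/spark/applicationHistory
# fixed: fully qualified NameNode URI
spark.eventLog.dir      hdfs://192.168.183.50:9000/user/spark/applicationHistory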

Revert the Spark config to the stock, unmodified settings, rebuild the image, and push it to the private registry:

bin/docker-image-tool.sh -r 192.168.183.60:5000 -t spark_not_modify build

  • mkdir -p /opt/spark
  • mkdir -p /opt/spark/examples
  • mkdir -p /opt/spark/work-dir
  • touch /opt/spark/RELEASE
  • rm /bin/sh
  • ln -sv /bin/bash /bin/sh
    '/bin/sh' -> '/bin/bash'
  • echo auth required pam_wheel.so use_uid
  • chgrp root /etc/passwd
  • chmod ug+rw /etc/passwd
  • rm -rf /var/cache/apt/archives

Successfully built 34484d863935
Successfully tagged 192.168.183.60:5000/spark:spark_not_modify

bin/spark-submit \
--master k8s://https://192.168.183.50:6443 \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=3 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.container.image=192.168.183.60:5000/spark:spark_not_modify \
local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar

The freshly built image still fails with the same cannot-connect-to-HDFS exception; the log is identical to the one above, so it is omitted here.

Next, switch the HDFS URI in Spark's config to an explicit host-plus-port form.
First, look up the HDFS address and port:
hdfs getconf -confKey fs.default.name

hdfs://hadoop101:9000
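
A side note: fs.default.name is the deprecated key name; on current Hadoop releases the canonical key is fs.defaultFS and returns the same value:

hdfs getconf -confKey fs.defaultFS
# hdfs://hadoop101:9000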

Rebuild the image:
bin/docker-image-tool.sh -r 192.168.183.60:5000 -t spark_hdfs_url build

docker push 192.168.183.60:5000/spark:spark_hdfs_url

curl http://192.168.183.60:5000/v2/_catalog
{"repositories":["spark","spark/spark"]}

2) List the tags for a given image:

curl http://192.168.183.60:5000/v2/spark/tags/list

bin/spark-submit \
--master k8s://https://192.168.183.50:6443 \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=3 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.container.image=192.168.183.60:5000/spark:spark_hdfs_url \
local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar

The error is as follows:
23/01/05 03:57:00 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoop101
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:320)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1938)
    at org.apache.spark.deploy.history.EventLogFileWriter.<init>(EventLogFileWriters.scala:60)
    at org.apache.spark.deploy.history.SingleEventLogFileWriter.<init>(EventLogFileWriters.scala:213)
    at org.apache.spark.deploy.history.EventLogFileWriter$.apply(EventLogFileWriters.scala:181)
    at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:66)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:609)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.base/java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.UnknownHostException: hadoop101
    ... 35 more
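
The driver pod resolves hostnames through the cluster DNS and its own /etc/hosts, neither of which knows hadoop101, hence the UnknownHostException. This is easy to confirm from inside the cluster with a throwaway pod (hypothetical pod name):

kubectl run dns-check --rm -it --image=busybox --restart=Never -- nslookup hadoop101
# fails: cluster DNS has no record for hadoop101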

Change hadoop101 to the IP address 192.168.183.50. (An alternative that avoids baking the IP into the image is sketched below.)
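
For the record, that alternative: Spark 3.x can attach a pod template to the driver and executors, so a hostAliases entry could map hadoop101 to the NameNode IP at the pod level instead. A sketch, not the route taken here:

# pod-template.yaml (hypothetical)
apiVersion: v1
kind: Pod
spec:
  hostAliases:
    - ip: "192.168.183.50"
      hostnames:
        - "hadoop101"

Then pass it at submission time:

--conf spark.kubernetes.driver.podTemplateFile=pod-template.yaml
--conf spark.kubernetes.executor.podTemplateFile=pod-template.yaml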

Rebuild and push the image:

bin/docker-image-tool.sh -r 192.168.183.60:5000 -t spark_hdfs_url_ip build

docker push 192.168.183.60:5000/spark:spark_hdfs_url_ip

curl http://192.168.183.60:5000/v2/_catalog
{"repositories":["spark","spark/spark"]}

2) List the tags for a given image:

curl http://192.168.183.60:5000/v2/spark/tags/list

{"name":"spark","tags":["spark_hdfs_url","spark_hdfs_url_ip"]}

bin/spark-submit \
--master k8s://https://192.168.183.50:6443 \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=3 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.container.image=192.168.183.60:5000/spark:spark_hdfs_url_ip \
local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar

It finally worked at this step. Not easy, but we got there.
Log output after the job was submitted:

23/01/05 13:46:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
23/01/05 13:46:22 INFO k8s.SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
23/01/05 13:46:23 INFO features.KerberosConfDriverFeatureStep: You have not specified a krb5.conf file locally or via a ConfigMap. Make sure that you have the krb5.conf locally on the driver image.
23/01/05 13:46:23 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/module/spark-3.2.1/conf) : spark-env.sh
23/01/05 13:46:24 INFO submit.LoggingPodStatusWatcherImpl: State changed, new state:
pod name: spark-pi-f7770d858077542c-driver
namespace: default
labels: spark-app-selector -> spark-9ed19e1effa14052aac3f3a1ee0a8f26, spark-role -> driver
pod uid: bc876800-9054-4c20-a4a8-93d6bda11058
creation time: 2023-01-05T05:46:23Z
service account name: spark
volumes: hadoop-properties, spark-local-dir-1, spark-conf-volume-driver, spark-token-f846f
node name: hadoop102
start time: 2023-01-05T05:46:23Z
phase: Pending
container status:
container name: spark-kubernetes-driver
container image: 192.168.183.60:5000/spark:spark_hdfs_url_ip
container state: waiting
pending reason: ContainerCreating
23/01/05 13:46:24 INFO submit.LoggingPodStatusWatcherImpl: State changed, new state:
pod name: spark-pi-f7770d858077542c-driver
namespace: default
labels: spark-app-selector -> spark-9ed19e1effa14052aac3f3a1ee0a8f26, spark-role -> driver
pod uid: bc876800-9054-4c20-a4a8-93d6bda11058
creation time: 2023-01-05T05:46:23Z
service account name: spark
volumes: hadoop-properties, spark-local-dir-1, spark-conf-volume-driver, spark-token-f846f
node name: hadoop102
start time: 2023-01-05T05:46:23Z
phase: Pending
container status:
container name: spark-kubernetes-driver
container image: 192.168.183.60:5000/spark:spark_hdfs_url_ip
container state: waiting
pending reason: ContainerCreating
23/01/05 13:46:24 INFO submit.LoggingPodStatusWatcherImpl: Waiting for application spark-pi with submission ID default:spark-pi-f7770d858077542c-driver to finish…
23/01/05 13:46:25 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Pending)
23/01/05 13:46:25 INFO submit.LoggingPodStatusWatcherImpl: State changed, new state:
pod name: spark-pi-f7770d858077542c-driver
namespace: default
labels: spark-app-selector -> spark-9ed19e1effa14052aac3f3a1ee0a8f26, spark-role -> driver
pod uid: bc876800-9054-4c20-a4a8-93d6bda11058
creation time: 2023-01-05T05:46:23Z
service account name: spark
volumes: hadoop-properties, spark-local-dir-1, spark-conf-volume-driver, spark-token-f846f
node name: hadoop102
start time: 2023-01-05T05:46:23Z
phase: Running
container status:
container name: spark-kubernetes-driver
container image: 192.168.183.60:5000/spark/spark:my_spark
container state: running
container started at: 2023-01-05T05:46:25Z
23/01/05 13:46:26 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:27 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:28 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:29 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:30 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:31 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:32 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:33 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:34 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:35 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:36 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:37 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:38 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:39 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:40 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:41 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:42 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:43 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Running)
23/01/05 13:46:43 INFO submit.LoggingPodStatusWatcherImpl: State changed, new state:
pod name: spark-pi-f7770d858077542c-driver
namespace: default
labels: spark-app-selector -> spark-9ed19e1effa14052aac3f3a1ee0a8f26, spark-role -> driver
pod uid: bc876800-9054-4c20-a4a8-93d6bda11058
creation time: 2023-01-05T05:46:23Z
service account name: spark
volumes: hadoop-properties, spark-local-dir-1, spark-conf-volume-driver, spark-token-f846f
node name: hadoop102
start time: 2023-01-05T05:46:23Z
phase: Succeeded
container status:
container name: spark-kubernetes-driver
container image: 192.168.183.60:5000/spark/spark:my_spark
container state: terminated
container started at: 2023-01-05T05:46:25Z
container finished at: 2023-01-05T05:46:42Z
exit code: 0
termination reason: Completed
23/01/05 13:46:43 INFO submit.LoggingPodStatusWatcherImpl: Application status for spark-9ed19e1effa14052aac3f3a1ee0a8f26 (phase: Succeeded)
23/01/05 13:46:43 INFO submit.LoggingPodStatusWatcherImpl: Container final statuses:

 container name: spark-kubernetes-driver
 container image: 192.168.183.60:5000/spark/spark:my_spark
 container state: terminated
 container started at: 2023-01-05T05:46:25Z
 container finished at: 2023-01-05T05:46:42Z
 exit code: 0
 termination reason: Completed

23/01/05 13:46:43 INFO submit.LoggingPodStatusWatcherImpl: Application spark-pi with submission ID default:spark-pi-f7770d858077542c-driver finished
23/01/05 13:46:43 INFO util.ShutdownHookManager: Shutdown hook called
23/01/05 13:46:43 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-acf8a434-76cb-44a6-ad84-4f89f0676b38
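
A side note on the watcher output above: in cluster mode, spark-submit tails the driver pod until the application finishes, which is where all the repeated "(phase: Running)" lines come from. For fire-and-forget submissions, Spark exposes a setting for that (optional, not used in this walkthrough):

--conf spark.kubernetes.submission.waitAppCompletion=false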

Check the pod status:

[root@hadoop101 spark-3.2.1]# kubectl get pods
NAME                               READY   STATUS      RESTARTS   AGE
spark-pi-e5bacf85801314d0-driver   0/1     Error       0          110m
spark-pi-f7770d858077542c-driver   0/1     Completed   0          31s
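
Driver pods stick around after they terminate so their logs stay inspectable; Spark does not garbage-collect them. The Error pod left over from the earlier failed run can be removed by hand:

kubectl delete pod spark-pi-e5bacf85801314d0-driver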

Check the driver logs:

[root@hadoop101 spark-3.2.1]# kubectl logs spark-pi-f7770d858077542c-driver
++ id -u
+ myuid=185
++ id -g
+ mygid=0
+ set +e
++ getent passwd 185
+ uidentry=
+ set -e
+ '[' -z '' ']'
+ '[' -w /etc/passwd ']'
+ echo '185:x:185:0:anonymous uid:/opt/spark:/bin/false'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -n '' ']'
+ '[' -z x ']'
+ SPARK_CLASSPATH='/opt/hadoop/conf::/opt/spark/jars/*'
+ '[' -z x ']'
+ SPARK_CLASSPATH='/opt/spark/conf:/opt/hadoop/conf::/opt/spark/jars/*'
+ case "$1" in
+ shift 1
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
+ exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=10.244.1.158 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.examples.SparkPi local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/opt/spark/jars/spark-unsafe_2.12-3.2.1.jar) to constructor java.nio.DirectByteBuffer(long,int)
    WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    23/01/05 05:46:27 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    23/01/05 05:46:27 INFO SparkContext: Running Spark version 3.2.1
    23/01/05 05:46:27 INFO ResourceUtils: ==============================================================
    23/01/05 05:46:27 INFO ResourceUtils: No custom resources configured for spark.driver.
    23/01/05 05:46:27 INFO ResourceUtils: ==============================================================
    23/01/05 05:46:27 INFO SparkContext: Submitted application: Spark Pi
    23/01/05 05:46:27 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
    23/01/05 05:46:27 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
    23/01/05 05:46:27 INFO ResourceProfileManager: Added ResourceProfile id: 0
    23/01/05 05:46:27 INFO SecurityManager: Changing view acls to: 185,root
    23/01/05 05:46:27 INFO SecurityManager: Changing modify acls to: 185,root
    23/01/05 05:46:27 INFO SecurityManager: Changing view acls groups to:
    23/01/05 05:46:27 INFO SecurityManager: Changing modify acls groups to:
    23/01/05 05:46:27 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(185, root); groups with view permissions: Set(); users with modify permissions: Set(185, root); groups with modify permissions: Set()
    23/01/05 05:46:27 INFO Utils: Successfully started service 'sparkDriver' on port 7078.
    23/01/05 05:46:27 INFO SparkEnv: Registering MapOutputTracker
    23/01/05 05:46:28 INFO SparkEnv: Registering BlockManagerMaster
    23/01/05 05:46:28 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
    23/01/05 05:46:28 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
    23/01/05 05:46:28 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
    23/01/05 05:46:28 INFO DiskBlockManager: Created local directory at /var/data/spark-167eeae4-e9ce-4d9f-89d2-07a0209d03f3/blockmgr-325dd7e5-88ab-475c-8fe8-74507e838ed9
    23/01/05 05:46:28 INFO MemoryStore: MemoryStore started with capacity 413.9 MiB
    23/01/05 05:46:28 INFO SparkEnv: Registering OutputCommitCoordinator
    23/01/05 05:46:28 INFO Utils: Successfully started service 'SparkUI' on port 4040.
    23/01/05 05:46:28 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://spark-pi-f7770d858077542c-driver-svc.default.svc:4040
    23/01/05 05:46:28 INFO SparkContext: Added JAR local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar at file:/opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar with timestamp 1672897587336
    23/01/05 05:46:28 INFO SparkContext: The JAR local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar at file:/opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar has been added already. Overwriting of added jar is not supported in the current version.
    23/01/05 05:46:28 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
    23/01/05 05:46:29 INFO ExecutorPodsAllocator: Going to request 3 executors from Kubernetes for ResourceProfile Id: 0, target: 3, known: 0, sharedSlotFromPendingPods: 2147483647.
    23/01/05 05:46:29 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh
    23/01/05 05:46:29 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh
    23/01/05 05:46:29 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
    23/01/05 05:46:30 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 7079.
    23/01/05 05:46:30 INFO NettyBlockTransferService: Server created on spark-pi-f7770d858077542c-driver-svc.default.svc:7079
    23/01/05 05:46:30 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
    23/01/05 05:46:30 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, spark-pi-f7770d858077542c-driver-svc.default.svc, 7079, None)
    23/01/05 05:46:30 INFO BlockManagerMasterEndpoint: Registering block manager spark-pi-f7770d858077542c-driver-svc.default.svc:7079 with 413.9 MiB RAM, BlockManagerId(driver, spark-pi-f7770d858077542c-driver-svc.default.svc, 7079, None)
    23/01/05 05:46:30 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, spark-pi-f7770d858077542c-driver-svc.default.svc, 7079, None)
    23/01/05 05:46:30 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, spark-pi-f7770d858077542c-driver-svc.default.svc, 7079, None)
    23/01/05 05:46:30 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh
    23/01/05 05:46:30 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
    23/01/05 05:46:30 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh
    23/01/05 05:46:30 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
    23/01/05 05:46:31 INFO SingleEventLogFileWriter: Logging events to hdfs://192.168.183.50:9000/user/spark/applicationHistory/spark-9ed19e1effa14052aac3f3a1ee0a8f26.inprogress
    23/01/05 05:46:38 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.244.1.159:56286) with ID 2, ResourceProfileId 0
    23/01/05 05:46:38 INFO BlockManagerMasterEndpoint: Registering block manager 10.244.1.159:37122 with 413.9 MiB RAM, BlockManagerId(2, 10.244.1.159, 37122, None)
    23/01/05 05:46:39 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.244.2.138:42788) with ID 1, ResourceProfileId 0
    23/01/05 05:46:39 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
    ...
    23/01/05 05:46:41 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
    23/01/05 05:46:41 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed.
    23/01/05 05:46:42 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
    23/01/05 05:46:42 INFO MemoryStore: MemoryStore cleared
    23/01/05 05:46:42 INFO BlockManager: BlockManager stopped
    23/01/05 05:46:42 INFO BlockManagerMaster: BlockManagerMaster stopped
    23/01/05 05:46:42 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
    23/01/05 05:46:42 INFO SparkContext: Successfully stopped SparkContext
    23/01/05 05:46:42 INFO ShutdownHookManager: Shutdown hook called
    23/01/05 05:46:42 INFO ShutdownHookManager: Deleting directory /var/data/spark-167eeae4-e9ce-4d9f-89d2-07a0209d03f3/spark-d73204f1-02b2-4cfe-824f-15c2d9b04584
    23/01/05 05:46:42 INFO ShutdownHookManager: Deleting directory /tmp/spark-de42fd29-a049-4395-8a90-8c9c6c5f4a6a
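
Two final checks. The event-log file written by the driver (see the SingleEventLogFileWriter line above) should now exist in HDFS, and SparkPi's result line can be grepped out of the driver log:

hdfs dfs -ls hdfs://192.168.183.50:9000/user/spark/applicationHistory

kubectl logs spark-pi-f7770d858077542c-driver | grep "Pi is roughly"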