How do I check total CPU usage?
top -bn 1 -i -c    # batch mode, one iteration, hide idle tasks, show full command lines
sar -u 1 5         # overall CPU utilization, 5 samples at 1-second intervals (note: -P 0 would restrict the report to core 0 only)
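Those commands print per-interval summaries; if you want a single total-usage number, a minimal sketch that derives it from two snapshots of `/proc/stat` (Linux-specific; assumes the standard field layout: user nice system idle iowait irq softirq steal ...):

```shell
# Total CPU usage (%) computed from two snapshots of the aggregate
# "cpu" line in /proc/stat.
cpu_usage() {
  printf '%s\n%s\n' "$1" "$2" | awk '
    { total = 0
      for (i = 2; i <= NF; i++) total += $i
      tot[NR]  = total
      idle[NR] = $5 + $6 }            # idle + iowait
    END {
      dt = tot[2] - tot[1]            # elapsed jiffies, all states
      di = idle[2] - idle[1]          # elapsed idle jiffies
      printf "%.1f\n", 100 * (dt - di) / dt }'
}

# Usage: sample /proc/stat twice, one second apart (Linux only)
if [ -r /proc/stat ]; then
  s1=$(head -n1 /proc/stat); sleep 1; s2=$(head -n1 /proc/stat)
  cpu_usage "$s1" "$s2"
fi
```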
I had a similar error. My analysis:
Pods on the same k8s node share the node's ephemeral storage, which (absent any special configuration) is what Spark uses to store temp data for its jobs (disk spill and shuffle data). The amount of ephemeral storage on a node is basically the free space on the node's local disk.
If some executor pods use up all of a node's ephemeral storage, other pods on that node will fail when they try to write to it. In your case the failing pod is the driver pod, but it could have been any other pod on that node. In my case it was an executor that failed with a similar error message.
I would try to optimize the Spark code first before changing the deployment configuration:
- reduce disk spill and shuffle writes
- split transforms if possible
- and increase the number of executors as a last resort :)
If you know upfront how much storage each executor needs, you could try setting the resource requests (and not limits) for ephemeral storage to the right amount.
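One way to do that on Spark 3.x is a pod template passed via `spark.kubernetes.executor.podTemplateFile`. A sketch (the `2Gi` figure is a placeholder you'd size to your job's actual spill/shuffle volume, and the container name assumes Spark's default executor container name):

```yaml
# executor-pod-template.yaml
# pass with: --conf spark.kubernetes.executor.podTemplateFile=executor-pod-template.yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: spark-kubernetes-executor   # assumed default executor container name
      resources:
        requests:
          ephemeral-storage: "2Gi"      # placeholder: size to your spill/shuffle needs
        # deliberately no ephemeral-storage limit, per the suggestion above
```

With a request set, the scheduler packs executors onto nodes that still have that much local storage available, instead of letting them pile onto one node and starve each other.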