If the storage is always full, check a few things here before buying more disks.
Check the Linux Machine and Find Large Files
Check each node and find the largest files:
oc debug node/<nodename>
chroot /host
df -h
lsblk
du -a /dir/ | sort -n -r | head -n 20
du -a /dir/ | sort -n -r | head -n 20 shows the top 20 entries under /dir/, from largest to smallest.
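If you only care about individual large files, a quick alternative is find; this is just a sketch, with a 1 GiB threshold as an example and -xdev to stay on one filesystem:
find / -xdev -type f -size +1G -exec ls -lh {} \; 2>/dev/null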
Automatically Pruning Images
We can create a CronJob to prune unnecessary images automatically. The CronJob looks roughly like this:
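This is a minimal sketch, not the authoritative manifest: the name image-pruner, the pruner service account (which needs the system:image-pruner cluster role), and the CLI image are assumptions, so follow the official doc below for the exact version.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: image-pruner                 # assumed name
  namespace: openshift-image-registry
spec:
  schedule: "0 0 * * *"              # daily at midnight
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          serviceAccountName: pruner # assumed SA bound to system:image-pruner
          containers:
          - name: image-pruner
            image: quay.io/openshift/origin-cli:latest   # any image with the oc binary
            command:
            - oc
            - adm
            - prune
            - images
            - --confirm
            - --keep-tag-revisions=3
            - --keep-younger-than=60m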
Official Doc
A CronJob created by following the official doc is usually fine, but verify it just in case.
The command to check CronJobs:
oc get cronjob -A -o yaml
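To confirm the pruner actually fires, you can also list its recent Jobs; the namespace here matches the example manifest above and is an assumption:
oc -n openshift-image-registry get jobs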
Check the Storage Class and Make Sure the PV Is Configured Correctly
The command to check the storage class:
oc get storageclass -o yaml
The output tells you whether the storage is internal or external.
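For a quicker view, this sketch prints only each class's provisioner, which is what identifies the backend:
oc get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner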
We can check "Logging" to see whether the PV is configured correctly. The precondition, of course, is that OpenShift Logging and Elasticsearch are already installed on the cluster.
Make sure the ClusterLogging CR is bound to a PVC.
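For example, a sketch assuming the default openshift-logging namespace and the standard component=elasticsearch pod label; it also sets the variables used by the command below:
oc project openshift-logging
oc get pvc                           # each Elasticsearch node should show a Bound PVC
es_pod=$(oc get pods -l component=elasticsearch -o name | head -n 1)
index=<index-name>                   # placeholder; substitute a real index name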
oc exec -c elasticsearch $es_pod -- ls -lR /elasticsearch/persistent/logging-es/data/logging-es/nodes/0/indices/$index
Configure the Log Curator
Precondition: OpenShift Logging and Elasticsearch are already installed.
We can edit the ClusterLogging CR to configure the curator:
oc edit clusterlogging instance
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
...
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
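After saving, the operator should reconcile the curator CronJob. A sketch to verify, assuming the default openshift-logging namespace and the default CronJob name curator:
oc -n openshift-logging get cronjob curator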
Lower the kubelet log level
A verbose kubelet can fill the node journal, so reducing its verbosity (and trimming the journal) frees disk space.
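A sketch for checking the current verbosity and reclaiming journal space from a node debug shell; the 500M vacuum target is illustrative, and on OpenShift 4 the verbosity itself should be changed declaratively (e.g. via a KubeletConfig) rather than edited on the node:
oc debug node/<nodename>
chroot /host
systemctl cat kubelet | grep -- '--v='   # current kubelet verbosity flag, if set
journalctl --disk-usage                  # space the journal currently uses
journalctl --vacuum-size=500M            # shrink the journal to ~500M (illustrative)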
Prune with podman/crictl
These run on the node itself (from the oc debug / chroot shell above) and free node-local image storage:
sudo podman system prune -a
sudo podman system prune -a -f
crictl rmi --prune
The podman/crictl commands remove unused local images (-f skips the confirmation prompt). The command below instead prunes image objects via the cluster API; it is a dry run until you add --confirm, and with --prune-registry=false it removes only the API objects, leaving registry storage untouched:
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --prune-registry=false --ignore-invalid-refs=true