PVC
A Persistent Volume Claim (PVC) in Kubernetes is a request for a specific amount of storage. PVCs let workloads consume persistent storage and, together with a Storage Class, can trigger dynamic provisioning of volumes in the cluster.
Here is a brief explanation of the components and concepts related to PVCs:
- Persistent Volume (PV): A PV is a piece of networked storage in the cluster, such as a physical disk or a network share. It is a resource that is provisioned by the cluster administrator and made available to be claimed by PVCs.
- Storage Class: A Storage Class defines the properties and parameters for dynamically provisioning a PV. It allows you to define the type, size, and access mode of the storage.
- Persistent Volume Claim (PVC): A PVC is a request for storage defined in terms of size and access mode (e.g., ReadWriteOnce, ReadOnlyMany, ReadWriteMany). When a pod needs persistent storage, it references a PVC; Kubernetes either binds the claim to a matching pre-provisioned PV or, if a Storage Class applies, dynamically provisions one.
- Binding: When a PVC is created, Kubernetes attempts to find an available PV that satisfies the requirements specified in the PVC. If a suitable PV is found, the PVC gets bound to the PV.
- Mounting: Once a PVC is bound to a PV, the pod’s container can mount the PVC as a volume to access persistent storage. This allows the data to survive pod restarts and be shared across multiple pods if the PVC is created with an appropriate access mode.
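For reference, a statically provisioned PV might look like the sketch below. Note that a PV declares `capacity`, not resource requests, and the `hostPath` backend and name here are purely illustrative; production PVs normally use networked storage such as NFS, Ceph, or a cloud disk:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv          # hypothetical name
spec:
  capacity:
    storage: 10Gi           # PVs declare capacity, not resources.requests
  accessModes:
  - ReadWriteOnce
  hostPath:                 # illustrative backend only
    path: /mnt/data
```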
To use a PVC:
- Define a Storage Class with the desired properties and parameters (optional if your cluster has a default Storage Class).
- Create a PVC that requests a specific amount of storage resources using the desired Storage Class.
- Create a pod and configure it to mount the PVC as a volume.
- When the pod is created, Kubernetes will automatically provision and bind a suitable PV to the PVC, and the pod’s container can use the PVC as a persistent storage volume.
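Step 1 above can be sketched as a Storage Class manifest. The provisioner and its parameters depend entirely on your cluster, so treat the values below (an in-tree AWS EBS provisioner with a `gp2` volume type) as placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                    # hypothetical name
provisioner: kubernetes.io/aws-ebs   # assumption: varies per cluster
parameters:
  type: gp2                           # provisioner-specific parameter
reclaimPolicy: Delete
```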
PVCs help to decouple the applications from the underlying storage infrastructure, allowing for more flexibility and portable deployments.
Maven POM dependency
<dependency>
    <groupId>io.kubernetes</groupId>
    <artifactId>client-java</artifactId>
    <version>7.0.0</version>
</dependency>
Create a Pod && mount a PVC && configure the image pull policy
The Pod mounts the claim through the volumeMounts field of its container spec (volumeMounts is not a standalone resource kind), and the claim itself is a separate PersistentVolumeClaim object:

apiVersion: v1
kind: Pod
metadata:
  name: li55
  namespace: default
spec:
  containers:
  - name: li55
    image: your-harbor-image-name
    imagePullPolicy: Always
    command:
    - java
    - -jar
    - /workspace/microDataTask.jar
    resources:
      limits:
        cpu: 2
        memory: 2Gi
      requests:
        cpu: 2
        memory: 2Gi
    volumeMounts:
    - name: li55
      mountPath: /home/li55
      readOnly: false
  volumes:
  - name: li55
    persistentVolumeClaim:
      claimName: li55
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: li55
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
The same Pod can be built programmatically with the Kubernetes Java client:

// Assumes io.kubernetes:client-java (see the POM dependency above);
// from 7.x onward the model and builder classes live under openapi.models.
import io.kubernetes.client.custom.Quantity;
import io.kubernetes.client.openapi.models.*;

import java.util.LinkedList;
import java.util.List;

V1ResourceRequirements resourceRequirements = new V1ResourceRequirementsBuilder()
        .addToLimits("cpu", new Quantity("2"))
        .addToLimits("memory", new Quantity("2Gi"))
        .addToRequests("cpu", new Quantity("2"))
        .addToRequests("memory", new Quantity("2Gi"))
        .build();

List<V1VolumeMount> volumeMountList = new LinkedList<>();
// Mount the Ceph-backed path that holds the jar
V1VolumeMount workPath = new V1VolumeMountBuilder()
        .withName("li55")
        .withMountPath("/home/li55")
        .withReadOnly(false)
        .build();
volumeMountList.add(workPath);

List<V1Volume> volumeList = new LinkedList<>();
V1Volume dataVolume = new V1VolumeBuilder()
        .withName("li55")
        .withNewPersistentVolumeClaim()
        .withClaimName("li55")
        .endPersistentVolumeClaim()
        .build();
volumeList.add(dataVolume);

// podName, namespace, and json are supplied by the calling code
V1Pod pod = new V1PodBuilder()
        .withApiVersion("v1")
        .withKind("Pod")
        .withNewMetadata()
        .withName(podName)
        .withNamespace(namespace)
        .endMetadata()
        .withNewSpec()
        .addNewContainer()
        .withName(podName)
        .withImage("your-harbor-image-name")
        .withImagePullPolicy("Always") // Always or IfNotPresent
        .withCommand("java", "-jar", "/workspace/microDataTask.jar", json.toJSONString())
        .withResources(resourceRequirements)
        .withVolumeMounts(volumeMountList)
        .endContainer()
        .withVolumes(volumeList)
        // .withEnv(envVarList)
        .withRestartPolicy("Never")
        .withTerminationGracePeriodSeconds(2L)
        .endSpec()
        .build();