k8s || Starting Pods and Namespaces

In a Kubernetes environment, kubectl is used to create an nginx Deployment with 3 replicas, managed by the Deployment controller. A namespace is then created for resource isolation, and a ResourceQuota is defined to cap memory and CPU usage inside that namespace. When an attempt is made to create a new pod that would exceed the quota, the operation is forbidden by the API server, keeping resource usage under control.

Creating pods with kubectl

Create 3 nginx pods

-r 3                shorthand for --replicas=3: run 3 replicas
--image=nginx       the image the pods will use
create deployment   creates a Deployment controller that deploys nginx for us and keeps 3 replicas running
[root@k8smaster ~]# kubectl create deployment k8s-nginx --image=nginx -r 3
deployment.apps/k8s-nginx created
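
If your kubectl does not accept the -r/--replicas flag for create deployment (it appeared around v1.19), a two-step sketch achieves the same result: create the deployment first, then scale it.

kubectl create deployment k8s-nginx --image=nginx
kubectl scale deployment k8s-nginx --replicas=3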

[root@k8smaster ~]# kubectl get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
k8s-nginx   3/3     3            3           117s

[root@k8smaster ~]# kubectl get rs
NAME                   DESIRED   CURRENT   READY   AGE
k8s-nginx-75f95db655   3         3         3       26h

[root@k8smaster ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
k8s-nginx-75f95db655-8gpbd   1/1     Running   1          26h
k8s-nginx-75f95db655-t9lbv   1/1     Running   0          99m
k8s-nginx-75f95db655-x5ncm   1/1     Running   0          99m

[root@k8smaster ~]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
k8s-nginx-75f95db655-8gpbd   1/1     Running   1          26h   10.244.249.12   k8snode1   <none>           <none>
k8s-nginx-75f95db655-t9lbv   1/1     Running   0          99m   10.244.249.13   k8snode1   <none>           <none>
k8s-nginx-75f95db655-x5ncm   1/1     Running   0          99m   10.244.249.14   k8snode1   <none>           <none>
[root@k8snode1 ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
nginx                                                             latest     605c77e624dd   15 months ago   141MB

[root@k8snode2 ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
nginx                                                             latest     605c77e624dd   15 months ago   141MB

Namespaces: essentially resource isolation

A namespace (Namespace) provides a mechanism for dividing the resources of a single cluster into mutually isolated groups.

Namespaces are a way to divide cluster resources among multiple users.

1. Create a namespace

[root@k8smaster ~]# kubectl create namespace quota-mem-cpu-example
namespace/quota-mem-cpu-example created
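
Equivalently, the same namespace could be created declaratively from a manifest (a minimal sketch; ns.yaml is an illustrative file name):

apiVersion: v1
kind: Namespace
metadata:
  name: quota-mem-cpu-example

kubectl apply -f ns.yaml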

[root@k8smaster ~]# kubectl get ns
NAME                    STATUS   AGE
default                 Active   22h
kube-node-lease         Active   22h
kube-public             Active   22h
kube-system             Active   22h
quota-mem-cpu-example   Active   4s
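
To avoid typing --namespace=quota-mem-cpu-example on every command that follows, you can optionally make it the default namespace of the current context (purely a convenience; the steps below keep the explicit flag):

kubectl config set-context --current --namespace=quota-mem-cpu-example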

2. Create a ResourceQuota object and assign its values

[root@k8smaster ~]# vim quota-mem-cpu.yaml
[root@k8smaster ~]# cat quota-mem-cpu.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi

Binding the namespace and the quota object: a ResourceQuota takes effect in the namespace it is created in, and the hard values above cap the sum of the requests and limits of all pods in that namespace.

[root@k8smaster ~]# ls
quota-mem-cpu.yaml

3. Create the ResourceQuota

[root@k8smaster ~]# kubectl apply -f quota-mem-cpu.yaml --namespace=quota-mem-cpu-example
resourcequota/mem-cpu-demo created

4. View the ResourceQuota details

[root@k8smaster ~]# kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml
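
For a more human-readable summary of the hard limits and current usage, kubectl describe shows the same information as a table:

kubectl describe resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example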

5. Create a Pod

[root@k8smaster ~]# cat quota-mem-cpu-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"

[root@k8smaster ~]# kubectl apply -f quota-mem-cpu-pod.yaml --namespace=quota-mem-cpu-example

6. Confirm the Pod is running and its container is healthy

[root@k8smaster ~]# kubectl get pod quota-mem-cpu-demo --namespace=quota-mem-cpu-example
NAME                 READY   STATUS    RESTARTS   AGE
quota-mem-cpu-demo   1/1     Running   1          24h

7. View the ResourceQuota details again

[root@k8smaster ~]# kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ResourceQuota","metadata":{"annotations":{},"name":"mem-cpu-demo","namespace":"quota-mem-cpu-example"},"spec":{"hard":{"limits.cpu":"2","limits.memory":"2Gi","requests.cpu":"1","requests.memory":"1Gi"}}}
  creationTimestamp: "2023-03-24T08:54:32Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:hard:
          .: {}
          f:limits.cpu: {}
          f:limits.memory: {}
          f:requests.cpu: {}
          f:requests.memory: {}
        f:used: {}
    manager: kube-controller-manager
    operation: Update
    time: "2023-03-24T08:54:32Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:hard:
          .: {}
          f:limits.cpu: {}
          f:limits.memory: {}
          f:requests.cpu: {}
          f:requests.memory: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2023-03-24T08:54:32Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:used:
          f:limits.cpu: {}
          f:limits.memory: {}
          f:requests.cpu: {}
          f:requests.memory: {}
    manager: kube-apiserver
    operation: Update
    time: "2023-03-24T09:02:02Z"
  name: mem-cpu-demo
  namespace: quota-mem-cpu-example
  resourceVersion: "38884"
  uid: e8b112a8-8137-403b-9066-7a5779204c25
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.cpu: "1"
    requests.memory: 1Gi
status:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.cpu: "1"
    requests.memory: 1Gi
  used:
    limits.cpu: 800m
    limits.memory: 800Mi
    requests.cpu: 400m
    requests.memory: 600Mi

The output shows the quota and how much of it has already been used. You can see that the Pod's memory and CPU requests and limits do not exceed the quota.

[root@k8smaster ~]# kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example -o jsonpath='{ .status.used }'
{"limits.cpu":"800m","limits.memory":"800Mi","requests.cpu":"400m","requests.memory":"600Mi"}[root@k8smaster ~]# 

8. Try to create a second Pod

[root@k8smaster ~]# cat quota-mem-cpu-pod-2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo-2
spec:
  containers:
  - name: quota-mem-cpu-demo-2-ctr
    image: redis
    resources:
      limits:
        memory: "1Gi"
        cpu: "800m"
      requests:
        memory: "700Mi"
        cpu: "400m"

[root@k8smaster ~]# kubectl apply -f quota-mem-cpu-pod-2.yaml --namespace=quota-mem-cpu-example
Error from server (Forbidden): error when creating "quota-mem-cpu-pod-2.yaml": pods "quota-mem-cpu-demo-2" is forbidden: exceeded quota: mem-cpu-demo, requested: requests.memory=700Mi, used: requests.memory=600Mi, limited: requests.memory=1Gi
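
The rejection follows from the quota arithmetic: 600Mi of requests.memory is already in use, and the second pod asks for another 700Mi, so 600Mi + 700Mi = 1300Mi would exceed the 1Gi (1024Mi) hard limit. The CPU request would still have fit (400m used + 400m requested = 800m ≤ 1), but a violation on any single resource is enough for the API server to forbid the pod.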

9. Delete your namespace

kubectl delete namespace quota-mem-cpu-example
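
Deleting the namespace also removes everything inside it, including the ResourceQuota and the pods created above.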