Kubernetes: Pod and ReplicationController (RC) Resources

Creating a Pod resource

A Pod is the smallest resource unit in Kubernetes. Any k8s resource can be defined by a YAML manifest. The main components of a k8s YAML file:

apiVersion: v1   # API version
kind: Pod        # resource type
metadata:        # attributes
spec:            # details

On the master node:

1. Create a directory for pod manifests:

mkdir -p k8s_yaml/pod && cd k8s_yaml/pod

2. Write the YAML:

[root@k8s-master k8s_yaml]# cat k8s_pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80

Note: edit /etc/kubernetes/apiserver, remove ServiceAccount from the KUBE_ADMISSION_CONTROL line, then restart the apiserver:

systemctl restart kube-apiserver.service

3. Create the resource:

[root@k8s-master pod]# kubectl create -f k8s_pod.yml

4. Check the resource:

[root@k8s-master pod]# kubectl get pod
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          52s

Check which node it was scheduled to:

[root@k8s-master pod]# kubectl get pod -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          1m                  10.0.0.13

5. Describe the resource:

[root@k8s-master pod]# kubectl describe pod nginx
Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"

The fix follows below.
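The four top-level fields above (apiVersion, kind, metadata, spec) form the skeleton of every k8s manifest. A minimal sketch of checking that skeleton, with a plain Python dict standing in for the parsed YAML (the dict contents mirror the manifest above; the function name is illustrative):

```python
# A parsed k8s manifest is just a nested mapping; these are the four
# top-level fields every resource definition starts from.
pod_manifest = {
    "apiVersion": "v1",   # API version
    "kind": "Pod",        # resource type
    "metadata": {"name": "nginx", "labels": {"app": "web"}},
    "spec": {             # details
        "containers": [
            {"name": "nginx",
             "image": "10.0.0.11:5000/nginx:1.13",
             "ports": [{"containerPort": 80}]}
        ]
    },
}

def validate_manifest(m):
    """Return the list of required top-level fields missing from m."""
    required = ("apiVersion", "kind", "metadata", "spec")
    return [k for k in required if k not in m]

print(validate_manifest(pod_manifest))  # → []
```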
6. On the node (k8s-node-2), upload pod-infrastructure-latest.tar.gz and the nginx image archive, then push both to the local registry:

[root@k8s-node-2 ~]# docker load -i pod-infrastructure-latest.tar.gz
df9d2808b9a9: Loading layer [==================================================>] 202.3 MB/202.3 MB
0a081b45cb84: Loading layer [==================================================>] 10.24 kB/10.24 kB
ba3d4cbbb261: Loading layer [==================================================>] 12.51 MB/12.51 MB
Loaded image: docker.io/tianyebj/pod-infrastructure:latest
[root@k8s-node-2 ~]# docker tag docker.io/tianyebj/pod-infrastructure:latest 10.0.0.11:5000/pod-infrastructure:latest
[root@k8s-node-2 ~]# docker push 10.0.0.11:5000/pod-infrastructure:latest
[root@k8s-node-2 ~]# docker load -i docker_nginx1.13.tar.gz
[root@k8s-node-2 ~]# docker tag docker.io/nginx:1.13 10.0.0.11:5000/nginx:1.13
[root@k8s-node-2 ~]# docker push 10.0.0.11:5000/nginx:1.13

On the master node:

kubectl describe pod nginx    # check the pod's status description
kubectl get pod

7. On both node1 and node2:

7.1 Change the infrastructure image address:

vim /etc/kubernetes/kubelet
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"

7.2 Restart the service:

systemctl restart kubelet.service
8. Verify on the master node:

[root@k8s-master k8s_yaml]# kubectl get pod -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
nginx     1/1       Running   0          3m        172.18.92.2   10.0.0.13

Why does creating one pod make k8s start two containers? The business container (nginx) plus the infrastructure container. The infrastructure container is what makes the pod the smallest resource unit in k8s and provides the hooks for k8s's higher-level features.

A pod consists of at least two containers: the pod infrastructure container plus one or more business containers (at most 1+4).

Pod config file 2 (a multi-container pod):

[root@k8s-master pod]# vim k8s_pod3.yml
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
    - name: busybox
      image: 10.0.0.11:5000/busybox:latest
      command: ["sleep","1000"]

[root@k8s-master pod]# kubectl create -f k8s_pod3.yml
pod "test" created
[root@k8s-master pod]# kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          54m
test      2/2       Running   0          30s
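The READY column only counts business containers, while docker on the node actually runs one extra container per pod (the infrastructure container). A toy sketch of that accounting (illustrative function names, not k8s code):

```python
# Sketch: a pod with N business containers shows READY "N/N" in
# kubectl output, but docker on the node runs N+1 containers,
# because the pod-infrastructure ("pause") container is not counted.
def ready_column(ready, total):
    """What 'kubectl get pod' shows for business containers."""
    return f"{ready}/{total}"

def docker_container_count(business_total):
    """One infra container per pod, plus the business containers."""
    return 1 + business_total

print(ready_column(2, 2))          # → 2/2 (the "test" pod above)
print(docker_container_count(2))   # → 3 containers on the node
```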

ReplicationController (RC) resources

RC's job: keep a specified number of pods alive at all times. The RC is associated with its pods through a label selector.
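The selector-to-pod association can be sketched as: a pod belongs to the RC if every key/value pair in the selector appears in the pod's labels, and the RC creates or deletes pods until the matched count equals `replicas`. A toy model of that logic (illustrative names, not the real controller code):

```python
def selector_matches(selector, pod_labels):
    """True if every selector key/value pair appears in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

def reconcile(selector, replicas, pods):
    """How many pods the RC would create (+) or delete (-) to converge."""
    owned = [p for p in pods if selector_matches(selector, p["labels"])]
    return replicas - len(owned)

pods = [
    {"name": "nginx-9b36r", "labels": {"app": "myweb"}},
    {"name": "test",        "labels": {"app": "web"}},  # not matched
]
print(reconcile({"app": "myweb"}, 5, pods))  # → 4 (four more pods needed)
```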

Creating an RC
[root@k8s-master k8s_yaml]# mkdir rc
[root@k8s-master k8s_yaml]# cd rc/
[root@k8s-master rc]# vim k8s_rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5        # 5 pods
  selector:
    app: myweb
  template:          # pod template
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80

[root@k8s-master rc]# kubectl create -f k8s_rc.yml
replicationcontroller "nginx" created
[root@k8s-master rc]# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
nginx     5         5         5         4s
[root@k8s-master rc]# kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
nginx         1/1       Running   0          30m
nginx-9b36r   1/1       Running   0          24s
nginx-jt31n   1/1       Running   0          24s
nginx-lhzgt   1/1       Running   0          24s
nginx-v8mzm   1/1       Running   0          24s
nginx-vcn83   1/1       Running   0          24s
nginx2        1/1       Running   0          11m
test          2/2       Running   0          8m
[root@k8s-master rc]# kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
nginx         1/1       Running   0          31m       172.18.7.2    10.0.0.13
nginx-9b36r   1/1       Running   0          59s       172.18.7.4    10.0.0.13
nginx-jt31n   1/1       Running   0          59s       172.18.81.3   10.0.0.12
nginx-lhzgt   1/1       Running   0          59s       172.18.7.5    10.0.0.13
nginx-v8mzm   1/1       Running   0          59s       172.18.81.4   10.0.0.12
nginx-vcn83   1/1       Running   0          59s       172.18.81.5   10.0.0.12
nginx2        1/1       Running   0          11m       172.18.81.2   10.0.0.12
test          2/2       Running   0          8m        172.18.7.3    10.0.0.13

Simulate a failure on node2:

[root@k8s-node-2 ~]# systemctl stop kubelet.service
[root@k8s-master rc]# kubectl get nodes
NAME        STATUS     AGE
10.0.0.12   Ready      9h
10.0.0.13   NotReady   9h

The master now detects that node2 has failed. k8s keeps trying to bring node2 back; if it stays down, the pods on it are "evicted" to another node. Strictly speaking this is not a migration: the RC starts brand-new pods elsewhere. That is the RC's purpose.

[root@k8s-master rc]# kubectl delete node 10.0.0.13
node "10.0.0.13" deleted

After the master deletes node2, the pods quickly reappear on the other node:

[root@k8s-master rc]# kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-jt31n   1/1       Running   0          5m        172.18.81.3   10.0.0.12
nginx-ml7j3   1/1       Running   0          14s       172.18.81.7   10.0.0.12   # new pod
nginx-v8mzm   1/1       Running   0          5m        172.18.81.4   10.0.0.12
nginx-vcn83   1/1       Running   0          5m        172.18.81.5   10.0.0.12
nginx-vkgmv   1/1       Running   0          14s       172.18.81.6   10.0.0.12   # new pod
nginx2        1/1       Running   0          16m       172.18.81.2   10.0.0.12

Start node2 again:

[root@k8s-node-2 ~]# systemctl start kubelet.service
[root@k8s-master rc]# kubectl get nodes
NAME        STATUS    AGE
10.0.0.12   Ready     9h
10.0.0.13   Ready     49s

New pods created from here on will prefer node2 until the two nodes are balanced; if the nodes are provisioned unequally, the scheduler prefers the better-provisioned one. If you delete a container directly on a node, kubelet restarts it automatically; this built-in self-healing is one of k8s's main strengths.

[root@k8s-master rc]# kubectl get pod -o wide --show-labels
[root@k8s-master rc]# kubectl get rc -o wide

Summary:
- kubelet only watches the docker containers on its own host; if a pod's container is deleted locally, it starts a new one.
- Cluster-wide, when the pod count drops (e.g. a node goes down), controller-manager starts new pods, with the api-server asking the scheduler to place them.
- RC and pods are associated via the label selector.
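The eviction behavior above can be sketched as a toy model: pods are not moved, they are recreated with new names and new IPs on surviving nodes until the replica count is restored (names, IPs, and function names here are illustrative):

```python
import random
import string

def replace_lost_pods(pods, dead_node, surviving_nodes, replicas, prefix="nginx"):
    """Drop pods from the dead node, then create new pods (new random
    names) on surviving nodes until `replicas` pods exist again."""
    alive = [p for p in pods if p["node"] != dead_node]
    while len(alive) < replicas:
        suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=5))
        alive.append({"name": f"{prefix}-{suffix}",
                      "node": random.choice(surviving_nodes)})
    return alive

# 5 replicas spread over two nodes, as in the output above.
pods = [{"name": f"nginx-{i}", "node": node}
        for i, node in enumerate(["10.0.0.12", "10.0.0.12", "10.0.0.12",
                                  "10.0.0.13", "10.0.0.13"])]
after = replace_lost_pods(pods, "10.0.0.13", ["10.0.0.12"], replicas=5)
print(len(after))  # → 5, all now on 10.0.0.12, two of them new
```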
Rolling upgrades with RC
0. Prerequisite: upload the nginx 1.15 image to a node and push it to the registry (docker load, then docker tag, then docker push):

[root@k8s-node-2 ~]# docker load -i docker_nginx1.15.tar.gz
[root@k8s-node-2 ~]# docker tag docker.io/nginx:latest 10.0.0.11:5000/nginx:1.15
[root@k8s-node-2 ~]# docker push 10.0.0.11:5000/nginx:1.15

1. Write the yml file for the upgraded version:

[root@k8s-master rc]# cat k8s_rc2.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx2
spec:
  replicas: 5            # 5 replicas
  selector:
    app: myweb2          # label selector
  template:
    metadata:
      labels:
        app: myweb2      # label
    spec:
      containers:
      - name: myweb
        image: 10.0.0.11:5000/nginx:1.15   # new version
        ports:
        - containerPort: 80

2. Rolling upgrade and verification:

[root@k8s-master rc]# kubectl rolling-update nginx -f k8s_rc2.yml --update-period=5s
Created nginx2
Scaling up nginx2 from 0 to 5, scaling down nginx from 5 to 0 (keep 5 pods available, don't exceed 6 pods)
Scaling nginx2 up to 1
Scaling nginx down to 4
Scaling nginx2 up to 2
Scaling nginx down to 3
Scaling nginx2 up to 3
Scaling nginx down to 2
Scaling nginx2 up to 4
Scaling nginx down to 1
Scaling nginx2 up to 5
Scaling nginx down to 0
Update succeeded. Deleting nginx
replicationcontroller "nginx" rolling updated to "nginx2"
[root@k8s-master rc]# kubectl get pod -o wide
NAME           READY     STATUS    RESTARTS   AGE       IP            NODE
nginx          1/1       Running   0          19m       172.18.7.2    10.0.0.13
nginx2         1/1       Running   0          38m       172.18.81.2   10.0.0.12
nginx2-0xhz7   1/1       Running   0          1m        172.18.81.7   10.0.0.12
nginx2-8psw5   1/1       Running   0          1m        172.18.7.5    10.0.0.13
nginx2-lqw6t   1/1       Running   0          56s       172.18.81.3   10.0.0.12
nginx2-w7jpn   1/1       Running   0          1m        172.18.7.3    10.0.0.13
nginx2-xntt8   1/1       Running   0          1m        172.18.7.4    10.0.0.13
[root@k8s-master rc]# curl -I 172.18.7.3
HTTP/1.1 200 OK
Server: nginx/1.15.5   # upgraded to 1.15
Date: Mon, 27 Jan 2020 10:50:00 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 02 Oct 2018 14:49:27 GMT
Connection: keep-alive
ETag: "5bb38577-264"
Accept-Ranges: bytes

3. Rollback and verification (rollback relies on the old yaml file):

[root@k8s-master rc]# kubectl rolling-update nginx2 -f k8s_rc.yml --update-period=2s
Created nginx
Scaling up nginx from 0 to 5, scaling down nginx2 from 5 to 0 (keep 5 pods available, don't exceed 6 pods)
Scaling nginx up to 1
Scaling nginx2 down to 4
Scaling nginx up to 2
Scaling nginx2 down to 3
Scaling nginx up to 3
Scaling nginx2 down to 2
Scaling nginx up to 4
Scaling nginx2 down to 1
Scaling nginx up to 5
Scaling nginx2 down to 0
Update succeeded. Deleting nginx2
replicationcontroller "nginx2" rolling updated to "nginx"
[root@k8s-master rc]# kubectl get pod -o wide
[root@k8s-master rc]# curl -I 172.18.81.5
HTTP/1.1 200 OK
Server: nginx/1.13.12   # rolled back to 1.13
Date: Mon, 27 Jan 2020 10:52:05 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes

Summary:
- If you forget to specify --update-period, the default update interval is one minute.
- Rollback relies mainly on keeping the old yaml file.
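The scaling log above follows a simple loop: bring one new pod up, then take one old pod down, so the total stays between the replica count and the replica count plus one. A minimal sketch of that loop (a toy reproduction of the log, not kubectl's implementation):

```python
def rolling_update(replicas):
    """Replay the one-up-one-down scaling sequence for `replicas` pods."""
    old, new, log = replicas, 0, []
    while old > 0:
        new += 1                            # bring one new pod up first...
        log.append(f"Scaling new up to {new}")
        assert old + new <= replicas + 1    # never exceed replicas+1 pods
        old -= 1                            # ...then take one old pod down
        log.append(f"Scaling old down to {old}")
        assert old + new >= replicas        # keep `replicas` pods available
    return log

for line in rolling_update(5):
    print(line)
```

With replicas=5 this prints the same alternating up/down pattern as the kubectl rolling-update output above.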

