Playing with K8s --- Harbor Registry Setup & Pod Resource Introduction

1: A Deep Dive into Pod Resource Management

1.1: Pod Resources

  • Characteristics:

    The smallest deployable unit

    A collection of one or more containers

    Containers in a Pod share the same network namespace

    Pods are ephemeral

  • Container types in a Pod:

    1: infrastructure container (the base, or "pause", container)

    Maintains the Pod's network namespace; it is invisible to users, but its network can be inspected on the node the Pod runs on

    2: initContainers (init containers)

    Run to completion before the business containers start (originally all containers in a Pod started in parallel; init containers changed that); typically used to do initialization work for the business containers

    3: containers (business containers)

    Run the actual application images and start in parallel
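The three types can be seen in a single manifest. A minimal sketch showing where init and business containers are declared (image names and commands are only illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:            # runs to completion before the app container starts
  - name: do-init-work
    image: busybox
    command: ["sh", "-c", "echo init work done"]
  containers:                # business container(s), started in parallel
  - name: app
    image: nginx
# the infrastructure ("pause") container is created automatically by the
# kubelet and never appears in the manifest
```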

1.2: Image Pull Policy

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: java
      image: lizhenliang/java-demo
      imagePullPolicy: IfNotPresent

The imagePullPolicy field has three possible values:

  • IfNotPresent: pull the image only if it is not already present on the node (the default when the image has a specific tag)

  • Always: pull the image every time the container starts (the default when the tag is :latest or no tag is given)

  • Never: never pull the image; it must already be present on the node
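The defaulting rule when imagePullPolicy is omitted can be sketched as a tiny shell function. This is only a local illustration of the rule, not the actual API-server code, and it ignores edge cases such as registry ports in the image name:

```shell
# Sketch of how Kubernetes defaults imagePullPolicy when it is omitted:
# ":latest" (or no tag at all) => Always, any other tag => IfNotPresent.
default_pull_policy() {
  image="$1"
  tag="${image##*:}"
  # no ":" at all (tag equals the whole name) or tag "latest" => Always
  if [ "$tag" = "$image" ] || [ "$tag" = "latest" ]; then
    echo "Always"
  else
    echo "IfNotPresent"
  fi
}

default_pull_policy "nginx"          # Always
default_pull_policy "nginx:latest"   # Always
default_pull_policy "nginx:1.14"     # IfNotPresent
```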

Pulling a public image works exactly as in the example above, but pulling a private image requires authenticating to the registry first, i.e. docker login. A K8s cluster has many nodes, and logging in on every one of them is clearly inconvenient. To solve this, K8s supports pulling images with stored credentials: the registry credentials are saved in a Secret and handed to the kubelet.
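What docker login actually stores is base64("user:password"). Assuming the default Harbor admin password Harbor12345 (which matches the auth string that appears in config.json later in this walkthrough), the stored value can be reproduced locally:

```shell
# docker login stores base64("<user>:<password>") as the "auth" value
# in ~/.docker/config.json; reproduce it for admin / Harbor12345
# (Harbor12345 is Harbor's default admin password - an assumption here)
echo -n 'admin:Harbor12345' | base64
# prints: YWRtaW46SGFyYm9yMTIzNDU=
```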

[root@master demo]# vim pod1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: Always           # always pull the image
      command: [ "echo", "SUCCESS" ]    # run this command, then the container exits
[root@master demo]# kubectl create -f pod1.yaml
pod/mypod created
[root@master demo]# kubectl get pods
mypod                          0/1     ContainerCreating   0          7s
[root@master demo]# kubectl logs mypod
SUCCESS
[root@master demo]# kubectl get pods "the Pod now shows Completed status"
mypod                          0/1     Completed   2          84s
[root@master demo]# kubectl get pods
mypod                          0/1     CrashLoopBackOff   4          3m31s
"CrashLoopBackOff means the container keeps exiting and Kubernetes keeps restarting it: here the container only runs echo SUCCESS and exits, so with the default restartPolicy of Always the kubelet restarts it over and over"
[root@master demo]# vim pod1.yaml 

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx:1.14
      imagePullPolicy: Always           # command field removed
[root@master demo]# kubectl apply -f pod1.yaml
pod/mypod created
[root@master demo]#  kubectl get pods
mypod                          1/1     Running   0          15s
[root@master demo]#  kubectl get pods -o wide
mypod                          1/1     Running   0          29s     172.17.96.4   192.168.100.180   <none>
//check the nginx version from the node the Pod runs on
[root@node1 ~]# curl -I 172.17.96.4
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Mon, 12 Oct 2020 11:41:08 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 04 Dec 2018 14:44:49 GMT
Connection: keep-alive
ETag: "5c0692e1-264"
Accept-Ranges: bytes

2: Deploying the Docker and docker-compose Environment

//install docker
[root@localhost ~]# hostnamectl set-hostname harbor
[root@localhost ~]# su
[root@harbor ~]# yum -y install yum-utils device-mapper-persistent-data lvm2
[root@harbor ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@harbor ~]# setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@harbor ~]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
[root@harbor ~]# sysctl -p
net.ipv4.ip_forward = 1
[root@harbor ~]# systemctl restart network
[root@harbor ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@harbor ~]# sudo tee /etc/docker/daemon.json <<-'EOF'
> {
>   "registry-mirrors": ["https://ye71id88.mirror.aliyuncs.com"]
> }
> EOF
[root@harbor ~]# sudo systemctl daemon-reload
[root@harbor ~]# yum -y install docker-ce
[root@harbor ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@harbor ~]# systemctl start docker.service
//install docker-compose
[root@harbor ~]# rz -E		"upload the docker-compose binary"
rz waiting to receive.
[root@harbor ~]# chmod +x docker-compose 
[root@harbor ~]# cp -p docker-compose /usr/local/bin/
"the docker-compose command should now tab-complete"
[root@harbor ~]# tar zxvf harbor-offline-installer-v1.2.2.tgz -C /usr/local
[root@harbor ~]# vim /usr/local/harbor/harbor.cfg
hostname = 192.168.100.200 "change this to your Harbor server's IP address"

[root@harbor ~]# cd /usr/local/bin
[root@harbor bin]# ls
docker-compose
[root@harbor bin]# cd /usr/local/harbor/
[root@harbor harbor]# ls
common                     docker-compose.yml     harbor.v1.2.2.tar.gz  NOTICE
docker-compose.clair.yml   harbor_1_1_0_template  install.sh            prepare
docker-compose.notary.yml  harbor.cfg             LICENSE               upgrade
[root@harbor harbor]# sh install.sh "start Harbor"
...
 ----Harbor has been installed and started successfully.----

Now you should be able to visit the admin portal at http://192.168.100.200. 
For more details, please visit https://github.com/vmware/harbor .


//configure the nodes to log in to the Harbor registry, using node1 as an example
[root@node1 ~]# vim /etc/docker/daemon.json "add your registry address to insecure-registries"
{
  "registry-mirrors": ["https://ye71id88.mirror.aliyuncs.com"],
  "insecure-registries":["192.168.100.200"]
}
[root@node1 ~]# systemctl restart docker.service 
[root@node1 ~]# docker login 192.168.100.200
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

//inspect the generated credential
[root@node1 ~]# ls -a "the credential file only exists after a successful login"
.docker
[root@node1 ~]# cd .docker
[root@node1 .docker]# ls
config.json
[root@node1 .docker]# cat config.json 
{
	"auths": {
		"192.168.100.200": {
			"auth": "YWRtaW46SGFyYm9yMTIzNDU="
		}
	},
	"HttpHeaders": {
		"User-Agent": "Docker-Client/19.03.13 (linux)"
	}
}
[root@node1 ~]# cat .docker/config.json |base64 -w 0
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjEwMC4yMDAiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTMgKGxpbnV4KSIKCX0KfQ==
"base64-encode the whole file; -w 0 disables line wrapping so the encoded string is printed as a single line (exit status 0 on success)"

//repeat the same steps on node2
...
[root@node2 ~]# cat .docker/config.json |base64 -w 0
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjEwMC4yMDAiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTMgKGxpbnV4KSIKCX0KfQ==
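To verify the stored credential, the inner auth field from config.json (not the outer encoding of the whole file) can be decoded back to user:password:

```shell
# decode the "auth" value from config.json back to user:password
echo 'YWRtaW46SGFyYm9yMTIzNDU=' | base64 -d
# prints: admin:Harbor12345
```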


//push and pull an image from a node to test that Harbor works
[root@node1 ~]# docker pull tomcat
Using default tag: latest
latest: Pulling from library/tomcat
Digest: sha256:1bab37d5d97bd8c74a474b2c1a62bbf1f1b4b62f151c8dcc472c7d577eb3479d
Status: Image is up to date for tomcat:latest
docker.io/library/tomcat:latest

[root@node1 ~]# docker tag tomcat 192.168.100.200/project/tomcat
[root@node1 ~]# docker push 192.168.100.200/project/tomcat
The push refers to repository [192.168.100.200/project/tomcat]
b654a29de9ee: Pushed 


//pulling the image without being logged in now fails
//root cause: the node is missing the registry credential
[root@node2 ~]# docker logout 192.168.100.200
Removing login credentials for 192.168.100.200
[root@node2 ~]# docker pull 192.168.100.200/project/tomcat
Using default tag: latest
Error response from daemon: pull access denied for 192.168.100.200/project/tomcat, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

//pull the tomcat image on the node
[root@node2 ~]# docker pull tomcat:8.0.52

//create the resources on the master
[root@master ~]# vim tomcat-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      containers:
      - name: my-tomcat
        image: docker.io/tomcat:8.0.52
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31111
  selector:
    app: my-tomcat
[root@master demo]# kubectl create -f tomcat-deployment.yaml 
deployment.extensions/my-tomcat created
service/my-tomcat created
[root@master demo]# kubectl get pods,deploy,svc
NAME                             READY   STATUS    RESTARTS   AGE
pod/apache-7f7d9c5d59-7cxc9      1/1     Running   1          12d
pod/my-tomcat-57667b9d9-t5g9b    1/1     Running   0          103s
pod/my-tomcat-57667b9d9-wvknc    1/1     Running   0          103s
pod/nginx-web-7747766c6b-5hbz9   1/1     Running   1          99m
pod/nginx-web-7747766c6b-jrtcx   1/1     Running   1          99m
pod/nginx-web-7747766c6b-q4qwd   1/1     Running   1          99m

NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/apache      1         1         1            1           12d
deployment.extensions/my-tomcat   2         2         2            2           103s
deployment.extensions/nginx-web   3         3         3            3           99m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP          13d
service/my-tomcat    NodePort    10.0.0.199   <none>        8080:31111/TCP   103s
[root@master demo]# 

  • Troubleshooting

    Pods stuck in Terminating that cannot be deleted

//how to handle Pods that are stuck in the Terminating state and cannot be deleted
[root@localhost demo]# kubectl get pods
NAME                              READY   STATUS        RESTARTS   AGE
my-tomcat-57667b9d9-nklvj         1/1     Terminating   0          10h
my-tomcat-57667b9d9-wllnp         1/1     Terminating   0          10h
//in this case you can use the force-delete command:
kubectl delete pod [pod name] --force --grace-period=0 -n [namespace]

[root@localhost demo]# kubectl delete pod my-tomcat-57667b9d9-nklvj --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-tomcat-57667b9d9-nklvj" force deleted

[root@localhost demo]# kubectl delete pod my-tomcat-57667b9d9-wllnp --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-tomcat-57667b9d9-wllnp" force deleted

[root@localhost demo]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
mypod                             1/1     Running   1          23h
nginx-7697996758-75shs            1/1     Running   1          2d21h
nginx-7697996758-b7tjw            1/1     Running   1          2d21h
nginx-7697996758-jddc5            1/1     Running   1          2d21h
nginx-deployment-d55b94fd-4px2w   1/1     Running   1          47h
nginx-deployment-d55b94fd-899hz   1/1     Running   1          47h
nginx-deployment-d55b94fd-d7fqn   1/1     Running   1          47h
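The two force-delete commands above can be batched: filter out the names of the Terminating Pods first, then feed them to kubectl delete. A local sketch of the filtering step, using sample output ($pods stands in for the real kubectl get pods output):

```shell
# sample "kubectl get pods" output (replace with the real command on a cluster)
pods='NAME                        READY   STATUS        RESTARTS   AGE
my-tomcat-57667b9d9-nklvj   1/1     Terminating   0          10h
my-tomcat-57667b9d9-wllnp   1/1     Terminating   0          10h
mypod                       1/1     Running       1          23h'

# keep only the names of Pods whose STATUS column is Terminating
echo "$pods" | awk '$3 == "Terminating" {print $1}'

# on a real cluster the whole cleanup would be one pipeline:
#   kubectl get pods | awk '$3=="Terminating"{print $1}' | \
#     xargs -r -n1 kubectl delete pod --force --grace-period=0
```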
  • Push the image from a node to the Harbor registry
[root@node2 ~]# docker tag tomcat:8.0.52 192.168.100.200/project/tomcat8
[root@node2 ~]# docker push 192.168.100.200/project/tomcat8
The push refers to repository [192.168.100.200/project/tomcat8]
fe9cde45f959: Pushed 
2ef8c178f6e1: Pushed 
ec7635afeee4: Pushed 
5525ae859b17: Pushed 
5e4834f80277: Pushed 
6e85077a6fde: Pushed 
88ceb290c2a1: Pushed 
f469346f8162: Pushed 
29783d2ef871: Pushed 
d7ed640784f1: Pushed 
1618a71a1198: Pushed 
latest: digest: sha256:f3cfaf433cb95dafca20143ba99943249ab830d0aca484c89ffa36cf2a9fb4c9 size: 2625
[root@node2 ~]# 


  • Create a Secret resource and pull from the private Harbor registry
//delete the previously created resources
kubectl delete -f tomcat-deployment.yaml
kubectl delete  svc my-tomcat
//create the Secret resource
"view the node's Harbor login credential"
[root@node1 ~]# cat .docker/config.json |base64 -w 0
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjEwMC4yMDAiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTMgKGxpbnV4KSIKCX0KfQ==

"create the Secret resource"
[root@master demo]# vim registry-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret    # name of the Secret resource
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjEwMC4yMDAiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTMgKGxpbnV4KSIKCX0KfQ==
type: kubernetes.io/dockerconfigjson

[root@master demo]# kubectl create -f registry-pull-secret.yaml
secret/registry-pull-secret created
[root@master demo]# kubectl get secret
NAME                   TYPE                                  DATA   AGE
default-token-mpxqj    kubernetes.io/service-account-token   3      13d
registry-pull-secret   kubernetes.io/dockerconfigjson        1      7s
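Instead of pasting the long base64 string into the manifest by hand, the Secret can be generated from the credential file itself. A local sketch, where demo-config.json is a hypothetical stand-in for the node's ~/.docker/config.json:

```shell
# stand-in for ~/.docker/config.json (hypothetical demo credential)
cat > demo-config.json <<'EOF'
{"auths":{"192.168.100.200":{"auth":"YWRtaW46SGFyYm9yMTIzNDU="}}}
EOF

# base64-encode the whole file on one line and wrap it in a Secret manifest
b64=$(base64 -w 0 < demo-config.json)
cat > registry-pull-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
data:
  .dockerconfigjson: ${b64}
type: kubernetes.io/dockerconfigjson
EOF

cat registry-pull-secret.yaml
```

kubectl also has a built-in shortcut that skips the manual encoding entirely: kubectl create secret docker-registry registry-pull-secret --docker-server=192.168.100.200 --docker-username=admin --docker-password=<password>.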

//edit tomcat-deployment.yaml to pull the image from the private registry
[root@master demo]# vim tomcat-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat-8
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      imagePullSecrets:                # credential used for pulling
      - name: registry-pull-secret     # must match the name of the Secret created above
      containers:
      - name: my-tomcat
        image: 192.168.100.200/project/tomcat8   # image address in the private registry
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service                # the Service
metadata:
  name: my-tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31111          # expose the fixed port 31111
  selector:
    app: my-tomcat           # selector must match the Pod labels

[root@master demo]# kubectl create -f tomcat-deployment.yaml 
deployment.extensions/my-tomcat created
service/my-tomcat created
[root@master demo]# kubectl get pods -w
NAME                           READY   STATUS    RESTARTS   AGE
my-tomcat-8-79d997db5d-jvpkb   1/1     Running   0          24s
my-tomcat-8-79d997db5d-s57b9   1/1     Running   0          24s
[root@master demo]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP          13d
my-tomcat    NodePort    10.0.0.199   <none>        8080:31111/TCP   30s

