1. Summarize how Pods resolve domain names through CoreDNS
CoreDNS does not query etcd directly. It resolves cluster names by calling the API server through the Service named kubernetes, and the API server in turn reads the Service and Pod records stored in etcd.
From the official documentation (DNS for Services and Pods | Kubernetes):
Kubernetes creates DNS records for Services and Pods, so you can reach a Service by a consistent DNS name instead of its IP address. The kubelet writes each Pod's DNS configuration into /etc/resolv.conf, so that running containers can look up Services by name rather than by IP.
Services defined in the cluster are assigned DNS names. By default, a client Pod's DNS search list includes the Pod's own namespace and the cluster's default domain.
Namespaces of Services
A DNS query may return different results depending on the namespace of the Pod making it. A DNS query that does not specify a namespace is limited to the Pod's own namespace. To reach a Service in another namespace, name that namespace in the DNS query.
For example, suppose there is a Pod in namespace test and a Service named data in namespace prod. A query for data from the Pod returns no result, because it uses the Pod's namespace, test. A query for data.prod returns the expected result, because the query specifies the namespace.
DNS queries can be expanded using the Pod's /etc/resolv.conf, which the kubelet configures for every Pod. For example, a query for data may be expanded to data.test.svc.cluster.local. The search option lists local domains, and its values are used to expand queries. For more on DNS query behavior, see the resolv.conf man page.
nameserver 10.32.0.10
search <namespace>.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
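With ndots:5, a name containing fewer than five dots is first tried against each search domain in turn. This is easy to observe from a throwaway test Pod (pod name and image are illustrative):
kubectl run dns-test --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec dns-test -- cat /etc/resolv.conf
kubectl exec dns-test -- nslookup kubernetes.default   # expanded via the search list, answered with the Service ClusterIP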
Pod A and AAAA records are not created from Pod names, so a Pod only gets its own A or AAAA record if hostname is set. A Pod that sets subdomain but not hostname only causes an A or AAAA record to be created for the headless Service (busybox-subdomain.my-namespace.svc.cluster-domain.example), pointing at the Pod's IP address. Also, unless publishNotReadyAddresses=True is set on the Service, a record exists only once the Pod is ready.
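A minimal sketch of the hostname/subdomain mechanism (image and names are illustrative; the headless Service name must equal the Pod's subdomain):
apiVersion: v1
kind: Service
metadata:
  name: busybox-subdomain
spec:
  clusterIP: None          # headless Service
  selector:
    name: busybox
  ports:
  - port: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1                # enables the per-Pod A/AAAA record
  subdomain: busybox-subdomain       # must match the headless Service name
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
With both fields set, the Pod is resolvable as busybox-1.busybox-subdomain.<namespace>.svc.cluster.local.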
2. Summarize the use of the RC, RS, and Deployment controllers
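In brief: ReplicationController (RC) is the legacy replica controller and only supports equality-based selectors; ReplicaSet (RS) is its successor and adds set-based selectors (in, notin, exists); a Deployment manages ReplicaSets on your behalf and adds declarative rolling updates and rollback, so it is what you normally use directly. Typical rollout operations, reusing names from the manifests below (illustrative):
kubectl scale deployment nginx-deployment --replicas=3
kubectl set image deployment/nginx-deployment ng-deploy-80=nginx:1.21.0
kubectl rollout status deployment/nginx-deployment
kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment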
3. Summarize the access flow of a NodePort Service (with a diagram)
A NodePort Service is reachable from hosts outside the cluster: clients connect to the IP address of any cluster node on the nodePort, and kube-proxy forwards the traffic to the Service and on to a backend Pod.
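A rough text sketch of the traffic path (assuming kube-proxy in iptables or IPVS mode):
external client
      |
      v
nodeIP:nodePort (any node, e.g. 30017)
      |  kube-proxy DNAT rules
      v
Service ClusterIP:port
      |  endpoint selection
      v
Pod IP:targetPort (may live on a different node)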
4. Master mounting NFS volumes in Pods
Deploy an NFS server on host 172.31.7.109 and create the directories /data/k8sdata/pool1 and /data/k8sdata/pool2.
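A minimal server-side setup might look like this (export options assumed to match the static-PV example in section 5):
mkdir -p /data/k8sdata/pool1 /data/k8sdata/pool2
vim /etc/exports
/data/k8sdata/pool1 *(rw,no_root_squash)
/data/k8sdata/pool2 *(rw,no_root_squash)
systemctl restart nfs-server && systemctl enable nfs-server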
vim 2-deploy_nfs.yml
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment-site2
spec:
replicas: 1
selector:
matchLabels:
app: ng-deploy-81
template:
metadata:
labels:
app: ng-deploy-81
spec:
containers:
- name: ng-deploy-81
image: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: /usr/share/nginx/html/pool1
name: my-nfs-volume-pool1
- mountPath: /usr/share/nginx/html/pool2
name: my-nfs-volume-pool2
- mountPath: /etc/localtime
name: timefile
volumes:
- name: my-nfs-volume-pool1
nfs:
server: 172.31.7.109
path: /data/k8sdata/pool1
- name: my-nfs-volume-pool2
nfs:
server: 172.31.7.109
path: /data/k8sdata/pool2
- name: timefile
hostPath:
path: /etc/localtime
---
apiVersion: v1
kind: Service
metadata:
name: ng-deploy-81
spec:
ports:
- name: http
port: 80
targetPort: 80
nodePort: 30017
protocol: TCP
type: NodePort
selector:
app: ng-deploy-81
kubectl apply -f 2-deploy_nfs.yml
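To verify, exec into the Pod and confirm both NFS exports are mounted, then fetch the site through the NodePort (node IP is a placeholder):
kubectl get pod -l app=ng-deploy-81
kubectl exec deploy/nginx-deployment-site2 -- df -h | grep pool
curl http://<any-node-ip>:30017/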
5. Summarize static PV/PVC provisioning backed by NFS
~# mkdir /data/k8sdata/myserver/myappdata -p
~# vim /etc/exports
/data/k8sdata/myserver/myappdata *(rw,no_root_squash)
~# systemctl restart nfs-server && systemctl enable nfs-server
apiVersion: v1
kind: PersistentVolume
metadata:
name: myserver-myapp-static-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
nfs:
path: /data/k8sdata/myserver/myappdata
server: 172.31.7.109
case8-pv-static# kubectl apply -f 1-myapp-persistentvolume.yaml
persistentvolume/myserver-myapp-static-pv created
case8-pv-static# kubectl get pv
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
myserver-myapp-static-pv   10Gi       RWO            Retain           Available                                   9m25s
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myserver-myapp-static-pvc
namespace: myserver
spec:
volumeName: myserver-myapp-static-pv
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
case8-pv-static# kubectl apply -f 2-myapp-persistentvolumeclaim.yaml
persistentvolumeclaim/myserver-myapp-static-pvc created
case8-pv-static# kubectl get pvc -n myserver
NAME                        STATUS   VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myserver-myapp-static-pvc   Bound    myserver-myapp-static-pv   10Gi       RWO                           96s
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: myserver-myapp
name: myserver-myapp-deployment-name
namespace: myserver
spec:
replicas: 1
selector:
matchLabels:
app: myserver-myapp-frontend
template:
metadata:
labels:
app: myserver-myapp-frontend
spec:
containers:
- name: myserver-myapp-container
image: nginx:1.20.0
#imagePullPolicy: Always
volumeMounts:
- mountPath: "/usr/share/nginx/html/statics"
name: statics-datadir
volumes:
- name: statics-datadir
persistentVolumeClaim:
claimName: myserver-myapp-static-pvc
---
kind: Service
apiVersion: v1
metadata:
labels:
app: myserver-myapp-service
name: myserver-myapp-service-name
namespace: myserver
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
nodePort: 30009
selector:
app: myserver-myapp-frontend
case8-pv-static# kubectl apply -f 3-myapp-webserver.yaml
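To confirm the Pod serves data from the NFS-backed PV, drop a test file into the export on the NFS server and fetch it through the NodePort (node IP is a placeholder):
echo "static pv test" > /data/k8sdata/myserver/myappdata/index.html   # run on 172.31.7.109
curl http://<any-node-ip>:30009/statics/index.html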
6. Summarize dynamic PVC provisioning for a Deployment, using NFS and a StorageClass
1. Create a cluster account and grant it the permissions it needs to act in the cluster, such as creating PVs:
apiVersion: v1
kind: Namespace
metadata:
name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
kubectl apply -f 1-rbac.yaml
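Optionally confirm the RBAC objects exist before moving on:
kubectl -n nfs get serviceaccount nfs-client-provisioner
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get clusterrolebinding run-nfs-client-provisioner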
2. Create the StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the deployment's env PROVISIONER_NAME
reclaimPolicy: Retain # deletion policy for provisioned PVs; the default Delete removes the data on the NFS server as soon as the PV is deleted
mountOptions:
  #- vers=4.1 # some mount parameters misbehave under containerd
  #- noresvport # tell the NFS client to use a new TCP source port when re-establishing the network connection
  - noatime # do not update inode access times on reads; improves performance under high concurrency
parameters:
  #mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true" # archive the volume's data when the PVC is deleted; with the default "false" the data is deleted
kubectl apply -f 2-storageclass.yaml
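Check that the StorageClass was created:
kubectl get storageclass managed-nfs-storage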
3. Create the NFS provisioner, which supplies the backing NFS storage for the StorageClass so that a PVC referencing the StorageClass can have a PV created automatically:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs
spec:
replicas: 1
  strategy: # deployment strategy
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
      serviceAccountName: nfs-client-provisioner # run the Pod under the dedicated account, which has permission to create PVs
containers:
- name: nfs-client-provisioner
#image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: 172.31.7.109
- name: NFS_PATH
value: /data/volumes
volumes:
- name: nfs-client-root
nfs:
server: 172.31.7.109
path: /data/volumes
kubectl apply -f 3-nfs-provisioner.yaml
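Make sure the provisioner Pod is running before creating any PVCs:
kubectl -n nfs get pod -l app=nfs-client-provisioner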
4. Create the PVC; the PVC references the StorageClass, and the StorageClass automatically creates and binds a PV:
# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myserver-myapp-dynamic-pvc
namespace: myserver
spec:
  storageClassName: managed-nfs-storage # name of the StorageClass to use
accessModes:
  - ReadWriteMany # access mode
resources:
requests:
      storage: 500Mi # requested capacity
kubectl apply -f 4-create-pvc.yaml
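The claim should reach Bound within a few seconds, with an automatically created PV behind it:
kubectl -n myserver get pvc myserver-myapp-dynamic-pvc
kubectl get pv | grep myserver-myapp-dynamic-pvc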
5. Create the Deployment:
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: myserver-myapp
name: myserver-myapp-deployment-name
namespace: myserver
spec:
replicas: 1
selector:
matchLabels:
app: myserver-myapp-frontend
template:
metadata:
labels:
app: myserver-myapp-frontend
spec:
containers:
- name: myserver-myapp-container
image: nginx:1.20.0
#imagePullPolicy: Always
volumeMounts:
- mountPath: "/usr/share/nginx/html/statics"
name: statics-datadir
volumes:
- name: statics-datadir
persistentVolumeClaim:
claimName: myserver-myapp-dynamic-pvc
---
kind: Service
apiVersion: v1
metadata:
labels:
app: myserver-myapp-service
name: myserver-myapp-service-name
namespace: myserver
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
nodePort: 30010
selector:
app: myserver-myapp-frontend
kubectl apply -f 5-myapp-webserver.yaml
Verify on the NFS storage server that the volume directory was created.
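The provisioner creates one directory per volume under the export, named ${namespace}-${pvcName}-${pvName}; with archiveOnDelete: "true" it is renamed with an archived- prefix instead of being removed when the PVC is deleted:
ls /data/volumes/   # run on 172.31.7.109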
7. Mount configuration files and inject environment variables into Pods with ConfigMaps
Create a ConfigMap that provides the Pod's configuration files:
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
data:
mysite: |
server {
listen 80;
server_name www.mysite.com;
index index.html index.php index.htm;
location / {
root /data/nginx/mysite;
if (!-e $request_filename) {
rewrite ^/(.*) /index.html last;
}
}
}
myserver: |
server {
listen 80;
server_name www.myserver.com;
index index.html index.php index.htm;
location / {
root /data/nginx/myserver;
if (!-e $request_filename) {
rewrite ^/(.*) /index.html last;
}
}
}
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
app: ng-deploy-80
template:
metadata:
labels:
app: ng-deploy-80
spec:
containers:
- name: ng-deploy-80
image: nginx:1.20.0
ports:
- containerPort: 80
volumeMounts:
- mountPath: /data/nginx/mysite
name: nginx-mysite-statics
- mountPath: /data/nginx/myserver
name: nginx-myserver-statics
- name: nginx-mysite-config
mountPath: /etc/nginx/conf.d/mysite/
- name: nginx-myserver-config
mountPath: /etc/nginx/conf.d/myserver/
volumes:
- name: nginx-mysite-config
configMap:
name: nginx-config
items:
- key: mysite
path: mysite.conf
- name: nginx-myserver-config
configMap:
name: nginx-config
items:
- key: myserver
path: myserver.conf
- name: nginx-myserver-statics
nfs:
server: 172.31.7.109
path: /data/k8sdata/myserver
- name: nginx-mysite-statics
nfs:
server: 172.31.7.109
path: /data/k8sdata/mysite
---
apiVersion: v1
kind: Service
metadata:
name: ng-deploy-80
spec:
ports:
- name: http
port: 81
targetPort: 80
nodePort: 30019
protocol: TCP
type: NodePort
selector:
app: ng-deploy-80
kubectl apply -f 1-deploy_configmap.yml
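Check that each ConfigMap key was rendered as a file at its mount point:
kubectl exec deploy/nginx-deployment -- ls /etc/nginx/conf.d/mysite/ /etc/nginx/conf.d/myserver/
kubectl exec deploy/nginx-deployment -- cat /etc/nginx/conf.d/mysite/mysite.conf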
Create a ConfigMap that provides the Pod's environment variables:
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
data:
host: "172.31.7.189"
username: "user1"
password: "12345678"
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
app: ng-deploy-80
template:
metadata:
labels:
app: ng-deploy-80
spec:
containers:
- name: ng-deploy-80
image: nginx
env:
- name: HOST
valueFrom:
configMapKeyRef:
name: nginx-config
key: host
- name: USERNAME
valueFrom:
configMapKeyRef:
name: nginx-config
key: username
- name: PASSWORD
valueFrom:
configMapKeyRef:
name: nginx-config
key: password
######
- name: "MySQLPass"
value: "123456"
ports:
- containerPort: 80
kubectl apply -f 2-deploy_configmap_env.yml
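Verify the variables inside a running container:
kubectl exec deploy/nginx-deployment -- env | grep -E 'HOST|USERNAME|PASSWORD|MySQLPass'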
8. Summarize Secrets and their common types; implement Nginx TLS based on a Secret
A Secret stores small amounts of sensitive data (base64-encoded by default, not encrypted) so it does not have to be baked into images or Pod specs. Common types: Opaque (arbitrary key/value data), kubernetes.io/tls (a certificate and key, used below), kubernetes.io/dockerconfigjson (registry credentials, used in section 9), kubernetes.io/service-account-token, and kubernetes.io/basic-auth.
Generate a self-signed certificate:
mkdir certs
cd certs/
openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3560 -nodes -subj '/CN=www.ca.com'
openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=www.mysite.com'
openssl x509 -req -sha256 -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
kubectl create secret tls myserver-tls-key --cert=./server.crt --key=./server.key -n myserver
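A tls-type Secret always exposes the pair under the fixed keys tls.crt and tls.key, which is why the nginx configuration below references those file names:
kubectl -n myserver get secret myserver-tls-key
kubectl -n myserver describe secret myserver-tls-key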
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
namespace: myserver
data:
default: |
server {
listen 80;
server_name www.mysite.com;
listen 443 ssl;
ssl_certificate /etc/nginx/conf.d/certs/tls.crt;
ssl_certificate_key /etc/nginx/conf.d/certs/tls.key;
location / {
root /usr/share/nginx/html;
index index.html;
        if ($scheme = http) { # without this condition the permanent redirect would loop forever
rewrite / https://www.mysite.com permanent;
}
if (!-e $request_filename) {
rewrite ^/(.*) /index.html last;
}
}
}
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: myserver-myapp-frontend-deployment
namespace: myserver
spec:
replicas: 1
selector:
matchLabels:
app: myserver-myapp-frontend
template:
metadata:
labels:
app: myserver-myapp-frontend
spec:
containers:
- name: myserver-myapp-frontend
image: nginx:1.20.2-alpine
ports:
- containerPort: 80
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/conf.d/myserver
- name: myserver-tls-key
mountPath: /etc/nginx/conf.d/certs
volumes:
- name: nginx-config
configMap:
name: nginx-config
items:
- key: default
path: mysite.conf
- name: myserver-tls-key
secret:
secretName: myserver-tls-key
---
apiVersion: v1
kind: Service
metadata:
name: myserver-myapp-frontend
namespace: myserver
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
nodePort: 30018
protocol: TCP
  - name: https
port: 443
targetPort: 443
nodePort: 30029
protocol: TCP
selector:
app: myserver-myapp-frontend
case11-secret# kubectl apply -f 4-secret-tls.yaml
~# cat /etc/haproxy/haproxy.cfg
listen myserer-nginx-80
bind 172.31.7.189:80
mode tcp
server 172.31.7.111 172.31.7.111:30018 check inter 3s fall 3 rise 3
listen myserer-nginx-443
bind 172.31.7.189:443
mode tcp
  server 172.31.7.111 172.31.7.111:30029 check inter 3s fall 3 rise 3
Configure hosts-file resolution on the client, then exec into the Pod to verify that the configuration file and certificates are in place. Edit the main nginx configuration, because the official image does not load custom configuration from subdirectories by default:
vi /etc/nginx/nginx.conf
include /etc/nginx/conf.d/*/*.conf;   # add inside the http {} block
nginx -s reload
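From a client with the hosts entry in place, the redirect and the TLS endpoint can be tested against the HAProxy VIP (-k skips verification of the self-signed certificate):
echo "172.31.7.189 www.mysite.com" >> /etc/hosts
curl -ikL http://www.mysite.com/
curl -ik https://www.mysite.com/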
9. Use a Secret to authenticate image pulls from a private registry
nerdctl login --username=rooroot@aliyun.com registry.cn-qingdao.aliyuncs.com
kubectl create secret generic aliyun-registry-image-pull-key \
  --from-file=.dockerconfigjson=/root/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson \
  -n myserver
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: myserver-myapp-frontend-deployment
namespace: myserver
spec:
replicas: 1
selector:
matchLabels:
app: myserver-myapp-frontend
template:
metadata:
labels:
app: myserver-myapp-frontend
spec:
containers:
- name: myserver-myapp-frontend
image: harbor.magedu.net/baseimages/nginx:1.16.1-alpine-perl
imagePullPolicy: Always
ports:
- containerPort: 80
imagePullSecrets:
- name: aliyun-registry-image-pull-key
---
apiVersion: v1
kind: Service
metadata:
name: myserver-myapp-frontend
namespace: myserver
spec:
ports:
- name: http
port: 80
targetPort: 80
nodePort: 30018
protocol: TCP
type: NodePort
selector:
app: myserver-myapp-frontend
kubectl apply -f 5-secret-imagePull.yaml
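If the pull succeeds, the Pod events show the image being fetched with the credentials from the Secret:
kubectl -n myserver get pod -l app=myserver-myapp-frontend
kubectl -n myserver describe pod -l app=myserver-myapp-frontend | grep -A5 Events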
Troubleshooting: pulling an image from the private registry fails:
Failed to pull image "192.168.220.102:80/baseimages/alpine": rpc error: code = Unknown desc = failed to pull and unpack image "192.168.220.102:80/baseimages/alpine:latest": failed to resolve reference "192.168.220.102:80/baseimages/alpine:latest": failed to do request: Head "https://192.168.220.102:80/v2/baseimages/alpine/manifests/latest": http: server gave HTTP response to HTTPS client
Kubernetes pulls images through containerd (the same endpoint crictl talks to). To use a plain-HTTP registry, open the containerd configuration file and declare the registry as HTTP/insecure by adding the following:
vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.magedu.net:443".tls]
insecure_skip_verify = true
[plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.magedu.net".tls]
insecure_skip_verify = true
[plugins."io.containerd.grpc.v1.cri".registry.headers]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.magedu.net"]
endpoint = ["http://harbor.magedu.net"]
systemctl restart containerd.service
Test pulling an image:
crictl pull harbor.magedu.net/baseimages/pause:3.9
10. Summarize the characteristics and usage of StatefulSet and DaemonSet
A StatefulSet gives each replica a stable, ordered identity (pod-0, pod-1, ...) and a stable DNS name through its headless Service, which suits stateful software such as databases and ZooKeeper. A DaemonSet runs exactly one Pod on every matching node and is typically used for node-level agents such as log collectors and monitoring exporters.
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: myserver-myapp
namespace: myserver
spec:
replicas: 3
serviceName: "myserver-myapp-service"
selector:
matchLabels:
app: myserver-myapp-frontend
template:
metadata:
labels:
app: myserver-myapp-frontend
spec:
containers:
- name: myserver-myapp-frontend
#image: registry.cn-qingdao.aliyuncs.com/zhangshijie/zookeeper:v3.4.14
image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: myserver-myapp-service
namespace: myserver
spec:
clusterIP: None
ports:
- name: http
port: 80
selector:
app: myserver-myapp-frontend
kubectl apply -f 1-Statefulset.yaml
kubectl get pod -n myserver
NAME               READY   STATUS    RESTARTS   AGE
myserver-myapp-0   1/1     Running   0          4m57s
myserver-myapp-1   1/1     Running   0          4m56s
myserver-myapp-2   1/1     Running   0          4m54s
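Each replica is also reachable at a stable per-Pod record of the form <pod>.<serviceName>.<namespace>.svc.cluster.local, courtesy of the headless Service (clusterIP: None). Assuming the image ships an nslookup binary, this can be checked from inside a replica:
kubectl -n myserver exec myserver-myapp-0 -- nslookup myserver-myapp-1.myserver-myapp-service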
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: myserver-myapp
namespace: myserver
spec:
selector:
matchLabels:
app: myserver-myapp-frontend
template:
metadata:
labels:
app: myserver-myapp-frontend
spec:
tolerations:
# this toleration is to have the daemonset runnable on master nodes
# remove it if your masters can't run pods
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
hostNetwork: true
hostPID: true
containers:
- name: myserver-myapp-frontend
image: nginx:1.20.2-alpine
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: myserver-myapp-frontend
namespace: myserver
spec:
ports:
- name: http
port: 80
targetPort: 80
nodePort: 30018
protocol: TCP
type: NodePort
selector:
app: myserver-myapp-frontend
kubectl apply -f 1-DaemonSet-webserver.yaml
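The DaemonSet should place exactly one Pod on every schedulable node, and because of hostNetwork: true each Pod reports its node's own IP:
kubectl -n myserver get daemonset
kubectl -n myserver get pod -o wide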