How a Pod resolves domain names through CoreDNS
In Kubernetes, DNS resolution inside a Pod is driven by the container's /etc/resolv.conf file:
/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.10.0.2
options ndots:5
The nameserver in this file is the CLUSTER-IP of the kube-dns Service:
root@k8s-deploy:~# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.10.0.2    <none>        53/UDP,53/TCP,9153/TCP   12d
So every domain name a Pod resolves, whether it is a Kubernetes-internal name or an external one, goes through the kube-dns CLUSTER-IP.
In Kubernetes, the fully qualified domain name of a Service follows the pattern servicename.namespace.svc.cluster.local, where servicename is the name of the Service in the cluster.
For example, when we access a Service named nginx-svc:
Inside the container, resolution follows /etc/resolv.conf: queries are sent to nameserver 10.10.0.2, and the name nginx-svc is tried against each search domain in /etc/resolv.conf in turn until a matching record is found:
nginx-svc.default.svc.cluster.local
nginx-svc.svc.cluster.local
nginx-svc.cluster.local
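The expansion can be observed from inside any Pod that has nslookup available; the throwaway busybox pod below is only an illustration:
#run a temporary test pod (image and pod name are examples)
kubectl run dns-test --image=busybox:1.28 --restart=Never -- sleep 3600
#the short name is expanded through the search list and resolves to the Service ClusterIP
kubectl exec dns-test -- nslookup nginx-svc
#the fully qualified name resolves to the same address
kubectl exec dns-test -- nslookup nginx-svc.default.svc.cluster.local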
Summary: using the RC, RS, and Deployment controllers
Replication Controller: replication controller (selector supports = and !=) #first-generation pod replica controller
A ReplicationController ensures that a specified number of Pod replicas are running at all times; in other words, it keeps a Pod, or a homogeneous set of Pods, always available.
https://kubernetes.io/zh/docs/concepts/workloads/controllers/replicationcontroller/
https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/labels/
ReplicaSet: replica controller; it differs from ReplicationController in its richer selector support (in and notin are also supported) #second-generation pod replica controller
Deployment: the controller used in the later examples of this document; it manages ReplicaSets and adds rolling updates and rollback on top of them #third-generation pod replica controller
Example: ReplicationController
apiVersion: v1
kind: ReplicationController
metadata:
  name: ng-rc
spec:
  replicas: 2
  selector:
    app: ng-rc-80
    #app1: ng-rc-81
  template:
    metadata:
      labels:
        app: ng-rc-80
        #app1: ng-rc-81
    spec:
      containers:
      - name: ng-rc-80
        image: nginx
        ports:
        - containerPort: 80
Example: ReplicaSet
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ng-rs-80
    #matchExpressions:
    #  - {key: app, operator: In, values: [ng-rs-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-rs-80
    spec:
      containers:
      - name: ng-rs-80
        image: nginx
        ports:
        - containerPort: 80
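Both controllers reconcile the replica count continuously. A quick way to see this with the ReplicaSet above (the file and pod names are examples and must be substituted) is to delete a pod and watch a replacement appear:
kubectl apply -f rs.yaml                #file name is an example
kubectl get rs frontend                 #DESIRED and CURRENT should both read 2
kubectl delete pod <one-frontend-pod>   #replace with an actual pod name
kubectl get pods -l app=ng-rs-80        #a new pod is created automatically to restore the count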
Summary: access flow of a NodePort-type Service
The access flow is shown in the diagram below:
Explanation: the client passes through the load balancer (LB) to a port on a node; on the node the traffic is forwarded to the Service IP and port, and the Service forwards it on to a Pod.
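A minimal way to trace this path, assuming the NodePort Service ng-deploy-80 defined in the NFS example below and a reachable node IP (placeholder):
kubectl get svc ng-deploy-80        #PORT(S) shows the mapping, e.g. 81:30016/TCP
kubectl get endpoints ng-deploy-80  #the Pod IP:port the Service forwards to
curl http://<node-ip>:30016/        #node port -> Service -> Pod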
Mastering the use of NFS mounts in Pods
An nfs volume mounts an existing NFS (Network File System) export into the container. Unlike emptyDir, the data is not lost: when the Pod is deleted, the contents of the nfs volume are preserved and the volume is merely unmounted. This means data can be uploaded to the NFS export in advance and used as soon as the Pod starts, and the same data can be shared across multiple Pods, i.e. an NFS export can be mounted and read/written by several Pods at the same time.
Example:
cat 1-deploy_nfs.yml
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/mysite
          name: my-nfs-volume
      volumes:
      - name: my-nfs-volume
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata
---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30016
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
cat 2-deploy_nfs.yml
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-site2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-81
  template:
    metadata:
      labels:
        app: ng-deploy-81
    spec:
      containers:
      - name: ng-deploy-81
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/pool1
          name: my-nfs-volume-pool1
        - mountPath: /usr/share/nginx/html/pool2
          name: my-nfs-volume-pool2
        - mountPath: /etc/localtime
          name: timefile
      volumes:
      - name: my-nfs-volume-pool1
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/pool1
      - name: my-nfs-volume-pool2
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/pool2
      - name: timefile
        hostPath:
          path: /etc/localtime
---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-81
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30017
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-81
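After applying both manifests, the NFS mounts can be checked from inside one of the Pods; the pod name below is a placeholder:
kubectl apply -f 1-deploy_nfs.yml -f 2-deploy_nfs.yml
kubectl get pods -l app=ng-deploy-81
kubectl exec -it <nginx-deployment-site2-pod> -- df -h | grep 172.31.7.109   #both NFS exports should show up as mounts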
Summary: static PV and PVC backed by NFS
static: static storage volumes. The PV must be created manually before use; a PVC is then created and bound to that PV, and finally mounted into the Pod. This suits scenarios where the PVs and PVCs are relatively fixed.
Volume - static storage volume example:
#Create the NFS export
mkdir /data/k8sdata/myserver/myappdata -p
vim /etc/exports
/data/k8sdata/myserver/myappdata *(rw,no_root_squash)
systemctl restart nfs-server && systemctl enable nfs-server
#Verify NFS
root@k8s-deploy:~/k8s-Resource-N79/case8-pv-static# showmount -e 10.0.0.200
Export list for 10.0.0.200:
/data/k8sdata/myserver/myappdata *
#Create the PV
vim create-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myserver-static-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /data/k8sdata/myserver/myappdata
    server: 10.0.0.200
#Apply it
kubectl apply -f create-pv.yaml
#Create the PVC
vim create-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myserver-static-pvc
  namespace: myserver
spec:
  volumeName: myserver-static-pv
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1.5Gi
#Apply it
kubectl apply -f create-pvc.yaml
#Create a test pod
vim create-myaap.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-dp-80
  template:
    metadata:
      labels:
        app: ng-dp-80
    spec:
      containers:
      - name: ng-container
        image: nginx:1.20.0
        volumeMounts:
        - mountPath: "/usr/share/nginx/html/statics"
          name: static-dir
      volumes:
      - name: static-dir
        persistentVolumeClaim:
          claimName: myserver-static-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30050
    protocol: TCP
  selector:
    app: ng-dp-80
#Apply
kubectl apply -f create-myaap.yaml
#Create a test page
echo 123 > /data/k8sdata/myserver/myappdata/index.html
Test access:
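A quick way to confirm the binding and fetch the test page (the node IP is a placeholder):
kubectl get pv myserver-static-pv                #STATUS should be Bound
kubectl get pvc myserver-static-pvc -n myserver  #bound to myserver-static-pv
curl http://<node-ip>:30050/statics/index.html   #should return 123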
Summary: dynamic PVC backed by NFS and a StorageClass
https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
dynamic: dynamic storage volumes. A StorageClass is created first; later, when a Pod requests storage through a PVC, the StorageClass provisions the backing PV dynamically. This suits stateful clustered services such as a MySQL primary with replicas, a ZooKeeper cluster, etc.
Volume - dynamic storage volume example:
#Create the namespace, service account, and RBAC rules
vim 1-rbac.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
#Apply
kubectl apply -f 1-rbac.yaml
#Create the StorageClass
vim 2-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the deployment's PROVISIONER_NAME env value
reclaimPolicy: Retain # reclaim policy for provisioned PVs; the default Delete removes the data on the NFS server as soon as the PV is deleted
mountOptions:
  #- vers=4.1 # some options misbehave with containerd
  - noresvport # tell the NFS client to use a new TCP source port when re-establishing the network connection
  - noatime # do not update the inode access timestamp when files are read; improves performance under high concurrency
parameters:
  #mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true" # archive (keep) the data when the claim is deleted; with the default "false" the data is not kept
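The StorageClass manifest also has to be applied before the provisioner and PVC can reference it (the apply step is implied by the surrounding files):
#Apply and verify
kubectl apply -f 2-storageclass.yaml
kubectl get storageclass managed-nfs-storage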
#Create the NFS provisioner
vim 3-nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy: # deployment strategy
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
        image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 10.0.0.200
        - name: NFS_PATH
          value: /data/volumes
      volumes:
      - name: nfs-client-root
        nfs:
          server: 10.0.0.200
          path: /data/volumes
#Apply
kubectl apply -f 3-nfs-provisioner.yaml
#Create the PVC
vim 4-create-pvc.yaml
# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myserver
spec:
  storageClassName: managed-nfs-storage # name of the StorageClass to use
  accessModes:
    - ReadWriteMany # access mode
  resources:
    requests:
      storage: 500Mi # requested size
#Apply
kubectl apply -f 4-create-pvc.yaml
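If provisioning works, a PV is created automatically and the claim binds to it:
kubectl get pvc myserver-myapp-dynamic-pvc -n myserver   #STATUS should be Bound
kubectl get pv                                           #shows the dynamically provisioned PV
ls /data/volumes/                                        #on the NFS server: one directory per provisioned PV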
#Create the web service
vim 5-myapp-webserver.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-container
        image: nginx:1.20.0
        #imagePullPolicy: Always
        volumeMounts:
        - mountPath: "/usr/share/nginx/html/statics"
          name: statics-datadir
      volumes:
      - name: statics-datadir
        persistentVolumeClaim:
          claimName: myserver-myapp-dynamic-pvc
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30010
  selector:
    app: myserver-myapp-frontend
#Apply
kubectl apply -f 5-myapp-webserver.yaml
Test access:
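With this provisioner the directory created on the NFS server is typically named <namespace>-<pvcName>-<pvName>, so look it up before dropping in a test page; the paths and node IP below are placeholders:
echo dynamic-test > /data/volumes/<provisioned-dir>/index.html   #on the NFS server
curl http://<node-ip>:30010/statics/index.html                   #should return dynamic-test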
Pods: mounting configuration files and injecting environment variables from a ConfigMap
A ConfigMap decouples non-confidential information (such as configuration) from the container image: the configuration is stored in a ConfigMap object and then mounted into the Pod as a volume (or injected as environment variables), which is how the configuration is imported into the Pod.
Use cases:
Provide configuration files to the containers in a Pod; the files are consumed as volume mounts.
Define global environment variables for a Pod.
Pass command-line parameters to a Pod, e.g. the username and password for mysql -u -p can be supplied through a ConfigMap.
Notes:
A ConfigMap must exist before the Pods that use it are created.
A Pod can only use ConfigMaps in its own namespace; a ConfigMap cannot be used across namespaces.
ConfigMaps are meant for non-sensitive, unencrypted configuration.
A ConfigMap usually holds less than 1MB of configuration data.
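Besides YAML manifests, small ConfigMaps can also be created directly from literals or files; the names below are only illustrative:
kubectl create configmap app-config --from-literal=host=10.0.0.199 --from-literal=username=wang -n myserver
kubectl create configmap nginx-conf --from-file=mysite.conf -n myserver
kubectl get configmap app-config -n myserver -o yaml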
Example:
#Configure NFS
mkdir -p /data/k8sdata/mysite
mkdir -p /data/k8sdata/myserver
vim /etc/exports
/data/k8sdata/mysite *(rw,no_root_squash)
/data/k8sdata/myserver *(rw,no_root_squash)
#Restart NFS
systemctl restart nfs-server.service
#Check NFS
root@k8s-deploy:~/k8s-Resource-N79/case10-configmap# showmount -e
Export list for k8s-deploy.canghailyt.com:
/data/k8sdata/myserver *
/data/k8sdata/mysite *
/data/volumes *
/data/k8sdata/myserver/myappdata *
/data/k8sdata/kuboard *
#Edit the ConfigMap manifest
vim nginx-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: myserver
data:
  myserver: |
    server {
      listen 80;
      server_name www.myserver.com;
      index index.html index.php index.htm;
      location / {
        root /data/nginx/myserver;
        if (!-e $request_filename) {
          rewrite ^/(.*) /index.html last;
        }
      }
    }
  mysite: |
    server {
      listen 80;
      server_name www.mysite.com;
      index index.html index.php index.htm;
      location / {
        root /data/nginx/mysite;
        if (!-e $request_filename) {
          rewrite ^/(.*) /index.html last;
        }
      }
    }
  host: "10.0.0.199"
  username: "wang"
  passwd: "123456"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-80
  template:
    metadata:
      labels:
        app: nginx-80
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.20.0
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/nginx/mysite
          name: nginx-mysite-web
        - mountPath: /data/nginx/myserver
          name: nginx-myserver-web
        - mountPath: /etc/nginx/conf.d/mysite
          name: config-mysite
        - mountPath: /etc/nginx/conf.d/myserver
          name: config-myserver
        env:
        - name: HOST
          valueFrom:
            configMapKeyRef:
              name: nginx-config
              key: host
        - name: USER
          valueFrom:
            configMapKeyRef:
              name: nginx-config
              key: username
        - name: PASSWORD
          valueFrom:
            configMapKeyRef:
              name: nginx-config
              key: passwd
        - name: "MYSQLPASS"
          value: "123456"
      volumes:
      - name: config-myserver
        configMap:
          name: nginx-config
          items:
          - key: myserver
            path: myserver.conf
      - name: config-mysite
        configMap:
          name: nginx-config
          items:
          - key: mysite
            path: mysite.conf
      - name: nginx-myserver-web
        nfs:
          server: 10.0.0.200
          path: /data/k8sdata/myserver
      - name: nginx-mysite-web
        nfs:
          server: 10.0.0.200
          path: /data/k8sdata/mysite
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: myserver
spec:
  type: NodePort
  selector:
    app: nginx-80
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30060
    protocol: TCP
#Exec into the pod and check the environment variables
kubectl exec -it nginx-deployment-6d8bfbdbb5-qdjng bash -n myserver
root@nginx-deployment-6d8bfbdbb5-qdjng:/# env | egrep "\b(HOST|USER|MYSQLPASS|PASSWORD)\b"
PASSWORD=123456
HOST=10.0.0.199
USER=wang
MYSQLPASS=123456
#Verify the mounted nginx configuration files
root@nginx-deployment-6d8bfbdbb5-qdjng:/# cat /etc/nginx/conf.d/myserver/myserver.conf
server {
  listen 80;
  server_name www.myserver.com;
  index index.html index.php index.htm;
  location / {
    root /data/nginx/myserver;
    if (!-e $request_filename) {
      rewrite ^/(.*) /index.html last;
    }
  }
}
root@nginx-deployment-6d8bfbdbb5-qdjng:~# cat /etc/nginx/conf.d/mysite/mysite.conf
server {
  listen 80;
  server_name www.mysite.com;
  index index.html index.php index.htm;
  location / {
    root /data/nginx/mysite;
    if (!-e $request_filename) {
      rewrite ^/(.*) /index.html last;
    }
  }
}
#Verify the web content mounts
root@nginx-deployment-6d8bfbdbb5-qdjng:~# df -h | grep "10"
10.0.0.200:/data/k8sdata/mysite 24G 19G 4.2G 82% /data/nginx/mysite
10.0.0.200:/data/k8sdata/myserver 24G 19G 4.2G 82% /data/nginx/myserver
#Test access to the web pages
root@k8s-deploy:~/k8s-Resource-N79/case10-configmap# curl www.mysite.com:30060
mysite
root@k8s-deploy:~/k8s-Resource-N79/case10-configmap# curl www.myserver.com:30060
myserver
Summary: Secret introduction, common types, and Nginx TLS based on a Secret
A Secret, like a ConfigMap, provides additional configuration to a Pod, but it is intended for small amounts of sensitive data such as passwords, tokens, or keys.
A Secret's name must be a valid DNS subdomain name.
Each Secret is limited to 1MiB, mainly to keep very large Secrets from exhausting the memory of the API server and the kubelet; creating many small Secrets can also exhaust memory, and resource quotas can be used to limit the number of Secrets per namespace.
When creating a Secret from a YAML file you can set the data and/or stringData fields; both are optional. Every value under data must be a base64-encoded string. If you do not want to do the base64 encoding yourself, use stringData instead, which accepts arbitrary plain-text strings.
A Pod can use a Secret in any of three ways:
as files in a volume mounted into one or more of its containers (e.g. crt and key files);
as container environment variables;
by the kubelet when pulling images for the Pod (authentication against an image registry).
Secret types:
Kubernetes ships with several built-in Secret types for different use cases, and each type takes different configuration parameters.
Secret types and their use cases:
Secret type Opaque:
#Opaque format, data field: values must be base64-encoded beforehand
root@k8s-deploy:~# echo admin | base64
YWRtaW4K
root@k8s-deploy:~# echo 123456 | base64
MTIzNDU2Cg==
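Note that plain echo appends a newline, which ends up inside the encoded value (the trailing K / Cg== above); echo -n avoids that:
echo -n admin | base64    #YWRtaW4=
echo -n 123456 | base64   #MTIzNDU2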
#Create the Secret
vim secret-opaque-data.yaml
apiVersion: v1
kind: Secret
metadata:
  name: data-secret
  namespace: myserver
type: Opaque
data:
  user: YWRtaW4K
  password: MTIzNDU2Cg==
kubectl apply -f secret-opaque-data.yaml
#Verify
kubectl get secrets -n myserver -o yaml
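Individual values can also be read back and decoded with jsonpath plus base64 -d:
kubectl get secret data-secret -n myserver -o jsonpath='{.data.user}' | base64 -d
kubectl get secret data-secret -n myserver -o jsonpath='{.data.password}' | base64 -d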
Opaque format, stringData field: no manual encoding required:
vim secret-opaque-stringdata.yaml
apiVersion: v1
kind: Secret
metadata:
  name: stringdata-secret
  namespace: myserver
type: Opaque
stringData:
  user: 'admin'
  password: '123456'
kubectl apply -f secret-opaque-stringdata.yaml
kubectl get secret -n myserver stringdata-secret -o yaml
#Opaque format, data field: consuming the Secret
#Create the Deployment
vim secret-data-deployment.yaml
apiVersion: v1
kind: Secret
metadata:
  name: nginx-data-secret
  namespace: myserver
type: Opaque
data:
  user: d2FuZwo=
  password: MTIzNDU2Cg==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-dp-80
  template:
    metadata:
      labels:
        app: ng-dp-80
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.20.0
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/myserver/auth
          name: secret-data
      volumes:
      - name: secret-data
        secret:
          secretName: nginx-data-secret
kubectl apply -f secret-data-deployment.yaml
#Exec into the pod and verify
kubectl exec -it nginx-deploy-6cb9488cf6-xrxdk bash -n myserver
root@nginx-deploy-6cb9488cf6-xrxdk:/# cat /data/myserver/auth/user
wang
root@nginx-deploy-6cb9488cf6-xrxdk:/# cat /data/myserver/auth/password
123456
#Check on the etcd server
root@k8s-etcd1:~# etcdctl get / --keys-only --prefix | grep nginx-data-secret
/registry/secrets/myserver/nginx-data-secret
root@k8s-etcd1:~# etcdctl get /registry/secrets/myserver/nginx-data-secret
#Check on the node
root@k8s-node2:~# find /var/lib/kubelet/ -name user
/var/lib/kubelet/pods/194cdec6-9770-4dc8-a4ef-9a76a066c039/volumes/kubernetes.io~secret/secret-data/user
root@k8s-node2:~# find /var/lib/kubelet/ -name password
/var/lib/kubelet/pods/194cdec6-9770-4dc8-a4ef-9a76a066c039/volumes/kubernetes.io~secret/secret-data/password
Secret type kubernetes.io/tls: providing a certificate to nginx:
#Upload the certificate as a TLS Secret
kubectl create secret tls myserver-tls-key --cert=./harbor.canghailyt.com.pem --key=./harbor.canghailyt.com.key -n myserver
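If no real certificate is at hand, a self-signed one can be generated first just for testing (the file names match the command above):
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout harbor.canghailyt.com.key \
  -out harbor.canghailyt.com.pem \
  -subj "/CN=harbor.canghailyt.com"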
#Create the web service that uses the certificate
vim tls-nginx.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: myserver
data:
  default: |
    server {
      listen 80;
      server_name harbor.canghailyt.com;
      listen 443 ssl;
      ssl_certificate /etc/nginx/conf.d/certs/tls.crt;
      ssl_certificate_key /etc/nginx/conf.d/certs/tls.key;
      location / {
        root /usr/share/nginx/html;
        index index.html;
        if ($scheme = http ){ # without this condition the rewrite would loop forever
          rewrite / https://harbor.canghailyt.com permanent;
        }
        if (!-e $request_filename) {
          rewrite ^/(.*) /index.html last;
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-tls
  template:
    metadata:
      labels:
        app: nginx-tls
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.20.2-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/conf.d/
        - name: myserver-tls-key
          mountPath: /etc/nginx/conf.d/certs
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
          items:
          - key: default
            path: mysite.conf
      - name: myserver-tls-key
        secret:
          secretName: myserver-tls-key
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30030
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30034
    protocol: TCP
  selector:
    app: nginx-tls
Test:
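Assuming harbor.canghailyt.com resolves to a node IP (e.g. via an /etc/hosts entry), the redirect and the TLS endpoint can be checked with curl; -k is needed because the certificate is self-signed:
curl -I http://harbor.canghailyt.com:30030/    #expect a 301 redirect to https://harbor.canghailyt.com
curl -k https://harbor.canghailyt.com:30034/   #-k skips verification of the self-signed certificate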
Using a Secret for pull authentication against a private image registry
Example of the kubernetes.io/dockerconfigjson Secret type:
It stores docker registry credentials that are used when pulling images, so nodes can pull private images without having to log in to the registry themselves.
#Create the Secret
#Method 1
kubectl create secret docker-registry harbor-secret \
--docker-server=harbor.canghailyt.com \
--docker-username=admin \
--docker-password=12345678
#Method 2 (the commonly used approach)
docker login harbor.canghailyt.com
kubectl create secret generic harbor-sc-key \
--from-file=.dockerconfigjson=/root/.docker/config.json \
--type=kubernetes.io/dockerconfigjson \
-n myserver
#Edit the test manifest
vim harbor-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-secret
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: harbor-secret-80
  template:
    metadata:
      labels:
        app: harbor-secret-80
    spec:
      containers:
      - name: harbor-secret-80
        image: harbor.canghailyt.com/base/nginx:1.16.1-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: harbor-sc-key
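The manifest still has to be applied; a missing or wrong pull secret shows up as ErrImagePull/ImagePullBackOff in the pod status checked below:
kubectl apply -f harbor-test.yaml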
#Check that the pod is running
root@k8s-deploy:~/k8s-Resource-N79/case11-secret# kubectl get pod -n myserver
NAME                                READY   STATUS    RESTARTS   AGE
harbor-secret-69cbd75c6-m7zlr       1/1     Running   0          105s
nginx-deployment-648bd9d9c8-cww8s   1/1     Running   0          49m
Summary: characteristics and use of StatefulSet and DaemonSet
StatefulSet
https://kubernetes.io/zh/docs/concepts/workloads/controllers/statefulset/
StatefulSet addresses deploying stateful service clusters and the data synchronization between their members (MySQL primary/replicas, Redis Cluster, an Elasticsearch cluster, etc.).
Pods managed by a StatefulSet have unique, stable names.
A StatefulSet starts, stops, scales, and reclaims its Pods in order.
A StatefulSet creates Pods from front to back and deletes/updates them from back to front.
It uses a headless Service (no ClusterIP; DNS queries resolve directly to the Pod IPs).
Example:
cat 1-Statefulset.yaml
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myserver-myapp
  namespace: myserver
spec:
  replicas: 3
  serviceName: "myserver-myapp-service"
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-frontend
        #image: registry.cn-qingdao.aliyuncs.com/zhangshijie/zookeeper:v3.4.14
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-service
  namespace: myserver
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
  selector:
    app: myserver-myapp-frontend
kubectl apply -f 1-Statefulset.yaml
#Check
root@k8s-deploy:~/k8s-Resource-N79/case12-Statefulset# kubectl get pod -n myserver
NAME               READY   STATUS              RESTARTS   AGE
myserver-myapp-0   1/1     Running             0          64s
myserver-myapp-1   0/1     ContainerCreating   0          12s
root@k8s-deploy:~/k8s-Resource-N79/case12-Statefulset# kubectl get pod -n myserver
NAME               READY   STATUS    RESTARTS   AGE
myserver-myapp-0   1/1     Running   0          2m2s
myserver-myapp-1   1/1     Running   0          70s
myserver-myapp-2   1/1     Running   0          8s
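Each Pod also gets a stable DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local through the headless Service, which can be checked from a test pod (the busybox pod is only an illustration):
kubectl run dns-test --image=busybox:1.28 --restart=Never -n myserver -- sleep 3600
kubectl exec -n myserver dns-test -- nslookup myserver-myapp-0.myserver-myapp-service.myserver.svc.cluster.local
kubectl exec -n myserver dns-test -- nslookup myserver-myapp-service   #the headless service name returns all pod IPs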
DaemonSet
https://kubernetes.io/zh/docs/concepts/workloads/controllers/daemonset/
A DaemonSet runs one copy of the same Pod on every node in the cluster; when a new node joins the cluster it gets the same Pod, when a node is removed from the cluster its Pod is reclaimed, and deleting the DaemonSet deletes all the Pods it created.
Typical DaemonSet uses:
running a cluster daemon on every node
running a log-collection daemon on every node
running a monitoring daemon on every node
Example:
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myserver-myapp
  namespace: myserver
spec:
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      tolerations: # tolerate the master taint so the DaemonSet also runs on master nodes
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      hostNetwork: true # use the host's network namespace
      hostPID: true # use the host's PID namespace
      containers:
      - name: myserver-myapp-frontend
        image: nginx:1.20.2-alpine
        ports:
        - containerPort: 80
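Assuming the manifest above is saved as daemonset.yaml, one Pod per node should appear after applying it:
kubectl apply -f daemonset.yaml
kubectl get ds myserver-myapp -n myserver
kubectl get pods -n myserver -o wide -l app=myserver-myapp-frontend   #one pod per node, on the host network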
#Log collection
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
#Monitoring
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      containers:
      - image: prom/node-exporter:v1.3.1
        imagePullPolicy: IfNotPresent
        name: prometheus-node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          protocol: TCP
          name: metrics
        volumeMounts:
        - mountPath: /host/proc
          name: proc
        - mountPath: /host/sys
          name: sys
        - mountPath: /host
          name: rootfs
        args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --path.rootfs=/host
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /
      hostNetwork: true
      hostPID: true
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 9100
    nodePort: 39100
    protocol: TCP
  selector:
    k8s-app: node-exporter
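Because node-exporter runs with hostNetwork and hostPort 9100 (plus NodePort 39100 on the Service, which assumes the NodePort range has been extended to include it), the metrics endpoint can be scraped from any node IP; the IP below is a placeholder:
curl -s http://<node-ip>:9100/metrics | head
curl -s http://<node-ip>:39100/metrics | head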