1. Introduction to Kubernetes
Kubernetes consists of the following core components:
etcd: stores the state of the entire cluster
apiserver: the single entry point for all resource operations; provides authentication, authorization, access control, API registration and discovery
controller manager: maintains the cluster state, e.g. failure detection, automatic scaling, rolling updates
scheduler: schedules resources, placing Pods onto suitable machines according to the configured scheduling policy
kubelet: maintains the container lifecycle and also manages Volumes (CVI) and networking (CNI)
Container runtime: manages images and actually runs Pods and containers (CRI)
kube-proxy: provides in-cluster service discovery and load balancing for Services
Besides the core components, several add-ons are recommended:
kube-dns: provides DNS for the whole cluster
Ingress Controller: provides external access to services
Heapster: provides resource monitoring
Dashboard: provides a GUI
Federation: provides clusters spanning availability zones
Fluentd-elasticsearch: provides cluster log collection, storage and querying
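On a cluster deployed with kubeadm (as in section 2 below), most of these control-plane components run as static Pods on the master node, which makes them easy to inspect. A quick look, assuming a working kubeconfig:
ls /etc/kubernetes/manifests/
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
kubectl get pod -n kube-system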
• Core layer: the most essential Kubernetes functionality; exposes APIs for building higher-level applications externally, and provides a plugin-style application runtime environment internally
• Application layer: deployment (stateless and stateful applications, batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
• Management layer: system metrics (infrastructure, container and network metrics), automation (auto-scaling, dynamic provisioning, etc.) and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
• Interface layer: the kubectl command-line tool, client SDKs and cluster federation
• Ecosystem: the large container-cluster management and scheduling ecosystem above the interface layer, which falls into two categories
• Outside Kubernetes: logging, monitoring, configuration management, CI, CD, Workflow, FaaS, OTS applications, ChatOps, etc.
• Inside Kubernetes: CRI, CNI, CVI, image registries, Cloud Provider, cluster configuration and management, etc.
2. Deploying Kubernetes
All nodes need the following configuration.
Disable SELinux and the iptables firewall on each node.
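One typical sequence, assuming RHEL/CentOS hosts with firewalld as the firewall frontend:
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
systemctl disable --now firewalld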
Install the Docker engine on all nodes:
yum install -y docker-ce docker-ce-cli
Configure the bridge netfilter parameters:
vim /etc/sysctl.d/docker.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl --system
systemctl enable docker
systemctl start docker
scp /etc/sysctl.d/docker.conf root@172.25.3.4:/etc/sysctl.d/docker.conf
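With more nodes, a small loop avoids repeating the scp (the IP list here is hypothetical; substitute your own node list):
for ip in 172.25.3.2 172.25.3.4; do
  scp /etc/sysctl.d/docker.conf root@$ip:/etc/sysctl.d/docker.conf
  ssh root@$ip 'sysctl --system'
done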
Disable the swap partition:
swapoff -a
Comment out the swap entry in /etc/fstab.
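One way to comment it out without opening an editor (a sketch; check that the pattern matches your fstab before relying on it):
sed -i '/\sswap\s/s/^/#/' /etc/fstab
grep swap /etc/fstab    # the swap line should now start with '#'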
Change the Docker cgroup driver:
"exec-opts": ["native.cgroupdriver=systemd"]
Set the default registry mirror:
"registry-mirrors": ["https://reg.westos.org"]
Put both entries into /etc/docker/daemon.json and restart the service:
[root@server2 docker]# cat daemon.json
{
"registry-mirrors": ["https://reg.westos.org"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl daemon-reload
systemctl restart docker
The cgroup driver has now switched to systemd.
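This can be verified against the running daemon:
docker info | grep -i 'cgroup driver'    # should now report: Cgroup Driver: systemd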
Install the deployment tools (kubelet, kubeadm, kubectl) and enable the kubelet:
vim /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
yum install -y kubelet kubeadm kubectl
systemctl enable --now kubelet
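A quick sanity check of what was installed; note that until kubeadm init runs, the kubelet restarts in a loop waiting for its configuration, which is expected:
rpm -q kubelet kubeadm kubectl
kubeadm version -o short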
server1 acts as the control-plane node; the cluster is built from it.
View the default configuration:
[root@server1 docker]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
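If the init needs customizing (e.g. a different pod subnet), one possible workflow is to save these defaults, edit them, and pass the file to kubeadm init --config (the file name here is arbitrary):
kubeadm config print init-defaults > kubeadm-init.yaml
# edit kubeadm-init.yaml, then: kubeadm init --config kubeadm-init.yaml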
List the required images:
[root@server1 docker]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.3
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.3
registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.3
registry.aliyuncs.com/google_containers/kube-proxy:v1.21.3
registry.aliyuncs.com/google_containers/pause:3.4.1
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:v1.8.0
Pull the images (CoreDNS is additionally pulled from Docker Hub):
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
docker pull coredns/coredns:1.8.0
Re-tag the images above into the reg.westos.org format and push them to a Harbor project named k8s.
Re-tag (the CoreDNS tag must match the name kubeadm expects):
docker tag coredns/coredns:1.8.0 reg.westos.org/k8s/coredns:v1.8.0
docker images |grep ^registry.aliyuncs.com | awk '{print $1":"$2}' | awk -F/ '{system("docker tag "$0" reg.westos.org/k8s/"$3"")}'
Push them:
docker images | grep ^reg.westos.org/k8s | awk '{system("docker push "$1":"$2"")}'
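Before initializing, it is worth confirming that kubeadm resolves every required image name against the private repository:
kubeadm config images list --image-repository reg.westos.org/k8s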
Initialize the cluster:
[root@server1 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository reg.westos.org/k8s
W0724 05:57:36.282798 5614 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0724 05:57:36.282865 5614 version.go:103] falling back to the local client version: v1.21.3
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local server1] and IPs [10.96.0.1 172.25.3.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost server1] and IPs [172.25.3.1 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost server1] and IPs [172.25.3.1 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.003139 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node server1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node server1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: p2zscb.9010o16wbhsimkf1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.25.3.1:6443 --token p2zscb.9010o16wbhsimkf1 \
--discovery-token-ca-cert-hash sha256:55c1a032c0997fe8276b9c67143e209a8cfe859440f594581716c617e9d3d723
Run the join command on each of the other Docker nodes:
kubeadm join 172.25.3.1:6443 --token p2zscb.9010o16wbhsimkf1 \
--discovery-token-ca-cert-hash sha256:55c1a032c0997fe8276b9c67143e209a8cfe859440f594581716c617e9d3d723
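The bootstrap token's ttl is 24h (see the defaults above), so a node joining later may find it expired; a fresh join command can be printed on the control-plane node:
kubeadm token create --print-join-command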
Configure kubectl.
Run the command below and append it to .bash_profile so it takes effect automatically at login:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo export KUBECONFIG=/etc/kubernetes/admin.conf >> .bash_profile
Configure kubectl command completion:
echo "source <(kubectl completion bash)" >> ~/.bashrc
Check the cluster deployment:
kubectl get pod -n kube-system
Two pods are not yet Running because the flannel network add-on has not been installed (see the check below).
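To see exactly which pods are stuck, filter for CoreDNS; the two replicas stay Pending until a CNI plugin provides the pod network:
kubectl get pod -n kube-system | grep coredns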
Install the flannel network add-on.
Download the flannel network add-on from https://github.com/coreos/flannel:
docker pull quay.io/coreos/flannel:v0.14.0
Re-tag it and push it to the default Harbor project (library):
docker tag quay.io/coreos/flannel:v0.14.0 reg.westos.org/library/flannel:v0.14.0
docker push reg.westos.org/library/flannel:v0.14.0
Download the manifest:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Manifest contents (the image fields have been shortened to flannel:v0.14.0 so they resolve through the default library project on the private registry):
[root@server1 ~]# cat kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
Apply the manifest:
[root@server1 ~]# kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds unchanged
Check the result; everything is now Running:
[root@server1 ~]# kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7777df944c-brss6          1/1     Running   1          20h
coredns-7777df944c-mzblw          1/1     Running   1          20h
etcd-server1                      1/1     Running   1          20h
kube-apiserver-server1            1/1     Running   1          20h
kube-controller-manager-server1   1/1     Running   2          20h
kube-flannel-ds-5wp7g             1/1     Running   1          16h
kube-flannel-ds-6hnhq             1/1     Running   1          19h
kube-flannel-ds-amd64-dfk9s       1/1     Running   1          19h
kube-flannel-ds-amd64-nb69x       1/1     Running   1          19h
kube-flannel-ds-amd64-t4pzt       1/1     Running   1          16h
kube-flannel-ds-jpsfr             1/1     Running   1          19h
kube-proxy-p78dh                  1/1     Running   1          20h
kube-proxy-rpqxv                  1/1     Running   1          19h
kube-proxy-tgvkq                  1/1     Running   1          16h
kube-scheduler-server1            1/1     Running   2          20h
Join nodes server2 and server4 to the cluster:
kubeadm join 172.25.3.1:6443 --token p2zscb.9010o16wbhsimkf1 \
--discovery-token-ca-cert-hash sha256:55c1a032c0997fe8276b9c67143e209a8cfe859440f594581716c617e9d3d723
Check the cluster:
[root@server1 ~]# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
server1   Ready    control-plane,master   20h   v1.21.3
server2   Ready    <none>                 19h   v1.21.3
server4   Ready    <none>                 16h   v1.21.3
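As a final smoke test, a throwaway pod can be scheduled (a sketch; the pod name demo and the nginx image are arbitrary choices):
kubectl run demo --image=nginx
kubectl get pod -o wide    # the pod should get a 10.244.x.x address from the flannel subnet
kubectl delete pod demo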