Kubernetes Cluster Initial Configuration

Configure the IP Address

On Ubuntu 18.04, edit the netplan configuration to change the IP: vi /etc/netplan/50-cloud-init.yaml

network:
    ethernets:
        ens33:
          addresses: [192.168.141.110/24]
          gateway4: 192.168.141.2
          nameservers:
            addresses: [192.168.141.2]
    version: 2

Apply the configuration: netplan apply

Configure the Hostname

Set the hostname:

hostnamectl set-hostname your-computer-name

Add a hosts entry:

cat >> /etc/hosts << EOF
192.168.123.110 your-computer-name
EOF
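Note that re-running the heredoc above appends a duplicate entry each time. A minimal idempotent sketch (hypothetical `ensure_hosts_entry` helper, demonstrated on a temp file rather than the real /etc/hosts):

```python
import tempfile, os

def ensure_hosts_entry(path, ip, hostname):
    """Append 'ip hostname' to a hosts-format file only if hostname is absent."""
    with open(path, "a+") as f:
        f.seek(0)
        text = f.read()
        for line in text.splitlines():
            parts = line.split()
            # Skip comment lines; match the hostname in any alias column.
            if parts and not parts[0].startswith("#") and hostname in parts[1:]:
                return False
        # Guard against a file that does not end with a newline.
        if text and not text.endswith("\n"):
            f.write("\n")
        f.write(f"{ip} {hostname}\n")
        return True

# Demo on a temp file rather than the real /etc/hosts.
fd, demo = tempfile.mkstemp()
os.close(fd)
print(ensure_hosts_entry(demo, "192.168.123.110", "kubernetes-master"))  # True
print(ensure_hosts_entry(demo, "192.168.123.110", "kubernetes-master"))  # False
os.unlink(demo)
```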

Server Information

Service   Hostname        IP/Port              CPU/MEM      Purpose
GitLab    docker-gitlab   192.168.141.200:80   2 cores/2G   Code management
Nexus     docker-nexus    192.168.141.201:80   2 cores/2G   Dependency management
Harbor    docker-harbor   192.168.141.202:80   2 cores/2G   Image management
ZenTao    docker-zentao   192.168.141.203:80   2 cores/2G   Project management

Hostname             IP               Role     CPU/MEM      Disk
kubernetes-master    192.168.141.110  Master   2 cores/2G   20G
kubernetes-node-01   192.168.141.120  Node     2 cores/4G   20G
kubernetes-node-02   192.168.141.121  Node     2 cores/4G   20G
kubernetes-node-03   192.168.141.122  Node     2 cores/4G   20G
kubernetes-volumes   192.168.141.130  NFS      2 cores/2G   Expand as needed

Master

Working directory:

/usr/local/kubernetes/cluster

# Export the default configuration to a file
kubeadm config print init-defaults > kubeadm.yml

Edit the configuration as follows:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Change this to the master node's IP
  advertiseAddress: 192.168.141.110
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kubernetes-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
# Google's registry is unreachable from mainland China, so use the Aliyun mirror
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# Set the version number to match the installed kubeadm (the outputs in this walkthrough are from v1.17.4)
kubernetesVersion: v1.17.4
networking:
  dnsDomain: cluster.local
  # Set the pod subnet to a range that does not overlap the VM network (Flannel's default range is used here)
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
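As the podSubnet comment notes, the pod, service, and host networks must not overlap. The three ranges from this configuration can be checked with the Python standard library:

```python
import ipaddress

node_net = ipaddress.ip_network("192.168.141.0/24")  # VM/host network
pod_net = ipaddress.ip_network("10.244.0.0/16")      # podSubnet
svc_net = ipaddress.ip_network("10.96.0.0/12")       # serviceSubnet

# All three ranges must be pairwise disjoint, or routing breaks.
nets = [node_net, pod_net, svc_net]
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
print("no overlap")
```

10.96.0.0/12 covers 10.96.0.0 through 10.111.255.255, so 10.244.0.0/16 clears it; a podSubnet like 10.100.0.0/16 would not.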

List the images required by this configuration:

kubeadm config images list --config kubeadm.yml

Pull the images in advance:

kubeadm config images pull --config kubeadm.yml

Initialize the master node:

kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log

Configure kubectl:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Join Worker Nodes

# On the master: create a token
kubeadm token create
# On the master: list tokens
kubeadm token list
# On the master: compute the CA certificate SHA-256 hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# On each worker node: join using the token and hash from above
kubeadm join 10.10.44.161:6443 --token 54fd3m.d2ck6x78ocljcp1y --discovery-token-ca-cert-hash sha256:2dd957bc218a766d87ba421a557140303bd2c22a6172bd123642dc2b9009a4cc

Output

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
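The join command is just three pieces glued together: the API server endpoint, a bootstrap token (fixed format `[a-z0-9]{6}.[a-z0-9]{16}`), and the CA certificate hash (64 hex digits). A sketch with a hypothetical `build_join_command` helper that validates the pieces before assembling the command line:

```python
import re

# kubeadm bootstrap tokens are always 6 chars, a dot, then 16 chars (a-z, 0-9).
TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")
# The discovery hash is a SHA-256 digest: 64 lowercase hex digits.
HASH_RE = re.compile(r"^[0-9a-f]{64}$")

def build_join_command(endpoint, token, ca_hash):
    """Validate the token and hash, then format a `kubeadm join` command line."""
    if not TOKEN_RE.match(token):
        raise ValueError(f"malformed bootstrap token: {token!r}")
    if not HASH_RE.match(ca_hash):
        raise ValueError(f"malformed CA cert hash: {ca_hash!r}")
    return (f"kubeadm join {endpoint} --token {token} "
            f"--discovery-token-ca-cert-hash sha256:{ca_hash}")

print(build_join_command(
    "10.10.44.161:6443",
    "54fd3m.d2ck6x78ocljcp1y",
    "2dd957bc218a766d87ba421a557140303bd2c22a6172bd123642dc2b9009a4cc",
))
```

Validating up front catches a truncated copy-paste of the hash before kubeadm fails with a less obvious discovery error.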

Check Pod Status

kubectl get pod -n kube-system -o wide

NAME                                        READY   STATUS    RESTARTS   AGE     IP                NODE                 NOMINATED NODE   READINESS GATES
coredns-9d85f5447-jdz27                     0/1     Pending   0          27m     <none>            <none>               <none>           <none>
coredns-9d85f5447-sh2fc                     0/1     Pending   0          27m     <none>            <none>               <none>           <none>
etcd-kubernetes-master                      1/1     Running   0          27m     192.168.123.110   kubernetes-master    <none>           <none>
kube-apiserver-kubernetes-master            1/1     Running   0          27m     192.168.123.110   kubernetes-master    <none>           <none>
kube-controller-manager-kubernetes-master   1/1     Running   0          27m     192.168.123.110   kubernetes-master    <none>           <none>
kube-proxy-8pcvf                            1/1     Running   0          2m10s   192.168.123.122   kubernetes-node-03   <none>           <none>
kube-proxy-s6cq7                            1/1     Running   0          21m     192.168.123.120   kubernetes-node-01   <none>           <none>
kube-proxy-tdr2l                            1/1     Running   0          5m52s   192.168.123.121   kubernetes-node-02   <none>           <none>
kube-proxy-z95pn                            1/1     Running   0          27m     192.168.123.110   kubernetes-master    <none>           <none>
kube-scheduler-kubernetes-master            1/1     Running   0          27m     192.168.123.110   kubernetes-master    <none>           <none>
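The two coredns pods stay Pending until a network plugin is installed, which is expected at this point. A small sketch (hypothetical `not_ready_pods` helper) that parses `kubectl get pod` output and flags anything not yet Running:

```python
def not_ready_pods(kubectl_output):
    """Return (name, status) pairs for pods whose STATUS column is not Running."""
    lines = kubectl_output.strip().splitlines()
    header = lines[0].split()
    name_i = header.index("NAME")
    status_i = header.index("STATUS")
    bad = []
    for line in lines[1:]:
        cols = line.split()
        if cols[status_i] != "Running":
            bad.append((cols[name_i], cols[status_i]))
    return bad

sample = """\
NAME                     READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-jdz27  0/1     Pending   0          27m
etcd-kubernetes-master   1/1     Running   0          27m
"""
print(not_ready_pods(sample))  # [('coredns-9d85f5447-jdz27', 'Pending')]
```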

Install the Calico Plugin

Working directory:

/usr/local/kubernetes/cluster

Download the Calico manifest:

wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml

Edit the manifest:

vi calico.yaml

Change 192.168.0.0/16 to 10.244.0.0/16 so the Calico pool matches the podSubnet configured earlier:

# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16" 
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
  value: "ACCEPT"
# Disable IPv6 on Kubernetes.
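The manual vi edit can also be scripted. A rough sketch (plain regex substitution, assuming the value line directly follows the name line as in the v3.8 manifest) with a hypothetical `set_calico_pool_cidr` helper:

```python
import re

def set_calico_pool_cidr(manifest_text, new_cidr):
    """Swap the quoted value following CALICO_IPV4POOL_CIDR in calico.yaml text."""
    pattern = re.compile(
        r'(- name: CALICO_IPV4POOL_CIDR\s*\n\s*value: ")[^"]+(")'
    )
    new_text, count = pattern.subn(r"\g<1>" + new_cidr + r"\g<2>", manifest_text)
    if count == 0:
        raise ValueError("CALICO_IPV4POOL_CIDR not found in manifest")
    return new_text

snippet = '- name: CALICO_IPV4POOL_CIDR\n  value: "192.168.0.0/16"\n'
print(set_calico_pool_cidr(snippet, "10.244.0.0/16"))
```

Raising when the pattern is absent protects against a newer manifest that has moved or commented out the variable.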

Deploy Calico:

kubectl apply -f calico.yaml

Watch the pod status until everything is Running:

watch kubectl get pods --all-namespaces

NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-bc44d789c-f6rl7     1/1     Running   0          15m
kube-system   calico-node-ch885                           1/1     Running   0          15m
kube-system   calico-node-cs8jb                           1/1     Running   0          15m
kube-system   calico-node-g5bcb                           1/1     Running   0          15m
kube-system   calico-node-mddc5                           1/1     Running   0          15m
kube-system   coredns-9d85f5447-jdz27                     1/1     Running   0          59m
kube-system   coredns-9d85f5447-sh2fc                     1/1     Running   0          59m
kube-system   etcd-kubernetes-master                      1/1     Running   0          59m
kube-system   kube-apiserver-kubernetes-master            1/1     Running   0          59m
kube-system   kube-controller-manager-kubernetes-master   1/1     Running   0          59m
kube-system   kube-proxy-8pcvf                            1/1     Running   0          33m
kube-system   kube-proxy-s6cq7                            1/1     Running   0          53m
kube-system   kube-proxy-tdr2l                            1/1     Running   0          37m
kube-system   kube-proxy-z95pn                            1/1     Running   0          59m
kube-system   kube-scheduler-kubernetes-master            1/1     Running   0          59m

Check Node Status

kubectl get node

NAME                 STATUS   ROLES    AGE   VERSION
kubernetes-master    Ready    master   60m   v1.17.4
kubernetes-node-01   Ready    <none>   54m   v1.17.4
kubernetes-node-02   Ready    <none>   38m   v1.17.4
kubernetes-node-03   Ready    <none>   34m   v1.17.4

Reset Kubernetes

Reset command:

sudo kubeadm reset

Output

[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "kubernetes-master" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
W0410 21:07:15.789401   71480 removeetcdmember.go:61] [reset] failed to remove etcd member: error syncing endpoints with etc: etcdclient: no available endpoints
.Please manually remove this etcd member using etcdctl
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Reset iptables

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
sysctl net.bridge.bridge-nf-call-iptables=1

Manually run the following commands to remove the leftover network interfaces:

sudo ip link del cni0
sudo ip link del flannel.1

Node NotReady

systemctl restart kube-proxy
systemctl restart kubelet

Master NotReady

systemctl restart kubelet