Kubernetes Study Notes (2): Installation and Getting Started

This example installs Kubernetes with the kubeadm tool. kubelet and docker must be running on all hosts.
Prerequisites: time synchronization, firewalld and iptables disabled, local name resolution via /etc/hosts (steps omitted).

Local environment: docker79, docker78, docker77, with IP addresses 192.168.20.79, 192.168.20.78, and 192.168.20.77 respectively.
1. Configure the yum repositories

[root@docker79 ~]# wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@docker79 ~]# wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
[root@docker79 ~]# rpm --import  rpm-package-key.gpg
[root@docker79 ~]# rpm --import yum-key.gpg
[root@docker79 ~]# vim /etc/yum.repos.d/kubernetes.repo
[root@docker79 ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
[root@docker79 ~]#
[root@docker79 ~]# cd /etc/yum.repos.d/
[root@docker79 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@docker79 yum.repos.d]#

[root@docker79 ~]# yum repolist
repo id                  repo name                 status
base/7/x86_64            CentOS-7 - Base          9,911
docker-ce-stable/x86_64  Docker CE Stable - x86_64   17
extras/7/x86_64           CentOS-7 - Extra           401
kubernetes                kubernetes                 246
updates/7/x86_64          CentOS-7 - Updates         1,308
repolist: 11,883
[root@docker79 ~]#
[root@docker79 ~]# yum install docker-ce kubelet kubeadm kubectl  ipvsadm
………
Installed:
  docker-ce.x86_64 0:18.06.1.ce-3.el7     kubeadm.x86_64 0:1.11.2-0     kubectl.x86_64 0:1.11.2-0
  kubelet.x86_64 0:1.11.2-0
Installed as dependencies:
  audit-libs-python.x86_64 0:2.8.1-3.el7_5.1           checkpolicy.x86_64 0:2.5-6.el7
  container-selinux.noarch 2:2.68-1.el7                cri-tools.x86_64 0:1.11.0-0
  kubernetes-cni.x86_64 0:0.6.0-0                      libcgroup.x86_64 0:0.41-15.el7
  libseccomp.x86_64 0:2.3.1-3.el7                      libsemanage-python.x86_64 0:2.5-11.el7
  policycoreutils-python.x86_64 0:2.5-22.el7           python-IPy.noarch 0:0.75-6.el7
  setools-libs.x86_64 0:3.3.8-2.el7                    socat.x86_64 0:1.7.3.2-2.el7
Upgraded as dependencies:
  audit.x86_64 0:2.8.1-3.el7_5.1                  audit-libs.x86_64 0:2.8.1-3.el7_5.1
Complete!

2. Configure Docker

[root@docker79 ~]# vim /usr/lib/systemd/system/docker.service
[root@docker79 ~]# grep Environment /usr/lib/systemd/system/docker.service
Environment="HTTPS_PROXY=http://proxy.domainname.com:3128"
Environment="NO_PROXY=127.0.0.0/8,192.168.0.0/16"
[root@docker79 ~]# systemctl daemon-reload
[root@docker79 ~]# systemctl start docker
[root@docker79 ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a
[root@docker79 ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-862.3.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.51GiB
Name: docker79
ID: SQLL:4WEQ:POHM:S5F6:6GPY:UIUK:L3XQ:DI7K:WT47:JPVF:AFLG:LPET
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTPS Proxy: http://proxy.domainname.com:3128
No Proxy: 127.0.0.0/8,192.168.0.0/16
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[root@k8s-master-dev ~]#

Note:
In this example docker's Cgroup Driver is cgroupfs, so both cluster initialization and node joins print a warning about the cgroup driver. To switch the cgroup driver to systemd, create a daemon.json file in the /etc/docker/ directory and set "exec-opts": ["native.cgroupdriver=systemd"] in it. The daemon.json used in this example only configures a registry mirror:

[root@k8s-master-dev ~]# cat /etc/docker/daemon.json
{"registry-mirrors": ["http://9645cd65.m.daocloud.io"]}

Then restart docker for the change to take effect.
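As a sketch, a daemon.json combining the registry mirror shown above with the systemd cgroup driver could be written and activated like this (run as root on the docker host; the mirror URL is the one from the example above):

```shell
# Assumption: we want both the registry mirror from the example and the
# systemd cgroup driver. exec-opts/native.cgroupdriver is the standard
# Docker daemon option for this.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["http://9645cd65.m.daocloud.io"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
```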

[root@docker79 ~]# tail -3 /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
[root@docker79 ~]# sysctl --system
[root@docker79 ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
[root@docker79 ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@docker79 ~]#
[root@docker79 ~]# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/etc/systemd/system/kubelet.service
/usr/bin/kubelet
[root@docker79 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=   (set this to make kubelet ignore swap)

3. Deploy Kubernetes with kubeadm

[root@docker79 ~]# systemctl stop kubelet
[root@docker79 ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@docker79 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@docker79 ~]#
[root@docker79 ~]# vim /etc/sysconfig/kubelet
[root@docker79 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@docker79 ~]#

Note: in this example the required images were downloaded in advance and packed into a gz archive. For a different version or environment, run `kubeadm config images list` to see the required images and versions, then download them with `docker pull`.
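If you would rather pull the images directly than load a prepared archive, the list command mentioned above can drive a pull loop (sketch; assumes the host can reach k8s.gcr.io, e.g. through the proxy configured earlier):

```shell
# Pull every image kubeadm needs for the release used in this article.
for image in $(kubeadm config images list --kubernetes-version=v1.11.2); do
    docker pull "$image"
done
```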

[root@docker79 ~]# tar xfz k8s.gcr.io.tar.gz
[root@docker79 ~]# cd k8s.gcr.io
[root@docker79 k8s.gcr.io]# for image in `ls`; do docker load < $image ; done
5bef08742407: Loading layer [=================>]  4.221MB/4.221MB
594f5d257cbe: Loading layer [=================>]  9.335MB/9.335MB
53cb05deeb1b: Loading layer [=================>]  32.66MB/32.66MB
Loaded image: k8s.gcr.io/coredns:1.1.3
0314be9edf00: Loading layer [=================>]   1.36MB/1.36MB
fd10c5022c9c: Loading layer [=================>]  194.9MB/194.9MB
77066773b816: Loading layer [=================>]   22.9MB/22.9MB
Loaded image: k8s.gcr.io/etcd-amd64:3.2.18
f9d9e4e6e2f0: Loading layer [=================>]  1.378MB/1.378MB
428ad8419125: Loading layer [=================>]  185.5MB/185.5MB
Loaded image: k8s.gcr.io/kube-apiserver-amd64:v1.11.2
d5095f39a884: Loading layer [=================>]  154.1MB/154.1MB
Loaded image: k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
582b548209e1: Loading layer [=================>]   44.2MB/44.2MB
e20569a478ed: Loading layer [=================>]  3.358MB/3.358MB
ada0a9dc1320: Loading layer [=================>]  52.06MB/52.06MB
Loaded image: k8s.gcr.io/kube-proxy-amd64:v1.11.2
a4a9cf804060: Loading layer [=================>]  55.61MB/55.61MB
Loaded image: k8s.gcr.io/kube-scheduler-amd64:v1.11.2
e17133b79956: Loading layer [=================>]  744.4kB/744.4kB
Loaded image: k8s.gcr.io/pause:3.1
[root@docker79 k8s.gcr.io]#
[root@docker79 ~]# kubeadm init --help
[root@docker79 ~]# kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] using Kubernetes version: v1.11.2
.........
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
  kubeadm join 192.168.20.79:6443 --token c0kgd5.pqmp9p4luuwmvv1x --discovery-token-ca-cert-hash sha256:d69f0a84d23d143f7f48aaec0c19e0f268fad8e9145dec37cb3989f55eb8a534
[root@docker79 ~]#
[root@docker79 ~]# mkdir .kube
[root@docker79 ~]# cp -i /etc/kubernetes/admin.conf .kube/config
[root@docker79 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@docker79 ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@docker79 ~]#
[root@docker79 ~]# kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
docker79   NotReady   master    14m       v1.11.2
[root@docker79 ~]#

4. Deploy flannel
Reference: https://github.com/coreos/flannel

[root@docker79 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@docker79 ~]# ls kube-flannel.yml
kube-flannel.yml
[root@docker79 ~]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@docker79 ~]#
[root@docker79 ~]# kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
docker79   NotReady   master    9m        v1.11.2
[root@docker79 ~]#
Note: wait a while; once the flannel pods are Running, the node status changes to Ready.
[root@docker79 ~]# kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
docker79   Ready     master    25m       v1.11.2
[root@docker79 ~]# kubectl get pods -n kube-system
NAME                        READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-fhcqd     1/1       Running   0          25m
coredns-78fcdf6894-pqjm4     1/1       Running   0          25m
etcd-docker79                1/1       Running   0          24m
kube-apiserver-docker79      1/1       Running   0          24m
kube-controller-manager-docker79 1/1   Running   0          25m
kube-flannel-ds-amd64-rlk27  1/1       Running   0          2m
kube-proxy-s7q7r             1/1       Running   0          25m
kube-scheduler-docker79      1/1       Running   0          24m
[root@docker79 ~]#
[root@docker79 ~]# kubectl get ns
NAME          STATUS    AGE
default       Active    26m
kube-public   Active    26m
kube-system   Active    26m
[root@docker79 ~]#

5. Join the nodes to the Kubernetes cluster

[root@docker79 ~]# scp /etc/yum.repos.d/kubernetes.repo docker78:/etc/yum.repos.d/
[root@docker79 ~]# scp /etc/yum.repos.d/docker-ce.repo docker78:/etc/yum.repos.d/
[root@docker79 ~]# scp *.gpg docker78:/root/
[root@docker79 ~]# scp k8s.gcr.io.tar.gz docker78:/root/
[root@docker79 ~]# scp /etc/sysctl.conf docker78:/etc/
[root@docker79 ~]# scp /etc/sysconfig/kubelet docker78:/etc/sysconfig/

[root@docker78 ~]# rpm --import rpm-package-key.gpg
[root@docker78 ~]# rpm --import yum-key.gpg
[root@docker78 ~]# yum install docker-ce kubelet kubeadm kubectl
……
[root@docker79 ~]# scp /usr/lib/systemd/system/docker.service 
[root@docker79 ~]# scp /etc/sysconfig/kubelet docker78:/etc/sysconfig/
[root@docker79 ~]#

[root@docker78 ~]# systemctl enable docker
[root@docker78 ~]# systemctl enable kubelet
[root@docker78 ~]# systemctl start docker
[root@docker78 ~]# tar xfz k8s.gcr.io.tar.gz
[root@docker78 ~]# cd k8s.gcr.io
[root@docker78 k8s.gcr.io]# for image in `ls` ; do docker load < $image ; done
[root@docker78 ~]# kubeadm join 192.168.20.79:6443 --token c0kgd5.pqmp9p4luuwmvv1x --discovery-token-ca-cert-hash sha256:d69f0a84d23d143f7f48aaec0c19e0f268fad8e9145dec37cb3989f55eb8a534 --ignore-preflight-errors=Swap
......

The same steps apply on docker77 (omitted).

6. Supplement: removing a node from the k8s cluster
1) kubectl drain nodename --delete-local-data --force --ignore-daemonsets   # before deleting a node, evict the pods on it
2) kubectl delete node nodename   # then delete the node
3) run `kubeadm reset` on the removed node, otherwise it will fail to rejoin the k8s cluster later

7. Basic kubectl usage
kubectl talks to the API server to create, delete, update, and query all kinds of resources. It manages many object types: pod/service/controller (replicaset, deployment, statefulset, daemonset, job, cronjob, node).
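To discover the full list of resource types the API server exposes, and the schema of any one of them, kubectl itself can be queried (the `api-resources` subcommand is available as of the v1.11 client used here):

```shell
# List all resource types, their short names, and whether they are namespaced.
kubectl api-resources
# Show the documented fields of a resource type.
kubectl explain pod
```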

(1) View cluster info

[root@docker79 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.20.79:6443
KubeDNS is running at https://192.168.20.79:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

(2) View version info

[root@docker79 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
[root@docker79 ~]#

(3) kubectl basics

[root@docker79 ~]# kubectl run nginx-deploy --image=nginx:latest --replicas=1 --dry-run
deployment.apps/nginx-deploy created (dry run)
[root@docker79 ~]# kubectl run nginx-deploy --image=nginx:latest --replicas=1
deployment.apps/nginx-deploy created
[root@docker79 ~]# kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1         1         1            0           12s
[root@docker79 ~]# kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1         1         1            1           1m
[root@docker79 ~]# kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
nginx-deploy-7f497cdbcf-k8rgl   1/1       Running   0          1m
[root@docker79 ~]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP           NODE       NOMINATED NODE
nginx-deploy-7f497cdbcf-k8rgl   1/1       Running   0          1m        10.244.1.2   docker78   <none>
[root@docker79 ~]#
[root@docker79 ~]# kubectl get nodes --show-labels
NAME       STATUS    ROLES     AGE       VERSION   LABELS
docker77   Ready     <none>    15m       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=docker77
docker78   Ready     <none>    15m       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=docker78
docker79   Ready     master    29m       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=docker79,node-role.kubernetes.io/master=
[root@docker79 ~]#
[root@docker79 ~]# elinks --dump http://10.244.1.2    (access the pod IP)
                               Welcome to nginx!
   If you see this page, the nginx web server is successfully installed and
   working. Further configuration is required.
   For online documentation and support please refer to [1]nginx.org.
   Commercial support is available at [2]nginx.com.
   Thank you for using nginx.
References
   Visible links
   1. http://nginx.org/
   2. http://nginx.com/
[root@docker79 ~]#
[root@docker79 ~]# kubectl delete pod nginx-deploy-7f497cdbcf-k8rgl
pod "nginx-deploy-7f497cdbcf-k8rgl" deleted
[root@docker79 ~]# kubectl get pods -o wide   (after the pod is deleted, the controller creates a new one)
NAME                            READY     STATUS    RESTARTS   AGE       IP           NODE       NOMINATED NODE
nginx-deploy-7f497cdbcf-r2tn6   1/1       Running   0          17s       10.244.1.3   docker78   <none>
[root@docker79 ~]#
[root@docker79 ~]# kubectl expose deployment nginx-deploy --name=nginx --port=80 --target-port=80 --protocol=TCP --type=ClusterIP
service/nginx exposed
[root@docker79 ~]# kubectl get svc   (service created; its info is shown below)
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   40m
nginx        ClusterIP   10.100.115.103   <none>        80/TCP    9s
[root@docker79 ~]# elinks --dump http://10.100.115.103   (access the service IP)
                               Welcome to nginx!
   If you see this page, the nginx web server is successfully installed and
   working. Further configuration is required.
   For online documentation and support please refer to [1]nginx.org.
   Commercial support is available at [2]nginx.com.
   Thank you for using nginx.
References
   Visible links
   1. http://nginx.org/
   2. http://nginx.com/
[root@docker79 ~]#

[root@docker79 ~]# kubectl get pods -n kube-system   (note the DNS pods)
NAME                               READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-lkkxl           1/1       Running   0          45m
coredns-78fcdf6894-rw42j           1/1       Running   0          45m
etcd-docker79                      1/1       Running   0          44m
kube-apiserver-docker79            1/1       Running   0          44m
kube-controller-manager-docker79   1/1       Running   0          44m
kube-flannel-ds-amd64-kx6x6        1/1       Running   0          31m
kube-flannel-ds-amd64-tz62z        1/1       Running   0          33m
kube-flannel-ds-amd64-wcnmh        1/1       Running   0          31m
kube-proxy-h6ltq                   1/1       Running   0          31m
kube-proxy-lwqkr                   1/1       Running   0          31m
kube-proxy-wz6pz                   1/1       Running   0          45m
kube-scheduler-docker79            1/1       Running   0          44m
[root@docker79 ~]#
[root@docker79 ~]# kubectl get svc -n kube-system   (the cluster DNS service)
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   46m
[root@docker79 ~]#

[root@docker79 ~]# kubectl run client --image=busybox:latest --replicas=1 -it --restart=Never
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ #
/ # wget -O - -q http://nginx     (DNS resolves the service name)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>  
…...

[root@docker79 ~]# dig -t A nginx.default.svc.cluster.local @10.96.0.10
......
;; ANSWER SECTION:
nginx.default.svc.cluster.local. 5 IN   A   10.100.115.103
;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Sep 23 18:56:58 CST 2018
;; MSG SIZE  rcvd: 107
[root@docker79 ~]#

[root@docker79 ~]# kubectl scale --replicas=2 deployment nginx-deploy   (scale up the deployment)
deployment.extensions/nginx-deploy scaled
[root@docker79 ~]# kubectl get pods -o wide
NAME                            READY     STATUS         RESTARTS   AGE       IP           NODE       NOMINATED NODE
client                          0/1       Completed      0          8m        10.244.1.4   docker78   <none>
nginx-deploy-7f497cdbcf-4wjfr   0/1       ErrImagePull   0          10s       10.244.2.2   docker77   <none>
nginx-deploy-7f497cdbcf-r2tn6   1/1       Running        0          20m       10.244.1.3   docker78   <none>
[root@docker79 ~]#

[root@docker78 ~]# docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
nginx                                      latest              06144b287844        2 weeks ago         109MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.2             38521457c799        6 weeks ago         155MB
k8s.gcr.io/kube-proxy-amd64                v1.11.2             46a3cd725628        6 weeks ago         97.8MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.2             821507941e9c        6 weeks ago         187MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.2             37a1403e6c1a        6 weeks ago         56.8MB
k8s.gcr.io/coredns                         1.1.3               b3b94275d97c        4 months ago        45.6MB
k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be2        5 months ago        219MB
quay.io/coreos/flannel                     v0.10.0-amd64       f0fad859c909        8 months ago        44.6MB
k8s.gcr.io/pause                           3.1                 da86e6ba6ca1        9 months ago        742kB
quay.io/coreos/flannel                     v0.9.1              2b736d06ca4c        10 months ago       51.3MB
[root@docker78 ~]# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
532bb3de6463        nginx                  "nginx -g 'daemon of…"   2 minutes ago       Up 2 minutes                            k8s_nginx-deploy_nginx-deploy-7f497cdbcf-k8rgl_default_d191ad57-bf1b-11e8-aca7-000c295011ce_0
4ec26e0ccf80        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_nginx-deploy-7f497cdbcf-k8rgl_default_d191ad57-bf1b-11e8-aca7-000c295011ce_0
e265f9a85a24        f0fad859c909           "/opt/bin/flanneld -…"   16 minutes ago      Up 16 minutes                           k8s_kube-flannel_kube-flannel-ds-amd64-wcnmh_kube-system_07aa1723-bf1a-11e8-aca7-000c295011ce_0
3baf108c6aa4        46a3cd725628           "/usr/local/bin/kube…"   16 minutes ago      Up 16 minutes                           k8s_kube-proxy_kube-proxy-lwqkr_kube-system_07aa126d-bf1a-11e8-aca7-000c295011ce_0
94c09a2c0507        k8s.gcr.io/pause:3.1   "/pause"                 16 minutes ago      Up 16 minutes                           k8s_POD_kube-proxy-lwqkr_kube-system_07aa126d-bf1a-11e8-aca7-000c295011ce_0
22858af6f86f        k8s.gcr.io/pause:3.1   "/pause"                 16 minutes ago      Up 16 minutes                           k8s_POD_kube-flannel-ds-amd64-wcnmh_kube-system_07aa1723-bf1a-11e8-aca7-000c295011ce_0
[root@docker78 ~]#

Deployment upgrade:

kubectl set image deployment DeployNAME ContainerNAME=ImageNAME:NewVersion

Deployment rollback:

kubectl rollout undo deployment DeployNAME

Editing a resource:

kubectl edit svc SvcName
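For instance, applied to the nginx-deploy Deployment created earlier (the target tag nginx:1.15 is made up for illustration):

```shell
# Upgrade the image of the nginx-deploy container, watch the rollout,
# then roll it back. The container name matches the deployment name
# because `kubectl run nginx-deploy` named it that way.
kubectl set image deployment nginx-deploy nginx-deploy=nginx:1.15
kubectl rollout status deployment nginx-deploy
kubectl rollout undo deployment nginx-deploy
```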

8. Troubleshooting
1) The following message appears when joining the cluster:

[discovery] Created cluster-info discovery client, requesting info from "https://192.168.20.79:6443"
[discovery] Failed to connect to API Server "192.168.20.79:6443": token id "nhgji7" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token

Check the tokens on the master:

[root@k8s-master-dev ~]# kubeadm token list
TOKEN     TTL       EXPIRES   USAGES    DESCRIPTION   EXTRA GROUPS
[root@k8s-master-dev ~]#

This means the token generated at cluster creation has expired (tokens are valid for 24 hours), so a new one must be created:

[root@k8s-master-dev ~]# kubeadm token create
i1p283.mudnqd3raawz2o01
[root@k8s-master-dev ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
i1p283.mudnqd3raawz2o01   23h       2019-04-03T14:46:43+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

[root@k8s-master-dev ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
92ac0cd4d6224025da32385a4d46df7e80ee11c203bdb53e61e827baa1211536
[root@k8s-master-dev ~]#
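The hash pipeline above can be exercised even without a cluster. The sketch below generates a throwaway self-signed CA certificate and computes the same sha256-of-public-key digest that kubeadm expects; on a real master you would point the commands at /etc/kubernetes/pki/ca.crt instead:

```shell
# Demo: compute a --discovery-token-ca-cert-hash value.
# The throwaway cert stands in for /etc/kubernetes/pki/ca.crt.
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
    -keyout "$workdir/ca.key" -out "$workdir/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -in "$workdir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```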

Then run the following on the node:

[root@k8s-node5-dev ~]# kubeadm join 192.168.20.79:6443 --token i1p283.mudnqd3raawz2o01 --discovery-token-ca-cert-hash sha256:92ac0cd4d6224025da32385a4d46df7e80ee11c203bdb53e61e827baa1211536 --ignore-preflight-errors=Swap

2) The following message appears when joining the cluster:

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused

Edit the kubelet config file and restart kubelet:

[root@k8s-node6-dev ~]# vim /etc/sysconfig/kubelet
[root@k8s-node6-dev ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@k8s-node6-dev ~]#
[root@k8s-node6-dev ~]# systemctl restart kubelet
[root@k8s-node6-dev ~]# kubeadm join 192.168.20.79:6443 --token i1p283.mudnqd3raawz2o01 --discovery-token-ca-cert-hash sha256:92ac0cd4d6224025da32385a4d46df7e80ee11c203bdb53e61e827baa1211536 --ignore-preflight-errors=Swap

3) The following message appears when joining the cluster:

[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition

This means the node previously joined another k8s cluster and its old CSR/state was not cleaned up. Run `kubeadm reset` on the node, then run the join command again.

Reposted from: https://blog.51cto.com/caiyuanji/2240546
