Creating a Kubernetes Cluster with kubeadm

There is plenty of material online introducing Kubernetes itself, so this article focuses on how to use kubeadm to build the simplest possible cluster.

About kubeadm

kubeadm is the official Kubernetes tool for quickly installing a cluster, and it is updated in step with Kubernetes itself. It exists simply so we can initialize a cluster faster; in production you would still deploy with plain binaries, scripts, or tools like Ansible/SaltStack. Enough preamble, let's get started!

Package repositories

Kubernetes is a container orchestration toolchain, so we need to configure two repositories: one for the container runtime and one for Kubernetes itself.

Docker repository

You can download the repo file from one of the domestic mirror sites, but note that the downloaded file needs editing: its URLs still point at Docker's official site, so change them to the mirror. Last time I did not look closely and used it as-is, and downloads were bafflingly slow, slow enough that I blamed the library's network; it turned out the repo URLs were wrong. After switching to the domestic mirror, downloads were easily dozens of times faster. (Tsinghua University mirror, with gpgcheck turned off.)

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/x86_64/stable
enabled=1
gpgcheck=0
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg
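The repo setup above can be scripted roughly as follows; the download path and the sed substitution assume the TUNA mirror's layout, so adjust them for whichever mirror you use:

```shell
# Fetch the upstream docker-ce repo file from the mirror.
curl -fsSL -o /etc/yum.repos.d/docker-ce.repo \
    https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo

# The file as shipped still points at download.docker.com;
# rewrite every URL to go through the mirror instead.
sed -i 's|https://download.docker.com|https://mirrors.tuna.tsinghua.edu.cn/docker-ce|g' \
    /etc/yum.repos.d/docker-ce.repo
```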
Kubernetes repository

We use the Kubernetes repo to install kubeadm, kubelet, and kubectl:
kubeadm: the tool itself, i.e. one way of installing Kubernetes
kubelet: the primary node agent; it reports node status and manages the lifecycle of pods on the node
kubectl: the Kubernetes command-line client.
Alibaba Cloud's mirror is used here; it seems to be about the only domestic mirror that carries this repo.

[kubernetes]
name=kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
Host preparation

OS: CentOS Linux release 7.4.1708 (Core)
Kernel: 3.10.0-693.el7.x86_64
Hosts: KVM virtual machines with 1 GB RAM each; rather sluggish, allocate more next time
Upstream recommends disabling swap to avoid all sorts of problems, but I left it on since this is only a test cluster
Stop firewalld and disable it at boot
Disable SELinux
Add hostname entries for all nodes to /etc/hosts
Add the following three lines to /etc/sysctl.conf and apply them with sysctl -p; if that errors, run modprobe br_netfilter first, then sysctl -p again

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
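A sketch of the preparation steps above as commands (run as root on every node; the hostnames and addresses are the ones used later in this article):

```shell
# Stop firewalld now and keep it off after reboots
systemctl disable --now firewalld

# Disable SELinux for the current boot and for future boots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Hostname resolution for all nodes
cat >> /etc/hosts <<'EOF'
192.168.122.12 node2
192.168.122.13 node3
192.168.122.14 node4
EOF

# Make bridged traffic visible to iptables, then apply
modprobe br_netfilter
cat >> /etc/sysctl.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
sysctl -p
```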
Installing docker-ce, kubeadm, kubelet, and kubectl

kubeadm's installation model is roughly: first install kubeadm and kubelet on the master, then use those two to run every other component as a pod. Note that the Docker images those pods use are all hosted on Google's registry, so they cannot be pulled over an ordinary connection; we will get to the workaround shortly.

[root@node2 ~]# yum install -y docker-ce kubeadm kubelet kubectl 
[root@node2 ~]# systemctl start docker
[root@node2 ~]# systemctl enable docker kubelet 
[root@node2 ~]# rpm -ql  kubelet
/etc/kubernetes/manifests   ## static pod manifest directory
/etc/sysconfig/kubelet ## configuration file
/etc/systemd/system/kubelet.service  ## systemd unit file
/usr/bin/kubelet  ## main binary

Since I left the swap partition enabled, one configuration file needs to be changed:

KUBELET_EXTRA_ARGS=--fail-swap-on=false

This removes the requirement that swap be disabled.
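Concretely, on CentOS this variable lives in /etc/sysconfig/kubelet (listed by rpm -ql above), so the override can be written as:

```shell
# Tell kubelet not to refuse to start when swap is enabled
echo 'KUBELET_EXTRA_ARGS=--fail-swap-on=false' > /etc/sysconfig/kubelet
```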
Now initialize the cluster's master node:

[root@node2 ~]# kubeadm init --help
Run this command in order to set up the Kubernetes master.

Usage:
  kubeadm init [flags]

Flags:
      --apiserver-advertise-address string   The IP address the API Server will advertise it's listening on. Specify '0.0.0.0' to use the address of the default network interface.
      --apiserver-bind-port int32            Port for the API Server to bind to. (default 6443)
      --apiserver-cert-extra-sans strings    Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
      --cert-dir string                      The path where to save and store the certificates. (default "/etc/kubernetes/pki")
      --config string                        Path to kubeadm config file. WARNING: Usage of a configuration file is experimental.
      --cri-socket string                    Specify the CRI socket to connect to. (default "/var/run/dockershim.sock")
      --dry-run                              Don't apply any changes; just output what would be done.
      --feature-gates string                 A set of key=value pairs that describe feature gates for various features. Options are:
                                             Auditing=true|false (ALPHA - default=false)
                                             CoreDNS=true|false (default=true)
                                             DynamicKubeletConfig=true|false (BETA - default=false)
  -h, --help                                 help for init
      --ignore-preflight-errors strings      A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --kubernetes-version string            Choose a specific Kubernetes version for the control plane. (default "stable-1")
      --node-name string                     Specify the node name.
      --pod-network-cidr string              Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
      --service-cidr string                  Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")
      --service-dns-domain string            Use alternative domain for services, e.g. "myorg.internal". (default "cluster.local")
      --skip-token-print                     Skip printing of the default bootstrap token generated by 'kubeadm init'.
      --token string                         The token to use for establishing bidirectional trust between nodes and masters. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
      --token-ttl duration                   The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire (default 24h0m0s)

Global Flags:
      --rootfs string   [EXPERIMENTAL] The path to the 'real' host root filesystem.
  -v, --v Level         log level for V logs

As you can see there are not many options, and for most of them the defaults are fine.

kubeadm init  --pod-network-cidr=10.244.0.0/16  --apiserver-advertise-address=0.0.0.0 --ignore-preflight-errors=Swap --ignore-preflight-errors=SystemVerification

Note that the Swap and SystemVerification preflight checks have to be ignored. At this point kubelet starts pulling the Docker images it needs from Google's registry, and without access to that registry the pull fails, so people generally go through a proxy here. I used my own Shadowsocks global proxy; it was a hassle, but the images did come down.
Then it is a matter of waiting for the images to be pulled.
The output shows that besides the core Kubernetes components, a coredns pod and an etcd pod are also started,
and it tells you to copy a config file and prints a kubeadm join command (this line is important; copy it and save it somewhere):

kubeadm join 192.168.122.12:6443 --token hea07c.l30j7up9t4p2pm8d --discovery-token-ca-cert-hash sha256:b2adb42a6292b42f39136914fba8a1dfb7fa8f9d4fbcc3d37ba7fc5feb7551e7
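The file it asks you to copy is the admin kubeconfig; the post-init steps kubeadm prints look roughly like this (run as the user who will operate kubectl):

```shell
# Point kubectl at the new cluster using the admin credentials
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```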

Once the pull succeeds, listing the Docker images shows the following:

[root@node2 ~]# docker image ls
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.12.2             15e9da1ca195        3 weeks ago         96.5MB    ##in theory this shouldn't be here, but for some reason it is
k8s.gcr.io/kube-apiserver            v1.12.2             51a9c329b7c5        3 weeks ago         194MB
k8s.gcr.io/kube-controller-manager   v1.12.2             15548c720a70        3 weeks ago         164MB
k8s.gcr.io/kube-scheduler            v1.12.2             d6d57c76136c        3 weeks ago         58.3MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        7 weeks ago         220MB  ##essentially the cluster's database
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        2 months ago        39.2MB  ###the cluster's internal DNS
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        10 months ago       742kB ###a very important base image
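As an aside, if a proxy is not available, a common workaround (not the route taken in this article) is to pull the same images from Alibaba Cloud's google_containers mirror and retag them to the k8s.gcr.io names kubelet expects. The tags below match the image list above; the mirror namespace is an assumption and may have moved:

```shell
# Pull each image from the mirror, retag it as k8s.gcr.io/..., drop the mirror tag
MIRROR=registry.aliyuncs.com/google_containers
for img in kube-apiserver:v1.12.2 kube-controller-manager:v1.12.2 \
           kube-scheduler:v1.12.2 kube-proxy:v1.12.2 \
           etcd:3.2.24 coredns:1.2.2 pause:3.1; do
    docker pull "$MIRROR/$img"
    docker tag  "$MIRROR/$img" "k8s.gcr.io/$img"
    docker rmi  "$MIRROR/$img"
done
```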

List the running containers; there are quite a few:

[root@node2 ~]# docker ps
CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS              PORTS                                            NAMES
a89801c926b6        367cdc8433a4                       "/coredns -conf /etc…"   2 hours ago         Up 2 hours                                                           k8s_coredns_coredns-576cbf47c7-sf95g_kube-system_f7495d12-e7f5-11e8-9e77-525400b43f1e_3
369b3b74050f        367cdc8433a4                       "/coredns -conf /etc…"   2 hours ago         Up 2 hours                                                           k8s_coredns_coredns-576cbf47c7-fv6k5_kube-system_f724b32a-e7f5-11e8-9e77-525400b43f1e_3
45e2180246da        k8s.gcr.io/pause:3.1               "/pause"                 2 hours ago         Up 2 hours                                                           k8s_POD_coredns-576cbf47c7-sf95g_kube-system_f7495d12-e7f5-11e8-9e77-525400b43f1e_7
ea7e754923f1        k8s.gcr.io/pause:3.1               "/pause"                 2 hours ago         Up 2 hours                                                           k8s_POD_coredns-576cbf47c7-fv6k5_kube-system_f724b32a-e7f5-11e8-9e77-525400b43f1e_8
ad72d570ce93        15e9da1ca195                       "/usr/local/bin/kube…"   2 hours ago         Up 2 hours                                                           k8s_kube-proxy_kube-proxy-s8wgp_kube-system_f73d90c2-e7f5-11e8-9e77-525400b43f1e_4
0ead1107587f        k8s.gcr.io/pause:3.1               "/pause"                 2 hours ago         Up 2 hours                                                           k8s_POD_kube-proxy-s8wgp_kube-system_f73d90c2-e7f5-11e8-9e77-525400b43f1e_4
5c89657c2ad3        k8s.gcr.io/pause:3.1               "/pause"                 2 hours ago         Up 2 hours                                                           k8s_POD_kube-flannel-ds-amd64-h55kz_kube-system_3fef9d67-e7f8-11e8-9e77-525400b43f1e_4
4cc234320203        51a9c329b7c5                       "kube-apiserver --au…"   2 hours ago         Up 2 hours                                                           k8s_kube-apiserver_kube-apiserver-node2_kube-system_7d43f56d770b50dfc0f979605707fcdf_6
722c59bd7005        d6d57c76136c                       "kube-scheduler --ad…"   2 hours ago         Up 2 hours                                                           k8s_kube-scheduler_kube-scheduler-node2_kube-system_ee7b1077c61516320f4273309e9b4690_6
7faf133870cf        15548c720a70                       "kube-controller-man…"   2 hours ago         Up 2 hours                                                           k8s_kube-controller-manager_kube-controller-manager-node2_kube-system_f19ad71fa7d45949d1d3547f3ebe8636_6
3698ae8a02d3        3cab8e1b9802                       "etcd --advertise-cl…"   2 hours ago         Up 2 hours                                                           k8s_etcd_etcd-node2_kube-system_3e9075c7b35675029d6b1eebf05295c1_6
d0d34eacb232        k8s.gcr.io/pause:3.1               "/pause"                 2 hours ago         Up 2 hours                                                           k8s_POD_kube-scheduler-node2_kube-system_ee7b1077c61516320f4273309e9b4690_4
9de145432cdd        k8s.gcr.io/pause:3.1               "/pause"                 2 hours ago         Up 2 hours                                                           k8s_POD_kube-controller-manager-node2_kube-system_f19ad71fa7d45949d1d3547f3ebe8636_4
256e31cc3b66        k8s.gcr.io/pause:3.1               "/pause"                 2 hours ago         Up 2 hours                                                           k8s_POD_kube-apiserver-node2_kube-system_7d43f56d770b50dfc0f979605707fcdf_4
b9f115543bc4        k8s.gcr.io/pause:3.1               "/pause"                 2 hours ago         Up 2 hours                                                           k8s_POD_etcd-node2_kube-system_3e9075c7b35675029d6b1eebf05295c1_4

Check the pod status. Apart from the restart counts and ages looking a bit off (the cluster was built the previous afternoon and this article written the next), everything is fine:

[root@node2 ~]# kubectl get pods -n kube-system -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE
coredns-576cbf47c7-fv6k5        1/1     Running   3          22h   10.244.0.9       node2   <none>
coredns-576cbf47c7-sf95g        1/1     Running   3          22h   10.244.0.8       node2   <none>
etcd-node2                      1/1     Running   6          22h   192.168.122.12   node2   <none>
kube-apiserver-node2            1/1     Running   6          22h   192.168.122.12   node2   <none>
kube-controller-manager-node2   1/1     Running   6          22h   192.168.122.12   node2   <none>
kube-proxy-s8wgp                1/1     Running   4          22h   192.168.122.12   node2   <none>
kube-scheduler-node2            1/1     Running   6          22h   192.168.122.12   node2   <none>

Check the node status:

[root@node2 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node2   NotReady    master   22h   v1.12.2

At this point all the Kubernetes components are present and running on node2, yet the master's status is NotReady. That is because a network add-on is missing; without one the pods cannot communicate and the node cannot become Ready. Here we use flannel, which can be found on GitHub and is very easy to set up.
https://github.com/coreos/flannel
The README itself says as much, so applying it really is simple:
Flannel can be added to any existing Kubernetes cluster though it’s simplest to add flannel before any pods using the pod network have been started.

For Kubernetes v1.7+ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

See Kubernetes for more details.
So in the terminal, run:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This automatically downloads the image and runs it as a pod. Once it is downloaded and running, it shows up in the Docker images, in docker ps, and in the Kubernetes pod list. Now check the node status again:

[root@node2 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node2   Ready    master   22h   v1.12.2

Then prepare to initialize the worker nodes:

[root@node2 ~]# yum install -y docker-ce kubeadm kubelet
[root@node2 ~]# systemctl start docker
[root@node2 ~]# systemctl enable docker kubelet 

Now use the join command printed earlier. Note that the kubelet configuration file (the swap override above) has to be copied to the worker as well. The join is a two-step process: the node first contacts 192.168.122.12 on port 6443 to fetch its bootstrap manifest, then pulls the required images from Google's registry, so the same proxy is needed, and the Swap and SystemVerification checks again have to be ignored.

kubeadm join 192.168.122.12:6443 --token hea07c.l30j7up9t4p2pm8d --discovery-token-ca-cert-hash sha256:b2adb42a6292b42f39136914fba8a1dfb7fa8f9d4fbcc3d37ba7fc5feb7551e7  --ignore-preflight-errors=Swap --ignore-preflight-errors=SystemVerification

When it finishes, additional pods appear on this node too; these two run on node3:

kube-flannel-ds-amd64-zdt9t     1/1     Running   4          21h   192.168.122.13   node3   <none>
kube-proxy-4fq5w                1/1     Running   3          21h   192.168.122.13   node3   <none>

Join node4 to the master in the same way. Once all the pods are up, check everything from the master side.
Check the node status:

[root@node2 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node2   Ready    master   22h   v1.12.2
node3   Ready    <none>   21h   v1.12.2
node4   Ready    <none>   21h   v1.12.2

Check the namespaces:

[root@node2 ~]# kubectl get ns
NAME          STATUS   AGE
default       Active   22h
kube-public   Active   22h
kube-system   Active   22h

Check the pods:

[root@node2 ~]# kubectl get pods -n kube-system -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE
coredns-576cbf47c7-fv6k5        1/1     Running   3          22h   10.244.0.9       node2   <none>
coredns-576cbf47c7-sf95g        1/1     Running   3          22h   10.244.0.8       node2   <none>
etcd-node2                      1/1     Running   6          22h   192.168.122.12   node2   <none>
kube-apiserver-node2            1/1     Running   6          22h   192.168.122.12   node2   <none>
kube-controller-manager-node2   1/1     Running   6          22h   192.168.122.12   node2   <none>
kube-flannel-ds-amd64-fdrtv     1/1     Running   6          21h   192.168.122.14   node4   <none>
kube-flannel-ds-amd64-h55kz     1/1     Running   4          22h   192.168.122.12   node2   <none>
kube-flannel-ds-amd64-zdt9t     1/1     Running   4          21h   192.168.122.13   node3   <none>
kube-proxy-4fq5w                1/1     Running   3          21h   192.168.122.13   node3   <none>
kube-proxy-s8wgp                1/1     Running   4          22h   192.168.122.12   node2   <none>
kube-proxy-x7znw                1/1     Running   1          21h   192.168.122.14   node4   <none>
kube-scheduler-node2            1/1     Running   6          22h   192.168.122.12   node2   <none>

Inspect node2 (very detailed output):

[root@node2 ~]# kubectl describe node node2
Name:               node2
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=node2
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"1e:78:5b:12:6d:06"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.122.12
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 14 Nov 2018 18:13:28 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 15 Nov 2018 16:59:17 +0800   Wed, 14 Nov 2018 18:13:28 +0800   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 15 Nov 2018 16:59:17 +0800   Wed, 14 Nov 2018 18:13:28 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 15 Nov 2018 16:59:17 +0800   Wed, 14 Nov 2018 18:13:28 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 15 Nov 2018 16:59:17 +0800   Wed, 14 Nov 2018 18:13:28 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 15 Nov 2018 16:59:17 +0800   Wed, 14 Nov 2018 18:30:41 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.122.12
  Hostname:    node2
Capacity:
 cpu:                1
 ephemeral-storage:  7254Mi
 hugepages-2Mi:      0
 memory:             1016260Ki
 pods:               110
Allocatable:
 cpu:                1
 ephemeral-storage:  6845733263
 hugepages-2Mi:      0
 memory:             913860Ki
 pods:               110
System Info:
 Machine ID:                 4eec2011bf49440ca058061b0e2a3aca
 System UUID:                CB46F587-48E6-4670-8C2C-5BDEB191EF94
 Boot ID:                    1959c6ae-7000-446d-a728-e7f70c90ea81
 Kernel Version:             3.10.0-693.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.0
 Kubelet Version:            v1.12.2
 Kube-Proxy Version:         v1.12.2
PodCIDR:                     10.244.0.0/24
Non-terminated Pods:         (8 in total)
  Namespace                  Name                             CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                             ------------  ----------  ---------------  -------------
  kube-system                coredns-576cbf47c7-fv6k5         100m (10%)    0 (0%)      70Mi (7%)        170Mi (19%)
  kube-system                coredns-576cbf47c7-sf95g         100m (10%)    0 (0%)      70Mi (7%)        170Mi (19%)
  kube-system                etcd-node2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-node2             250m (25%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-node2    200m (20%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-flannel-ds-amd64-h55kz      100m (10%)    100m (10%)  50Mi (5%)        50Mi (5%)
  kube-system                kube-proxy-s8wgp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-node2             100m (10%)    0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests     Limits
  --------  --------     ------
  cpu       850m (85%)   100m (10%)
  memory    190Mi (21%)  390Mi (43%)
Events:     <none>

Inspect a pod in detail:

[root@node2 ~]# kubectl describe pod -n kube-system kube-apiserver-node2
Name:               kube-apiserver-node2
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               node2/192.168.122.12
Start Time:         Thu, 15 Nov 2018 13:57:34 +0800
Labels:             component=kube-apiserver
                    tier=control-plane
Annotations:        kubernetes.io/config.hash: 7d43f56d770b50dfc0f979605707fcdf
                    kubernetes.io/config.mirror: 7d43f56d770b50dfc0f979605707fcdf
                    kubernetes.io/config.seen: 2018-11-14T18:13:05.944111167+08:00
                    kubernetes.io/config.source: file
                    scheduler.alpha.kubernetes.io/critical-pod: 
Status:             Running
IP:                 192.168.122.12
Containers:
  kube-apiserver:
    Container ID:  docker://4cc234320203b407b0215623765701e44b865e2efe52bd1f9cda00eb535392b3
    Image:         k8s.gcr.io/kube-apiserver:v1.12.2
    Image ID:      docker-pullable://k8s.gcr.io/kube-apiserver@sha256:094929baf3a7681945d83a7654b3248e586b20506e28526121f50eb359cee44f
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --authorization-mode=Node,RBAC
      --advertise-address=192.168.122.12
      --allow-privileged=true
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --enable-admission-plugins=NodeRestriction
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
      --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
      --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --insecure-port=0
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=6443
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    State:          Running
      Started:      Thu, 15 Nov 2018 14:26:12 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 15 Nov 2018 14:24:58 +0800
      Finished:     Thu, 15 Nov 2018 14:24:59 +0800
    Ready:          True
    Restart Count:  6
    Requests:
      cpu:        250m
    Liveness:     http-get https://192.168.122.12:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/pki from etc-pki (ro)
      /etc/ssl/certs from ca-certs (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-pki:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/pki
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>