Setting up a Docker-based Kubernetes (k8s) cluster

[root@server1 k8s]# yum install -y kubeadm-1.12.2-0.x86_64.rpm kubelet-1.12.2-0.x86_64.rpm kubectl-1.12.2-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm cri-tools-1.12.0-0.x86_64.rpm 

Disable the swap partition

[root@server1 rpm]# swapoff -a
[root@server1 rpm]# vim /etc/fstab 
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@server1 rpm]# systemctl enable kubelet.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@server2 k8s]# swapoff -a
[root@server2 k8s]# vim /etc/fstab 
Comment out the swap entry
[root@server2 k8s]# systemctl enable kubelet
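
The fstab edit above can also be done non-interactively. A minimal sketch of a sed pattern that comments out the swap line, demonstrated here on a sample entry rather than on the live file:

```shell
# On the real system this would target /etc/fstab (run as root,
# keeps a backup in /etc/fstab.bak):
#   sed -i.bak '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
# Demonstration of the pattern on a sample fstab line; it matches any
# line whose fields include "swap" and prefixes it with '#':
echo '/dev/mapper/rhel-swap   swap   swap   defaults   0 0' \
  | sed '/\sswap\s/ s/^[^#]/#&/'
```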

List the images kubeadm requires

[root@server1 k8s]# kubeadm config images list
I0529 20:14:24.882048    8689 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0529 20:14:24.882095    8689 version.go:94] falling back to the local client version: v1.12.2
k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2
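
The transcript above shows that k8s.gcr.io is unreachable from this host, which is why the images are loaded from tar archives below. An alternative, if a mirror registry is reachable, is to pull the same tags from the mirror and retag them. A sketch that only prints the commands for review (the mirror prefix is a placeholder; substitute one you can actually reach):

```shell
# Hypothetical mirror prefix -- replace with a registry you can reach
MIRROR=registry.example.com/google_containers
# Image list taken from 'kubeadm config images list' above
for img in kube-apiserver:v1.12.2 kube-controller-manager:v1.12.2 \
           kube-scheduler:v1.12.2 kube-proxy:v1.12.2 \
           pause:3.1 etcd:3.2.24 coredns:1.2.2; do
  # Print the pull/retag commands instead of running them,
  # so the list can be checked before execution
  echo "docker pull $MIRROR/$img"
  echo "docker tag  $MIRROR/$img k8s.gcr.io/$img"
done
```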

Load the images

[root@server1 k8s]# docker load -i kube-apiserver.tar 
8a788232037e: Loading layer   1.37MB/1.37MB
507564533658: Loading layer  192.8MB/192.8MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.12.2
[root@server1 k8s]# docker load -i kube-controller-manager.tar 
0faf148c8565: Loading layer    163MB/163MB
Loaded image: k8s.gcr.io/kube-controller-manager:v1.12.2
[root@server1 k8s]# docker load -i kube-proxy.tar 
0c1604b64aed: Loading layer   44.6MB/44.6MB
dc6f419d40a2: Loading layer  3.407MB/3.407MB
2d9b7a4a23dd: Loading layer  50.33MB/50.33MB
Loaded image: k8s.gcr.io/kube-proxy:v1.12.2
[root@server1 k8s]# docker load -i pause.tar 
e17133b79956: Loading layer  744.4kB/744.4kB
Loaded image: k8s.gcr.io/pause:3.1
[root@server1 k8s]# docker load -i etcd.tar 
f9d9e4e6e2f0: Loading layer  1.378MB/1.378MB
7882cc107ed3: Loading layer  195.1MB/195.1MB
43f7b6974634: Loading layer  23.45MB/23.45MB
Loaded image: k8s.gcr.io/etcd:3.2.24
[root@server1 k8s]# docker load -i coredns.tar 
9198eadacc0a: Loading layer  542.2kB/542.2kB
9949e50e3468: Loading layer  38.94MB/38.94MB
Loaded image: k8s.gcr.io/coredns:1.2.2
[root@server1 k8s]# docker load -i kube-scheduler.tar 
0d5e977176bb: Loading layer  57.19MB/57.19MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.12.2
[root@server1 k8s]# docker load -i flannel.tar 
cd7100a72410: Loading layer  4.403MB/4.403MB
3b6c03b8ad66: Loading layer  4.385MB/4.385MB
93b0fa7f0802: Loading layer  158.2kB/158.2kB
4165b2148f36: Loading layer  36.33MB/36.33MB
b883fd48bb96: Loading layer   5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.10.0-amd64

Repeat the same image-loading steps on server2.

Initialize the cluster

[root@server1 k8s]# vim kube-flannel.yml 
[root@server1 k8s]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.25.76.1

If you hit this error:

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

Fix:

[root@server1 k8s]# sysctl -a | grep net.*iptables
net.bridge.bridge-nf-call-iptables = 0
[root@server1 k8s]# sysctl -w net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
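
`sysctl -w` only changes the running kernel, so the setting is lost on reboot. One way to persist it is a drop-in under /etc/sysctl.d (a sketch; the file name `k8s.conf` is arbitrary):

```shell
# Persist the bridge netfilter settings across reboots (run as root)
mkdir -p /etc/sysctl.d
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# These keys only exist while br_netfilter is loaded, so on a real
# host also load the module and re-read the config:
#   modprobe br_netfilter
#   sysctl --system
```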

Create a k8s user and set up its environment

[root@server1 k8s]# useradd k8s
[root@server1 k8s]# vim /etc/sudoers
k8s     ALL=(ALL)       NOPASSWD:ALL
[root@server1 k8s]# su - k8s
[k8s@server1 ~]$ mkdir -p $HOME/.kube
[k8s@server1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8s@server1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Enable kubectl tab completion

[k8s@server1 ~]$ echo "source <(kubectl completion bash)" >> ./.bashrc
[k8s@server1 ~]$ logout
[root@server1 k8s]# su - k8s
Last login: Wed May 29 21:38:53 CST 2019 on pts/0
[k8s@server1 ~]$ kubectl 
[root@server1 k8s]# cp kube-flannel.yml /home/k8s/
[root@server1 k8s]# su - k8s 
Last login: Wed May 29 20:39:17 CST 2019 on pts/0
[k8s@server1 ~]$ kubectl apply -f kube-flannel.yml 
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Check the running containers

[k8s@server1 ~]$ sudo docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
322bf9c26387        k8s.gcr.io/pause:3.1   "/pause"                 5 seconds ago       Up 2 seconds                            k8s_POD_coredns-576cbf47c7-6thn7_kube-system_cf97197e-820c-11e9-8f2f-5254007b768a_0
1597d386fd20        k8s.gcr.io/pause:3.1   "/pause"                 5 seconds ago       Up 3 seconds                            k8s_POD_coredns-576cbf47c7-j7h7m_kube-system_cfbb0884-820c-11e9-8f2f-5254007b768a_0
0ef7a47c15f8        f0fad859c909           "/opt/bin/flanneld -…"   9 seconds ago       Up 8 seconds                            k8s_kube-flannel_kube-flannel-ds-amd64-zfwzx_kube-system_0de55390-820f-11e9-8f2f-5254007b768a_0
f1886b7fc53c        k8s.gcr.io/pause:3.1   "/pause"                 13 seconds ago      Up 11 seconds                           k8s_POD_kube-flannel-ds-amd64-zfwzx_kube-system_0de55390-820f-11e9-8f2f-5254007b768a_0
18e30e4e092e        96eaf5076bfe           "/usr/local/bin/kube…"   16 minutes ago      Up 16 minutes                           k8s_kube-proxy_kube-proxy-8dkvj_kube-system_cf980b29-820c-11e9-8f2f-5254007b768a_0
e49ead492ac4        k8s.gcr.io/pause:3.1   "/pause"                 16 minutes ago      Up 16 minutes                           k8s_POD_kube-proxy-8dkvj_kube-system_cf980b29-820c-11e9-8f2f-5254007b768a_0
26d8b32cf73e        6e3fa7b29763           "kube-apiserver --au…"   17 minutes ago      Up 16 minutes                           k8s_kube-apiserver_kube-apiserver-server1_kube-system_72c8d114946f9c6b8ec4ce24886d38ad_0
9e37cc84f57f        a84dd4efbe5f           "kube-scheduler --ad…"   17 minutes ago      Up 16 minutes                           k8s_kube-scheduler_kube-scheduler-server1_kube-system_ee7b1077c61516320f4273309e9b4690_0
8dbbebb350fd        b9a2d5b91fd6           "kube-controller-man…"   17 minutes ago      Up 17 minutes                           k8s_kube-controller-manager_kube-controller-manager-server1_kube-system_f19ad71fa7d45949d1d3547f3ebe8636_0
6740b8475ebd        b57e69295df1           "etcd --advertise-cl…"   17 minutes ago      Up 17 minutes                           k8s_etcd_etcd-server1_kube-system_62901bda05af0d9d9b9185862b776eb8_0
0f9c4f8bd9d0        k8s.gcr.io/pause:3.1   "/pause"                 17 minutes ago      Up 17 minutes                           k8s_POD_kube-scheduler-server1_kube-system_ee7b1077c61516320f4273309e9b4690_0
b804fb5a6977        k8s.gcr.io/pause:3.1   "/pause"                 17 minutes ago      Up 17 minutes                           k8s_POD_kube-controller-manager-server1_kube-system_f19ad71fa7d45949d1d3547f3ebe8636_0
189048c72de7        k8s.gcr.io/pause:3.1   "/pause"                 17 minutes ago      Up 17 minutes                           k8s_POD_etcd-server1_kube-system_62901bda05af0d9d9b9185862b776eb8_0
e8fe75328f1b        k8s.gcr.io/pause:3.1   "/pause"                 17 minutes ago      Up 17 minutes                           k8s_POD_kube-apiserver-server1_kube-system_72c8d114946f9c6b8ec4ce24886d38ad_0

Join the cluster using the join command printed when the master was initialized

[root@server2 k8s]# modprobe ip_vs_sh
[root@server2 k8s]# modprobe ip_vs_wrr
[root@server2 k8s]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@server2 k8s]# kubeadm join 172.25.76.1:6443 --token 6ay0gy.recdfjt30p8wqhbz --discovery-token-ca-cert-hash sha256:f8489d173e9a36801c23786ab98c2d0835717a8e48a3ff66c054b02eedb5dc54
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

[discovery] Trying to connect to API Server "172.25.76.1:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.25.76.1:6443"
[discovery] Requesting info from "https://172.25.76.1:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.25.76.1:6443"
[discovery] Successfully established connection with API Server "172.25.76.1:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "server2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
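
The IPVS warning in the join output appears because only ip_vs_sh and ip_vs_wrr were loaded by hand. A sketch that loads the full set the preflight check names and persists it across reboots (module names taken from the warning above; run as root):

```shell
# Load every IPVS-related module kube-proxy's check asks for;
# ignore failures on kernels where a module is built in or absent
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  modprobe "$m" 2>/dev/null || echo "note: could not load $m"
done
# Persist the list so the modules are loaded on every boot
mkdir -p /etc/modules-load.d
printf '%s\n' ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4 \
  > /etc/modules-load.d/ipvs.conf
```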

On server1, check that the nodes are Ready

[k8s@server1 ~]$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
server1   Ready    master   11m   v1.12.2
server2   Ready    <none>   18s   v1.12.2

List the pods in all namespaces

[k8s@server1 ~]$ kubectl get pod --all-namespaces 
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-gc545          1/1     Running   0          12m
kube-system   coredns-576cbf47c7-wnk4k          1/1     Running   0          12m
kube-system   etcd-server1                      1/1     Running   0          12m
kube-system   kube-apiserver-server1            1/1     Running   0          12m
kube-system   kube-controller-manager-server1   1/1     Running   0          12m
kube-system   kube-flannel-ds-amd64-j29dt       1/1     Running   0          111s
kube-system   kube-flannel-ds-amd64-wzw4w       1/1     Running   0          3m32s
kube-system   kube-proxy-9hnfp                  1/1     Running   0          12m
kube-system   kube-proxy-tgt87                  1/1     Running   0          111s
kube-system   kube-scheduler-server1            1/1     Running   0          12m
