Deploying Kubernetes with kubeadm

Ways to deploy K8s:
1. Binary install / yum install
2. kubeadm install

Components installed on the master node:
docker, kubelet, kubeadm
kube-proxy runs as a pod managed by k8s
kube-apiserver, kube-controller-manager, and etcd run as static pods

Components on the worker nodes:
docker, kubelet
kube-proxy runs as a pod managed by k8s
flannel runs as a pod managed by k8s

All the other components run as containers under docker.

Environment:
master: 192.168.64.100
node1: 192.168.64.101

Prerequisites on all nodes:
1. Hostname-based communication (e.g. via /etc/hosts)
2. Time synchronization
3. Firewall disabled
4. Swap disabled: swapoff -a && sysctl -w vm.swappiness=0
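`swapoff -a` only disables swap until the next reboot; the kubelet will refuse to start again if swap comes back. A small sketch of the persistent part (the `disable_swap_in_fstab` helper is a name chosen here for illustration), with the other one-off prerequisite commands shown as comments:

```shell
# Comment out any active swap entry in an fstab-format file, so swap
# stays off after a reboot (the kubelet refuses to run with swap enabled).
disable_swap_in_fstab() {
  sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$1"
}

# On every node, as root:
#   swapoff -a                          # prerequisite 4: swap off right now
#   disable_swap_in_fstab /etc/fstab    # keep swap off after reboot
#   systemctl disable --now firewalld   # prerequisite 3: firewall off
#   systemctl enable --now chronyd      # prerequisite 2: time sync
```

For prerequisite 1, adding both nodes' names and IPs to /etc/hosts on every machine is enough for a lab of this size.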

1. On both the master and the nodes, install the packages:
yum install docker-ce kubelet kubeadm kubectl -y
2. Configure a proxy for docker image pulls (the k8s.gcr.io control-plane images may not be reachable directly), then start the docker service.

Edit /usr/lib/systemd/system/docker.service and add the proxy variables under the [Service] section:

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues
# still exist and systemd currently does not support the cgroup feature set
# required for containers run by docker
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8"
Reload systemd and restart docker:

systemctl daemon-reload
systemctl restart docker
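If no proxy is available, an alternative for the Docker Hub portion of the images is a registry mirror in /etc/docker/daemon.json. This is only a sketch: `registry-mirrors` is a standard dockerd option, but the mirror URL below is just one that was commonly cited at the time of this setup, and a mirror does not help with k8s.gcr.io images, which still need a proxy or a manual pre-pull.

```
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
```

Restart docker after editing this file.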
To make sure bridged traffic is processed by iptables:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
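Values written with echo into /proc do not survive a reboot. To make these settings persistent, they can be placed in a sysctl fragment (the file name below is an arbitrary choice; note the settings take effect only while the br_netfilter module is loaded):

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

Apply it immediately with `sysctl --system`.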
3. Enable kubelet at boot, but do not start it yet — starting it now would fail because the cluster configuration does not exist until kubeadm init runs. Once it is enabled, kubeadm activates it during initialization:
systemctl enable kubelet
The kubelet is the node agent that creates and manages the docker containers; if it is not running, no containers can be created on the node.

4. Initialize the cluster with kubeadm:
[root@master ~]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
I0815 05:28:52.664180 13276 kernel_validator.go:81] Validating kernel version
I0815 05:28:52.664344 13276 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.64.100]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.64.100 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 46.002420 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: ji0mr8.2tarxkicqj7mvj1h
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.64.100:6443 --token ji0mr8.2tarxkicqj7mvj1h --discovery-token-ca-cert-hash sha256:88ab312ef69e359c83ce5da9f50540df03bb62302ec19077a207b07b1b283f0e
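The join token printed above expires (24 hours by default). If it has expired before a node joins, a fresh join command can be generated on the master with `kubeadm token create --print-join-command`. The --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA certificate; a sketch (/etc/kubernetes/pki/ca.crt is the kubeadm default path, and `ca_cert_hash` is a helper name chosen here):

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA certificate.
# Usage: ca_cert_hash /etc/kubernetes/pki/ca.crt
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print $NF}'      # keep only the hex digest
}
```

Prefix the result with "sha256:" when passing it to kubeadm join.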

After initialization completes, check which docker containers are running:
[root@master ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c2d6290ec36 d5c25579d0ff "/usr/local/bin/kube…" About an hour ago Up About an hour k8s_kube-proxy_kube-proxy-x85p7_kube-system_18ba6bee-a00a-11e8-8a31-000c29f28532_0
e64f80454d2c k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-proxy-x85p7_kube-system_18ba6bee-a00a-11e8-8a31-000c29f28532_0
4c3dd0740462 52096ee87d0e "kube-controller-man…" About an hour ago Up About an hour k8s_kube-controller-manager_kube-controller-manager-master_kube-system_a9f354204abfbec0f4838f05b851e86e_0
be122d8ac8af 816332bd9d11 "kube-apiserver --au…" About an hour ago Up About an hour k8s_kube-apiserver_kube-apiserver-master_kube-system_ff9f8fddc99416bd4092d2fc87eb0994_0
8bd6f3452b82 b8df3b177be2 "etcd --advertise-cl…" About an hour ago Up About an hour k8s_etcd_etcd-master_kube-system_2cc1c8a24b68ab9b46bca47e153e74c6_0
5adeab822135 272b3a60cd68 "kube-scheduler --ad…" About an hour ago Up About an hour k8s_kube-scheduler_kube-scheduler-master_kube-system_537879acc30dd5eff5497cb2720a6d64_0
2caa9d86c767 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-controller-manager-master_kube-system_a9f354204abfbec0f4838f05b851e86e_0
bf4b523bf32a k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-master_kube-system_ff9f8fddc99416bd4092d2fc87eb0994_0
9e8cda639547 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-master_kube-system_537879acc30dd5eff5497cb2720a6d64_0
c1df45d8257c k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_etcd-master_kube-system_2cc1c8a24b68ab9b46bca47e153e74c6_0
[root@master ~]#
As the output instructs, set up the kubectl config for the current user:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status on the master:
[root@master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 1h v1.11.2
The node's status is NotReady because the flannel pod network has not been deployed yet.

Deploy flannel:
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

## flannel upstream: https://github.com/coreos/flannel
Wait a moment, then list the images:
[root@master ~]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy-amd64 v1.11.1 d5c25579d0ff 4 weeks ago 97.8MB
k8s.gcr.io/kube-scheduler-amd64 v1.11.1 272b3a60cd68 4 weeks ago 56.8MB
k8s.gcr.io/kube-apiserver-amd64 v1.11.1 816332bd9d11 4 weeks ago 187MB
k8s.gcr.io/kube-controller-manager-amd64 v1.11.1 52096ee87d0e 4 weeks ago 155MB
k8s.gcr.io/coredns 1.1.3 b3b94275d97c 2 months ago 45.6MB
k8s.gcr.io/etcd-amd64 3.2.18 b8df3b177be2 4 months ago 219MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 6 months ago 44.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 7 months ago 742kB

The images have now been pulled locally; check the node status again:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 1h v1.11.2

List the pods in the kube-system namespace:
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-g4rx9 1/1 Running 0 1h
coredns-78fcdf6894-mcqnr 1/1 Running 0 1h
etcd-master 1/1 Running 0 3m
kube-apiserver-master 1/1 Running 0 3m
kube-controller-manager-master 1/1 Running 0 3m
kube-flannel-ds-amd64-qdl9j 1/1 Running 0 5m
kube-proxy-x85p7 1/1 Running 0 1h
kube-scheduler-master 1/1 Running 0 3m

List the namespaces:
[root@master ~]# kubectl get ns
NAME STATUS AGE
default Active 1h
kube-public Active 1h
kube-system Active 1h

Join node1 to the cluster:
[root@node1 ~]# kubeadm join 192.168.64.100:6443 --token ji0mr8.2tarxkicqj7mvj1h --discovery-token-ca-cert-hash sha256:88ab312ef69e359c83ce5da9f50540df03bb62302ec19077a207b07b1b283f0e
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:

1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
I0815 07:14:04.604244 16656 kernel_validator.go:81] Validating kernel version
I0815 07:14:04.604390 16656 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "192.168.64.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.64.100:6443"
[discovery] Requesting info from "https://192.168.64.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.64.100:6443"
[discovery] Successfully established connection with API Server "192.168.64.100:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:

* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@node1 ~]#
Now check the cluster status on the master:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 1h v1.11.2
node1 NotReady <none> 18s v1.11.2
node1 is now pulling images and starting its containers; wait a moment:
[root@node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
88daba6559b2 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-proxy-q4bqm_kube-system_bef1838b-a017-11e8-8a31-000c29f28532_0
4f81a08bdac0 k8s.gcr.io/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-flannel-ds-amd64-rk8sp_kube-system_bef18c8c-a017-11e8-8a31-000c29f28532_0
[root@node1 ~]#

Check the cluster state on the master again:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 1h v1.11.2
node1 Ready <none> 2m v1.11.2
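The NotReady-to-Ready transition can take a few minutes per node, so waiting for it is easy to script. A small sketch (the `all_nodes_ready` helper is a name chosen here; it only parses the `kubectl get nodes` table shown above):

```shell
# Succeeds (exit 0) only when every node in `kubectl get nodes` output
# reports STATUS "Ready". Reads the table on stdin; skips the header row.
all_nodes_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}

# Example polling loop (assumes kubectl is configured, e.g. on the master):
# until kubectl get nodes | all_nodes_ready; do sleep 5; done
```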

Reposted from: https://blog.51cto.com/shyln/2160116
