Setting Up a Kubernetes Cluster (1.18.2)
Lab Environment
Node | OS | Role |
---|---|---|
192.168.1.10 | CentOS 7.6 | Kubernetes master |
192.168.1.20 | CentOS 7.6 | Kubernetes node1 |
192.168.1.30 | CentOS 7.6 | Kubernetes node2 |
Components to Install
- kubelet: runs on every node; responsible for starting containers and Pods.
- kubeadm: bootstraps and initializes the cluster.
- kubectl: the Kubernetes command-line tool, used to deploy and manage applications and other resources.
Preparation
Enable passwordless SSH from the master
[root@localhost ~]# ssh-keygen
[root@localhost ~]# ssh-copy-id -i root@192.168.1.10
[root@localhost ~]# ssh-copy-id -i root@192.168.1.20
[root@localhost ~]# ssh-copy-id -i root@192.168.1.30
Configure hostnames on all nodes
[root@localhost ~]# vim /etc/hosts
192.168.1.10 k8smaster
192.168.1.20 k8snode1
192.168.1.30 k8snode2
[root@localhost ~]# scp /etc/hosts root@192.168.1.20:/etc/hosts
[root@localhost ~]# scp /etc/hosts root@192.168.1.30:/etc/hosts
master:
[root@localhost ~]# hostname k8smaster
[root@localhost ~]# bash
node1:
[root@localhost ~]# hostname k8snode1
[root@localhost ~]# bash
node2:
[root@localhost ~]# hostname k8snode2
[root@localhost ~]# bash
Note: `hostname` only sets the name until the next reboot; also run `hostnamectl set-hostname <name>` on each node if you want the change to persist.
Disable SELinux and firewalld, and enable Docker, on all nodes (Docker is assumed to be installed already)
[root@k8smaster ~]# vim /etc/selinux/config
SELINUX=disabled
[root@k8smaster ~]# scp /etc/selinux/config root@k8snode1:/etc/selinux/
[root@k8smaster ~]# scp /etc/selinux/config root@k8snode2:/etc/selinux/
[root@k8smaster ~]# setenforce 0
[root@k8smaster ~]# getenforce
Permissive
[root@k8smaster ~]# systemctl disable firewalld.service && systemctl stop firewalld.service && systemctl enable docker.service && systemctl restart docker.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Disable swap on all nodes
[root@k8smaster ~]# swapoff -a
[root@k8smaster ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
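Instead of commenting the swap entry by hand in vim, the same edit can be scripted with sed. A sketch, run here against a demo file so it can be dry-run safely; on a real node, apply the same sed to /etc/fstab:

```shell
# Demo copy; on a real node replace fstab.demo with /etc/fstab.
printf '/dev/mapper/centos-swap swap    swap    defaults        0 0\n' > fstab.demo
# Comment out any uncommented line that mounts a swap device.
sed -i.bak -E 's@^([^#].*[[:space:]]swap[[:space:]]+.*)@#\1@' fstab.demo
cat fstab.demo
```

Remember that `swapoff -a` disables swap immediately, while the fstab edit keeps it off after a reboot; both are needed.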
Load kernel modules on all nodes
[root@k8smaster ~]# modprobe ip_vs_rr
[root@k8smaster ~]# modprobe br_netfilter
[root@k8smaster ~]# lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter
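`modprobe` only loads a module for the current boot. To have these modules reloaded automatically after a reboot, they can be listed in a modules-load.d file; a sketch (the file name `k8s.conf` is an arbitrary choice):

```
# /etc/modules-load.d/k8s.conf
br_netfilter
ip_vs_rr
```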
Set sysctl parameters on all nodes (after copying the file, run `sysctl -p` on each node, not only on the master)
[root@k8smaster ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.swappiness = 0
[root@k8smaster ~]# scp /etc/sysctl.conf root@k8snode1:/etc/
[root@k8smaster ~]# scp /etc/sysctl.conf root@k8snode2:/etc/
[root@k8smaster ~]# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.swappiness = 0
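The same settings can be appended non-interactively instead of via vim. A sketch, demonstrated on a scratch file; on the nodes, set OUT to /etc/sysctl.conf and then run `sysctl -p`. Note that the `net.bridge.*` keys only exist once the br_netfilter module is loaded, so load the module first:

```shell
# Scratch file for the demo; on a real node use OUT=/etc/sysctl.conf.
OUT=sysctl.demo
cat >> "$OUT" <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.swappiness = 0
EOF
cat "$OUT"
```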
Install the EPEL repository on all nodes
[root@k8smaster ~]# yum -y install epel-release
Add the Aliyun Kubernetes yum repository on all nodes
[root@k8smaster ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
Verify the repository configuration
[root@k8smaster ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@k8smaster ~]# yum -y install kubelet kubeadm kubectl
Note: this installs the latest version in the repo; to match this guide exactly, pin the version with `yum -y install kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2`.
Enable and start the kubelet service on all nodes (until `kubeadm init`/`join` runs, kubelet restarts in a loop; this is expected)
[root@k8smaster ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Enable command completion on all nodes
[root@k8smaster ~]# source <(kubeadm completion bash)
[root@k8smaster ~]# source <(kubectl completion bash)
[root@k8smaster ~]# vim ~/.bashrc
source <(kubeadm completion bash)
source <(kubectl completion bash)
Check the version
[root@k8smaster ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:54:15Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Initialize the cluster on the master
[root@k8smaster ~]# kubeadm init --apiserver-advertise-address=192.168.1.10 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.2 --pod-network-cidr=10.244.0.0/16
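The same init flags can also be kept in a kubeadm configuration file, which is easier to track in version control. A sketch of the equivalent v1beta2 config (the file name kubeadm-config.yaml is arbitrary), applied with `kubeadm init --config kubeadm-config.yaml`:

```yaml
# kubeadm-config.yaml -- equivalent to the command-line flags above
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.10
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
```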
Configure kubectl credentials on the master
[root@k8smaster ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
This export only lasts for the current shell session; to make it permanent, add the line to ~/.bashrc or copy /etc/kubernetes/admin.conf to ~/.kube/config.
Deploy the flannel network on the master
[root@k8smaster ~]# wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
[root@k8smaster ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
If the file cannot be downloaded, there are two workarounds:
1. Use a proxy to reach GitHub.
2. Add GitHub entries to /etc/hosts (the method used below). Note that these hard-coded IPs change over time and may stop working.
[root@k8smaster ~]# vim /etc/hosts
# GitHub hosts
192.30.253.112 github.com
192.30.253.119 gist.github.com
151.101.184.133 assets-cdn.github.com
151.101.184.133 raw.githubusercontent.com
151.101.184.133 gist.githubusercontent.com
151.101.184.133 cloud.githubusercontent.com
151.101.184.133 camo.githubusercontent.com
151.101.184.133 avatars0.githubusercontent.com
151.101.184.133 avatars1.githubusercontent.com
151.101.184.133 avatars2.githubusercontent.com
151.101.184.133 avatars3.githubusercontent.com
151.101.184.133 avatars4.githubusercontent.com
151.101.184.133 avatars5.githubusercontent.com
151.101.184.133 avatars6.githubusercontent.com
151.101.184.133 avatars7.githubusercontent.com
151.101.184.133 avatars8.githubusercontent.com
151.101.185.194 github.global.ssl.fastly.net
Pull the flannel image from a mirror and retag it (on all three nodes)
[root@k8smaster ~]# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
[root@k8smaster ~]# docker tag quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64
Join node1 and node2 to the cluster (use the `kubeadm join` command printed by `kubeadm init`)
[root@k8snode1 ~]# kubeadm join 192.168.1.10:6443 --token np8qtt.7mk9fj6jz8nmss2o \
> --discovery-token-ca-cert-hash sha256:d58b5b30aeccf8091e21861fc22810e00960969a36497cc5a03651e65e1932bd
Run the same command on k8snode2. The token is valid for 24 hours by default; if it has expired, generate a fresh join command on the master with `kubeadm token create --print-join-command`.
Check the cluster status
[root@k8smaster ~]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7ff77c879f-9tp8p 1/1 Running 0 25m
kube-system coredns-7ff77c879f-kblqn 1/1 Running 0 25m
kube-system etcd-k8smaster 1/1 Running 0 25m
kube-system kube-apiserver-k8smaster 1/1 Running 0 25m
kube-system kube-controller-manager-k8smaster 1/1 Running 0 25m
kube-system kube-flannel-ds-amd64-4fr87 1/1 Running 0 2m57s
kube-system kube-flannel-ds-amd64-kkprv 1/1 Running 0 3m2s
kube-system kube-flannel-ds-amd64-vjq2p 1/1 Running 0 20m
kube-system kube-proxy-6pzzn 1/1 Running 0 25m
kube-system kube-proxy-mr4cz 1/1 Running 0 3m2s
kube-system kube-proxy-s4985 1/1 Running 0 2m57s
kube-system kube-scheduler-k8smaster 1/1 Running 0 25m
[root@k8smaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 26m v1.18.2
k8snode1 Ready <none> 3m36s v1.18.2
k8snode2 Ready <none> 3m31s v1.18.2