1. Initial environment preparation:
1.1 Host inventory (IP and hostname plan)
192.168.230.140 k8s-node01
192.168.230.141 k8s-node02
192.168.230.143 k8s-master01
1.2 Set the system hostname (run the matching command on each machine)
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-master01
1.3 Update the hosts file on every machine
Append the cluster entries (appending preserves the existing localhost lines):
cat >> /etc/hosts << EOF
192.168.230.140 k8s-node01
192.168.230.141 k8s-node02
192.168.230.143 k8s-master01
EOF
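As a quick optional check, name resolution can be verified from any of the three machines:
ping -c 1 k8s-master01
ping -c 1 k8s-node01
ping -c 1 k8s-node02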
1.4 Install the required dependency packages
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
1.5 Switch the firewall to iptables and flush the rules
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
1.6 Disable swap and SELinux
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
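To confirm both changes took effect, the following optional checks can be run:
getenforce    # should report Permissive now (Disabled after the next reboot)
free -m       # the Swap line should show 0 total/used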
1.7 Tune kernel parameters for Kubernetes
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# Do not use swap unless the system runs out of memory
vm.swappiness=0
# Do not check whether enough physical memory is available before overcommitting
vm.overcommit_memory=1
# Do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
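Note: the two net.bridge.* keys only exist while the br_netfilter module is loaded (it is loaded again in step 3.1). If sysctl -p reports "No such file or directory" for them, load the module first and re-apply:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf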
1.8 Set the system time zone
timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0
systemctl restart rsyslog
systemctl restart crond
1.9 Stop services that are not needed
systemctl stop postfix && systemctl disable postfix
1.10 Configure rsyslogd and systemd journald
mkdir -p /var/log/journal            # directory for persistent journal storage
mkdir -p /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
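As an optional check, journald should now be writing to /var/log/journal and can report how much disk space it is using:
journalctl --disk-usage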
1.11 Upgrade the system kernel to 4.4 (ELRepo kernel-lt)
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
grub2-set-default 'CentOS Linux (4.4.230-1.el7.elrepo.x86_64) 7 (Core)'
reboot
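Note that the exact menu entry name depends on the kernel version that kernel-lt actually installs, so it is safer to list the available entries before running grub2-set-default, and to confirm the running kernel after the reboot:
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
uname -r    # after the reboot, should print the 4.4.x elrepo kernel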
Note: all of the steps above must be executed on every machine.
2. Install Docker and configure the private registry
2.1 Install the Docker packages
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum update -y && yum install -y docker-ce
2.2 Create the /etc/docker directory
mkdir -p /etc/docker    # -p: do not fail if the directory already exists
2.3 Configure daemon.json
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": ["https://t8jbtg0w.mirror.aliyuncs.com"],
  "insecure-registries": ["https://hub.cfzq.com"]
}
EOF
Note: registry-mirrors is the Aliyun registry mirror (accelerator) address, and insecure-registries is the private registry address.
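Because a stray comma or quote in daemon.json will prevent Docker from starting, the file can optionally be validated first with the Python interpreter that ships with CentOS 7:
python -m json.tool /etc/docker/daemon.json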
2.4 Restart Docker and enable it at boot
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
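To confirm that the systemd cgroup driver is in effect (it must match the kubelet's cgroup driver later), an optional check:
docker info | grep -i cgroup    # should show "Cgroup Driver: systemd"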
2.5 Log in to the private registry
docker login https://hub.cfzq.com
2.6 Test pulling an image from the private registry
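For example (the image path below is only a placeholder; substitute any image that actually exists in the hub.cfzq.com registry):
docker pull hub.cfzq.com/library/nginx:latest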
Note: all of the steps above must be executed on every machine.
3. Deploy the k8s cluster with kubeadm:
3.1 Prerequisites for enabling IPVS in kube-proxy
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
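Later, once kube-proxy is running in IPVS mode (after step 3.3), the IPVS virtual server table can be inspected with the ipvsadm tool installed in step 1.4:
ipvsadm -Ln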
3.2 Install kubeadm (all machines)
3.2.1 Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
3.2.2 Install kubeadm, kubectl, and kubelet
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service
kubelet: runs on every node in the cluster and is responsible for starting Pods and containers
kubeadm: used to initialize (bootstrap) the cluster
kubectl: the Kubernetes command-line tool; it is used to deploy and manage applications, inspect resources, and create, delete, and update components
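An optional check that all three tools landed at the expected 1.15.1 version:
kubeadm version
kubectl version --client
kubelet --version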
3.2.3 Load the kubeadm base images
Upload kubeadm-basic.images.tar.gz, extract it, and import the images (this must be done on every machine).
A script like the following can be created and executed:
#!/bin/bash
# Assumes kubeadm-basic.images.tar.gz has already been extracted to /root/kubeadm-basic.images
ls /root/kubeadm-basic.images > /tmp/image-list.txt
cd /root/kubeadm-basic.images
# Load every saved image archive into the local Docker image store
for i in $(cat /tmp/image-list.txt)
do
    docker load -i $i
done
rm -f /tmp/image-list.txt
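After the script finishes, the control-plane images (tagged under k8s.gcr.io) should be visible locally, which can be confirmed with:
docker images | grep k8s.gcr.io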
3.3 Initialize the master node
3.3.1 Generate the kubeadm-config.yaml file
kubeadm config print init-defaults > kubeadm-config.yaml
Edit the generated configuration as follows:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.230.143   # change to the master's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1   # change to the installed version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"   # add this line (flannel pod network)
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
3.3.2 Run kubeadm init with the generated file
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Note: keep kubeadm-init.log; it may be needed later.
3.3.3 On the master, run the following commands (as printed at the end of the init log)
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
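kubectl can now reach the API server. The master will report NotReady until the pod network (flannel, section 3.4) is deployed, which is expected at this stage:
kubectl get nodes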
3.3.4 Run the following command on each of the two worker nodes to join them to the k8s cluster.
kubeadm join 192.168.230.143:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:b4beebdc57cc809471996300fe3d49a4a66e1cc595807f06d80ee52ec927187e
Note: if the init log was not saved, the required values can be recovered as follows:
1. Get the CA certificate hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
2. List the tokens:
kubeadm token list
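Alternatively (for example when the original token has already expired), kubeadm can print a complete, fresh join command:
kubeadm token create --print-join-command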
3.4 Deploy the flannel network
wget -e robots=off https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Installing flannel may fail on individual machines because the flannel image cannot be pulled. In that case, download the flannel v0.12.0 release files manually, extract the archive, and load the image into Docker:
tar -zxvf flannel-v0.12.0-linux-amd64.tar.gz
docker load < flanneld-v0.12.0-amd64.docker
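With the image available on every node, apply the manifest downloaded above on the master to actually deploy flannel:
kubectl apply -f kube-flannel.yml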
Check the result with kubectl get pod -n kube-system; all pods in the kube-system namespace (including coredns and kube-flannel) should reach the Running state, as shown in the figure below:
At this point the k8s installation is complete.
Troubleshooting: use journalctl -xefu kubelet to view the kubelet error logs for the cluster.
Note: the kubelet cgroup driver must stay consistent with Docker's (systemd, as configured in daemon.json above); otherwise kubelet will fail to start with a cgroup driver mismatch error.