Part 1: Preparing the environment
Service tuning
1. Permanently disable NetworkManager
[root@master2 ~]# systemctl stop NetworkManager && systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
2. Permanently disable the firewall
systemctl stop firewalld && systemctl disable firewalld
3. Permanently disable SELinux
setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
4. Disable swap (required on the node hosts)
swapoff -a    # temporary
vim /etc/fstab
# permanent: comment out the swap line
#UUID=96e44026-9869-495e-9153-46e5527c561e swap swap defaults 0 0
mount -a    # apply
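The vim edit above can also be done non-interactively with sed. A minimal sketch, demonstrated on a sample file so it is safe to run anywhere; on the real host, point the same sed command at /etc/fstab as root:

```shell
# Comment out the swap entry without opening an editor.
# Demonstrated on a sample copy; against the real system run the sed line on /etc/fstab.
printf '%s\n' 'UUID=96e44026-9869-495e-9153-46e5527c561e swap swap defaults 0 0' > /tmp/fstab.sample
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /tmp/fstab.sample
cat /tmp/fstab.sample
```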
5. Set up hosts
Set the hostname: hostnamectl set-hostname <hostname>
Add the hosts entries on the master:
cat >> /etc/hosts << EOF
192.168.100.13 master
192.168.100.14 node1
192.168.100.15 node2
EOF
6. Pass bridged IPv4 traffic to the iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system    # apply
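The two bridge-nf sysctls only exist while the br_netfilter kernel module is loaded, so it is worth loading the module and making it persistent before applying them; a sketch of the standard approach from the upstream Kubernetes install documentation (run as root; `modprobe` may fail harmlessly inside a container):

```shell
# The bridge-nf sysctls above exist only while br_netfilter is loaded.
modprobe br_netfilter || true   # may fail inside a container; harmless there
# Have systemd load the module again on every boot.
mkdir -p /etc/modules-load.d
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF
```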
7. Time synchronization:
yum install -y ntpdate
ntpdate time.windows.com
8. Enable IP forwarding
[root@k8s-node1 ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
Apply the change:
sysctl -p
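To confirm the change took effect, read the live value back from the kernel (it should print 1 after `sysctl -p`):

```shell
# Read the live kernel value; 1 means IPv4 forwarding is enabled.
cat /proc/sys/net/ipv4/ip_forward
```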
Deployment
Installing the Docker environment
1. Install Docker/kubeadm/kubelet on all nodes
Kubernetes' default CRI (container runtime) is Docker, so install Docker first.
2. Install Docker
[root@node2 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@node2 ~]# yum -y install docker-ce-18.06.1.ce-3.el7
[root@node2 ~]# systemctl enable docker && systemctl start docker
[root@node2 ~]# docker --version
!!! The following step is an alternative !!!
3. Configure the Kubernetes yum repo
[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
!!! End of the alternative step !!!
4. Speed-up: configure the Alibaba Cloud registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://*******.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
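A malformed daemon.json prevents dockerd from starting, so it is worth validating the JSON before restarting Docker. A sketch using python3, shown on a sample copy with a placeholder mirror URL so it runs anywhere; on the real host, point it at /etc/docker/daemon.json:

```shell
# Validate daemon.json syntax before restarting Docker.
# The mirror URL below is a placeholder -- substitute your own accelerator address.
cat > /tmp/daemon.json.sample << 'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool /tmp/daemon.json.sample > /dev/null && echo "daemon.json OK"
```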
5. Add the Alibaba Cloud YUM repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Run the following to refresh the yum cache:
[root@master ~]# yum clean all
[root@master ~]# yum makecache
[root@master ~]# yum repolist
A list like the one below means the repos are configured correctly
repo id                     repo name                     status
base/7/x86_64 CentOS-7 - Base 10,070
docker-ce-stable/x86_64 Docker CE Stable - x86_64 82
extras/7/x86_64 CentOS-7 - Extras 413
kubernetes Kubernetes 570
updates/7/x86_64 CentOS-7 - Updates 1,134
repolist: 12,269
Install kubeadm, kubelet and kubectl
Versions change frequently, so pin the version explicitly:
yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
systemctl enable kubelet
Installed:
kubeadm.x86_64 0:1.17.0-0 kubectl.x86_64 0:1.17.0-0 kubelet.x86_64 0:1.17.0-0
Installed as dependencies:
conntrack-tools.x86_64 0:1.4.4-7.el7
cri-tools.x86_64 0:1.13.0-0
kubernetes-cni.x86_64 0:0.8.7-0
libnetfilter_cthelper.x86_64 0:1.0.0-11.el7
libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7
libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
socat.x86_64 0:1.7.3.2-2.el7
This installs four packages in total: kubelet, kubeadm, kubectl and kubernetes-cni.
kubeadm: the one-command deployment tool for k8s clusters; it simplifies installation by running the core components and add-ons as pods.
kubelet: the node agent that runs on every node; it is what actually manages the containers on each host. Because it must manipulate host resources directly, it is installed as a system service rather than run inside a pod.
kubectl: the Kubernetes command-line tool; it connects to the api-server to perform operations on the cluster.
kubernetes-cni: the k8s virtual network device; it creates a cni0 bridge on the host for pod-to-pod networking, similar in role to docker0.
After the installation completes, check the version:
[root@master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:17:50Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Deploying the Kubernetes master
Run the following on 192.168.100.13 (the master).
[root@master ~]# systemctl enable kubelet
[root@master ~]#
kubeadm init \
--apiserver-advertise-address=192.168.100.13 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.17.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
Then run the following to set up kubectl access:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
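Equivalently, root can skip the copy and point kubectl at the admin kubeconfig directly; a temporary, per-shell alternative that is not persisted across logins:

```shell
# Per-shell alternative to copying admin.conf into ~/.kube/config.
# Only root can read /etc/kubernetes/admin.conf, and this lasts for the current shell only.
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```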
Install flannel
wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Create the flannel resources
kubectl create -f kube-flannel.yml
Run the join command on each node
kubeadm join 192.168.100.13:6443 --token fbopry.k1sa42g0vmawyw6d \
--discovery-token-ca-cert-hash sha256:c3ddd21f38ac435846bf64a3d998fbc41cd7d6f00d03dc55dd51267aa5359bc2
[root@master opt]# kubectl get node
NAME STATUS ROLES AGE VERSION
master NotReady master 50m v1.17.0
node1 Ready <none> 39m v1.17.0
node2 Ready <none> 39m v1.17.0
If the master is not healthy, restart it and check again:
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 105m v1.17.0
node1 Ready <none> 93m v1.17.0
node2 Ready <none> 93m v1.17.0
[root@master ~]#
[root@master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-9d85f5447-2n8rc 1/1 Running 0 109m
coredns-9d85f5447-pxx2r 1/1 Running 0 109m
etcd-master 1/1 Running 1 110m
kube-apiserver-master 1/1 Running 1 110m
kube-controller-manager-master 1/1 Running 1 110m
kube-flannel-ds-amd64-45xh8 1/1 Running 1 100m
kube-flannel-ds-amd64-5gkc2 1/1 Running 0 98m
kube-flannel-ds-amd64-l5vfb 1/1 Running 0 98m
kube-proxy-k7wbk 1/1 Running 0 98m
kube-proxy-pzdnc 1/1 Running 1 109m
kube-proxy-qbl9p 1/1 Running 0 98m
kube-scheduler-master 1/1 Running 1 110m
The token is valid for 24 hours by default; once it expires it can no longer be used and a new one has to be created:
# kubeadm token create
# kubeadm token list
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
63bca849e0e01691ae14eab449570284f0c3ddeea590f8da988c07fe2729e924
# kubeadm join 192.168.100.110:6443 --token nuja6n.o3jrhsffiqs9swnu --discovery-token-ca-cert-hash sha256:63bca849e0e01691ae14eab449570284f0c3ddeea590f8da988c07fe2729e924
Or generate a complete join command in one step:
kubeadm token create --print-join-command
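The openssl pipeline above is how the `--discovery-token-ca-cert-hash` value is derived: a SHA-256 digest of the CA's DER-encoded public key. A runnable sketch, demonstrated on a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway cert as a stand-in for the cluster CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
  -subj "/CN=kubernetes" -days 1 2>/dev/null
# Same pipeline as above: sha256 of the DER-encoded public key.
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${hash}"
```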
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/
Create a pod in the cluster to verify everything works:
[root@master ~]# kubectl create deployment nginx2 --image=nginx
deployment.apps/nginx2 created
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-86c57db685-7jjfj 1/1 Running 0 114s
nginx2-c74d46595-865pz 1/1 Running 0 41s
tomcat-7989d99887-q98c9 0/1 ContainerCreating 0 11m
[root@master ~]# kubectl expose deployment nginx2 --port=80 --target-port=80 --type=NodePort
service/nginx2 exposed
[root@master ~]# kubectl get pods,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-86c57db685-7jjfj 1/1 Running 0 2m52s 10.244.2.3 node1 <none> <none>
pod/nginx2-c74d46595-865pz 1/1 Running 0 99s 10.244.2.4 node1 <none> <none>
pod/tomcat-7989d99887-q98c9 0/1 ContainerCreating 0 12m <none> node2 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 150m <none>
service/nginx NodePort 10.96.159.82 <none> 80:30309/TCP 7m44s app=nginx
service/nginx2 NodePort 10.96.112.255 <none> 80:31360/TCP 24s app=nginx2
Access URL: http://NodeIP:Port
Note: when wget fetches an HTTPS URL it verifies the site's certificate by default, and that verification often fails.
Workaround
Add the --no-check-certificate option to the original command to get past the error.
Example:
[root@POT-DOG ~]# wget --no-check-certificate https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
--2017-10-04 22:50:02-- https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
Installing the dashboard web UI
Download and apply the yaml file
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
By default the Dashboard is reachable only from inside the cluster; change its Service to type NodePort to expose it externally:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # added
  selector:
    k8s-app: kubernetes-dashboard
[root@master opt]# kubectl create -f dashboard.yaml
[root@master opt]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.96.174.112 <none> 8000/TCP 3m8s
kubernetes-dashboard NodePort 10.96.91.228 <none> 443:30001/TCP 3m8s
Create a service account, bind it to the built-in cluster-admin role, and retrieve its token:
[root@master opt]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@master opt]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@master opt]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-lkpgz
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 01b310f9-0bbd-4674-9bfa-b8af229ea996
Type: kubernetes.io/service-account-token
Data
====
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ik0yS0lrY3dXY2VDc254S1B0cldSWWNHcjA5bmd3MnBOQXNBQmkxaUJBVDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbGtwZ3oiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMDFiMzEwZjktMGJiZC00Njc0LTliZmEtYjhhZjIyOWVhOTk2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.CL2SyYWpcBacMYFXh1wAHNI6dkZFOkKt9vZv-WRNUgJ2hztT-me5eCQCx9Q-qDC3H0GrU8fgixCIMKeq4tUN74_B9ZJXiASHIu_jGTH3x7eg6VXcHYNiwmfj9yzVRJvoVMYqdmbw2pEaKyxTYdOK27cNyiyFF-7B7TUp1m_EwVbLzToLi40PSjpi--iqhOIpLVGSvthtZTLoGpCzYFG4PKtFvYkHgUqKtSbYWZjl8GeLnFXbo_Wfxw7KzvaRmVmIVHT6jElxnO29JVPVRScoAsUKFuw8GrMc4GcehQT-gIE_Nmk9ayDnsyu_wLjahsqCyx9a6Odnr6Z9Csiy9wjO7w
ca.crt: 1025 bytes
[root@master opt]#