Installing k8s on CentOS 7.6 (one master, two workers), including NFS, Calico, StorageClass (sc), metrics-server, and more

1. Prerequisites (all nodes)

• CentOS version is 7.6 or 7.7, with at least 2 CPU cores and at least 4 GB of RAM
• The hostname is not localhost and contains no underscores, dots, or uppercase letters
• Every node can reach every other node's IP address directly (no NAT mapping required), with no firewall or security-group isolation in between (a quick check of all three points follows this list)
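A quick way to verify these prerequisites on each machine — a minimal sketch using standard CentOS tools; substitute a real peer IP in the ping:

## Quick prerequisite check (run on every node)
cat /etc/redhat-release      # should report CentOS Linux release 7.6 or 7.7
nproc                        # should print 2 or more
free -h                      # total memory should be at least 4G
hostnamectl --static         # not localhost; no underscores, dots, or uppercase letters
ping -c 2 xx.xx.xx.47        # ping another node's IP to confirm direct reachability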

The environment I use is listed below; for privacy only the last octet of each IP is shown, which is enough to tell the machines apart.

Server IP       OS          Specs                k8s hostname
xx.xx.xx.52     centos7.6   8 cores / 32G / 1T   node1
xx.xx.xx.47     centos7.6   8 cores / 32G / 1T   node2
xx.xx.xx.56     centos7.6   8 cores / 32G / 1T   master
1.1 Set the hostname
## Run on each of the three machines, using master, node1, or node2 respectively
hostnamectl set-hostname master

cat >>/etc/hosts<<EOF
xx.xx.xx.52 node1
xx.xx.xx.47 node2
xx.xx.xx.56 master                         
EOF
1.2 Disable the firewall

You could instead look up which ports need to stay open and allow only those (I was lazy and simply disabled the firewall); a sketch of the port-based alternative follows the commands below.

systemctl stop firewalld
systemctl disable firewalld
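If you would rather keep firewalld running, the sketch below opens the ports Kubernetes typically needs (plus TCP 179 for Calico BGP); treat it as a starting point, not an exhaustive list:

## Alternative: keep firewalld and open only the required ports
## On the master
firewall-cmd --permanent --add-port=6443/tcp          # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd
firewall-cmd --permanent --add-port=10250-10252/tcp   # kubelet, scheduler, controller-manager
## On the workers
firewall-cmd --permanent --add-port=10250/tcp         # kubelet
firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
## On all nodes (Calico BGP)
firewall-cmd --permanent --add-port=179/tcp
firewall-cmd --reload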
1.3 Disable SELinux
### Check the current status

sestatus

### Disable temporarily (takes effect immediately, lost on reboot)
setenforce 0

### Disable permanently
vi /etc/selinux/config
### Change the SELINUX line to SELINUX=disabled
1.4 Disable the swap partition
swapoff -a
echo "vm.swappiness=0" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
sed -i 's$/dev/mapper/centos-swap$#/dev/mapper/centos-swap$g' /etc/fstab 
## Verify: the Swap row should show all zeros
free -m
1.5 Set up time synchronization

chrony is used here.

yum -y install chrony

## Point the sync server at Alibaba Cloud NTP
## Note: copy the next two lines together; they are a single command
sed -i.bak '3,6d' /etc/chrony.conf && sed -i '3cserver ntp1.aliyun.com iburst' \
/etc/chrony.conf

## Start chronyd and enable it at boot

systemctl start chronyd && systemctl enable chronyd

#### Check the synchronization result

chronyc sources
### A line beginning with ^* means synchronization succeeded
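For illustration only, a successful sync looks roughly like this (the numbers and the resolved address will differ):

MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* ntp1.aliyun.com               2   6   377    42   -311us[ -402us] +/-   22ms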
1.6 Kernel settings: pass bridged IPv4 traffic to iptables
## Write the following into a dedicated sysctl config file
cat >/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Load the br_netfilter module and apply the settings
modprobe br_netfilter && sysctl -p /etc/sysctl.d/k8s.conf

Alternatively, run the variant below instead; read it carefully before executing.

# Pass bridged IPv4 traffic to the iptables chains
# This edits /etc/sysctl.conf directly
# If the keys already exist, modify them in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
# If the keys are not present yet, append them
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf
# Apply the settings
sysctl -p
1.7 Check that DNS is configured and working
cat /etc/resolv.conf
1.8 Enable the IPVS kernel modules
### Write the following into the modules script

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF


## Make the script executable, run it, and confirm the required kernel modules are loaded

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
## Expected output
nf_conntrack_ipv4      15053  0 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
1.9 Install ipset and ipvsadm
yum -y install ipset ipvsadm
1.10 Install other required tools
yum install -y ebtables socat ipset conntrack 

2. Install Docker (all nodes)


## Remove old versions
sudo yum remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-engine
## Install yum-utils first
yum install -y yum-utils

### Add the Alibaba Cloud Docker yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

### Refresh the yum cache
yum makecache fast 

### Install Docker 18.09.9
yum -y install docker-ce-18.09.9-3.el7 docker-ce-cli-18.09.9

# Enable Docker at boot and start it
systemctl enable docker && systemctl start docker 
systemctl status docker
## Configure a Docker registry mirror. I use my own Alibaba Cloud accelerator here; substitute your own mirror or one from another provider.
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
### Restart Docker to apply the change
sudo systemctl restart docker
### Verify
docker info 
### Or check only the mirror setting
docker info|grep "Registry Mirrors" -A 1
### Change the Docker cgroup driver to systemd, which is what Kubernetes recommends
sed -i.bak "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service
systemctl daemon-reload
### Restart Docker to apply the change
systemctl restart docker
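An alternative to editing the systemd unit is to set the cgroup driver in /etc/docker/daemon.json alongside the registry mirror configured earlier; this is the approach the Kubernetes docs generally suggest. A minimal sketch (use your own mirror URL, and pick either this or the sed edit above, not both, otherwise dockerd refuses to start because the option is set twice):

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker
docker info | grep -i "cgroup driver"    # should report: Cgroup Driver: systemd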

3. Install the Kubernetes components (all nodes)

### Configure the yum repository
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
### Refresh the yum cache
yum makecache fast
### Install kubelet, kubectl, and kubeadm at version 1.17.9
yum install -y kubelet-1.17.9 kubectl-1.17.9 kubeadm-1.17.9

# Enable kubelet at boot and start it
systemctl enable kubelet && systemctl start kubelet
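A quick sanity check that all three components are at the pinned version:

kubeadm version -o short          # expect v1.17.9
kubelet --version                 # expect Kubernetes v1.17.9
kubectl version --client --short  # expect Client Version: v1.17.9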

4. Prepare the images (all nodes)

Here I pull the required images ahead of time (some come from mirrors maintained by community members). The only purpose of this step is to make the installation faster.

The exact steps are below; copy them as-is.

# Pre-pull the images the master node needs [optional]
# Create a .sh file with the following content
## This is needed on every node
#!/bin/bash
images=(
    kube-apiserver:v1.17.9
    kube-proxy:v1.17.9
    kube-controller-manager:v1.17.9
    kube-scheduler:v1.17.9
    coredns:1.6.5
    etcd:3.4.3-0
    pause:3.1
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
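As an equivalent shortcut, kubeadm itself can pre-pull the same image set from the mirror repository; a sketch using the same repository and version as the init command below:

kubeadm config images pull \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.9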

5. Initialize the cluster (master node)



#1. Initialize the master node. Use the master's IP for --apiserver-advertise-address; the two CIDRs below can be any private ranges that do not overlap each other or your host network.
#I install version 1.17.9 here; specify a different version if you need one.
kubeadm init \
--apiserver-advertise-address=xx.xx.xx.56 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.17.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16


#--service-cidr is the virtual IP range that Services are allocated from;
#--pod-network-cidr is the range Pods get their IPs from.
#Every Pod receives a cluster-wide routable IP, so Pods can reach each other across the whole cluster,
#while Services sit in front of groups of Pods with stable virtual IPs.

#2. Configure kubectl for the current user
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config


#3. Save the join command printed by kubeadm init. This matters: losing it and having to regenerate it later is unfriendly for beginners (see the sketch after the command below).
kubeadm join xx.xx.xx.56:6443 --token lezew9.9nbf4l38268ad579 \
    --discovery-token-ca-cert-hash sha256:233333367d28b3e131ad5a386af0fd3f044efbc268755eb6b57d18685ccb10f 
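If the join command was not saved (or the token has expired, the default lifetime is 24 hours), it can be regenerated on the master at any time:

## Regenerate the join command on the master
kubeadm token create --print-join-command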

#4. Deploy the network plugin (Calico)
#Either upload a local manifest and apply it:
#kubectl apply -f calico-3.13.1.yaml
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

#If your network connection is good, the steps below are unnecessary.
#Otherwise, pre-pull the Calico images referenced by the manifest on every node (a small loop for this follows the list):
calico images:
image: calico/cni:v3.14.0
image: calico/pod2daemon-flexvol:v3.14.0
image: calico/node:v3.14.0
image: calico/kube-controllers:v3.14.0
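When the network is slow, a small loop like the following (a sketch mirroring the image list above) pre-pulls the Calico images before applying the manifest; make sure the versions match whatever the calico.yaml you apply actually references:

#!/bin/bash
# Pre-pull the Calico images on every node
calico_images=(
    calico/cni:v3.14.0
    calico/pod2daemon-flexvol:v3.14.0
    calico/node:v3.14.0
    calico/kube-controllers:v3.14.0
)
for img in ${calico_images[@]} ; do
    docker pull $img
done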




#5. Watch the pod status and wait until everything is Running
watch kubectl get pod -n kube-system -o wide

6. Join the worker nodes (this completes the basic cluster)

Simply run the previously saved join command on each worker node.

kubeadm join xx.xx.xx.56:6443 --token lezew9.9nbf4l38268ad579 \
    --discovery-token-ca-cert-hash sha256:233333367d28b3e131ad5a386af0fd3f044efbc268755eb6b57d18685ccb10f 
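Back on the master, the two workers should register and move to Ready once Calico is running on them:

## On the master: confirm the workers registered
kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide    # calico and kube-proxy pods should run on every node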

7. Set up NFS as the StorageClass backend (master and workers)

7.1 Install the NFS utilities (all nodes)
### Install on all three machines
yum install -y nfs-utils
7.2 Configure the NFS server (master node)

# On the master node, create the /etc/exports file (vi /etc/exports) with the following content:
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
# To restrict access to a subnet instead, export something like: /nfs/data  172.26.248.0/20(rw,no_root_squash)
# Run the following commands to start the NFS services
# Create the shared directory
mkdir -p /nfs/data
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r
# Check that the export took effect
exportfs
# The output should look like this
/nfs/data /nfs/data

A quick test; this step is optional.

#Test a Pod that mounts the NFS share directly
### Replace the server IP below with your own NFS server's address
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs
  namespace: default
spec:
  volumes:
  - name: html
    nfs:
      path: /nfs/data   # the exported directory (~1000G available here)
      server: xx.xx.xx.56 # your own NFS server address
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
7.3 Set up the NFS client (worker nodes)
#If the server still runs a firewall, allow TCP/UDP on ports 111, 662, 875, 892, and 2049, otherwise remote clients cannot connect.
#Install the client tools
yum install -y nfs-utils


#Check whether the NFS server exposes the shared directory
# showmount -e <NFS server IP>
showmount -e xx.xx.xx.56
# The output should look like this
Export list for xx.xx.xx.56
/nfs/data *

#Mount the shared directory exported by the NFS server onto the local path /root/nfsmount
mkdir /root/nfsmount
# mount -t nfs <NFS server IP>:<exported path> <local mount point>
#This kind of mount can also serve as a simple high-availability/backup setup
mount -t nfs xx.xx.xx.56:/nfs/data /root/nfsmount
# Write a test file
echo "hello nfs server" > /root/nfsmount/test.txt

#On the NFS server, run the following to verify the file was written to the exported directory
cat /nfs/data/test.txt

8. Set up dynamic provisioning

8.1 Basic information
Field        Value          Notes
Name         storage-nfs    Custom StorageClass name (used by the manifests below)
NFS Server   xx.xx.xx.56    IP address of the NFS service
NFS Path     /nfs/data      Path shared by the NFS service
8.2 Create nfs.yaml (copy only the code) and replace the server address with your own

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
   name: nfs-provisioner-runner
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
#vi nfs-deployment.yaml; the nfs-client provisioner Deployment, which uses the ServiceAccount authorized above
kind: Deployment
apiVersion: apps/v1
metadata:
   name: nfs-client-provisioner
spec:
   replicas: 1
   strategy:
     type: Recreate
   selector:
     matchLabels:
        app: nfs-client-provisioner
   template:
      metadata:
         labels:
            app: nfs-client-provisioner
      spec:
         serviceAccount: nfs-provisioner
         containers:
            -  name: nfs-client-provisioner
               image: lizhenliang/nfs-client-provisioner
               volumeMounts:
                 -  name: nfs-client-root
                    mountPath:  /persistentvolumes
               env:
                 -  name: PROVISIONER_NAME # the provisioner's name
                    value: storage.pri/nfs # the name can be anything, but later references must match it exactly
                 -  name: NFS_SERVER
                    value: xx.xx.xx.56
                 -  name: NFS_PATH
                    value: /nfs/data
         volumes:
           - name: nfs-client-root
             nfs:
               server: xx.xx.xx.56
               path: /nfs/data
##In this image the volume mountPath defaults to /persistentvolumes; do not change it, or the provisioner will error at runtime
8.3 Create the StorageClass for dynamic provisioning (copy only the code)
#Create the StorageClass
# vi storageclass-nfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
provisioner: storage.pri/nfs
reclaimPolicy: Retain

There are three reclaim policies: Retain, Recycle, and Delete.

  • Retain
    • The released PV and its data are kept; the PV status becomes "released" and it will not be bound by another PVC. The cluster administrator reclaims the storage manually:
    • delete the PV by hand; the backing storage resource (AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists;
    • manually wipe the data on the backing volume;
    • manually delete the backing volume, or reuse it by creating a new PV for it.
  • Delete
    • Deletes both the released PV and the backing storage volume. For dynamically provisioned PVs the reclaim policy is inherited from the StorageClass,
    • and defaults to Delete. The cluster administrator should set the StorageClass reclaim policy as desired; otherwise users must edit the reclaim policy of each dynamic PV by hand.
  • Recycle
    • Keeps the PV but wipes its data. Deprecated.
8.4 Change the default StorageClass

# https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/#%e4%b8%ba%e4%bb%80%e4%b9%88%e8%a6%81%e6%94%b9%e5%8f%98%e9%bb%98%e8%ae%a4-storage-class
##Mark storage-nfs as the cluster's default StorageClass
kubectl patch storageclass storage-nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
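To confirm the patch took effect, list the StorageClasses; storage-nfs should be annotated as the default:

## Verify: storage-nfs should show "(default)" next to its name
kubectl get storageclass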
8.5 Verify dynamic provisioning
#vi pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim-01
 # annotations:
 #   volume.beta.kubernetes.io/storage-class: "storage-nfs"
spec:
  storageClassName: storage-nfs  # must match the StorageClass name exactly
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

Use the PVC from a Pod:

#vi testpod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: pvc-claim-01
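After applying both manifests, a rough end-to-end check looks like this (file names pvc.yaml and testpod.yaml as used above; the subdirectory name under /nfs/data is generated by the provisioner):

kubectl apply -f pvc.yaml
kubectl apply -f testpod.yaml
## The PVC should reach Bound and a PV should appear automatically
kubectl get pvc pvc-claim-01
kubectl get pv
## On the NFS server, the provisioner creates a subdirectory per PVC; the SUCCESS file should be inside it
ls /nfs/data/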

9. Install metrics-server

If you plan to install KubeSphere, it is best to install metrics-server beforehand, because the one bundled with KubeSphere has issues.
Just apply the YAML below.

#1. Install metrics-server first (the YAML below already has the image and configuration adjusted, so it can be applied directly). With it you can see Pod and Node resource usage (by default only CPU and memory audit data; for richer metrics we hook up Prometheus later).
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
## metrics-server reports node and pod CPU/memory usage
## Verify the installation
kubectl top node

kubectl top pod

At this point the basic cluster environment is complete. If you also need KubeSphere, see the follow-up post on installing KubeSphere 3.0 on top of this Kubernetes cluster.

That's all.
