k8s deployment study notes -- Kubernetes 1.15.3

Deploying a Kubernetes 1.15.3 cluster with kubeadm

I. Environment overview

Hostname     IP address       Role        OS
k8s-node-1   192.168.120.128  k8s-master  CentOS 7.6
k8s-node-2   192.168.120.129  k8s-node    CentOS 7.6
k8s-node-3   192.168.120.130  k8s-node    CentOS 7.6

Note: the official guidance is at least 2 CPUs and 2 GB of RAM per machine, and the MAC address and product_uuid must be unique on every node (check with the commands below).

ip link
cat /sys/class/dmi/id/product_uuid

II. Environment configuration

Run the following commands on all three hosts.

1. Configure the Aliyun yum repo (optional)

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
rm -rf /var/cache/yum && yum makecache && yum -y update && yum -y autoremove

2. Install dependency packages

yum install -y epel-release conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

3. Disable the firewall

systemctl stop firewalld && systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

4. Disable SELinux

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

5. Disable the swap partition

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

6. Load kernel modules

Starting with Kubernetes 1.8, kube-proxy gained an IPVS mode. Like the iptables mode it is built on Netfilter, but it uses hash tables for lookups, so once the number of Services grows large the hash lookup gives a clear performance edge and improves Service throughput.

modprobe ip_vs             # LVS layer-4 load balancing
modprobe ip_vs_rr          # round-robin scheduler
modprobe ip_vs_wrr         # weighted round-robin scheduler
modprobe ip_vs_sh          # source-address hash scheduler
modprobe nf_conntrack_ipv4 # connection-tracking module
modprobe br_netfilter      # make bridged traffic traverse iptables for filtering and port forwarding
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- br_netfilter
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

lsmod | egrep 'ip_vs|nf_conntrack_ipv4'
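
ipvsadm (installed in step 2) becomes useful once kube-proxy is switched to IPVS mode in section III.5; the virtual-server table it maintains can then be listed with:

ipvsadm -Ln   # list IPVS virtual servers and their backends, numeric addresses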

7. Set kernel parameters

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100  
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

sysctl -p /etc/sysctl.d/k8s.conf

1.  What is overcommit_memory?
    overcommit_memory is a kernel policy for memory allocation; the current value can be seen in /proc/sys/vm/overcommit_memory.

2.  What does overcommit_memory do?
    overcommit_memory takes three values: 0, 1, 2.
    overcommit_memory=0: the kernel heuristically checks whether enough memory is available; if so the allocation succeeds, otherwise it fails and the error is returned to the process.
    overcommit_memory=1: the kernel always allows the allocation, regardless of the current memory state.
    overcommit_memory=2: the kernel refuses allocations that would exceed swap plus a configurable share (vm.overcommit_ratio) of physical memory, i.e. overcommit is never allowed.
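
As a quick illustration, the policy can be inspected and switched at runtime; the sysctl.d file above is what makes the setting survive reboots:

cat /proc/sys/vm/overcommit_memory   # current policy: 0, 1 or 2
sysctl -w vm.overcommit_memory=1     # takes effect immediately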

net.bridge.bridge-nf-call-iptables # pass bridged traffic to iptables
net.ipv4.tcp_tw_recycle            # TIME-WAIT socket recycling (0 = disabled)
vm.swappiness                      # 0 = avoid swapping
vm.panic_on_oom                    # 0 = do not panic on OOM, let the OOM killer act
fs.inotify.max_user_watches        # max inotify watches per user
fs.file-max                        # system-wide max open files
fs.nr_open                         # per-process max open files
net.ipv6.conf.all.disable_ipv6     # disable IPv6
net.netfilter.nf_conntrack_max     # max tracked connections
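
To spot-check that the values took effect after sysctl -p:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.netfilter.nf_conntrack_max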

8. Install Docker

1. First remove old versions (per the official recommendation):
yum -y remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
2. Install dependencies:
yum install -y yum-utils device-mapper-persistent-data lvm2
3. Configure the install repo (Aliyun):
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
4. Enable the edge/test repos (optional):
yum-config-manager --enable docker-ce-edge
yum-config-manager --enable docker-ce-test
5. Install:
yum makecache fast
yum list docker-ce --showduplicates | sort -r
yum -y install docker-ce
6. Start:
systemctl start docker

Enable it at boot:

systemctl enable docker
7. Configure Docker
  • Configure the Aliyun registry mirror (recommended).

  • After installation, add a start-up command; otherwise Docker sets the default policy of the iptables FORWARD chain to DROP.

  • kubeadm also recommends systemd as the cgroup driver, so daemon.json has to be modified as well.

sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bk6kzfqm.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

systemctl daemon-reload
systemctl restart docker
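
To confirm the new configuration took effect, Docker should now report the systemd cgroup driver and the FORWARD chain policy should be ACCEPT:

docker info | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd
iptables -S FORWARD | head -1           # expect: -P FORWARD ACCEPT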

9. Install kubeadm and kubelet

Configure the repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache fast

Install:

yum install -y kubelet-1.15.3-0 kubeadm-1.15.3-0 kubectl-1.15.3-0
systemctl enable kubelet
Note: do not start kubelet at this point; kubeadm init will configure and start it.
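
To confirm the pinned versions were installed:

kubeadm version -o short   # expect v1.15.3
kubelet --version          # expect Kubernetes v1.15.3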

10. Pull the required images

Pull the required images from Aliyun first; otherwise kubeadm pulls them from Google's registry, which is unreachable and fails.

Pull the images on the master node:

kubeadm config images list | sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g' | sh -x
docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers | awk '{print "docker tag",$1":"$2,$1":"$2}' | sed -e 's/registry.cn-hangzhou.aliyuncs.com\/google_containers/k8s.gcr.io/2' | sh -x
docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers | awk '{print "docker rmi """$1""":"""$2}' | sh -x 
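
The pipeline above is driven by kubeadm's own image list; to see exactly which images are needed before pulling, the list command can be run on its own (pinning the version avoids a lookup of the latest release):

kubeadm config images list --kubernetes-version v1.15.3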

Export the images:
docker save $(docker images | grep -v REPOSITORY | awk 'BEGIN{OFS=":";ORS=" "}{print $1,$2}') -o k8s-1.15.3-images.tar
Then scp the tarball to the node hosts.
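
A minimal sketch of the copy-and-load step, assuming root SSH access and that the node hostnames resolve (adjust user and host to your environment):

scp k8s-1.15.3-images.tar root@k8s-node-2:/root/
scp k8s-1.15.3-images.tar root@k8s-node-3:/root/
# then, on each node:
docker load -i /root/k8s-1.15.3-images.tar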

III. Initialize the cluster

Unless otherwise noted, run the following commands on k8s-node-1.

1. Initialize the cluster with kubeadm init (set the advertise address to the master's IP)

kubeadm init --kubernetes-version=v1.15.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.120.128
  # If init fails, run kubeadm reset before initializing again; restart the network services and clean up the leftover files.
  # Do not change 10.244.0.0/16; it must match the Pod CIDR in flannel's manifest.

On success, the init output ends with a join command similar to the one below. There is no need to run it yet; just record it.

kubeadm join 192.168.120.128:6443 --token duz8m8.njvafly3p2jrshfx --discovery-token-ca-cert-hash sha256:60e15ba0f562a9f29124914a1540bd284e021a37ebdbcea128f4e257e25002db
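
The bootstrap token expires after 24 hours by default; if the nodes join later than that, generate a fresh join command on the master:

kubeadm token create --print-join-command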

2. Configure kubectl for each user that needs it

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status:

kubectl get cs
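
On a healthy control plane the output looks roughly like the following (exact formatting may differ):

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}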

3. Install the Pod network (flannel)

Download kube-flannel.yml:

curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Following the walkthrough at https://blog.csdn.net/zyl974611232/article/details/107518192, copy the downloaded flanneld-v0.12.0-amd64.docker image archive to every node, load it into Docker with docker load < flanneld-v0.12.0-amd64.docker, and confirm with docker images. Then:

kubectl apply -f kube-flannel.yml
rm -f kube-flannel.yml

Tip: do not change the image addresses inside kube-flannel.yml; leave them as-is and the locally loaded images will be picked up.

Use the command below to make sure all Pods reach the Running state; this can take 1-2 minutes.

kubectl get pod --all-namespaces -o wide

4. Add the worker nodes to the cluster

On k8s-node-2 and k8s-node-3, run the join command that k8s-node-1 printed earlier:

kubeadm join 192.168.120.128:6443 --token duz8m8.njvafly3p2jrshfx --discovery-token-ca-cert-hash sha256:60e15ba0f562a9f29124914a1540bd284e021a37ebdbcea128f4e257e25002db

Check the node status; it may take quite a while for the nodes to become Ready.

kubectl get nodes

5. Enable IPVS mode for kube-proxy

kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
kubectl apply -f kube-proxy-configmap.yaml
rm -f kube-proxy-configmap.yaml
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}' 
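
To confirm the recreated kube-proxy pods really run in IPVS mode, check one pod's log (assuming the k8s-app=kube-proxy label that kubeadm applies; the pod name suffix is random) and list the virtual servers:

kubectl -n kube-system logs $(kubectl -n kube-system get pod -l k8s-app=kube-proxy -o jsonpath='{.items[0].metadata.name}') | grep -i ipvs
ipvsadm -Ln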

IV. Deploy kubernetes-dashboard

1. Generate a browser access certificate

grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

Notes:
pkcs12   # the certificate container format being generated
-export  # export as a PKCS#12 bundle
-clcerts # output only the client certificate
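
To sanity-check the bundle before importing it (you will be prompted for the export password set above):

openssl pkcs12 -info -in kubecfg.p12 -noout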

Import the generated kubecfg.p12 certificate into Windows: double-click it and click through the import wizard.

Note: restart the browser after the import.

2. Generate an access token

Create a file admin-user.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Create the ServiceAccount and bind it to the cluster-admin role:

kubectl create -f admin-user.yaml

Get the token:

kubectl describe secret admin-user --namespace=kube-system
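
The secret's full name carries a random suffix; if only the raw token string is wanted, a one-liner such as this also works:

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d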

Record the generated token here for later use.

3. Deploy kubernetes-dashboard

Create the dashboard manifest:

vim kubernetes-dashboard.yaml

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
The manifest above already uses the mirror image mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 in place of the upstream image k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1; if you start from the upstream file instead, edit it and make the same substitution. Then apply:
kubectl apply -f kubernetes-dashboard.yaml

4. Access the dashboard

https://192.168.120.128:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

# Get the login token
kubectl describe secret admin-user --namespace=kube-system

192.168.120.128 is the master IP; 6443 is the apiserver port.

Then choose token login on the login page and paste in the token generated earlier.
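
If importing the certificate into the browser is inconvenient, kubectl proxy can expose the same dashboard path over localhost instead; run it on the master and browse to the local URL:

kubectl proxy
# then open: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/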
