Installing Kubernetes from Binaries (1)

1. Kubernetes Architecture Diagram


Kubernetes consists of one Master and multiple Nodes. The Master exposes its services through an API and receives requests sent from kubectl to schedule and manage the entire cluster.

kubectl is the command-line management tool for the K8s platform.

(architecture diagram not reproduced here)

2. Overview of Common Kubernetes Components


  • APIServer: the unified entry point for all services; provides authentication, authorization, access control, API registration, and discovery;

  • Controller Manager: maintains the desired number of Pod replicas, handling failure detection, auto scaling, rolling updates, and so on;

  • Scheduler: assigns workloads to suitable nodes (resource scheduling);

  • etcd: a key-value database that persists all of the important data in a K8s cluster;

  • Kubelet: interacts directly with the container engine to manage the container lifecycle; also responsible for volume (CVI) and network (CNI) management;

  • kube-proxy: writes rules into iptables or IPVS to implement service traffic mapping;


Other components:

  • CoreDNS: provides DNS name-to-IP resolution for K8s Services.

  • Dashboard: provides a B/S (browser/server) access layer for K8s, so the cluster can be managed through a web UI.

  • Ingress Controller: implements layer-7 (HTTP) proxying; the built-in Service only supports layer-4 (TCP/UDP) proxying.

  • Prometheus: provides monitoring for K8s, giving clear visibility into the resource usage of K8s components and Pods.

  • ELK: provides a log analysis platform for K8s.


How Kubernetes works:

  • The user submits the Docker containers to run to the K8s APIServer via the kubectl command;

  • After the APIServer receives the request, it stores it in the etcd key-value store;

  • The Controller Manager then creates the controller object the user defined (Pod, ReplicaSet, Deployment, DaemonSet, and so on);

  • The Scheduler then scans etcd and assigns the containers to be run to suitable hosts;

  • Finally, the kubelet interacts with Docker to create, delete, and stop the containers.

kube-proxy serves Services: it implements access from Pods to Services inside the cluster and from external NodePorts to Services; a short end-to-end example follows.
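As a concrete illustration of this flow once the cluster is up: kubectl submits a Deployment to the APIServer, the Controller Manager creates the ReplicaSet, the Scheduler places the Pods, and the kubelet starts the containers (a minimal sketch; the `web` Deployment is a hypothetical example, not part of the installation below):

kubectl create deployment web --image=nginx               # request goes to the APIServer and is stored in etcd
kubectl scale deployment web --replicas=2                 # the Controller Manager reconciles the replica count
kubectl get pods -o wide                                  # shows which node the Scheduler chose for each Pod
kubectl expose deployment web --port=80 --type=NodePort   # kube-proxy maps NodePort traffic to the Service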

II. Installing Kubernetes from Binaries

=======================================================================================

The installation below is a plain binary installation without a high-availability setup for kube-apiserver. Since we are installing K8s mainly to learn it and get things done with it, there is no need to worry about high availability here.

Making Kubernetes highly available is actually not hard. In the cloud, K8s is typically placed behind an SLB that proxies to two different servers; on premises the approach is much the same, using Keepalived plus Nginx.


Preparation:

| Hostname | OS | IP Address | Components |
| :-- | :-- | :-- | :-- |
| k8s-master01 | CentOS 7.4 | 192.168.1.1 | all components (to make full use of resources) |
| k8s-master02 | CentOS 7.4 | 192.168.1.2 | all components |
| k8s-node01 | CentOS 7.4 | 192.168.1.3 | docker, kubelet, kube-proxy |

1) Set the hostname on each node and configure the hosts file

[root@localhost ~]# hostnamectl set-hostname k8s-master01
[root@localhost ~]# bash
[root@k8s-master01 ~]# cat << END >> /etc/hosts
192.168.1.1 k8s-master01
192.168.1.2 k8s-master02
192.168.1.3 k8s-node01
END

2) Generate an SSH key pair on k8s-master01 and copy the public key to the other hosts

[root@k8s-master01 ~]# ssh-keygen -t rsa # press Enter three times to accept the defaults

[root@k8s-master01 ~]# ssh-copy-id root@192.168.1.1

[root@k8s-master01 ~]# ssh-copy-id root@192.168.1.2

[root@k8s-master01 ~]# ssh-copy-id root@192.168.1.3
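Optionally, confirm that passwordless login works before continuing (a quick hypothetical check; each command should print the remote hostname without prompting for a password):

[root@k8s-master01 ~]# for ip in 192.168.1.1 192.168.1.2 192.168.1.3; do ssh root@${ip} hostname; done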

3) Write the K8s environment initialization script

[root@k8s-master01 ~]# vim k8s-init.sh
#!/bin/bash
#****************************************************************#
# ScriptName: k8s-init.sh
# Initialize the machine. This needs to be executed on every machine.

# Install base tools, sync the clock, and configure the Aliyun repos
yum -y install wget ntpdate && ntpdate ntp1.aliyun.com
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum -y install epel-release

# Mkdir k8s directories
mkdir -p /opt/k8s/bin/
mkdir -p /data/k8s/docker
mkdir -p /data/k8s/k8s

# Disable SELinux (the sed makes it permanent; setenforce applies it immediately).
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Disable swap.
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab

# Turn off and disable the firewalld.
systemctl stop firewalld
systemctl disable firewalld

# Modify related kernel parameters.
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.tcp_tw_recycle = 0
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv6.conf.all.disable_ipv6 = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf &> /dev/null

# Add ipvs modules
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- br_netfilter
modprobe -- nf_conntrack
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
source /etc/sysconfig/modules/ipvs.modules

# Install rpm packages
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel vim bash-completion

# Add k8s bin to PATH
echo 'export PATH=/opt/k8s/bin:$PATH' >> /root/.bashrc && source /root/.bashrc

[root@k8s-master01 ~]# bash k8s-init.sh

4) Configure environment variables

[root@k8s-master01 ~]# vim environment.sh
#!/bin/bash

# Encryption key used to generate the EncryptionConfig
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Array of cluster Master IPs
export MASTER_IPS=(192.168.1.1 192.168.1.2)

# Array of hostnames matching the Master IPs
export MASTER_NAMES=(k8s-master01 k8s-master02)

# Array of cluster Node IPs
export NODE_IPS=(192.168.1.3)

# Array of hostnames matching the Node IPs
export NODE_NAMES=(k8s-node01)

# Array of all cluster machine IPs
export ALL_IPS=(192.168.1.1 192.168.1.2 192.168.1.3)

# Array of hostnames matching all IPs
export ALL_NAMES=(k8s-master01 k8s-master02 k8s-node01)

# List of etcd cluster service addresses
export ETCD_ENDPOINTS="https://192.168.1.1:2379,https://192.168.1.2:2379"

# IPs and ports for communication between etcd cluster members
export ETCD_NODES="k8s-master01=https://192.168.1.1:2380,k8s-master02=https://192.168.1.2:2380"

# kube-apiserver IP and port
export KUBE_APISERVER="https://192.168.1.1:6443"

# Network interface used for inter-node communication
export IFACE="ens32"

# etcd data directory
export ETCD_DATA_DIR="/data/k8s/etcd/data"

# etcd WAL directory; an SSD partition, or at least a different partition from ETCD_DATA_DIR, is recommended
export ETCD_WAL_DIR="/data/k8s/etcd/wal"

# Data directory for the K8s components
export K8S_DIR="/data/k8s/k8s"

# Docker data directory
export DOCKER_DIR="/data/k8s/docker"

## The parameters below usually do not need to be changed

# Token used for TLS bootstrapping; can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="41f7e4ba8b7be874fcff18bf5cf41a7c"

# Prefer currently unused network ranges for the Service and Pod networks
# Service CIDR: unroutable before deployment; routable inside the cluster afterwards (guaranteed by kube-proxy)
SERVICE_CIDR="10.20.0.0/16"

# Pod CIDR; a /16 range is recommended: unroutable before deployment; routable inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="10.10.0.0/16"

# Service port range (NodePort Range)
export NODE_PORT_RANGE="1-65535"

# Flanneld network configuration prefix
export FLANNEL_ETCD_PREFIX="/kubernetes/network"

# Kubernetes Service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.20.0.1"

# Cluster DNS Service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.20.0.254"

# Cluster DNS domain (without a trailing dot)
export CLUSTER_DNS_DOMAIN="cluster.local"

# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH

  • Change the IP addresses, network interface name, and similar values above to match your own environment.

[root@k8s-master01 ~]# chmod +x environment.sh && source environment.sh

The operations below only need to be performed on the k8s-master01 host (we will use for loops to push the results to the other hosts).

1. Create the CA Certificate and Key


The Kubernetes components use TLS certificates to encrypt and authenticate their communication, so the relevant TLS certificates must be generated before installation. They can be produced with openssl, cfssl, or easyrsa; below we use cfssl.


1) Install the cfssl toolset

[root@k8s-master01 ~]# mkdir -p /opt/k8s/cert

[root@k8s-master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /opt/k8s/bin/cfssl

[root@k8s-master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /opt/k8s/bin/cfssljson

[root@k8s-master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /opt/k8s/bin/cfssl-certinfo

[root@k8s-master01 ~]# chmod +x /opt/k8s/bin/*

2) Create the root certificate configuration file

[root@k8s-master01 ~]# mkdir -p /opt/k8s/work

[root@k8s-master01 ~]# cd /opt/k8s/work/

[root@k8s-master01 work]# cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "876000h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

  • signing: this certificate can be used to sign other certificates;

  • server auth: clients can use this CA to verify certificates presented by servers;

  • client auth: servers can use this CA to verify certificates presented by clients;

  • "expiry": "876000h": the certificate is valid for 100 years;

3) Create the root certificate signing request file

[root@k8s-master01 work]# cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF

  • CN (Common Name): kube-apiserver extracts this field from a client certificate and uses it as the username of the request.

  • C: country; ST: state or province; L: locality or city; O: organization or company name; OU: organizational unit or department.

4) Generate the CA key (ca-key.pem) and certificate (ca.pem)

[root@k8s-master01 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

  • Because the Kubernetes cluster uses mutual (two-way) TLS authentication, the generated files must be distributed to every host once created.
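Before distributing the files, it can be worth a quick look at what was issued; the cfssl-certinfo tool downloaded above decodes a certificate (an optional check):

[root@k8s-master01 work]# cfssl-certinfo -cert ca.pem # shows the subject, issuer, and the ~100-year validity window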

5) Loop over the host array with for and distribute the files to every host

[root@k8s-master01 work]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "mkdir -p /etc/kubernetes/cert"
scp ca*.pem ca-config.json root@${all_ip}:/etc/kubernetes/cert
done

2. Install etcd


etcd is a Raft-based distributed key-value store developed by CoreOS, commonly used for service discovery, shared configuration, and coordination (such as leader election and distributed locks). Kubernetes uses etcd to store all of its runtime data.


Download etcd

[root@k8s-master01 work]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.22/etcd-v3.3.22-linux-amd64.tar.gz
[root@k8s-master01 work]# tar -zxf etcd-v3.3.22-linux-amd64.tar.gz
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp etcd-v3.3.22-linux-amd64/etcd* root@${master_ip}:/opt/k8s/bin
ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
done

1) Create the etcd certificate and key

[root@k8s-master01 work]# cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.1.1",
    "192.168.1.2"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

  • hosts: the list of IP addresses or domain names authorized to use this etcd certificate.

2) Generate the certificate and key

[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "mkdir -p /etc/etcd/cert"
scp etcd*.pem root@${master_ip}:/etc/etcd/cert/
done

3) Create the startup script (systemd unit template)

[root@k8s-master01 work]# cat > etcd.service.template << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \
  --enable-v2=true \
  --data-dir=${ETCD_DATA_DIR} \
  --wal-dir=${ETCD_WAL_DIR} \
  --name=##MASTER_NAME## \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://##MASTER_IP##:2380 \
  --initial-advertise-peer-urls=https://##MASTER_IP##:2380 \
  --listen-client-urls=https://##MASTER_IP##:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://##MASTER_IP##:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=${ETCD_NODES} \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

[root@k8s-master01 work]# for (( A=0; A < 2; A++ ))
do
sed -e "s/##MASTER_NAME##/${MASTER_NAMES[A]}/" -e "s/##MASTER_IP##/${MASTER_IPS[A]}/" etcd.service.template > etcd-${MASTER_IPS[A]}.service
done

4) Start etcd

[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp etcd-${master_ip}.service root@${master_ip}:/etc/systemd/system/etcd.service
ssh root@${master_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
ssh root@${master_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd"
done

Check the current etcd leader

[root@k8s-master01 work]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status

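The same client flags can also drive a health probe against both members (an optional check using etcdctl's endpoint health subcommand):

[root@k8s-master01 work]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint health

Each endpoint should report healthy; if one does not, inspect its logs with journalctl -u etcd on that host.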

3. Install the Flannel Network Plugin


Flannel is an overlay-network solution for cross-host container networking: it encapsulates TCP payloads inside another network packet for routing and delivery. Flannel is written in Go, and its main job is to let containers on different hosts communicate with each other.


Download Flannel

[root@k8s-master01 work]# mkdir flannel
[root@k8s-master01 work]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@k8s-master01 work]# tar -zxf flannel-v0.11.0-linux-amd64.tar.gz -C flannel
[root@k8s-master01 work]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp flannel/{flanneld,mk-docker-opts.sh} root@${all_ip}:/opt/k8s/bin/
ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
done

1) Create the Flannel certificate and key

[root@k8s-master01 work]# cat > flanneld-csr.json << EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

2) Generate the certificate and key

[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
[root@k8s-master01 work]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "mkdir -p /etc/flanneld/cert"
scp flanneld*.pem root@${all_ip}:/etc/flanneld/cert
done

Write the Pod network configuration into etcd

[root@k8s-master01 work]# etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/opt/k8s/work/ca.pem \
  --cert-file=/opt/k8s/work/flanneld.pem \
  --key-file=/opt/k8s/work/flanneld-key.pem \
  mk ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'


3) Write the startup script

[root@k8s-master01 work]# cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \
  -etcd-endpoints=${ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  -iface=${IFACE} \
  -ip-masq
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

4) Start and verify

[root@k8s-master01 work]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp flanneld.service root@${all_ip}:/etc/systemd/system/
ssh root@${all_ip} "systemctl daemon-reload && systemctl enable flanneld --now"
done

1) View the Pod network configuration

[root@k8s-master01 work]# etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/config


2) List the allocated Pod subnets

[root@k8s-master01 work]# etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets


3) View the node IP and Flannel interface address for a given Pod subnet

[root@k8s-master01 work]# etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/subnets/10.10.208.0-21


4. Install Docker


Docker runs and manages the containers; the kubelet interacts with it through the Container Runtime Interface (CRI).


Download Docker

[root@k8s-master01 work]# wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.12.tgz

[root@k8s-master01 work]# tar -zxf docker-19.03.12.tgz

Distribute the Docker binaries

[root@k8s-master01 work]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp docker/* root@${all_ip}:/opt/k8s/bin/
ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
done

1) Create the startup script

[root@k8s-master01 work]# cat > docker.service << "EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
WorkingDirectory=##DOCKER_DIR##
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

[root@k8s-master01 work]# sed -i -e "s|##DOCKER_DIR##|${DOCKER_DIR}|" docker.service
[root@k8s-master01 work]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp docker.service root@${all_ip}:/etc/systemd/system/
done

Configure the daemon.json file

[root@k8s-master01 work]# cat > docker-daemon.json << EOF
{
  "registry-mirrors": ["https://ipbtg5l0.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "data-root": "${DOCKER_DIR}/data",
  "exec-root": "${DOCKER_DIR}/exec",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
[root@k8s-master01 work]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "mkdir -p /etc/docker/ ${DOCKER_DIR}/{data,exec}"
scp docker-daemon.json root@${all_ip}:/etc/docker/daemon.json
done

2) Start Docker

[root@k8s-master01 work]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "systemctl daemon-reload && systemctl enable docker --now"
done
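Once Docker is up, it should have picked up the flannel-generated network options, so each node's docker0 bridge lands inside that node's allocated Pod subnet (a quick optional check, run on any node):

[root@k8s-master01 work]# cat /run/flannel/docker # the DOCKER_NETWORK_OPTIONS (--bip, --mtu) written by mk-docker-opts.sh
[root@k8s-master01 work]# ip addr show docker0    # the bridge address should fall inside the subnet shown above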

5. Install kubectl


Download kubectl

[root@k8s-master01 work]# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.3/kubernetes-client-linux-amd64.tar.gz
[root@k8s-master01 work]# tar -zxf kubernetes-client-linux-amd64.tar.gz
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp kubernetes/client/bin/kubectl root@${master_ip}:/opt/k8s/bin/
ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
done

1) Create the admin certificate and key

[root@k8s-master01 work]# cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

2) Generate the certificate and key

[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

3) Create the kubeconfig file

Set the cluster parameters

[root@k8s-master01 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubectl.kubeconfig

Set the client authentication parameters

[root@k8s-master01 work]# kubectl config set-credentials admin \
  --client-certificate=/opt/k8s/work/admin.pem \
  --client-key=/opt/k8s/work/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

Set the context parameters

[root@k8s-master01 work]# kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig

Set the default context

[root@k8s-master01 work]# kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
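The assembled kubeconfig can be reviewed before it is distributed (an optional check; embedded certificate data is shown redacted):

[root@k8s-master01 work]# kubectl config view --kubeconfig=kubectl.kubeconfig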

4) Distribute the kubectl configuration file and set up command completion

[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "mkdir -p ~/.kube"
scp kubectl.kubeconfig root@${master_ip}:~/.kube/config
ssh root@${master_ip} "echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc"
ssh root@${master_ip} "echo 'source <(kubectl completion bash)' >> ~/.bashrc"
done

The following commands need to be run on both k8s-master01 and k8s-master02:

[root@k8s-master01 work]# source /usr/share/bash-completion/bash_completion
[root@k8s-master01 work]# source <(kubectl completion bash)
[root@k8s-master01 work]# source ~/.bashrc

III. Installing the Kubernetes Components

======================================================================================

1. Install kube-apiserver


Download the Kubernetes server binaries

[root@k8s-master01 work]# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.3/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 work]# tar -zxf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 work]# cd kubernetes
[root@k8s-master01 kubernetes]# tar -zxf kubernetes-src.tar.gz
[root@k8s-master01 kubernetes]# cd ..
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp -rp kubernetes/server/bin/{apiextensions-apiserver,kube-apiserver,kube-controller-manager,kube-scheduler,kubeadm,kubectl,mounter} root@${master_ip}:/opt/k8s/bin/
ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
done

1) Create the Kubernetes certificate and key

[root@k8s-master01 work]# cat > kubernetes-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.1.1",
    "192.168.1.2",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local."
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

2) Generate the certificate and key

[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "mkdir -p /etc/kubernetes/cert"
scp kubernetes*.pem root@${master_ip}:/etc/kubernetes/cert/
done

3) Configure kube-apiserver encryption and auditing

Create the encryption configuration file

[root@k8s-master01 work]# cat > encryption-config.yaml << EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: zhangsan
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp encryption-config.yaml root@${master_ip}:/etc/kubernetes/encryption-config.yaml
done
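Once kube-apiserver is running (it is started later in this section), one way to confirm that Secrets really are encrypted at rest is to create a test Secret and read its raw key from etcd; the stored value should begin with the k8s:enc:aescbc:v1: prefix rather than plaintext (the Secret name here is a hypothetical example):

[root@k8s-master01 work]# kubectl create secret generic test-enc --from-literal=foo=bar
[root@k8s-master01 work]# ETCDCTL_API=3 etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/opt/k8s/work/ca.pem \
  --cert=/opt/k8s/work/etcd.pem \
  --key=/opt/k8s/work/etcd-key.pem \
  get /registry/secrets/default/test-enc | hexdump -C | head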

Create the audit policy file

[root@k8s-master01 work]# cat > audit-policy.yaml << EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch
  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get
  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update
  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get
  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list
  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'
  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events
  # Node and pod status calls from nodes are high-volume and can be large; don't log responses for expected updates from nodes.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch
  # deletecollection calls can be large; don't log responses for expected namespace deletions.
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection
  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data, so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch
  # Default level for known APIs.
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF

[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp audit-policy.yaml root@${master_ip}:/etc/kubernetes/audit-policy.yaml
done

4) Configure the metrics-server client certificate

Create the certificate signing request file for metrics-server

[root@k8s-master01 work]# cat > proxy-client-csr.json << EOF
{
  "CN": "system:metrics-server",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate and key

[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp proxy-client*.pem root@${master_ip}:/etc/kubernetes/cert/
done

5) Create the startup script

[root@k8s-master01 work]# cat > kube-apiserver.service.template << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \
  --insecure-port=0 \
  --secure-port=6443 \
  --bind-address=##MASTER_IP## \
  --advertise-address=##MASTER_IP## \
  --default-not-ready-toleration-seconds=360 \
  --default-unreachable-toleration-seconds=360 \
  --feature-gates=DynamicAuditing=true \
  --max-mutating-requests-inflight=2000 \
  --max-requests-inflight=4000 \
  --default-watch-cache-size=200 \
  --delete-collection-workers=2 \
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=${ETCD_ENDPOINTS} \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --audit-dynamic-configuration \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-truncate-enabled=true \
  --audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --profiling \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --enable-bootstrap-token-auth=true \
  --requestheader-allowed-names="system:metrics-server" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \
  --allow-privileged=true \
  --apiserver-count=3 \
  --event-ttl=168h \
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --kubelet-https=true \
  --kubelet-timeout=10s \
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \
  --service-cluster-ip-range=${SERVICE_CIDR} \
  --service-node-port-range=${NODE_PORT_RANGE} \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6) Start kube-apiserver and verify

[root@k8s-master01 work]# for (( A=0; A < 2; A++ ))
do
sed -e "s/##MASTER_NAME##/${MASTER_NAMES[A]}/" -e "s/##MASTER_IP##/${MASTER_IPS[A]}/" kube-apiserver.service.template > kube-apiserver-${MASTER_IPS[A]}.service
done
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp kube-apiserver-${master_ip}.service root@${master_ip}:/etc/systemd/system/kube-apiserver.service
ssh root@${master_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-apiserver --now"
done

View the data kube-apiserver has written into etcd

[root@k8s-master01 work]# ETCDCTL_API=3 etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/opt/k8s/work/ca.pem \
  --cert=/opt/k8s/work/etcd.pem \
  --key=/opt/k8s/work/etcd-key.pem \
  get /registry/ --prefix --keys-only

View cluster information

[root@k8s-master01 work]# kubectl cluster-info

[root@k8s-master01 work]# kubectl get all --all-namespaces

[root@k8s-master01 work]# kubectl get componentstatuses

[root@k8s-master01 work]# netstat -anpt | grep 6443


Grant kube-apiserver permission to access the kubelet API

[root@k8s-master01 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
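The binding can be confirmed afterwards (an optional check):

[root@k8s-master01 work]# kubectl get clusterrolebinding kube-apiserver:kubelet-apis -o yaml

Without this binding, requests that the apiserver forwards to the kubelet, such as kubectl exec and kubectl logs, would fail with authorization errors.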

2. Install kube-controller-manager


1) Create the kube-controller-manager certificate and key

[root@k8s-master01 work]# cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "192.168.1.1",
    "192.168.1.2"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF

2) Generate the certificate and key

[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp kube-controller-manager*.pem root@${master_ip}:/etc/kubernetes/cert/
done

3) Create the kubeconfig file

[root@k8s-master01 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig
[root@k8s-master01 work]# kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
[root@k8s-master01 work]# kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
[root@k8s-master01 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp kube-controller-manager.kubeconfig root@${master_ip}:/etc/kubernetes/
done

4) Create the startup script

[root@k8s-master01 work]# cat > kube-controller-manager.service.template << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \
  --secure-port=10257 \
  --bind-address=127.0.0.1 \
  --profiling \
  --cluster-name=kubernetes \
  --controllers=*,bootstrapsigner,tokencleaner \
  --kube-api-qps=1000 \
  --kube-api-burst=2000 \
  --leader-elect \
  --use-service-account-credentials \
  --concurrent-service-syncs=2 \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-allowed-names="system:metrics-server" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=87600h \
  --horizontal-pod-autoscaler-sync-period=10s \
  --concurrent-deployment-syncs=10 \
  --concurrent-gc-syncs=30 \
  --node-cidr-mask-size=24 \
  --service-cluster-ip-range=${SERVICE_CIDR} \
  --cluster-cidr=${CLUSTER_CIDR} \
  --pod-eviction-timeout=6m \
  --terminated-pod-gc-threshold=10000 \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

5) Start and verify

[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp kube-controller-manager.service.template root@${master_ip}:/etc/systemd/system/kube-controller-manager.service
ssh root@${master_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager --now"
done

View the exposed metrics

[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://127.0.0.1:10257/metrics | head

View the permissions

[root@k8s-master01 work]# kubectl describe clusterrole system:kube-controller-manager

[root@k8s-master01 work]# kubectl get clusterrole | grep controller

[root@k8s-master01 work]# kubectl describe clusterrole system:controller:deployment-controller

View the current leader

[root@k8s-master01 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
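The lease holder is recorded in the control-plane.alpha.kubernetes.io/leader annotation of that endpoints object, so a quick way to see just the leader is (an optional shortcut):

[root@k8s-master01 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml | grep leader

The holderIdentity field in the annotation names the master currently acting as leader; stopping that instance should cause the other one to take over within the lease duration.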

3. Install kube-scheduler


1) Create the kube-scheduler certificate and key

[root@k8s-master01 work]# cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.1.1",
    "192.168.1.2"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF

2) Generate the certificate and key

[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp kube-scheduler*.pem root@${master_ip}:/etc/kubernetes/cert/
done

3) Create the kubeconfig file

[root@k8s-master01 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig
[root@k8s-master01 work]# kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
[root@k8s-master01 work]# kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
[root@k8s-master01 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp kube-scheduler.kubeconfig root@${master_ip}:/etc/kubernetes/
done

4) Create the kube-scheduler configuration file

[root@k8s-master01 work]# cat > kube-scheduler.yaml.template << EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 127.0.0.1:10251
leaderElection:
  leaderElect: true
metricsBindAddress: 127.0.0.1:10251
EOF
[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp kube-scheduler.yaml.template root@${master_ip}:/etc/kubernetes/kube-scheduler.yaml
done

5) Create the startup script

[root@k8s-master01 work]# cat > kube-scheduler.service.template << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \
  --port=0 \
  --secure-port=10259 \
  --bind-address=127.0.0.1 \
  --config=/etc/kubernetes/kube-scheduler.yaml \
  --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-allowed-names="system:metrics-server" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --logtostderr=true \
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF

6) Start and verify

[root@k8s-master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
scp kube-scheduler.service.template root@${master_ip}:/etc/systemd/system/kube-scheduler.service
ssh root@${master_ip} "mkdir -p ${K8S_DIR}/kube-scheduler"
ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-scheduler --now"
done

[root@k8s-master01 work]# netstat -nlpt | grep kube-schedule

  • 10251: serves HTTP; insecure port, no authentication or authorization required;

  • 10259: serves HTTPS; secure port, authentication and authorization required (both ports expose /metrics and /healthz)
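Because port 10251 requires no credentials, it offers the quickest liveness probe (an optional check):

[root@k8s-master01 work]# curl -s http://127.0.0.1:10251/healthz # expected output: ok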

View the exposed metrics

[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://127.0.0.1:10259/metrics | head

View the permissions

[root@k8s-master01 work]# kubectl describe clusterrole system:kube-scheduler

View the current leader

[root@k8s-master01 work]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml

4. Install kubelet


[root@k8s-master01 work]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp kubernetes/server/bin/kubelet root@${all_ip}:/opt/k8s/bin/
ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
done

[root@k8s-master01 work]# for all_name in ${ALL_NAMES[@]}
do
echo ">>> ${all_name}"
export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:${all_name} \
  --kubeconfig ~/.kube/config)
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
kubectl config use-context default --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
done

[root@k8s-master01 work]# kubeadm token list --kubeconfig ~/.kube/config # list the tokens kubeadm created for each node
[root@k8s-master01 work]# kubectl get secrets -n kube-system | grep bootstrap-token # view the Secret associated with each token


[root@k8s-master01 work]# for all_name in ${ALL_NAMES[@]}
do
echo ">>> ${all_name}"
scp kubelet-bootstrap-${all_name}.kubeconfig root@${all_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done

Create the kubelet configuration file

[root@k8s-master01 work]# cat > kubelet-config.yaml.template << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##ALL_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##ALL_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF

[root@k8s-master01 work]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
sed -e "s/##ALL_IP##/${all_ip}/" kubelet-config.yaml.template > kubelet-config-${all_ip}.yaml.template
scp kubelet-config-${all_ip}.yaml.template root@${all_ip}:/etc/kubernetes/kubelet-config.yaml
done

1) Create the kubelet startup script

[root@k8s-master01 work]# cat > kubelet.service.template << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/cert \
  --cgroup-driver=cgroupfs \
  --cni-conf-dir=/etc/cni/net.d \
  --container-runtime=docker \
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \
  --root-dir=${K8S_DIR}/kubelet \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet-config.yaml \
  --hostname-override=##ALL_NAME## \
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause-amd64:3.2 \
  --image-pull-progress-deadline=15m \
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \
  --logtostderr=true \
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
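The template can then be rendered per node, distributed, and started following the same pattern used for the other components above (a sketch under that assumption, mirroring the etcd and kube-apiserver steps):

[root@k8s-master01 work]# for all_name in ${ALL_NAMES[@]}
do
echo ">>> ${all_name}"
sed -e "s/##ALL_NAME##/${all_name}/" kubelet.service.template > kubelet-${all_name}.service
scp kubelet-${all_name}.service root@${all_name}:/etc/systemd/system/kubelet.service
ssh root@${all_name} "mkdir -p ${K8S_DIR}/kubelet"
ssh root@${all_name} "systemctl daemon-reload && systemctl enable kubelet --now"
done

Because serverTLSBootstrap is enabled in kubelet-config.yaml, each kubelet submits a certificate signing request on first start; pending CSRs can be listed with kubectl get csr and approved with kubectl certificate approve <name>.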