k8s Binary Installation

1 Environment Preparation

Pod network:     10.96.0.0/16
Service network: 10.244.0.0/16

Cluster role   IP               Hostname     Components
Control node   192.168.241.191  k8smaster01  apiserver, controller-manager, scheduler, etcd, docker, keepalived, haproxy
Control node   192.168.241.192  k8smaster02  apiserver, controller-manager, scheduler, etcd, docker, keepalived, haproxy
Worker node    192.168.241.193  k8snode01    etcd, docker, keepalived, haproxy
VIP            192.168.241.190

Configure the environment:

# Set the hostname (run on each node with its own name; on the first master:)
hostnamectl set-hostname k8smaster01 && bash

# Configure name resolution in /etc/hosts
cat >> /etc/hosts << EOF
192.168.241.191 k8smaster01
192.168.241.192 k8smaster02
192.168.241.193 k8snode01
EOF

# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# Disable SELinux
setenforce 0 && sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Disable swap
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab

# Synchronize the time
yum install ntpdate -y && ntpdate time.windows.com

# Load the br_netfilter module
modprobe br_netfilter

# Verify the module is loaded
lsmod | grep br_netfilter

# Set kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the kernel parameters
sysctl -p /etc/sysctl.d/k8s.conf

Set up passwordless SSH login

ssh-keygen    # press Enter through all the prompts

Install the local SSH public key into the corresponding account on the remote hosts:

ssh-copy-id root@192.168.241.193
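To push the key to every other node in one pass, a small loop can be used (host names taken from the table above; a minimal sketch):

for host in k8smaster02 k8snode01; do ssh-copy-id root@$host; done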

Install Docker

# Install prerequisite packages. yum-utils provides yum-config-manager, and the devicemapper storage driver needs device-mapper-persistent-data and lvm2.
yum install -y yum-utils  device-mapper-persistent-data  lvm2
# Add the Aliyun mirror as the package repository:
yum-config-manager --add-repo  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install the latest Docker Engine (Community) and containerd
yum install docker-ce docker-ce-cli containerd.io -y

# Enable and start Docker
systemctl enable docker && systemctl start docker

# Configure the Aliyun registry mirror, log rotation, and the systemd cgroup driver
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts":["native.cgroupdriver=systemd"],
  "log-driver":"json-file",
  "log-opts":{"max-size":"50m","max-file":"3"} 
}
EOF
# Reload the configuration
systemctl daemon-reload
# Restart Docker
systemctl restart docker
# Inspect the Docker configuration
docker info

Whether to use ipvs or iptables for kube-proxy can be decided by cluster size; for a cluster of a few dozen nodes the default iptables mode is sufficient.

2 Building the etcd Cluster

etcd is a distributed key-value store, and Kubernetes uses it for all of its cluster data, so an etcd database has to be prepared first. To avoid a single point of failure, etcd should be deployed as a cluster: three machines are used here, which tolerates one failure; five machines would tolerate two.

Self-signed certificates:

# Create an ssl working directory
mkdir ~/ssl
cd ~/ssl
# Prepare the cfssl certificate tooling
# cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to work with than openssl.
# Run this on any one server; here the master node is used.
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Template configuration and CSR JSON files can be obtained from cfssl (cfssl print-defaults config / cfssl print-defaults csr) and then adjusted as needed.

CN: Common Name; kube-apiserver extracts this field from the certificate and uses it as the requesting user name (User Name).

O: Organization; kube-apiserver extracts this field and uses it as the group (Group) the requesting user belongs to.

L field: city / locality
ST field: province / state
C field: two-letter country code, e.g. CN for China

 

Create the CA certificate signing request (CSR) file

cat ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "system"
        }
    ]
}

Generate the CA by running:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Create the CA signing configuration file

cat ca-config.json

{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

Generate the etcd certificate

Create the etcd CSR; change the IPs in hosts to the IPs of your own etcd nodes.

cat etcd-csr.json

{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "192.168.241.190",
        "192.168.241.191",
        "192.168.241.192",
        "192.168.241.193"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "system"
        }
    ]
}

Sign the etcd certificate with the CA:

 cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

This yields four certificate files: ca.pem, ca-key.pem, etcd.pem, and etcd-key.pem.

Deploy the etcd cluster

1. Create the directories and unpack etcd

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.5.1-linux-amd64.tar.gz
mv etcd-v3.5.1-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

2. Create the etcd configuration file

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.241.191:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.241.191:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.241.191:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.241.191:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.241.191:2380,etcd-2=https://192.168.241.192:2380,etcd-3=https://192.168.241.193:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF



# Notes:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: cluster (peer) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
ETCD_INITIAL_CLUSTER: addresses of the cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one

3. Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/etcd.pem \
--key-file=/opt/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-cert-file=/opt/etcd/ssl/etcd.pem \
--peer-key-file=/opt/etcd/ssl/etcd-key.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

4. Copy the certificates generated earlier and configure startup

# Copy the certificates generated earlier to the path referenced by the unit file:
scp ~/ssl/*.pem /opt/etcd/ssl/
# Start etcd and enable it at boot
systemctl daemon-reload && systemctl start etcd && systemctl enable etcd

5. Copy the files to the other nodes and start etcd

# Copy the etcd directory
for i in k8smaster02 k8snode01;do scp -r /opt/etcd $i:/opt/;done

# Copy the systemd unit file
for i in k8smaster02 k8snode01;do scp -r /usr/lib/systemd/system/etcd.service $i:/usr/lib/systemd/system/ ;done
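The copied etcd.conf still contains k8smaster01's values. Before starting etcd on the other two nodes, change ETCD_NAME and the listen/advertise URLs to the local node's name and IP. A hedged sketch for k8smaster02 (repeat analogously with etcd-3 and 192.168.241.193 on k8snode01):

# run on k8smaster02; leave the ETCD_INITIAL_CLUSTER line untouched
sed -i 's/ETCD_NAME="etcd-1"/ETCD_NAME="etcd-2"/' /opt/etcd/cfg/etcd.conf
sed -i '/ETCD_INITIAL_CLUSTER=/!s/192.168.241.191/192.168.241.192/g' /opt/etcd/cfg/etcd.conf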

When starting etcd, start the service on k8smaster01 first; it will appear to hang in the starting state. Then start etcd on k8smaster02, and only then will the etcd on k8smaster01 come up normally (it blocks until a quorum of members is reachable).

6. Check the etcd cluster health

/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.241.191:2379,https://192.168.241.192:2379,https://192.168.241.193:2379" endpoint health

3 Installing kube-apiserver

Download: kubernetes/CHANGELOG at master · kubernetes/kubernetes · GitHub

The download packages can be found through the link above.

Download the server package (kubernetes-server-linux-amd64.tar.gz).

Upload the downloaded file to the k8smaster01 node and unpack it.

Go into the kubernetes/server/bin directory of the unpacked archive; it contains the component executables.

Create the CSR file and request the kube-apiserver certificate from the CA

cat kube-apiserver-csr.json 
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "192.168.241.190",
        "192.168.241.191",
        "192.168.241.192",
        "192.168.241.193",
        "10.244.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "system"
        }
    ]
}


# Note: if the hosts field is not empty, it must list the IPs or domain names authorized to use this certificate. Since the certificate will be used by the Kubernetes master cluster, include the IPs of all master nodes, plus the first IP of the Service network (the first IP of the service-cluster-ip-range passed to kube-apiserver, here 10.244.0.1).

Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

Create the Kubernetes directories and copy the files into them

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
# Copy the certificate files
cp ~/ssl/*.pem /opt/kubernetes/ssl/

The /opt/kubernetes directory structure

Add the kube-apiserver configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--anonymous-auth=false \\
--bind-address=192.168.241.191 \\
--secure-port=6443 \\
--advertise-address=192.168.241.191 \\
--insecure-port=0 \\
--authorization-mode=Node,RBAC \\
--runtime-config=api/all=true \\
--enable-bootstrap-token-auth \\
--service-cluster-ip-range=10.244.0.0/16 \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/kube-apiserver.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/kube-apiserver-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--kubelet-client-certificate=/opt/kubernetes/ssl/kube-apiserver.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/kube-apiserver-key.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/etcd.pem \\
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \\
--etcd-servers=https://192.168.241.191:2379,https://192.168.241.192:2379,https://192.168.241.193:2379 \\
--enable-swagger-ui=true \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/kube-apiserver-audit.log \\
--event-ttl=1h \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/opt/kubernetes/logs \\
--v=4"
EOF

# Notes:
--logtostderr: log to standard error
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: https secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to talk to kubelets
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

Manage kube-apiserver with systemd

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Enabling the TLS Bootstrapping mechanism

Every node needs a CA-signed certificate to communicate with the apiserver, and issuing these certificates by hand becomes a huge amount of work as the node count grows. To simplify this, TLS bootstrapping is used: the kubelet requests a certificate from the apiserver as a low-privilege user, and the kubelet's certificate is then signed dynamically by the apiserver.

How it works:

1. TLS encrypts the communication and prevents eavesdropping; if the certificate is not trusted, the connection cannot be established and no communication takes place.

2. Once the TLS connection is established, access to the apiserver is controlled according to the user's permissions.

When TLS is in use, the apiserver reads the CN field of the client certificate as the user name and the O field as the group.

kubelet first-start flow

TLS bootstrapping lets the kubelet request a certificate from the apiserver and then use that certificate to connect to the apiserver.

The apiserver configuration points at a token.csv file containing a preset user. That user's token, together with the apiserver CA, is written into the bootstrap.kubeconfig used by the kubelet. On its first request, the kubelet uses bootstrap.kubeconfig to establish a TLS connection with the apiserver through the trusted CA, and presents the user token from bootstrap.kubeconfig to declare its RBAC identity to the apiserver.

token.csv format:
3940fd7fbb391d1b4d861ad17a1f0613,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Authentication flow:

Create token.csv

# Format: token,user name,UID,user group
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
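The kube-apiserver configuration above references /opt/kubernetes/cfg/token.csv, so copy the generated file there (assuming it was created in the current working directory):

cp token.csv /opt/kubernetes/cfg/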

Start kube-apiserver

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

4 Installing kubectl

kubectl is the client tool for operating on Kubernetes resources: create, delete, update, query, and so on.

How does kubectl know which cluster to connect to? The same way kubeadm shows after initializing a cluster: create a .kube directory under /root and put a config file in it; when a command runs, /root/.kube/config is loaded automatically to decide which cluster to manage. The config file can also be specified through an environment variable, export KUBECONFIG=<path to config>; the environment variable takes precedence, and only if it is unset does kubectl fall back to the config file under /root/.kube.
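A minimal example (usable once the config file below has been created) of pointing kubectl at an explicit kubeconfig and checking which cluster it resolves to:

export KUBECONFIG=/root/.kube/config
kubectl config view --minify    # shows the cluster, user and context currently in effect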

Next, create such a config file.

# Certificate signing request file
cat > admin-csr.json << EOF
{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:masters",
            "OU": "system"
        }
    ]
}
EOF


# CN: the user; since the certificate is signed by our CA, authentication succeeds
# O: the group; Kubernetes has many pre-authorized groups, and group membership grants the corresponding API permissions
# O set to system:masters: the built-in cluster-admin ClusterRoleBinding binds the system:masters group to the cluster-admin ClusterRole

Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

Copy the certificates to the Kubernetes ssl directory

cp admin*.pem /opt/kubernetes/ssl/

Configure the security context (kubeconfig)

The kubeconfig is kubectl's configuration file; it contains everything needed to access the apiserver, such as the apiserver address, the CA certificate, and the client's own certificate.

# 1. Set the cluster parameters: the CA certificate and the apiserver address
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.241.191:6443 --kubeconfig=kube.config
# 2. Set the client credentials: the user name and the user's certificate
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
# 3. Set the context
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
# 4. Switch to the context
kubectl config use-context kubernetes --kubeconfig=kube.config
# 5. Copy the file to ~/.kube
mkdir ~/.kube
cp kube.config ~/.kube/config
# 6. Grant the kubernetes certificate user access to the kubelet API
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

Check the status of the cluster components
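A quick check (the output shape depends on the Kubernetes version; scheduler and controller-manager will only report Healthy once they are deployed in the later sections):

kubectl get cs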

Copy the files to k8smaster02

scp /usr/bin/kubectl root@192.168.241.192:/usr/bin/kubectl

scp ~/.kube/config root@192.168.241.192:/root/.kube/

Configure kubectl command completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile

5 Deploying kube-controller-manager

# Certificate signing request file

cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "hosts": [
        "127.0.0.1",
        "192.168.241.190",
        "192.168.241.191",
        "192.168.241.192",
        "192.168.241.193"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:kube-controller-manager",
            "OU": "system"
        }
    ]
}
EOF

Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Copy the certificates to the Kubernetes ssl directory
cp kube-controller-manager*.pem /opt/kubernetes/ssl/

Create the kubeconfig for kube-controller-manager

# 1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.241.191:6443 --kubeconfig=kube-controller-manager.kubeconfig
# 2. Set the client credentials
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
# 3. Set the context
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
# 4. Switch to the context
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
# 5. Copy the file to the cfg directory
cp kube-controller-manager.kubeconfig /opt/kubernetes/cfg

Create the configuration file kube-controller-manager.conf (written to the path referenced by the unit file below)

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \\
--secure-port=10252 \\
--bind-address=127.0.0.1 \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--service-cluster-ip-range=10.244.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.96.0.0/16 \\
--experimental-cluster-signing-duration=87600h \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--leader-elect=true \\
--feature-gates=RotateKubeletServerCertificate=true \\
--controllers=*,bootstrapsigner,tokencleaner \\
--horizontal-pod-autoscaler-use-rest-clients=true \\
--horizontal-pod-autoscaler-sync-period=10s \\
--tls-cert-file=/opt/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/kube-controller-manager-key.pem \\
--use-service-account-credentials=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/opt/kubernetes/logs \\
--v=2"
EOF

Hand kube-controller-manager over to systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Start

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

Copy the files to k8smaster02

# Copy the certificates
scp kube-controller-manager*.pem root@192.168.241.192:/opt/kubernetes/ssl/
# Copy the config files
scp /opt/kubernetes/cfg/kube-controller-manager.conf /opt/kubernetes/cfg/kube-controller-manager.kubeconfig root@192.168.241.192:/opt/kubernetes/cfg/
# Copy the systemd unit file
scp /usr/lib/systemd/system/kube-controller-manager.service root@192.168.241.192:/usr/lib/systemd/system/

6 Deploying kube-scheduler

Create the CSR file

cat >  kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
        "127.0.0.1",
        "192.168.241.190",
        "192.168.241.191",
        "192.168.241.192",
        "192.168.241.193"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:kube-scheduler",
            "OU": "system"
        }
    ]
}
EOF

Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Create the kubeconfig for kube-scheduler

# 1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.241.191:6443 --kubeconfig=kube-scheduler.kubeconfig
# 2. Set the client credentials
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem  --client-key=kube-scheduler-key.pem  --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
# 3. Set the context
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
# 4. Switch to the context
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
# 5. Copy the file to the cfg directory
cp kube-scheduler.kubeconfig /opt/kubernetes/cfg

Create the configuration file kube-scheduler.conf (written to the path referenced by the unit file below)

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/opt/kubernetes/logs \
--v=2"
EOF

Hand kube-scheduler over to systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Start

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler

Copy the files to k8smaster02

scp /opt/kubernetes/cfg/kube-scheduler.conf /opt/kubernetes/cfg/kube-scheduler.kubeconfig root@192.168.241.192:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/kube-scheduler.service root@192.168.241.192:/usr/lib/systemd/system/

7 Deploying kubelet

The kubelet on each node periodically calls the apiserver's REST API to report its own status, and the apiserver writes the node status into etcd. The kubelet also watches Pod information through the apiserver and manages the Pods on its node accordingly: creating, deleting, and updating them.

Run the following on k8smaster01:

cd ~/ssl
# Load the token into an environment variable
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /opt/kubernetes/cfg/token.csv)

# Set the cluster information
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.241.191:6443 --kubeconfig=kubelet-bootstrap.kubeconfig

# Create the user and its token credential
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig

# Associate the user with the cluster
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig

# Switch to the context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

# Bind the cluster role
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Create the configuration file kubelet.json

"cgroupDriver": "systemd" must match Docker's cgroup driver, and address must be replaced with the node's own IP address.

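Before writing kubelet.json it is worth confirming the driver Docker actually reports (it was set to systemd in daemon.json earlier):

docker info | grep -i 'cgroup driver'
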
cat > kubelet.json << EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/opt/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.241.191",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.244.0.2"]
}
EOF

Hand kubelet over to systemd

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/kubelet-bootstrap.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.json \
--network-plugin=cni \
--pod-infra-container-image=k8s.gcr.io/pause:3.2 \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/opt/kubernetes/logs \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Copy the files to the other nodes

scp kubernetes/server/bin/kubelet 192.168.241.192:/opt/kubernetes/bin/

scp kubelet-bootstrap.kubeconfig kubelet.json 192.168.241.192:/opt/kubernetes/cfg

scp /usr/lib/systemd/system/kubelet.service 192.168.241.193:/usr/lib/systemd/system/ 

scp ca.pem 192.168.241.193:/opt/kubernetes/ssl/
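The scp commands above only populate the other nodes; on k8smaster01 itself the kubelet binary and configuration files also have to be put into the paths referenced by the unit file (a hedged sketch; adjust paths to wherever the files were actually created):

cp kubernetes/server/bin/kubelet /opt/kubernetes/bin/
cp kubelet-bootstrap.kubeconfig kubelet.json /opt/kubernetes/cfg/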

Start kubelet

mkdir /var/lib/kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
# View the pending CSR requests
kubectl get csr
# Approve the node's request to join the cluster
kubectl certificate approve <csr-name>
# List the nodes
kubectl get nodes

After kubelet starts, approve the kubelet bootstrap request on the k8smaster01 node.

8 Deploying kube-proxy

Create the CSR and the certificate files

vim kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "HeiBei",
            "L": "ShiJiaZhuang",
            "O": "k8s",
            "OU": "system"
        }
    ]
}



# Have the CA sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# Create the kubeconfig: set the cluster information and the CA
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.241.191:6443 --kubeconfig=kube-proxy.kubeconfig

# Add the user credentials
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

# Associate the user with the cluster
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

# Switch to the context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
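The kube-proxy.yaml below reads the kubeconfig from /opt/kubernetes/cfg, and the unit file runs the binary from /opt/kubernetes/bin, so both need to be put in place first (a hedged sketch following the pattern used for the other components):

cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
cp kubernetes/server/bin/kube-proxy /opt/kubernetes/bin/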

Create the kube-proxy configuration file

vim /opt/kubernetes/cfg/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
# node IP
bindAddress: 192.168.241.191
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
# pod network CIDR (cluster CIDR)
clusterCIDR: 10.96.0.0/16
# node IP
healthzBindAddress: 192.168.241.191:10256
kind: KubeProxyConfiguration
# node IP
metricsBindAddress: 192.168.241.191:10249

# to use ipvs instead of the default iptables mode:
# mode: "ipvs"
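If kube-proxy is switched to ipvs mode, the ipvs kernel modules must be loaded on every node first; a minimal sketch (package names assume CentOS; on older 3.x kernels use nf_conntrack_ipv4 instead of nf_conntrack):

yum install -y ipset ipvsadm
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack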

Create the service unit file and hand kube-proxy over to systemd

vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
--config=/opt/kubernetes/cfg/kube-proxy.yaml \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/opt/kubernetes/logs \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Start

mkdir -p /var/lib/kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy

Then copy the files to the other nodes, as sketched below.
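A hedged sketch of that copy (afterwards, edit bindAddress, healthzBindAddress and metricsBindAddress in kube-proxy.yaml on each node to that node's own IP):

for host in k8smaster02 k8snode01; do
  scp kubernetes/server/bin/kube-proxy $host:/opt/kubernetes/bin/
  scp /opt/kubernetes/cfg/kube-proxy.yaml /opt/kubernetes/cfg/kube-proxy.kubeconfig $host:/opt/kubernetes/cfg/
  scp /usr/lib/systemd/system/kube-proxy.service $host:/usr/lib/systemd/system/
done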

9 Installing the Calico Network Plugin

kubectl create -f https://docs.projectcalico.org/archive/v3.13/manifests/calico.yaml
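Calico pulls several images on first start, so it can take a few minutes for the pods to become Ready; they can be watched with:

kubectl get pods -n kube-system -w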

10 Installing CoreDNS

cat coredns.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.7.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  # cluster DNS IP; must match the clusterDNS address in kubelet.json
  clusterIP: 10.244.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
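Apply the manifest and confirm the CoreDNS pod and the kube-dns Service come up (names as defined in the YAML above):

kubectl apply -f coredns.yaml
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get svc -n kube-system kube-dns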

11 Testing DNS and Deploying a Test Service

Deploy nginx

# Create the deployment
kubectl create deployment test --image=nginx
# Create the Service
kubectl expose deployment test --port=80 --target-port=80 --type=NodePort
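Check that the pod is running and note the NodePort assigned to the Service (kubectl create deployment labels the pod app=test):

kubectl get pods -l app=test
kubectl get svc test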

 

Test DNS

kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- sh
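Inside the busybox shell, resolving the built-in kubernetes Service and the test Service created above verifies that CoreDNS works:

nslookup kubernetes.default
nslookup test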

 

12 Installing keepalived + haproxy for kube-apiserver High Availability

yum install haproxy keepalived -y

haproxy configuration file

vi /etc/haproxy/haproxy.cfg


global

    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    stats socket /var/lib/haproxy/stats

listen admin_status
    bind *:80
    mode http
    stats uri /status

frontend  main *:16443
    mode tcp
    default_backend k8s

backend k8s
    balance     roundrobin
    mode tcp
    server  k8smaster01 192.168.241.191:6443 check
    server  k8smaster02 192.168.241.192:6443 check

Start

systemctl start haproxy
systemctl enable haproxy

keepalived configuration file

vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    # on the backup node, lower this priority (and set state BACKUP)
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.241.190
    }
    track_script {
        check_haproxy
    }
}

Start

systemctl start keepalived
systemctl enable keepalived

Finally, replace the apiserver address 192.168.241.191:6443 with the VIP address 192.168.241.190:16443 in the cluster's configuration and kubeconfig files, then restart the components; the apiserver is then reached through the highly available VIP.
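A hedged sketch of that swap on each node (the exact file list depends on what was copied where; the kubeconfig files embed the apiserver address):

sed -i 's#https://192.168.241.191:6443#https://192.168.241.190:16443#g' /opt/kubernetes/cfg/*.kubeconfig /root/.kube/config
systemctl restart kubelet kube-proxy kube-controller-manager kube-scheduler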
