Installing a Kubernetes Cluster from Binaries
Kubernetes cluster architecture
Environment setup
Lab environment: 3 CentOS 7 virtual machines
Hostname | IP | Role | Resources |
---|---|---|---|
k8s-master | 192.168.2.73 | master | 2 CPU, 4 GB RAM, 20 GB disk |
k8s-node01 | 192.168.2.71 | node-1 | 4 CPU, 8 GB RAM, 40 GB disk |
k8s-node02 | 192.168.2.72 | node-2 | 4 CPU, 8 GB RAM, 40 GB disk |
1 Kubernetes hits known bugs on older kernels, so upgrade the kernel first; version 4.0 or later is recommended. (Many of the steps below are repeated on every node, so you can put them in scripts and push them out with an automation tool such as Ansible to speed up deployment.)
# Upgrade the kernel
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-ml-5.0.4-1.el7.elrepo.x86_64.rpm
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-ml-devel-5.0.4-1.el7.elrepo.x86_64.rpm
yum -y install kernel-ml-5.0.4-1.el7.elrepo.x86_64.rpm kernel-ml-devel-5.0.4-1.el7.elrepo.x86_64.rpm
# Set the new kernel as the default boot entry
cat /boot/grub2/grub.cfg |grep menuentry
grub2-set-default "CentOS Linux (5.0.4-1.el7.elrepo.x86_64) 7 (Core)"
# Verify the change
grub2-editenv list
reboot
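After the reboot, it is worth confirming that the node actually booted into the new kernel. A quick check (the expected string assumes the 5.0.4 kernel installed above):
# Confirm the running kernel version
uname -r
# Expected output: 5.0.4-1.el7.elrepo.x86_64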
Enable IPVS support
# After confirming the kernel version, enable IPVS
uname -a
# A script handles the module loading
cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
/sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe ${kernel_module}
fi
done
# Make the script executable, run it, and check that the ip_vs modules loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && sh /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
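The script above only loads the modules for the current boot. As a sketch (assuming a standard CentOS 7 systemd setup, where systemd-modules-load reads /etc/modules-load.d/ at boot), you can make the loading persistent like this:
# Load the IPVS modules automatically on every boot
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF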
Disable swap, SELinux, firewalld, and other unneeded services
# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
## permanently (takes effect after reboot)
sed -i 's/enforcing/disabled/' /etc/selinux/config
## for the current session
setenforce 0
# Disable swap
## for the current session
swapoff -a
## permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Set the kernel parameters Kubernetes needs
# (the bridge-nf-call settings require the br_netfilter module)
modprobe br_netfilter
cat > /etc/sysctl.conf <<EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Reload the settings
sysctl -p
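A quick sanity check that the parameters took effect (the values shown are what the settings above should produce):
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# Expected:
# net.bridge.bridge-nf-call-iptables = 1
# net.ipv4.ip_forward = 1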
Configure hostname resolution and time synchronization
# Sync the time
yum -y install ntpdate
ntpdate -u ntp1.aliyun.com
# Set the time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Keep the hardware clock in UTC
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
# Stop services the cluster does not need
systemctl stop postfix && systemctl disable postfix
# Add the name resolution entries to the hosts file on every node
cat /etc/hosts
192.168.2.73 k8s-master
192.168.2.71 k8s-node01
192.168.2.72 k8s-node02
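Since every node needs the same entries, a small loop can push the file out; this is a sketch that assumes root SSH access to the node IPs used above:
# Copy /etc/hosts to the other nodes
for ip in 192.168.2.71 192.168.2.72; do
  scp /etc/hosts root@${ip}:/etc/hosts
done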
Configure passwordless SSH between the hosts (you can skip this step if you use Ansible)
ssh-keygen
ssh-copy-id root@192.168.2.73
ssh-copy-id root@192.168.2.72
ssh-copy-id root@192.168.2.71
Issuing certificates
Certificates will be issued for the following components:
- admin user
- kubelet
- kube-apiserver
- kube-controller-manager
- kube-scheduler
- kube-proxy
The cfssl toolkit makes this easier:
curl -s -L -o /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -s -L -o /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
curl -s -L -o /bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x /bin/cfssl*
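A quick check that the tools are in place:
# Confirm that cfssl runs
cfssl version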
Create the CA certificate
# CA signing configuration
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
# CA certificate signing request
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "HangZhou",
"O": "yjl",
"OU": "CA",
"ST": "Winterfell"
}
]
}
EOF
# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# Check the generated files
ls -a
# Two new files appear:
ca-key.pem ca.pem
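The cfssl-certinfo tool downloaded earlier can be used to inspect the certificate that was just issued (optional, but a handy check of the subject, validity period, and key usages):
cfssl-certinfo -cert ca.pem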
Generate the admin user certificate
# certificate signing request
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "Westeros",
"L": "The North",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Winterfell"
}
]
}
EOF
# Generate the admin user certificate and verify the result
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
# Check the generated files
ls -l
admin-key.pem admin.pem
kubelet client certificates
Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
It allows a kubelet to perform API operations, including:
- Read operations
  - services
  - endpoints
  - nodes
  - pods
  - secrets, configmaps, persistent volume claims, and persistent volumes
- Write operations
  - nodes and node status (the NodeRestriction admission plugin limits a kubelet to modifying its own node)
  - pods and pod status (the NodeRestriction admission plugin limits a kubelet to modifying pods bound to itself)
  - events
- Auth-related operations
  - read/write access to certificatesigningrequests for TLS bootstrapping
  - the ability to create tokenreviews and subjectaccessreviews for delegated authentication/authorization checks
To be authorized by the Node authorizer, a kubelet must use a credential that identifies it as a member of the "system:nodes" group, with a username of "system:node:<nodeName>".
Generate the kubelet certificates
# Record each node's name and IP
cat node.txt
k8s-node01 192.168.2.71
k8s-node02 192.168.2.72
# Create the script
vim generate-kubelet-certificate.sh
#!/bin/bash
# Read one "hostname IP" pair per line from node.txt
while read instance INTERNAL_IP; do
cat > ${instance}-csr.json <<EOF
{
"CN": "system:node:${instance}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "Westeros",
"L": "The North",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Winterfell"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${instance},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
done < node.txt
# Generate the certificates and check the result
sh generate-kubelet-certificate.sh
ls -l
k8s-node01-key.pem k8s-node01.pem k8s-node02-key.pem k8s-node02.pem
Controller Manager client certificate
# kube-controller-manager certificate signing request
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "Westeros",
"L": "The North",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Winterfell"
}
]
}
EOF
# Issue the kube-controller-manager certificate
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
# Verify the generated files
ls kube-controller-manager*
kube-controller-manager.csr kube-controller-manager-key.pem
kube-controller-manager-csr.json kube-controller-manager.pem
kube-proxy client certificate
# kube-proxy certificate signing request
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "Westeros",
"L": "The North",
"O": "system:node-proxier",
"OU": "Kubernetes The Hard Way",
"ST": "Winterfell"
}
]
}
EOF
# Issue the kube-proxy certificate
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
# Verify the generated files
ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
Scheduler client certificate
# kube-scheduler certificate signing request
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "Westeros",
"L": "The North",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Winterfell"
}
]
}
EOF
# Issue the kube-scheduler certificate
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
# Verify the generated files
ls kube-scheduler*
kube-scheduler.csr kube-scheduler-csr.json kube-scheduler-key.pem kube-scheduler.pem
Kubernetes API server certificate
The kube-apiserver certificate should cover the following hostnames:
- the hostnames of all controller nodes
- the IPs of all controller nodes
- the hostname of the load balancer
- the IP of the load balancer
- the Kubernetes service (the default kubernetes service name resolves to the cluster IP 10.32.0.1)
- localhost
CERT_HOSTNAME=10.32.0.1,k8s-master,192.168.2.73,127.0.0.1,localhost,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local
# Create the certificate signing request
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "Westeros",
"L": "The North",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Winterfell"
}
]
}
EOF
# Generate the Kubernetes API server certificate
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${CERT_HOSTNAME} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
# Verify the generated files
ls kubernetes*
kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
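You can confirm that every hostname and IP made it into the certificate's Subject Alternative Names with a quick openssl check:
# List the SANs embedded in the API server certificate
openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"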
Service account key pair
The service account key pair is used to sign service account tokens.
# Service account certificate signing request
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "Westeros",
"L": "The North",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Winterfell"
}
]
}
EOF
# Issue the service account certificate
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
# Check the files issued for the service account
ls service-account*
service-account.csr service-account-csr.json service-account-key.pem service-account.pem
Copy the certificates to each node
#!/bin/bash
# Read one "hostname IP" pair per line from node.txt
while read instance INTERNAL_IP; do
rsync -zvhe ssh ca.pem ${instance}-key.pem ${instance}.pem root@${instance}:/root/
done < node.txt
Generating kubeconfigs
A kubeconfig is used for authentication between Kubernetes components, and between users and the cluster. It contains three things:
Entity | Description |
---|---|
Cluster | the api-server address and its base64-encoded CA certificate |
User | user-related information, such as the user name and its certificate and key, or a service account token |
Context | a reference to a cluster and a user; if you have multiple clusters and users, contexts make switching between them convenient |
Kubeconfigs need to be generated for the following components:
- kubelet kubeconfig
- kube-proxy kubeconfig
- kube-controller-manager kubeconfig
- kube-scheduler kubeconfig
- admin kubeconfig
Generate the kubelet kubeconfigs
The `user` in each kubeconfig must be `system:node:<node_name>`, matching the hostname used when generating that kubelet's client certificate.
# First download the kubectl binary to /usr/local/bin and make it executable
cat kubelet-kubeconfig.sh
#!/bin/bash
for instance in k8s-node01 k8s-node02; do
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.2.73:6443 \
--kubeconfig=${instance}.kubeconfig
kubectl config set-credentials system:node:${instance} \
--client-certificate=${instance}.pem \
--client-key=${instance}-key.pem \
--embed-certs=true \
--kubeconfig=${instance}.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:${instance} \
--kubeconfig=${instance}.kubeconfig
kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
sh kubelet-kubeconfig.sh
# Verify the kubeconfigs
ls
k8s-node01.kubeconfig k8s-node02.kubeconfig
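To see what was actually written into a kubeconfig (certificate data is redacted in the output):
kubectl config view --kubeconfig=k8s-node01.kubeconfig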
Generate the kube-proxy kubeconfig
cat kube-proxy-kubeconfig.sh
#!/bin/bash
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.2.73:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# Run the script
sh kube-proxy-kubeconfig.sh
# Verify kube-proxy.kubeconfig
ls
kube-proxy.kubeconfig
Generate the kube-controller-manager kubeconfig
cat kube-controller-manager-kubeconfig.sh
#!/bin/bash
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
# Run the script
sh kube-controller-manager-kubeconfig.sh
# Verify kube-controller-manager.kubeconfig
ls
kube-controller-manager.kubeconfig
Generate the kube-scheduler kubeconfig
cat kube-scheduler-kubeconfig.sh
#!/bin/bash
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
# Run the script
sh kube-scheduler-kubeconfig.sh
# Verify kube-scheduler.kubeconfig
ls
kube-scheduler.kubeconfig
Generate the admin kubeconfig
cat admin-kubeconfig.sh
#!/bin/bash
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
# Run the script
sh admin-kubeconfig.sh
# Verify admin.kubeconfig
ls
admin.kubeconfig
Copy the kubeconfigs to each node
cat kubeconfig-copy.sh
#!/bin/bash
for line in k8s-node01 k8s-node02; do
rsync -zvhe ssh ${line}.kubeconfig kube-proxy.kubeconfig root@${line}:/root
done
Configure encryption for cluster data
Kubernetes stores a variety of data, including cluster state, application configuration, and secrets, and it supports encrypting this data at rest. (Do this on the control node.)
# Generate an encryption key
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# Write the encryption config file
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
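Once the API server and etcd are running (later in this guide), you can verify that encryption at rest is active by creating a secret and reading its raw value out of etcd; this sketch reuses the etcd TLS flags from the etcd section below:
# Create a test secret, then dump its raw etcd value
kubectl create secret generic test-secret --from-literal=key=value
ETCDCTL_API=3 etcdctl get /registry/secrets/default/test-secret \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem | hexdump -C | head
# The stored value should begin with k8s:enc:aescbc:v1:key1, not plaintext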
Install the etcd cluster
etcd is usually installed on the control node.
Unpack the etcd tarball and copy the relevant certificates into place
tar -xvf etcd-v3.3.5-linux-amd64.tar.gz
chmod +x etcd-v3.3.5-linux-amd64/etcd*
mv etcd-v3.3.5-linux-amd64/etcd* /usr/local/bin/
mkdir -p /etc/etcd /var/lib/etcd
cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
Create the etcd service unit file
cat /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \\
--name k8s-master \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://192.168.2.73:2380 \\
--listen-peer-urls https://192.168.2.73:2380 \\
--listen-client-urls https://192.168.2.73:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://192.168.2.73:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster k8s-master=https://192.168.2.73:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start the etcd service
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
After etcd is installed and configured, verify that it is running properly
# etcdctl v3 requires ETCDCTL_API=3 before the command
ETCDCTL_API=3 etcdctl member list --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
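etcdctl can also report endpoint health directly, using the same TLS flags:
# Check that the etcd endpoint is healthy
ETCDCTL_API=3 etcdctl endpoint health --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem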
Deploying the master components
The control plane consists of:
- kube-apiserver
- kube-controller-manager
- kube-scheduler
Move the binaries to /usr/local/bin
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin
Configure the Kubernetes API server
Move the certificates to the kubernetes directory
mkdir -p /var/lib/kubernetes
mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem encryption-config.yaml /var/lib/kubernetes/
cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=192.168.2.73 \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://192.168.2.73:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all=true \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2 \\
--kubelet-preferred-address-types=InternalIP,InternalDNS,Hostname,ExternalIP,ExternalDNS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Configure the Kubernetes Controller Manager
Move kube-controller-manager.kubeconfig to the /var/lib/kubernetes directory
mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
Create the kube-controller-manager service unit file
cat /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=10.100.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--allocate-node-cidrs=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Configure the Kubernetes Scheduler
Move the kube-scheduler kubeconfig to the /var/lib/kubernetes directory
mv kube-scheduler.kubeconfig /var/lib/kubernetes/
Create the kube-scheduler configuration file
mkdir -p /etc/kubernetes/config
cat /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
Create the kube-scheduler service unit file
cat /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start the services
systemctl daemon-reload
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
systemctl start kube-apiserver kube-controller-manager kube-scheduler
# Check the service status
systemctl status kube-apiserver kube-controller-manager kube-scheduler
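With the admin kubeconfig generated earlier you can also confirm that the control plane answers API requests; this is the same check referenced in the summary at the end of this article:
# Query control-plane component health through the API server
kubectl get componentstatuses --kubeconfig admin.kubeconfig
# Every component should report Healthy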
Deploying the node components
Install Docker
# Install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun docker-ce repo
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Refresh the cache and install docker-ce (the latest version by default)
yum makecache
yum -y install docker-ce
Note: to install a specific docker-ce version
vim /etc/yum.repos.d/docker-ce.repo
# change enabled=0 to enabled=1 under [docker-ce-test]
List the available docker-ce versions
yum list docker-ce.x86_64 --showduplicates | sort -r
Install the chosen docker-ce version
yum -y install docker-ce-[VERSION]
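One pitfall worth guarding against: the kubelet unit below runs with --cgroup-driver=systemd, while docker-ce defaults to the cgroupfs driver, and a mismatch prevents the kubelet from starting. A minimal sketch of aligning Docker with the kubelet (it assumes no existing /etc/docker/daemon.json):
# Switch Docker to the systemd cgroup driver to match the kubelet
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker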
Configure the kubelet
Move the certificates and kubeconfig into the corresponding directories
mkdir -p /var/lib/kubelet /var/lib/kubernetes
mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
mv ca.pem /var/lib/kubernetes/
Write the kubelet configuration file
cat /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "10.100.0.0/16"
#resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
Create the kubelet service unit file
cat /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet \
--config=/var/lib/kubelet/kubelet-config.yaml \
--docker-endpoint=unix:///var/run/docker.sock \
--image-pull-progress-deadline=2m \
--network-plugin=cni \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--register-node=true \
--cgroup-driver=systemd \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Configure kube-proxy
Move kube-proxy.kubeconfig into the kube-proxy directory
mkdir /var/lib/kube-proxy -p
mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
Write the kube-proxy configuration file (the name must match the --config flag in the service unit below)
cat /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
# set mode to "ipvs" to use the IPVS modules loaded earlier; empty means iptables
mode: ""
clusterCIDR: "10.100.0.0/16"
Create the kube-proxy service unit file
cat /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start the kubelet, kube-proxy, and docker services and check their status
systemctl daemon-reload
systemctl enable kubelet kube-proxy docker
systemctl start kubelet kube-proxy docker
systemctl status kubelet kube-proxy docker
Now running kubectl get node on the master shows the node status
Deploying the CNI plugin
Install the CNI plugins
mkdir /opt/cni/bin /etc/cni/net.d -p
cd /opt/cni/bin
wget https://github.com/containernetworking/plugins/releases/download/v0.8.3/cni-plugins-linux-amd64-v0.8.3.tgz
tar xvf cni-plugins-linux-amd64-v0.8.3.tgz -C /opt/cni/bin
chmod +x /opt/cni/bin/*
On the master node, run:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# edit the file so that "Network": "10.100.0.0/16"
kubectl apply -f kube-flannel.yml
# the nodes should now show status Ready
kubectl get node
Testing the cluster
Create an nginx Deployment with two replicas
cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
# Apply the manifest
kubectl apply -f nginx-deployment.yaml
Create a Service for the Deployment
kubectl expose deployment/nginx
Create a test Pod
kubectl run busybox --image=odise/busybox-curl --command -- sleep 3600
POD=`kubectl get pod -o name | cut -d "/" -f 2 | awk "NR==1{print}"`
Get the nginx pod IPs
kubectl get ep nginx
Now test with curl
kubectl exec $POD -- curl <first nginx pod IP address>
kubectl exec $POD -- curl <second nginx pod IP address>
kubectl get svc
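You can also curl the Service's cluster IP from the test pod to confirm service-level load balancing; substitute the CLUSTER-IP value printed by kubectl get svc:
kubectl exec $POD -- curl <nginx service cluster IP>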
Deploying the CoreDNS add-on
Deploy CoreDNS
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
Check the status of the DNS pods
kubectl get pod -n kube-system
Create a test Pod
kubectl run busybox --image=odise/busybox-curl --command -- sleep 3600
POD=`kubectl get pod -o name | cut -d "/" -f 2 | awk "NR==1{print}"`
Run a DNS lookup
kubectl exec -it $POD -- nslookup kubernetes
Comparing kubeadm and binary installation
Here is a comparison of the two ways to install a single-master k8s cluster.
Installing a k8s cluster with kubeadm
1 Create 3 virtual machines running Linux
2 Initialize the operating systems and set up the environment
3 Install Docker, kubeadm, kubelet, and kubectl on all nodes (master and node)
- 1) point the docker repo and yum repos at Aliyun mirror addresses;
- 2) install docker with yum; without a pinned version it installs the latest docker release;
- 3) install kubeadm, kubelet, and kubectl
  - a specific release can be pinned at install time:
  - yum -y install kubelet-xxx kubectl-xxx kubeadm-xxx
4 Run the initialization command on the master node
- kubeadm init
- images are pulled from k8s.gcr.io by default, so switch to a domestic mirror
5 On every node, run the join command to add the node to the master
6 Install a network plugin (CNI)
- kubectl apply -f kube-flannel.yml
7 Test the kubernetes cluster
- kubectl get node
- kubectl get pod -n kube-system
Installing a K8s cluster from binaries
1 Create 3 virtual machines running Linux
2 Initialize the operating systems and deploy the base environment
3 Generate self-signed certificates with cfssl and copy them to the corresponding nodes
- ca.pem, ca-key.pem, admin.pem, admin-key.pem, kubelet-key.pem, kubelet.pem, kube-controller-manager-key.pem, kube-controller-manager.pem, kube-proxy.pem, kube-proxy-key.pem, scheduler.pem, scheduler-key.pem, kube-apiserver.pem, kube-apiserver-key.pem, service-account-key.pem, service-account.pem
- download the kubectl binary package and use kubectl to generate kubelet.kubeconfig, kube-proxy.kubeconfig, kube-controller-manager.kubeconfig, kube-scheduler.kubeconfig, and admin.kubeconfig
4 Deploy the etcd cluster
- deploying essentially means putting the etcd cluster under systemd management
- create the etcd.service file, start it, and enable it at boot
5 Set up encryption for cluster communication with a base64-encoded key
6 Deploy the master components, authorize the kubelets, and bind roles
- put kube-apiserver, controller-manager, and scheduler under systemd management and enable them at boot
- check the status of each component:
kubectl get componentstatuses --kubeconfig admin.kubeconfig
7 Deploy the node components
- put docker, kubelet, and kube-proxy under systemd management and enable them at boot
8 Deploy and install a CNI plugin
- common choices are flannel, calico, and weave
9 Test the kubernetes cluster
- deploy nginx and test access
Readers are warmly welcome to leave your valuable comments! Yuanchao will be truly grateful!!!