Kubernetes Single-Node Installation
I. Environment Preparation
II. Kubernetes Install
Master Setup
1. Install the CFSSL tools
2. Generate the etcd certificates
3. Install and start etcd
4. Install Docker
5. Install Kubernetes
6. Generate and distribute the Kubernetes certificates
7. Service configuration
8. Installing a node on the master
Node Setup
1. Install Docker
2. Distribute the certificates
3. Node configuration
4. Create the nginx proxy
5. Approve the certificates
Calico
1. Calico overview
2. Calico installation and configuration
DNS
Deploy DNS with autoscaling
Tip: to reuse this walkthrough in another environment, only the IP addresses need to change; do not delete anything else.
I. Environment Preparation
This guide installs Kubernetes as a single-node deployment rather than a clustered (HA) one.
The preparation steps must be performed on both the master and the node.
The environment is as follows:
IP              Hostname  Role    Services
192.168.30.147  master    master  etcd, kube-apiserver, kube-controller-manager, kube-scheduler (if the master will not also act as a node, the following can be skipped: docker, kubelet, kube-proxy, calico)
192.168.30.148  node      node    docker, kubelet, kube-proxy, nginx (a node role running on the master can skip installing nginx)
Component versions:
k8s: v1.11
docker: v17.03
etcd: v3.2.22
calico: v3.1.3
dns: 1.14.7
Kubernetes version
This guide uses v1.11.
Check the OS release and kernel version
➜ cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
➜ uname -a
3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
# The kernel will be upgraded below
Note: the following steps must be run on both servers
Set the hostnames and /etc/hosts entries
192.168.30.148 node1
192.168.30.147 master
Set up SSH key trust between master and node
Configure time synchronization
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
ntpdate -u cn.pool.ntp.org
hwclock --systohc
timedatectl set-timezone Asia/Shanghai
Disable swap
➜ swapoff -a        # disable swap for the running system
➜ vim /etc/fstab    # disable swap permanently
# swap was on /dev/sda11 during installation
UUID=0a55fdb5-a9d8-4215-80f7-f42f75644f69 none swap sw 0 0
# Comment out the swap entry shown above.
# If swap stays enabled, kubelet will fail to start.
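Rather than editing /etc/fstab by hand, the swap entry can be commented out with sed. A minimal sketch, exercised on a throwaway copy so the real fstab is untouched (the temp file contents and UUIDs are examples only):

```shell
# Build a scratch fstab with a swap line and a root line (example data)
tmpfstab=$(mktemp)
cat > "$tmpfstab" <<'EOF'
UUID=0a55fdb5-a9d8-4215-80f7-f42f75644f69 none swap sw 0 0
UUID=11111111-2222-3333-4444-555555555555 / ext4 defaults 0 1
EOF
# Prefix '#' on any uncommented line whose mount type field is "swap"
sed -i -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$tmpfstab"
grep '^#' "$tmpfstab"   # only the swap entry is now commented
```

Once the pattern is confirmed, the same sed can be pointed at /etc/fstab.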
Configure the yum repositories
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache
yum install wget vim lsof net-tools lrzsz -y
Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
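The sed one-liner above can be tried safely on a scratch copy before touching /etc/selinux/config. A minimal sketch (the temp file is illustrative):

```shell
# Scratch copy of a minimal SELinux config (example content)
tmpcfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmpcfg"
# Flip enforcing -> disabled on the SELINUX= line only
sed -i '/^SELINUX=/s/enforcing/disabled/' "$tmpcfg"
grep '^SELINUX=' "$tmpcfg"   # SELINUX=disabled
```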
Upgrade the kernel
The stock CentOS 3.10 kernel has known issues with container workloads, so upgrade it.
yum update
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml -y&&
sed -i s/saved/0/g /etc/default/grub&&
grub2-mkconfig -o /boot/grub2/grub.cfg &&reboot
# The new kernel does not take effect until you reboot!
Check the kernel
➜ uname -a
Linux master 4.17.6-1.el7.elrepo.x86_64 #1 SMP Wed Jul 11 17:24:30 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
Set kernel parameters
echo "* soft nofile 190000" >> /etc/security/limits.conf
echo "* hard nofile 200000" >> /etc/security/limits.conf
echo "* soft nproc 252144" >> /etc/security/limits.conf
echo "* hard nproc 262144" >> /etc/security/limits.conf
tee /etc/sysctl.conf <<-'EOF'
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.tcp_tw_recycle = 0
net.ipv4.ip_local_port_range = 10000 61000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_forward = 1
net.core.netdev_max_backlog = 2000
net.ipv4.tcp_mem = 131072 262144 524288
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_low_latency = 0
net.core.rmem_default = 256960
net.core.rmem_max = 513920
net.core.wmem_default = 256960
net.core.wmem_max = 513920
net.core.somaxconn = 2048
net.core.optmem_max = 81920
net.ipv4.tcp_rmem = 8760 256960 4088000
net.ipv4.tcp_wmem = 8760 256960 4088000
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_syn_retries = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
echo "options nf_conntrack hashsize=819200" >> /etc/modprobe.d/mlx4.conf
modprobe br_netfilter
sysctl -p
II. Kubernetes Install
Master Setup
1. Install the CFSSL tools
About the certificate types:
client certificate: used by a server to authenticate its clients, e.g. etcdctl, etcd proxy, fleetctl, the docker client
server certificate: used by a server and verified by clients to prove the server's identity, e.g. the docker daemon, kube-apiserver
peer certificate: a dual-purpose certificate used for communication between etcd cluster members
Install the CFSSL tools
➜ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/bin/cfssl
➜ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/bin/cfssljson
➜ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
2. Generate the etcd certificates
etcd is the primary datastore of a Kubernetes cluster, so it must be installed and started before the other Kubernetes services.
Create the CA certificate
# Create a directory for generating the etcd certificates; keep the paths consistent with this guide
➜ mkdir /root/etcd_ssl && cd /root/etcd_ssl
cat > etcd-root-ca-csr.json << EOF
{
"key": {
"algo": "rsa",
"size": 4096
},
"names": [
{
"O": "etcd",
"OU": "etcd Security",
"L": "beijing",
"ST": "beijing",
"C": "CN"
}
],
"CN": "etcd-root-ca"
}
EOF
etcd CA signing configuration
cat > etcd-gencert.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"etcd": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF
The expiry is set to 87600h.
ca-config.json: may define multiple profiles with different expiry times and usage scenarios; a specific profile is selected later when signing certificates.
signing: this certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE.
server auth: clients may use this CA to verify certificates presented by servers.
client auth: servers may use this CA to verify certificates presented by clients.
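A quick arithmetic check of what the 87600h expiry means in years (a plain-shell sketch):

```shell
# 87600 hours -> days -> years (integer arithmetic, ignoring leap days)
hours=87600
years=$(( hours / 24 / 365 ))
echo "${years} years"   # → 10 years
```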
etcd certificate signing request
cat > etcd-csr.json << EOF
{
"key": {
"algo": "rsa",
"size": 4096
},
"names": [
{
"O": "etcd",
"OU": "etcd Security",
"L": "beijing",
"ST": "beijing",
"C": "CN"
}
],
"CN": "etcd",
"hosts": [
"127.0.0.1",
"localhost",
"192.168.30.147"
]
}
EOF
# The hosts field must list the master address
Generate the root CA
cfssl gencert --initca=true etcd-root-ca-csr.json \
| cfssljson --bare etcd-root-ca
Sign the etcd certificate with the root CA
cfssl gencert --ca etcd-root-ca.pem \
--ca-key etcd-root-ca-key.pem \
--config etcd-gencert.json \
-profile=etcd etcd-csr.json | cfssljson --bare etcd
The generated etcd certificate files are as follows
➜ ll
total 36
-rw-r--r-- 1 root root 1765 Jul 12 10:48 etcd.csr
-rw-r--r-- 1 root root 282 Jul 12 10:48 etcd-csr.json
-rw-r--r-- 1 root root 471 Jul 12 10:48 etcd-gencert.json
-rw------- 1 root root 3243 Jul 12 10:48 etcd-key.pem
-rw-r--r-- 1 root root 2151 Jul 12 10:48 etcd.pem
-rw-r--r-- 1 root root 1708 Jul 12 10:48 etcd-root-ca.csr
-rw-r--r-- 1 root root 218 Jul 12 10:48 etcd-root-ca-csr.json
-rw------- 1 root root 3243 Jul 12 10:48 etcd-root-ca-key.pem
-rw-r--r-- 1 root root 2078 Jul 12 10:48 etcd-root-ca.pem
3. Install and start etcd
Only the apiserver and the controller-manager need to connect to etcd.
yum install etcd -y    # or upload the rpm package and install it with rpm -ivh
Distribute the etcd certificates
➜ mkdir -p /etc/etcd/ssl && cd /root/etcd_ssl
Check the etcd certificates
➜ ll /root/etcd_ssl/
total 36
-rw-r--r--. 1 root root 1765 Jul 20 10:46 etcd.csr
-rw-r--r--. 1 root root 282 Jul 20 10:42 etcd-csr.json
-rw-r--r--. 1 root root 471 Jul 20 10:40 etcd-gencert.json
-rw-------. 1 root root 3243 Jul 20 10:46 etcd-key.pem
-rw-r--r--. 1 root root 2151 Jul 20 10:46 etcd.pem
-rw-r--r--. 1 root root 1708 Jul 20 10:46 etcd-root-ca.csr
-rw-r--r--. 1 root root 218 Jul 20 10:40 etcd-root-ca-csr.json
-rw-------. 1 root root 3243 Jul 20 10:46 etcd-root-ca-key.pem
-rw-r--r--. 1 root root 2078 Jul 20 10:46 etcd-root-ca.pem
Copy the certificates into the target directory
mkdir -p /etc/etcd/ssl
\cp *.pem /etc/etcd/ssl/
chown -R etcd:etcd /etc/etcd/ssl
chown -R etcd:etcd /var/lib/etcd
chmod -R 644 /etc/etcd/ssl/
chmod 755 /etc/etcd/ssl/
Edit the etcd configuration on the master
➜ cp /etc/etcd/etcd.conf{,.bak} && >/etc/etcd/etcd.conf
cat >/etc/etcd/etcd.conf <<EOF
# [member]
ETCD_NAME=etcd
ETCD_DATA_DIR="/var/lib/etcd/etcd.etcd"
ETCD_WAL_DIR="/var/lib/etcd/wal"
ETCD_SNAPSHOT_COUNT="100"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.30.147:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.30.147:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
#ETCD_CORS=""
# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.30.147:2380"
# if you use different ETCD_NAME (e.g.test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd=https://192.168.30.147:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.30.147:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
# [proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_PEER_AUTO_TLS="true"
# [logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF
### Replace 192.168.30.147 with your master's address
Start etcd
systemctl daemon-reload
systemctl restart etcd
systemctl enable etcd
Test that etcd is usable
export ETCDCTL_API=3
etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.30.147:2379 endpoint health
A healthy endpoint looks like:
[root@master ~]# export ETCDCTL_API=3
[root@master ~]# etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.30.147:2379 endpoint health
https://192.168.30.147:2379 is healthy:successfully committed proposal: took = 643.432µs
Check the etcd ports (2379/2380)
➜ netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 192.168.30.147:2379 0.0.0.0:* LISTEN 2016/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 2016/etcd
tcp 0 0 192.168.30.147:2380 0.0.0.0:* LISTEN 2016/etcd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 965/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1081/master
tcp6 0 0 :::22 :::* LISTEN 965/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1081/master
udp 0 0 127.0.0.1:323 0.0.0.0:* 721/chronyd
udp6 0 0 ::1:323 :::* 721/chronyd
##### etcd is now installed and configured
4. Install Docker
#!/bin/bash
export docker_version=17.03.2
yum install -y yum-utils device-mapper-persistent-data lvm2 bash-completion
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache all
version=$(yum list docker-ce.x86_64 --showduplicates | sort -r|grep ${docker_version}|awk '{print $2}')
yum -y install --setopt=obsoletes=0 docker-ce-${version} docker-ce-selinux-${version}
Because network downloads often time out, the required images have been uploaded; you can install directly from the package bundle provided.
Docker and K8s package download (password: 1zov)
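The version=$(yum list … | awk …) line in the script above extracts the full package version string from the yum output. A sketch of the same pipeline run against a canned sample line instead of a live repository (the sample text is made up for illustration):

```shell
# Same grep/awk extraction as the install script, on a fixed sample line
docker_version=17.03.2
sample='docker-ce.x86_64  17.03.2.ce-1.el7.centos  docker-ce-stable'
version=$(echo "$sample" | grep "${docker_version}" | awk '{print $2}')
echo "$version"   # → 17.03.2.ce-1.el7.centos
```

The extracted string is exactly what yum expects in docker-ce-${version}.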
Install and adjust the configuration
Enable docker at boot and start it
systemctl enable docker
systemctl start docker
Patch the docker unit file
sed -i '/ExecStart=\/usr\/bin\/dockerd/i\ExecStartPost=\/sbin/iptables -I FORWARD -s 0.0.0.0\/0 -d 0.0.0.0\/0 -j ACCEPT' /usr/lib/systemd/system/docker.service
sed -i '/dockerd/s/$/\-\-storage\-driver\=overlay2/g' /usr/lib/systemd/system/docker.service
Restart docker
systemctl daemon-reload
systemctl restart docker
If an old version is already installed, remove it first and then install the new one:
yum remove docker \
docker-common \
docker-selinux \
docker-engine
5. Install Kubernetes
Downloading Kubernetes
The kubernetes.tar.gz archive contains the Kubernetes service binaries, documentation and examples, while kubernetes-src.tar.gz contains the full source code. Alternatively, download kubernetes-server-linux-amd64.tar.gz from the Server Binaries section; it contains all the service binaries Kubernetes needs to run.
Kubernetes download: https://github.com/kubernetes/kubernetes/releases
Docker and K8s package download (password: 1zov)
Kubernetes configuration
tar xf kubernetes-server-linux-amd64.tar.gz
for i in hyperkube kube-apiserver kube-scheduler kubelet kube-controller-manager kubectl kube-proxy;do
cp ./kubernetes/server/bin/$i /usr/bin/
chmod 755 /usr/bin/$i
done
6. Generate and distribute the Kubernetes certificates
Create the certificate directory
mkdir /root/kubernets_ssl && cd /root/kubernets_ssl
The k8s-root-ca-csr.json CSR
cat > k8s-root-ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 4096
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
The k8s-gencert.json signing configuration
cat > k8s-gencert.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF
The kubernetes-csr.json CSR
# Fill the hosts field with every node IP you will use (here, the master). Create the kubernetes certificate signing request file kubernetes-csr.json:
cat >kubernetes-csr.json << EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"10.254.0.1",
"192.168.30.147",
"localhost",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
The kube-proxy-csr.json CSR
cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
The admin-csr.json CSR
cat > admin-csr.json << EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
Generate the Kubernetes certificates
➜ cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca
➜ for targetName in kubernetes admin kube-proxy; do
cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName
done
# Generate the bootstrap configuration
# Note: wherever KUBE_APISERVER or BOOTSTRAP_TOKEN appears later, write the literal value, not the variable
export KUBE_APISERVER="https://192.168.30.147:6443"
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "Token: ${BOOTSTRAP_TOKEN}"
cat > token.csv <<EOF
5a228a901431474ade930e653df4ba90,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
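The BOOTSTRAP_TOKEN pipeline above turns 16 random bytes into a 32-character hex string. A quick local check of that shape:

```shell
# 16 random bytes -> od prints them in hex -> tr strips the spaces,
# leaving 32 hex characters (command substitution drops the newline)
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${#BOOTSTRAP_TOKEN}"   # → 32
```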
Configure the kubeconfig files
# On the master this address should be https://MasterIP:6443
Enter the Kubernetes certificate directory /root/kubernets_ssl
# Set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=k8s-root-ca.pem \
--embed-certs=true \
--server=https://192.168.30.147:6443 \
--kubeconfig=bootstrap.kubeconfig
# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=5a228a901431474ade930e653df4ba90 \
--kubeconfig=bootstrap.kubeconfig
# Set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# echo "Create kube-proxy kubeconfig..."
kubectl config set-cluster kubernetes \
--certificate-authority=k8s-root-ca.pem \
--embed-certs=true \
--server=https://192.168.30.147:6443 \
--kubeconfig=kube-proxy.kubeconfig
# kube-proxy
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
# kube-proxy_config
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# Generate the advanced audit policy
cat > audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF
# Distribute the Kubernetes certificates
cd /root/kubernets_ssl
mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
\cp *.kubeconfig token.csv audit-policy.yaml /etc/kubernetes
useradd -s /sbin/nologin -M kube
chown -R kube:kube /etc/kubernetes/ssl
# Generate the kubectl configuration
cd /root/kubernets_ssl
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
--embed-certs=true \
--server=https://192.168.30.147:6443
kubectl config set-credentials admin \
--client-certificate=/etc/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/etc/kubernetes/ssl/admin-key.pem
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin
kubectl config use-context kubernetes
# Set permissions on the log directories
mkdir -p /var/log/kube-audit /usr/libexec/kubernetes
chown -R kube:kube /var/log/kube-audit /usr/libexec/kubernetes
chmod -R 755 /var/log/kube-audit /usr/libexec/kubernetes
7. Service configuration
On the master
With the certificates and rpm packages installed, only the configuration (under /etc/kubernetes) needs editing before the components are started.
cd /etc/kubernetes
The common config file
Use the defaults unless noted; values that need changing are called out in comments.
cat > /etc/kubernetes/config <<EOF
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
EOF
The apiserver configuration
cat > /etc/kubernetes/apiserver <<EOF
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=0.0.0.0 --insecure-bind-address=0.0.0.0 --bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster; replace with your etcd address(es)
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.30.147:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction"
# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC,Node \
--endpoint-reconciler-type=lease \
--runtime-config=batch/v2alpha1=true \
--anonymous-auth=false \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--etcd-quorum-read=true \
--storage-backend=etcd3 \
--etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--enable-swagger-ui=true \
--apiserver-count=3 \
--audit-policy-file=/etc/kubernetes/audit-policy.yaml \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-audit/audit.log \
--event-ttl=1h "
EOF
# Only the etcd address needs changing; for an etcd cluster, list the members separated by commas
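As a sanity check relating --service-cluster-ip-range above to the 10.254.0.1 address placed in kubernetes-csr.json earlier: the first service IP must fall inside the service CIDR. A minimal shell sketch (the helper name in_cidr16 is hypothetical, and it only handles /16 masks, which is all this guide uses):

```shell
# For a /16 network, membership only requires the first two octets to match
in_cidr16() {  # usage: in_cidr16 IP NETWORK  (network assumed /16)
  [ "$(echo "$1" | cut -d. -f1-2)" = "$(echo "$2" | cut -d. -f1-2)" ]
}
in_cidr16 10.254.0.1 10.254.0.0 && echo "10.254.0.1 is inside 10.254.0.0/16"
```

If the service CIDR is ever changed, the first IP of the new range must also be added to the certificate's hosts list.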
The controller-manager configuration
cat > /etc/kubernetes/controller-manager <<EOF
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--leader-elect=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=60s"
EOF
The scheduler configuration
cat > /etc/kubernetes/scheduler <<EOF
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"
EOF
Set up the systemd unit files
The component configuration files are in place; next, create the startup units for each component.
### kube-apiserver.service unit ###
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=root
ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
### kube-controller-manager.service unit ###
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=root
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
### kube-scheduler.service unit ###
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=root
ExecStart=/usr/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-apiserver, kube-controller-manager and kube-scheduler
systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
Enable the services at boot
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
Tip: kube-apiserver is the core service; if the apiserver fails to start, the other components will fail as well.
Verify that everything works
[root@master system]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
# Create the ClusterRoleBinding
Because kubelet uses TLS bootstrapping, under the RBAC policy the kubelet-bootstrap user it runs as has no permission to access the API at all.
A ClusterRoleBinding granting it the system:node-bootstrapper role must therefore be created in the cluster beforehand.
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Command to delete the binding (do NOT run it now):
kubectl delete clusterrolebinding kubelet-bootstrap
8. Installing a node on the master
The master can also be set up as a node.
Install kube-proxy and kubelet for the node role on the master.
###### kubelet configuration
cat >/etc/kubernetes/kubelet <<EOF
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.30.147"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=master"
# location of the api-server
# KUBELET_API_SERVER=""
# Add your own!
KUBELET_ARGS="--cgroup-driver=cgroupfs \
--cluster-dns=10.254.0.2 \
--resolv-conf=/etc/resolv.conf \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--cluster-domain=cluster.local. \
--hairpin-mode promiscuous-bridge \
--serialize-image-pulls=false \
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
EOF
Change the IP address and hostname above to the master's; nothing else needs to change.
Create the unit file
### kubelet.service unit ###
File: kubelet.service
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_ARGS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
Create the working directory
If /var/lib/kubelet does not exist, create it manually: mkdir -p /var/lib/kubelet
# kube-proxy configuration
cat >/etc/kubernetes/proxy <<EOF
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.30.147 \
--hostname-override=master \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
--cluster-cidr=10.254.0.0/16"
EOF
# Use the master IP and hostname
The kube-proxy unit file
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kubelet and kube-proxy
systemctl daemon-reload
systemctl restart kube-proxy
systemctl restart kubelet
After they start, the kubelet log shows that a certificate signing request has been created; it still has to be approved.
Check it with kubectl get csr
Node Setup
1. Install Docker
Same as on the master.
2. Distribute the certificates
The kubernetes and etcd certificates must be distributed from the master to the node.
Although the node itself runs no etcd, network components such as calico or flannel need to reach etcd and therefore need the etcd certificates.
Copy hyperkube, kubelet, kubectl and kube-proxy from the master to the node:
for i in hyperkube kubelet kubectl kube-proxy;do
scp ./kubernetes/server/bin/$i 192.168.30.148:/usr/bin/
ssh 192.168.30.148 chmod 755 /usr/bin/$i
done
## The IP here is the node's IP
Distribute the K8s certificates
cd into the K8s certificate directory
cd /root/kubernets_ssl/
for IP in 192.168.30.148;do
ssh $IP mkdir -p /etc/kubernetes/ssl
scp *.pem $IP:/etc/kubernetes/ssl
scp *.kubeconfig token.csv audit-policy.yaml $IP:/etc/kubernetes
ssh $IP useradd -s /sbin/nologin -M kube
ssh $IP chown -R kube:kube /etc/kubernetes/ssl
done
Distribute the etcd certificates
for IP in 192.168.30.148;do
cd /root/etcd_ssl
ssh $IP mkdir -p /etc/etcd/ssl
scp *.pem $IP:/etc/etcd/ssl
ssh $IP chmod -R 644 /etc/etcd/ssl/*
ssh $IP chmod 755 /etc/etcd/ssl
done
Set the file permissions on the node
ssh root@192.168.30.148 mkdir -p /var/log/kube-audit /usr/libexec/kubernetes &&
ssh root@192.168.30.148 chown -R kube:kube /var/log/kube-audit /usr/libexec/kubernetes &&
ssh root@192.168.30.148 chmod -R 755 /var/log/kube-audit /usr/libexec/kubernetes
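All three distribution loops above share the same shape; a dry-run sketch (hypothetical IP list, with ssh/scp replaced by echo so the loop can be checked without live hosts) shows how to extend them to additional nodes:

```shell
# Swap this hypothetical list for your real node IPs
NODE_IPS="192.168.30.148 192.168.30.149"
for IP in $NODE_IPS; do
  echo "would run: ssh $IP mkdir -p /etc/kubernetes/ssl"
  echo "would run: scp *.pem $IP:/etc/kubernetes/ssl"
done
```

Replace the echo lines with the real ssh/scp commands from the loops above once the list is correct.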
3. Node configuration
On the node, the configuration files also live in the /etc/kubernetes directory.
Only the config, kubelet and proxy files need to be modified, as follows.
# The common config file
Note: none of the config files (config, and the kubelet and proxy files below) define the API server address, because kubelet and kube-proxy are started with the --require-kubeconfig option, which makes them read the API server address from the *.kubeconfig files and ignore any address set in these config files.
Any address set here is therefore effectively unused.
cat > /etc/kubernetes/config <<EOF
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
# How the controller-manager, scheduler, and proxy find the apiserver
# KUBE_MASTER="--master=http://127.0.0.1:8080"
EOF
# kubelet configuration
cat >/etc/kubernetes/kubelet <<EOF
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.30.148"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node"
# location of the api-server
# KUBELET_API_SERVER=""
# Add your own!
KUBELET_ARGS="--cgroup-driver=cgroupfs \
--cluster-dns=10.254.0.2 \
--resolv-conf=/etc/resolv.conf \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--cluster-domain=cluster.local. \
--hairpin-mode promiscuous-bridge \
--serialize-image-pulls=false \
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
EOF
# Use the node's IP address and hostname here
Copy the unit file
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_ARGS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
mkdir /var/lib/kubelet -p
The working directory is /var/lib/kubelet; it must be created manually.
Start kubelet
sed -i 's#127.0.0.1#192.168.30.147#g' /etc/kubernetes/bootstrap.kubeconfig
# The address here is the master's
# This first verifies that kubelet can reach the master directly; the nginx started later provides master high availability
systemctl daemon-reload
systemctl restart kubelet
systemctl enable kubelet
# Edit the kube-proxy configuration
cat >/etc/kubernetes/proxy <<EOF
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.30.148 \
--hostname-override=node \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
--cluster-cidr=10.254.0.0/16"
EOF
# Replace with the node IP
The kube-proxy unit file
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
4. Create the nginx proxy
At this point every node should connect to its local nginx proxy, which load-balances across all the API servers; the nginx proxy configuration follows.
The nginx proxy is optional: you can instead point the API server address in bootstrap.kubeconfig and kube-proxy.kubeconfig directly at a master.
Note: a kubelet started on the master itself needs no nginx load balancing; skip this step there and set the apiserver address in kubelet.kubeconfig and kube-proxy.kubeconfig to the current master IP on port 6443.
# Create the configuration directory
mkdir -p /etc/nginx
# Write the proxy configuration
cat > /etc/nginx/nginx.conf <<EOF
error_log stderr notice;
worker_processes auto;
events {
multi_accept on;
use epoll;
worker_connections 1024;
}
stream {
upstream kube_apiserver {
least_conn;
server 192.168.30.147:6443 weight=20 max_fails=1 fail_timeout=10s;
#The server entries proxy the master IPs
}
server {
listen 0.0.0.0:6443;
proxy_pass kube_apiserver;
proxy_timeout 10m;
proxy_connect_timeout 1s;
}
}
EOF
## The upstream entries should point at the masters' apiserver port (6443)
# Make the configuration readable
chmod +r /etc/nginx/nginx.conf
# Start an nginx container in docker to do the forwarding
docker run -it -d -p 127.0.0.1:6443:6443 -v /etc/nginx:/etc/nginx --name nginx-proxy --net=host --restart=on-failure:5 --memory=512M nginx:1.13.5-alpine
Tip: you can pull the nginx image in advance
docker pull daocloud.io/library/nginx:1.13.5-alpine
To keep nginx reliable while staying convenient, nginx on the node runs as a docker container but is supervised by systemd; the systemd unit is as follows
cat >/etc/systemd/system/nginx-proxy.service <<EOF
[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service
[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker start nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s
[Install]
WantedBy=multi-user.target
EOF
➜ systemctl daemon-reload
➜ systemctl start nginx-proxy
➜ systemctl enable nginx-proxy
kubelet can only be started once port 6443 is listening locally.
sed -i 's#192.168.30.147#127.0.0.1#g' /etc/kubernetes/bootstrap.kubeconfig   # point bootstrap.kubeconfig at the local nginx proxy
Check port 6443
[root@node kubernetes]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 2042/kube-proxy
tcp 0 0 0.0.0.0:6443 0.0.0.0:* LISTEN 1925/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 966/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1050/master
tcp6 0 0 :::10256 :::* LISTEN 2042/kube-proxy
tcp6 0 0 :::22 :::* LISTEN 966/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1050/master
udp 0 0 127.0.0.1:323 0.0.0.0:* 717/chronyd
udp6 0 0 ::1:323 :::* 717/chronyd
[root@node kubernetes]# lsof -i:6443
lsof: no pwd entry for UID 100
lsof: no pwd entry for UID 100
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kubelet 1765 root 3u IPv4 27573 0t0 TCP node1:39246->master:sun-sr-https (ESTABLISHED)
nginx 1925 root 4u IPv4 29028 0t0 TCP *:sun-sr-https (LISTEN)
lsof: no pwd entry for UID 100
nginx 1934 100 4u IPv4 29028 0t0 TCP *:sun-sr-https (LISTEN)
lsof: no pwd entry for UID 100
nginx 1935 100 4u IPv4 29028 0t0 TCP *:sun-sr-https (LISTEN)
Start kubelet and kube-proxy
Before starting kubelet, it is best to restart kube-proxy first:
systemctl restart kube-proxy
systemctl daemon-reload
systemctl restart kubelet
systemctl enable kubelet
Remember to check the kubelet status!
5.Certificate approval
Because TLS bootstrapping is used, kubelet does not join the cluster immediately after starting; it first submits a certificate signing request.
All that is needed is to approve the request on the master.
# list the CSRs
➜ kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-l9d25 2m kubelet-bootstrap Pending
If kubelet was configured and started on both machines, two requests will show here: one for the master and one for the node.
# approve the certificate
kubectl certificate approve csr-l9d25
# csr-l9d25 is the CSR name
Or approve everything pending at once: kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
# list the nodes
After the certificates are signed:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready <none> 40m v1.11.0
node Ready <none> 39m v1.11.0
After approval, the kubelet kubeconfig file and its key pair are generated automatically:
$ ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2280 Nov 7 10:26 /etc/kubernetes/kubelet.kubeconfig
$ ls -l /etc/kubernetes/ssl/kubelet*
-rw-r--r-- 1 root root 1046 Nov 7 10:26 /etc/kubernetes/ssl/kubelet-client.crt
-rw------- 1 root root 227 Nov 7 10:22 /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r-- 1 root root 1115 Nov 7 10:16 /etc/kubernetes/ssl/kubelet.crt
-rw------- 1 root root 1675 Nov 7 10:16 /etc/kubernetes/ssl/kubelet.key
# Note:
If the apiserver is not running, none of the following operations will work.
The IP addresses configured in kubelet are always the local machine's (the master is configured as a node the same way).
On the node, start nginx-proxy before kube-proxy. The 127.0.0.1:6443 address configured in kube-proxy is effectively master:6443 behind the local proxy.
Calico
1.Calico introduction
Calico is an interesting virtual-networking solution: it builds the network entirely out of routing rules, advertised between hosts via the BGP protocol.
Calico's advantage is that the network formed by its endpoints is a pure layer-3 network: packet flow is controlled entirely by routing rules, with no overlay or other extra overhead. Endpoints can migrate between hosts, and ACLs are implemented.
Calico's drawback is that the number of routes equals the number of containers, which can easily exceed the capacity of routers, layer-3 switches, or even the nodes themselves, limiting how far the network can grow.
Calico installs a very large number of iptables rules and routes on every node, which makes operations and troubleshooting difficult.
Calico's design means it cannot support VPCs; containers can only obtain IPs from the address pools Calico manages.
The current implementation has no traffic control, so a few containers can monopolize most of a node's bandwidth.
The scale of a calico network is limited by the scale of the underlying BGP network.
Terminology
endpoint: a network interface attached to the calico network
AS: an autonomous system, which exchanges routing information with other ASes via the BGP protocol
ibgp: a BGP speaker inside an AS, exchanging routes with the ibgp and ebgp speakers of the same AS
ebgp: a BGP speaker at the border of an AS, exchanging routes with the ibgp speakers of its own AS and the ebgp speakers of other ASes
workloadEndpoint: an endpoint used by a virtual machine or container
hostEndpoint: the address of a physical machine (node)
2.Calico installation and configuration
Deploying Calico is currently quite simple: creating the yaml files below is all it takes.
# fetch calico.yaml; we use version 3.1, as lower versions have bugs
wget http://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml
wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml
# replace the Etcd address; the IP here (the master) is the etcd address
sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://192.168.30.147:2379\"@gi' calico.yaml
# replace the Etcd certificates
Modify the etcd-related settings; the main changes are listed below (the etcd certificate contents must be base64-encoded):
export ETCD_CERT=`cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n'`
export ETCD_KEY=`cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n'`
export ETCD_CA=`cat /etc/etcd/ssl/etcd-root-ca.pem | base64 | tr -d '\n'`
sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml
sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml
sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml
sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml
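The values written into calico.yaml are just the PEM files base64-encoded with newlines stripped, which is the form Kubernetes expects in a Secret's data fields. A minimal Python sketch of the round trip (the PEM string here is a stand-in, not a real certificate):

```python
import base64

# stand-in for the contents of /etc/etcd/ssl/etcd.pem
pem = "-----BEGIN CERTIFICATE-----\nMIIB...snip...\n-----END CERTIFICATE-----\n"

# equivalent of: cat etcd.pem | base64 | tr -d '\n'
encoded = base64.b64encode(pem.encode()).decode()
assert "\n" not in encoded  # Secret data must be a single base64 line

# what happens when the Secret is mounted: decode back to the original PEM
assert base64.b64decode(encoded).decode() == pem
```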
# set calico's address pool; make sure it overlaps neither the cluster service range nor the host network segment
sed -i s/192.168.0.0/172.16.0.0/g calico.yaml
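The pool chosen above must not collide with the other ranges in play. A quick sanity check with Python's standard ipaddress module, assuming the service range is 10.254.0.0/16 (consistent with the 10.254.0.2 DNS address used later) and the 192.168.30.0/24 host network from this guide:

```python
import ipaddress

pool = ipaddress.ip_network("172.16.0.0/16")          # calico pool after the sed
service_cidr = ipaddress.ip_network("10.254.0.0/16")  # assumed cluster service range
host_net = ipaddress.ip_network("192.168.30.0/24")    # host network in this guide

# the new pool overlaps neither range
assert not pool.overlaps(service_cidr)
assert not pool.overlaps(host_net)

# the original default 192.168.0.0/16 would have swallowed the host network
assert ipaddress.ip_network("192.168.0.0/16").overlaps(host_net)
```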
Modify the kubelet configuration
Run the deployment. Note that with RBAC enabled, the ClusterRole and ClusterRoleBinding need to be created separately.
https://www.kubernetes.org.cn/1879.html
RoleBinding and ClusterRoleBinding
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
## Note: some images have to be fetched from docker hub; here we can import them instead
Image download link (password: ibyt)
Import the images (both master and node need them):
pause.tar
Without the images, pod creation will time out:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51s default-scheduler Successfully assigned default/nginx-deployment-7c5b578d88-lckk2 to node
Warning FailedCreatePodSandBox 5s (x3 over 43s) kubelet, node Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection timed out
Note: the calico images are hosted overseas, so I have exported them; import them with docker load < calico.tar
calico images and yaml files bundle (password: wxi1)
It is recommended that master and node run identical calico images.
[root@node ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx 1.13.2-alpine 2d92198f40ec 12 months ago 15.5 MB
daocloud.io/library/nginx 1.13.2-alpine 2d92198f40ec 12 months ago 15.5 MB
[root@node ~]#
[root@node ~]#
[root@node ~]# docker load < calico-node.tar
cd7100a72410: Loading layer [==================================================>] 4.403 MB/4.403 MB
ddc4cb8dae60: Loading layer [==================================================>] 7.84 MB/7.84 MB
77087b8943a2: Loading layer [==================================================>] 249.3 kB/249.3 kB
c7227c83afaf: Loading layer [==================================================>] 4.801 MB/4.801 MB
2e0e333a66b6: Loading layer [==================================================>] 231.8 MB/231.8 MB
Loaded image: quay.io/calico/node:v3.1.3
The master has the following images:
[root@master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/calico/node v3.1.3 7eca10056c8e 7 weeks ago 248 MB
quay.io/calico/kube-controllers v3.1.3 240a82836573 7 weeks ago 55 MB
quay.io/calico/cni v3.1.3 9f355e076ea7 7 weeks ago 68.8 MB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 2 years ago 747 kB
[root@master ~]#
The node has the following images:
[root@node ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/calico/node v3.1.3 7eca10056c8e 7 weeks ago 248 MB
quay.io/calico/cni v3.1.3 9f355e076ea7 7 weeks ago 68.8 MB
nginx 1.13.5-alpine ea7bef82810a 9 months ago 15.5 MB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 2 years ago 747 kB
Create the pods and RBAC objects
kubectl apply -f rbac.yaml
kubectl create -f calico.yaml
The Calico documentation requires kubelet to run with the cni plugin enabled (--network-plugin=cni), and kube-proxy
must not be started with --masquerade-all (it conflicts with Calico policy), so every kubelet and kube-proxy config file must be modified.
# add the following parameter to the run arguments in every kubelet config
vim /etc/kubernetes/kubelet
--network-plugin=cni \
# at this step it is best to restart the kubelet and docker services, to avoid errors from stale configuration
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
After startup, check the pods:
[root@master ~]# kubectl get pod -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE
calico-node-8977h 2/2 Running 0 2m 192.168.60.25 node
calico-node-bl9mf 2/2 Running 0 2m 192.168.60.24 master
calico-policy-controller-79bc74b848-7l6zb 1/1 Running 0 2m 192.168.60.24 master
Pod yaml reference: https://mritd.me/2017/07/31/calico-yml-bug/
calicoctl
Since calicoctl 1.0, everything calicoctl manages is a resource: the ip pools, profiles, policies, etc. of earlier versions are all resources now. Resources are defined in yaml or json, created and applied with calicoctl create or calicoctl apply, and inspected with calicoctl get.
Download calicoctl
wget https://github.com/projectcalico/calicoctl/releases/download/v3.1.3/calicoctl
chmod +x calicoctl
mv calicoctl /usr/bin/
# if the download fails, scroll back up; I have uploaded it to the Baidu cloud link
Verify that calicoctl installed correctly:
[root@master yaml]# calicoctl version
Version: v1.3.0
Build date:
Git commit: d2babb6
Configure calicoctl's datastore
[root@master ~]# mkdir -p /etc/calico/
# edit the calicoctl configuration file
The download defaults to 3.1; change the version in the URL to download 2.6 instead.
The 2.6 configuration is as follows:
cat > /etc/calico/calicoctl.cfg<<EOF
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "https://192.168.30.147:2379"
  etcdKeyFile: "/etc/etcd/ssl/etcd-key.pem"
  etcdCertFile: "/etc/etcd/ssl/etcd.pem"
  etcdCACertFile: "/etc/etcd/ssl/etcd-root-ca.pem"
EOF
# calicoctl needs to connect to etcd; the address here is the etcd endpoint (on the master)
For 3.1, only the corresponding fields need changing:
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "etcdv3"
  etcdEndpoints: "https://192.168.30.147:2379"
  etcdKeyFile: "/etc/etcd/ssl/etcd-key.pem"
  etcdCertFile: "/etc/etcd/ssl/etcd.pem"
  etcdCACertFile: "/etc/etcd/ssl/etcd-root-ca.pem"
Official docs: https://docs.projectcalico.org/v3.1/usage/calicoctl/configure/
Each version has its own configuration format; consult the official docs.
# check calico status
[root@master calico]# calicoctl node status
Calico process is running.
IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+---------------+-------------------+-------+----------+-------------+
| 192.168.30.148 | node-to-node mesh | up | 06:13:41 | Established |
+---------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
Check the deployments:
[root@master ~]# kubectl get deployment --namespace=kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
calico-kube-controllers 1 1 1 1 4h
calico-policy-controller 0 0 0 0 4h
[root@master ~]# kubectl get pods --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
calico-kube-controllers-b785696ff-b7kjv 1/1 Running 0 4h 192.168.30.148 node
calico-node-szl6m 2/2 Running 0 4h 192.168.30.148 node
calico-node-tl4xc 2/2 Running 0 4h 192.168.30.147 master
Check everything that was created:
[root@master ~]# kubectl get pod,svc -n kube-system
NAME READY STATUS RESTARTS AGE
pod/calico-kube-controllers-b785696ff-b7kjv 1/1 Running 0 4h
pod/calico-node-szl6m 2/2 Running 0 4h
pod/calico-node-tl4xc 2/2 Running 0 4h
pod/kube-dns-66544b5b44-vg8lw 2/3 Running 5 4m
Testing
With calico created, we need to verify that it works.
cat > test.service.yaml << EOF
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31000
  type: NodePort
EOF
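The nodePort has to fall inside the apiserver's --service-node-port-range, which defaults to 30000-32767, so 31000 above qualifies. A trivial check of that rule:

```python
# default --service-node-port-range is 30000-32767
NODE_PORT_MIN, NODE_PORT_MAX = 30000, 32767

def valid_node_port(port: int) -> bool:
    """True when the port lies inside the default NodePort range."""
    return NODE_PORT_MIN <= port <= NODE_PORT_MAX

assert valid_node_port(31000)      # the port used above
assert not valid_node_port(6443)   # ordinary ports are rejected by the apiserver
```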
## the exposed NodePort is 31000
Write the deployment file:
cat > test.deploy.yaml << EOF
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.0-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
EOF
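The Service finds these pods by labels: its selector must match the pod template's labels key for key. Conceptually (a simplified sketch, not the real apiserver logic):

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A service selects a pod when every selector key/value appears in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

service_selector = {"app": "nginx"}   # from test.service.yaml
pod_labels = {"app": "nginx"}         # from the deployment's pod template

assert selector_matches(service_selector, pod_labels)
assert not selector_matches(service_selector, {"app": "demo"})
```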
Kube-DNS installation
Download kube-dns
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/kube-dns/kube-dns.yaml.in
Download it manually and rename it.
## newer versions add a lot of new content; if you are afraid of editing it wrong, download my bundle directly. The file ties into the kubelet configuration, e.g. 10.254.0.2 and cluster.local.
## using the yaml I provide is recommended
sed -i 's/$DNS_DOMAIN/cluster.local/gi' kube-dns.yaml
sed -i 's/$DNS_SERVER_IP/10.254.0.2/gi' kube-dns.yaml
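The two sed commands simply instantiate the $DNS_DOMAIN and $DNS_SERVER_IP placeholders in the template. The same substitution sketched in Python (the template fragment is illustrative, not the full kube-dns.yaml.in):

```python
# an illustrative fragment of kube-dns.yaml.in
template = (
    "clusterIP: $DNS_SERVER_IP\n"
    "args:\n"
    "- --domain=$DNS_DOMAIN.\n"
)

# equivalent of the two sed -i 's/.../.../gi' commands
rendered = (template
            .replace("$DNS_SERVER_IP", "10.254.0.2")
            .replace("$DNS_DOMAIN", "cluster.local"))

assert "$DNS_" not in rendered
assert "clusterIP: 10.254.0.2" in rendered
assert "--domain=cluster.local." in rendered
```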
Import the image
docker load -i kube-dns.tar
## importing is optional; by default the images are pulled from wherever the yaml points. If you use the imported images, make sure the yaml references the same ones!
Create the pod
kubectl create -f kube-dns.yaml
# change the image addresses in the yaml to match the locally imported images
Check the pods:
[root@master ~]# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-b49d9b875-8bwz4 1/1 Running 0 3h
calico-node-5vnsh 2/2 Running 0 3h
calico-node-d8gqr 2/2 Running 0 3h
kube-dns-864b8bdc77-swfw5 3/3 Running 0 2h
Verification
# create a set of pods and a service, then check that pod-to-pod networking is normal
[root@master test]# cat demo.deploy.yml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: daocloud.io/library/tomcat:6.0-jre7
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
While we are at it, verify in-cluster and external connectivity:
kubectl get pod --all-namespaces -o wide
kubectl get svc -o wide
kubectl exec -it $pod_name bash
Once inside the container:
curl -I $pod_ip:8080
Deploy DNS horizontal autoscaling
Download from GitHub
GitHub: https://github.com/kubernetes/kubernetes/tree/release-1.8/cluster/addons/dns-horizontal-autoscaler
What the dns-horizontal-autoscaler-rbac.yaml file does:
It simply creates three resources: a ServiceAccount, a ClusterRole, and a ClusterRoleBinding; that is, it creates an account, creates a role carrying the permissions, and binds the account to the role.
Import the image first, otherwise pulling it is too slow
### both node and master need it
[root@node ~]# docker load -i gcr.io_google_containers_cluster-proportional-autoscaler-amd64_1.1.2-r2.tar
3fb66f713c9f: Loading layer 4.221 MB/4.221 MB
a6851b15f08c: Loading layer 45.68 MB/45.68 MB
Loaded image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2-r2
Check the image:
[root@master ~]# docker images|grep cluster
gcr.io/google_containers/cluster-proportional-autoscaler-amd64 1.1.2-r2 7d892ca550df 13 months ago 49.6 MB
Make sure the yaml references this image:
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
An rbac file also needs to be downloaded:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/kube-dns/kube-dns.yaml.in
kubectl create -f dns-horizontal-autoscaler-rbac.yaml
kubectl create -f dns-horizontal-autoscaler.yaml
## if downloaded directly, the configuration needs adjusting
The autoscaler yaml file:
[root@master calico]# cat dns-horizontal-autoscaler.yaml
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
kind: ServiceAccount
apiVersion: v1
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-dns-autoscaler
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
- apiGroups: [""]
  resources: ["replicationcontrollers/scale"]
  verbs: ["get", "update"]
- apiGroups: ["extensions"]
  resources: ["deployments/scale", "replicasets/scale"]
  verbs: ["get", "update"]
# Remove the configmaps rule once below issue is fixed:
# kubernetes-incubator/cluster-proportional-autoscaler#16
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-dns-autoscaler
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: kube-dns-autoscaler
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:kube-dns-autoscaler
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
  labels:
    k8s-app: kube-dns-autoscaler
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-dns-autoscaler
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      containers:
      - name: autoscaler
        image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2-r2
        resources:
          requests:
            cpu: "20m"
            memory: "10Mi"
        command:
        - /cluster-proportional-autoscaler
        - --namespace=kube-system
        - --configmap=kube-dns-autoscaler
        # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base
        - --target=Deployment/kube-dns
        # When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
        # If using small nodes, "nodesPerReplica" should dominate.
        - --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true}}
        - --logtostderr=true
        - --v=2
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      serviceAccountName: kube-dns-autoscaler
[root@master calico]#
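With these parameters, linear mode sizes kube-dns as max(ceil(cores/coresPerReplica), ceil(nodes/nodesPerReplica)), and preventSinglePointFailure keeps at least 2 replicas once the cluster has more than one node. A sketch of that calculation, based on the cluster-proportional-autoscaler documentation:

```python
import math

def linear_replicas(cores: int, nodes: int,
                    cores_per_replica: int = 256,
                    nodes_per_replica: int = 16,
                    prevent_single_point_failure: bool = True) -> int:
    """Replica count used by cluster-proportional-autoscaler's linear mode."""
    replicas = max(math.ceil(cores / cores_per_replica),
                   math.ceil(nodes / nodes_per_replica))
    # with more than one node, keep at least 2 replicas for HA
    if prevent_single_point_failure and nodes > 1:
        replicas = max(replicas, 2)
    return replicas

# the 2-node cluster in this guide stays at the HA floor of 2 replicas
assert linear_replicas(cores=4, nodes=2) == 2
# a single-node cluster needs only 1
assert linear_replicas(cores=4, nodes=1) == 1
# large clusters scale with cores: ceil(600/256) = 3
assert linear_replicas(cores=600, nodes=20) == 3
```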