Deploying Kubernetes from Binaries: A Hands-on Guide
- 1. Theory basics
- 2. Cluster layout
- 3. Prepare the base environment
- 4. **Deploy the master nodes**
- 5. **Deploy the node components**
- 6. Verify the Kubernetes cluster
- 7. What to start after each cluster restart
- 8. About the author
1. Theory basics
1.1. The five Kubernetes components
The three master-node components
kube-apiserver
The single entry point to the whole cluster; it provides authentication, authorization, admission control, API registration and discovery.
kube-controller-manager
The controller manager.
Maintains the state of the cluster: failure detection, auto-scaling, rolling updates and so on, always driving resources toward their desired state.
kube-scheduler
The scheduler.
Schedules Pods onto suitable nodes according to scheduling policies, in two phases: predicates (filtering) and priorities (scoring).
The two node components
kubelet
The agent running on every cluster node. kubelet uses a variety of mechanisms to make sure containers are running and healthy; it does not manage containers that were not created by Kubernetes. It receives the desired state of a Pod (replica count, image, network, etc.) and calls the container runtime to realize that state.
kubelet periodically reports the node's status to the apiserver, which the scheduler uses as input for scheduling decisions. It also garbage-collects unused images and containers to avoid wasting disk space.
kube-proxy
kube-proxy is the network proxy running on every node and one of the components that implement the Service resource. It ties the Pod network to the cluster (Service) network; the Service forwarding rules on each node are kept up to date by kube-proxy, which talks to the apiserver (backed by etcd) to learn about rule changes.
There are three Service traffic forwarding modes: userspace (abandoned, very poor performance), iptables (poor performance, complex rules, being phased out) and ipvs (good performance, clear forwarding rules).
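A quick way to confirm which mode is actually in effect on a node, once kube-proxy is running (a sketch that assumes the ipvs deployment described later in this guide; ipvsadm is installed in section 5.2):
# list the ipvs virtual servers and their backends
ipvsadm -Ln
# each Service ClusterIP (192.168.x.x) should appear as a virtual server forwarding to Pod IPs (172.7.x.x)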
1.2. The three Kubernetes networks
Node network
A real network: the host (machine) network.
Suggested range: 10.4.7.0/24
It is advisable to use different IP ranges to separate different businesses, server rooms or data centers.
Pod network
A real network: the network the containers run in.
Suggested range: 172.7.21.0/24, and it is advisable to tie the Pod subnet to the node IP.
For example, if the node IP is 10.4.7.21, the Pod network is 172.7.21.0/24.
Service network
A virtual network, also called the cluster network (Cluster IP), used for communication inside the cluster.
Built on top of the Pod network; it mainly solves service discovery and load balancing.
kube-proxy connects the Pod network and the Service network.
Suggested range: 192.168.0.0/16
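Once the cluster is up, the three networks can be told apart on any node roughly as follows (a sketch; interface names such as ens32 depend on your environment):
ip addr show ens32 # node network: the real NIC, 10.4.7.x
ip addr show docker0 # Pod network: the Docker bip, e.g. 172.7.21.1/24 on hdss7-21
kubectl get svc -o wide # Service network: ClusterIPs from 192.168.0.0/16 (virtual, no interface)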
2. Cluster layout
Hostname | IP address |
---|---|
hdss7-11.host.com | 10.4.7.11 |
hdss7-12.host.com | 10.4.7.12 |
hdss7-21.host.com | 10.4.7.21 |
hdss7-22.host.com | 10.4.7.22 |
hdss7-200.host.com | 10.4.7.200 |
3. Prepare the base environment
3.1. Prepare the virtual machines (VMware is used here)
Notes
How to handle the VMnet1 and VMnet8 adapters missing on Windows
Network settings (VMnet8 adapter, gateway)
Virtual machine parameters
3.2. Set the hostnames
# Set the hostname
hostnamectl set-hostname hdss7-xx.host.com
# Reboot
reboot
3.3. Configure the network interface
# ifcfg-ens32: the interface name may differ on other platforms
vi /etc/sysconfig/network-scripts/ifcfg-ens32
# File contents
TYPE="Ethernet"
BOOTPROTO="none"
NAME="ens32"
DEVICE="ens32"
ONBOOT="yes"
IPADDR="10.4.7.11"
NETMASK="255.255.255.0"
GATEWAY="10.4.7.254"
DNS1="10.4.7.11"
# Restart the network
systemctl restart network
# Test
ping www.baidu.com
3.4. Connect to the servers with Xshell
3.5. Disable the firewall and SELinux
# Check the kernel version (Docker requires 3.10 or later)
uname -a
# Stop the firewall and disable SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
3.6. Configure the yum repositories
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
3.7. Install common tools
# Explanation of the packages
# wget: downloads files from a given URL
# net-tools: classic networking tools (netstat, ifconfig, ...)
# telnet: a virtual-terminal protocol in the TCP/IP suite, used here to test connections to remote hosts and ports
# tree: prints directory trees
# nmap: a powerful network scanning and testing tool
# sysstat: a package of tools for collecting system performance data (CPU usage, disk and network throughput, etc.), useful for judging whether the system is running normally
# lrzsz: upload/download files through the terminal (rz/sz)
# dos2unix: converts Windows-format text files to Unix/Linux format
# bind-utils: the DNS tools shipped with BIND (dig, host, nslookup, nsupdate), used for name resolution and DNS debugging
yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y
3.8. Install the BIND DNS service
Look up an IP address by domain name (currently resolved through the gateway DNS):
[root@hdss-200 ~]# nslookup www.qq.com
Server: 10.4.7.254
Address: 10.4.7.254#53
Non-authoritative answer:
www.qq.com canonical name = https.qq.com.
Name: https.qq.com
Address: 125.39.52.26
Name: https.qq.com
Address: 2402:4e00:8030:1::7d
On hdss7-11.host.com
Install BIND 9
yum install bind -y
# Verify
[root@hdss7-11 ~]# rpm -qa bind
bind-9.11.4-16.P2.el7_8.6.x86_64
Configure BIND 9
# Main configuration file
# vi /etc/named.conf
listen-on port 53 { 10.4.7.11; }; # listen address; remove the listen-on-v6 line
allow-query { any; }; # which clients may use this server for name resolution
forwarders { 10.4.7.254; }; # upstream DNS (the gateway here)
recursion yes; # resolve queries recursively
dnssec-enable no; # save resources
dnssec-validation no;
# Zone configuration file
# vi /etc/named.rfc1912.zones
# Host domain
zone "host.com" IN {
type master;
file "host.com.zone";
allow-update { 10.4.7.11; };
};
# Business domain
zone "od.com" IN {
type master;
file "od.com.zone";
allow-update { 10.4.7.11; };
};
# Configure the zone data files
# Host-domain zone data file
# vi /var/named/host.com.zone
$ORIGIN host.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.host.com. dnsadmin.host.com. (
2020110201 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.host.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
hdss7-11 A 10.4.7.11
hdss7-12 A 10.4.7.12
hdss7-21 A 10.4.7.21
hdss7-22 A 10.4.7.22
hdss7-200 A 10.4.7.200
# Business-domain zone data file
# vi /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.od.com. dnsadmin.od.com. (
2020110201 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
Check the configuration and start BIND 9
named-checkconf
systemctl start named
netstat -lntup|grep 53
Verify
[root@hdss7-11 ~]# dig -t A hdss7-11.host.com @10.4.7.11 +short
10.4.7.11
[root@hdss7-11 ~]# dig -t A hdss7-12.host.com @10.4.7.11 +short
10.4.7.12
[root@hdss7-11 ~]# dig -t A hdss7-21.host.com @10.4.7.11 +short
10.4.7.21
[root@hdss7-11 ~]# dig -t A hdss7-22.host.com @10.4.7.11 +short
10.4.7.22
[root@hdss7-11 ~]# dig -t A hdss7-200.host.com @10.4.7.11 +short
10.4.7.200
Configure the DNS clients
On all Linux hosts
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DNS1=10.4.7.11
##########
systemctl restart network
##########
vi /etc/resolv.conf
search host.com
nameserver 10.4.7.11
On the Windows host
Change the DNS of the VMnet8 adapter to 10.4.7.11
Verify
Linux
ping www.baidu.com
ping hdss7-200
Windows
ping hdss7-200.host.com
3.9. Prepare the certificate-signing environment
On hdss7-200.host.com
Install cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*
Create the JSON config for the CA certificate signing request (CSR)
mkdir /opt/certs
vi /opt/certs/ca-csr.json
{
"CN": "OldboyEdu",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
],
"ca": {
"expiry": "175200h"
}
}
Generate the CA certificate and key
cd /opt/certs
cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
# The generated files (certificate and private key):
[root@hdss-200 certs]# ll
总用量 16
-rw-r--r--. 1 root root 993 11月 2 10:57 ca.csr
-rw-r--r--. 1 root root 328 11月 2 10:49 ca-csr.json
-rw-------. 1 root root 1679 11月 2 10:57 ca-key.pem
-rw-r--r--. 1 root root 1346 11月 2 10:57 ca.pem
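Optionally, the generated CA can be inspected with the cfssl-certinfo tool downloaded above (a quick sketch):
# print the subject, validity period and other details of the CA certificate
[root@hdss7-200 certs]# cfssl-certinfo -cert ca.pem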
3.10. Deploy Docker
On hdss7-21.host.com, hdss7-22.host.com and hdss7-200.host.com
Install
[root@hdss7-21 ~]# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
Configure
mkdir /etc/docker
vi /etc/docker/daemon.json
{
"graph": "/data/docker",
"storage-driver": "overlay2",
"insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
"registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
"bip": "172.7.21.1/24",
"exec-opts": ["native.cgroupdriver=systemd"],
"live-restore": true
}
##########
The bip must follow the host's IP.
Note: hdss7-21.host.com bip 172.7.21.1/24
hdss7-22.host.com bip 172.7.22.1/24
hdss7-200.host.com bip 172.7.200.1/24
Start
mkdir -p /data/docker
systemctl start docker
systemctl enable docker
docker version
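To confirm that daemon.json took effect, a quick sketch (field names in docker info may vary slightly between Docker versions):
docker info | grep -iE 'root dir|cgroup driver|live restore'
ip addr show docker0 # should carry the bip configured above, e.g. 172.7.21.1/24 on hdss7-21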
3.11. Deploy Harbor, the private Docker image registry
On hdss7-200.host.com
Download the software and extract it
Harbor on GitHub: https://github.com/goharbor/harbor
[root@hdss7-200 src]# tar xf harbor-offline-installer-v1.8.3.tgz -C /opt/
[root@hdss7-200 opt]# mv harbor/ harbor-v1.8.3
[root@hdss7-200 opt]# ln -s /opt/harbor-v1.8.3/ /opt/harbor
Configure
[root@hdss7-200 opt]# vi /opt/harbor/harbor.yml
hostname: harbor.od.com
http:
  port: 180
harbor_admin_password: Harbor12345
data_volume: /data/harbor
log:
  level: info
  rotate_count: 50
  rotate_size: 200M
  location: /data/harbor/logs
[root@hdss7-200 opt]# mkdir -p /data/harbor/logs
Install docker-compose
[root@hdss7-200 opt]# yum install docker-compose -y
Install Harbor
[root@hdss7-200 harbor]# /opt/harbor/install.sh
Check that Harbor started
[root@hdss7-200 harbor]# docker-compose ps
[root@hdss7-200 harbor]# docker ps -a
Configure the internal DNS record for Harbor (note: on 10.4.7.11)
[root@hdss7-11 ~]# vi /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.od.com. dnsadmin.od.com. (
2020110202 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
harbor A 10.4.7.200
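Reload named on hdss7-11 so the new record (and the bumped serial) takes effect, then resolve it:
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A harbor.od.com @10.4.7.11 +short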
Install and configure NGINX (note: on 10.4.7.200)
[root@hdss7-200 harbor]# yum install nginx -y
[root@hdss7-200 harbor]# vi /etc/nginx/conf.d/harbor.od.com.conf
server {
listen 80;
server_name harbor.od.com;
client_max_body_size 1000m;
location / {
proxy_pass http://127.0.0.1:180;
}
}
[root@hdss7-200 harbor]# nginx -t
[root@hdss7-200 harbor]# systemctl start nginx
[root@hdss7-200 harbor]# systemctl enable nginx
Open harbor.od.com in a browser and test
[root@hdss7-11 ~]# curl harbor.od.com
1. Open harbor.od.com in a browser — username: admin, password: Harbor12345
2. Create a project: public, access level: public
3. Pull an image and tag it
[root@hdss7-200 harbor]# docker pull nginx:1.7.9
[root@hdss7-200 harbor]# docker images |grep 1.7.9
[root@hdss7-200 harbor]# docker tag 84581e99d807 harbor.od.com/public/nginx:v1.7.9
4. Log in to Harbor and push the image to the registry
[root@hdss7-200 harbor]# docker login harbor.od.com
[root@hdss7-200 harbor]# docker push harbor.od.com/public/nginx:v1.7.9
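As a sanity check that the registry works from the other hosts, the pushed image can be pulled back on a node (a sketch, assuming DNS and Docker are already configured there):
[root@hdss7-21 ~]# docker pull harbor.od.com/public/nginx:v1.7.9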
4. Deploy the master nodes
4.1. Deploy the etcd cluster
The deployment below uses hdss7-12.host.com as the example
Cluster layout
Hostname | Role | IP address |
---|---|---|
hdss7-12.host.com | leader | 10.4.7.12 |
hdss7-21.host.com | follower | 10.4.7.21 |
hdss7-22.host.com | follower | 10.4.7.22 |
Create the CA-based signing profile config (on hdss7-200)
[root@hdss7-200 ~]# vi /opt/certs/ca-config.json
{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"server": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"client": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
},
"peer": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
Create the JSON config for the etcd peer certificate CSR
[root@hdss7-200 ~]# vi /opt/certs/etcd-peer-csr.json
{
"CN": "k8s-etcd",
"hosts": [
"10.4.7.11",
"10.4.7.12",
"10.4.7.21",
"10.4.7.22"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the etcd certificate
[root@hdss7-200 ~]# cd /opt/certs/
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json |cfssl-json -bare etcd-peer
Check the generated certificate files
[root@hdss7-200 certs]# ll
etcd-peer.csr
etcd-peer-csr.json
etcd-peer-key.pem
etcd-peer.pem
Create the etcd user
On hdss7-12
[root@hdss7-12 opt]# useradd -s /sbin/nologin -M etcd
Download the software, extract it and create a symlink
# https://github.com/etcd-io/etcd/tags?after=v3.3.12
[root@hdss7-12 src]# tar xf etcd-v3.1.20-linux-amd64.tar.gz -C /opt/
[root@hdss7-12 src]# cd ..
[root@hdss7-12 opt]# mv etcd-v3.1.20-linux-amd64/ etcd-v3.1.20
[root@hdss7-12 opt]# ln -s /opt/etcd-v3.1.20/ /opt/etcd
Create directories and copy the certificate files
# Create the certificate, data and log directories
mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
# Copy the generated certificate files (run from /opt/etcd/certs)
[root@hdss7-12 certs]# scp hdss7-200:/opt/certs/ca.pem .
[root@hdss7-12 certs]# scp hdss7-200:/opt/certs/etcd-peer.pem .
[root@hdss7-12 certs]# scp hdss7-200:/opt/certs/etcd-peer-key.pem .
Create the etcd startup script
[root@hdss7-12 ~]# vi /opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-7-12 \
--data-dir /data/etcd/etcd-server \
--listen-peer-urls https://10.4.7.12:2380 \
--listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--initial-advertise-peer-urls https://10.4.7.12:2380 \
--advertise-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
--initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
--ca-file ./certs/ca.pem \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
[root@hdss7-12 ~]# chmod +x /opt/etcd/etcd-server-startup.sh
Grant directory ownership
[root@hdss7-12 ~]# chown -R etcd.etcd /opt/etcd-v3.1.20/ /data/etcd/ /data/logs/etcd-server/
Install supervisor
[root@hdss7-12 ~]# yum install supervisor -y
[root@hdss7-12 ~]# systemctl start supervisord
[root@hdss7-12 ~]# systemctl enable supervisord
Create the supervisor config
[root@hdss7-12 ~]# vi /etc/supervisord.d/etcd-server.ini
[program:etcd-server-7-12]
command=/opt/etcd/etcd-server-startup.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/etcd ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start the etcd service and check it
[root@hdss7-12 ~]# supervisorctl update
[root@hdss7-12 ~]# supervisorctl status
[root@hdss7-12 ~]# netstat -lntup|grep etcd
Deploy and start the remaining cluster members
What differs on each node
# /opt/etcd/etcd-server-startup.sh
--name
--listen-peer-urls
--listen-client-urls
--initial-advertise-peer-urls
--advertise-client-urls
##########
# /etc/supervisord.d/etcd-server.ini
[program:etcd-server-7-12]
Check the cluster state
[root@hdss7-22 etcd]# ./etcdctl cluster-health
[root@hdss7-22 etcd]# ./etcdctl member list
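A further smoke test is to write and read back a key (a sketch; the etcdctl bundled with etcd v3.1 defaults to the v2 API, so plain set/get works against the local listener):
[root@hdss7-22 etcd]# ./etcdctl set /smoke-test ok
[root@hdss7-22 etcd]# ./etcdctl get /smoke-test
[root@hdss7-22 etcd]# ./etcdctl rm /smoke-test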
4.2. Deploy the kube-apiserver cluster
Cluster layout
Hostname | Role | IP address |
---|---|---|
hdss7-21.host.com | kube-apiserver | 10.4.7.21 |
hdss7-22.host.com | kube-apiserver | 10.4.7.22 |
The deployment below uses hdss7-21.host.com as the example
Download the software, extract it and create a symlink
# Download link: https://github.com/kubernetes/kubernetes/releases/tag/v1.15.2 (GitHub may require a proxy)
# CHANGELOG-1.15.md -> server binaries -> kubernetes-server-linux-amd64.tar.gz
https://storage.googleapis.com/kubernetes-release/release/v1.15.2/kubernetes-server-linux-amd64.tar.gz
[root@hdss7-21 src]# tar xf kubernetes-server-linux-amd64-v1.15.2.tar.gz -C /opt
[root@hdss7-21 opt]# mv kubernetes/ kubernetes-v1.15.2
[root@hdss7-21 opt]# ln -s /opt/kubernetes-v1.15.2/ /opt/kubernetes
[root@hdss7-21 opt]# cd kubernetes
[root@hdss7-21 kubernetes]# rm -rf kubernetes-src.tar.gz
[root@hdss7-21 kubernetes]# cd server/bin
[root@hdss7-21 bin]# rm -f *.tar
[root@hdss7-21 bin]# rm -f *_tag
Sign the client certificate
On hdss7-200.host.com
Create the JSON config for the client certificate CSR
[root@hdss7-200 certs]# vi /opt/certs/client-csr.json
{
"CN": "k8s-node",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the client certificate
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client
Check the generated certificate files
[root@hdss7-200 certs]# ll
client.csr
client-csr.json
client-key.pem
client.pem
Sign the kube-apiserver certificate
Create the JSON config for the apiserver certificate CSR
[root@hdss7-200 certs]# vi /opt/certs/apiserver-csr.json
{
"CN": "k8s-apiserver",
"hosts": [
"127.0.0.1",
"192.168.0.1",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"10.4.7.10",
"10.4.7.21",
"10.4.7.22",
"10.4.7.23"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the kube-apiserver certificate
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
Check the generated certificate files
[root@hdss7-200 certs]# ll
apiserver.csr
apiserver-csr.json
apiserver-key.pem
apiserver.pem
Copy the certificate files to the node and create the configuration
# Copy the certificate files into /opt/kubernetes/server/bin/certs (create the directory first and cd into it)
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/ca.pem .
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/ca-key.pem .
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/client.pem .
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/client-key.pem .
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/apiserver.pem .
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/apiserver-key.pem .
# Create the audit policy configuration (the startup script below expects it at ./conf/audit.yaml under /opt/kubernetes/server/bin)
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
Create the apiserver startup script (/opt/kubernetes/server/bin/kube-apiserver.sh)
#!/bin/bash
./kube-apiserver \
--apiserver-count 2 \
--audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
--audit-policy-file ./conf/audit.yaml \
--authorization-mode RBAC \
--client-ca-file ./certs/ca.pem \
--requestheader-client-ca-file ./certs/ca.pem \
--enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--etcd-cafile ./certs/ca.pem \
--etcd-certfile ./certs/client.pem \
--etcd-keyfile ./certs/client-key.pem \
--etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
--service-account-key-file ./certs/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--service-node-port-range 3000-29999 \
--target-ram-mb=1024 \
--kubelet-client-certificate ./certs/client.pem \
--kubelet-client-key ./certs/client-key.pem \
--log-dir /data/logs/kubernetes/kube-apiserver \
--tls-cert-file ./certs/apiserver.pem \
--tls-private-key-file ./certs/apiserver-key.pem \
--v 2
Set permissions and create the log directory
[root@hdss7-21 bin]# chmod +x kube-apiserver.sh
[root@hdss7-21 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver
Create the supervisor config
[root@hdss7-21 bin]# vi /etc/supervisord.d/kube-apiserver.ini
[program:kube-apiserver-7-21]
command=/opt/kubernetes/server/bin/kube-apiserver.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start the service and check it
[root@hdss7-21 bin]# supervisorctl update
[root@hdss7-21 bin]# supervisorctl status
[root@hdss7-21 bin]# netstat -nltup|grep kube-api
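A simple health probe (a sketch; kube-apiserver 1.15 still serves the insecure port 8080 on localhost by default, the same port controller-manager and scheduler use below):
[root@hdss7-21 bin]# curl -s http://127.0.0.1:8080/healthz
[root@hdss7-21 bin]# curl -s http://127.0.0.1:8080/version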
Deploy and start the other cluster member (hdss7-22); the only difference:
/etc/supervisord.d/kube-apiserver.ini
[program:kube-apiserver-7-21]
4.3. Deploy the layer-4 reverse proxy
Cluster layout
Hostname | Role | IP address | VIP |
---|---|---|---|
hdss7-11.host.com | L4 | 10.4.7.11 | 10.4.7.10 |
hdss7-12.host.com | L4 | 10.4.7.12 | 10.4.7.10 |
Install NGINX and keepalived
[root@hdss7-12 etcd]# yum install nginx keepalived -y
Install NGINX and keepalived on both hdss7-11.host.com and hdss7-12.host.com, then add the following stream block at the top level of /etc/nginx/nginx.conf (outside the http block):
[root@hdss7-11 conf.d]# vi /etc/nginx/nginx.conf
stream {
upstream kube-apiserver {
server 10.4.7.21:6443 max_fails=3 fail_timeout=30s;
server 10.4.7.22:6443 max_fails=3 fail_timeout=30s;
}
server {
listen 7443;
proxy_connect_timeout 2s;
proxy_timeout 900s;
proxy_pass kube-apiserver;
}
}
[root@hdss7-11 etcd]# nginx -t
Configure keepalived on hdss7-11.host.com and hdss7-12.host.com
Port-check script
[root@hdss7-11 ~]# vi /etc/keepalived/check_port.sh
#!/bin/bash
#keepalived port-monitoring script
#Usage: reference it from keepalived.conf, e.g.
#vrrp_script check_port { #define a vrrp_script check
# script "/etc/keepalived/check_port.sh 6379" #port to monitor
# interval 2 #how often to run the check, in seconds
#}
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
if [ $PORT_PROCESS -eq 0 ];then
echo "Port $CHK_PORT Is Not Used,End."
exit 1
fi
else
echo "Check Port Cant Be Empty!"
fi
##########
chmod +x /etc/keepalived/check_port.sh
##########
# Configuration file (remove the file's existing default contents first)
# keepalived master:
[root@hdss7-11 conf.d]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id 10.4.7.11
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_port.sh 7443"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface ens32
virtual_router_id 251
priority 100
advert_int 1
mcast_src_ip 10.4.7.11
nopreempt
authentication {
auth_type PASS
auth_pass 11111111
}
track_script {
chk_nginx
}
virtual_ipaddress {
10.4.7.10
}
}
# keepalived backup:
[root@hdss7-12 conf.d]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id 10.4.7.12
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_port.sh 7443"
interval 2
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface ens32
virtual_router_id 251
mcast_src_ip 10.4.7.12
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 11111111
}
track_script {
chk_nginx
}
virtual_ipaddress {
10.4.7.10
}
}
Start the proxy and check
systemctl start nginx keepalived
systemctl enable nginx keepalived
netstat -lntup|grep nginx
ip addr
Notes
- The interface in the keepalived master/backup configs must match your actual NIC name: interface ens32
- Troubleshoot via the logs: tail -fn 200 /var/log/messages
- In production the VIP must not be allowed to flap back and forth casually (failover should be a deliberate action); a verification sketch follows this list
- nginx -s stop
- netstat -nltp| grep 7443
- ip addr
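To confirm that the VIP actually fronts the apiservers, a quick sketch (remember that with nopreempt the VIP will not move back automatically after the original master recovers):
# from any host: the VIP should answer on 7443 with a response from a kube-apiserver
curl -sk https://10.4.7.10:7443/version
# after simulating a failure as in the notes above, restore the stopped nginx
systemctl start nginx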
4.4. Deploy controller-manager
Cluster layout
Hostname | Role | IP address |
---|---|---|
hdss7-21.host.com | controller-manager | 10.4.7.21 |
hdss7-22.host.com | controller-manager | 10.4.7.22 |
The deployment below uses hdss7-21.host.com as the example
Create the startup script
On hdss7-21.host.com
[root@hdss7-21 bin]# vi /opt/kubernetes/server/bin/kube-controller-manager.sh
#!/bin/sh
./kube-controller-manager \
--cluster-cidr 172.7.0.0/16 \
--leader-elect true \
--log-dir /data/logs/kubernetes/kube-controller-manager \
--master http://127.0.0.1:8080 \
--service-account-private-key-file ./certs/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--root-ca-file ./certs/ca.pem \
--v 2
Set file permissions and create the log directory
[root@hdss7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
[root@hdss7-21 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager
Create the supervisor config
[root@hdss7-21 bin]# vi /etc/supervisord.d/kube-conntroller-manager.ini
[program:kube-controller-manager-7-21]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start the service and check it
[root@hdss7-21 bin]# supervisorctl update
[root@hdss7-21 bin]# supervisorctl status
Deploy and start the other cluster member
What differs:
/etc/supervisord.d/kube-conntroller-manager.ini
[program:kube-controller-manager-7-21]
4.5. Deploy kube-scheduler
Cluster layout
Hostname | Role | IP address |
---|---|---|
hdss7-21.host.com | kube-scheduler | 10.4.7.21 |
hdss7-22.host.com | kube-scheduler | 10.4.7.22 |
The deployment below uses hdss7-21.host.com as the example
Create the startup script
On hdss7-21.host.com
[root@hdss7-21 bin]# vi /opt/kubernetes/server/bin/kube-scheduler.sh
#!/bin/sh
./kube-scheduler \
--leader-elect \
--log-dir /data/logs/kubernetes/kube-scheduler \
--master http://127.0.0.1:8080 \
--v 2
Set file permissions and create the log directory
[root@hdss7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
[root@hdss7-21 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler
Create the supervisor config
[root@hdss7-21 bin]# vi /etc/supervisord.d/kube-scheduler.ini
[program:kube-scheduler-7-21]
command=/opt/kubernetes/server/bin/kube-scheduler.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start the service and check it
[root@hdss7-21 bin]# supervisorctl update
[root@hdss7-21 bin]# supervisorctl status
Deploy and start the other cluster member
What differs:
/etc/supervisord.d/kube-scheduler.ini
[program:kube-scheduler-7-21]
4.6. Check the master node
Create a kubectl symlink
[root@hdss7-21 bin]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
Check the master node
[root@hdss7-21 bin]# kubectl get cs
5. Deploy the node components
5.1. Deploy kubelet
Cluster layout
Hostname | Role | IP address |
---|---|---|
hdss7-21.host.com | kubelet | 10.4.7.21 |
hdss7-22.host.com | kubelet | 10.4.7.22 |
The deployment below uses hdss7-21.host.com as the example
Sign the kubelet certificate
On hdss7-200.host.com
Create the JSON config for the kubelet certificate CSR
[root@hdss7-200 certs]# vi kubelet-csr.json
{
"CN": "k8s-kubelet",
"hosts": [
"127.0.0.1",
"10.4.7.10",
"10.4.7.21",
"10.4.7.22",
"10.4.7.23",
"10.4.7.24",
"10.4.7.25",
"10.4.7.26",
"10.4.7.27",
"10.4.7.28"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the kubelet certificate
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
Check the generated certificate files
[root@hdss7-200 certs]# ll
kubelet.csr
kubelet-csr.json
kubelet-key.pem
kubelet.pem
Copy the certificate files to the node and create the configuration
On hdss7-21.host.com
Copy the certificate files
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/kubelet.pem .
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/kubelet-key.pem .
Create the kubeconfig
(1) set-cluster
[root@hdss7-21 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
--embed-certs=true \
--server=https://10.4.7.10:7443 \
--kubeconfig=kubelet.kubeconfig
(2) set-credentials
[root@hdss7-21 conf]# kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/certs/client.pem \
--client-key=/opt/kubernetes/server/bin/certs/client-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig
(3) set-context
[root@hdss7-21 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=kubelet.kubeconfig
(4) use-context
[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
(5) Check the generated kubelet.kubeconfig
[root@hdss7-21 conf]# ll
kubelet.kubeconfig
(6) k8s-node.yaml
(1) Create the manifest
[root@hdss7-21 conf]# vi k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
(2) Apply the resource manifest
[root@hdss7-21 conf]# kubectl create -f k8s-node.yaml
(3) Check the cluster role binding and its attributes
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node -o yaml
(4) Copy kubelet.kubeconfig to hdss7-22.host.com
[root@hdss7-22 conf]# scp hdss7-21:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig .
Prepare the pause base image
On hdss7-200.host.com
Pull the pause image
[root@hdss7-200 ~]# docker pull kubernetes/pause
Push it to the private Harbor registry
(1) Tag the image
[root@hdss7-200 ~]# docker images -a
[root@hdss7-200 ~]# docker tag f9d5de079539 harbor.od.com/public/pause:latest
[root@hdss7-200 ~]# docker images -a
(2) Push it to Harbor
[root@hdss7-200 ~]# docker push harbor.od.com/public/pause:latest
Create the kubelet startup script
On hdss7-21.host.com
[root@hdss7-21 conf]# vi /opt/kubernetes/server/bin/kubelet.sh
#!/bin/sh
./kubelet \
--anonymous-auth=false \
--cgroup-driver systemd \
--cluster-dns 192.168.0.2 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on="false" \
--client-ca-file ./certs/ca.pem \
--tls-cert-file ./certs/kubelet.pem \
--tls-private-key-file ./certs/kubelet-key.pem \
--hostname-override hdss7-21.host.com \
--image-gc-high-threshold 20 \
--image-gc-low-threshold 10 \
--kubeconfig ./conf/kubelet.kubeconfig \
--log-dir /data/logs/kubernetes/kube-kubelet \
--pod-infra-container-image harbor.od.com/public/pause:latest \
--root-dir /data/kubelet
Set permissions and create the directories
[root@hdss7-21 conf]# chmod +x /opt/kubernetes/server/bin/kubelet.sh
[root@hdss7-21 conf]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
Create the supervisor config
[root@hdss7-21 conf]# vi /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-7-21]
command=/opt/kubernetes/server/bin/kubelet.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start the service and check it
[root@hdss7-21 conf]# supervisorctl update
[root@hdss7-21 conf]# supervisorctl status
Deploy the other node
What differs:
/opt/kubernetes/server/bin/kubelet.sh
--hostname-override
##########
/etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-7-21]
Check all nodes and label them
[root@hdss7-21 bin]# kubectl get nodes
[root@hdss7-21 bin]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
[root@hdss7-21 bin]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=
[root@hdss7-21 bin]# kubectl get nodes
5.2. Deploy kube-proxy
Cluster layout
Hostname | Role | IP address |
---|---|---|
hdss7-21.host.com | kube-proxy | 10.4.7.21 |
hdss7-22.host.com | kube-proxy | 10.4.7.22 |
The deployment below uses hdss7-21.host.com as the example
Sign the kube-proxy certificate
On hdss7-200.host.com
Create the JSON config for the kube-proxy certificate CSR
[root@hdss7-200 certs]# vi kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the kube-proxy client certificate
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
Check the generated certificate files
[root@hdss7-200 certs]# ll
kube-proxy-client.csr
kube-proxy-client-key.pem
kube-proxy-client.pem
kube-proxy-csr.json
Copy the certificate files to the node and create the configuration
On hdss7-21.host.com
Copy the certificate files
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/kube-proxy-client.pem .
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/kube-proxy-client-key.pem .
Create the kubeconfig
(1) set-cluster
[root@hdss7-21 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
--embed-certs=true \
--server=https://10.4.7.10:7443 \
--kubeconfig=kube-proxy.kubeconfig
(2) set-credentials
[root@hdss7-21 conf]# kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/server/bin/certs/kube-proxy-client.pem \
--client-key=/opt/kubernetes/server/bin/certs/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
(3) set-context
[root@hdss7-21 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
(4) use-context
[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
(5) Copy kube-proxy.kubeconfig into the conf directory on hdss7-22.host.com
[root@hdss7-22 conf]# scp hdss7-21:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig .
Create the kube-proxy startup script
On hdss7-21.host.com
Load the ipvs kernel modules
[root@hdss7-21 bin]# lsmod |grep ip_vs
[root@hdss7-21 bin]# vi /root/ipvs.sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
/sbin/modinfo -F filename $i &>/dev/null
if [ $? -eq 0 ];then
/sbin/modprobe $i
fi
done
[root@hdss7-21 bin]# chmod +x /root/ipvs.sh
[root@hdss7-21 bin]# sh /root/ipvs.sh
[root@hdss7-21 bin]# lsmod |grep ip_vs
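The modules loaded this way do not survive a reboot; one way to make them persist is to hook the script into rc.local (a sketch):
[root@hdss7-21 bin]# chmod +x /etc/rc.d/rc.local
[root@hdss7-21 bin]# echo '/bin/bash /root/ipvs.sh' >> /etc/rc.d/rc.local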
Create the startup script
[root@hdss7-21 bin]# vi /opt/kubernetes/server/bin/kube-proxy.sh
#!/bin/sh
./kube-proxy \
--cluster-cidr 172.7.0.0/16 \
--hostname-override hdss7-21.host.com \
--proxy-mode=ipvs \
--ipvs-scheduler=nq \
--kubeconfig ./conf/kube-proxy.kubeconfig
Set permissions and create the log directory
[root@hdss7-22 bin]# ls -l /opt/kubernetes/server/bin/conf/|grep kube-proxy
[root@hdss7-22 bin]# chmod +x /opt/kubernetes/server/bin/kube-proxy.sh
[root@hdss7-22 bin]# mkdir -p /data/logs/kubernetes/kube-proxy
Create the supervisor config
[root@hdss7-21 bin]# vi /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-7-21]
command=/opt/kubernetes/server/bin/kube-proxy.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start the service and check it
[root@hdss7-21 bin]# supervisorctl update
[root@hdss7-21 bin]# supervisorctl status
[root@hdss7-21 bin]# yum install ipvsadm -y
[root@hdss7-21 bin]# ipvsadm -Ln
[root@hdss7-21 bin]# kubectl get svc
Deploy the other node
What differs:
/opt/kubernetes/server/bin/kube-proxy.sh
--hostname-override
##########
/etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-7-21]
6. Verify the Kubernetes cluster
Create a resource manifest on any node
On hdss7-21.host.com
[root@hdss7-21 ~]# vi /root/nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:v1.7.9
        ports:
        - containerPort: 80
Apply the manifest and check
On hdss7-21.host.com
[root@hdss7-21 bin]# kubectl create -f /root/nginx-ds.yaml
[root@hdss7-21 bin]# kubectl get pods
[root@hdss7-21 bin]# kubectl get pods -o wide
[root@hdss7-21 bin]# curl 172.7.21.2
On hdss7-22.host.com
[root@hdss7-22 bin]# kubectl get pods
[root@hdss7-22 bin]# kubectl get pods -o wide
[root@hdss7-22 bin]# curl 172.7.22.2
Check whether Kubernetes is fully up
[root@hdss7-21 bin]# kubectl get cs
[root@hdss7-21 bin]# kubectl get node
[root@hdss7-21 bin]# kubectl get pods
7. What to start after each cluster restart
- Check the configuration and start BIND 9
named-checkconf
systemctl start named
netstat -lntup|grep 53
- Start NGINX and bring Harbor back up
/opt/harbor/install.sh
nginx -t
systemctl start nginx
systemctl enable nginx
- The layer-4 reverse proxy
systemctl start nginx keepalived
systemctl enable nginx keepalived
netstat -lntup|grep nginx
ip addr
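The Kubernetes components themselves are managed by Docker and supervisord, which were enabled as system services earlier, so on hdss7-12/21/22 (and Docker on hdss7-200) a rough post-reboot check is (a sketch):
systemctl status docker supervisord # both should come back up automatically
supervisorctl status # etcd, kube-apiserver, controller-manager, scheduler, kubelet and kube-proxy should show RUNNING
sh /root/ipvs.sh # reload the ipvs modules on the node hosts if they were not made persistent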
8. About the author
- Nickname: 雪山飞狐
- Motto: 只言片语任我说,提笔句句无需忖。落笔不知寄何人,唯有邀友共斟酌! (roughly: a few casual words written freely, offered to friends to mull over together)
- GitHub: https://github.com/fanjianhai/K8S
- Email: 594042358@qq.com
Feel free to reach out!