Docker & K8s — Deploying K8s Step by Step (Binary Installation)

Docker is powerful, but it also has its shortcomings.


So we need a container orchestration tool:

  • Docker Compose, Docker Swarm
  • Mesosphere + Marathon
  • Kubernetes (K8s)

Kubernetes (K8s) Overview

  • Official site: https://kubernetes.io/zh/
  • Origin: Google's internal Borg system, later rewritten in Go and donated as open source to the CNCF
  • What it is: an open-source container orchestration framework (with an extremely rich ecosystem)
  • GitHub: https://github.com/kubernetes/kubernetes/releases


Kubernetes Quick Start

Four Groups of Basic Concepts

  • Pod / Pod controllers

    Pod

    • A Pod is the smallest logical unit (atomic unit) that can be run in K8s
    • One Pod can run multiple containers, and they share the UTS + NET + IPC namespaces
    • Think of a Pod as a pea pod: each container in the same Pod is one of the peas
    • Running multiple containers in one Pod is also called the sidecar pattern

    Pod controllers

    • A template for launching Pods, used to guarantee that Pods started in K8s always run the way people expect
    • There are many Pod controllers: Deployment, DaemonSet…
  • Name / Namespace

    Name

    • Internally, K8s uses "resources" to define each logical concept (capability), so every "resource" should have its own "name"
    • A "resource" carries information such as its API version, kind, and metadata
    • The "name" is usually defined in the resource's metadata

    Namespace

    • As projects multiply, staff grows, and the cluster scales up, a way to isolate the various "resources" inside K8s is needed; that is the namespace
    • A namespace can be thought of as a virtual cluster group inside K8s
    • Defaults: default, kube-system, kube-public
    • Querying a specific "resource" in K8s requires the corresponding namespace
  • Label / Label selectors

    Label

    • Labels make it easy to classify and manage resource objects
    • One label can be attached to multiple resources, and one resource can carry multiple labels: a many-to-many relationship
    • Form: key=value
    • A similar mechanism: annotations

    Label selectors

    • Once resources are labeled, label selectors can filter resources by their labels
    • There are currently two kinds of label selectors: equality-based and set-based
    • Many resources support embedded label-selector fields: matchLabels, matchExpressions
  • Service / Ingress

    Service

    • In the K8s world, every Pod is assigned its own IP address, but that IP disappears when the Pod is destroyed
    • The Service is the core concept introduced to solve exactly this problem
    • A Service can be seen as the external access point for a group of Pods that provide the same service
    • Which Pods a Service applies to is defined through label selectors (see the manifest sketch after this list)

    Ingress

    • Ingress is the layer-7 (OSI model) entry point that a K8s cluster exposes to the outside world
    • A Service can only schedule traffic at L4, expressed as IP + port
    • Ingress can schedule business traffic across different domains and different URL paths

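To make the relationship between Pods, labels, and Services concrete, here is a minimal manifest sketch: a Service using an equality-based label selector to front every Pod labeled app=nginx-ds (the same label the DaemonSet at the end of this walkthrough uses). The name and ports are illustrative, not part of the original deployment.

apiVersion: v1
kind: Service
metadata:
  name: nginx-ds            # hypothetical name, for illustration only
spec:
  selector:
    app: nginx-ds           # equality-based label selector: matches Pods labeled app=nginx-ds
  ports:
  - port: 80                # the Service's own (cluster-internal) port
    targetPort: 80          # the containerPort on the selected Pods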

Common Ways to Install and Deploy K8s

  • Minikube
  • Binary installation (first choice for production; also the best way for newcomers to learn the internals)
  • Deployment with kubeadm (relatively simple; for experienced users)

Preparation

  • Five virtual machines
  • 2 CPUs / 2 GB RAM / 50 GB disk each, on the 192.168.12.0/24 network
  • CentOS 8

Install CentOS 8, choosing the Web Server installation profile; set the VM network mode to NAT.

On Windows, configure the IPv4 settings of the VMnet8 adapter:

# IP
192.168.12.1
# Netmask
255.255.255.0
# Preferred DNS
192.168.12.11

VMware VMnet8 (NAT) settings:

# Subnet IP
192.168.12.0
# Netmask
255.255.255.0
# Gateway
192.168.12.254

CentOS 8: IP configuration, hostname, repo sources, and common tools

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0 
TYPE=Ethernet
IPADDR=192.168.12.200
NETMASK=255.255.255.0
GATEWAY=192.168.12.254
DNS1=192.168.12.254
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
[root@localhost ~]# nmcli c reload
[root@localhost ~]# reboot
[root@localhost ~]# ping baidu.com
PING baidu.com (39.156.69.79) 56(84) bytes of data.
64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=1 ttl=128 time=24.1 ms
64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=2 ttl=128 time=23.8 ms
^Z
[1]+  Stopped                 ping baidu.com

[root@localhost ~]# hostnamectl set-hostname hdss12-200.host.com
[root@hdss12-200 ~]# vi /etc/selinux/config
SELINUX=disabled
[root@hdss12-200 ~]# reboot
[root@hdss12-200 ~]# getenforce
Disabled
[root@hdss12-200 ~]# systemctl stop firewalld; systemctl disable firewalld
[root@hdss12-200 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
[root@hdss12-200 ~]# yum clean all
[root@hdss12-200 ~]# yum makecache
[root@hdss12-200 ~]# yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y

Deploying DNS with BIND 9

On hdss12-11:

[root@hdss12-11 ~]# yum install bind -y
[root@hdss12-11 ~]# rpm -qa bind
bind-9.11.26-4.el8_4.x86_64
[root@hdss12-11 ~]# vi /etc/named.conf
listen-on port 53 { 192.168.12.11; };
allow-query     { any; };
forwarders      { 192.168.12.254; };
dnssec-enable no;
dnssec-validation no;
[root@hdss12-11 ~]# named-checkconf
Zone declaration file:

/etc/named.rfc1912.zones

zone "host.com" IN {
	type master;
	file "host.com.zone";
	allow-update { 192.168.12.11; };
};

zone "od.com" IN {
	type master;
	file "od.com.zone";
	allow-update { 192.168.12.11; };
};

/var/named/host.com.zone

$ORIGIN host.com.
$TTL 600         ; 10 minutes
@       IN      SOA     dns.host.com. dnsadmin.host.com. (
                        20210618        ; serial
                          10800         ; refresh (3 hours)
                            900         ; retry (15 minutes)
                         604800         ; expire (1 week)
                          86400         ; minimum (1 day)
                          )
                    NS dns.host.com.
$TTL 60          ; 1 minute
dns              A    192.168.12.11
HDSS12-11        A    192.168.12.11
HDSS12-12        A    192.168.12.12
HDSS12-21        A    192.168.12.21
HDSS12-22        A    192.168.12.22
HDSS12-200       A    192.168.12.200

/var/named/od.com.zone

$ORIGIN od.com.
$TTL 600         ; 10 minutes
@       IN      SOA     dns.host.com. dnsadmin.host.com. (
                      2021061801        ; serial
                          10800         ; refresh (3 hours)
                            900         ; retry (15 minutes)
                         604800         ; expire (1 week)
                          86400         ; minimum (1 day)
                          )
                    NS dns.od.com.
$TTL 60          ; 1 minute
dns              A    192.168.12.11

Start named

[root@hdss12-11 ~]# systemctl start named
[root@hdss12-11 ~]# netstat -luntp|grep 53
tcp        0      0 192.168.12.11:53        0.0.0.0:*               LISTEN      27874/named         
tcp        0      0 127.0.0.1:953           0.0.0.0:*               LISTEN      27874/named         
tcp6       0      0 ::1:53                  :::*                    LISTEN      27874/named         
tcp6       0      0 ::1:953                 :::*                    LISTEN      27874/named         
udp        0      0 192.168.12.11:53        0.0.0.0:*                           27874/named         
udp6       0      0 ::1:53              
[root@hdss12-11 ~]# dig -t A hdss12-21.host.com @192.168.12.11 +short

Finally, point DNS1 on every machine at 192.168.12.11, and make sure the firewall is off. After that, every host can be pinged by domain name.
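With the zone files above, the dig query should print 192.168.12.21. If it prints nothing, the zones can be validated directly before restarting named (a quick sketch; named-checkzone ships with the bind package):

named-checkzone host.com /var/named/host.com.zone
named-checkzone od.com /var/named/od.com.zone
systemctl restart named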

Preparing the Certificate-Signing Environment

On the ops host HDSS12-200.

Download the tools

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
[root@hdss12-200 ~]# chmod u+x /usr/bin/cfssl*
[root@hdss12-200 ~]# mkdir /opt/certs/ ; cd /opt/certs/
[root@hdss12-200 certs]# vim /opt/certs/ca-csr.json
{
    "CN": "BigSmart",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}
# generate the self-signed root CA certificate
[root@hdss12-200 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
2021/06/19 02:20:23 [INFO] generating a new CA key and certificate from CSR
2021/06/19 02:20:23 [INFO] generate received request
2021/06/19 02:20:23 [INFO] received CSR
2021/06/19 02:20:23 [INFO] generating key: rsa-2048
2021/06/19 02:20:23 [INFO] encoded CSR
2021/06/19 02:20:23 [INFO] signed certificate with serial number 447914809879871710040761543206458018822100175097
[root@hdss12-200 certs]# ls
ca.csr  ca-csr.json  ca-key.pem  ca.pem
[root@hdss12-200 certs]# cat ca.pem 
-----BEGIN CERTIFICATE-----
MIIDsjCCApqgAwIBAgIUTnUx2aiYEHRNOjVG9byKAGiRSPkwDQYJKoZIhvcNAQEL
BQAwXzELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB2JlaWppbmcxEDAOBgNVBAcTB2Jl
aWppbmcxCzAJBgNVBAoTAm9kMQwwCgYDVQQLEwNvcHMxETAPBgNVBAMTCEJpZ1Nt
YXJ0MB4XDTIxMDYxOTA2MTUwMFoXDTQxMDYxNDA2MTUwMFowXzELMAkGA1UEBhMC
Q04xEDAOBgNVBAgTB2JlaWppbmcxEDAOBgNVBAcTB2JlaWppbmcxCzAJBgNVBAoT
Am9kMQwwCgYDVQQLEwNvcHMxETAPBgNVBAMTCEJpZ1NtYXJ0MIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0GISRpmhTklTPbtXrcaeEIUChgBG2VrivJCk
uL/5NiEV1L20rC0gIvxGKnv0nch0ksZTl+L7MIDvYDudHSXlzPk1nNEcgXGFwLha
1kOnfB9Q0x7TvYtq+l62mCFUv7dBQw4wLcMrLaBxm/xynRd48JYbtfl/4G8UceoP
f36zDcBcVQvihjPsha5eLLhzy974+KDJWI7yOcU9l9Xrc+TLukRLskBTdz5yLs/J
0ALEAoIwv++M6vBTjfKWsQyA+CBis4gyBiIVXF655YMvDz2IS/KKlndEYf5jI9lw
kmLHU4Z9zmfDkd7+fuHkvIW9tAtw277+0WnBD8rfzQjiX723NQIDAQABo2YwZDAO
BgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBAjAdBgNVHQ4EFgQUXM2y
217eb7a/r9WM3Bs2fSCJT/EwHwYDVR0jBBgwFoAUXM2y217eb7a/r9WM3Bs2fSCJ
T/EwDQYJKoZIhvcNAQELBQADggEBAC40LAbpl/Oke8N/Bj5ILNwXPvlrWdvoIRQi
wCzYH2dXqguNePgRwiM1ElsXkLuvTbe05Mj3RdyEUe2hT5d+NsLe4QCZf6eY8fz5
7sGvqZ839OFpEWOEIxSZYKoVAr4zeilQT6dEktml3iCCLOneZn1sJ6uoI674hxiu
GsSOsQwxSh3b9nmBCanZEoFv1srngLbBuiUy0XEWAznVNyiLyhwdze82+QbRIS24
BLq/MqvzIglOd+IxGnAfQiikI51tGGxCmmHht62NIC6xGAX5QvFBc+iAI2puAPRd
EAIQOOLYUvDIsgtQ0i1DxZiapR4nTpKrU3B7+tWdVcXf1ZOckdQ=
-----END CERTIFICATE-----
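Optionally, the freshly signed CA can be inspected with the cfssl-certinfo tool downloaded earlier; it prints the subject, issuer, validity window, and serial number as JSON (a sketch):

cfssl-certinfo -cert ca.pem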

Deploying the Private Image Registry: Harbor

On HDSS12-200, HDSS12-21, and HDSS12-22:

Install Docker

yum erase podman buildah -y
curl -fsSL https://get.docker.com |bash -s docker --mirror Aliyun

/etc/docker/daemon.json (bip takes a different subnet per host, e.g. 172.17.21.1/24 on 12-21 and 172.17.22.1/24 on 12-22; harbor.od.com is added to insecure-registries since the registry will be served over plain HTTP):
{
	"graph": "/data/docker",
	"storage-driver": "overlay2",
	"insecure-registries": ["registry.access.redhat.com", "quay.io", "harbor.od.com"],
	"registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com/"],
	"bip": "172.17.22.1/24",
	"exec-opts": ["native.cgroupdriver=systemd"],
	"live-restore": true
}
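The transcript leaves a few steps implicit: this JSON must live at /etc/docker/daemon.json, the graph directory must exist, and docker must be restarted to pick the file up. A sketch of those steps, to be run on each of the three hosts:

mkdir -p /etc/docker /data/docker
vi /etc/docker/daemon.json          # paste the JSON above, adjusting bip per host
systemctl enable docker
systemctl restart docker
docker info | grep -i 'root dir'    # should report /data/docker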

Installing Harbor on 12-200

[root@hdss12-200 ~]# cd /opt
[root@hdss12-200 opt]# mkdir src
[root@hdss12-200 opt]# cd src
[root@hdss12-200 src]# wget https://github.com/goharbor/harbor/releases/download/v1.9.4/harbor-offline-installer-v1.9.4.tgz
[root@hdss12-200 src]# ls
harbor-offline-installer-v1.9.4.tgz
[root@hdss12-200 src]# tar xf harbor-offline-installer-v1.9.4.tgz -C /opt/
[root@hdss12-200 src]# cd /opt/
[root@hdss12-200 opt]# ll
total 0
drwxr-xr-x. 2 root root  71 Jun 19 02:20 certs
drwx--x--x. 4 root root  28 Jun 19 02:50 containerd
drwxr-xr-x  2 root root 100 Jun 19 04:26 harbor
drwxr-xr-x  2 root root  49 Jun 19 04:07 src
[root@hdss12-200 opt]# mv harbor /opt/harbor-v1.9.4
# symlink
[root@hdss12-200 opt]# ln -s /opt/harbor-v1.9.4 /opt/harbor
[root@hdss12-200 opt]# ll
total 0
drwxr-xr-x. 2 root root  71 Jun 19 02:20 certs
drwx--x--x. 4 root root  28 Jun 19 02:50 containerd
lrwxrwxrwx  1 root root  18 Jun 19 04:27 harbor -> /opt/harbor-v1.9.4
drwxr-xr-x  2 root root 100 Jun 19 04:26 harbor-v1.9.4
drwxr-xr-x  2 root root  49 Jun 19 04:07 src
[root@hdss12-200 opt]# vim /opt/harbor/harbor.yml
hostname: harbor.od.com
http:
  port: 180
data_volume: /data/harbor
log:
  local:
    location: /data/harbor/logs
[root@hdss12-200 opt]# curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
[root@hdss12-200 opt]# chmod +x /usr/local/bin/docker-compose
[root@hdss12-200 opt]# docker-compose --version
[root@hdss12-200 ~]# cd /opt/harbor/
[root@hdss12-200 ~]# systemctl restart  docker
[root@hdss12-200 harbor]# ./install.sh
[root@hdss12-200 harbor]# docker ps -a
[root@hdss12-200 harbor]# yum install nginx -y
[root@hdss12-200 harbor]# vi /etc/nginx/conf.d/harbor.od.com.conf
server {
    listen       80;
    server_name  harbor.od.com;
    
    client_max_body_size 1000m;

    location / {
        proxy_pass http://127.0.0.1:180;
    }
}
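# Note: the transcript above never starts nginx; presumably something like
# the following is still needed before harbor.od.com can be served through it:
nginx -t && systemctl start nginx && systemctl enable nginx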
[root@hdss12-200 harbor]# vim /etc/rc.d/rc.local  # append the following
# start harbor
cd /opt/harbor
/usr/local/bin/docker-compose stop
/usr/local/bin/docker-compose start
# (rc.local only runs at boot if it is executable: chmod +x /etc/rc.d/rc.local)

Configuring the DNS record on 12-11

[root@hdss12-11 ~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600         ; 10 minutes
@       IN      SOA     dns.host.com. dnsadmin.host.com. (
                      2021061805        ; serial
                          10800         ; refresh (3 hours)
                            900         ; retry (15 minutes)
                         604800         ; expire (1 week)
                          86400         ; minimum (1 day)
                          )
                    NS dns.od.com.
$TTL 60          ; 1 minute
dns              A    192.168.12.11
harbor           A    192.168.12.200
[root@hdss12-11 ~]# systemctl restart named
[root@hdss12-11 ~]# host harbor.od.com
harbor.od.com has address 192.168.12.200

Access it from a browser

harbor.od.com


The settings are in: cat /opt/harbor/harbor.yml

Username: admin

Password: whatever you set as harbor_admin_password: xxxxxx

Create a new public project


Pushing an nginx Image to Harbor

On hdss12-200:

 ~]# docker pull nginx:1.7.9
 ~]# docker tag nginx:1.7.9  harbor.od.com/public/nginx:v1.7.9
 ~]# docker login harbor.od.com
 ~]# docker push harbor.od.com/public/nginx:v1.7.9

Check your Harbor registry now; the image has been pushed.


Deploying the K8s Master Services


Deploying the etcd Cluster

etcd's leader-election mechanism requires an odd number of members, at least three. This installation uses hdss12-12, hdss12-21, and hdss12-22.

Signing the etcd certificates

On the certificate-signing server hdss12-200:

  • Create the CA signing config: /opt/certs/ca-config.json

  • server: the certificate the server presents when clients connect, used by clients to verify the server's identity

  • client: the certificate a client presents when connecting to a server, used by the server to verify the client's identity

  • peer: the certificate used for mutual connections, e.g. etcd nodes verifying each other

{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
  • Create the etcd certificate request config: /opt/certs/etcd-peer-csr.json

The key part is hosts: list every server that could ever run etcd. Network ranges cannot be used here, so adding a new etcd server later means re-issuing the certificate.

{
    "CN": "k8s-etcd",
    "hosts": [
        "192.168.12.11",
        "192.168.12.12",
        "192.168.12.21",
        "192.168.12.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
  • Sign the certificate
[root@hdss12-200 harbor]# cd /opt/certs/
[root@hdss12-200 certs]# vi ca-config.json
[root@hdss12-200 certs]# vi etcd-peer-csr.json
[root@hdss12-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json |cfssl-json -bare etcd-peer
2021/06/21 01:59:38 [INFO] generate received request
2021/06/21 01:59:38 [INFO] received CSR
2021/06/21 01:59:38 [INFO] generating key: rsa-2048
2021/06/21 01:59:38 [INFO] encoded CSR
2021/06/21 01:59:38 [INFO] signed certificate with serial number 3343196185397592834557614861518706185986191828
2021/06/21 01:59:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@hdss12-200 certs]# ll
total 36
-rw-r--r--  1 root root  841 Jun 21 01:56 ca-config.json
-rw-r--r--. 1 root root  993 Jun 19 02:20 ca.csr
-rw-r--r--. 1 root root  252 Jun 19 02:20 ca-csr.json
-rw-------. 1 root root 1675 Jun 19 02:20 ca-key.pem
-rw-r--r--. 1 root root 1342 Jun 19 02:20 ca.pem
-rw-r--r--  1 root root 1062 Jun 21 01:59 etcd-peer.csr
-rw-r--r--  1 root root  379 Jun 21 01:58 etcd-peer-csr.json
-rw-------  1 root root 1679 Jun 21 01:59 etcd-peer-key.pem
-rw-r--r--  1 root root 1428 Jun 21 01:59 etcd-peer.pem
Installing etcd

On 12-12, 12-21, and 12-22:

etcd releases: https://github.com/etcd-io/etcd/

Version used in this walkthrough: etcd-v3.1.20-linux-amd64.tar.gz


  • Download etcd
[root@hdss12-12 ~]# useradd -s /sbin/nologin -M etcd
[root@hdss12-12 ~]# mkdir -p /opt/src
[root@hdss12-12 ~]# cd /opt/src/
[root@hdss12-12 src]# wget https://github.com/etcd-io/etcd/releases/download/v3.1.20/etcd-v3.1.20-linux-amd64.tar.gz
[root@hdss12-12 src]# tar -xf etcd-v3.1.20-linux-amd64.tar.gz 
[root@hdss12-12 src]# mv etcd-v3.1.20-linux-amd64 /opt/etcd-v3.1.20
[root@hdss12-12 src]# ln -s /opt/etcd-v3.1.20 /opt/etcd
[root@hdss12-12 src]# ll /opt/etcd
lrwxrwxrwx 1 root root 17 Jun 21 02:06 /opt/etcd -> /opt/etcd-v3.1.20
[root@hdss12-12 src]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
  • Distribute the certificates to each etcd node
[root@hdss12-12 ~]# cd /opt/etcd/certs/
[root@hdss12-12 certs]# scp hdss12-200:/opt/certs/ca.pem .
root@hdss12-200's password: 
ca.pem                                                                       100% 1342    80.6KB/s   00:00    
[root@hdss12-12 certs]# scp hdss12-200:/opt/certs/etcd-peer.pem .
root@hdss12-200's password: 
etcd-peer.pem                                                                100% 1428    88.1KB/s   00:00    
[root@hdss12-12 certs]# scp hdss12-200:/opt/certs/etcd-peer-key.pem .
root@hdss12-200's password: 
etcd-peer-key.pem                                                            100% 1679   149.3KB/s   00:00    
[root@hdss12-12 certs]# ls
ca.pem  etcd-peer-key.pem  etcd-peer.pem
  • Create the etcd startup script
[root@hdss12-12 certs]# vim /opt/etcd/etcd-server-startup.sh
#!/bin/sh
# NB: --name and the listen/advertise IPs below differ per host (12-21, 12-22)
./etcd --name etcd-server-12-12 \
    --data-dir /data/etcd/etcd-server \
    --listen-peer-urls https://192.168.12.12:2380 \
    --listen-client-urls https://192.168.12.12:2379,http://127.0.0.1:2379 \
    --quota-backend-bytes 8000000000 \
    --initial-advertise-peer-urls https://192.168.12.12:2380 \
    --advertise-client-urls https://192.168.12.12:2379,http://127.0.0.1:2379 \
    --initial-cluster  etcd-server-12-12=https://192.168.12.12:2380,etcd-server-12-21=https://192.168.12.21:2380,etcd-server-12-22=https://192.168.12.22:2380 \
    --ca-file ./certs/ca.pem \
    --cert-file ./certs/etcd-peer.pem \
    --key-file ./certs/etcd-peer-key.pem \
    --client-cert-auth  \
    --trusted-ca-file ./certs/ca.pem \
    --peer-ca-file ./certs/ca.pem \
    --peer-cert-file ./certs/etcd-peer.pem \
    --peer-key-file ./certs/etcd-peer-key.pem \
    --peer-client-cert-auth \
    --peer-trusted-ca-file ./certs/ca.pem \
    --log-output stdout
[root@hdss12-12 certs]# chmod u+x /opt/etcd/etcd-server-startup.sh
[root@hdss12-12 certs]# chown -R etcd.etcd /opt/etcd/ /data/etcd /data/logs/etcd-server
Starting etcd

On 12-12, 12-21, and 12-22:

# Install supervisor
[root@hdss12-12 certs]# yum install -y python36
[root@hdss12-12 certs]# pip3 install supervisor
[root@hdss12-12 certs]# echo_supervisord_conf>/etc/supervisord.conf
[root@hdss12-12 certs]# vi /etc/supervisord.conf
[program:etcd-server-12-12]
command=/opt/etcd/etcd-server-startup.sh              ; the program (relative uses PATH, can take args)
numprocs=1                                            ; number of processes copies to start (def 1)
directory=/opt/etcd                                   ; directory to cwd to before exec (def no cwd)
autostart=true                                        ; start at supervisord start (default: true)
autorestart=true                                      ; restart at unexpected quit (default: true)
startsecs=30                                          ; number of secs prog must stay running (def. 1)
startretries=3                                        ; max # of serial start failures (default 3)
exitcodes=0,2                                         ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                       ; signal used to kill process (default TERM)
stopwaitsecs=10                                       ; max num secs to wait b4 SIGKILL (default 10)
user=etcd                                             ; setuid to this UNIX account to run the program
redirect_stderr=true                                  ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                          ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=5                              ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                           ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false 
[root@hdss12-22 certs]# supervisord -c /etc/supervisord.conf
sudo unlink /tmp/supervisor.sock  # (run this later if supervisord needs to be stopped)
[root@hdss12-22 certs]# supervisorctl update
[root@hdss12-22 certs]# supervisorctl status
[root@hdss12-21 ~]# cd /opt/etcd
[root@hdss12-21 etcd]# ll
total 30072
drwxr-xr-x  2 etcd etcd       66 Jun 22 01:32 certs
drwxr-xr-x 11 etcd etcd     4096 Oct 10  2018 Documentation
-rwxr-xr-x  1 etcd etcd 16406432 Oct 10  2018 etcd
-rwxr-xr-x  1 etcd etcd 14327712 Oct 10  2018 etcdctl
-rwxr--r--  1 etcd etcd     1019 Jun 22 01:28 etcd-server-startup.sh
-rw-r--r--  1 etcd etcd    32632 Oct 10  2018 README-etcdctl.md
-rw-r--r--  1 etcd etcd     5878 Oct 10  2018 README.md
-rw-r--r--  1 etcd etcd     7892 Oct 10  2018 READMEv2-etcdctl.md
All three etcd members are healthy:
[root@hdss12-21 etcd]# ./etcdctl cluster-health
member 46fe8b9a7c2785d is healthy: got healthy result from http://127.0.0.1:2379
member 9884a47bdcc52154 is healthy: got healthy result from http://127.0.0.1:2379
member a011f4e7d4e3b3d8 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
[root@hdss12-21 etcd]# ./etcdctl member list
46fe8b9a7c2785d: name=etcd-server-12-22 peerURLs=https://192.168.12.22:2380 clientURLs=http://127.0.0.1:2379,https://192.168.12.22:2379 isLeader=false
9884a47bdcc52154: name=etcd-server-12-12 peerURLs=https://192.168.12.12:2380 clientURLs=http://127.0.0.1:2379,https://192.168.12.12:2379 isLeader=true
a011f4e7d4e3b3d8: name=etcd-server-12-21 peerURLs=https://192.168.12.21:2380 clientURLs=http://127.0.0.1:2379,https://192.168.12.21:2379 isLeader=false

Installing the Master Component: kube-apiserver

Download the Kubernetes server binaries

The apiserver runs on: hdss12-21, hdss12-22

kubernetes-server-linux-amd64.tar.gz (423.2 MB)

[root@hdss12-21 supervisor]# cd /opt/src
[root@hdss12-21 src]# wget https://dl.k8s.io/v1.15.2/kubernetes-server-linux-amd64.tar.gz
[root@hdss12-21 src]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@hdss12-21 src]# ls
etcd-v3.1.20-linux-amd64.tar.gz  kubernetes  kubernetes-server-linux-amd64.tar.gz
[root@hdss12-21 src]# mv kubernetes /opt/kubernetes-v1.15.2
[root@hdss12-21 src]# ln -s /opt/kubernetes-v1.15.2 /opt/kubernetes
[root@hdss12-21 src]# cd /opt/kubernetes
[root@hdss12-21 kubernetes]# rm -f kubernetes-src.tar.gz
[root@hdss12-21 kubernetes]# cd server/bin/
[root@hdss12-21 bin]# rm -f *.tar *_tag
[root@hdss12-21 bin]# ll
total 884636
-rwxr-xr-x 1 root root  43534816 Aug  5  2019 apiextensions-apiserver
-rwxr-xr-x 1 root root 100548640 Aug  5  2019 cloud-controller-manager
-rwxr-xr-x 1 root root 200648416 Aug  5  2019 hyperkube
-rwxr-xr-x 1 root root  40182208 Aug  5  2019 kubeadm
-rwxr-xr-x 1 root root 164501920 Aug  5  2019 kube-apiserver
-rwxr-xr-x 1 root root 116397088 Aug  5  2019 kube-controller-manager
-rwxr-xr-x 1 root root  42985504 Aug  5  2019 kubectl
-rwxr-xr-x 1 root root 119616640 Aug  5  2019 kubelet
-rwxr-xr-x 1 root root  36987488 Aug  5  2019 kube-proxy
-rwxr-xr-x 1 root root  38786144 Aug  5  2019 kube-scheduler
-rwxr-xr-x 1 root root   1648224 Aug  5  2019 mounter
Sign the certificates
  • Sign the client certificate (for apiserver → etcd communication), on hdss12-200
[root@hdss12-200 ~]# cd /opt/certs/
[root@hdss12-200 certs]# vim /opt/certs/client-csr.json
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@hdss12-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client
[root@hdss12-200 certs]# ls client* -l
-rw-r--r-- 1 root root  993 Jun 22 02:53 client.csr
-rw-r--r-- 1 root root  223 Jun 22 02:53 client-csr.json
-rw------- 1 root root 1679 Jun 22 02:53 client-key.pem
-rw-r--r-- 1 root root 1363 Jun 22 02:53 client.pem
  • Sign the server certificate (presented by the apiserver to the other K8s components that talk to it)
[root@hdss12-200 certs]# vim /opt/certs/apiserver-csr.json
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "192.168.12.10",
        "192.168.12.21",
        "192.168.12.22",
        "192.168.12.23"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@hdss12-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
[root@hdss12-200 certs]# ls apiserver* -l
-rw-r--r-- 1 root root 1249 Jun 22 03:00 apiserver.csr
-rw-r--r-- 1 root root  432 Jun 22 02:59 apiserver-csr.json
-rw------- 1 root root 1679 Jun 22 03:00 apiserver-key.pem
-rw-r--r-- 1 root root 1598 Jun 22 03:00 apiserver.pem
  • Distribute the certificates
[root@hdss12-200 certs]# for i in 21 22;do echo hdss12-$i;ssh hdss12-$i "mkdir /opt/kubernetes/server/bin/certs";scp apiserver-key.pem apiserver.pem ca-key.pem ca.pem client-key.pem client.pem hdss12-$i:/opt/kubernetes/server/bin/certs/;done
Configuring apiserver audit logging

The apiserver runs on: hdss12-21, hdss12-22

[root@hdss12-22 bin]# mkdir /opt/kubernetes/conf
[root@hdss12-22 bin]# vim /opt/kubernetes/conf/audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
Configure the startup script

The apiserver runs on: hdss12-21, hdss12-22

  • Create the startup script
[root@hdss12-22 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver/
[root@hdss12-22 bin]# vim /opt/kubernetes/server/bin/kube-apiserver-startup.sh
#!/bin/bash
./kube-apiserver \
    --apiserver-count 2 \
    --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
    --audit-policy-file ../../conf/audit.yaml \
    --authorization-mode RBAC \
    --client-ca-file ./certs/ca.pem \
    --requestheader-client-ca-file ./certs/ca.pem \
    --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
    --etcd-cafile ./certs/ca.pem \
    --etcd-certfile ./certs/client.pem \
    --etcd-keyfile ./certs/client-key.pem \
    --etcd-servers https://192.168.12.12:2379,https://192.168.12.21:2379,https://192.168.12.22:2379 \
    --service-account-key-file ./certs/ca-key.pem \
    --service-cluster-ip-range 192.168.0.0/16 \
    --service-node-port-range 3000-29999 \
    --target-ram-mb=1024 \
    --kubelet-client-certificate ./certs/client.pem \
    --kubelet-client-key ./certs/client-key.pem \
    --log-dir  /data/logs/kubernetes/kube-apiserver \
    --tls-cert-file ./certs/apiserver.pem \
    --tls-private-key-file ./certs/apiserver-key.pem \
    --v 2
[root@hdss12-22 bin]# chmod +x /opt/kubernetes/server/bin/kube-apiserver-startup.sh
  • Configure the supervisor program entry
[program:kube-apiserver-12-21]
command=/opt/kubernetes/server/bin/kube-apiserver-startup.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=5
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
[root@hdss12-21 bin]# supervisorctl update
[root@hdss12-22 certs]# supervisorctl status
etcd-server-12-22                RUNNING   pid 3348, uptime 8:17:52
kube-apiserver-12-22             RUNNING   pid 3852, uptime 0:00:46

Configuring the apiserver L4 Proxy

On 12-11 and 12-12:

nginx configuration
[root@hdss12-12 etcd]# yum install nginx -y
[root@hdss12-11 ~]# vim /etc/nginx/nginx.conf 
# append the following
stream {
    upstream kube-apiserver {
        server 192.168.12.21:6443     max_fails=3 fail_timeout=30s;
        server 192.168.12.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
[root@hdss12-12 etcd]# systemctl start nginx; systemctl enable nginx
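A quick check that the L4 proxy is actually forwarding (a sketch): the listener should be up on 7443, and a TLS request through it should reach an apiserver (a JSON-formatted HTTP error from the apiserver is fine here; a connection failure is not):

ss -lntp | grep 7443
curl -k https://127.0.0.1:7443/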
keepalived configuration
[root@hdss12-11 ~]# yum install keepalived -y
[root@hdss12-11 ~]# vi /etc/keepalived/check_port.sh
#!/bin/bash
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
		PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
		if [ $PORT_PROCESS -eq 0 ];then
				echo "PORT $CHK_PORT Is Not Used,End."
				exit 1
		fi
else
		echo "Check Port Cant Be Empty!"
fi
[root@hdss12-11 ~]# chmod +x /etc/keepalived/check_port.sh
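A quick sanity check of the script once nginx is listening on 7443 (a sketch; exit status 0 means the port is in use):

/etc/keepalived/check_port.sh 7443; echo $?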
# master
[root@hdss12-11 ~]# vi /etc/keepalived/keepalived.conf
# vim tip: gg to jump to the top, :.,$d to delete everything, :set paste before pasting, then :wq to save and quit
! Configuration File for keepalived

global_defs {
	router_id 192.168.12.11
}

vrrp_script chk_nginx {
	script "/etc/keepalived/check_port.sh 7443"
	interval 2
	weight -20
}

vrrp_instance VI_1 {
	state MASTER
	interface eth0
	virtual_router_id 251
	priority 100
	advert_int 1
	mcast_src_ip 192.168.12.11
	nopreempt
	
	authentication {
		auth_type PASS
		auth_pass 11111111
	}
	track_script {
		chk_nginx
	}
	virtual_ipaddress {
		192.168.12.10
	}
}
# backup
[root@hdss12-12 ~]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
	router_id 192.168.12.12
}

vrrp_script chk_nginx {
	script "/etc/keepalived/check_port.sh 7443"
	interval 2
	weight -20
}

vrrp_instance VI_1 {
	state BACKUP
	interface eth0
	virtual_router_id 251
	priority 90
	advert_int 1
	mcast_src_ip 192.168.12.12
	
	authentication {
		auth_type PASS
		auth_pass 11111111
	}
	track_script {
		chk_nginx
	}
	virtual_ipaddress {
		192.168.12.10
	}
}
[root@hdss12-11 ~]# systemctl start keepalived
[root@hdss12-11 ~]# systemctl enable keepalived
# test on the master
[root@hdss12-11 ~]# ip add
inet 192.168.12.10/32 scope global eth0
       valid_lft forever preferred_lft forever

If nginx on the master fails, keepalived floats the VIP over to the backup.
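A simple way to exercise the failover (a sketch): stop nginx on the master and watch the VIP move to the backup.

# on 12-11 (master)
systemctl stop nginx
# on 12-12 (backup): the VIP should now be attached here
ip addr show eth0 | grep 192.168.12.10
# note: with nopreempt configured, the VIP does not move back automatically
# after nginx on 12-11 is restored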

Installing controller-manager

controller-manager runs on: hdss12-21, hdss12-22

controller-manager is set to talk only to the apiserver on the same machine, via 127.0.0.1, so no SSL certificate is configured for it.

Configure the startup script
  • Create the startup script
[root@hdss12-21 bin]# vim /opt/kubernetes/server/bin/kube-controller-manager-startup.sh
#!/bin/sh
./kube-controller-manager \
    --cluster-cidr 172.7.0.0/16 \
    --leader-elect true \
    --log-dir /data/logs/kubernetes/kube-controller-manager \
    --master http://127.0.0.1:8080 \
    --service-account-private-key-file ./certs/ca-key.pem \
    --service-cluster-ip-range 192.168.0.0/16 \
    --root-ca-file ./certs/ca.pem \
    --v 2
[root@hdss12-21 bin]# mkdir -p  /data/logs/kubernetes/kube-controller-manager
[root@hdss12-21 bin]# chmod u+x /opt/kubernetes/server/bin/kube-controller-manager-startup.sh
  • Configure the supervisor program entry
vi /etc/supervisord.conf
[program:kube-controller-manager-12-21]
command=/opt/kubernetes/server/bin/kube-controller-manager-startup.sh             ; the program (relative uses PATH, can take args)
numprocs=1                                                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                              ; directory to cwd to before exec (def no cwd)
autostart=true                                                                    ; start at supervisord start (default: true)
autorestart=true                                                                  ; restart at unexpected quit (default: true)
startsecs=30                                                                      ; number of secs prog must stay running (def. 1)
startretries=3                                                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                                                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log  ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false   

[root@hdss12-21 bin]# supervisorctl update
kube-controller-manager-12-21: added process group
[root@hdss12-21 bin]# supervisorctl status
etcd-server-12-21                RUNNING   pid 1804, uptime 3:41:00
kube-apiserver-12-21             RUNNING   pid 1805, uptime 3:41:00
kube-controller-manager-12-21    RUNNING   pid 2051, uptime 0:00:32

Installing kube-scheduler

kube-scheduler runs on: hdss12-21, hdss12-22

kube-scheduler is set to talk only to the apiserver on the same machine, via 127.0.0.1, so no SSL certificate is configured for it.

  • Create the startup script
[root@hdss12-21 bin]# vim /opt/kubernetes/server/bin/kube-scheduler-startup.sh
#!/bin/sh
./kube-scheduler \
    --leader-elect  \
    --log-dir /data/logs/kubernetes/kube-scheduler \
    --master http://127.0.0.1:8080 \
    --v 2
[root@hdss12-21 bin]# chmod u+x /opt/kubernetes/server/bin/kube-scheduler-startup.sh
[root@hdss12-21 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler
  • Configure the supervisor program entry
[root@hdss12-21 bin]# vi /etc/supervisord.conf 
[program:kube-scheduler-12-21]
command=/opt/kubernetes/server/bin/kube-scheduler-startup.sh                     
numprocs=1                                                               
directory=/opt/kubernetes/server/bin                                     
autostart=true                                                           
autorestart=true                                                         
startsecs=30                                                             
startretries=3                                                           
exitcodes=0,2                                                            
stopsignal=QUIT                                                          
stopwaitsecs=10                                                          
user=root                                                                
redirect_stderr=true                                                     
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log 
stdout_logfile_maxbytes=64MB                                             
stdout_logfile_backups=4                                                 
stdout_capture_maxbytes=1MB                                              
stdout_events_enabled=false 
[root@hdss12-21 bin]# supervisorctl update
kube-scheduler-12-21: added process group
[root@hdss12-21 bin]# supervisorctl status

Checking the master node status

[root@hdss12-21 bin]# ln -s /opt/kubernetes/server/bin/kubectl /usr/local/bin/
[root@hdss12-21 etcd]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
scheduler            Healthy   ok                   
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}

Deploying the Worker Nodes

Deploying kubelet

Sign the certificate

Certificates are signed on hdss12-200

[root@hdss12-200 ~]# cd /opt/certs/
[root@hdss12-200 certs]# vim kubelet-csr.json
{
    "CN": "k8s-kubelet",
    "hosts": [
    "127.0.0.1",
    "192.168.12.10",
    "192.168.12.21",
    "192.168.12.22",
    "192.168.12.23",
    "192.168.12.24",
    "192.168.12.25",
    "192.168.12.26",
    "192.168.12.27",
    "192.168.12.28"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@hdss12-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
2021/06/23 02:54:06 [INFO] generate received request
2021/06/23 02:54:06 [INFO] received CSR
2021/06/23 02:54:06 [INFO] generating key: rsa-2048
2021/06/23 02:54:07 [INFO] encoded CSR
2021/06/23 02:54:07 [INFO] signed certificate with serial number 561527506472962048178887337594817380054073789266
2021/06/23 02:54:07 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@hdss12-200 certs]# scp kubelet.pem kubelet-key.pem hdss12-21:/opt/kubernetes/server/bin/certs/
root@hdss12-21's password: 
kubelet.pem                                                                  100% 1464   219.2KB/s   00:00    
kubelet-key.pem                                                              100% 1675   622.3KB/s   00:00    
[root@hdss12-200 certs]# scp kubelet.pem kubelet-key.pem hdss12-22:/opt/kubernetes/server/bin/certs/
root@hdss12-22's password: 
kubelet.pem                                                                  100% 1464   104.7KB/s   00:00    
kubelet-key.pem 
Create the kubelet config

The kubelet config is created on hdss12-21 and hdss12-22

  • set-cluster # defines the cluster to connect to; multiple K8s clusters can be registered
[root@hdss12-21 conf]# pwd
/opt/kubernetes/conf
[root@hdss12-21 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
--embed-certs=true \
--server=https://192.168.12.10:7443 \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
  • set-credentials # creates the user account, i.e. the client key and certificate used to log in; multiple credentials can be created
kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/certs/client.pem \
--client-key=/opt/kubernetes/server/bin/certs/client-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
  • set-context # creates a context, i.e. binds an account to a cluster
kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
  • use-context # selects which context is currently in use
kubectl config use-context myk8s-context --kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
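To confirm the four steps produced a sane kubeconfig, it can be inspected like this (a sketch; kubectl redacts the embedded certificates in the output):

kubectl config view --kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig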

Copy this config file to the other node, and the four steps above don't need to be repeated there:

scp /opt/kubernetes/conf/kubelet.kubeconfig hdss12-22:/opt/kubernetes/conf/

Authorizing the k8s-node user

This step only needs to run on one master node, 12-21.

Bind the k8s-node user to the cluster role system:node, giving k8s-node the permissions of a worker node.

[root@hdss12-21 conf]# pwd
/opt/kubernetes/conf
[root@hdss12-21 conf]# vim k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
[root@hdss12-21 conf]# kubectl create -f k8s-node.yaml 
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
[root@hdss12-21 conf]# kubectl get clusterrolebinding k8s-node
NAME       AGE
k8s-node   24s
Preparing the pause image

Push the pause image into the private Harbor registry; run this on hdss12-200 only:

[root@hdss12-200 ~]# docker image pull kubernetes/pause
Using default tag: latest
latest: Pulling from kubernetes/pause
4f4fb700ef54: Pull complete 
b9c8ec465f6b: Pull complete 
Digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105
Status: Downloaded newer image for kubernetes/pause:latest
docker.io/kubernetes/pause:latest
[root@hdss12-200 ~]# docker image tag kubernetes/pause:latest harbor.od.com/public/pause:latest
[root@hdss12-200 ~]# docker login -u admin harbor.od.com
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@hdss12-200 ~]# docker image push harbor.od.com/public/pause:latest
The push refers to repository [harbor.od.com/public/pause]
5f70bf18a086: Mounted from public/nginx 
e16a89738269: Pushed 
latest: digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 size: 938
Create the startup script

Create the script and start kubelet on the node servers: hdss12-21 and hdss12-22

[root@hdss12-22 conf]# vim /opt/kubernetes/server/bin/kubelet-startup.sh
#!/bin/sh
# NB: --hostname-override below must match the host the script runs on (hdss12-21 / hdss12-22)
./kubelet \
    --anonymous-auth=false \
    --cgroup-driver systemd \
    --cluster-dns 192.168.0.2 \
    --cluster-domain cluster.local \
    --runtime-cgroups=/systemd/system.slice \
    --kubelet-cgroups=/systemd/system.slice \
    --fail-swap-on="false" \
    --client-ca-file ./certs/ca.pem \
    --tls-cert-file ./certs/kubelet.pem \
    --tls-private-key-file ./certs/kubelet-key.pem \
    --hostname-override hdss12-21.host.com \
    --image-gc-high-threshold 20 \
    --image-gc-low-threshold 10 \
    --kubeconfig ../../conf/kubelet.kubeconfig \
    --log-dir /data/logs/kubernetes/kube-kubelet \
    --pod-infra-container-image harbor.od.com/public/pause:latest \
    --root-dir /data/kubelet
[root@hdss12-22 conf]# chmod u+x /opt/kubernetes/server/bin/kubelet-startup.sh
[root@hdss12-22 conf]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
[root@hdss12-21 conf]# vi /etc/supervisord.conf
[program:kube-kubelet-12-21]
command=/opt/kubernetes/server/bin/kubelet-startup.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=5
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
[root@hdss12-21 conf]# supervisorctl update
kube-kubelet-12-21: added process group
[root@hdss12-21 conf]# supervisorctl status
etcd-server-12-21                RUNNING   pid 1804, uptime 5:20:45
kube-apiserver-12-21             RUNNING   pid 2169, uptime 1:16:33
kube-controller-manager-12-21    RUNNING   pid 2528, uptime 0:01:16
kube-kubelet-12-21               RUNNING   pid 2366, uptime 0:02:53
kube-scheduler-12-21
Setting the node roles

Nodes listed by kubectl get nodes have an empty ROLES column; it can be set as follows.

[root@hdss12-21 conf]# kubectl get node
NAME                 STATUS   ROLES    AGE    VERSION
hdss12-21.host.com   Ready    <none>   2m6s   v1.15.2
hdss12-22.host.com   Ready    <none>   2m7s   v1.15.2
[root@hdss12-21 conf]# kubectl label node hdss12-21.host.com node-role.kubernetes.io/node=
node/hdss12-21.host.com labeled
[root@hdss12-21 conf]# kubectl label node hdss12-21.host.com node-role.kubernetes.io/master=
[root@hdss12-21 conf]# kubectl label node hdss12-22.host.com node-role.kubernetes.io/node=
[root@hdss12-21 conf]# kubectl label node hdss12-22.host.com node-role.kubernetes.io/master=
[root@hdss12-21 conf]# kubectl get node
NAME                 STATUS   ROLES         AGE     VERSION
hdss12-21.host.com   Ready    master,node   4m53s   v1.15.2
hdss12-22.host.com   Ready    master,node   4m54s   v1.15.2

Deploying kube-proxy

Sign the certificate

Certificates are signed on hdss12-200

[root@hdss12-200 ~]# cd /opt/certs/
[root@hdss12-200 certs]# vim kube-proxy-csr.json 
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@hdss12-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
2021/06/23 04:10:30 [INFO] generate received request
2021/06/23 04:10:30 [INFO] received CSR
2021/06/23 04:10:30 [INFO] generating key: rsa-2048
2021/06/23 04:10:30 [INFO] encoded CSR
2021/06/23 04:10:30 [INFO] signed certificate with serial number 567643149660200017349717418358993317904935342606
2021/06/23 04:10:30 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@hdss12-200 certs]# ls kube-proxy-c* -l
-rw-r--r-- 1 root root 1005 Jun 23 04:10 kube-proxy-client.csr
-rw------- 1 root root 1679 Jun 23 04:10 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Jun 23 04:10 kube-proxy-client.pem
-rw-r--r-- 1 root root  267 Jun 23 04:10 kube-proxy-csr.json
[root@hdss12-200 certs]# scp kube-proxy-client-key.pem kube-proxy-client.pem hdss12-21:/opt/kubernetes/server/bin/certs/
root@hdss12-21's password: 
kube-proxy-client-key.pem                                                    100% 1679   425.6KB/s   00:00    
kube-proxy-client.pem                                                        100% 1375   332.2KB/s   00:00    
[root@hdss12-200 certs]# scp kube-proxy-client-key.pem kube-proxy-client.pem hdss12-22:/opt/kubernetes/server/bin/certs/
root@hdss12-22's password: 
kube-proxy-client-key.pem                                                    100% 1679   453.9KB/s   00:00    
kube-proxy-client.pem 
Create the kube-proxy config

Create it on all node servers: hdss12-21 and hdss12-22

The same four steps as for kubelet:

[root@hdss12-21 conf]# pwd
/opt/kubernetes/conf
[root@hdss12-21 conf]# 
kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
--embed-certs=true \
--server=https://192.168.12.10:7443 \
--kubeconfig=/opt/kubernetes/conf/kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/server/bin/certs/kube-proxy-client.pem \
--client-key=/opt/kubernetes/server/bin/certs/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/conf/kube-proxy.kubeconfig

kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=/opt/kubernetes/conf/kube-proxy.kubeconfig

kubectl config use-context myk8s-context --kubeconfig=/opt/kubernetes/conf/kube-proxy.kubeconfig

Copy the generated config to the other machine and it can skip the four steps above:

conf]# scp kube-proxy.kubeconfig hdss12-22:/opt/kubernetes/conf/

Loading the ipvs modules

kube-proxy has three traffic-scheduling modes: userspace, iptables, and ipvs; of these, ipvs performs best.

[root@hdss12-21 ~]# lsmod | grep ip_vs
[root@hdss12-21 ~]# for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
[root@hdss12-21 ~]# lsmod | grep ip_vs
ip_vs_wrr              16384  0
ip_vs_wlc              16384  0
ip_vs_sh               16384  0
ip_vs_sed              16384  0
ip_vs_rr               16384  0
ip_vs_pe_sip           16384  0
nf_conntrack_sip       32768  1 ip_vs_pe_sip
ip_vs_ovf              16384  0
ip_vs_nq               16384  0
ip_vs_lc               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_ftp              16384  0
ip_vs_fo               16384  0
ip_vs_dh               16384  0
ip_vs                 172032  28 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_ovf,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_pe_sip,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_nat                 45056  3 ipt_MASQUERADE,nft_chain_nat,ip_vs_ftp
nf_conntrack          172032  4 nf_nat,ipt_MASQUERADE,nf_conntrack_sip,ip_vs
nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
libcrc32c              16384  5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
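The modprobe loop above does not survive a reboot. One way to load the modules at boot on a systemd host (a sketch; list whichever schedulers are actually used, nq in this walkthrough):

cat > /etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_nq
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF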
Create the startup script
[root@hdss12-21 ~]# vim /opt/kubernetes/server/bin/kube-proxy-startup.sh
#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override hdss12-21.host.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ../../conf/kube-proxy.kubeconfig
 ~]# chmod u+x  /opt/kubernetes-v1.15.2/server/bin/kube-proxy-startup.sh
 ~]# mkdir -p /data/logs/kubernetes/kube-proxy

 ~]# vim /etc/supervisord.conf
[program:kube-proxy-12-21]
command=/opt/kubernetes/server/bin/kube-proxy-startup.sh                
numprocs=1                                                      
directory=/opt/kubernetes/server/bin                            
autostart=true                                                  
autorestart=true                                                
startsecs=30                                                    
startretries=3                                                  
exitcodes=0,2                                                   
stopsignal=QUIT                                                 
stopwaitsecs=10                                                 
user=root                                                       
redirect_stderr=true                                            
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB                                    
stdout_logfile_backups=5                                       
stdout_capture_maxbytes=1MB                                     
stdout_events_enabled=false
[root@hdss12-21 bin]# supervisorctl update
kube-proxy-12-21: added process group
[root@hdss12-21 bin]# supervisorctl status
etcd-server-12-21                RUNNING   pid 1804, uptime 6:06:49
kube-apiserver-12-21             RUNNING   pid 2169, uptime 2:02:37
kube-controller-manager-12-21    RUNNING   pid 2528, uptime 0:47:20
kube-kubelet-12-21               RUNNING   pid 2366, uptime 0:48:57
kube-proxy-12-21                 RUNNING   pid 11691, uptime 0:00:50
kube-scheduler-12-21             RUNNING   pid 2529, uptime 0:47:20
Verify the cluster
[root@hdss12-21 bin]# yum install -y ipvsadm
[root@hdss12-21 bin]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 192.168.12.21:6443           Masq    1      0          0         
  -> 192.168.12.22:6443           Masq    1      0          0         

Create a YAML file on 12-21

[root@hdss12-21 ~]# vi nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:v1.7.9
        ports:
        - containerPort: 80
[root@hdss12-21 ~]# kubectl create -f  nginx-ds.yaml
[root@hdss12-21 ~]# kubectl get pods
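A short smoke test of the DaemonSet (a sketch; the Pod IP below is hypothetical and comes from the per-host bip subnets configured in daemon.json earlier):

kubectl get pods -o wide
curl -I 172.17.21.2     # substitute a Pod IP reported above; expect HTTP/1.1 200 OK from nginx/1.7.9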