K8s Binary Installation Without Pitfalls: The Basics

I. Preparing the Virtual Machines

VM OS: CentOS 7.4 with kernel 3.10.0 (anything above 3.8.0 is fine)
Prepare five virtual machines with the following IPs:

  • 192.168.252.11 master1
  • 192.168.252.12 worker1
  • 192.168.252.13 worker2
  • 192.168.252.14 master2
  • 192.168.252.15 worker3
    The hostnames above carry no special meaning; they are leftovers from an earlier project and are unrelated to this one.

1. Set the hostname on every machine

hostnamectl set-hostname <hostname>
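
For example, a sketch using the short names from the list above (use FQDNs such as master1.host.com instead if you prefer them to match the DNS zone configured later):

hostnamectl set-hostname master1   # on 192.168.252.11
hostnamectl set-hostname worker1   # on 192.168.252.12
hostnamectl set-hostname worker2   # on 192.168.252.13
hostnamectl set-hostname master2   # on 192.168.252.14
hostnamectl set-hostname worker3   # on 192.168.252.15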

2. Set the IP address and DNS on every machine (VMs)

The IP must be inside the subnet assigned to the VM network.
Point DNS at the VM network's gateway address.
Point the gateway at the VM network's gateway address.
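
A minimal static-IP sketch for /etc/sysconfig/network-scripts/ifcfg-eth0 on master1, assuming the NAT subnet 192.168.252.0/24 with gateway 192.168.252.2 (the same gateway address used later as the bind forwarder); the interface name and exact values are assumptions, adjust them for your environment:

TYPE=Ethernet
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.252.11
NETMASK=255.255.255.0
GATEWAY=192.168.252.2
DNS1=192.168.252.2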

3. Disable SELinux on all machines

# Turn off SELinux for the current session
setenforce 0

# Disable it permanently
vim /etc/selinux/config
# Change SELINUX=enforcing to SELINUX=disabled

Verification

# Output of Disabled means SELinux is off (right after setenforce 0 it will still show Permissive until a reboot)
getenforce

4. Stop the firewall on all machines

systemctl stop firewalld
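
If you also want to keep it from coming back after a reboot (optional, not part of the original steps):

systemctl disable firewalld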

5. Install the base tools on all machines

 yum install -y epel-release
 
yum install -y wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils vim less

II. Preparing the VM Network Environment

1. Install bind9 on one of the machines

Here it is installed on the machine with IP 192.168.252.11.

1) Install bind9
yum install bind -y
2) Edit the bind9 main configuration file
vim /etc/named.conf

Be especially careful with this file and back it up before editing; bind's syntax is very strict, and semicolons and spaces are easy to get wrong.

options {
        /* Listen on this machine's address */
        listen-on port 53 { 192.168.252.11; };
        /* listen-on-v6 port 53 { ::1; }; */
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file  "/var/named/data/named.recursing";
        secroots-file   "/var/named/data/named.secroots";
        /* Which hosts may query; changed from localhost to any */
        allow-query     { any; };
        /* Added: forward unresolved queries to the upstream gateway */
        forwarders      { 192.168.252.2; };
        

        /* 
         - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
         - If you are building a RECURSIVE (caching) DNS server, you need to enable 
           recursion. 
         - If your recursive DNS server has a public IP address, you MUST enable access 
           control to limit queries to your legitimate users. Failing to do so will
           cause your server to become part of large scale DNS amplification 
           attacks. Implementing BCP38 within your network would greatly
           reduce such attack surface 
        */
        /* Use recursion for DNS queries */
        recursion yes;

        /* Turn these off */
        dnssec-enable no;
        dnssec-validation no;

        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.root.key";

        managed-keys-directory "/var/named/dynamic";
        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

Check that the configuration file is valid

named-checkconf
3) Edit the bind9 zone configuration file
vim /etc/named.rfc1912.zones

Append two zones at the end of the file

zone "host.com" IN {
        type master;
        file "host.com.zone";
        allow-update { 192.168.252.11; };
};

zone "od.com" IN {
        type master;
        file "od.com.zone";
        allow-update { 192.168.252.11; };
};

Edit the host.com zone data file

vim /var/named/host.com.zone

Add the following to the file

$ORIGIN host.com.
$TTL 600        ; 10 minutes
@         IN SOA dns.host.com. dnsadmin.host.com. (
                      2020071001   ; serial
                      10800        ; refresh (3 hours)
                      900          ; retry (15 minutes)
                      604800       ; expire (1 week)
                      86400        ; minimum ( 1day)
                      )
               NS    dns.host.com.
$TTL 60  ; 1 minute
dns                    A     192.168.252.11
master1                A     192.168.252.11
worker1                A     192.168.252.12
worker2                A     192.168.252.13
master2                A     192.168.252.14
worker3                A     192.168.252.15

Edit the od.com zone data file

vim /var/named/od.com.zone

Add the following to the file

$ORIGIN od.com.
$TTL 600        ; 10 minutes
@         IN SOA dns.od.com. dnsadmin.od.com. (
                      2020071002   ; serial
                      10800        ; refresh (3 hours)
                      900          ; retry (15 minutes)
                      604800       ; expire (1 week)
                      86400        ; minimum (1 day)
                      )
               NS    dns.od.com.
$TTL 60  ; 1 minute
dns                    A     192.168.252.11
harbor                 A     192.168.252.14

Check the configuration again

named-checkconf
4) Start bind9
systemctl start named
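
Optionally enable it at boot as well (not in the original steps):

systemctl enable named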
5) Verify that name resolution works
dig -t A master1.host.com @192.168.252.11 +short
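
Given the zone file above, the +short query should print just the A record:

192.168.252.11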

2. Point every machine's DNS at the new server (including the one running bind9)

The file is not always named ifcfg-eth0; adjust to your actual interface name.

vim /etc/sysconfig/network-scripts/ifcfg-eth0

Add the DNS entry

DNS1=192.168.252.11

Save and restart the network service

systemctl restart network

Add a search domain in resolv.conf so short hostnames resolve

vim /etc/resolv.conf

Add one line at the top

search host.com

Use ping to check that the connections work

# Test external connectivity
ping baidu.com

# Test internal name resolution
ping master1

Finally, configure the physical host (the machine running the VMs) to use this DNS server, because we will access services by domain name from it.
Control Panel -> Network and Internet -> Network Connections -> VMnet8 -> Internet Protocol Version 4 -> Preferred DNS server -> enter 192.168.252.11
If changing VMnet8 has no effect, change the adapter that actually carries your network connection instead.
Note: this setting can affect other virtual machines; remove it when you no longer need it.

# Test that the VM is reachable by domain name from the physical host; the fully qualified name is required here
ping master1.host.com

III. Preparing the Certificate-Signing Environment

1. Download the tools

Download cfssl on master2 (192.168.252.14)

# Download
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo

# Make them executable
chmod u+x /usr/local/bin/cfssl*

2. Create a self-signed CA

1) Create the certs directory
mkdir /opt/certs
2) Create the JSON config for the CA certificate signing request (CSR)
vim /opt/certs/ca-csr.json

Delete the comments when copying this file, otherwise it will not parse (JSON does not allow comments).

{
    # Name of the CA
    "CN": "OldboyEdu",
    "hosts": [
    ],
    # Key algorithm
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            # Country
            "C": "CN",
            # Province
            "ST": "beijing",
            # City
            "L": "beijing",
            # Organization
            "O": "od",
            # Organizational unit, e.g. company department
            "OU": "ops"
        }
    ],
    "ca": {
        # Validity period (cfssl's default is 1 year; 175200h is 20 years)
        "expiry": "175200h"
    }
}
3) Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssl-json -bare ca

Running this produces three files:

  • ca.csr
  • ca-key.pem
  • ca.pem
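
As a quick sanity check, you can inspect the new CA certificate with the cfssl-certinfo binary downloaded earlier; it prints the subject, validity period and so on:

cd /opt/certs
cfssl-certinfo -cert ca.pem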

IV. Preparing the Docker Environment

Install Docker on 192.168.252.12, 192.168.252.13, 192.168.252.14 and 192.168.252.15.

# Download the yum repository file
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker
yum install -y docker-ce
# Create the docker data directory
mkdir -p /data/docker
# Create the configuration directory
mkdir /etc/docker
# Create the configuration file
vim /etc/docker/daemon.json

The harbor address has been added to the insecure registries.
The bip subnet differs on each machine: its middle octets follow the host's address, so a container IP immediately tells you which node it lives on.
Delete the comment when copying, otherwise the JSON will not parse.

{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
  "registry-mirrors": ["https://registry.docker-cn.com"],
  # This field differs on every docker host; I set 172.7.11.1/24, 172.7.12.1/24 and 172.7.15.1/24 respectively
  "bip": "172.7.11.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

Start docker

systemctl start docker
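
A quick way to confirm that the daemon.json settings took effect (docker info reports the storage driver, cgroup driver and data root), plus an optional enable-on-boot:

docker info | grep -Ei 'storage driver|cgroup driver|root dir'
# Optional: start docker on boot
systemctl enable docker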

V. Deploying the Private Docker Image Registry (Harbor)

Install Harbor on 192.168.252.14.

1. Download Harbor

Download page: https://github.com/goharbor/harbor/tags
Version used: harbor-offline-installer-v1.8.3.tgz

2. Unpack Harbor

tar -zxvf harbor-offline-installer-v1.8.3.tgz -C /opt/

# Renaming plus a symlink makes future upgrades easier
# Rename the harbor directory
mv /opt/harbor /opt/harbor-v1.8.3
# Create a symlink
ln -s /opt/harbor-v1.8.3/ /opt/harbor

3. Edit the Harbor configuration file

vim /opt/harbor/harbor.yml

Change the following settings to these values and leave everything else as it is.

hostname: harbor.od.com
http:
    port: 180
# Default admin password
harbor_admin_password: Harbor12345
data_volume: /data/harbor
log:
    level: info
    rotate_count: 50
    rotate_size: 200M
    location: /data/harbor/logs

Create the required directories

mkdir -p /data/harbor
mkdir -p /data/harbor/logs

4. Install docker-compose

Harbor depends on docker-compose.

yum install docker-compose -y

5. Install Harbor

Run install.sh from the harbor directory

/opt/harbor/install.sh

Check the containers that were started (run docker-compose from inside /opt/harbor, where its compose file lives)

docker-compose ps
docker ps -a

6. Install nginx to reverse-proxy port 180

yum install nginx -y

Create the nginx server configuration

vim /etc/nginx/conf.d/harbor.od.com.conf
server {
    listen 80;
    server_name harbor.od.com;
    
    client_max_body_size 1000m;
    
    location / {
        proxy_pass http://127.0.0.1:180/;
    }
}

Check the configuration

nginx -t

Start nginx and enable it on boot

systemctl start nginx

systemctl enable nginx

Test it

curl harbor.od.com

7. Using Harbor

1) Open Harbor from the physical host

URL: http://harbor.od.com
The physical host's DNS must already point at 192.168.252.11 (see the steps above).
Username: admin
Password: Harbor12345

2) Create a project

Project name: public
Access level: public

3) Pull an nginx image
docker pull nginx:1.7.9

Check the image

docker images | grep 1.7.9

Tag the image (84581e99d807 is the image ID shown by the previous command; nginx:1.7.9 works just as well)

docker tag 84581e99d807 harbor.od.com/public/nginx:v1.7.9
4) Push the image to the Harbor registry

Log in to Harbor

docker login harbor.od.com

Push the image

docker push harbor.od.com/public/nginx:v1.7.9

VI. Installing the etcd Cluster

Install etcd on 192.168.252.12, 192.168.252.13 and 192.168.252.15.

1. Issue the etcd certificates

Sign the certificates on 192.168.252.14.

1) Create the CA signing profile config
vim /opt/certs/ca-config.json
  • server: the certificate presented by the server side; clients use it to verify the server's identity
  • client: the certificate presented by the client side; the server uses it to verify the client's identity
  • peer: the certificate used for mutual authentication between peers, for example between etcd nodes
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
2) Create the etcd CSR config
vim /opt/certs/etcd-peer-csr.json

The important part is hosts: list every server that might ever run etcd. CIDR ranges are not allowed, and adding a new etcd server later means re-issuing the certificate.

{
    "CN": "k8s-etcd",
    "hosts": [
        "192.168.252.11",
        "192.168.252.12",
        "192.168.252.13",
        "192.168.252.14",
        "192.168.252.15"],
    "key": {
        "algo": "rsa",
        "size": 2048    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"        }
    ]
}
3) Sign the certificate (run from /opt/certs)
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer

2. Install etcd

On 192.168.252.12, 192.168.252.13 and 192.168.252.15.

1) Add a user
useradd -s /sbin/nologin -M etcd
2) Download etcd from GitHub

Download page: https://github.com/etcd-io/etcd/
Version used: etcd-v3.1.20-linux-amd64.tar.gz

3) Unpack etcd
tar -zxvf etcd-v3.1.20-linux-amd64.tar.gz -C /opt/

# Rename
mv /opt/etcd-v3.1.20-linux-amd64 /opt/etcd-v3.1.20
# Create a symlink
ln -s /opt/etcd-v3.1.20 /opt/etcd
4) Copy the etcd certificates and prepare the data and log directories

Create the directories

mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server

Copy the certificates into /opt/etcd/certs

scp master2:/opt/certs/ca.pem /opt/etcd/certs
scp master2:/opt/certs/etcd-peer.pem /opt/etcd/certs
scp master2:/opt/certs/etcd-peer-key.pem /opt/etcd/certs
5) Create the etcd startup script (different on every machine, adjust accordingly)
vim /opt/etcd/etcd-server-startup.sh

listen-peer-urls: the port etcd nodes use to talk to each other
listen-client-urls: the port clients use to talk to etcd
quota-backend-bytes: backend storage quota
Parameters that must be changed per machine: name, listen-peer-urls, listen-client-urls, initial-advertise-peer-urls and advertise-client-urls

#!/bin/sh

/opt/etcd/etcd --name etcd-server-252-12 \
    --data-dir /data/etcd/etcd-server \
    --listen-peer-urls https://192.168.252.12:2380 \
    --listen-client-urls https://192.168.252.12:2379,http://127.0.0.1:2379 \
    --quota-backend-bytes 8000000000 \
    --initial-advertise-peer-urls https://192.168.252.12:2380 \
    --advertise-client-urls https://192.168.252.12:2379,http://127.0.0.1:2379 \
    --initial-cluster  etcd-server-252-12=https://192.168.252.12:2380,etcd-server-252-13=https://192.168.252.13:2380,etcd-server-252-15=https://192.168.252.15:2380 \
    --ca-file ./certs/ca.pem \
    --cert-file ./certs/etcd-peer.pem \
    --key-file ./certs/etcd-peer-key.pem \
    --client-cert-auth  \
    --trusted-ca-file ./certs/ca.pem \
    --peer-ca-file ./certs/ca.pem \
    --peer-cert-file ./certs/etcd-peer.pem \
    --peer-key-file ./certs/etcd-peer-key.pem \
    --peer-client-cert-auth \
    --peer-trusted-ca-file ./certs/ca.pem \
    --log-output stdout
6) Make it executable
chmod u+x /opt/etcd/etcd-server-startup.sh

# Give ownership of the files to the etcd user
chown -R etcd.etcd /opt/etcd/ /data/etcd /data/logs/etcd-server
7) Run it in the background with supervisor (optional)

Install supervisor

yum install supervisor -y

Start supervisord

systemctl start supervisord 
systemctl enable supervisord

Create the supervisord program config

vim /etc/supervisord.d/etcd-server.ini

Remember to change the program name on each machine

[program:etcd-server-252-12]
command=/opt/etcd/etcd-server-startup.sh         ; the program (relative uses PATH, can take args)
numprocs=1                                            ; number of processes copies to start (def 1)
directory=/opt/etcd                              ; directory to cwd to before exec (def no cwd)
autostart=true                                        ; start at supervisord start (default: true)
autorestart=true                                      ; retstart at unexpected quit (default: true)
startsecs=30                                          ; number of secs prog must stay running (def. 1)
startretries=3                                        ; max # of serial start failures (default 3)
exitcodes=0,2                                         ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                       ; signal used to kill process (default TERM)
stopwaitsecs=10                                       ; max num secs to wait b4 SIGKILL (default 10)
user=etcd                                             ; setuid to this UNIX account to run the program
redirect_stderr=true                                  ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                          ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=5                              ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                           ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                           ; emit events on stdout writes (default false)

Reload the supervisord configuration

supervisorctl update

Check supervisord status

supervisorctl status
8) Check the etcd status

Check etcd's listening ports; startup is only successful if both 2379 and 2380 are being listened on (you should see three entries).

netstat -luntp | grep etcd

List the etcd members

/opt/etcd/etcdctl member list

Check the cluster health

/opt/etcd/etcdctl cluster-health
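
If all three members are up, each member should be reported as healthy and the output should end with a line roughly like:

cluster is healthy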

VII. Installing the kube-apiserver

1. Download Kubernetes

Download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md#downloads-for-v1152
Version used: v1.15.2
Scroll down the page to "Downloads for v1.15.2" -> "Server Binaries" -> "kubernetes-server-linux-amd64.tar.gz",
or download it directly from https://dl.k8s.io/v1.15.2/kubernetes-server-linux-amd64.tar.gz

2. Unpack Kubernetes

Upload the downloaded file to 192.168.252.12 and 192.168.252.13, then extract it into /opt.

tar -zxvf kubernetes-server-linux-amd64.tar.gz -C /opt

Rename the directory

mv /opt/kubernetes/ /opt/kubernetes-v1.15.2

Create a symlink

ln -s /opt/kubernetes-v1.15.2 /opt/kubernetes

Remove the unneeded tarballs and docker tag files

rm -f /opt/kubernetes/kubernetes-src.tar.gz
rm -f /opt/kubernetes/server/bin/*.tar
rm -f /opt/kubernetes/server/bin/*_tag

3. Issue the apiserver certificates

Sign the certificates on 192.168.252.14.

1) Create the client CSR config (the certificate the apiserver uses when talking to etcd)
vim /opt/certs/client-csr.json
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
2) Sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client
3) Create the apiserver CSR config (the apiserver's server certificate; callers need a matching client certificate to connect)
vim /opt/certs/apiserver-csr.json
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "192.168.252.11",
        "192.168.252.12",
        "192.168.252.13",
        "192.168.252.14",
        "192.168.252.15",
        "192.168.252.20"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
4) Sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver
5) Copy the certificates into place

Both 192.168.252.12 and 192.168.252.13 need these certificates.
Create the certificate directory and copy the files:

mkdir -p /opt/kubernetes/server/bin/certs
scp master2:/opt/certs/ca.pem /opt/kubernetes/server/bin/certs
scp master2:/opt/certs/ca-key.pem /opt/kubernetes/server/bin/certs
scp master2:/opt/certs/client.pem /opt/kubernetes/server/bin/certs
scp master2:/opt/certs/client-key.pem /opt/kubernetes/server/bin/certs
scp master2:/opt/certs/apiserver.pem /opt/kubernetes/server/bin/certs
scp master2:/opt/certs/apiserver-key.pem /opt/kubernetes/server/bin/certs

4. Create the apiserver configuration

1) Create the configuration directory
mkdir -p /opt/kubernetes/server/bin/conf
2) Create the audit policy file
vim /opt/kubernetes/server/bin/conf/audit.yaml

After opening the file, run :set paste in vim to avoid auto-indentation.

apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

5. Create the apiserver startup script

1) Write the startup script
vim /opt/kubernetes/server/bin/kube-apiserver-startup.sh
#!/bin/bash

/opt/kubernetes/server/bin/kube-apiserver \
    --apiserver-count 2 \
    --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
    --audit-policy-file ./conf/audit.yaml \
    --authorization-mode RBAC \
    --client-ca-file ./certs/ca.pem \
    --requestheader-client-ca-file ./certs/ca.pem \
    --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
    --etcd-cafile ./certs/ca.pem \
    --etcd-certfile ./certs/client.pem \
    --etcd-keyfile ./certs/client-key.pem \
    --etcd-servers https://192.168.252.12:2379,https://192.168.252.13:2379,https://192.168.252.15:2379 \
    --service-account-key-file ./certs/ca-key.pem \
    --service-cluster-ip-range 192.168.0.0/16 \
    --service-node-port-range 3000-29999 \
    --target-ram-mb=1024 \
    --kubelet-client-certificate ./certs/client.pem \
    --kubelet-client-key ./certs/client-key.pem \
    --log-dir  /data/logs/kubernetes/kube-apiserver \
    --tls-cert-file ./certs/apiserver.pem \
    --tls-private-key-file ./certs/apiserver-key.pem \
    --v 2

Make it executable

chmod +x /opt/kubernetes/server/bin/kube-apiserver-startup.sh

Create the required directories

mkdir -p /data/logs/kubernetes/kube-apiserver/audit-log
2) Create the supervisor config
vim /etc/supervisord.d/kube-apiserver.ini

This file differs slightly on each machine.

[program:kube-apiserver-252-12]
command=/opt/kubernetes/server/bin/kube-apiserver-startup.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=5
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

Reload supervisor

supervisorctl update

Check the status

supervisorctl status

Start/stop the apiserver

supervisorctl start kube-apiserver-252-12
supervisorctl stop kube-apiserver-252-12
supervisorctl restart kube-apiserver-252-12
supervisorctl status kube-apiserver-252-12

Check the process and ports

netstat -lntp|grep kube-api

ps -aux|grep kube-apiserver|grep -v grep
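
In this version the apiserver also serves the legacy insecure port on 127.0.0.1:8080 (used below by the controller-manager and scheduler), so a quick local health check is possible; it should simply print ok:

curl http://127.0.0.1:8080/healthz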

VIII. Deploying the L4 Reverse Proxy for the Control Plane

1. Install nginx

Install nginx on 192.168.252.11 and 192.168.252.15.

yum install nginx -y

Edit the configuration file

vim /etc/nginx/nginx.conf

Append this at the very end of the file, outside any existing block (not inside http {}).

stream {
    upstream kube-apiserver {
        server 192.168.252.12:6443    max_fails=3 fail_timeout=30s;
        server 192.168.252.13:6443    max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}

Check the configuration

nginx -t

Start nginx and enable it on boot

systemctl start nginx

systemctl enable nginx
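
A rough check that the L4 proxy really forwards to the apiservers: any HTTPS response, even a 403 Forbidden for the anonymous request, proves the TCP path through port 7443 works:

curl -k https://127.0.0.1:7443/healthz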

2. Create a virtual IP (VIP) with keepalived

Install keepalived on 192.168.252.11 and 192.168.252.15.

1) Install keepalived
yum install keepalived -y
2) Write the keepalived port-check script
vim /etc/keepalived/check_port.sh
#!/bin/bash
if [ $# -eq 1 ] && [[ $1 =~ ^[0-9]+ ]];then
    [ $(netstat -lntp|grep ":$1 " |wc -l) -eq 0 ] && echo "[ERROR] nginx may be not running!" && exit 1 || exit 0
else
    echo "[ERROR] need one port!"
    exit 1
fi

Or use this script instead; either one will do.

#!/bin/bash
# Check whether the given port is being listened on; exit non-zero if not
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    PORT_PROCESS=`ss -lnt | grep $CHK_PORT|wc -l`
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT is not used, exiting"
        exit 1
    fi
else
    echo "Check port cannot be empty!"
    exit 1
fi

Make it executable

chmod +x /etc/keepalived/check_port.sh
3) Write the keepalived configuration (master: 192.168.252.11)
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id 192.168.252.11
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 192.168.252.11
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        192.168.252.20
    }
}
4) Write the keepalived configuration (backup: 192.168.252.15)
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
  router_id 192.168.252.15
}
vrrp_script chk_nginx {
  script "/etc/keepalived/check_port.sh 7443"
  interval 2
  weight -20
}
vrrp_instance VI_1 {
  state BACKUP
  interface ens33
  virtual_router_id 251
  mcast_src_ip 192.168.252.15
  priority 90
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 11111111
  }
  track_script {
    chk_nginx
  }
  virtual_ipaddress {
    192.168.252.20
  }
}
5) Start keepalived
systemctl start keepalived
systemctl enable keepalived

Check whether the VIP has appeared

ip addr

Check whether ens33 now carries the IP 192.168.252.20.
When the master fails, the VIP floats to the backup and (because of nopreempt) does not move back on its own. To move it back, confirm everything is healthy and then restart keepalived on both the master and the backup.

IX. Installing the Controller Manager and Scheduler

Deploy on 192.168.252.12 and 192.168.252.13.

1. Create the kube-controller-manager startup script

vim /opt/kubernetes/server/bin/kube-controller-manager-startup.sh
#!/bin/sh
/opt/kubernetes/server/bin/kube-controller-manager \
    --cluster-cidr 172.7.0.0/16 \
    --leader-elect true \
    --log-dir /data/logs/kubernetes/kube-controller-manager \
    --master http://127.0.0.1:8080 \
    --service-account-private-key-file ./certs/ca-key.pem \
    --service-cluster-ip-range 192.168.0.0/16 \
    --root-ca-file ./certs/ca.pem \
    --v 2

Make it executable

chmod u+x /opt/kubernetes/server/bin/kube-controller-manager-startup.sh

Create the log directory

mkdir -p /data/logs/kubernetes/kube-controller-manager 

2. Create the supervisor config for kube-controller-manager

vim /etc/supervisord.d/kube-conntroller-manager.ini

Remember to change the program name on each machine

[program:kube-controller-manager-252-12]
command=/opt/kubernetes/server/bin/kube-controller-manager-startup.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                              ; directory to cwd to before exec (def no cwd)
autostart=true                                                                    ; start at supervisord start (default: true)
autorestart=true                                                                  ; retstart at unexpected quit (default: true)
startsecs=30                                                                      ; number of secs prog must stay running (def. 1)
startretries=3                                                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                                                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log  ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false  

Reload supervisor

supervisorctl update

3. Create the kube-scheduler startup script

vim /opt/kubernetes/server/bin/kube-scheduler-startup.sh
#!/bin/sh
/opt/kubernetes/server/bin/kube-scheduler \
    --leader-elect  \
    --log-dir /data/logs/kubernetes/kube-scheduler \
    --master http://127.0.0.1:8080 \
    --v 2

Make it executable

chmod u+x /opt/kubernetes/server/bin/kube-scheduler-startup.sh

Create the log directory

mkdir -p /data/logs/kubernetes/kube-scheduler

4. Create the supervisor config for kube-scheduler

vim /etc/supervisord.d/kube-scheduler.ini

Remember to change the program name on each machine

[program:kube-scheduler-252-12]
command=/opt/kubernetes/server/bin/kube-scheduler-startup.sh                     
numprocs=1                                                               
directory=/opt/kubernetes/server/bin                                     
autostart=true                                                           
autorestart=true                                                         
startsecs=30                                                             
startretries=3                                                           
exitcodes=0,2                                                            
stopsignal=QUIT                                                          
stopwaitsecs=10                                                          
user=root                                                                
redirect_stderr=true                                                     
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log 
stdout_logfile_maxbytes=64MB                                             
stdout_logfile_backups=4                                                 
stdout_capture_maxbytes=1MB                                              
stdout_events_enabled=false 

Reload supervisor

supervisorctl update

5. Use kubectl

1) Create a symlink
ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
2) Check the health of the control-plane components
kubectl get cs
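
On a healthy cluster the output should look roughly like this (three etcd members because etcd runs on .12, .13 and .15; exact formatting varies by version):

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}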

X. Deploying the kubelet

Deploy on 192.168.252.12 and 192.168.252.13.

1. Issue the kubelet server certificate

Sign the certificate on 192.168.252.14.

1) Create the CSR config
vim /opt/certs/kubelet-csr.json

Add the IP of every machine that might ever run a kubelet to hosts.

{
    "CN": "k8s-kubelet",
    "hosts": [
    "127.0.0.1",
    "192.168.252.11",
    "192.168.252.12",
    "192.168.252.13",
    "192.168.252.14",
    "192.168.252.15",
    "192.168.252.16",
    "192.168.252.17",
    "192.168.252.18",
    "192.168.252.19"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
2) Sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
3) Copy the certificate to the servers that need it

On 192.168.252.12 and 192.168.252.13:

scp master2:/opt/certs/kubelet.pem /opt/kubernetes/server/bin/certs
scp master2:/opt/certs/kubelet-key.pem /opt/kubernetes/server/bin/certs

2. Create the kubelet kubeconfig

1) set-cluster

Define the cluster to connect to (a kubeconfig can hold several clusters).
--server: use the keepalived VIP.
--kubeconfig: where the generated kubeconfig file is written.

kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
--embed-certs=true \
--server=https://192.168.252.20:7443 \
--kubeconfig=/opt/kubernetes/server/bin/conf/kubelet.kubeconfig
2) set-credentials

Define the user credentials, i.e. the client key and certificate used to log in (several credentials can be defined).

kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/certs/client.pem \
--client-key=/opt/kubernetes/server/bin/certs/client-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/server/bin/conf/kubelet.kubeconfig
3) set-context

Define the context, i.e. which user goes with which cluster.

kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=/opt/kubernetes/server/bin/conf/kubelet.kubeconfig
4) use-context

Select which context is currently in use.

kubectl config use-context myk8s-context \
--kubeconfig=/opt/kubernetes/server/bin/conf/kubelet.kubeconfig

On the other node you can simply copy /opt/kubernetes/server/bin/conf/kubelet.kubeconfig into the same directory instead of repeating these commands:

scp worker1.host.com:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig /opt/kubernetes/server/bin/conf

3. Authorize the k8s-node user

This step only needs to be run once on any master node; the binding is stored in etcd, so every master sees it.
Bind the k8s-node user to the cluster role system:node so that k8s-node has the permissions of a worker node.

1) Create the binding manifest
vim /opt/kubernetes/server/bin/conf/k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
2) Apply it
kubectl create -f /opt/kubernetes/server/bin/conf/k8s-node.yaml
3) Check the result
kubectl get clusterrolebinding k8s-node

4. Prepare the pause base image

1) Pull the image

Do this on 192.168.252.14.

docker pull kubernetes/pause
2) Push it to the private registry (Harbor)

Tag it

docker image tag kubernetes/pause:latest harbor.od.com/public/pause:latest

Log in to Harbor

docker login -u admin harbor.od.com

Push it to Harbor

docker image push harbor.od.com/public/pause:latest

5. Create the kubelet startup script

On 192.168.252.12 and 192.168.252.13.

vim /opt/kubernetes/server/bin/kubelet-startup.sh

Remember to change the --hostname-override value on each machine

#!/bin/sh
/opt/kubernetes/server/bin/kubelet \
    --anonymous-auth=false \
    --cgroup-driver systemd \
    --cluster-dns 192.168.0.2 \
    --cluster-domain cluster.local \
    --runtime-cgroups=/systemd/system.slice \
    --kubelet-cgroups=/systemd/system.slice \
    --fail-swap-on="false" \
    --client-ca-file ./certs/ca.pem \
    --tls-cert-file ./certs/kubelet.pem \
    --tls-private-key-file ./certs/kubelet-key.pem \
    --hostname-override worker1.host.com \
    --image-gc-high-threshold 20 \
    --image-gc-low-threshold 10 \
    --kubeconfig ./conf/kubelet.kubeconfig \
    --log-dir /data/logs/kubernetes/kube-kubelet \
    --pod-infra-container-image harbor.od.com/public/pause:latest \
    --root-dir /data/kubelet

Make it executable

chmod u+x /opt/kubernetes/server/bin/kubelet-startup.sh

Create the required directories

mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet

6. Create the kubelet supervisor config

vim /etc/supervisord.d/kube-kubelet.ini

Remember to change the program name

[program:kube-kubelet-252-12]
command=/opt/kubernetes/server/bin/kubelet-startup.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=5
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

Reload supervisor

supervisorctl update

Check the status

supervisorctl status

On a master, check whether the nodes have joined the cluster

kubectl get nodes

Label the nodes

kubectl label node worker1.host.com node-role.kubernetes.io/master=
kubectl label node worker1.host.com node-role.kubernetes.io/node=
kubectl label node worker2.host.com node-role.kubernetes.io/master=
kubectl label node worker2.host.com node-role.kubernetes.io/node=
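
kubectl get nodes should now show both nodes Ready with both roles, roughly:

NAME               STATUS   ROLES         AGE   VERSION
worker1.host.com   Ready    master,node   10m   v1.15.2
worker2.host.com   Ready    master,node   10m   v1.15.2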

XI. Deploying kube-proxy

1. Issue the kube-proxy client certificate

Sign the certificate on 192.168.252.14.

1) Create the CSR config
vim /opt/certs/kube-proxy-client-csr.json
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
2) Sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-client-csr.json |cfssl-json -bare kube-proxy-client
3) Copy the certificates to the servers that need them

On 192.168.252.12 and 192.168.252.13:

scp master2:/opt/certs/kube-proxy-client.pem /opt/kubernetes/server/bin/certs
scp master2:/opt/certs/kube-proxy-client-key.pem /opt/kubernetes/server/bin/certs

2. Create the kube-proxy kubeconfig

On 192.168.252.12 and 192.168.252.13.

1) set-cluster
kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
--embed-certs=true \
--server=https://192.168.252.20:7443 \
--kubeconfig=/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig
2) set-credentials
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/server/bin/certs/kube-proxy-client.pem \
--client-key=/opt/kubernetes/server/bin/certs/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig
3) set-context

The --user value here must match the certificate CN without its system: prefix.

kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig
4) use-context
kubectl config use-context myk8s-context \
--kubeconfig=/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig

On 192.168.252.13 you can simply copy the generated file over:

scp worker1.host.com:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig /opt/kubernetes/server/bin/conf

3. Load the ipvs kernel modules

On 192.168.252.12 and 192.168.252.13.

cd ~
vim /root/ipvs.sh

kube-proxy has three traffic-forwarding modes: userspace, iptables and ipvs; ipvs performs best.

#!/bin/bash
# Load every ipvs-related kernel module available for the running kernel
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");
do 
    echo $i; 
    /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;
done

Make it executable

chmod +x ipvs.sh

Run the script

./ipvs.sh

Check that the ipvs modules are loaded

lsmod | grep ip_vs

4. Create the kube-proxy startup script

On 192.168.252.12 and 192.168.252.13.

vim /opt/kubernetes/server/bin/kube-proxy-startup.sh

Remember to change the --hostname-override value on each machine

#!/bin/sh
/opt/kubernetes/server/bin/kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override worker1.host.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig /opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig

Make it executable

chmod +x /opt/kubernetes/server/bin/kube-proxy-startup.sh

Create the log directory

mkdir -p /data/logs/kubernetes/kube-proxy

5. Create the kube-proxy supervisor config

vim /etc/supervisord.d/kube-proxy.ini

Remember to change the program name

[program:kube-proxy-252-12]
command=/opt/kubernetes/server/bin/kube-proxy-startup.sh                
numprocs=1                                                      
directory=/opt/kubernetes/server/bin                            
autostart=true                                                  
autorestart=true                                                
startsecs=30                                                    
startretries=3                                                  
exitcodes=0,2                                                   
stopsignal=QUIT                                                 
stopwaitsecs=10                                                 
user=root                                                       
redirect_stderr=true                                            
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB                                    
stdout_logfile_backups=5                                       
stdout_capture_maxbytes=1MB                                     
stdout_events_enabled=false

Reload supervisor

supervisorctl update

Check the status

supervisorctl status

6. Verify the installation

Install ipvsadm

yum install -y ipvsadm

Check the ipvs forwarding rules

ipvsadm -Ln

The output should look like this: 192.168.0.1:443 is forwarded with the nq scheduler to 192.168.252.12:6443 and 192.168.252.13:6443.

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 192.168.252.12:6443          Masq    1      0          0         
  -> 192.168.252.13:6443          Masq    1      0          0 

Check the Kubernetes services

kubectl get svc
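
At this point only the default kubernetes service exists, and its ClusterIP matches the ipvs entry above; the output should look roughly like:

NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   192.168.0.1   <none>        443/TCP   1d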

From the corresponding servers, try reaching the docker bridge addresses; a 200 OK response means the path works, a failure means the hosts cannot communicate.

# On 192.168.252.11
curl -I 172.7.11.1

# On 192.168.252.12
curl -I 172.7.12.1

Without a network plugin, traffic cannot cross nodes.

XII. A Quick Test of the Cluster

On either worker node, 192.168.252.12 or 192.168.252.13.

1. Deploy nginx

1) Create the manifest
vim /root/nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
    name: nginx-ds
spec:
    template:
        metadata:
            labels:
                app: nginx-ds
        spec:
            containers:
            - name: my-nginx
              image: harbor.od.com/public/nginx:v1.7.9
              ports:
              - containerPort: 80
2) Apply the manifest
kubectl create -f nginx-ds.yaml

Check the pods

kubectl get pods

kubectl get pods -o wide

From the corresponding servers, try to reach the pod IPs

# On 192.168.252.11
curl  172.7.11.2

# On 192.168.252.12
curl  172.7.12.2

But pods on different nodes still cannot reach each other, because no network plugin has been installed yet.
