02 Kubernetes Auxiliary Environment Setup

Kubernetes deployment documentation - auxiliary environment section

Author: MappleZF

Version: 1.0.0

Overview

The auxiliary environment hosts the DNS service, the Harbor image registry, PCS+HAProxy, Nginx, NFS, and other services that support the cluster.

  • The DNS service provides name resolution for hosts outside the cluster
  • The Harbor registry provides a Docker image registry for the cluster
  • NFS provides shared storage for the cluster; a Ceph cluster will be deployed later to handle cluster storage
  • PCS+HAProxy provides the virtual IP (VIP), load balancing, and reverse proxying. PCS is comparatively stable in practice, and HAProxy's web statistics page makes monitoring fairly intuitive; keepalived+nginx could be used instead.
  • Nginx provides layer-7 reverse proxying and load balancing for HTTP and HTTPS; alternatively, HAProxy alone could handle both layer-4 and layer-7 proxying.

1. Deploying the DNS service

1.1 Install the DNS service
Install the required tools
yum install wget net-tools tree nmap sysstat dos2unix bind-utils -y
yum install bind -y
rpm -qa bind
1.2 Edit the main configuration file
[root@lb03.host.com:/root]# vim /etc/named.conf
options {
        listen-on port 53 { 192.168.13.99; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file  "/var/named/data/named.recursing";
        secroots-file   "/var/named/data/named.secroots";
        allow-query     { any; };
        forwarders      { 202.101.172.35; };

        /* 
         - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
         - If you are building a RECURSIVE (caching) DNS server, you need to enable 
           recursion. 
         - If your recursive DNS server has a public IP address, you MUST enable access 
           control to limit queries to your legitimate users. Failing to do so will
           cause your server to become part of large scale DNS amplification 
           attacks. Implementing BCP38 within your network would greatly
           reduce such attack surface 
        */
        recursion yes;
				/* To save resources in a lab environment, dnssec-enable and dnssec-validation can be set to no */
        dnssec-enable yes;
        dnssec-validation yes;
				
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.root.key";

        managed-keys-directory "/var/named/dynamic";

        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};


1.3 Zone configuration file
[root@lb03.host.com:/root]# vim /etc/named.rfc1912.zones

Append the following content
zone "host.com" IN {
        type master;
        file "host.com.zone";
        allow-update { 192.168.13.99; };
};

zone "lowan.com" IN {
        type master;
        file "lowan.com.zone";
        allow-update { 192.168.13.99; };
};

zone "iot.com" IN {
        type master;
        file "iot.com.zone";
        allow-update { 192.168.13.99; };
};

zone "lowans.com" IN {
        type master;
        file "lowans.com.zone";
        allow-update { 192.168.13.99; };
};

1.4 Configure the zone data files
1.4.1 Configure the host domain data file
[root@lb03.host.com:/root]# cp -p /var/named/named.localhost /var/named/host.com.zone
[root@lb03.host.com:/root]# vim /var/named/host.com.zone
$ORIGIN host.com.
$TTL 600        ;10 minutes
@       IN SOA  dns.host.com.    dnsadmin.host.com. (
                                        2020101001      ; serial
                                        10800           ; refresh(3 hours)
                                        900             ; retry(15 minutes)
                                        604800          ; expire(1 week)
                                        86400    )      ; minimum(1 day)
        NS      dns.host.com.
$TTL 60         ; 1 minute
dns             A       192.168.13.99
lbvip           A       192.168.13.100
lb01            A       192.168.13.97
lb02            A       192.168.13.98
lb03            A       192.168.13.99
k8smaster01     A       192.168.13.101
k8smaster02     A       192.168.13.102
k8smaster03     A       192.168.13.103
k8sworker01     A       192.168.13.105
k8sworker02     A       192.168.13.106
k8sworker03     A       192.168.13.107
k8sworker04     A       192.168.13.108
k8sworker05     A       192.168.13.109

1.4.2 Configure the business domain data files
[root@lb03.host.com:/root]# vim /var/named/iot.com.zone
$ORIGIN iot.com. 
$TTL 600        ; 10 minutes
@       IN SOA  dns.iot.com.   dnsadmin.iot.com. (
                                        2020101001        ; serial
                                        10800             ; refresh (3 hours)
                                        900               ; retry (15 minutes)
                                        604800            ; expire (1 week)
                                        86400 )           ; minimum (1 day)
        NS      dns.iot.com.
$TTL 60         ; 1 minute
dns             A       192.168.13.99


[root@lb03.host.com:/var/named]# vim lowan.com.zone 
$ORIGIN lowan.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.lowan.com.   dnsadmin.lowan.com. (
                                        2020101004        ; serial
                                        10800             ; refresh (3 hours)
                                        900               ; retry (15 minutes)
                                        604800            ; expire (1 week)
                                        86400 )           ; minimum (1 day)
        NS      dns.lowan.com.
$TTL 60         ; 1 minute
dns             A       192.168.13.99
traefik         A       192.168.13.100

1.5 Check the configuration files
[root@lb03.host.com:/var]# chown -R named:named named
[root@lb03.host.com:/var/named]# named-checkconf /etc/named.conf
[root@lb03.host.com:/var/named]# named-checkconf /etc/named.rfc1912.zones
1.6 Check the zone data files
[root@lb03.host.com:/var/named]# named-checkzone host.com /var/named/host.com.zone
zone host.com/IN: loaded serial 2020101001
OK
[root@lb03.host.com:/var/named]# named-checkzone iot.com /var/named/iot.com.zone
zone iot.com/IN: loaded serial 2020101001
OK

1.7 Start the service
[root@lb03.host.com:/var/named]# systemctl enable named
[root@lb03.host.com:/var/named]# systemctl start named
[root@lb03.host.com:/var/named]# rndc status
[root@lb03.host.com:/var/named]# rndc reload
[root@lb03.host.com:/var/named]# netstat -luntp | grep 53
1.8 Update resolv.conf
Edit
[root@lb03.host.com:/var/named]# vim /etc/resolv.conf
Append the following content
search host.com
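With the search domain in place (and assuming /etc/resolv.conf also lists nameserver 192.168.13.99), names in the host.com zone should now resolve. A quick check against the zone data above:

[root@lb03.host.com:/var/named]# dig -t A k8smaster01.host.com @192.168.13.99 +short
192.168.13.101
[root@lb03.host.com:/var/named]# host lbvip.host.com 192.168.13.99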

2. Deploying Harbor

2.1 Install docker-compose
[root@lb02.host.com:/root]# sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
[root@lb02.host.com:/root]# sudo chmod +x /usr/local/bin/docker-compose
[root@lb02.host.com:/root]# sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
[root@lb02.host.com:/root]# docker-compose --version
docker-compose version 1.26.2, build eefe0d31

2.2 Install Harbor
2.2.1 Download and extract
mkdir -p /opt/src && cd /opt/src
wget -c https://github.com/goharbor/harbor/releases/download/v2.0.2/harbor-offline-installer-v2.0.2.tgz
tar -xf /opt/src/harbor-offline-installer-v2.0.2.tgz -C /data/
cd /data/harbor
mv harbor.yml.tmpl harbor.yml
mkdir -p /data/harbor/{data,logs}

2.2.2 Self-signed certificate
[root@k8smaster01.host.com:/data/ssl]# (umask 077; openssl genrsa -out harbor.iot.com.key 2048)
Generating RSA private key, 2048 bit long modulus
...................................+++
...................................................+++
e is 65537 (0x10001)
[root@k8smaster01.host.com:/data/ssl]# openssl req -new -key harbor.iot.com.key -out harbor.iot.com.csr -subj "/CN=harbor.iot.com/C=CN/ST=Beijing/L=Beijing/O=k8s/OU=system"
[root@k8smaster01.host.com:/data/ssl]# openssl x509 -req -in harbor.iot.com.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out harbor.iot.com.pem -days 3650
Signature ok
subject=/CN=harbor.iot.com/C=CN/ST=Beijing/L=Beijing/O=k8s/OU=system
Getting CA Private Key
[root@k8smaster01.host.com:/data/ssl]# ls
admin.csr             ca-config.json  ca.srl              harbor.iot.com.pem   kube-proxy.kubeconfig     metrics-server.csr       rancher.lowan.com.key  server.pem
admin-csr.json        ca.csr          hakubernetes.pem    kubeconfig.sh        kube-proxy.pem            metrics-server-csr.json  rancher.lowan.com.pem  token.csv
admin-key.pem         ca-csr.json     harancher.pem       kube-proxy.csr       kubernetes.lowan.com.csr  metrics-server-key.pem   server.csr
admin.pem             ca-key.pem      harbor.iot.com.csr  kube-proxy-csr.json  kubernetes.lowan.com.key  metrics-server.pem       server-csr.json
bootstrap.kubeconfig  ca.pem          harbor.iot.com.key  kube-proxy-key.pem   kubernetes.lowan.com.pem  rancher.lowan.com.csr    server-key.pem

[root@k8smaster01.host.com:/data/ssl]# scp harbor.iot.com.pem harbor.iot.com.key lb02:/etc/cert/
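Before wiring the certificate into Harbor, it can be sanity-checked against the CA that signed it (an optional step, using the files generated above):

[root@k8smaster01.host.com:/data/ssl]# openssl x509 -in harbor.iot.com.pem -noout -subject -dates
[root@k8smaster01.host.com:/data/ssl]# openssl verify -CAfile ca.pem harbor.iot.com.pem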

2.2.3 Edit the configuration file
[root@lb02.host.com:/data/harbor]# vim harbor.yml
After editing, the effective (non-comment) settings should look as follows:
grep -vE "^#|^  #|^    #" harbor.yml

hostname: harbor.iot.com

http:
  port: 180

https:
  port: 1443
  certificate: /etc/cert/harbor.iot.com.pem
  private_key: /etc/cert/harbor.iot.com.key

harbor_admin_password: appleMysql

database:
  password: appleMysql%
  max_idle_conns: 50
  max_open_conns: 1000

data_volume: /data/harbor/data

clair:
  updaters_interval: 12

trivy:
  ignore_unfixed: false
  skip_update: false
  insecure: false

jobservice:
  max_job_workers: 10

notification:
  webhook_job_max_retry: 10

chart:
  absolute_url: disabled

log:
  level: info
  local:
    rotate_count: 50
    rotate_size: 2000M
    location: /data/harbor/logs


_version: 2.0.0

proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - clair
    - trivy

Note: if HTTPS is not used, an error like the following may be reported
ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
The fix is to comment out the https settings:
# https related config
# https:
  # # https port for harbor, default is 443
  # port: 443
  # # The path of cert and key files for nginx
  # certificate: /your/certificate/path
  # private_key: /your/private/key/path
:x to save and exit

[root@lb02.host.com:/data/harbor]# bash install.sh
----Harbor has been installed and started successfully.----
[root@lb02.host.com:/data/harbor]# docker-compose ps
      Name                     Command                       State                     Ports          
------------------------------------------------------------------------------------------------------
harbor-core         /harbor/entrypoint.sh            Up (health: starting)                            
harbor-db           /docker-entrypoint.sh            Up (healthy)            5432/tcp                 
harbor-jobservice   /harbor/entrypoint.sh            Up (health: starting)                            
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)            127.0.0.1:1514->10514/tcp
harbor-portal       nginx -g daemon off;             Up (healthy)            8080/tcp                 
nginx               nginx -g daemon off;             Up (health: starting)   0.0.0.0:180->8080/tcp    
redis               redis-server /etc/redis.conf     Up (healthy)            6379/tcp                 
registry            /home/harbor/entrypoint.sh       Up (healthy)            5000/tcp                 
registryctl         /home/harbor/start.sh            Up (healthy)                                      
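As an optional check, the health endpoint of the Harbor v2 API can be probed directly (the address assumes lb02 and the HTTPS port configured above):

[root@lb02.host.com:/data/harbor]# curl -k https://192.168.13.98:1443/api/v2.0/health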
2.2.4 Add the DNS record
[root@lb03.host.com:/root]# vim /var/named/iot.com.zone 
$ORIGIN iot.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.iot.com.   dnsadmin.iot.com. (
                                        2020101002        ; serial
                                        10800             ; refresh (3 hours)
                                        900               ; retry (15 minutes)
                                        604800            ; expire (1 week)
                                        86400 )           ; minimum (1 day)
        NS      dns.iot.com.
$TTL 60         ; 1 minute
dns             A       192.168.13.99
harbor          A       192.168.13.98

:x to save and exit

[root@lb03.host.com:/root]# systemctl restart named
[root@lb03.host.com:/root]# dig -t A  harbor.iot.com @192.168.13.99 +short
192.168.13.98

2.2.5 Configure the HTTPS reverse proxy
[root@lb03.host.com:/data/nginx/conf]# vim nginx.conf 
Add:
    server {
        listen       443 ssl http2;
        server_name  harbor.iot.com;

     #   ssl_certificate      "certs/harbor.iot.com.pem";
     #   ssl_certificate_key  "certs/harbor.iot.com.key";

        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
        ssl_prefer_server_ciphers  on;

        location / {
            proxy_pass https://harbor_registry;
            client_max_body_size  2048m;
            proxy_set_header Host   $host:$server_port;
            proxy_set_header    X-Real-IP      $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
Make the same change on lb01 and lb02, then restart nginx.
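The proxy_pass directive above refers to an upstream named harbor_registry that is not shown in the original file, and the commented-out ssl_certificate lines assume the certificate paths are already set at the http level. A minimal sketch of the missing upstream, assuming Harbor runs on lb02 (192.168.13.98) with the HTTPS port 1443 configured earlier:

    upstream harbor_registry {
        server 192.168.13.98:1443;
    }

After reloading nginx, docker login harbor.iot.com should succeed from any node that trusts the signing CA.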

3. Deploying NFS

3.1 Deploy the server
[root@lb01.host.com:/root]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb               8:16   0 232.9G  0 disk 
├─sdb2            8:18   0    10G  0 part /boot
├─sdb3            8:19   0 222.4G  0 part 
│ ├─centos-swap 253:1    0    24G  0 lvm  [SWAP]
│ └─centos-root 253:0    0 198.4G  0 lvm  /
└─sdb1            8:17   0   512M  0 part /boot/efi
sda               8:0    0 931.5G  0 disk 
└─sda1            8:1    0 931.5G  0 part 
[root@lb01.host.com:/root]# mkfs.xfs /dev/sda -f
meta-data=/dev/sda               isize=512    agcount=4, agsize=61047668 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=244190672, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=119233, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@lb01.host.com:/root]# partprobe
[root@lb01.host.com:/root]# blkid /dev/sda >> /etc/fstab
[root@lb01.host.com:/root]# vim /etc/fstab 
UUID="2277df52-d5c9-41bb-aa1d-879f483a533e"     /nfs    xfs     defaults        0 0

[root@lb01.host.com:/root]# mkdir /nfs
[root@lb01.host.com:/root]# mount -a
[root@lb01.host.com:/root]# systemctl start nfs-server
[root@lb01.host.com:/root]# systemctl enable nfs-server
[root@lb01.host.com:/root]# systemctl status nfs-server

[root@lb01.host.com:/root]# vim /etc/exports
Add the following line
/nfs         192.168.13.100/27(rw,sync,no_root_squash)

//Start NFS
[root@lb01.host.com:/root]# systemctl restart  nfs-server
[root@lb01.host.com:/root]# exportfs -avr
exporting 192.168.13.100/27:/nfs


3.2 Deploy the clients
Clients: (perform on all clients; k8smaster02 is used as the example)
[root@k8smaster02.host.com:/root]# showmount -e lb01
Export list for lb01:
/nfs 192.168.13.100/27

[root@k8smaster02.host.com:/root]# systemctl restart nfs-server 
[root@k8smaster02.host.com:/root]# systemctl enable nfs-server
[root@k8smaster02.host.com:/root]# systemctl status nfs-server


[root@k8smaster02.host.com:/root]# mkdir /nfs

[root@k8smaster02.host.com:/root]# vim /etc/fstab
Add the following line
lb01:/nfs       /nfs    nfs     defaults        0 0

:x to save and exit
[root@k8smaster02.host.com:/root]# mount -a

[root@k8smaster02.host.com:/root]# df -Th
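A quick read/write check (the file name below is only an example) confirms the share is writable from the client and visible on the server:

[root@k8smaster02.host.com:/root]# touch /nfs/test-from-k8smaster02
[root@lb01.host.com:/root]# ls -l /nfs/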

4. Deploying load balancing and reverse proxying

4.1 Deploy pcs + pacemaker + corosync
4.1.1 Install pcs, pacemaker, and corosync

Install pcs, pacemaker, and corosync. Pacemaker is the resource manager; corosync provides cluster membership and the heartbeat layer.

yum install -y lvm2 cifs-utils quota psmisc pcs pacemaker corosync fence-agents-all resource-agents crmsh

systemctl enable pcsd corosync
systemctl start pcsd && systemctl status pcsd
4.1.2 Set the cluster password

Note: the password must be identical on all three nodes: apple#Pcs

echo "apple#Pcs" |passwd --stdin hacluster
4.1.3 Create corosync.conf on the nodes
cat <<EOF>/etc/corosync/corosync.conf
totem {
        version: 2
        secauth: off
        cluster_name: lb-cluster
        transport: udpu
}

nodelist {
        node {
                ring0_addr: lb01
                nodeid: 1
        }
        node {
                ring0_addr: lb02
                nodeid: 2
        }
        node {
                ring0_addr: lb03
                nodeid: 3
        }
}

logging {
        to_logfile: yes
        logfile: /var/log/cluster/corosync.log
        to_syslog: yes
        debug: off
}

quorum {
        provider: corosync_votequorum
}

EOF

//Distribute to the other nodes
scp /etc/corosync/corosync.conf lb02:/etc/corosync/
scp /etc/corosync/corosync.conf lb03:/etc/corosync/
4.1.4 Set up SSH mutual authentication between the cluster nodes
ssh-keygen
ssh-copy-id lb01
ssh-copy-id lb02
ssh-copy-id lb03
4.1.5 Authenticate the nodes
pcs cluster auth lb01 lb02 lb03 -u hacluster -p"apple#Pcs"
4.1.6 Create the cluster
pcs cluster setup --force  --name lb-cluster lb01 lb02 lb03
4.1.7 Start the cluster and check its status
pcs cluster enable --all
pcs cluster start --all
pcs cluster status
ps aux | grep pacemaker
4.1.8 Verify corosync status
(running crm_verify -L -V at this point will report errors until STONITH is disabled below)
corosync-cfgtool -s
corosync-cmapctl | grep members
pcs status corosync
crm_verify -L -V
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
crm_verify -L -V

pcs property set pe-warn-series-max=1000 pe-input-series-max=1000 pe-error-series-max=1000

pcs property set cluster-recheck-interval=5 
4.1.9 Create the cluster VIP
[root@lb01.host.com:/root]# pcs resource create lbvip ocf:heartbeat:IPaddr2 ip=192.168.13.100 cidr_netmask=32 op monitor interval=10s timeout=30s on-fail=restart


[root@lb01.host.com:/root]# crm_mon -1
Stack: corosync
Current DC: lb01 (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
Last updated: Mon Sep  7 09:36:33 2020
Last change: Mon Sep  7 09:36:24 2020 by root via cibadmin on lb01

3 nodes configured
1 resource configured

Online: [ lb01 lb02 lb03 ]

Active resources:

 lbvip	(ocf::heartbeat:IPaddr2):	Started lb01
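To confirm that the VIP fails over (an optional check; standby/unstandby is the non-destructive way to move a resource), put the node currently running lbvip into standby, watch the resource move, then bring the node back:

[root@lb01.host.com:/root]# pcs cluster standby lb01
[root@lb01.host.com:/root]# crm_mon -1
[root@lb01.host.com:/root]# pcs cluster unstandby lb01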


4.2 Deploy HAProxy
4.2.1 Install HAProxy

Install HAProxy on all LB nodes and keep the configuration identical.

[root@lvs01:/root]# yum -y install haproxy
Configure HAProxy
Define the haproxy resource
pcs resource create haproxy systemd:haproxy op monitor interval="10s"

Require the running HAProxy and the VIP to be on the same node (colocation constraint):
pcs constraint colocation add lbvip haproxy INFINITY
Define an ordering constraint so that the VIP starts before HAProxy:
pcs constraint order lbvip then haproxy
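The constraints and resource placement can be verified afterwards (output will vary with cluster state):

pcs constraint show
pcs status resources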

[root@lb01.host.com:/root]# vim /etc/rsyslog.d/haproxy.conf

## HAProxy log configuration

$ModLoad imudp

$UDPServerRun 514

$template Haproxy,"%msg%n"

local3.info -/var/log/haproxy.log;Haproxy

local3.notice -/var/log/haproxy-status.log;Haproxy

local3.*~



[root@lb01:/root]# scp /etc/rsyslog.d/haproxy.conf lb02:/etc/rsyslog.d/
[root@lb01:/root]# scp /etc/rsyslog.d/haproxy.conf lb03:/etc/rsyslog.d/
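rsyslog must be restarted on each LB node for the snippet above to take effect (this assumes the imudp module is not already loaded by another rule):

systemctl restart rsyslog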

//Start
systemctl enable haproxy.service && systemctl restart haproxy.service
4.2.2 Configure the haproxy configuration file
[root@lb01.host.com:/root]# vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local3

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     500000
    user        haproxy
    group       haproxy
    daemon
    tune.ssl.default-dh-param 2048
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
    spread-checks       3
    tune.bufsize        32768
    tune.maxrewrite     1024
    tune.ssl.default-dh-param   2048

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  tcplog
    option                  splice-auto
    option http-server-close
#    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
#retries defines the number of failed connection attempts to a backend server; once exceeded, the server is marked unavailable.
    timeout http-request    60s
    timeout queue           1m
    timeout connect         50000ms
    timeout client          50000ms
    timeout server          50000ms
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 500000
#mode syntax: mode {http|tcp|health}; http is layer-7 mode, tcp is layer-4 mode, health is health-check mode.
listen admin_stats
        bind    0.0.0.0:8789    #IP and port to listen on
        mode    http
        stats   enable
        stats   uri /
        stats   realm Haproxy\ Statistics
        stats   auth admin:appleHaproxy
        stats   refresh 15s             #stats page auto-refresh interval
        stats   show-node
        stats   show-legends
        stats   hide-version    #hide the HAProxy version
        stats   admin if TRUE   #admin mode: after authenticating, nodes can be managed via the web UI


listen k8smaster_cluster
        bind    192.168.13.100:7443
#       http-request    set-header      X-Forwarded-Proto       https if { ssl_fc }
        mode    tcp
#       option  tcpka
#       option  tcplog
        balance roundrobin
        server  k8smaster01 192.168.13.101:6443 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1
        server  k8smaster02 192.168.13.102:6443 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1
        server  k8smaster03 192.168.13.103:6443 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1

listen mysql_cluster
        bind    192.168.13.100:3306
        mode    tcp
        option  tcplog
        option  clitcpka
#       option  forwardfor
        timeout client 28801s
        timeout server 28801s
        server  k8smaster02 192.168.13.102:3306 check
listen p022_cluster
        bind    192.168.13.100:22210
        mode    tcp
        option  tcplog
        option  clitcpka
        timeout client 28801s
        timeout server 28801s
        server  k8sworker01 192.168.13.105:22210 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1
        server  k8sworker02 192.168.13.106:22210 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1
        server  k8sworker03 192.168.13.107:22210 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1
        server  k8sworker04 192.168.13.108:22210 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1

listen p021_cluster_1883
        bind    192.168.13.100:1883
        mode    tcp
        option  tcplog
        option  clitcpka
        timeout client 28801s
        timeout server 28801s
        server  k8sworker01 192.168.13.105:1883 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1
        server  k8sworker02 192.168.13.106:1883 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1
        server  k8sworker03 192.168.13.107:1883 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1
        server  k8sworker04 192.168.13.108:1883 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1


listen p021_cluster_8083
        bind    192.168.13.100:8083
        mode    tcp
        option  tcplog
        option  clitcpka
        timeout client 28801s
        timeout server 28801s
        server  k8sworker01 192.168.13.105:8083 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1
        server  k8sworker02 192.168.13.106:8083 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1
        server  k8sworker03 192.168.13.107:8083 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1
        server  k8sworker04 192.168.13.108:8083 check inter 20s fastinter 2s downinter 2s rise 3 fall 3 weight 1


listen  zookeeper_health
        bind    *:2181
        mode    health
        option  tcplog
        option  clitcpka
        timeout client 28801s
        timeout server 28801s
        balance roundrobin
        server  k8smaster01 192.168.13.101:2181 check
        server  k8smaster02 192.168.13.102:2181 check
        server  k8smaster03 192.168.13.103:2181 check

listen  kafka_health
        bind    *:9092
        mode    health
        option  tcplog
        option  clitcpka
        timeout client 28801s
        timeout server 28801s
        balance roundrobin
        server  k8smaster01 192.168.13.101:9092 check
        server  k8smaster02 192.168.13.102:9092 check
        server  k8smaster03 192.168.13.103:9092 check


frontend k8straefik_cluster
        bind *:49999                    #listen on port 49999
       # bind *:443
        mode http
        maxconn 100000
        log global
        option httplog
        option httpclose                #actively close the HTTP connection after each request
        default_backend traefik-server  #forward requests to the traefik-server backend below

backend traefik-server
        mode http
        balance roundrobin
        cookie SERVERID insert indirect nocache
        #option httpchk GET /index.html
        option httpclose
        option forwardfor
        timeout server  15s
        timeout connect 15s
        server  k8sworker01 192.168.13.105:81 weight 1 cookie 5 check inter 5000 rise 2 fall 3
        server  k8sworker02 192.168.13.106:81 weight 1 cookie 6 check inter 5000 rise 2 fall 3
        server  k8sworker03 192.168.13.107:81 weight 1 cookie 7 check inter 5000 rise 2 fall 3
        server  k8sworker04 192.168.13.108:81 weight 1 cookie 8 check inter 5000 rise 2 fall 3
        server  k8smaster01 192.168.13.101:81 weight 1 cookie 1 check inter 5000 rise 2 fall 3
        server  k8smaster02 192.168.13.102:81 weight 1 cookie 2 check inter 5000 rise 2 fall 3
        server  k8smaster03 192.168.13.103:81 weight 1 cookie 3 check inter 5000 rise 2 fall 3

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
#frontend  main *:5000
#    acl url_static       path_beg       -i /static /images /javascript /stylesheets
#    acl url_static       path_end       -i .jpg .gif .png .css .js

#    use_backend static          if url_static
#    default_backend             app


#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
#backend static
#    balance     roundrobin
#    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
#backend app
#    balance     roundrobin
#    server  app1 127.0.0.1:5001 check
#    server  app2 127.0.0.1:5002 check
#    server  app3 127.0.0.1:5003 check
#    server  app4 127.0.0.1:5004 check

//Save

[root@lb01.host.com:/root]# scp /etc/haproxy/haproxy.cfg lb02:/etc/haproxy/
[root@lb01.host.com:/root]# scp /etc/haproxy/haproxy.cfg lb03:/etc/haproxy/

systemctl restart haproxy.service
systemctl status haproxy
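The configuration file can also be validated on each node before restarting (it should report that the configuration is valid):

haproxy -c -f /etc/haproxy/haproxy.cfg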

4.2.3 Verify the HAProxy statistics page

URL: http://192.168.13.100:8789/

Username: admin, password: appleHaproxy

5. Deploying nginx
5.1 Prepare the environment
wget -c http://nginx.org/download/nginx-1.19.1.tar.gz
yum -y install gcc zlib zlib-devel pcre-devel openssl openssl-devel
mkdir -p /data/nginx/logs && cd /data/nginx
5.2 Install nginx
 tar -xf /opt/src/nginx-1.19.1.tar.gz -C /data/nginx/   
 cd nginx-1.19.1/
 Compile:
 ./configure --prefix=/data/nginx --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_addition_module --with-http_flv_module --with-http_gzip_static_module --with-http_sub_module --with-http_dav_module --with-http_v2_module
 make && make install
 echo 'export PATH=/data/nginx/sbin:$PATH' >> /etc/profile
 source /etc/profile
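The compiled-in modules can be confirmed after installation (an optional check):

nginx -V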
5.3 Start nginx and edit the configuration
Start
nginx


mkdir /data/nginx/conf/conf.d
Edit the configuration file
vim /data/nginx/conf/nginx.conf
Inside the http block, on the line after default_type, add:
include /data/nginx/conf/conf.d/*.conf;

Reload
nginx -t
nginx -s reload
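Any server blocks dropped into conf.d are now picked up on reload. A minimal illustrative example (the file name, port, and server_name below are placeholders, not part of this deployment):

vim /data/nginx/conf/conf.d/example.conf
server {
    listen 8080;
    server_name example.host.com;
    location / {
        return 200 "ok\n";
    }
}

nginx -t
nginx -s reload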

[root@inweb:/etc/systemd/system]# vim nginx.service

[Unit]
Description=nginx - high performance web server
Documentation=http://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf
ExecReload=/data/nginx/sbin/nginx -s reload
ExecStop=/data/nginx/sbin/nginx -s stop
PrivateTmp=true
Restart=on-failure

[Install]
WantedBy=multi-user.target
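With the unit file in place, nginx can be managed through systemd (if an instance was started manually earlier, stop it first with nginx -s stop):

systemctl daemon-reload
systemctl enable nginx
systemctl restart nginx
systemctl status nginx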

Follow-up references (cluster series):

01 Kubernetes binary deployment
02 Kubernetes auxiliary environment setup
03 K8S cluster network ACL rules
04 Ceph cluster deployment
05 Deploying the ZooKeeper and Kafka clusters
06 Deploying the logging system
07 Deploying InfluxDB-Telegraf
08 Deploying Jenkins
09 Deploying k3s and Helm-Rancher
10 Deploying Maven
