Deploying Kubernetes from binaries with kubeasz

Contents

Preparation

1. Bind IPs, set hostnames, sync time, and tune kernel parameters on all machines (optional; kubeasz applies the optimizations itself)

2. Add name resolution for: master1, master2, master3, node1, etcd1, etcd2, etcd3, harbor

3. Set a unified root password on all machines, in preparation for distributing the ssh public key

ha1 (ubuntu)

ha2 (stop keepalived on ha1 to verify the VIP fails over to the backup node)

harbor (https with a self-signed cert)

k8s-master1 (ubuntu)

Deploying the k8s cluster with ansible (https://github.com/easzlab/kubeasz)

Adding a master node

Adding a node

Upgrading k8s (from the current 1.17.4 to 1.17.17; do not jump across major versions)

Upgrading with easzctl

Deploying the dashboard

Configuring DNS resolution

Configuring kube-dns (soon to be replaced by CoreDNS) and CoreDNS

Testing with busybox

Configuring CoreDNS (kube-dns must be installed first, since its clusterIP and other parameters are reused)


Role              Hostname           IP address
harbor, ansible   harbor.zzj.com     10.0.0.3
ha1               ha1.zzj.com        10.0.0.4
ha2               ha2.zzj.com        10.0.0.5
etcd1             etcd1.zzj.com      10.0.0.6
etcd2             etcd2.zzj.com      10.0.0.7
etcd3             etcd3.zzj.com      10.0.0.8
master1           master1.zzj.com    10.0.0.9
master2           master2.zzj.com    10.0.0.10
master3           master3.zzj.com    10.0.0.11
node1             node1.zzj.com      10.0.0.12
node2             node2.zzj.com      10.0.0.13
node3             node3.zzj.com      10.0.0.14

Preparation

1. Bind IPs, set hostnames, sync time, and tune kernel parameters on all machines (optional; kubeasz applies the optimizations itself)

1. # Ubuntu 18.04 manages IPs with netplan
vim /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    ens33:
      dhcp4: no
      addresses: [10.0.0.3/24]
      gateway4: 10.0.0.2
      nameservers:
        addresses: [8.8.8.8]
  version: 2
# After editing, apply with: netplan apply

2. # Change the hostname
vim /etc/hostname
# Reboot
reboot

3. Sync the time
ntpdate time1.aliyun.com && hwclock -w

4. Kernel tuning (optional when using kubeasz)
cat <<EOF > /etc/sysctl.d/k8s.conf
# Required for cross-host pod traffic: enables IP forwarding and bridge traffic filtering
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
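
Note: the net.bridge.* keys above only exist once the br_netfilter module is loaded; if sysctl --system complains about them, load the module first and make it persistent (standard procedure on Ubuntu 18.04):
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system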

apt install ipset ipvsadm wget  -y

cat <<EOF >> /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
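
These limits apply to new login sessions only; after re-logging in, a quick check confirms them:
ulimit -n    # expect 655360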

2. Add name resolution (/etc/hosts) for: master1, master2, master3, node1, etcd1, etcd2, etcd3, harbor

127.0.0.1 localhost
127.0.1.1 zzj

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.0.0.3 harbor.zzj.com
10.0.0.4 ha1.zzj.com
10.0.0.5 ha2.zzj.com
10.0.0.6 etcd1.zzj.com
10.0.0.7 etcd2.zzj.com
10.0.0.8 etcd3.zzj.com
10.0.0.9 master1.zzj.com
10.0.0.10 master2.zzj.com
10.0.0.11 master3.zzj.com
10.0.0.12 node1.zzj.com
10.0.0.13 node2.zzj.com
10.0.0.14 node3.zzj.com
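
The same hosts file must exist on every machine; once the ssh key is distributed (step 3 below), it can be pushed in one loop, sketched here assuming root ssh access to every host in the table:
for ip in 10.0.0.{4..14}; do scp /etc/hosts root@$ip:/etc/hosts; done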

3. Set a unified root password on all machines, in preparation for distributing the ssh public key

passwd root
Password: root

sudo vi /etc/ssh/sshd_config
Set the PermitRootLogin parameter to yes
Restart the ssh service
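
The last two steps can be scripted; a minimal sketch, assuming the stock sshd_config on Ubuntu 18.04:
sed -ri 's/^#?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart ssh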

ha1 (ubuntu)

apt-get install haproxy keepalived
# Create the virtual IP with keepalived
1. find / -name 'keepalived.conf*'

2. cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf

3. vim /etc/keepalived/keepalived.conf with the following content:
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    garp_master_delay 10
    smtp_alert
    virtual_router_id 88
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        # optional label. should be of the form "realdev:sometext" for
        # compatibility with ifconfig.
        10.0.0.200 dev ens33 label ens33:1
        10.0.0.201 dev ens33 label ens33:2
    }
}
4. systemctl restart keepalived && systemctl enable keepalived
5. ifconfig
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.200  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:50:56:35:63:a9  txqueuelen 1000  (Ethernet)

ens33:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.201  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:50:56:35:63:a9  txqueuelen 1000  (Ethernet)

# Configure haproxy
1. vim /etc/haproxy/haproxy.cfg and append:
# Add a status page
listen stats
        mode http
        bind 0.0.0.0:9999
        stats enable
        log global
        stats uri /haproxy-status
        stats auth haadmin:123456

listen k8s-api-6443
        bind 10.0.0.200:6443
        mode tcp
        server 10.0.0.9 10.0.0.9:6443 check inter 3s fall 3 rise 5
        #server 10.0.0.10 10.0.0.10:6443 check inter 3s fall 3 rise 5
        #server 10.0.0.11 10.0.0.11:6443 check inter 3s fall 3 rise 5
2. systemctl restart haproxy && systemctl enable haproxy
3. ss -ntl | grep 6443
LISTEN   0         2000             10.0.0.200:6443             0.0.0.0:*
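
The status page configured above can be checked with curl from any machine, using the credentials from the stats auth line:
curl -u haadmin:123456 http://10.0.0.4:9999/haproxy-status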

ha2 (stop keepalived on ha1 to verify the VIP fails over to the backup node)

apt-get install haproxy keepalived
# Create the virtual IP with keepalived
1. find / -name 'keepalived.conf*'

2. cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf

3. vim /etc/keepalived/keepalived.conf with the following content:
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    garp_master_delay 10
    smtp_alert
    virtual_router_id 88
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        # optional label. should be of the form "realdev:sometext" for
        # compatibility with ifconfig.
        10.0.0.200 dev ens33 label ens33:1
        10.0.0.201 dev ens33 label ens33:2
    }
}
4. systemctl restart keepalived && systemctl enable keepalived
5. ifconfig (with keepalived stopped on ha1, the VIPs appear here)
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.200  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:50:56:35:63:a9  txqueuelen 1000  (Ethernet)

ens33:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.201  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:50:56:35:63:a9  txqueuelen 1000  (Ethernet)

# Configure haproxy
1. vim /etc/haproxy/haproxy.cfg and append:
# Add a status page
listen stats
        mode http
        bind 0.0.0.0:9999
        stats enable
        log global
        stats uri /haproxy-status
        stats auth haadmin:123456

listen k8s-api-6443
        bind 10.0.0.200:6443
        mode tcp
        server 10.0.0.9 10.0.0.9:6443 check inter 3s fall 3 rise 5
        #server 10.0.0.10 10.0.0.10:6443 check inter 3s fall 3 rise 5
        #server 10.0.0.11 10.0.0.11:6443 check inter 3s fall 3 rise 5
2. systemctl restart haproxy && systemctl enable haproxy
3. ss -ntl | grep 6443
LISTEN   0         2000             10.0.0.200:6443             0.0.0.0:*
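
A quick failover check, per the heading above (run the first command on ha1, the second on ha2):
root@ha1:~# systemctl stop keepalived
root@ha2:~# ip addr show ens33    # 10.0.0.200 and 10.0.0.201 should now be listed here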

harbor (https with a self-signed cert)

# Install docker and docker-compose
Script: see earlier notes (a sketch follows).
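
Since that script is not reproduced in this post, here is a minimal sketch for Ubuntu 18.04; the docker.io package and the docker-compose version/URL are assumptions, not the author's exact script:
apt-get install -y docker.io
curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose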
I. # Configure harbor.cfg
root@harbor:/usr/local/src# cd /usr/local/src/

root@harbor:/usr/local/src# tar -zxf harbor-offline-installer-v1.7.6.tgz

root@harbor:/usr/local/src# cd harbor

root@harbor:/usr/local/src/harbor# vim harbor.cfg
    hostname = harbor.zzj.com
    ui_url_protocol = https
    ssl_cert = /usr/local/src/harbor/certs/harbor-ca.crt
    ssl_cert_key = /usr/local/src/harbor/certs/harbor-ca.key
    harbor_admin_password = 123456

II. # Generate a self-signed certificate
root@harbor:/usr/local/src/harbor# mkdir certs
# Generate the private key
root@harbor:/usr/local/src/harbor# openssl genrsa -out /usr/local/src/harbor/certs/harbor-ca.key 2048
# Issue the certificate
root@harbor:/usr/local/src/harbor# openssl req -x509 -new -nodes -key /usr/local/src/harbor/certs/harbor-ca.key -subj "/CN=harbor.zzj.com" -days 7120 -out /usr/local/src/harbor/certs/harbor-ca.crt
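
Before installing, the certificate can be sanity-checked with standard openssl tooling:
openssl x509 -in /usr/local/src/harbor/certs/harbor-ca.crt -noout -subject -dates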

root@harbor:/usr/local/src/harbor# ./install.sh

III. Copy the certificate to remote machines and test (master1 shown; repeat on every master and node)
# Sync harbor's crt certificate
1. root@master1:/home/zzj# mkdir /etc/docker/certs.d/harbor.zzj.com -p

2. Copy the certificate created on the harbor server into that directory
root@harbor:/usr/local/src/harbor# scp /usr/local/src/harbor/certs/harbor-ca.crt zzj@10.0.0.9:/home/zzj/
root@master1:/home/zzj# mv /home/zzj/harbor-ca.crt /etc/docker/certs.d/harbor.zzj.com/

3. Add a hosts-file entry
root@master1:/home/zzj# vim /etc/hosts
10.0.0.3 harbor.zzj.com
Restart docker: root@master1:/home/zzj# systemctl restart docker

IV. Test harbor from master1
root@master1:/etc/docker/certs.d/harbor.zzj.com# docker login harbor.zzj.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
V. Add a hosts entry on the Windows workstation
Edit C:\Windows\System32\drivers\etc\hosts:
10.0.0.3 harbor.zzj.com
Browse to: https://harbor.zzj.com

k8s-master1 (ubuntu)

I. Sync harbor's crt certificate
1. root@master1:/home/zzj# mkdir /etc/docker/certs.d/harbor.zzj.com -p

2. Copy the certificate created on the harbor server into that directory
root@etcd1:/usr/local/src/harbor# scp /usr/local/src/harbor/certs/harbor-ca.crt zzj@10.0.0.9:/home/zzj/
root@master1:/home/zzj# mv /home/zzj/harbor-ca.crt /etc/docker/certs.d/harbor.zzj.com/

3. Add a hosts-file entry
root@master1:/home/zzj# vim /etc/hosts
10.0.0.3 harbor.zzj.com
Restart docker: root@master1:/home/zzj# systemctl restart docker

II. Test harbor from master1
root@master1:/etc/docker/certs.d/harbor.zzj.com# docker login harbor.zzj.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Deploying the k8s cluster with ansible (https://github.com/easzlab/kubeasz)

I. Install python2.7 on the master, node, and etcd nodes
apt-get install python2.7
# Create a symlink
ln -s /usr/bin/python2.7 /usr/bin/python

II. Install ansible on the deploy host (harbor, per the table above)
# apt-get install git ansible -y
# ssh-keygen    # generate the key pair
# apt-get install sshpass
# Sync the ssh public key to every k8s server
Script: see earlier notes (a sketch follows).
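
Since the key-distribution script is not reproduced, here is a minimal sketch with sshpass; the IP list and the root password follow the host table and step 3 of the preparation:
for ip in 10.0.0.6 10.0.0.7 10.0.0.8 10.0.0.9 10.0.0.10 10.0.0.11 10.0.0.12 10.0.0.13 10.0.0.14; do
    sshpass -p root ssh-copy-id -o StrictHostKeyChecking=no root@$ip
done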

1. mv /etc/ansible/* /opt/
2. Extract the contents of ansible.zip into /etc/ansible/
3. Edit the inventory file hosts.multi-node
root@harbor:/etc/ansible# cp example/hosts.multi-node ./hosts
root@harbor:/etc/ansible# vim hosts
Content:
# 'etcd' cluster should have odd member(s) (1,3,5,...)
# variable 'NODE_NAME' is the distinct name of a member in 'etcd' cluster
[etcd]
10.0.0.6 NODE_NAME=etcd1

# master node(s)
[kube-master]
10.0.0.9

# work node(s)
[kube-node]
10.0.0.12

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'yes' to install a harbor server; 'no' to integrate with existed one
# 'SELF_SIGNED_CERT': 'no' you need put files of certificates named harbor.pem and harbor-key.pem in directory 'down'
[harbor]
#10.0.0.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no SELF_SIGNED_CERT=yes

# [optional] loadbalance for accessing k8s from outside
[ex-lb]
10.0.0.4 LB_ROLE=master EX_APISERVER_VIP=10.0.0.200 EX_APISERVER_PORT=6443
10.0.0.5 LB_ROLE=backup EX_APISERVER_VIP=10.0.0.200 EX_APISERVER_PORT=6443

# [optional] ntp server for the cluster
[chrony]
#10.0.0.1

[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="flannel"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="iptables"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.10.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.20.0.0/16"

# NodePort Range
NODE_PORT_RANGE="30000-60000"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local."

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/bin"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"


4. Verify the ansible installation
root@harbor:/etc/ansible# ansible all -m ping
10.0.0.9 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
10.0.0.12 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
10.0.0.6 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

# Begin the installation

1. root@harbor:/etc/ansible/playbooks# vim 01.prepare.yml   # comment out the roles handled elsewhere
#  - ex_lb
#  - chrony

2. Install pip2: root@harbor:/etc/ansible/playbooks# apt install python-pip

3. Run the playbooks in order (CONTAINER_RUNTIME="docker" in the inventory above, so run 03.docker.yml; 03.containerd.yml applies only to containerd clusters)
ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml

4. Pull the pause image and push it to harbor
root@harbor:/etc/ansible# docker pull mirrorgooglecontainers/pause-amd64:3.1

root@harbor:/etc/ansible# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
mirrorgooglecontainers/pause-amd64   3.1                 da86e6ba6ca1        3 years ago         742kB

root@harbor:/etc/ansible# docker tag mirrorgooglecontainers/pause-amd64:3.1 harbor.zzj.com/baseimage/pause-amd64:3.1

root@harbor:/etc/ansible# docker login harbor.zzj.com
root@harbor:/etc/ansible# docker push harbor.zzj.com/baseimage/pause-amd64:3.1
root@harbor:/etc/ansible# vim roles/kube-node/defaults/main.yml
Change the sandbox image address: SANDBOX_IMAGE: "harbor.zzj.com/baseimage/pause-amd64:3.1"

5. Deploy the node(s)
root@harbor:/etc/ansible# ansible-playbook 05.kube-node.yml
root@node1:/home/zzj# cat /etc/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStartPre=/bin/mount -o remount,rw '/sys/fs/cgroup'
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/hugetlb/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/memory/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/pids/system.slice/kubelet.service
ExecStart=/usr/bin/kubelet \
  --config=/var/lib/kubelet/config.yaml \
  --cni-bin-dir=/usr/bin \
  --cni-conf-dir=/etc/cni/net.d \
  --hostname-override=10.0.0.12 \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --network-plugin=cni \
  --pod-infra-container-image=harbor.zzj.com/baseimage/pause-amd64:3.1 \
  --root-dir=/var/lib/kubelet \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

root@node1:/home/zzj# cat /etc/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
# kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic; with --cluster-cidr or --masquerade-all set, kube-proxy SNATs requests to Service IPs
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/bin/kube-proxy \
  --bind-address=10.0.0.12 \
  --cluster-cidr=10.20.0.0/16 \
  --hostname-override=10.0.0.12 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --logtostderr=true \
  --proxy-mode=iptables
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

6. Deploy the network plugin (masters and nodes)
root@harbor:/etc/ansible# ansible-playbook 06.network.yml

7. Test
root@harbor:/etc/ansible# kubectl run net-test1 --image=alpine --replicas=4 sleep 360000

root@harbor:/etc/ansible# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP          NODE        NOMINATED NODE   READINESS GATES
net-test1-5fcc69db59-5j4kd   1/1     Running   0          3m22s   10.20.3.3   10.0.0.12   <none>           <none>
net-test1-5fcc69db59-cnqth   1/1     Running   0          3m22s   10.20.3.4   10.0.0.12   <none>           <none>
net-test1-5fcc69db59-l2rkb   1/1     Running   0          3m22s   10.20.3.2   10.0.0.12   <none>           <none>
net-test1-5fcc69db59-sgnn6   1/1     Running   0          3m22s   10.20.3.5   10.0.0.12   <none>           <none>
root@harbor:/etc/ansible# kubectl exec -it net-test1-5fcc69db59-5j4kd sh
/ # ping 223.6.6.6
PING 223.6.6.6 (223.6.6.6): 56 data bytes
64 bytes from 223.6.6.6: seq=0 ttl=127 time=8.084 ms
64 bytes from 223.6.6.6: seq=1 ttl=127 time=7.517 ms
64 bytes from 223.6.6.6: seq=2 ttl=127 time=6.599 ms
^C
--- 223.6.6.6 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 6.599/7.400/8.084 ms
/ # ping 10.20.3.4
PING 10.20.3.4 (10.20.3.4): 56 data bytes
64 bytes from 10.20.3.4: seq=0 ttl=64 time=0.093 ms
64 bytes from 10.20.3.4: seq=1 ttl=64 time=0.065 ms
^C
--- 10.20.3.4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.079/0.093 ms

Adding a master node

# Bind the IP
# Set the hostname
# Configure passwordless ssh (root remote login enabled)
1. Install the harbor certificate (see above)
scp harbor-ca.crt master2:/etc/docker/certs.d/harbor.zzj.com
2. Sync the ansible ssh key (see above)


3. Install python2.7
apt-get install python2.7
# 制作软链接
ln -s /usr/bin/python2.7 /usr/bin/python

4. Run
root@harbor:/usr/local/src/harbor/certs# chmod +x /etc/ansible/tools/easzctl
root@harbor:/usr/local/src/harbor/certs# easzctl add-master 10.0.0.10

Adding a node

# Bind the IP
# Set the hostname
1. Install the harbor certificate (see above)
scp harbor-ca.crt node2:/etc/docker/certs.d/harbor.zzj.com
2. Sync the ansible ssh key (see above)


3. Install python2.7
apt-get install python2.7
# 制作软链接
ln -s /usr/bin/python2.7 /usr/bin/python

4. Run
root@harbor:/usr/local/src/harbor/certs# chmod +x /etc/ansible/tools/easzctl
root@harbor:/usr/local/src/harbor/certs# easzctl add-node 10.0.0.13

root@harbor:/usr/local/src/harbor/certs# kubectl get nodes
NAME        STATUS                     ROLES    AGE     VERSION
10.0.0.10   Ready,SchedulingDisabled   master   5m18s   v1.17.4
10.0.0.12   Ready                      node     78m     v1.17.4
10.0.0.13   Ready                      node     52s     v1.17.4
10.0.0.9    Ready,SchedulingDisabled   master   86m     v1.17.4

Upgrading k8s (from the current 1.17.4 to 1.17.17; do not jump across major versions)

# Back up the old binaries
root@harbor:/etc/ansible# mkdir /opt/k8s-v1.17.4
root@harbor:/etc/ansible# cd /etc/ansible/bin
root@harbor:/etc/ansible/bin# cp kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler /opt/k8s-v1.17.4/

# Prepare the new binaries
root@harbor:/opt# mkdir /opt/k8s-v1.17.17
root@harbor:/opt# tar -zxf kubernetes-server-linux-amd64.tar.gz
root@harbor:/opt/kubernetes/server/bin# cp kubelet kubectl kube-apiserver kube-controller-manager kube-proxy kube-scheduler /opt/k8s-v1.17.17/

# Stop the services on master1, the host being upgraded (include kube-apiserver, since its binary is replaced too)
root@master1:/home/zzj# systemctl stop kube-apiserver kube-controller-manager kubelet kube-proxy kube-scheduler

# Replace the binaries
root@master1:/home/zzj# scp -r harbor:/opt/k8s-v1.17.17 ./
root@master1:/home/zzj# mv k8s-v1.17.17/* /usr/bin/

# Restart the master services
root@master1:/home/zzj# systemctl start kube-apiserver kube-controller-manager kubelet kube-proxy kube-scheduler

# Verify
root@master1:/home/zzj# kubectl get node
NAME        STATUS                     ROLES    AGE   VERSION
10.0.0.10   Ready,SchedulingDisabled   master   27h   v1.17.4
10.0.0.12   Ready                      node     20m   v1.17.4
10.0.0.13   Ready                      node     27h   v1.17.4
10.0.0.9    Ready,SchedulingDisabled   master   28h   v1.17.17

# The nodes follow the same procedure, but only kubelet, kubectl, and kube-proxy need replacing
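
Spelled out, the per-node steps implied above look roughly like this (run on each node, reusing the /opt/k8s-v1.17.17 directory prepared on harbor):
root@node1:~# systemctl stop kubelet kube-proxy
root@node1:~# scp harbor:/opt/k8s-v1.17.17/{kubelet,kubectl,kube-proxy} /usr/bin/
root@node1:~# systemctl start kubelet kube-proxy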

Upgrading with easzctl

root@harbor:/opt# mv /opt/k8s-v1.17.17/* /etc/ansible/bin/
root@harbor:/opt# easzctl upgrade

Deploying the dashboard

root@harbor:/usr/local/src# cd /etc/ansible/manifests/

root@harbor:/etc/ansible/manifests# mkdir dashboard-2.0.6

# Upload the yml files needed for the dashboard
root@harbor:/etc/ansible/manifests/dashboard-2.0.6# ls
admin-user.yml  dashboard-2.0.0-rc6.yml
# Edit the image addresses in the yml files (they can point at your own harbor registry)

root@harbor:/etc/ansible/manifests/dashboard-2.0.6# kubectl apply -f .
# This fails because the kubernetes-dashboard namespace does not exist yet:
Error from server (NotFound): namespaces "kubernetes-dashboard" not found
root@harbor:/etc/ansible/manifests/dashboard-2.0.6# kubectl create ns kubernetes-dashboard

# Run it again
root@harbor:/etc/ansible/manifests/dashboard-2.0.6# kubectl apply -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user unchanged
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
configmap/kubernetes-dashboard-settings unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
deployment.apps/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
deployment.apps/dashboard-metrics-scraper unchanged

# Check that everything is running
root@harbor:/etc/ansible/manifests/dashboard-2.0.6# kubectl get pod -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                net-test1-5fcc69db59-2x9t4                   1/1     Running   6          2d23h
default                net-test1-5fcc69db59-rvtsw                   1/1     Running   6          2d23h
default                net-test2-8456fd74f7-6ccnv                   1/1     Running   1          42h
default                net-test2-8456fd74f7-bkhjw                   1/1     Running   1          28h
default                net-test2-8456fd74f7-jb4zm                   1/1     Running   1          28h
default                net-test2-8456fd74f7-zmtvx                   1/1     Running   1          42h
kube-system            kube-flannel-ds-amd64-c4g6d                  1/1     Running   1          27h
kube-system            kube-flannel-ds-amd64-frps8                  1/1     Running   3          28h
kube-system            kube-flannel-ds-amd64-ksdfq                  1/1     Running   1          27h
kube-system            kube-flannel-ds-amd64-thsv6                  1/1     Running   1          27h
kubernetes-dashboard   dashboard-metrics-scraper-7b8b58dc8b-tg74n   1/1     Running   0          8m14s
kubernetes-dashboard   kubernetes-dashboard-5f5f847d57-dtkxk        1/1     Running   0          31s

# Find the dashboard port (NodePort 30002)
root@harbor:/etc/ansible/manifests/dashboard-2.0.6# kubectl get service -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
default                kubernetes                  ClusterIP   10.10.0.1      <none>        443/TCP         3d1h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.10.7.186    <none>        8000/TCP        9m3s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.10.184.59   <none>        443:30002/TCP   9m4s

# Access: https://10.0.0.12:30002/#/login
# Retrieve the token
root@node1:/home/zzj# kubectl get secret -A | grep admin
kubernetes-dashboard   admin-user-token-s76n8                           kubernetes.io/service-account-token   3      10m
root@node1:/home/zzj# kubectl describe secret admin-user-token-s76n8 -n kubernetes-dashboard
Name:         admin-user-token-s76n8
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: c63dccd6-527a-4dec-9e14-ad4f5136f4b6

Type:  kubernetes.io/service-account-token

Data
====
token:      
eyJhbGciOiJSUzI1NiIsImtpZCI6IlAtYWJKVmN4cGd3QnM0Z3lPNjg0R0lsTEN0SXFVMy15UWhhdXJybzhTY1EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXM3Nm44Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjNjNkY2NkNi01MjdhLTRkZWMtOWUxNC1hZDRmNTEzNmY0YjYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.mVJ8fwD6pbwQehudBJa4vldgI09s8AKoeEuuwg4gBNgD5EtNjRoIkK3GW8vQIawunykcqkmFbJCtlNtgoWNy66W0Qs6Ytd1FXS93cTeJbx1z9J_Jg7ybY_cCpmDDc3z8CRNSskW3FbsnsfAmUE6t3NvcqIfdwm3NDfNgT3n0syh6n42VrnAyjdp3Zb9ayhE9_A2LdVYuZstef7GXOx_VFiAGRmsfZ0vWUnwpdcFSudu_Y03dftzrnp_sfEslZY6LBgjDEZmd5o0KcJgEmKKtkuTUUyFG_-zbTXDWUqZpA5ahNl4Pymw4qyTDstbpweJdKq7yn8H29tPTkd_whXVPg
ca.crt:     1350 bytes
namespace:  20 bytes

# Extend the dashboard token lifetime (add the line - --token-ttl=43200, i.e. 12 hours)
root@harbor:/etc/ansible/manifests/dashboard-2.0.6# vim dashboard-2.0.0-rc6.yml
containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-rc6
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --token-ttl=43200
root@harbor:/etc/ansible/manifests/dashboard-2.0.6# kubectl apply -f .

Configuring DNS resolution

Configuring kube-dns (soon to be replaced by CoreDNS) and CoreDNS

  • Enter one of the running containers: external IPs ping fine, but domain names do not resolve
root@harbor:/etc/ansible/manifests/dashboard-2.0.6# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
net-test1-5fcc69db59-5vhzc   1/1     Running   0          11s   10.20.5.7    10.0.0.12   <none>           <none>
net-test1-5fcc69db59-69pkq   1/1     Running   0          11s   10.20.3.25   10.0.0.13   <none>           <none>
net-test1-5fcc69db59-r5jvf   1/1     Running   0          11s   10.20.3.26   10.0.0.13   <none>           <none>
root@harbor:/etc/ansible/manifests/dashboard-2.0.6# kubectl exec -it net-test1-5fcc69db59-5vhzc sh
/ # ping 223.6.6.6
PING 223.6.6.6 (223.6.6.6): 56 data bytes
64 bytes from 223.6.6.6: seq=0 ttl=127 time=11.031 ms
64 bytes from 223.6.6.6: seq=1 ttl=127 time=10.610 ms
^C
--- 223.6.6.6 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 10.610/10.820/11.031 ms
/ # ping www.baidu.com
^C
/ # cat /etc/resolv.conf 
nameserver 10.10.0.2
search default.svc.k8s.zzj.com svc.k8s.zzj.com k8s.zzj.com
options ndots:5
  • Copy the dns manifests to /etc/ansible/manifests/dns
root@harbor:/etc/ansible/manifests/dns# ls
CoreDNS  coredns-linux39.yml  deployment  kube-dns  kube-dns.yaml

root@harbor:/etc/ansible/manifests/dns# cd kube-dns

1. root@harbor:/etc/ansible/manifests/dns/kube-dns# vim kube-dns.yaml
The parts to adjust:
# Service clusterIP (the cluster DNS address)
  clusterIP: 10.10.0.2
# tcp/udp port settings
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
# Memory limit for the kube-dns pod
image: harbor.zzj.com/baseimage/k8s-dns-kube-dns-amd64:1.14.13
        resources:
          limits:
            memory: 512Mi
 - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
 - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV

# Domain suffix (must match CLUSTER_DNS_DOMAIN in /etc/ansible/hosts)
- --domain=cluster.local.
args:
- --server=/cluster.local/127.0.0.1#10053
# Format: /domain-to-match/dns-server-address#port — commonly used to point a company-internal zone at an in-house DNS server
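# For example, forwarding a (hypothetical) internal zone to an in-house DNS server would look like:
# - --server=/office.local/192.168.1.53#53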
# kube-dns: resolves service names
image: harbor.zzj.com/baseimage/k8s-dns-kube-dns-amd64:1.14.13
# dnsmasq: caches dns lookups to offload kube-dns and improve performance
image: harbor.zzj.com/baseimage/k8s-dns-dnsmasq-nanny-amd64:v1.14.13
# sidecar: periodically health-checks kube-dns and dnsmasq
image: harbor.zzj.com/baseimage/k8s-dns-sidecar-amd64:1.14.13

2. # Push the images to the local harbor (docker login failed on the harbor machine here, so the pushes were done from master1)
root@harbor:/etc/ansible/manifests/dns/kube-dns# docker load -i k8s-dns-kube-dns-amd64_1.14.13.tar.gz
root@harbor:/etc/ansible/manifests/dns/kube-dns# docker load -i k8s-dns-dnsmasq-nanny-amd64_1.14.13.tar.gz 
root@harbor:/etc/ansible/manifests/dns/kube-dns# docker load -i k8s-dns-sidecar-amd64_1.14.13.tar.gz

root@harbor:/etc/ansible/manifests/dns/kube-dns# docker tag gcr.io/google-containers/k8s-dns-kube-dns-amd64:1.14.13 harbor.zzj.com/baseimage/k8s-dns-kube-dns-amd64:1.14.13
root@harbor:/etc/ansible/manifests/dns/kube-dns# docker tag gcr.io/google-containers/k8s-dns-dnsmasq-nanny-amd64:1.14.13 harbor.zzj.com/baseimage/k8s-dns-dnsmasq-nanny-amd64:1.14.13
root@harbor:/etc/ansible/manifests/dns/kube-dns# docker tag gcr.io/google-containers/k8s-dns-sidecar-amd64:1.14.13 harbor.zzj.com/baseimage/k8s-dns-sidecar-amd64:1.14.13

root@master1:/home/zzj# docker login harbor.zzj.com
root@master1:/home/zzj# docker push harbor.zzj.com/baseimage/k8s-dns-sidecar-amd64:1.14.13
root@master1:/home/zzj# docker push harbor.zzj.com/baseimage/k8s-dns-kube-dns-amd64:1.14.13
root@master1:/home/zzj# docker push harbor.zzj.com/baseimage/k8s-dns-dnsmasq-nanny-amd64:1.14.13

3. root@harbor:/etc/ansible/manifests/dns/kube-dns# kubectl apply -f kube-dns.yaml
service/kube-dns unchanged
serviceaccount/kube-dns unchanged
configmap/kube-dns unchanged
deployment.apps/kube-dns created

4. root@harbor:/etc/ansible/manifests/dns/kube-dns# kubectl get pod -A
NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE
kube-system   kube-dns-b647ff6db-g2hv7      3/3     Running   3          3m38s
kube-system   kube-flannel-ds-amd64-8nhmh   1/1     Running   4          4d23h
kube-system   kube-flannel-ds-amd64-j5bzc   1/1     Running   4          4d23h
kube-system   kube-flannel-ds-amd64-rmhxq   1/1     Running   4          4d23h
kube-system   kube-flannel-ds-amd64-vs78q   1/1     Running   5          4d23h


5. Test
# ping an external domain
root@harbor:/etc/ansible/manifests/dns/kube-dns# kubectl exec -it net-test1-5fcc69db59-wfsxk sh
/ # cat /etc/resolv.conf
nameserver 10.10.0.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

/ # ping www.baidu.com
PING www.baidu.com (103.235.46.39): 56 data bytes
64 bytes from 103.235.46.39: seq=0 ttl=127 time=210.553 ms
64 bytes from 103.235.46.39: seq=1 ttl=127 time=211.345 ms


# ping a service name
root@harbor:/etc/ansible/manifests/dashboard-2.0.6# kubectl get service -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default                kubernetes                  ClusterIP   10.10.0.1       <none>        443/TCP         4d23h
kube-system            kube-dns                    ClusterIP   10.10.0.2       <none>        53/UDP,53/TCP   20m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.10.115.229   <none>        8000/TCP        41s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.10.94.170    <none>        443:30002/TCP   41s
root@harbor:/home/zzj# kubectl exec -it  net-test1-5fcc69db59-gpx6d sh
/ # ping kubernetes
PING kubernetes (10.10.0.1): 56 data bytes
64 bytes from 10.10.0.1: seq=0 ttl=127 time=5.469 ms
64 bytes from 10.10.0.1: seq=1 ttl=127 time=5.070 ms
# Note: dashboard-metrics-scraper and kubernetes-dashboard cannot be reached by short name from this pod because they live in a different namespace; use the FQDN (service.namespace.svc.cluster.local), as the busybox test below shows.

Testing with busybox

# Upload the image
root@master1:/home/zzj# docker load -i busybox-online.tar.gz

root@master1:/home/zzj# docker tag quay.io/prometheus/busybox:latest harbor.zzj.com/baseimage/busybox:latest

root@master1:/home/zzj# docker login harbor.zzj.com

root@master1:/home/zzj# docker push harbor.zzj.com/baseimage/busybox:latest

# Edit the yaml file
root@harbor:/etc/ansible/manifests/dns/kube-dns# vim busybox.yaml
- image: harbor.zzj.com/baseimage/busybox:latest
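
For reference, a minimal busybox.yaml along these lines works (the pod name busybox matches the kubectl exec commands below; the rest is a sketch, not the author's exact file):
cat <<EOF > busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: harbor.zzj.com/baseimage/busybox:latest
    command: ["sleep", "360000"]
EOF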

root@harbor:/etc/ansible/manifests/dns/kube-dns# kubectl apply -f busybox.yaml
# Test (note: if resolution fails, check that nameserver in the container's /etc/resolv.conf is 10.10.0.2, the kube-dns clusterIP)
root@harbor:/etc/ansible/manifests/dns/kube-dns# kubectl exec busybox nslookup dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local
Server:    10.10.0.2
Address 1: 10.10.0.2 kube-dns.kube-system.svc.cluster.local

Name:      dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local
Address 1: 10.10.91.20 dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local

Configuring CoreDNS (kube-dns must be installed first, since its clusterIP and other parameters are reused)

# The coredns deployment project provides deploy.sh
root@harbor:/etc/ansible/manifests/dns# git clone https://github.com.cnpmjs.org/coredns/deployment.git

root@harbor:/etc/ansible/manifests/dns# cd deployment/kubernetes
root@harbor:/etc/ansible/manifests/dns/deployment/kubernetes# ./deploy.sh > coredns.yml


root@harbor:/etc/ansible/manifests/dns/deployment/kubernetes# vim coredns.yml
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        # forward . /etc/resolv.conf {
        # max_concurrent 1000
        #}
        forward . 223.6.6.6
        cache 30
        loop
        reload
        loadbalance
    }
image: harbor.zzj.com/baseimage/coredns:1.8.6
# Raise the memory limit as generously as possible
resources:
          limits:
            memory: 512Mi
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.10.0.2

# Pull the image and push it to harbor
root@master1:/home/zzj# docker pull coredns/coredns:1.8.6
root@master1:/home/zzj# docker tag coredns/coredns:1.8.6 harbor.zzj.com/baseimage/coredns:1.8.6
root@master1:/home/zzj# docker push harbor.zzj.com/baseimage/coredns:1.8.6

# Remove kube-dns and install coredns
root@harbor:/etc/ansible/manifests/dns/deployment/kubernetes# kubectl delete -f /etc/ansible/manifests/dns/kube-dns/kube-dns.yaml

root@harbor:/etc/ansible/manifests/dns/deployment/kubernetes# kubectl apply -f /etc/ansible/manifests/dns/deployment/kubernetes/coredns.yml

# Test
root@harbor:/etc/ansible/manifests/dns/deployment/kubernetes# kubectl exec busybox nslookup dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local
Server:    10.10.0.2
Address 1: 10.10.0.2 kube-dns.kube-system.svc.cluster.local

Name:      dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local
Address 1: 10.10.91.20 dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local
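
As a final sanity check that the swap took effect (the generated coredns manifest keeps the k8s-app=kube-dns label for compatibility, so the old selector still matches):
root@harbor:~# kubectl get pod -n kube-system -l k8s-app=kube-dns
root@harbor:~# kubectl exec busybox nslookup www.baidu.com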

