Kubernetes Cluster Setup

Build a complete K8s cluster


#### Minikube installation
https://kubernetes.io/docs/setup/learning-environment/minikube/
#### kubeadm installation
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
#### Binary installation
https://github.com/kubernetes/kubernetes/releases

Detailed procedure:

Machine plan:

192.168.0.111   k8s-master1
192.168.0.112   k8s-master2
192.168.0.113   k8s-node1
192.168.0.114   k8s-node2
192.168.0.115   LB1
192.168.0.116   LB2

Environment preparation:

Disable the firewall on boot
[root@kube-node1 ~]#   systemctl disable firewalld.service 
Stop the firewall
[root@kube-node1 ~]#   systemctl stop firewalld.service 
Cron job to sync the time hourly:
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org > /dev/null 2>&1
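ntpdate is not present on a minimal CentOS 7 install (the yum/systemctl usage in this guide suggests CentOS 7; an assumption). A sketch of installing it and registering the job:
[root@kube-node1 ~]#   yum install -y ntpdate
[root@kube-node1 ~]#   (crontab -l 2>/dev/null; echo '0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org > /dev/null 2>&1') | crontab -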
Disable SELinux (takes effect after a reboot; run setenforce 0 to disable it immediately)
[root@kube-node1 ~]#   sed -i "s#SELINUX=enforcing#SELINUX=disabled#g" /etc/selinux/config  
[root@kube-node1 ~]#   sed -i "s#SELINUX=enforcing#SELINUX=disabled#g" /etc/sysconfig/selinux
Set the hostname
[root@kube-node1 ~]#   hostnamectl set-hostname kube-node1
Set the timezone and sync the clock
[root@kube-node1 ~]#   sudo timedatectl set-timezone Asia/Shanghai
[root@kube-node1 ~]#   date
[root@kube-node1 ~]#   sudo timedatectl set-local-rtc 0
[root@kube-node1 ~]#   sudo systemctl restart rsyslog
[root@kube-node1 ~]#   sudo systemctl restart crond
Turn off swap
[root@kube-node1 ~]#   sudo swapoff -a
[root@kube-node1 ~]#   free -m
Disable swap on boot (comment out the swap line in /etc/fstab)
[root@kube-node1 ~]#   sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Add the hosts to /etc/hosts (the entries must match the machine plan above):
[root@kube-node1 ~]#  cat >> /etc/hosts << EOF
192.168.0.111    k8s-master1
192.168.0.112    k8s-master2
192.168.0.113    k8s-node1
192.168.0.114    k8s-node2
EOF
etcd binary package download:
https://github.com/etcd-io/etcd/releases
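A sketch of fetching the etcd binaries and laying out the directories used below (the version is an assumption; any recent v3.3/v3.4 release uses the same asset naming):
[root@k8s-master1 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
[root@k8s-master1 ~]# tar -zxvf etcd-v3.3.18-linux-amd64.tar.gz
[root@k8s-master1 ~]# mkdir -p /opt/etcd/{bin,cfg,ssl}
[root@k8s-master1 ~]# cp etcd-v3.3.18-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/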
 

Install the etcd cluster

Generate the etcd SSL certificates
Files required for certificate generation:
[root@k8s-master1 etcd]# cat ca-config.json 
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
 
[root@k8s-master1 etcd]# cat ca-csr.json 
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
 
[root@k8s-master1 etcd]# cat server-csr.json 
{
    "CN": "etcd",
    "hosts": [
        "192.168.0.111",  此处根据实际etcd集群的IP写
        "192.168.0.113",
        "192.168.0.114"
        ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
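These JSON files feed cfssl/cfssljson. If the tools are not installed yet, a minimal sketch of fetching them (version and URLs are assumptions; check the cloudflare/cfssl releases page):
[root@k8s-master1 etcd]# curl -L -o /usr/local/bin/cfssl https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_amd64
[root@k8s-master1 etcd]# curl -L -o /usr/local/bin/cfssljson https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_amd64
[root@k8s-master1 etcd]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson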
 
Generate the certificates with this script:
[root@k8s-master1 etcd]# cat generate_etcd_cert.sh 
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
 
Copy the three generated certificate files into the directory below:
[root@k8s-master1 ssl]# ls
ca.pem  server-key.pem  server.pem
[root@k8s-master1 ssl]# pwd
/opt/etcd/ssl
 
Binary location:
[root@k8s-master1 cfg]# cd /opt/etcd/bin/
[root@k8s-master1 bin]# ls
etcd  etcdctl
Config file location:
[root@k8s-master1 bin]# cd /opt/etcd/cfg/
[root@k8s-master1 cfg]# ls
etcd.conf
 
SSL certificate location:
[root@k8s-master1 cfg]# cd /opt/etcd/ssl/
[root@k8s-master1 ssl]# ls
ca.pem  server-key.pem  server.pem
 
Config file on 192.168.0.111:
[root@k8s-master1 cfg]# cat etcd.conf 
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.111:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.111:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.111:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.111:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.111:2380,etcd-2=https://192.168.0.113:2380,etcd-3=https://192.168.0.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
 
Config file on 192.168.0.113:

[root@k8s-node1 cfg]# cat etcd.conf

#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.113:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.113:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.113:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.113:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.111:2380,etcd-2=https://192.168.0.113:2380,etcd-3=https://192.168.0.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Config file on 192.168.0.114:

[root@k8s-node2 cfg]# cat etcd.conf

#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.114:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.114:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.114:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.114:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.111:2380,etcd-2=https://192.168.0.113:2380,etcd-3=https://192.168.0.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
 
Install the systemd unit file below at /usr/lib/systemd/system/etcd.service (the same file on all three etcd nodes):
[root@k8s-master1 cfg]# cat /usr/lib/systemd/system/etcd.service  
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
        --name=${ETCD_NAME} \
        --data-dir=${ETCD_DATA_DIR} \
        --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
        --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
        --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
        --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
        --initial-cluster=${ETCD_INITIAL_CLUSTER} \
        --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
        --initial-cluster-state=new \
        --cert-file=/opt/etcd/ssl/server.pem \
        --key-file=/opt/etcd/ssl/server-key.pem \
        --peer-cert-file=/opt/etcd/ssl/server.pem \
        --peer-key-file=/opt/etcd/ssl/server-key.pem \
        --trusted-ca-file=/opt/etcd/ssl/ca.pem \
        --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
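etcd-2 (192.168.0.113) and etcd-3 (192.168.0.114) need the same binaries, certificates, and unit file. A sketch of pushing everything out, after which you edit ETCD_NAME and the IPs in each node's etcd.conf as shown above; start etcd on all three nodes at roughly the same time, since the first member blocks until a quorum forms:
[root@k8s-master1 ~]# scp -r /opt/etcd root@192.168.0.113:/opt/
[root@k8s-master1 ~]# scp -r /opt/etcd root@192.168.0.114:/opt/
[root@k8s-master1 ~]# scp /usr/lib/systemd/system/etcd.service root@192.168.0.113:/usr/lib/systemd/system/
[root@k8s-master1 ~]# scp /usr/lib/systemd/system/etcd.service root@192.168.0.114:/usr/lib/systemd/system/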
 
[root@k8s-master1 ssl]# systemctl daemon-reload
[root@k8s-master1 ssl]# systemctl start etcd
[root@k8s-master1 ssl]# systemctl status etcd  
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-11-23 11:49:58 CST; 19min ago
 Main PID: 1934 (etcd)
   CGroup: /system.slice/etcd.service
           └─1934 /opt/etcd/bin/etcd --name=etcd-1 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.0.111:2380 --listen-client-urls=https://19...
Verify that the cluster is healthy (the endpoints are the three etcd members):
[root@k8s-master1 ssl]# /opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.0.111:2379,https://192.168.0.113:2379,https://192.168.0.114:2379" \
 cluster-health
member 5befac745d0af312 is healthy: got healthy result from https://192.168.0.111:2379
member 921ffdb040a91eb0 is healthy: got healthy result from https://192.168.0.113:2379
member bb27c2d889f2120d is healthy: got healthy result from https://192.168.0.114:2379
cluster is healthy
Install the K8s master node

Application ----> HTTPS API (self-signed certificate)
1. Add the trusted IPs into the certificate
2. Carry the CA certificate

Generate the CA and apiserver certificates
[root@k8s-master1 k8s]# cat server-csr.json 
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local",
      "192.168.0.111",  ----根据自己真实网段写,可多写,方便扩容
      "192.168.0.112",
      "192.168.0.113",
      "192.168.0.114",
      "192.168.0.115",
      "192.168.0.116",
      "192.168.0.117",
      "192.168.0.118",
      "192.168.0.119",
      "192.168.0.120"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
 
[root@k8s-master1 k8s]# cat generate_k8s_cert.sh 
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
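The script also expects ca-csr.json, ca-config.json, and kube-proxy-csr.json, which are not listed above. ca-csr.json can mirror the etcd one (with CN "kubernetes" and the O/OU fields added); a sketch of the other two, noting the profile name is kubernetes here rather than www:
[root@k8s-master1 k8s]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
[root@k8s-master1 k8s]# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}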
 
On the k8s-master node:

Upload k8s-master.tar.gz and extract it:

[root@k8s-master1 src]# tar xf k8s-master.tar.gz
Generate the certificates:
[root@k8s-master1 k8s]# ./generate_k8s_cert.sh
 
[root@k8s-master1 k8s]# ls *.pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
 
[root@k8s-master1 ssl]# rm -f kube-proxy*
[root@k8s-master1 ssl]# ls
ca-key.pem  ca.pem  server-key.pem  server.pem
 
Copy the four generated certificates to /opt/kubernetes/ssl.

Directory layout:

[root@k8s-master1 src]# tree kubernetes
kubernetes
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubectl
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-scheduler.conf
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
 
4 directories, 12 files
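token.csv, which the apiserver's --token-auth-file flag points at, is a single line mapping a bootstrap token to the kubelet-bootstrap user. A sketch of generating and writing it (the token value is illustrative; it must match the token embedded in the nodes' bootstrap.kubeconfig):
[root@k8s-master1 cfg]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
c47ffb939f5ca36231d9e3121a252940
[root@k8s-master1 cfg]# cat token.csv
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:kubelet-bootstrap"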
 
(The --- annotations below are explanatory and are not part of the real file.)
[root@k8s-master1 cfg]# cat /usr/local/src/kubernetes/cfg/kube-apiserver.conf 
KUBE_APISERVER_OPTS="--logtostderr=false \                      ---log to files, not stderr
--v=2 \                                                         ---log level (0-8)
--log-dir=/opt/kubernetes/logs \                                ---log directory
--etcd-servers=https://192.168.0.111:2379,https://192.168.0.113:2379,https://192.168.0.114:2379 \   ---etcd endpoints
--bind-address=192.168.0.111 \                                  ---this machine's internal IP
--secure-port=6443 \                                            ---secure port
--advertise-address=192.168.0.111 \                             ---this machine's internal IP
--allow-privileged=true \                                       ---allow privileged containers
--service-cluster-ip-range=10.0.0.0/24 \                        ---Service cluster IP range
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \   ---admission control plugins
--authorization-mode=RBAC,Node \                                ---authorization modes
--enable-bootstrap-token-auth=true \                            ---enable bootstrap token auth
--token-auth-file=/opt/kubernetes/cfg/token.csv \               ---token auth file
--service-node-port-range=30000-32767 \                         ---NodePort range
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \   ---certificate files
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \                                         ---keep audit logs 30 days
--audit-log-maxbackup=3 \                                       ---keep at most 3 rotated files
--audit-log-maxsize=100 \                                       ---rotate at 100 MB
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"            ---audit log path
 
[root@k8s-master1 cfg]# cat kube-controller-manager.conf 
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \                      ---leader election
--master=127.0.0.1:8080 \                  ---apiserver's local insecure endpoint
--address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \             ---pod network CIDR
--service-cluster-ip-range=10.0.0.0/24 \   ---Service IP range
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
 
[root@k8s-master1 cfg]# cat kube-scheduler.conf 
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--address=127.0.0.1"
 
Install the unit files into /usr/lib/systemd/system/:
[root@k8s-master1 src]# cp kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system/
Enable the three services on boot, then start them:
[root@k8s-master1 src]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master1 src]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master1 src]# systemctl enable kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master1 src]# systemctl start kube-apiserver          
[root@k8s-master1 src]# systemctl start kube-controller-manager
[root@k8s-master1 src]# systemctl start kube-scheduler
 
Copy kubectl to /usr/local/bin:
[root@k8s-master1 bin]# cp /opt/kubernetes/bin/kubectl /usr/local/bin/    
Check component status (on v1.16, kubectl get cs prints <unknown> in the AGE column; this is a known display regression, not a component failure):
[root@k8s-master1 ~]# kubectl get cs
NAME                 AGE
scheduler            <unknown>
controller-manager   <unknown>
etcd-1               <unknown>
etcd-2               <unknown>
etcd-0               <unknown>
 
Authorize the kubelet-bootstrap user (the user name from token.csv) to request node certificates:
[root@k8s-master1 ~]# kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
 
 
Install the k8s-node1 node

Upload k8s-node.tar.gz and extract it:

[root@k8s-node1 src]# tar -zxvf k8s-node.tar.gz 
[root@k8s-node1 src]# mv kubernetes /opt/
Extract Docker:
[root@k8s-node1 src]# tar -zxvf docker-18.09.6.tgz 
docker/
docker/docker
docker/docker-init
docker/ctr
docker/docker-proxy
docker/runc
docker/containerd
docker/dockerd
docker/containerd-shim
 
[root@k8s-node1 src]# cp -r docker/* /usr/bin/
[root@k8s-node1 src]# cp docker.service /usr/lib/systemd/system/
[root@k8s-node1 src]# cat /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
 
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
 
[Install]
WantedBy=multi-user.target
 
Docker config file:
[root@harbor docker]# cat /etc/docker/daemon.json 
{
   "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
   "insecure-registries" :["192.168.0.113"]   ===harbor仓库地址 
}
 
Create the config directory and copy the config file:
[root@k8s-node1 src]# mkdir /etc/docker
[root@k8s-node1 src]# cp daemon.json /etc/docker/
Start Docker and enable it on boot:
[root@k8s-node1 src]# systemctl start docker 
[root@k8s-node1 src]# systemctl enable docker 
Config file overview:
[root@k8s-node1 kubernetes]# tree 
.
├── bin
│   ├── kubelet
│   └── kube-proxy
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   └── kube-proxy.kubeconfig
├── logs
└── ssl
*.conf              basic process options
*.kubeconfig        config for connecting to the apiserver
*.yml               main component configuration
 
[root@k8s-node1 src]# ls
cni-plugins-linux-amd64-v0.8.2.tgz  daemon.json  docker  docker-18.09.6.tgz  docker.service  k8s-node.tar.gz  kubelet.service  kube-proxy.service
[root@k8s-node1 src]# cp *.service /usr/lib/systemd/system/
 
Update the IPs
[root@k8s-node1 cfg]# ls
bootstrap.kubeconfig  kubelet.conf  kubelet-config.yml  kube-proxy.conf  kube-proxy-config.yml  kube-proxy.kubeconfig
Point bootstrap.kubeconfig and kube-proxy.kubeconfig at the master node's IP; a sketch of the replacement follows.
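A minimal sketch (the placeholder IP shipped in the kubeconfigs is an assumption; adjust the pattern to whatever the files actually contain):
[root@k8s-node1 cfg]# sed -i 's#https://.*:6443#https://192.168.0.111:6443#' bootstrap.kubeconfig kube-proxy.kubeconfig
[root@k8s-node1 cfg]# grep server *.kubeconfig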
 
Set the node name:
[root@k8s-node1 cfg]# grep "k8s-node1" * 
kubelet.conf:--hostname-override=k8s-node1 \
kube-proxy-config.yml:hostnameOverride: k8s-node1
Enable the services on boot and start them:
[root@k8s-node1 src]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node1 src]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node1 src]# systemctl start kubelet    
[root@k8s-node1 src]# systemctl start kube-proxy
Copy the three certificates from the master to the node:
[root@k8s-master1 k8s]# scp ca.pem kube-proxy*pem root@192.168.0.113:/opt/kubernetes/ssl/
root@192.168.0.113's password: 
ca.pem                                                                                                                                 100% 1359   271.2KB/s   00:00    
kube-proxy-key.pem                                                                                                                     100% 1679   306.6KB/s   00:00    
kube-proxy.pem                                                                                                                         100% 1403   244.1KB/s   00:00 
 
The node requests a certificate from the master:
[root@k8s-master1 k8s]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-9ia5Ul-NVcg1t-3PmX3la5G_rrso0U-0L4T2VqjO0lM   2m51s   kubelet-bootstrap   Pending
 
The master approves the node's certificate:
[root@k8s-master1 k8s]# kubectl certificate approve node-csr-9ia5Ul-NVcg1t-3PmX3la5G_rrso0U-0L4T2VqjO0lM 
certificatesigningrequest.certificates.k8s.io/node-csr-9ia5Ul-NVcg1t-3PmX3la5G_rrso0U-0L4T2VqjO0lM approved
 
The node has now joined the cluster:
[root@k8s-master1 k8s]# kubectl get node
NAME        STATUS     ROLES    AGE     VERSION
k8s-node1   NotReady   <none>   2m43s   v1.16.0
 
To install k8s-node2, repeat the k8s-node1 steps; only the node name and the issued certificates differ (see the sketch below):
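A sketch of the per-node fixes, assuming /opt/kubernetes was cloned from node1 (the issued-kubelet-certificate path is an assumption; node2 must request its own certificate):
[root@k8s-node2 cfg]# sed -i 's#k8s-node1#k8s-node2#g' kubelet.conf kube-proxy-config.yml
[root@k8s-node2 cfg]# rm -f /opt/kubernetes/ssl/kubelet*
[root@k8s-node2 cfg]# systemctl restart kubelet kube-proxy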
[root@k8s-master1 k8s]# kubectl get node
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   NotReady   <none>   49m   v1.16.0
k8s-node2   NotReady   <none>   33s   v1.16.0
 
Install the CNI plugins on k8s-node1 and k8s-node2:
[root@k8s-node2 bin]#  mkdir /opt/cni/bin /etc/cni/net.d -p
[root@k8s-node2 bin]# tar -zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/
Run on k8s-master:
[root@k8s-master1 src]# kubectl apply -f kube-flannel.yaml
Check the status after a few minutes. Once both nodes report Ready, the CNI plugin is deployed:
[root@k8s-master1 src]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    <none>   74m   v1.16.0
k8s-node2   Ready    <none>   25m   v1.16.0
 
The flannel pods being Running confirms the deployment as well:
[root@k8s-master1 src]# kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-27mg6   1/1     Running   0          4m27s
kube-flannel-ds-amd64-7hghd   1/1     Running   0          4m27s
 
[root@k8s-master1 src]# cat apiserver-to-kubelet-rbac.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
 
Apply it to authorize access to pod and node logs:
[root@k8s-master1 src]# kubectl apply -f apiserver-to-kubelet-rbac.yaml 
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
List the pods in the kube-system namespace:
[root@k8s-master1 src]# kubectl get pods -n kube-system                        
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-27mg6   1/1     Running   0          13m
kube-flannel-ds-amd64-7hghd   1/1     Running   0          13m
 
View a pod's logs:
[root@k8s-master1 src]# kubectl logs kube-flannel-ds-amd64-7hghd -n kube-system
View detailed information:
[root@k8s-master1 src]# kubectl get pods -n kube-system -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
kube-flannel-ds-amd64-27mg6   1/1     Running   0          15m   192.168.0.114   k8s-node2   <none>           <none>
kube-flannel-ds-amd64-7hghd   1/1     Running   0          15m   192.168.0.113   k8s-node1   <none>           <none>
 
Create and publish a web application based on the Nginx image:
[root@k8s-master1 src]# kubectl create deployment web --image=nginx
Verify that it is running:
[root@k8s-master1 src]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
web-d86c95cc9-mkbkg   1/1     Running   0          98s   10.244.1.2   k8s-node2   <none>           <none>
 
Expose the port:
[root@k8s-master1 src]# kubectl expose deployment web --port=80 --type=NodePort
service/web exposed
 
Check that the pods and Services are healthy:
[root@k8s-master1 src]# kubectl get pods,svc
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-d86c95cc9-mkbkg   1/1     Running   0          4m44s
 
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        166m
service/web          NodePort    10.0.0.71    <none>        80:32437/TCP   56s
Visit: http://192.168.0.113:32437/ or http://192.168.0.114:32437/
 
Deploy the Web UI: https://github.com/kubernetes/dashboard

Deployment docs: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

Apply the dashboard.yaml manifest:

[root@k8s-master1 src]# kubectl apply -f dashboard.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
 
[root@k8s-master1 src]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-566cddb686-2fbgx   1/1     Running   0          71s
kubernetes-dashboard-7b5bf5d559-gdtm2        1/1     Running   0          72s
 
[root@k8s-master1 src]# kubectl get svc -n kubernetes-dashboard     
NAME                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.0.0.173   <none>        8000/TCP        3m50s
kubernetes-dashboard        NodePort    10.0.0.25    <none>        443:30001/TCP   3m51s
Visit: https://192.168.0.114:30001/#/login
 
Authorization: create an admin user for the dashboard (dashboard-adminuser.yaml); a sketch of the manifest is below.
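A sketch of what dashboard-adminuser.yaml typically contains, following the upstream dashboard docs: a ServiceAccount named admin-user bound to the cluster-admin role (the name matches the admin-user secret shown below):
[root@k8s-master1 src]# cat dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard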

[root@k8s-master1 src]# kubectl apply -f dashboard-adminuser.yaml
 
Retrieve the token used to log in to the dashboard:
[root@k8s-master1 src]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-kdm2g
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 36c39b3d-ffda-475f-824b-cc2af6c1649f
 
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1359 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkNuRnJnbmMzdEhneTh5cjNPZFRDLXk2ZDNDck9XMUd6UjNQTDh0aWozTUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWtkbTJnIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNmMzOWIzZC1mZmRhLTQ3NWYtODI0Yi1jYzJhZjZjMTY0OWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.FXZ3m6SItNlMbrDiCip47tjKhAPTxmy2Xt88moZMIflAvHJam41PXFP1_2OV9EhGdKR0JGyaVErKT2whbAsSm9ZgLNsOEoPG2YIvcBqRMgTxdO-uxU7rlA2ZR4nMjRq5NkZ0D5RMTqwdujXbFqSrJR8pOg6Yyqli511Qxs_lJwOfi07f2ETzZrhljg5GuS6LxsRzPExWS4wgGWViLnBe3dIpgqSZbIuZdfdDN5YLIrPMzO-CeIX5Pl5dXckItnKAFkkLfbaEQVc7GAUKR8mFPpfa3hCLhR026PVrobkx1Stn61ERaLlOoenWtxt0cSjp3guWea–Bk_NIeYZ-_Byrw
Paste the token value above into the login page to access the dashboard.

Deploy CoreDNS:
[root@k8s-master1 src]# kubectl apply -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
Use bs.yaml to test whether DNS resolution works (a sketch of the manifest is below):
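A sketch of what bs.yaml likely contains: a busybox pod that sleeps so you can exec into it (busybox:1.28 is a common choice because nslookup is broken in newer busybox builds):
[root@k8s-master1 src]# cat bs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]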
[root@k8s-master1 src]# kubectl apply -f bs.yaml 
pod/busybox created
 
[root@k8s-master1 src]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
busybox               1/1     Running   0          42s
web-d86c95cc9-jrk94   1/1     Running   0          9m11s
web-d86c95cc9-mkbkg   1/1     Running   0          69m
web-d86c95cc9-q2r4d   1/1     Running   0          9m10s
Test:
[root@k8s-master1 src]# kubectl exec -it busybox sh 
/ # ping 10.0.0.25
PING 10.0.0.25 (10.0.0.25): 56 data bytes
64 bytes from 10.0.0.25: seq=0 ttl=64 time=0.606 ms
^C
--- 10.0.0.25 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.606/0.606/0.606 ms
Test CoreDNS resolution:
/ # nslookup web
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      web
Address 1: 10.0.0.71 web.default.svc.cluster.local
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
 
Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
 
Deploy k8s-master2
Copy the following over from k8s-master1:
[root@k8s-master1 opt]# scp -r kubernetes root@192.168.0.112:/opt/
[root@k8s-master1 opt]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.0.112:/usr/lib/systemd/system/
root@192.168.0.112's password: 
kube-apiserver.service                                                                                                                 100%  286    55.7KB/s   00:00    
kube-controller-manager.service                                                                                                        100%  321    56.2KB/s   00:00    
kube-scheduler.service                                                                                                                 100%  285    58.8KB/s   00:00   
 
Copy the etcd certificates to 192.168.0.112:
[root@k8s-master1 opt]# scp -r /opt/etcd/ssl root@192.168.0.112:/opt/etcd/
root@192.168.0.112's password: 
ca.pem                                                                                                                                 100% 1265   174.7KB/s   00:00    
server-key.pem                                                                                                                         100% 1675   152.1KB/s   00:00    
server.pem                                                                                                                             100% 1338    98.9KB/s   00:00    
 
On k8s-master2, point the apiserver at the local IP:
[root@k8s-master2 cfg]# vim kube-apiserver.conf 
--bind-address=192.168.0.112 \         ---change this IP (and --advertise-address) to the local IP
[root@k8s-master2 cfg]# systemctl daemon-reload
 
[root@k8s-master2 opt]# systemctl enable kube-apiserver
[root@k8s-master2 opt]# systemctl restart kube-apiserver
[root@k8s-master2 opt]# systemctl status kube-apiserver
 
[root@k8s-master2 opt]# systemctl enable kube-controller-manager
[root@k8s-master2 opt]# systemctl start kube-controller-manager 
[root@k8s-master2 opt]# systemctl status kube-controller-manager
 
[root@k8s-master2 opt]# systemctl enable kube-scheduler
[root@k8s-master2 opt]# systemctl start kube-scheduler
[root@k8s-master2 opt]# systemctl status kube-scheduler
 
Copy kubectl to 192.168.0.112:
[root@k8s-master1 opt]# scp /usr/local/bin/kubectl root@192.168.0.112:/usr/local/bin/
 
Output like the following means k8s-master2 is working (the <invalid> AGE values just mean the local clock is out of sync; the time-sync steps from the environment preparation fix that):
[root@k8s-master2 opt]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
busybox               1/1     Running   1          <invalid>
web-d86c95cc9-jh7kk   1/1     Running   0          <invalid>
web-d86c95cc9-jrk94   1/1     Running   0          <invalid>
web-d86c95cc9-mxq58   1/1     Running   0          <invalid>
[root@k8s-master2 opt]# kubectl get node
NAME        STATUS   ROLES    AGE         VERSION
k8s-node1   Ready    <none>   <invalid>   v1.16.0
k8s-node2   Ready    <none>   <invalid>   v1.16.0
Prepare the load balancers: keepalived + nginx
Run the same steps on both LB machines:
[root@lb2 ~]# rpm -ivh http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm
 
[root@lb1 ~]# rpm -ivh http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm
 
[root@lb1 ~]# systemctl enable nginx
[root@lb1 ~]# systemctl restart nginx
[root@lb2 ~]# systemctl enable nginx
[root@lb2 ~]# systemctl restart nginx
[root@lb1 ~]# yum install keepalived -y
[root@lb2 ~]# yum install keepalived -y
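The nginx.conf and keepalived.conf contents are not shown above. A minimal sketch of what they need, assuming the VIP is 192.168.0.120 (per the sed below) and the NIC is eth0; treat every value here as an assumption to adapt:
[root@lb1 ~]# cat /etc/nginx/nginx.conf     (stream section only; the nginx.org package ships with the stream module)
stream {
    upstream k8s-apiserver {
        server 192.168.0.111:6443;    # k8s-master1
        server 192.168.0.112:6443;    # k8s-master2
    }
    server {
        listen 6443;                  # the nodes connect to the VIP on this port
        proxy_pass k8s-apiserver;
    }
}

[root@lb1 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id LB1                     # LB2 on the other machine
}
vrrp_instance VI_1 {
    state MASTER                      # BACKUP on lb2
    interface eth0                    # adjust to the real NIC name
    virtual_router_id 51
    priority 100                      # use a lower value, e.g. 90, on lb2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.120/24              # the VIP the nodes will use
    }
}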
Replace the apiserver IP in both nodes' config files with the VIP, then restart kubelet and kube-proxy:
[root@k8s-node1 cfg]# sed -i "s#192.168.0.110#192.168.0.120#g" *
[root@k8s-node1 cfg]# systemctl restart kubelet
[root@k8s-node1 cfg]# systemctl restart kube-proxy
 
[root@k8s-node2 cfg]# sed -i "s#192.168.0.110#192.168.0.120#g" *
[root@k8s-node2 cfg]# systemctl restart kubelet
[root@k8s-node2 cfg]# systemctl restart kube-proxy
 
Verify the apiserver answers through the VIP (the token is the one from token.csv):
[root@k8s-node1 cfg]# curl -k --header "Authorization: Bearer c47ffb939f5ca36231d9e3121a252940" https://192.168.0.120:6443/version
{
  "major": "1",
  "minor": "16",
  "gitVersion": "v1.16.0",
  "gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",
  "gitTreeState": "clean",
  "buildDate": "2019-09-18T14:27:17Z",
  "goVersion": "go1.12.9",
  "compiler": "gc",
  "platform": "linux/amd64"
 
Commands to start, enable on boot, and check status
master:
[root@k8s-master1 ~]#  systemctl restart kube-apiserver && systemctl restart kube-controller-manager && systemctl restart kube-scheduler
[root@k8s-master1 ~]#  systemctl enable kube-apiserver && systemctl enable kube-controller-manager && systemctl enable kube-scheduler
[root@k8s-master1 ~]#  systemctl status kube-apiserver && systemctl status kube-controller-manager && systemctl status kube-scheduler
 
node:
[root@k8s-node1 ~]#  systemctl restart kubelet && systemctl restart kube-proxy
[root@k8s-node1 ~]#  systemctl enable kubelet && systemctl enable kube-proxy
[root@k8s-node1 ~]#  systemctl status kubelet && systemctl status kube-proxy