k8s Cluster Deployment


kubeadm is a tool released by the official Kubernetes community for quickly deploying a Kubernetes cluster.

Official site: https://kubernetes.io/

Official documentation: https://kubernetes.io/docs/

Environment:

Host     OS        IP               Disk
master   CentOS 8  192.168.101.110  30 GB
node1    CentOS 7  192.168.101.230  30 GB
node2    CentOS 7  192.168.101.240  30 GB
Before you begin
  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian-based and Red Hat-based Linux distributions, as well as some distributions without a package manager.
  • 2 GB or more of RAM per machine (less leaves little room for your applications).
  • 2 or more CPU cores.
  • Full network connectivity between all machines in the cluster (a public or private network is fine).
  • Unique hostname, MAC address, and product_uuid on every node (see the check commands after this list).
  • Certain ports must be open on the machines; see the official documentation for details.
  • Swap disabled. You MUST disable swap for the kubelet to work properly.
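
A quick way to verify the uniqueness and swap requirements before proceeding (a sketch; the master IP below assumes the environment table above, and the port check only succeeds once the API server is up):

# Compare MAC addresses and product_uuid across the three hosts
ip link show
cat /sys/class/dmi/id/product_uuid

# After the control plane is initialized, confirm port 6443 is reachable from a node
nc -zv 192.168.101.110 6443

# Turn swap off immediately (made permanent in /etc/fstab later)
swapoff -a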

Preparation

# Set the hostname on all three hosts
[root@localhost ~]# hostnamectl set-hostname master.example.com
[root@localhost ~]# bash
[root@master ~]#

[root@localhost ~]# hostnamectl set-hostname node1.example.com
[root@localhost ~]# bash
[root@node1 ~]#

[root@localhost ~]# hostnamectl set-hostname node2.example.com
[root@localhost ~]# bash
[root@node2 ~]#


# Disable firewalld and SELinux on all three hosts (shown here on the master)
[root@master ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@master ~]# sed -ri 's/(SELINUX=).*/\1disabled/g' /etc/selinux/config
[root@master ~]# setenforce 0
[root@master ~]# getenforce 
Permissive
[root@master ~]# reboot
[root@master ~]# getenforce 
Disabled


# On all three hosts, remove or comment out the swap entry
[root@master ~]# cat /etc/fstab 
#
# /etc/fstab
# Created by anaconda on Sat Dec 18 13:26:48 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cs-root     /                       xfs     defaults        0 0
UUID=f792010c-33ab-4638-aa60-4a00717326de /boot                   xfs     defaults        0 0
/dev/mapper/cs-home     /home                   xfs     defaults        0 0
#/dev/mapper/cs-swap     none                    swap    defaults        0 0                  # removed or commented out

[root@master ~]# reboot
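
After the reboot it is worth confirming that no swap is active; a minimal check:

[root@master ~]# swapon --show    # no output means swap is disabled
[root@master ~]# free -m          # the Swap line should read all zeros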

Add hostname resolution entries on the master

[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.101.110 master master.example.com
192.168.101.230 node1 node1.example.com
192.168.101.240 node2 node2.example.com


[root@master ~]# ping node1
PING node1 (192.168.101.230) 56(84) bytes of data.
64 bytes from node1 (192.168.101.230): icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from node1 (192.168.101.230): icmp_seq=2 ttl=64 time=0.814 ms
64 bytes from node1 (192.168.101.230): icmp_seq=3 ttl=64 time=0.633 ms


[root@master ~]# ping master
PING master (192.168.101.110) 56(84) bytes of data.
64 bytes from master (192.168.101.110): icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from master (192.168.101.110): icmp_seq=2 ttl=64 time=0.025 ms
64 bytes from master (192.168.101.110): icmp_seq=3 ttl=64 time=0.038 ms



[root@master ~]# ping node2
PING node2 (192.168.101.240) 56(84) bytes of data.
64 bytes from node2 (192.168.101.240): icmp_seq=1 ttl=64 time=0.312 ms
64 bytes from node2 (192.168.101.240): icmp_seq=2 ttl=64 time=0.623 ms
64 bytes from node2 (192.168.101.240): icmp_seq=3 ttl=64 time=0.419 ms
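
Note that the join steps later print hostname-lookup warnings because /etc/hosts was only edited on the master. A simple workaround (a sketch, assuming root password login still works at this point) is to copy the same entries to both nodes:

# Push the hosts file to node1 and node2; prompts for the root password
# until key-based login is configured below
for host in node1 node2; do
  scp /etc/hosts root@$host:/etc/hosts
done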

On the master, pass bridged IPv4 traffic to iptables chains:

[root@master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF


[root@master ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1


[root@master ~]# sysctl --system
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
net.core.optmem_max = 81920
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
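
One caveat the transcript does not show: the net.bridge.* keys only exist while the br_netfilter kernel module is loaded, so sysctl --system can fail to apply them on a fresh boot. A common companion step (an addition, not part of the original run):

# Load br_netfilter now and on every boot, then re-apply the sysctls
modprobe br_netfilter
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
sysctl --system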

Install a time synchronization service on all three hosts

[root@master ~]# dnf -y install  chrony

[root@node1 ~]# yum -y install chrony

[root@node2 ~]# yum -y install chrony

# On all three hosts, point chrony at the Aliyun time server
[root@master ~]# cat /etc/chrony.conf 
pool time1.aliyun.com iburst                                            
[root@master ~]# systemctl enable --now chronyd
[root@node1 ~]# vim /etc/chrony.conf 
server time1.aliyun.com iburst
[root@node1 ~]# systemctl enable --now chronyd
[root@node2 ~]# vim /etc/chrony.conf 
server time1.aliyun.com iburst
[root@node2 ~]# systemctl enable --now chronyd
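
To confirm chrony is actually tracking the Aliyun server on each host, a quick check:

chronyc sources -v    # the aliyun server should appear with a '^*' (selected) marker
chronyc tracking      # shows the current offset from the reference clock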

Set up passwordless SSH login from the master

[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa): Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:dYbSEiqrqFYLgd4UggXxWJF81kftQKKDs/OoBrBoypU root@master.example.com
The key's randomart image is:
+---[RSA 3072]----+
|+=+o ..o+.       |
|.=ooo..ooo..     |
|o =o= ..oo+ o    |
|o. + +   +.o     |
|+.* o   S        |
|+= E             |
|* * o            |
|o= .             |
|=                |
+----[SHA256]-----+


[root@master ~]# ssh-copy-id master
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

root@master's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'master'"
and check to make sure that only the key(s) you wanted were added.



[root@master ~]# ssh-copy-id node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node1 (192.168.101.230)' can't be established.
ECDSA key fingerprint is SHA256:fDjjJRE1/uPXYdYNfn97U9+3Z1wu1Xj8EfprIgeiI5Q.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.



[root@master ~]# ssh-copy-id node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node2 (192.168.101.240)' can't be established.
ECDSA key fingerprint is SHA256:9n6hqOi0XdDbamsDPBZTDPyTAt3apPgHGpp/kocNLFM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.

Check that the clocks on all three hosts are in sync

[root@master ~]# for i in master node1 node2;do ssh $i 'date';done
2021年 12月 18日 星期六 06:57:46 EST
2021年 12月 18日 星期六 19:57:46 CST
2021年 12月 18日 星期六 19:57:47 CST
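
The three timestamps describe the same instant (06:57 EST is 19:57 CST), so the clocks are in sync even though the master is in a different timezone. Optionally, the master's timezone can be aligned with the nodes so the outputs match:

[root@master ~]# timedatectl set-timezone Asia/Shanghai
[root@master ~]# date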

Install Docker on all three hosts

At this Kubernetes version the default CRI (container runtime) is Docker, so install Docker first.

# Download the Docker repo file
[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# Check
[root@master ~]# ls /etc/yum.repos.d/
CentOS-Stream-AppStream.repo  CentOS-Stream-Extras.repo            CentOS-Stream-PowerTools.repo
CentOS-Stream-BaseOS.repo     CentOS-Stream-HighAvailability.repo  CentOS-Stream-RealTime.repo
CentOS-Stream-Debuginfo.repo  CentOS-Stream-Media.repo             docker-ce.repo

# Install Docker
[root@master ~]# yum -y install docker-ce


# Configure the registry mirror on all three hosts
[root@master ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://wn5c7d7w.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],   
  "log-driver": "json-file",                      
  "log-opts": {                   
    "max-size": "100m"                            
  },    
  "storage-driver": "overlay2"                    
}
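
The transcript never actually starts Docker (the kubeadm join preflight later even warns "docker service is not enabled"). After writing daemon.json, Docker should be enabled and restarted on all three hosts so the mirror and the systemd cgroup driver take effect; a sketch:

systemctl enable --now docker                 # start Docker and enable it on boot
systemctl restart docker                      # reload daemon.json if Docker was already running
docker info | grep -A 1 'Registry Mirrors'    # quick sanity check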

# Check the Docker version
[root@master ~]# docker --version
Docker version 20.10.12, build e91ed57

# Verify the mirror took effect
[root@master ~]# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.7.1-docker)
  scan: Docker Scan (Docker Inc., v0.12.0)
.............
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://wn5c7d7w.mirror.aliyuncs.com/   # the mirror is active
 Live Restore Enabled: false

Add the Aliyun Kubernetes YUM repository on all three hosts

Repository details: the Kubernetes page on the Alibaba Cloud open source mirror site (aliyun.com)

# Configure kubernetes.repo and install kubelet, kubeadm, and kubectl on all three hosts

cat >> /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
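
The repository always serves the newest packages, while the init step below pins --kubernetes-version v1.23.1. To keep the two in lockstep, the packages can be version-pinned at install time (an optional variant of the commands below; the version string is assumed from that flag):

yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1
systemctl enable --now kubelet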

[root@master ~]# yum install -y kubelet kubeadm kubectl
[root@master ~]# systemctl enable --now kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

[root@node1 ~]# yum install -y kubelet kubeadm kubectl
[root@node1 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

[root@node2 ~]# yum install -y kubelet kubeadm kubectl
[root@node2 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Deploy the Kubernetes control plane on the master

Run this on 192.168.101.110 (the master).

# Annotated form (strip the trailing comments before running; a "\" must end each continued line):
[root@master ~]# kubeadm init --apiserver-advertise-address 192.168.101.110 \   # the master's IP
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.1 \                                                 # Kubernetes version
--service-cidr=10.96.0.0/12 \                                                  # default service CIDR; leave as-is
--pod-network-cidr=10.244.0.0/16                                               # must match Flannel's default network

[root@master ~]# kubeadm init --apiserver-advertise-address 192.168.101.110 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.1 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.1
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.101.110:6443 --token fa1n3s.ual9ran55u1sb959 \
        --discovery-token-ca-cert-hash sha256:9e3a3a0fc4d7d8e97e67a342eb9d9f94f5454e8d927957da726e8dfc3edf5b12 

        
# Save the join command printed above into a file for later use
[root@master ~]# vim init 
kubeadm join 192.168.101.110:6443 --token fa1n3s.ual9ran55u1sb959 \
        --discovery-token-ca-cert-hash sha256:9e3a3a0fc4d7d8e97e67a342eb9d9f94f5454e8d927957da726e8dfc3edf5b12 
        
# Set the environment variable so the kubectl tool can be used
[root@master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' > /etc/profile.d/k8s.sh
[root@master ~]# source /etc/profile.d/k8s.sh

# List the pulled images
[root@master ~]# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED        SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.23.1   b6d7abedde39   44 hours ago   135MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.23.1   b46c42588d51   44 hours ago   112MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.23.1   f51846a4fd28   44 hours ago   125MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.23.1   71d575efe628   44 hours ago   53.5MB
registry.aliyuncs.com/google_containers/etcd                      3.5.1-0   25f8c7f3da61   6 weeks ago    293MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7   2 months ago   46.8MB
registry.aliyuncs.com/google_containers/pause                     3.6       6270bb605e12   3 months ago   683kB


[root@master ~]# docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED         STATUS         PORTS     NAMES
70b8d97bb83f   b46c42588d51                                        "/usr/local/bin/kube…"   8 minutes ago   Up 8 minutes             k8s_kube-proxy_kube-proxy-j7hqc_kube-system_02f2421f-17f1-4708-bbaf-39f8c8291848_0
7fd8eff04c65   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes             k8s_POD_kube-proxy-j7hqc_kube-system_02f2421f-17f1-4708-bbaf-39f8c8291848_0
a99e24d6990a   25f8c7f3da61                                        "etcd --advertise-cl…"   8 minutes ago   Up 8 minutes             k8s_etcd_etcd-master.example.com_kube-system_88f66f7493adcab2ec614fff53ea6c21_0
82b4584d727a   71d575efe628                                        "kube-scheduler --au…"   8 minutes ago   Up 8 minutes             k8s_kube-scheduler_kube-scheduler-master.example.com_kube-system_78d116366c5c52e663d3704a9b950ba6_0
5d8b0b477342   f51846a4fd28                                        "kube-controller-man…"   8 minutes ago   Up 8 minutes             k8s_kube-controller-manager_kube-controller-manager-master.example.com_kube-system_e3c7337cbdf9f732e45b211a57aa7a54_0
a3ae8429535d   b6d7abedde39                                        "kube-apiserver --ad…"   8 minutes ago   Up 8 minutes             k8s_kube-apiserver_kube-apiserver-master.example.com_kube-system_0bd35d96e524489e8ac2242562841834_0
f12debbb16f7   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes             k8s_POD_kube-controller-manager-master.example.com_kube-system_e3c7337cbdf9f732e45b211a57aa7a54_0
c8dbb3ff416b   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes             k8s_POD_kube-scheduler-master.example.com_kube-system_78d116366c5c52e663d3704a9b950ba6_0
c6486a53c89c   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes             k8s_POD_kube-apiserver-master.example.com_kube-system_0bd35d96e524489e8ac2242562841834_0
641bd0770eb0   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes             k8s_POD_etcd-master.example.com_kube-system_88f66f7493adcab2ec614fff53ea6c21_0

# Check the listening ports
[root@master ~]# ss -antl
State          Recv-Q         Send-Q                  Local Address:Port                    Peer Address:Port         Process         
LISTEN         0              128                         127.0.0.1:10248                        0.0.0.0:*                            
LISTEN         0              128                         127.0.0.1:10249                        0.0.0.0:*                            
LISTEN         0              128                   192.168.101.110:2379                         0.0.0.0:*                            
LISTEN         0              128                         127.0.0.1:2379                         0.0.0.0:*                            
LISTEN         0              128                   192.168.101.110:2380                         0.0.0.0:*                            
LISTEN         0              128                         127.0.0.1:2381                         0.0.0.0:*                            
LISTEN         0              128                         127.0.0.1:10257                        0.0.0.0:*                            
LISTEN         0              128                         127.0.0.1:10259                        0.0.0.0:*                            
LISTEN         0              128                         127.0.0.1:43669                        0.0.0.0:*                            
LISTEN         0              128                           0.0.0.0:22                           0.0.0.0:*                            
LISTEN         0              128                                 *:10250                              *:*                            
LISTEN         0              128                                 *:6443                               *:*                            
LISTEN         0              128                                 *:10256                              *:*                            
LISTEN         0              128                              [::]:22                              [::]:*


# List the nodes
[root@master ~]# kubectl get nodes
NAME                 STATUS     ROLES                  AGE     VERSION
master.example.com   NotReady   control-plane,master   2m22s   v1.23.1 # NotReady: initialization is still in progress; the pod network is not installed yet

Install a Pod network add-on (CNI) on the master

Flannel can be added to any existing Kubernetes cluster, though it is simplest to add it before any pods that use the pod network have started.

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
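
Before joining the workers, it helps to wait for the Flannel DaemonSet and CoreDNS pods to come up, at which point the master switches from NotReady to Ready; a quick check:

[root@master ~]# kubectl get pods -n kube-system -o wide   # wait for kube-flannel-ds-* and coredns-* to be Running
[root@master ~]# kubectl get nodes                         # the master should now report Ready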

Join the Kubernetes worker nodes

Join node1 and node2 to the cluster using the join command saved earlier in the init file.

# Check on the master before the nodes join
[root@master ~]# kubectl get nodes
NAME                 STATUS   ROLES                  AGE   VERSION
master.example.com   Ready    control-plane,master   23m   v1.23.1  # only one node so far

[root@master ~]# cat init  # to add a new node, run the join command that kubeadm init printed
kubeadm join 192.168.101.110:6443 --token fa1n3s.ual9ran55u1sb959 \
        --discovery-token-ca-cert-hash sha256:9e3a3a0fc4d7d8e97e67a342eb9d9f94f5454e8d927957da726e8dfc3edf5b12
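
The token in this file expires after 24 hours by default. If the saved command stops working, a fresh join command can be generated on the master:

[root@master ~]# kubeadm token create --print-join-command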


# On node1: join it to the cluster
[root@node1 ~]# kubeadm join 192.168.101.110:6443 --token fa1n3s.ual9ran55u1sb959 \
        --discovery-token-ca-cert-hash sha256:9e3a3a0fc4d7d8e97e67a342eb9d9f94f5454e8d927957da726e8dfc3edf5b12

[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Hostname]: hostname "node1.example.com" could not be reached
        [WARNING Hostname]: hostname "node1.example.com": lookup node1.example.com on 114.114.114.114:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


# On node2: join it to the cluster
[root@node2 ~]# kubeadm join 192.168.101.110:6443 --token fa1n3s.ual9ran55u1sb959 \
        --discovery-token-ca-cert-hash sha256:9e3a3a0fc4d7d8e97e67a342eb9d9f94f5454e8d927957da726e8dfc3edf5b12
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Hostname]: hostname "node2.example.com" could not be reached
        [WARNING Hostname]: hostname "node2.example.com": lookup node2.example.com on 114.114.114.114:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check on the master after node1 and node2 have joined

[root@master ~]# kubectl get nodes
NAME                 STATUS   ROLES                  AGE     VERSION
master.example.com   Ready    control-plane,master   5m44s   v1.23.1
node1.example.com    Ready    <none>                 98s     v1.23.1
node2.example.com    Ready    <none>                 93s     v1.23.1
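
The ROLES column shows <none> for the workers. This is purely cosmetic, but they can be labeled if a role name is wanted (an optional step, not part of the original run):

[root@master ~]# kubectl label node node1.example.com node-role.kubernetes.io/worker=
[root@master ~]# kubectl label node node2.example.com node-role.kubernetes.io/worker=
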
Test the Kubernetes cluster

Create a pod in the Kubernetes cluster and verify that it runs normally:

# Create a Deployment named nginx from the nginx image; no node is specified, so the scheduler picks one
[root@master ~]# kubectl create deployment nginx --image=nginx 
deployment.apps/nginx created

# Expose port 80 of the nginx Deployment as a NodePort on the nodes
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

# Check
[root@master ~]# kubectl get pod,svc
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-85b98978db-4qpxj   0/1     ContainerCreating   0          55s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        6m56s
service/nginx        NodePort    10.100.107.67   <none>        80:32427/TCP   23s

# Check which node the pod runs on
[root@master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE                NOMINATED NODE   READINESS GATES
nginx-85b98978db-xd6wz   1/1     Running   0          87s   10.244.2.2   node1.example.com   <none>           <none>

# Access the service's ClusterIP
[root@master ~]# curl http://10.100.107.67
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
..............
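
The ClusterIP above is only reachable from inside the cluster. The NodePort (32427 here) exposes nginx on every node's IP, so it can also be tested from outside:

[root@master ~]# curl http://192.168.101.230:32427   # via node1
[root@master ~]# curl http://192.168.101.240:32427   # via node2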

# On node2, check the randomly mapped NodePort
[root@node2 ~]# ss -antl
State          Recv-Q         Send-Q                  Local Address:Port                    Peer Address:Port         Process         
LISTEN         0              128                         127.0.0.1:37919                        0.0.0.0:*                            
LISTEN         0              128                         127.0.0.1:10248                        0.0.0.0:*                            
LISTEN         0              128                         127.0.0.1:10249                        0.0.0.0:*                            
LISTEN         0              128                           0.0.0.0:32427 (the NodePort)                 0.0.0.0:*                            
LISTEN         0              128                           0.0.0.0:22                           0.0.0.0:*                            
LISTEN         0              128                                 *:10250                              *:*                            
LISTEN         0              128                                 *:10256                              *:*                            
LISTEN         0              128                              [::]:22                              [::]:* 