Installing Kubernetes on Ubuntu

Preparation

System Preparation

This Kubernetes installation runs on Ubuntu Server 22.04.4 LTS. Please download and install the operating system in advance:
  • Official download:
  • https://releases.ubuntu.com/22.04.4/ubuntu-22.04.4-live-server-amd64.iso
  • Aliyun mirror download:
  • https://mirrors.aliyun.com/ubuntu-releases/jammy/ubuntu-22.04-live-server-amd64.iso
    # Ubuntu website:
    https://ubuntu.com/
     
    
    # containerd website:
    https://containerd.io/
    
    # containerd download:
    wget https://github.com/containerd/containerd/releases/download/v1.7.14/cri-containerd-cni-1.7.14-linux-amd64.tar.gz
    # Extract
    tar -zxvf cri-containerd-cni-1.7.14-linux-amd64.tar.gz -C /

Software Package Preparation

The container runtime used for this installation is containerd; the official download address for the matching version is given above.

Host and IP Address Preparation

Three hosts were created in total, each provisioned with 2 CPUs, 2 GB of RAM, and a 40 GB disk. Their IP addresses are as follows:

IP address       Hostname            Purpose
172.18.8.150/16  k8s-control-plane   k8s control-plane node (master)
172.18.8.151/16  k8s-worker01        k8s worker node 1
172.18.8.152/16  k8s-worker02        k8s worker node 2

Network: 172.18.0.0/16 (netmask 255.255.0.0)
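One way to set each machine's hostname to match the table is with hostnamectl (a standard systemd command; run the matching line on the corresponding host):

sudo hostnamectl set-hostname k8s-control-plane   # on the control-plane node
sudo hostnamectl set-hostname k8s-worker01        # on worker node 1
sudo hostnamectl set-hostname k8s-worker02        # on worker node 2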

Installation Steps

Pre-installation Preparation

Disable the Firewall

Run the following command to permanently disable the firewall:
# Disable the firewall
sudo systemctl disable --now ufw
# Check the firewall status
sudo systemctl status ufw

root@k8s-master-10:~# sudo systemctl disable --now ufw
Synchronizing state of ufw.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable ufw
Removed /etc/systemd/system/multi-user.target.wants/ufw.service.
root@k8s-master-10:~# 

# Check the firewall status
sudo systemctl status ufw

root@k8s-master-10:~# sudo systemctl status ufw
○ ufw.service - Uncomplicated firewall
     Loaded: loaded (/lib/systemd/system/ufw.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:ufw(8)

Apr 01 23:04:37 k8s-master-10 systemd[1]: Starting Uncomplicated firewall...
Apr 01 23:04:37 k8s-master-10 systemd[1]: Finished Uncomplicated firewall.
Apr 02 12:19:44 k8s-master-10 systemd[1]: Stopping Uncomplicated firewall...
Apr 02 12:19:44 k8s-master-10 ufw-init[463644]: Skip stopping firewall: ufw (not enabled)
Apr 02 12:19:44 k8s-master-10 systemd[1]: ufw.service: Deactivated successfully.
Apr 02 12:19:44 k8s-master-10 systemd[1]: Stopped Uncomplicated firewall.
root@k8s-master-10:~# 

Set the Server Time Zone

After installation, Ubuntu does not default to the China time zone. Run the following commands to set it to Asia/Shanghai:
# Check the current time
date
# Set the time zone to Asia/Shanghai
sudo timedatectl set-timezone Asia/Shanghai
# Restart the time synchronization service
sudo systemctl restart systemd-timesyncd.service
# Make sure the time sync service is running normally; check its status
timedatectl status
# Check the current time
date

root@k8s-master-10:~# date
Tue Apr  2 12:22:07 PM UTC 2024

# Set the time zone to Asia/Shanghai
sudo timedatectl set-timezone Asia/Shanghai

root@k8s-master-10:~# sudo timedatectl set-timezone Asia/Shanghai
root@k8s-master-10:~# date
Tue Apr  2 08:23:35 PM CST 2024


# Restart the time synchronization service
sudo systemctl restart systemd-timesyncd.service

root@k8s-master-10:~# sudo systemctl restart systemd-timesyncd.service
root@k8s-master-10:~# date
Tue Apr  2 08:25:29 PM CST 2024


# Make sure the time sync service is running normally; check its status
timedatectl status

root@k8s-master-10:~# timedatectl status
               Local time: Tue 2024-04-02 20:26:47 CST
           Universal time: Tue 2024-04-02 12:26:47 UTC
                 RTC time: Tue 2024-04-02 12:26:47
                Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
root@k8s-master-10:~# 

Disable the Swap Partition

All swap partitions must be disabled, which can be done by editing the /etc/fstab file:
sudo vi /etc/fstab   # permanently disables swap
Comment out the line containing "swap".
The edit above is permanent; the command below disables swap only until the next reboot:
# Temporarily disable swap
sudo swapoff -a

# Verify that swap has been disabled
free -h
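If you prefer a non-interactive edit, a sed one-liner such as the following can comment out the swap entry (a sketch; double-check /etc/fstab afterwards):

sudo sed -ri 's/^([^#].*\bswap\b.*)$/# \1/' /etc/fstab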

root@k8s-master-10:~# vim /etc/fstab
root@k8s-master-10:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-7lUbs1hLJAy090czUjJo8WMCTrfnG3kRHZu0i9v9BSSALWqYh4m4Dvtf0SzVoR0b / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/89db87bc-1dd8-4e1a-979c-ff35333c0447 /boot ext4 defaults 0 1
# /swap.img     none    swap    sw      0       0
root@k8s-master-10:~#

# Temporarily disable swap
sudo swapoff  -a


# Verify that swap has been disabled
free -h  

root@k8s-master-10:~# free -h
               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       321Mi       2.9Gi       1.0Mi       604Mi       3.2Gi
Swap:          3.8Gi          0B       3.8Gi
root@k8s-master-10:~# sudo swapoff  -a
root@k8s-master-10:~# free -h
               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       321Mi       2.9Gi       1.0Mi       604Mi       3.2Gi
Swap:             0B          0B          0B
root@k8s-master-10:~#

Disable SELinux

SELinux is disabled by default on Ubuntu; confirm this with the following commands:
# Install the policycoreutils package
sudo apt install -y policycoreutils
# Check the SELinux status
sestatus
You can see that SELinux is disabled on all servers:
# Install the policycoreutils package
sudo apt install -y policycoreutils

root@k8s-master-10:~# sudo apt install -y policycoreutils
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  selinux-utils
The following NEW packages will be installed:
  policycoreutils selinux-utils
0 upgraded, 2 newly installed, 0 to remove and 17 not upgraded.
Need to get 644 kB of archives.
After this operation, 4,661 kB of additional disk space will be used.
Get:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy/universe amd64 selinux-utils amd64 3.3-1build2 [107 kB]
Get:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy/universe amd64 policycoreutils amd64 3.3-1build1 [537 kB]
Fetched 644 kB in 4s (154 kB/s)
Selecting previously unselected package selinux-utils.
(Reading database ... 92796 files and directories currently installed.)
Preparing to unpack .../selinux-utils_3.3-1build2_amd64.deb ...
Unpacking selinux-utils (3.3-1build2) ...
Selecting previously unselected package policycoreutils.
Preparing to unpack .../policycoreutils_3.3-1build1_amd64.deb ...
Unpacking policycoreutils (3.3-1build1) ...
Setting up selinux-utils (3.3-1build2) ...
Setting up policycoreutils (3.3-1build1) ...
Processing triggers for man-db (2.10.2-1) ...
Scanning processes...
Scanning linux images...

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.
root@k8s-master-10:~#



# Check the SELinux status
sestatus

root@k8s-master-10:~# sestatus
SELinux status:                 disabled

Configure the hosts File

The /etc/hosts configuration file needs to be modified. Open it with:
sudo vim /etc/hosts
Then comment out the original hostname entry and append the following resolution lines at the end of the file (adjust the IP addresses to your own):
172.18.8.150 k8s-control-plane
172.18.8.151 k8s-worker01
172.18.8.152 k8s-worker02

# The entries for the three hosts used in this walkthrough are:
172.18.26.152 k8s-master-10
172.18.26.153 k8s-node-11
172.18.26.143 k8s-node-12

# Verify that the hostnames resolve:
ping -c 2 k8s-master-10
ping -c 2 k8s-node-11
ping -c 2 k8s-node-12
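The same entries can also be appended non-interactively with the heredoc style used elsewhere in this guide (substitute your own IPs and hostnames):

cat <<EOF | sudo tee -a /etc/hosts
172.18.26.152 k8s-master-10
172.18.26.153 k8s-node-11
172.18.26.143 k8s-node-12
EOF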


# Comment out the default 127.0.1.1 k8s-master-10 line

root@k8s-master-10:~# cat /etc/hosts
127.0.0.1 localhost
# 127.0.1.1 k8s-master-10

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

172.18.26.152 k8s-master-10
172.18.26.153 k8s-node-11
172.18.26.143 k8s-node-12
root@k8s-master-10:~# 

Forward IPv4 and Let iptables See Bridged Traffic

Run the following commands (copied from the official Kubernetes documentation).
They load the overlay and br_netfilter kernel modules persistently:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
# Enable IPv4 forwarding and make bridged traffic visible to iptables
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
Confirm that the br_netfilter and overlay modules are loaded by running:
lsmod | grep br_netfilter
lsmod | grep overlay
Confirm that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl configuration by running:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
root@k8s-master-10:~# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
> overlay
> br_netfilter
> EOF
overlay
br_netfilter
root@k8s-master-10:~# sudo modprobe overlay
sudo modprobe br_netfilterroot@k8s-master-10:~# 
root@k8s-master-10:~# sudo modprobe overlay
root@k8s-master-10:~# sudo modprobe br_netfilter
root@k8s-master-10:~# 

root@k8s-master-10:~# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> net.ipv4.ip_forword = 1
> EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forword = 1

root@k8s-master-10:~# sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
* Applying /etc/sysctl.conf ...
root@k8s-master-10:~# 


root@k8s-master-10:~# lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                307200  1 br_netfilter
root@k8s-master-10:~# lsmod | grep overlay
overlay               151552  0
root@k8s-master-10:~# 

root@k8s-master-10:~# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 0
root@k8s-master-10:~# 
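Note: the transcript above wrote the misspelled key net.ipv4.ip_forword into /etc/sysctl.d/k8s.conf, which is why sysctl --system never prints net.ipv4.ip_forward and why the verification shows net.ipv4.ip_forward = 0 (a later kubeadm join preflight also warns about this). If your file ended up with the misspelling, fix it and re-apply:

sudo sed -i 's/ip_forword/ip_forward/' /etc/sysctl.d/k8s.conf
sudo sysctl --system
sysctl net.ipv4.ip_forward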



Install the Container Runtime

The container runtime installed here is containerd. Download address:
https://github.com/containerd/containerd/releases/download/v1.7.13/cri-containerd-cni-1.7.13-linux-amd64.tar.gz
# It can be downloaded with the following command:
curl -LO https://github.com/containerd/containerd/releases/download/v1.7.13/cri-containerd-cni-1.7.13-linux-amd64.tar.gz
# Then extract it into the root directory:
sudo tar -zxvf cri-containerd-cni-1.7.13-linux-amd64.tar.gz -C /
# Then check the installed version with this command:
containerd -v


# Copy the tarball from the other host (note the trailing "." destination):
scp longchi@172.18.8.150:/home/longchi/cri-containerd-cni-1.7.13-linux-amd64.tar.gz .


# Create the configuration directory with:
sudo mkdir /etc/containerd

# Then generate the default configuration file with:
containerd config default | sudo tee /etc/containerd/config.toml

# Then edit the file:
sudo vim /etc/containerd/config.toml
1. Around line 65, change the sandbox_image value to registry.aliyuncs.com/google_containers/pause:3.9

2. Around line 137, change the SystemdCgroup value to true
After saving and exiting, start containerd with:
sudo systemctl enable --now containerd # start on boot
# Check containerd's status
sudo systemctl status containerd
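The two edits can also be applied with sed instead of vim (a sketch; the default values being replaced may differ slightly between containerd versions, so review the file afterwards):

sudo sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml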

Install containerd on all machines: https://github.com/containerd/containerd/releases/download/v1.7.14/cri-containerd-cni-1.7.14-linux-amd64.tar.gz

# Download
wget https://github.com/containerd/containerd/releases/download/v1.7.14/cri-containerd-cni-1.7.14-linux-amd64.tar.gz

wget https://github.com/containerd/containerd/releases/download/v1.7.13/cri-containerd-cni-1.7.13-linux-amd64.tar.gz

# List the contents of the tarball: tar -tf cri-containerd-cni-1.7.14-linux-amd64.tar.gz
root@k8s-master-10:~# tar -tf cri-containerd-cni-1.7.14-linux-amd64.tar.gz
cri-containerd.DEPRECATED.txt
etc/
etc/crictl.yaml
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
usr/
usr/local/
usr/local/bin/
usr/local/bin/containerd-shim
usr/local/bin/critest
usr/local/bin/ctr
usr/local/bin/ctd-decoder
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/containerd
usr/local/bin/containerd-stress
usr/local/bin/crictl
usr/local/bin/containerd-shim-runc-v2
usr/local/sbin/
usr/local/sbin/runc
opt/
opt/containerd/
opt/containerd/cluster/
opt/containerd/cluster/gce/
opt/containerd/cluster/gce/env
opt/containerd/cluster/gce/cloud-init/
opt/containerd/cluster/gce/cloud-init/master.yaml
opt/containerd/cluster/gce/cloud-init/node.yaml
opt/containerd/cluster/gce/configure.sh
opt/containerd/cluster/gce/cni.template
opt/containerd/cluster/version
opt/cni/
opt/cni/bin/
opt/cni/bin/loopback
opt/cni/bin/bridge
opt/cni/bin/tuning
opt/cni/bin/sbr
opt/cni/bin/dhcp
opt/cni/bin/macvlan
opt/cni/bin/static
opt/cni/bin/host-local
opt/cni/bin/vrf
opt/cni/bin/firewall
opt/cni/bin/portmap
opt/cni/bin/bandwidth
opt/cni/bin/ptp
opt/cni/bin/dummy
opt/cni/bin/ipvlan
opt/cni/bin/vlan
opt/cni/bin/host-device
root@k8s-master-10:~#



# Sync with SCP. Run the following two commands on the host that already has the cri-containerd-cni-1.7.14-linux-amd64.tar.gz file:
scp -r cri-containerd-cni-1.7.14-linux-amd64.tar.gz root@172.18.26.153:/root
scp -r cri-containerd-cni-1.7.14-linux-amd64.tar.gz root@172.18.26.143:/root

Basic scp syntax (scp copies files across hosts):
$ scp [option] /path/to/source/file user@server-IP:/path/to/destination
/path/to/source/file - the source file you intend to copy to the remote host
user@server-IP: - the username and IP address of the remote system; note the colon after the IP address


Option reference
'scp' - copies files across hosts
'-r' - recursively copies a directory and its contents
'-C' - compresses data during the copy (uppercase C; lowercase -c selects the cipher instead)
'-p' - lowercase p, preserves the file's access and modification times
'-P' - uppercase P, specifies the SSH port if the default is not 22

# Extract the file into the root directory
tar -zxvf cri-containerd-cni-1.7.14-linux-amd64.tar.gz -C /


root@k8s-node-12:~# tar -zxvf cri-containerd-cni-1.7.14-linux-amd64.tar.gz -C /
cri-containerd.DEPRECATED.txt
etc/
etc/crictl.yaml
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
usr/
usr/local/
usr/local/bin/
usr/local/bin/containerd-shim
usr/local/bin/critest
usr/local/bin/ctr
usr/local/bin/ctd-decoder
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/containerd
usr/local/bin/containerd-stress
usr/local/bin/crictl
usr/local/bin/containerd-shim-runc-v2
usr/local/sbin/
usr/local/sbin/runc
opt/
opt/containerd/
opt/containerd/cluster/
opt/containerd/cluster/gce/
opt/containerd/cluster/gce/env
opt/containerd/cluster/gce/cloud-init/
opt/containerd/cluster/gce/cloud-init/master.yaml
opt/containerd/cluster/gce/cloud-init/node.yaml
opt/containerd/cluster/gce/configure.sh
opt/containerd/cluster/gce/cni.template
opt/containerd/cluster/version
opt/cni/
opt/cni/bin/
opt/cni/bin/loopback
opt/cni/bin/bridge
opt/cni/bin/tuning
opt/cni/bin/sbr
opt/cni/bin/dhcp
opt/cni/bin/macvlan
opt/cni/bin/static
opt/cni/bin/host-local
opt/cni/bin/vrf
opt/cni/bin/firewall
opt/cni/bin/portmap
opt/cni/bin/bandwidth
opt/cni/bin/ptp
opt/cni/bin/dummy
opt/cni/bin/ipvlan
opt/cni/bin/vlan
opt/cni/bin/host-device


# The root directory before extraction
root@k8s-node-12:~# ls /
bin    dev   lib    libx32      mnt   root  snap      sys  var
boot   etc   lib32  lost+found  opt   run   srv       tmp
cdrom  home  lib64  media       proc  sbin  swap.img  usr
root@k8s-node-12:~#

# The root directory after extraction
root@k8s-node-12:~# ls /
bin                            dev   lib32       media  root  srv       usr
boot                           etc   lib64       mnt    run   swap.img  var
cdrom                          home  libx32      opt    sbin  sys
cri-containerd.DEPRECATED.txt  lib   lost+found  proc   snap  tmp
root@k8s-node-12:~#




# The containerd tarball extracts these files and directories: etc opt usr cri-containerd.DEPRECATED.txt
# Remove any leftover copies extracted into the home directory (the installed files live under /):
root@k8s-master-10:~# rm -rf etc opt usr cri-containerd.DEPRECATED.txt

# Add the executable paths to $PATH
vim /etc/profile   # add the line below at the very bottom of the file (not needed by default on CentOS)
export PATH=$PATH:/usr/local/bin:/usr/local/sbin  # add this line

# Apply the change
source /etc/profile

# Then check the installed version with this command:
containerd -v

root@k8s-node-11:~# containerd -v
containerd github.com/containerd/containerd v1.7.14 dcf2847247e18caba8dce86522029642f60fe96b
root@k8s-node-11:~#


# containerd's default configuration file is /etc/containerd/config.toml
# Create the configuration directory with:
sudo mkdir /etc/containerd

# Option 1: generate a default configuration via redirection
containerd config default > /etc/containerd/config.toml
# Option 2: generate the configuration file via tee
containerd config default | sudo tee /etc/containerd/config.toml
# Command explanation
'containerd config default' prints the default configuration
'sudo tee /etc/containerd/config.toml' pipes the output into the '/etc/containerd/config.toml' configuration file

root@k8s-node-11:~# sudo mkdir /etc/containerd
root@k8s-node-11:~# containerd config default | sudo tee /etc/containerd/config.toml

# Inspect the result: ll /etc/containerd/

root@k8s-node-11:~# ll /etc/containerd/
total 20
drwxr-xr-x   2 root root 4096 Apr  3 14:08 ./
drwxr-xr-x 104 root root 4096 Apr  3 13:58 ../
-rw-r--r--   1 root root 8526 Apr  3 14:08 config.toml


# Then edit the file:
sudo vim /etc/containerd/config.toml
1. Around line 65, change the sandbox_image value to registry.aliyuncs.com/google_containers/pause:3.9

sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

2. Around line 137, change the SystemdCgroup value to true
After saving and exiting, start containerd with the command below.

3. Review the result: cat /etc/containerd/config.toml

root@k8s-master-10:/bin# ls
containerd  containerd-shim  containerd-shim-runc-v1  containerd-shim-runc-v2  containerd-stress  ctr
root@k8s-master-10:/bin# 

sudo systemctl enable --now containerd # start on boot

Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
root@k8s-master-10:~#

# Check containerd's status
sudo systemctl status containerd

root@k8s-master-10:~# sudo systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor pr>
     Active: active (running) since Wed 2024-04-03 14:11:16 CST; 1min 10s ago
       Docs: https://containerd.io
    Process: 2531 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SU>
   Main PID: 2532 (containerd)
      Tasks: 7
     Memory: 9.6M
        CPU: 531ms
     CGroup: /system.slice/containerd.service
             └─2532 /usr/local/bin/containerd


Install Kubernetes

Configure and install the apt packages
The following is copied directly from the official documentation; version 1.28 is installed:
Official site: Kubernetes
https://kubernetes.io/
https://v1-28.docs.kubernetes.io/zh-cn/docs/home/

Update the apt package index and install the packages needed to use the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories, so you can ignore the version in the URL:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the Kubernetes apt repository. Note that this repository contains packages only for Kubernetes 1.28; for other Kubernetes minor versions, change the Kubernetes minor version in the URL to match the minor version you want (you should also check that the installation documentation you are reading matches the Kubernetes version you plan to install).
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update the apt package index, install kubelet, kubeadm, and kubectl, and pin their versions:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Check the installed kubeadm version with:
kubeadm version

# Configuration file
sudo vim /etc/containerd/config.toml
#### The parameters in containerd's default configuration file /etc/containerd/config.toml are as follows:

##### version = 2 : essentially the default for recent versions

##### root : where containerd stores its metadata

##### state : containerd's state directory; it is flushed on restart, so it is just a temporary directory

##### address : the socket containerd listens on

##### plugins : sandbox_image configures the CNI plugin here, along with the CNI binary and initialization directories; private registry addresses, certificates, and access credentials are also configured here

##### path : the path to the containerd binary

##### interval : containerd's restart interval

##### runtime : configures the required runtimes

##### runc, containerd-shim : the shim is optional

##### containerd's service file

##### The containerd tarball we downloaded includes an etc/systemd/system/containerd.service file, so we can use systemd to run containerd as a daemon



# Update the apt package index and install the packages needed to use the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

root@k8s-master-10:~# containerd -v
containerd github.com/containerd/containerd v1.7.14 dcf2847247e18caba8dce86522029642f60fe96b
root@k8s-master-10:~# sudo apt-get update
Get:1 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Hit:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy InRelease
Get:3 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates InRelease [119 kB]
Hit:4 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-backports InRelease
Get:5 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/main amd64 Packages [1,519 kB]
Get:6 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/restricted amd64 Packages [1,644 kB]
Get:7 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/universe amd64 Packages [1,060 kB]
Fetched 4,452 kB in 3s (1,766 kB/s)
Reading package lists... Done
root@k8s-master-10:~#

root@k8s-master-10:~# sudo apt-get install -y apt-transport-https ca-certificates curl gpg
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ca-certificates is already the newest version (20230311ubuntu0.22.04.1).
ca-certificates set to manually installed.
curl is already the newest version (7.81.0-1ubuntu1.16).
curl set to manually installed.
gpg is already the newest version (2.2.27-3ubuntu2.1).
gpg set to manually installed.
The following NEW packages will be installed:
  apt-transport-https
0 upgraded, 1 newly installed, 0 to remove and 17 not upgraded.
Need to get 1,510 B of archives.
After this operation, 170 kB of additional disk space will be used.
Get:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/universe amd64 apt-transport-https all 2.4.12 [1,510 B]
Fetched 1,510 B in 2s (784 B/s)
Selecting previously unselected package apt-transport-https.
(Reading database ... 93089 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_2.4.12_all.deb ...
Unpacking apt-transport-https (2.4.12) ...
Setting up apt-transport-https (2.4.12) ...
Scanning processes...
Scanning linux images...

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.
root@k8s-master-10:~#


# Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories, so you can ignore the version in the URL:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

root@k8s-master-10:~# curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
root@k8s-master-10:~#



# Add the Kubernetes apt repository. Note that this repository contains packages only for Kubernetes 1.28; for other Kubernetes minor versions, change the Kubernetes minor version in the URL to match the minor version you want (you should also check that the installation documentation you are reading matches the Kubernetes version you plan to install).


echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

root@k8s-master-10:~# echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /
root@k8s-master-10:~#


# Update the apt package index, install kubelet, kubeadm, and kubectl, and pin their versions:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Check the installed kubeadm version with:
kubeadm version

root@k8s-master-10:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.8", GitCommit:"fc11ff34c34bc1e6ae6981dc1c7b3faa20b1ac2d", GitTreeState:"clean", BuildDate:"2024-03-15T00:05:37Z", GoVersion:"go1.21.8", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master-10:~#


You can see that version v1.28.8 is installed.

Initialize the Cluster

The previous step confirmed that v1.28.8 is installed. Next, run this command on the master node to pull the control-plane images.
The following command is executed only on the master node:

sudo kubeadm config images pull \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.28.8 \
--cri-socket=unix:///run/containerd/containerd.sock

root@k8s-master-10:~# sudo kubeadm config images pull \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.28.8 \
--cri-socket=unix:///run/containerd/containerd.sock

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.8
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.8
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.8
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.8
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.12-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
root@k8s-master-10:~#

# Next, initialize the cluster with the following command (adjust the master node IP address, version, and so on):

sudo kubeadm init \
--apiserver-advertise-address=172.18.26.152 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.28.8 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///run/containerd/containerd.sock \
--ignore-preflight-errors=all



Note: the initialization parameters vary with your machine and the versions of the installed software; copy them carefully to avoid errors.


The command is explained below:
sudo kubeadm init \
# initialize the cluster
--apiserver-advertise-address=172.18.26.152 \
# the control-plane address (the master host's IP)
--image-repository=registry.aliyuncs.com/google_containers \
# the Aliyun image mirror; without it, pulling the images would fail
--kubernetes-version=v1.28.8 \
# the Kubernetes version
--service-cidr=10.96.0.0/12 \
# the virtual IP range assigned to Services
--pod-network-cidr=10.244.0.0/16 \
# the IP range assigned to Pods
--cri-socket=unix:///run/containerd/containerd.sock \
# the container runtime (CRI) socket to use
--ignore-preflight-errors=all
# ignore pre-flight check errors


root@k8s-master-10:~# sudo kubeadm init \
--apiserver-advertise-address=172.18.26.152 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.28.8 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///run/containerd/containerd.sock \
--ignore-preflight-errors=all
[init] Using Kubernetes version: v1.28.8
[preflight] Running pre-flight checks

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.18.26.152:6443 --token ow4k1a.s9yjp1pcid7u5qz5 \
        --discovery-token-ca-cert-hash sha256:ebe3d1ebb48c9d474d948d67589d0075b7257b11342d8c8d1baa9930b1976878
root@k8s-master-10:~#

The original course example initializes the cluster like this (again, adjust the master node IP address and version):
sudo kubeadm init \
--apiserver-advertise-address=172.18.8.150 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=1.28.7 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///run/containerd/containerd.sock
The execution output follows.
First, run these three commands on the master node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then run this command on every worker node (replace the token with your own), and note the --cri-socket=unix:///var/run/containerd/containerd.sock parameter appended at the end:
The following command is run on any number of worker nodes (it lets each node establish a connection with the master):
sudo kubeadm join 172.18.8.150:6443 --token kxz9ng.mhm3zut1x80phcsd \
	--discovery-token-ca-cert-hash sha256:f.... \
	--cri-socket=unix:///run/containerd/containerd.sock
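If the bootstrap token has expired (kubeadm tokens are valid for 24 hours by default), a fresh join command can be printed on the master with the standard kubeadm helper:

kubeadm token create --print-join-command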
Then you can list all nodes from the master (run the following command on the master to print detailed node information):
kubectl get nodes -o wide
You can check the result and see that all nodes were retrieved:
# First, run these three commands on the master node:

Option 1
root@k8s-master-10:~# vim /etc/profile
root@k8s-master-10:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@k8s-master-10:~# ll /etc/kubernetes/admin.conf
-rw------- 1 root root 5651 Apr  4 08:32 /etc/kubernetes/admin.conf
root@k8s-master-10:~# sudo chmod 644 /etc/kubernetes/admin.conf
root@k8s-master-10:~#
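The export above only lasts for the current shell. To make it persistent, the line added to /etc/profile (opened with vim in the transcript) would look like this (a sketch):

# appended at the end of /etc/profile
export KUBECONFIG=/etc/kubernetes/admin.conf
# then reload the profile:
source /etc/profile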

root@k8s-master-10:~# kubectl get nodes
NAME            STATUS   ROLES           AGE   VERSION
k8s-master-10   Ready    control-plane   13m   v1.28.8
root@k8s-master-10:~#

root@k8s-master-10:~# kubectl get nodes -o wide
NAME            STATUS   ROLES           AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-master-10   Ready    control-plane   14m   v1.28.8   192.168.222.152   <none>        Ubuntu 22.04.4 LTS   5.15.0-101-generic   containerd://1.7.14
root@k8s-master-10:~#



Option 2
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config



# Then run this command on every worker node (replace the token with your own), and note the --cri-socket=unix:///var/run/containerd/containerd.sock parameter appended at the end.

# The following command is run on any number of worker nodes (it lets each node establish a connection with the master).

# Note: adjust the parameters below to your own setup
# The instructor's example:
sudo kubeadm join 172.18.8.150:6443 --token kxz9ng.mhm3zut1x80phcsd \
	--discovery-token-ca-cert-hash sha256:f.... \
	--cri-socket=unix:///run/containerd/containerd.sock
	

# The command run on my own machines; execute on the worker (node) machines
sudo kubeadm join 172.18.26.152:6443 --token ow4k1a.s9yjp1pcid7u5qz5 \
        --discovery-token-ca-cert-hash sha256:ebe3d1ebb48c9d474d948d67589d0075b7257b11342d8c8d1baa9930b1976878 \
        --cri-socket=unix:///run/containerd/containerd.sock \
        --ignore-preflight-errors=all



root@k8s-node-12:~#sudo kubeadm join 172.18.26.152:6443 --token ow4k1a.s9yjp1pcid7u5qz5 \
        --discovery-token-ca-cert-hash sha256:ebe3d1ebb48c9d474d948d67589d0075b7257b11342d8c8d1baa9930b1976878 \
        --cri-socket=unix:///run/containerd/containerd.sock \
        --ignore-preflight-errors=all
[preflight] Running pre-flight checks
        [WARNING FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-node-12:~#


# Then you can list all nodes from the master (run the following command on the master to print detailed node information):

kubectl get nodes -o wide


root@k8s-master-10:~# kubectl get nodes
NAME            STATUS   ROLES           AGE   VERSION
k8s-master-10   Ready    control-plane   33m   v1.28.8
k8s-node-11     Ready    <none>          32s   v1.28.8



root@k8s-master-10:~# kubectl get nodes -o wide
NAME            STATUS   ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-master-10   Ready    control-plane   36m     v1.28.8   192.168.222.152   <none>        Ubuntu 22.04.4 LTS   5.15.0-101-generic   containerd://1.7.14
k8s-node-11     Ready    <none>          4m9s    v1.28.8   192.168.222.153   <none>        Ubuntu 22.04.4 LTS   5.15.0-101-generic   containerd://1.7.14
k8s-node-12     Ready    <none>          3m16s   v1.28.8   192.168.222.143   <none>        Ubuntu 22.04.4 LTS   5.15.0-101-generic   containerd://1.7.14
root@k8s-master-10:~#


# You can check the result and see that all nodes were retrieved:

root@k8s-master-10:~# kubectl get nodes
NAME            STATUS   ROLES           AGE   VERSION
k8s-master-10   Ready    control-plane   33m   v1.28.8
k8s-node-11     Ready    <none>          62s   v1.28.8
k8s-node-12     Ready    <none>          9s    v1.28.8
root@k8s-master-10:~#

root@k8s-master-10:~# kubectl get nodes -o wide
NAME            STATUS   ROLES           AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-master-10   Ready    control-plane   129m   v1.28.8   192.168.222.152   <none>        Ubuntu 22.04.4 LTS   5.15.0-101-generic   containerd://1.7.14
k8s-node-11     Ready    <none>          96m    v1.28.8   192.168.222.153   <none>        Ubuntu 22.04.4 LTS   5.15.0-101-generic   containerd://1.7.14
k8s-node-12     Ready    <none>          95m    v1.28.8   192.168.222.143   <none>        Ubuntu 22.04.4 LTS   5.15.0-101-generic   containerd://1.7.14
root@k8s-master-10:~#

# View the kubelet logs
root@k8s-master-10:~# journalctl -f -u kubelet

Set Up Command Completion

Setting this up on the master node is generally sufficient.

# Set up kubectl command completion
apt install bash-completion -y
echo "source <(kubectl completion bash)" >> ~/.bashrc
 
 
source .bashrc



root@k8s-master-10:~# apt install bash-completion -y
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
bash-completion is already the newest version (1:2.11-5ubuntu1).
bash-completion set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 17 not upgraded.
root@k8s-master-10:~# echo "source <(kubectl completion bash)" >> ~/.bashrc
root@k8s-master-10:~# source .bashrc




root@k8s-master-10:~# apt update
Hit:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy InRelease
Get:2 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:3 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates InRelease [119 kB]
Hit:4 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-backports InRelease
Get:6 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/main amd64 Packages [1,519 kB]
Get:7 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/restricted amd64 Packages [1,648 kB]
Get:8 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/restricted Translation-en [275 kB]
Get:9 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/universe amd64 Packages [1,060 kB]
Get:10 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/multiverse amd64 Packages [49.6 kB]
Get:11 http://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates/multiverse Translation-en [12.0 kB]
Get:12 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [1,303 kB]
Hit:5 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.28/deb  InRelease
Get:13 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [852 kB]
Fetched 6,947 kB in 3s (2,406 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
17 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@k8s-master-10:~# apt list --upgradable
Listing... Done
apt-utils/jammy-updates 2.4.12 amd64 [upgradable from: 2.4.11]
apt/jammy-updates 2.4.12 amd64 [upgradable from: 2.4.11]
cloud-init/jammy-updates 23.4.4-0ubuntu0~22.04.1 all [upgradable from: 23.3.3-0ubuntu0~22.04.1]
coreutils/jammy-updates 8.32-4.1ubuntu1.2 amd64 [upgradable from: 8.32-4.1ubuntu1.1]
dpkg/jammy-updates 1.21.1ubuntu2.3 amd64 [upgradable from: 1.21.1ubuntu2.2]
ethtool/jammy-updates 1:5.16-1ubuntu0.1 amd64 [upgradable from: 1:5.16-1]
libapt-pkg6.0/jammy-updates 2.4.12 amd64 [upgradable from: 2.4.11]
libgpgme11/jammy-updates 1.16.0-1.2ubuntu4.2 amd64 [upgradable from: 1.16.0-1.2ubuntu4.1]
libldap-2.5-0/jammy-updates 2.5.17+dfsg-0ubuntu0.22.04.1 amd64 [upgradable from: 2.5.16+dfsg-0ubuntu0.22.04.2]
libldap-common/jammy-updates 2.5.17+dfsg-0ubuntu0.22.04.1 all [upgradable from: 2.5.16+dfsg-0ubuntu0.22.04.2]
python3-update-manager/jammy-updates 1:22.04.19 all [upgradable from: 1:22.04.18]
snapd/jammy-updates 2.61.3+22.04 amd64 [upgradable from: 2.58+22.04.1]
tcpdump/jammy-updates 4.99.1-3ubuntu0.2 amd64 [upgradable from: 4.99.1-3ubuntu0.1]
ubuntu-advantage-tools/jammy-updates 31.2~22.04 amd64 [upgradable from: 30~22.04]
ubuntu-pro-client-l10n/jammy-updates 31.2~22.04 amd64 [upgradable from: 30~22.04]
update-manager-core/jammy-updates 1:22.04.19 all [upgradable from: 1:22.04.18]
update-notifier-common/jammy-updates 3.192.54.8 all [upgradable from: 3.192.54.6]
root@k8s-master-10:~# source ~/.bashrc

Install the Calico Network Plugin

This step follows the official documentation. Calico's official site:
https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart
Install the Tigera Calico operator and custom resource definitions:
# Run this command on the master node
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yaml
You can see that it was created successfully:
# Check how the resources were created
kubectl get all -o wide -n tigera-operator
Next, the required custom resources must be installed. Because our Pod network segment differs from the one in the Calico docs, download the file first and change the network segment:
# Download the custom resources file (run on the master node)
curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml
# Change the Pod network segment (inspect with: cat custom-resources.yaml)
sed -i 's/cidr: 192.168.0.0/cidr: 10.244.0.0/g' custom-resources.yaml

You can then see the change was applied.
Finally, create the resources from this file by running:
kubectl create -f custom-resources.yaml
# After creation succeeds, watch the resources come up
watch kubectl get all -o wide -n calico-system
watch kubectl get pods -n calico-system  # from the official docs

# Copied from the official docs
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml
You can see that it was created successfully:
# Download the Calico v3.27.3.tar.gz network plugin on Windows and upload it to the server with the WinSCP remote tool:
https://github.com/projectcalico/calico/archive/refs/tags/v3.27.3.tar.gz
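If the server has direct internet access, the same tarball can be fetched on the server itself, skipping the Windows/WinSCP detour (wget is used the same way as for the containerd tarball earlier):

wget https://github.com/projectcalico/calico/archive/refs/tags/v3.27.3.tar.gz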

# List the contents of the calico-3.27.3.tar.gz package
root@k8s-master-10:~# tar -tf calico-3.27.3.tar.gz

root@k8s-master-10:~# ls
calico-3.27.3.tar.gz  cri-containerd-cni-1.7.14-linux-amd64.tar.gz  snap


# Extract
root@k8s-master-10:~# tar -zxvf calico-3.27.3.tar.gz -C /

root@k8s-master-10:~# ls /
app  boot           cdrom                          dev  home  lib32  libx32      media  opt   root  sbin  srv       sys  usr
bin  calico-3.27.3  cri-containerd.DEPRECATED.txt  etc  lib   lib64  lost+found  mnt    proc  run   snap  swap.img  tmp  var
root@k8s-master-10:~#
root@k8s-master-10:~# cd /
root@k8s-master-10:/# ls
app  boot           cdrom                          dev  home  lib32  libx32      media  opt   root  sbin  srv       sys  usr
bin  calico-3.27.3  cri-containerd.DEPRECATED.txt  etc  lib   lib64  lost+found  mnt    proc  run   snap  swap.img  tmp  var
root@k8s-master-10:/# cd calico-3.27.3/
root@k8s-master-10:/calico-3.27.3# ls
api         calico      confd                 DEVELOPER_GUIDE.md  go.mod            libcalico-go  manifests          pod2daemon     SECURITY.md
apiserver   calicoctl   CONTRIBUTING_DOCS.md  devstack            go.sum            lib.Makefile  metadata.mk        process        typha
app-policy  charts      CONTRIBUTING.md       e2e                 hack              LICENSE.md    networking-calico  README.md
AUTHORS.md  cni-plugin  crypto                felix               kube-controllers  Makefile      node               release-notes
root@k8s-master-10:/calico-3.27.3#


# Edit the configuration file: change 'cidr: 192.168.0.0/16' to 'cidr: 10.244.0.0/16'
root@k8s-master-10:~/calico-3.27.3# vim /root/calico-3.27.3/manifests/custom-resources.yaml
root@k8s-master-10:~/calico-3.27.3# vim /root/calico-3.27.3/manifests/custom-resources.yaml

root@k8s-master-10:~/calico-3.27.3# cat /root/calico-3.27.3/manifests/custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

root@k8s-master-10:~/calico-3.27.3/manifests# ls
alp                  calicoctl.yaml           calico-vxlan.yaml  crds.yaml              generate.sh                                   operator-crds.yaml
apiserver.yaml       calico-etcd.yaml         calico.yaml        csi-driver.yaml        grafana-dashboards.yaml                       README.md
calico-bpf.yaml      calico-policy-only.yaml  canal-etcd.yaml    custom-resources.yaml  ocp                                           tigera-operator.yaml
calicoctl-etcd.yaml  calico-typha.yaml        canal.yaml         flannel-migration      ocp-tigera-operator-no-resource-loading.yaml
root@k8s-master-10:~/calico-3.27.3/manifests# pwd
/root/calico-3.27.3/manifests

# Create the resource definitions: install the Tigera Calico operator and custom resource definitions
root@k8s-master-10:~/calico-3.27.3/manifests# vim tigera-operator.yaml
root@k8s-master-10:~/calico-3.27.3/manifests# kubectl create -f tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
root@k8s-master-10:~/calico-3.27.3/manifests#

# The required custom resources must be installed. Because our Pod network segment differs from the Calico docs, download the file first and change the network segment:
# Edit the configuration file: change 'cidr: 192.168.0.0/16' to 'cidr: 10.244.0.0/16'
root@k8s-master-10:~/calico-3.27.3# vim /root/calico-3.27.3/manifests/custom-resources.yaml

# Install the required custom resources
root@k8s-master-10:~/calico-3.27.3/manifests# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

root@k8s-master-10:~/calico-3.27.3/manifests# kubectl create -f /root/calico-3.27.3/manifests/custom-resources.yaml


root@k8s-master-10:~/calico-3.27.3/manifests# kubectl get all -o wide -n tigera-operator
NAME                                   READY   STATUS    RESTARTS   AGE     IP                NODE          NOMINATED NODE   READINESS GATES
pod/tigera-operator-6bfc79cb9c-4bv7n   1/1     Running   0          9m28s   192.168.222.153   k8s-node-11   <none>           <none>

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS        IMAGES                            SELECTOR
deployment.apps/tigera-operator   1/1     1            1           9m29s   tigera-operator   quay.io/tigera/operator:v1.32.7   name=tigera-operator

NAME                                         DESIRED   CURRENT   READY   AGE     CONTAINERS        IMAGES                            SELECTOR
replicaset.apps/tigera-operator-6bfc79cb9c   1         1         1       9m28s   tigera-operator   quay.io/tigera/operator:v1.32.7   name=tigera-operator,pod-template-hash=6bfc79cb9c
root@k8s-master-10:~/calico-3.27.3/manifests# watch kubectl get all -o wide -n calico-system
root@k8s-master-10:~/calico-3.27.3/manifests#

kubectl create -f custom-resources.yaml
# After creation succeeds, watch the resources come up
watch kubectl get all -o wide -n calico-system
watch kubectl get pods -n calico-system 

# Copied from the official docs
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml


watch kubectl get all -o wide -n calico-system
root@k8s-master-10:~/calico-3.27.3/manifests# kubectl get pods -n calico-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-8bd8cf9f9-vh7dj   1/1     Running   0          18m
calico-node-fqftx                         1/1     Running   0          18m
calico-node-j7klt                         1/1     Running   0          18m
calico-node-sxl89                         1/1     Running   0          18m
calico-typha-5b8b66bbb9-8xb6q             1/1     Running   0          18m
calico-typha-5b8b66bbb9-dqrbs             1/1     Running   0          18m
csi-node-driver-mt4gt                     2/2     Running   0          18m
csi-node-driver-pcj5h                     2/2     Running   0          18m
csi-node-driver-r7tq2                     2/2     Running   0          18m

Delete a pod if needed (it will be recreated by its controller):
root@k8s-master-10:~/calico-3.27.3/manifests# kubectl delete pods calico-node-tjjsl -n calico-system

watch kubectl get pods -n calico-system
root@k8s-master-10:~# watch kubectl get pods -n calico-system

# Check node status: kubectl get nodes

root@k8s-master-10:~# kubectl get nodes
NAME            STATUS   ROLES           AGE   VERSION
k8s-master-10   Ready    control-plane   11h   v1.28.8
k8s-node-11     Ready    <none>          11h   v1.28.8
k8s-node-12     Ready    <none>          11h   v1.28.8
root@k8s-master-10:~#
root@k8s-master-10:~# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS       AGE
coredns-66f779496c-9fc8c                1/1     Running   2 (108m ago)   11h
coredns-66f779496c-s94mm                1/1     Running   2 (108m ago)   11h
etcd-k8s-master-10                      1/1     Running   2 (108m ago)   11h
kube-apiserver-k8s-master-10            1/1     Running   2 (108m ago)   11h
kube-controller-manager-k8s-master-10   1/1     Running   2 (108m ago)   11h
kube-proxy-79785                        1/1     Running   2 (108m ago)   11h
kube-proxy-cctnx                        1/1     Running   1 (109m ago)   11h
kube-proxy-tqq9m                        1/1     Running   1 (110m ago)   11h
kube-scheduler-k8s-master-10            1/1     Running   2 (108m ago)   11h
root@k8s-master-10:~#
# Check whether DNS resolution works: dig -t a www.baidu.com @10.96.0.10
root@k8s-master-10:~# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   11h
root@k8s-master-10:~#

# Resolve a domain name through the cluster DNS
root@k8s-master-10:~# dig -t a www.baidu.com @10.96.0.10

; <<>> DiG 9.18.18-0ubuntu0.22.04.2-Ubuntu <<>> -t a www.baidu.com @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30419
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 049a04f4fc15cd71 (echoed)
;; QUESTION SECTION:
;www.baidu.com.                 IN      A

;; ANSWER SECTION:
www.baidu.com.          5       IN      CNAME   www.a.shifen.com.
www.a.shifen.com.       5       IN      A       180.101.50.242
www.a.shifen.com.       5       IN      A       180.101.50.188

;; Query time: 8 msec
;; SERVER: 10.96.0.10#53(10.96.0.10) (UDP)
;; WHEN: Thu Apr 04 20:19:08 CST 2024
;; MSG SIZE  rcvd: 161

root@k8s-master-10:~#
 # Install ipvsadm; the first run of ipvsadm -l shows it is missing
 root@k8s-master-10:~# ipvsadm -l
Command 'ipvsadm' not found, but can be installed with:
apt install ipvsadm
root@k8s-master-10:~# apt install ipvsadm
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done

root@k8s-master-10:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
root@k8s-master-10:~#

root@k8s-master-10:~# iptables -nL

# Change mode: "" to mode: "ipvs" and strictARP: false to strictARP: true
kubectl edit configmap kube-proxy -n kube-system
root@k8s-master-10:~# kubectl edit configmap kube-proxy -n kube-system
configmap/kube-proxy edited
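For reference, after the edit the relevant fragment of config.conf inside the kube-proxy ConfigMap looks roughly like this (a sketch; surrounding fields omitted):

ipvs:
  strictARP: true
mode: "ipvs"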

root@k8s-master-10:~# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS       AGE
coredns-66f779496c-9fc8c                1/1     Running   2 (137m ago)   12h
coredns-66f779496c-s94mm                1/1     Running   2 (137m ago)   12h
etcd-k8s-master-10                      1/1     Running   2 (137m ago)   12h
kube-apiserver-k8s-master-10            1/1     Running   2 (137m ago)   12h
kube-controller-manager-k8s-master-10   1/1     Running   2 (137m ago)   12h
kube-proxy-79785                        1/1     Running   2 (137m ago)   12h
kube-proxy-cctnx                        1/1     Running   1 (138m ago)   11h
kube-proxy-tqq9m                        1/1     Running   1 (139m ago)   11h
kube-scheduler-k8s-master-10            1/1     Running   2 (137m ago)   12h
root@k8s-master-10:~#

# Restart: kubectl rollout restart daemonset kube-proxy -n kube-system
root@k8s-master-10:~# kubectl
kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/

Basic Commands (Beginner):
  create          Create a resource from a file or from stdin
  expose          Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service
  run             Run a particular image on the cluster
  set             Set specific features on objects

Basic Commands (Intermediate):
  explain         Get documentation for a resource
  get             Display one or many resources
  edit            Edit a resource on the server
  delete          Delete resources by file names, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout         Manage the rollout of a resource
  scale           Set a new size for a deployment, replica set, or replication controller
  autoscale       Auto-scale a deployment, replica set, stateful set, or replication controller

# The pod restart command
root@k8s-master-10:~# kubectl rollout restart daemonset kube-proxy -n kube-system
daemonset.apps/kube-proxy restarted
root@k8s-master-10:~#

# Check: kubectl get pods -n kube-system

root@k8s-master-10:~# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS       AGE
coredns-66f779496c-9fc8c                1/1     Running   2 (145m ago)   12h
coredns-66f779496c-s94mm                1/1     Running   2 (145m ago)   12h
etcd-k8s-master-10                      1/1     Running   2 (145m ago)   12h
kube-apiserver-k8s-master-10            1/1     Running   2 (145m ago)   12h
kube-controller-manager-k8s-master-10   1/1     Running   2 (145m ago)   12h
kube-proxy-p7bx8                        1/1     Running   0              2m20s
kube-proxy-r78bv                        1/1     Running   0              2m19s
kube-proxy-vwmjr                        1/1     Running   0              2m21s
kube-scheduler-k8s-master-10            1/1     Running   2 (145m ago)   12h
root@k8s-master-10:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  k8s-master-10:https rr
  -> k8s-master-10:6443           Masq    1      0          0
TCP  k8s-master-10:domain rr
  -> 10.88.0.6:domain             Masq    1      0          0
  -> 10.88.0.7:domain             Masq    1      0          0
TCP  k8s-master-10:9153 rr
  -> 10.88.0.6:9153               Masq    1      0          0
  -> 10.88.0.7:9153               Masq    1      0          0
TCP  k8s-master-10:https rr
  -> 10.244.42.129:5443           Masq    1      0          0
  -> 10.244.46.193:5443           Masq    1      0          0
TCP  k8s-master-10:5473 rr
  -> k8s-node-12:5473             Masq    1      0          0
  -> k8s-node-11:5473             Masq    1      0          0
UDP  k8s-master-10:domain rr
  -> 10.88.0.6:domain             Masq    1      0          0
  -> 10.88.0.7:domain             Masq    1      0          0
root@k8s-master-10:~#

Deploy an Application

We will deploy an nginx application and expose it via a NodePort Service. Create an nginx-deploy.yaml file with the contents shown below:
vim nginx-deploy.yaml (run on the master node)
kubectl apply -f nginx-deploy.yaml
# Created as follows
deployment.apps/nginx-deployment created
service/nginx-service created
# List the resources
kubectl get all -o wide
# Access the Service's ClusterIP directly
curl 10.100.140.66
# List the nodes
kubectl get node -o wide

Below is nginx-deploy.yaml:

root@k8s-master-10:~# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort
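Because the Service is of type NodePort with nodePort 30080, the application should also be reachable on port 30080 of any node's IP, for example (node IPs from this walkthrough; substitute your own):

curl http://172.18.26.152:30080
curl http://172.18.26.153:30080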

 

Then deploy the application with the following command:
kubectl apply -f nginx-deploy.yaml

The execution result is shown below:

root@k8s-master-10:~# vim nginx-deploy.yaml
root@k8s-master-10:~# kubectl apply -f nginx-deploy.yaml
deployment.apps/nginx-deployment unchanged
service/nginx-service unchanged


root@k8s-master-10:~# kubectl get all -o wide

root@k8s-master-10:~# kubectl get pods
NAME                                                       READY   STATUS    RESTARTS   AGE
my-kubernetes-dashboard-api-5f65f88d8d-bv2sm               1/1     Running   0          5h2m
my-kubernetes-dashboard-auth-5dd996bdf8-875fc              1/1     Running   0          5h2m
my-kubernetes-dashboard-kong-565d77fbd4-4qmh7              1/1     Running   0          5h2m
my-kubernetes-dashboard-metrics-scraper-564687b79b-4p5b7   1/1     Running   0          5h2m
my-kubernetes-dashboard-web-6dcff9c9f8-6wklc               1/1     Running   0          5h2m
nginx-deployment-f7f5c78c5-fpzng                           1/1     Running   0          28m
nginx-deployment-f7f5c78c5-mhvvd                           1/1     Running   0          28m
nginx-deployment-f7f5c78c5-sf4ch                           1/1     Running   0          28m
root@k8s-master-10:~#

root@k8s-master-10:~# kubectl get pods -o wide
NAME                                                       READY   STATUS    RESTARTS   AGE    IP              NODE          NOMINATED NODE   READINESS GATES
my-kubernetes-dashboard-api-5f65f88d8d-bv2sm               1/1     Running   0          5h3m   10.244.42.132   k8s-node-12   <none>           <none>
my-kubernetes-dashboard-auth-5dd996bdf8-875fc              1/1     Running   0          5h3m   10.244.42.133   k8s-node-12   <none>           <none>
my-kubernetes-dashboard-kong-565d77fbd4-4qmh7              1/1     Running   0          5h3m   10.244.46.196   k8s-node-11   <none>           <none>
my-kubernetes-dashboard-metrics-scraper-564687b79b-4p5b7   1/1     Running   0          5h3m   10.244.42.131   k8s-node-12   <none>           <none>
my-kubernetes-dashboard-web-6dcff9c9f8-6wklc               1/1     Running   0          5h3m   10.244.46.195   k8s-node-11   <none>           <none>
nginx-deployment-f7f5c78c5-fpzng                           1/1     Running   0          30m    10.244.46.197   k8s-node-11   <none>           <none>
nginx-deployment-f7f5c78c5-mhvvd                           1/1     Running   0          30m    10.244.42.134   k8s-node-12   <none>           <none>
nginx-deployment-f7f5c78c5-sf4ch                           1/1     Running   0          30m    10.244.42.135   k8s-node-12   <none>           <none>
root@k8s-master-10:~#
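
Note how the scheduler has spread the three nginx replicas across the worker nodes. Since the Deployment manages the replica count declaratively, the application can be scaled without editing the manifest; a minimal sketch:

# Scale the Deployment up to 5 replicas
kubectl scale deployment nginx-deployment --replicas=5
# Watch the new Pods get scheduled (Ctrl-C to stop)
kubectl get pods -o wide -w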

Access the nginx application

root@k8s-master-10:~# curl 10.96.169.211
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@k8s-master-10:~#
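
The ClusterIP used above is only reachable from inside the cluster. Because nginx-service is of type NodePort, the same page is also exposed on port 30080 of every node and can be reached from outside the cluster as well (a sketch; <node-ip> is a placeholder for the address of any of your nodes):

# Access nginx from outside the cluster through the NodePort
curl http://<node-ip>:30080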


root@k8s-master-10:~# kubectl get pods -o wide
NAME                                                       READY   STATUS    RESTARTS   AGE     IP              NODE          NOMINATED NODE   READINESS GATES
my-kubernetes-dashboard-api-5f65f88d8d-bv2sm               1/1     Running   0          4h50m   10.244.42.132   k8s-node-12   <none>           <none>
my-kubernetes-dashboard-auth-5dd996bdf8-875fc              1/1     Running   0          4h50m   10.244.42.133   k8s-node-12   <none>           <none>
my-kubernetes-dashboard-kong-565d77fbd4-4qmh7              1/1     Running   0          4h50m   10.244.46.196   k8s-node-11   <none>           <none>
my-kubernetes-dashboard-metrics-scraper-564687b79b-4p5b7   1/1     Running   0          4h50m   10.244.42.131   k8s-node-12   <none>           <none>
my-kubernetes-dashboard-web-6dcff9c9f8-6wklc               1/1     Running   0          4h50m   10.244.46.195   k8s-node-11   <none>           <none>
nginx-deployment-f7f5c78c5-fpzng                           1/1     Running   0          16m     10.244.46.197   k8s-node-11   <none>           <none>
nginx-deployment-f7f5c78c5-mhvvd                           1/1     Running   0          16m     10.244.42.134   k8s-node-12   <none>           <none>
nginx-deployment-f7f5c78c5-sf4ch                           1/1     Running   0          16m     10.244.42.135   k8s-node-12   <none>           <none>
root@k8s-master-10:~# kubectl get all -o wide
NAME                                                           READY   STATUS    RESTARTS   AGE     IP              NODE          NOMINATED NODE   READINESS GATES
pod/my-kubernetes-dashboard-api-5f65f88d8d-bv2sm               1/1     Running   0          4h53m   10.244.42.132   k8s-node-12   <none>           <none>
pod/my-kubernetes-dashboard-auth-5dd996bdf8-875fc              1/1     Running   0          4h53m   10.244.42.133   k8s-node-12   <none>           <none>
pod/my-kubernetes-dashboard-kong-565d77fbd4-4qmh7              1/1     Running   0          4h53m   10.244.46.196   k8s-node-11   <none>           <none>
pod/my-kubernetes-dashboard-metrics-scraper-564687b79b-4p5b7   1/1     Running   0          4h53m   10.244.42.131   k8s-node-12   <none>           <none>
pod/my-kubernetes-dashboard-web-6dcff9c9f8-6wklc               1/1     Running   0          4h53m   10.244.46.195   k8s-node-11   <none>           <none>
pod/nginx-deployment-f7f5c78c5-fpzng                           1/1     Running   0          19m     10.244.46.197   k8s-node-11   <none>           <none>
pod/nginx-deployment-f7f5c78c5-mhvvd                           1/1     Running   0          19m     10.244.42.134   k8s-node-12   <none>           <none>
pod/nginx-deployment-f7f5c78c5-sf4ch                           1/1     Running   0          19m     10.244.42.135   k8s-node-12   <none>           <none>

NAME                                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE     SELECTOR
service/kubernetes                                ClusterIP   10.96.0.1        <none>        443/TCP                         2d8h    <none>
service/my-kubernetes-dashboard-api               ClusterIP   10.97.201.221    <none>        8000/TCP                        4h53m   app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-api,app.kubernetes.io/part-of=kubernetes-dashboard
service/my-kubernetes-dashboard-auth              ClusterIP   10.105.130.155   <none>        8000/TCP                        4h53m   app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-auth,app.kubernetes.io/part-of=kubernetes-dashboard
service/my-kubernetes-dashboard-kong-manager      NodePort    10.110.152.188   <none>        8002:31095/TCP,8445:30614/TCP   4h53m   app.kubernetes.io/component=app,app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kong
service/my-kubernetes-dashboard-kong-proxy        ClusterIP   10.96.184.198    <none>        443/TCP                         4h53m   app.kubernetes.io/component=app,app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kong
service/my-kubernetes-dashboard-metrics-scraper   ClusterIP   10.98.26.39      <none>        8000/TCP                        4h53m   app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-metrics-scraper,app.kubernetes.io/part-of=kubernetes-dashboard
service/my-kubernetes-dashboard-web               ClusterIP   10.102.237.200   <none>        8000/TCP                        4h53m   app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-web,app.kubernetes.io/part-of=kubernetes-dashboard
service/nginx-service                             NodePort    10.97.163.211    <none>        80:30080/TCP                    15m     app=nginx

NAME                                                      READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS                             IMAGES                                                   SELECTOR
deployment.apps/my-kubernetes-dashboard-api               1/1     1            1           4h53m   kubernetes-dashboard-api               docker.io/kubernetesui/dashboard-api:1.4.1               app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-api,app.kubernetes.io/part-of=kubernetes-dashboard
deployment.apps/my-kubernetes-dashboard-auth              1/1     1            1           4h53m   kubernetes-dashboard-auth              docker.io/kubernetesui/dashboard-auth:1.1.2              app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-auth,app.kubernetes.io/part-of=kubernetes-dashboard
deployment.apps/my-kubernetes-dashboard-kong              1/1     1            1           4h53m   proxy                                  kong:3.6                                                 app.kubernetes.io/component=app,app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kong
deployment.apps/my-kubernetes-dashboard-metrics-scraper   1/1     1            1           4h53m   kubernetes-dashboard-metrics-scraper   docker.io/kubernetesui/dashboard-metrics-scraper:1.1.1   app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-metrics-scraper,app.kubernetes.io/part-of=kubernetes-dashboard
deployment.apps/my-kubernetes-dashboard-web               1/1     1            1           4h53m   kubernetes-dashboard-web               docker.io/kubernetesui/dashboard-web:1.2.3               app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-web,app.kubernetes.io/part-of=kubernetes-dashboard
deployment.apps/nginx-deployment                          3/3     3            3           19m     nginx                                  nginx:alpine                                             app=nginx

NAME                                                                 DESIRED   CURRENT   READY   AGE     CONTAINERS                             IMAGES                                                   SELECTOR
replicaset.apps/my-kubernetes-dashboard-api-5f65f88d8d               1         1         1       4h53m   kubernetes-dashboard-api               docker.io/kubernetesui/dashboard-api:1.4.1               app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-api,app.kubernetes.io/part-of=kubernetes-dashboard,pod-template-hash=5f65f88d8d
replicaset.apps/my-kubernetes-dashboard-auth-5dd996bdf8              1         1         1       4h53m   kubernetes-dashboard-auth              docker.io/kubernetesui/dashboard-auth:1.1.2              app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-auth,app.kubernetes.io/part-of=kubernetes-dashboard,pod-template-hash=5dd996bdf8
replicaset.apps/my-kubernetes-dashboard-kong-565d77fbd4              1         1         1       4h53m   proxy                                  kong:3.6                                                 app.kubernetes.io/component=app,app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kong,pod-template-hash=565d77fbd4
replicaset.apps/my-kubernetes-dashboard-metrics-scraper-564687b79b   1         1         1       4h53m   kubernetes-dashboard-metrics-scraper   docker.io/kubernetesui/dashboard-metrics-scraper:1.1.1   app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-metrics-scraper,app.kubernetes.io/part-of=kubernetes-dashboard,pod-template-hash=564687b79b
replicaset.apps/my-kubernetes-dashboard-web-6dcff9c9f8               1         1         1       4h53m   kubernetes-dashboard-web               docker.io/kubernetesui/dashboard-web:1.2.3               app.kubernetes.io/instance=my-kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-web,app.kubernetes.io/part-of=kubernetes-dashboard,pod-template-hash=6dcff9c9f8
replicaset.apps/nginx-deployment-f7f5c78c5                           3         3         3       19m     nginx                                  nginx:alpine                                             app=nginx,pod-template-hash=f7f5c78c5
root@k8s-master-10:~# kubectl get all
NAME                                                           READY   STATUS    RESTARTS   AGE
pod/my-kubernetes-dashboard-api-5f65f88d8d-bv2sm               1/1     Running   0          4h53m
pod/my-kubernetes-dashboard-auth-5dd996bdf8-875fc              1/1     Running   0          4h53m
pod/my-kubernetes-dashboard-kong-565d77fbd4-4qmh7              1/1     Running   0          4h53m
pod/my-kubernetes-dashboard-metrics-scraper-564687b79b-4p5b7   1/1     Running   0          4h53m
pod/my-kubernetes-dashboard-web-6dcff9c9f8-6wklc               1/1     Running   0          4h53m
pod/nginx-deployment-f7f5c78c5-fpzng                           1/1     Running   0          20m
pod/nginx-deployment-f7f5c78c5-mhvvd                           1/1     Running   0          20m
pod/nginx-deployment-f7f5c78c5-sf4ch                           1/1     Running   0          20m

NAME                                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
service/kubernetes                                ClusterIP   10.96.0.1        <none>        443/TCP                         2d8h
service/my-kubernetes-dashboard-api               ClusterIP   10.97.201.221    <none>        8000/TCP                        4h53m
service/my-kubernetes-dashboard-auth              ClusterIP   10.105.130.155   <none>        8000/TCP                        4h53m
service/my-kubernetes-dashboard-kong-manager      NodePort    10.110.152.188   <none>        8002:31095/TCP,8445:30614/TCP   4h53m
service/my-kubernetes-dashboard-kong-proxy        ClusterIP   10.96.184.198    <none>        443/TCP                         4h53m
service/my-kubernetes-dashboard-metrics-scraper   ClusterIP   10.98.26.39      <none>        8000/TCP                        4h53m
service/my-kubernetes-dashboard-web               ClusterIP   10.102.237.200   <none>        8000/TCP                        4h53m
service/nginx-service                             NodePort    10.97.163.211    <none>        80:30080/TCP                    15m

NAME                                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-kubernetes-dashboard-api               1/1     1            1           4h53m
deployment.apps/my-kubernetes-dashboard-auth              1/1     1            1           4h53m
deployment.apps/my-kubernetes-dashboard-kong              1/1     1            1           4h53m
deployment.apps/my-kubernetes-dashboard-metrics-scraper   1/1     1            1           4h53m
deployment.apps/my-kubernetes-dashboard-web               1/1     1            1           4h53m
deployment.apps/nginx-deployment                          3/3     3            3           20m

NAME                                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/my-kubernetes-dashboard-api-5f65f88d8d               1         1         1       4h53m
replicaset.apps/my-kubernetes-dashboard-auth-5dd996bdf8              1         1         1       4h53m
replicaset.apps/my-kubernetes-dashboard-kong-565d77fbd4              1         1         1       4h53m
replicaset.apps/my-kubernetes-dashboard-metrics-scraper-564687b79b   1         1         1       4h53m
replicaset.apps/my-kubernetes-dashboard-web-6dcff9c9f8               1         1         1       4h53m
replicaset.apps/nginx-deployment-f7f5c78c5                           3         3         3       20m
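
When the demo application is no longer needed, everything created by the manifest can be removed in one step:

# Delete the Deployment and Service defined in nginx-deploy.yaml
kubectl delete -f nginx-deploy.yaml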


