Deploying Kubernetes 1.29 with kubeadm

1. Preparing the Kubernetes cluster nodes

1.1 Server requirements

Two or more servers running Linux; here the virtual machines are created with VMware.
Hardware: 2 GB RAM or more, and 2 CPUs or more.
Internet access is required to pull images; if the servers cannot reach the internet, download the images in advance and import them on each node. (A quick hardware check is shown below.)
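
A quick way to confirm that a node meets these minimums:

nproc     # should report 2 or more CPUs
free -h   # the total "Mem" value should be 2.0Gi or more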

1.2 Cluster plan

Software versions:
Operating system: CentOS Stream release 9
Kubernetes: 1.29
Docker: 26.0.1

Role          IP               Notes
k8s-master    192.168.205.130  master (control plane)
k8s-node01    192.168.205.131  node (worker)

1.3 Server environment preparation (all nodes)

# Set the hostname according to the plan [run on the master node]
hostnamectl set-hostname k8s-master

# Set the hostname according to the plan [run on the node01 node]
hostnamectl set-hostname k8s-node01

1.4 Hostname and IP resolution (all nodes)

cat >> /etc/hosts << EOF
192.168.205.130 k8s-master
192.168.205.131 k8s-node01
EOF
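
Optionally verify that both names resolve from every node:

ping -c 1 k8s-master
ping -c 1 k8s-node01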

1.5 Disable the firewall and SELinux (all nodes)

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

1.6 Time synchronization (all nodes)

yum install chrony -y
systemctl start chronyd && systemctl enable chronyd && chronyc sources
date

1.7 Enable IP forwarding and bridge filtering in the kernel (all nodes)

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system

# Load the br_netfilter module
modprobe  br_netfilter
lsmod |grep  br_netfilter
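
The modprobe above only lasts until the next reboot; an optional systemd modules-load.d entry (a minimal sketch, the file name is arbitrary) keeps the module loaded permanently:

cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF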

1.8 Configure IPVS forwarding (all nodes)

yum -y install ipset ipvsadm

# Configure how the ipvsadm modules are loaded
# Add the modules that need to be loaded
mkdir -p /etc/sysconfig/ipvsadm
cat > /etc/sysconfig/ipvsadm/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

# Make the script executable, run it, and check that the modules are loaded
chmod 755 /etc/sysconfig/ipvsadm/ipvs.modules && bash /etc/sysconfig/ipvsadm/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
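
Like br_netfilter above, these modules are only loaded for the current boot; an optional modules-load.d file (a sketch, file name arbitrary) reloads them automatically after a reboot:

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF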

1.9 Disable the swap partition (all nodes)

sed -ri 's/.*swap.*/#&/' /etc/fstab  
swapoff -a 
grep swap /etc/fstab 
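
Verify that no swap is active:

swapon --show   # should print nothing
free -h         # the Swap line should show a total of 0B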

2. Prepare Docker CE and cri-dockerd

Call chain with dockershim (Kubernetes before v1.24): kubelet (client) -> dockershim -> dockerd -> containerd -> containerd-shim -> runc
Call chain with the CRI plugin (Kubernetes v1.24 and later): kubelet (client) -> CRI plugin (built into containerd) -> containerd -> containerd-shim -> runc

Why install both Docker and cri-dockerd?

Kubernetes v1.24 removed dockershim support, and Docker Engine does not implement the CRI standard by default, so the two can no longer be integrated directly. Mirantis and Docker therefore jointly created the cri-dockerd project, which bridges Docker Engine to the CRI specification so that Docker can still serve as the Kubernetes container runtime.

2.1 Set up the Docker YUM repository (all nodes)

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2.2 Install Docker (all nodes)

yum -y install docker-ce

2.3 Start the Docker service (all nodes)

systemctl enable --now docker

2.4 Change the cgroup driver (all nodes)

Add the following to /etc/docker/daemon.json:

# cat > /etc/docker/daemon.json <<EOF
{
        "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

2.5 Restart Docker (all nodes)

systemctl restart docker
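
Verify that Docker now reports the systemd cgroup driver:

docker info | grep -i "cgroup driver"
# Expected: Cgroup Driver: systemd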

2.6 Install cri-dockerd (all nodes)
Reference: https://github.com/Mirantis/cri-dockerd

The pause container in Kubernetes provides the following to every business container in a Pod:

  • PID namespace: the applications in the Pod can see each other's process IDs.
  • Network namespace: the containers in the Pod share the same IP address and port range.
  • IPC namespace: the containers in the Pod can communicate using System V IPC or POSIX message queues.
  • UTS namespace: the containers in the Pod share a single hostname.
  • Volumes (shared storage): every container in the Pod can access Volumes defined at the Pod level.
# Download and install cri-dockerd (v0.3.8 at the time of writing)
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8.amd64.tgz  # package for the amd64 architecture
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8.arm64.tgz  # package for the arm64 architecture
tar xf cri-dockerd-0.3.8.amd64.tgz 
mv cri-dockerd/cri-dockerd  /usr/bin/
rm -rf  cri-dockerd  cri-dockerd-0.3.8.amd64.tgz

# Configure the systemd service unit
cat > /etc/systemd/system/cri-docker.service<<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
# ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
# Specify the container image used as the base container for Pods (the "pause" image)
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.k8s.io/pause:3.9 --container-runtime-endpoint fd:// 
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

cat > /etc/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF

systemctl daemon-reload 
systemctl enable cri-docker && systemctl start cri-docker && systemctl status cri-docker
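
Verify that the CRI socket has been created:

ls -l /var/run/cri-dockerd.sock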

3. Install kubelet, kubeadm, and kubectl

3.1 Configure the Kubernetes YUM repository (all nodes)

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
# exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

3.2 Install kubelet, kubeadm, and kubectl (all nodes)

yum install -y kubelet kubeadm kubectl  # install the default version
yum -y install kubeadm-1.29.0-150500.1.1 kubelet-1.29.0-150500.1.1 kubectl-1.29.0-150500.1.1  # install a specific version
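
Verify the installed versions:

kubeadm version -o short
kubelet --version
kubectl version --client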

3.3 Make the kubelet cgroup driver consistent with Docker (all nodes)

cp /etc/sysconfig/kubelet{,.bak}
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
systemctl enable kubelet

3.4 Install shell autocompletion (optional)

yum install bash-completion -y 
source /usr/share/bash-completion/bash_completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
source  ~/.bashrc   

3.5 List the required images

# kubeadm config images list --kubernetes-version=v1.29.0
registry.k8s.io/kube-apiserver:v1.29.0
registry.k8s.io/kube-controller-manager:v1.29.0
registry.k8s.io/kube-scheduler:v1.29.0
registry.k8s.io/kube-proxy:v1.29.0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.10-0
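
If the nodes can reach registry.k8s.io, these images can optionally be pre-pulled through cri-dockerd before initialization (run on the master node):

kubeadm config images pull --kubernetes-version=v1.29.0 --cri-socket=unix:///var/run/cri-dockerd.sock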

3.6 If network access is restricted, download the required images in advance

# Important
# A kubeadm deployment needs both the Kubernetes control-plane images and the Calico network images.
# The default registry k8s.gcr.io / registry.k8s.io is not reachable from mainland China and no complete mirror was found, so download the images in advance and import them on every node.

# All images that will be used (Kubernetes control-plane images and Calico network images):
# docker images
REPOSITORY                                TAG        IMAGE ID       CREATED         SIZE
calico/kube-controllers                   v3.27.0    4e87edec0297   12 days ago     75.5MB
calico/cni                                v3.27.0    8e8d96a874c0   12 days ago     211MB
calico/pod2daemon-flexvol                 v3.27.0    6506d2e0be2d   12 days ago     15.4MB
calico/node                               v3.27.0    1843802b91be   13 days ago     340MB
registry.k8s.io/kube-apiserver            v1.29.0    1443a367b16d   2 weeks ago     127MB
registry.k8s.io/kube-scheduler            v1.29.0    7ace497ddb8e   2 weeks ago     59.5MB
registry.k8s.io/kube-controller-manager   v1.29.0    0824682bcdc8   2 weeks ago     122MB
registry.k8s.io/kube-proxy                v1.29.0    98262743b26f   2 weeks ago     82.2MB
registry.k8s.io/etcd                      3.5.10-0   a0eed15eed44   8 weeks ago     148MB
registry.k8s.io/coredns/coredns           v1.11.1    cbb01a7bd410   4 months ago    59.8MB
registry.k8s.io/pause                     3.9        e6f181688397   14 months ago   744kB
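
A minimal sketch of the offline workflow, assuming a machine with internet access is available (the tarball name is just an example):

# On the machine with internet access: export an image to a tarball
docker save -o pause-3.9.tar registry.k8s.io/pause:3.9
# Copy the tarball to each node, then import it
docker load -i pause-3.9.tar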

3.7 Initialize the cluster (run on the master node)

# Initialize the cluster (192.168.205.130 is the master node IP, 10.244.0.0/16 is the Pod CIDR)
kubeadm init \
--apiserver-advertise-address 192.168.205.130 \
--kubernetes-version v1.29.0 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock

# If registry.k8s.io is unreachable, initialize with a mirror image repository instead
kubeadm init \
--apiserver-advertise-address 192.168.205.130 \
--kubernetes-version v1.29.0 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers

Notes:
--apiserver-advertise-address: the address the cluster advertises and that worker nodes use to reach the master. For a highly available setup this should be the VIP; with a single master, the master's own address is fine.
--kubernetes-version: the Kubernetes version, matching the packages installed above.
--pod-network-cidr: the Pod network, which must match the CNI manifest applied later.
--cri-socket: the cri-dockerd endpoint; with containerd you would use --cri-socket unix:///run/containerd/containerd.sock instead.
A configuration-file equivalent of these flags is sketched below.
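
Roughly the same initialization expressed as a kubeadm configuration file (a sketch using the kubeadm.k8s.io/v1beta3 API; adjust the values for your environment):

cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.205.130
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.0
networking:
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml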

3.8 Cluster initialization output

kubeadm init \
--apiserver-advertise-address 192.168.205.130  \
--kubernetes-version v1.29.0 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.29.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.205.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.205.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.205.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.003337 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: nv9u6j.4n2jh1x6bgg7b1fd
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.205.130:6443 --token nv9u6j.4n2jh1x6bgg7b1fd \
	--discovery-token-ca-cert-hash sha256:e39b95badc82de71bb2c933d10007d57f82718c28c492be4c214a8df642d4ae4 

3.9 Create the kubectl configuration directory (master)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Create a token that never expires and print the join command
kubeadm token create --ttl 0  --print-join-command
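
The generated tokens can be listed to confirm they exist and to check their expiry:

kubeadm token list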

3.10 Join worker nodes with the following command (run on each node)

# Output of joining a worker node
kubeadm join 192.168.205.130:6443 --token nv9u6j.4n2jh1x6bgg7b1fd --discovery-token-ca-cert-hash sha256:e39b95badc82de71bb2c933d10007d57f82718c28c492be4c214a8df642d4ae4  --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
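
Optionally, on the master node, give the new worker a role label so that kubectl get nodes no longer shows <none> under ROLES (the label used here is just a common convention, not required):

kubectl label node k8s-node01 node-role.kubernetes.io/worker=worker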

4. Deploy the Calico cluster network plugin (run on the master node)

Flannel is a recommended alternative CNI; to use it instead:
# Download this manifest and apply it directly
wget  https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
kubectl apply -f kube-flannel.yml

Reference: https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart
4.1 Apply the operator manifest

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

4.2 Install via the custom resources manifest

wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml

4.2.1 Edit line 13 of the file so that cidr matches the IP range passed to kubeadm init via --pod-network-cidr (a one-line sed alternative is shown after the snippet)

vim custom-resources.yaml
 11     ipPools:
 12     - blockSize: 26
 13       cidr: 10.244.0.0/16 
 14       encapsulation: VXLANCrossSubnet
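
The same edit as a one-liner, assuming the downloaded manifest still contains Calico's default 192.168.0.0/16 subnet:

sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml
grep cidr custom-resources.yaml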

4.3 Apply the custom resources manifest

kubectl apply -f custom-resources.yaml

4.4 Watch the pods in the calico-system namespace

watch kubectl get pods -n calico-system

4.5 Check that Calico is running

kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5485657c4c-jkzd7   1/1     Running   0          2m27s
calico-node-jw957                          1/1     Running   0          2m27s
calico-node-m5dfr                          1/1     Running   0          2m27s
calico-typha-5dd5d45968-s75bf              1/1     Running   0          2m27s
csi-node-driver-gfm64                      2/2     Running   0          2m27s
csi-node-driver-jzkhx                      2/2     Running   0          2m27s

4.6 Check that the cluster nodes are Ready

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   38m   v1.29.0
k8s-node01     Ready    <none>          21m   v1.29.0

4.7 Check that all pods are running

kubectl get pod -A 
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-5447dffd95-44llf          1/1     Running   0          62s
calico-apiserver   calico-apiserver-5447dffd95-lvfs8          1/1     Running   0          62s
calico-system      calico-kube-controllers-5485657c4c-jkzd7   1/1     Running   0          2m35s
calico-system      calico-node-jw957                          1/1     Running   0          2m35s
calico-system      calico-node-m5dfr                          1/1     Running   0          2m35s
calico-system      calico-typha-5dd5d45968-s75bf              1/1     Running   0          2m35s
calico-system      csi-node-driver-gfm64                      2/2     Running   0          2m35s
calico-system      csi-node-driver-jzkhx                      2/2     Running   0          2m35s
kube-system        coredns-76f75df574-c6qh5                   1/1     Running   0          38m
kube-system        coredns-76f75df574-dttmd                   1/1     Running   0          38m
kube-system        etcd-k8s-master01                          1/1     Running   0          38m
kube-system        kube-apiserver-k8s-master01                1/1     Running   0          38m
kube-system        kube-controller-manager-k8s-master01       1/1     Running   0          38m
kube-system        kube-proxy-5ddt5                           1/1     Running   0          21m
kube-system        kube-proxy-bclmq                           1/1     Running   0          38m
kube-system        kube-scheduler-k8s-master01                1/1     Running   0          38m
tigera-operator    tigera-operator-7f8cd97876-bt6ph           1/1     Running   0          3m45s

Note: from here on, all YAML manifests are applied only on the master node.
Installation directory: /etc/kubernetes/
Component (static Pod) manifest directory: /etc/kubernetes/manifests/

5. Test that the cluster can run Pods

# Create a test nginx deployment with 2 replicas
kubectl create deployment web -r 2 --image=nginx
deployment.apps/web created
# Expose port 80 with a NodePort service
kubectl expose deployment web --port=80  --type=NodePort
service/web exposed

5.1 Check the Pod and Service status

 kubectl get pod,svc
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-76fd95c67-65wpv   1/1     Running   0          21s
pod/web-76fd95c67-fj8p4   1/1     Running   0          21s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        40m
service/web          NodePort    10.102.193.245   <none>        80:31105/TCP   5s

5.2 Test from the command line

curl 192.168.205.131:31105
111-test
curl 192.168.205.131:31105
2222-test
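
Once the test succeeds, the test resources can be removed:

kubectl delete service web
kubectl delete deployment web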