2 - Kubernetes: kubeadm Cluster Deployment

<Thanks to xingdian for this article>


Part 1: Environment Preparation

Three servers: one master and two nodes. The master must have at least 2 CPU cores (kubeadm's preflight checks fail with fewer).

master   10.11.59.120
node-1   10.11.59.121
node-2   10.11.59.123

Disable the firewall and SELinux on all servers (all nodes)

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config

Make sure the yum repositories are usable (use domestic mirror repositories; all nodes)

[root@localhost ~]# yum clean all
[root@localhost ~]# yum makecache fast

Make sure the network is reachable

[root@localhost ~]# ping www.baidu.com

Set the hostname (run the corresponding command on each node)

[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# hostnamectl set-hostname node-1
[root@localhost ~]# hostnamectl set-hostname node-2

Add local hosts entries (all nodes)

[root@master ~]# cat >> /etc/hosts <<eof
10.11.59.120 master
10.11.59.121 node-1
10.11.59.123 node-2
eof

Disable the swap partition (all nodes)

[root@master ~]# swapoff -a                           # turn swap off for the current boot
[root@master ~]# sed -i 's/.*swap.*/#&/' /etc/fstab   # comment out swap in fstab so it stays off after reboot
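
To confirm swap is really off, check free; the Swap line should show 0 across the board:

[root@master ~]# free -h | grep -i swap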

Install the container runtime (all nodes)

[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@master ~]# yum -y install docker-ce
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker

Install kubeadm and kubelet (all nodes)

cat >> /etc/yum.repos.d/kubernetes.repo <<eof
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
eof
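
A quick sanity check that the repo file took effect:

[root@master ~]# yum repolist | grep -i kubernetes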

[root@master ~]# yum -y install kubeadm-1.19.4 kubelet-1.19.4 kubectl-1.19.4 ipvsadm

Check Docker's cgroup driver (all nodes)

[root@master ~]# DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d':' -f2)
[root@master ~]# echo $DOCKER_CGROUPS
cgroupfs

Configure the kubelet to use the same cgroup driver (all nodes)

[root@master ~]# cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF
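
kubeadm starts the kubelet itself during init/join, but enabling the service keeps it running across reboots:

[root@master ~]# systemctl enable kubelet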

Load the kernel module (all nodes)

[root@master ~]# modprobe br_netfilter
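
modprobe only lasts until the next reboot. To make the module load automatically on boot, drop it into /etc/modules-load.d/ (the file name k8s.conf is arbitrary):

[root@master ~]# cat > /etc/modules-load.d/k8s.conf <<eof
br_netfilter
eof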

Adjust kernel parameters (all nodes)

[root@master ~]# cat >> /etc/sysctl.conf <<eof
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
eof
[root@master ~]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0

Check that the kernel modules loaded successfully

[root@master ~]# lsmod | grep ip
ipt_MASQUERADE         12678  1 
nf_nat_masquerade_ipv4    13412  1 ipt_MASQUERADE
iptable_filter         12810  1 
iptable_nat            12875  1 
nf_conntrack_ipv4      15053  2 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_nat_ipv4            14115  1 iptable_nat
nf_nat                 26787  2 nf_nat_ipv4,nf_nat_masquerade_ipv4
nf_conntrack          133387  6 nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
ip_tables              27115  2 iptable_filter,iptable_nat
ip_set                 36439  0 
nfnetlink              14696  3 ip_set,nf_conntrack_netlink

Part 2: Deploying Kubernetes

Download the following components on every node (all nodes)

kube-apiserver.service

kube-controller-manager.service

kube-scheduler.service

kube-proxy.service

DASHBOARD

DNS

FLANNEL

PAUSE

Note:

A few of the components cannot be pulled directly; download those from my server (www.blackmed.cn).

Note: the heredoc delimiter is quoted so the $…_VERSION variables are written into the script literally, instead of being expanded (as empty strings) by the current shell.

[root@master ~]# cat > kubernetes.sh << 'eof'
#!/bin/bash
K8S_VERSION=v1.19.2
ETCD_VERSION=3.4.13-0
DASHBOARD_VERSION=v1.8.3
FLANNEL_VERSION=v0.10.0-amd64
DNS_VERSION=1.7.0
PAUSE_VERSION=3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:$K8S_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:$K8S_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:$K8S_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:$K8S_VERSION
wget http://www.blackmed.cn/kubeadm/etcd.tar
docker load < ./etcd.tar
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$DNS_VERSION
wget http://www.blackmed.cn/kubeadm/flannel.tar
docker load < ./flannel.tar
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:$K8S_VERSION k8s.gcr.io/kube-apiserver:$K8S_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:$K8S_VERSION k8s.gcr.io/kube-controller-manager:$K8S_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:$K8S_VERSION k8s.gcr.io/kube-scheduler:$K8S_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:$K8S_VERSION k8s.gcr.io/kube-proxy:$K8S_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$DNS_VERSION k8s.gcr.io/coredns:$DNS_VERSION
eof
[root@master ~]# bash kubernetes.sh
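
When the script finishes, verify that the retagged k8s.gcr.io images are all present locally:

[root@master ~]# docker images | grep k8s.gcr.io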

Initialize the master node

[root@master ~]# kubeadm init --kubernetes-version=1.19.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.11.59.120
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.11.59.120:6443 --token gypd7a.rfhl9jg449jmh6ax \
    --discovery-token-ca-cert-hash sha256:d08786cfbcce9a4519736c0b4a02f3564e62d377eb787f4a4aa77687e5f7c1a6 
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
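
At this point kubectl is usable, but the master will report NotReady until a pod network add-on is deployed in the next step; expect something like:

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   2m    v1.19.2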

Install the pod network add-on

[root@master ~]# git clone https://github.com/blackmed/kubernetes-kubeadm.git
Note: change into the cloned directory, where you will find the flannel.yaml file
[root@master ~]# cd kubernetes-kubeadm && kubectl create -f flannel.yaml
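
Before joining the workers, you can watch the flannel pods come up and wait for the master to turn Ready:

[root@master ~]# kubectl get pods -n kube-system | grep flannel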

Join the workers to the cluster (node nodes)

Note:

This uses the token generated when the master was initialized.

The token expires after a while and must be regenerated with the command below; token management is covered in the next installment.

If you did not record the cluster join command, you can regenerate it with:

kubeadm token create --print-join-command --ttl=0

[root@node-1 ~]# kubeadm join 10.11.59.120:6443 --token gypd7a.rfhl9jg449jmh6ax --discovery-token-ca-cert-hash sha256:d08786cfbcce9a4519736c0b4a02f3564e62d377eb787f4a4aa77687e5f7c1a6

Check the cluster state from the master node

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   64m   v1.19.2
node-1   Ready    <none>   59m   v1.19.2
node-2   Ready    <none>   59m   v1.19.2

Part 3: Deploying the Dashboard

1. On a kubeadm-installed cluster, the kube-scheduler and kube-controller-manager components report an unhealthy status

[root@master ~]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}

First check the local ports; you can confirm that nothing is listening on 10251 or 10252.

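A quick way to confirm this from the shell (no output means neither port is listening):

[root@master ~]# ss -lntp | grep -E '10251|10252'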

2. Check whether the kube-scheduler and kube-controller-manager configurations disable the insecure ports

Note: in the controller-manager configuration below, comment out the --port=0 setting as shown, then restart the kubelet: sudo systemctl restart kubelet. Apply the same change to the scheduler.

[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
....
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
#     - --port=0
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.1.0.0/16
    - --use-service-account-credentials=true
....  

[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
......
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
#     - --port=0
    image: k8s.gcr.io/kube-scheduler:v1.18.6
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
.....   

After the restart, check the component status again; it now shows healthy.

3. Check the health status

[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

4. Enable IPVS mode for kube-proxy

[root@k8s-master ~]# kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
[root@k8s-master ~]# sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
[root@k8s-master ~]# kubectl apply -f kube-proxy-configmap.yaml
[root@k8s-master ~]# rm -f kube-proxy-configmap.yaml
[root@k8s-master ~]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'   # delete the kube-proxy pods so they restart with the new config
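
Once the kube-proxy pods have been recreated, the IPVS virtual servers should be visible through ipvsadm (installed back in Part 1):

[root@k8s-master ~]# ipvsadm -Ln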

5. Write out the Dashboard installation manifest

[root@k8s-master ~]# cat > recommended.yaml<<-EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # added: expose the service via NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30008 # added: fixed external port
  selector:
    k8s-app: kubernetes-dashboard

---

#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: kubernetes-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: kubernetes-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: kubernetes-metrics-scraper
    spec:
      containers:
        - name: kubernetes-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.0
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
EOF
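
Optionally validate the manifest before applying it; a client-side dry run (supported by kubectl 1.19) catches YAML mistakes without touching the cluster:

[root@k8s-master ~]# kubectl apply --dry-run=client -f recommended.yaml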

6. Create the certificates

[root@k8s-master ~]# mkdir dashboard-certs
[root@k8s-master ~]# cd dashboard-certs/
# create the namespace
[root@k8s-master ~]# kubectl create namespace kubernetes-dashboard
# generate the private key
[root@k8s-master ~]# openssl genrsa -out dashboard.key 2048
# certificate signing request
[root@k8s-master ~]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
# self-sign the certificate
[root@k8s-master ~]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# create the kubernetes-dashboard-certs secret
[root@k8s-master ~]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
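
Confirm the secret landed in the right namespace:

[root@k8s-master ~]# kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard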

7. Create an administrator

Create the account
[root@k8s-master ~]# vim dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
# after saving and exiting, run:
[root@k8s-master ~]# kubectl create -f dashboard-admin.yaml
Grant the user cluster permissions
[root@k8s-master ~]# vim dashboard-admin-bind-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
# after saving and exiting, run:
[root@k8s-master ~]# kubectl create -f dashboard-admin-bind-cluster-role.yaml

8. Install the Dashboard

# install
[root@k8s-master ~]# kubectl create -f ~/recommended.yaml

# check the result
[root@k8s-master ~]# kubectl get pods -A  -o wide

[root@k8s-master ~]# kubectl get service -n kubernetes-dashboard  -o wide
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE   SELECTOR
dashboard-metrics-scraper   ClusterIP   10.1.186.219   <none>        8000/TCP        19m   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.1.60.1      <none>        443:30008/TCP   19m   k8s-app=kubernetes-dashboard

9. View and copy the token

Note: use your own token.

[root@master ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-sd2nv
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 26c61425-5e42-4704-9532-f6a2d3d9fdca

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImRqbHRDWTFPaEFFYTQxNEZMNWlxWkVNMGt5Y0xLdkVaRG16MVFKdlByMWcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tc2QybnYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMjZjNjE0MjUtNWU0Mi00NzA0LTk1MzItZjZhMmQzZDlmZGNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.aG4Vwb3INeKQVXMwRjsCmtMmuy5kVR8sYk0VUThCZUDOPXsXuxt560pscRSIpPrZObIDygA9er3WNG6v_9UwgSKeqoYUmnNfYrWsUzQPR_KDBURx6QxhaGgQWl-LRII6VMiBaXCzvUltBYrIct8I06aMxSM1yo35FBeUO8TRyKq6gqa9qhZVhadTGv40f1oU5wqSRPpleSIrnV0ghUVCEXojHWH59T47zX-o8vYBUEb_sK1Yydqjh9F-669uJ8f_twIre2gYJ4TIuQT0gqwsRjdD3ln2YT8jqJlYwWOZvdiDViTCRrnqop3vdMua9YzmEGXjgxD1K6QavNfnMnGBHQ
ca.crt:     1066 bytes
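
If you only want the raw token (for pasting into the login page), a one-liner sketch that decodes it directly with jsonpath and base64:

[root@master ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d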

10. Open the Dashboard in a browser

https://159.138.40.79:30008 (use a node IP with NodePort 30008)


