K8s Cluster Management (Part 1)

The cfssl toolset

cfssl-certinfo - verify certificate information
  1. Usage:

    cfssl-certinfo -cert xxx.pem
    cfssl-certinfo -domain www.baidu.com
    

    Either form renders the certificate back into a JSON structure.

    The md5sum command verifies a file's MD5 checksum:

    md5sum <file>
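
    For example, to confirm that a local copy of a certificate still matches the original on the ops host (hostname and path follow the cluster layout used later in this doc):

    md5sum client.pem
    ssh hdss7-200 md5sum /opt/certs/client.pem   # the two digests should match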
    
  2. Recovering a certificate from a kubeconfig file

    echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR3RENDQXFpZ0F3SUJBZ0lVYkh1czBSZkE2dHA1TjRYZnYvRWhkSlA4dytvd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1hqRUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjJKbGFXcHBibWN4RURBT0JnTlZCQWNUQjJKbAphV3BwYm1jeEN6QUpCZ05WQkFvVEFtOWtNUXd3Q2dZRFZRUUxFd052Y0hNeEVEQU9CZ05WQkFNVEIwWmxibWRaCmRXNHdIaGNOTWpFd09ESXpNRGt5TnpBd1doY05OREV3T0RFNE1Ea3lOekF3V2pCZk1Rc3dDUVlEVlFRR0V3SkQKVGpFUU1BNEdBMVVFQ0JNSFltVnBhbWx1WnpFUU1BNEdBMVVFQnhNSFltVnBhbWx1WnpFTE1Ba0dBMVVFQ2hNQwpiMlF4RERBS0JnTlZCQXNUQTI5d2N6RVJNQThHQTFVRUF4TUlhemh6TFc1dlpHVXdnZ0VpTUEwR0NTcUdTSWIzCkRRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRGNnS2M1VzNER3hjUklkeGR0OXh3aEdrWDdOVXREM21pamNqTmMKTnVYb21KYTlpMGlySU9ObVNyNm1GV05NZU14MXdmeW9aY2J4OVdSRXZHcTYwRW0xZmo3SWpyRDByVDZRa1Ivbwp2QkRpWFJ5dWdjMGZPYUVMcG01OWc3SDJXUkpFVjhOV1RXSnBpUTdNcUloYVJqOW1QODEzejlHZnJiZ0hzUnBlCkQ4RGRXem1ZNklXbjlRaHYxcnZ3U1ZDVVJsZ0tUS1pvd0VCclEwZk9BZ0U3Nm5ibDZXZktIRjB0SUpFcTY1ZU0KZngvbExJRExuRGpIWE5SMWorenBvWGZjNWlBNU9jQ1A0bGhUUlBMZW9CUUF1WWpjc3Z4SGY4UVMrSzRvTTVwNwpRWGRVc0RPUk16dW96TktCVFRKUElEZlpXOFZaNTA0eGo4L2UxOXNrS1pYRUFRYzFBZ01CQUFHamRUQnpNQTRHCkExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUEKTUIwR0ExVWREZ1FXQkJSTjJDQnM0RWl5cjJOZDAzaFBGVEwwbFdPRXNUQWZCZ05WSFNNRUdEQVdnQlRnRnhmLwo4c0ozcmYxcm0xdUptN21nZGZFVmh6QU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFBajl5elI4c1JaODQ5VHVQCmV4VDdoNzFkRGxJc0N5aFQzL0pwRHJ3R0Z6L0s1ZFFkbXcyRVh1RWVacXpPYzlRSzZQMEJ1UG9JRTZTcCtvZXMKbnlvTWFUcE96Q0NjT1k1WlRZRzNtQUdiVlB6SlpMWFVIMnNiRDJpYkI4RzYzUlpVV080TG9jS2RqRm9NMjlyNQo4UGdlMEJGU2s4ZXFzam1GMFNITHNLeTFKZTRwU1ZDLzlzVGlpRHllTitwQk02NStSeHJxSWRpSSs2NVNma29vClpBSE5NYnJLNGo1Rlo3ZCtESk0xREZTcjA1V3lQY1lSVTB5bkg2UGd2My9zRHRHWSt0THRVckRiZnlySFZiZTEKczRzc3Bpd0lGQnc4Tk40bitNQmxkd1VxTFB3MGI5anJUbVlRcGQ3K0JIWkMvOFFVcGk2b0NLT1RJQWZkOUxqUApVa3NuV2c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==" | base64 -d > 123.pem

    Then run:

    cfssl-certinfo -cert 123.pem
    
The kubeconfig file
  1. This is a K8S user's configuration file.
  2. It embeds certificate information.
  3. When a certificate expires or is replaced, this file must be updated to match.
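  4. A minimal sketch for pulling the embedded CA certificate out of the current kubeconfig and inspecting it (assumes kubectl is configured; cfssl-certinfo as above):

    kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > ca.pem
    cfssl-certinfo -cert ca.pem   # the not_after field shows the expiry date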

kubectl

Three basic ways to manage core K8S resources:
1. Imperative management - driven mainly by the CLI tool
  1. List namespaces
    kubectl get namespaces
    kubectl get ns
    
  2. List all resources in the default namespace
    kubectl get all -n default
    
  3. Create/delete a namespace
    kubectl create ns app
    kubectl create namespace app
    kubectl delete ns app
    kubectl delete namespace app
    
  4. Create a deployment resource
    kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:v1.7.9 -n kube-public # create a pod controller of type deployment in the kube-public namespace; it runs the image's container inside a pod
    kubectl get pods -n kube-public # list pods in the kube-public namespace
    kubectl get pods -n kube-public -o wide # verbose view
    
  5. List deployment resources
    kubectl get deployment -n kube-public  # if no namespace is given, the default namespace is used
    
  6. Describe a deployment
    kubectl describe deployment nginx-dp -n kube-public
    
  7. Exec into a pod
    kubectl exec -it nginx-dp-5dfc689474-tflm4 -n kube-public -- /bin/bash # nginx-dp-5dfc689474-tflm4 is the pod name
    
  8. Delete a deployment
    kubectl delete deploy nginx-dp -n kube-public
    
  9. Create a service (and scale the deployment)
    kubectl expose deployment nginx-dp --port=80 -n kube-public
    kubectl scale deployment nginx-dp --replicas=2 -n kube-public # scale the deployment out to 2 replicas
    
  10. Describe the service
    kubectl describe service nginx-dp -n kube-public
    

kubectl is the official CLI tool. It communicates with the apiserver, organizing what the user types on the command line into requests the apiserver understands, and is thus an effective way to manage every kind of K8S resource.
The only entry point for managing cluster resources is the apiserver's API, invoked through one method or another.
kubectl makes it easy to create, delete and inspect resources, but modifying resources with it is cumbersome.
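
To see the kubectl-to-apiserver relationship directly (a quick sketch; both flags are standard kubectl): -v=8 logs the REST calls kubectl issues, and --raw lets you query an apiserver endpoint yourself.

kubectl get ns -v=8 2>&1 | grep 'GET https'   # the underlying apiserver request
kubectl get --raw /api/v1/namespaces | head   # hit the same endpoint directly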

2. Declarative management - driven mainly by unified resource manifests
  1. View a resource's manifest
    kubectl get svc nginx-dp -o yaml -n kube-public
    
  2. Explain a manifest's fields
    kubectl explain service
    
  3. Apply a manifest
    kubectl create/apply -f xxxx.yaml
    
  4. Edit a manifest online (in place)
    kubectl edit svc service_name
    
  5. Edit a manifest offline, then apply it
    vi xxx.yaml # edit the manifest
    kubectl apply -f xxx.yaml # apply it after editing
    
  6. Delete a resource declaratively
    kubectl delete -f xxxx.yaml
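  7. A common workflow is to generate a starting manifest imperatively and manage it declaratively from then on (a sketch; recent kubectl spells the flag --dry-run=client, older releases accept bare --dry-run):

    kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:v1.7.9 --dry-run=client -o yaml > nginx-dp.yaml
    kubectl apply -f nginx-dp.yaml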
    
3. GUI management - driven mainly by a graphical interface

The Kubernetes network model

Kubernetes defines a network model but leaves its implementation to network plugins. The main job of a CNI plugin is to let pod resources communicate across hosts. Common CNI plugins include: Flannel, Calico, Canal, Contiv, OpenContrail, NSX-T, Kube-router.

Deploying the K8S CNI network plugin - Flannel
Cluster plan

  Hostname             Role      IP
  hdss7-21.host.com    flannel   10.4.7.21
  hdss7-22.host.com    flannel   10.4.7.22
Download flannel, extract it, and create a symlink
  1. Extract and symlink
    mkdir -p /opt/flannel-v0.11.0
    tar xf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel-v0.11.0/
    ln -s /opt/flannel-v0.11.0/ /opt/flannel
    
  2. flannel talks to etcd, so copy the certificates over from hdss7-200
    mkdir -p /opt/flannel-v0.11.0/cert
    cd /opt/flannel-v0.11.0/cert
    scp hdss7-200:/opt/certs/ca.pem .
    scp hdss7-200:/opt/certs/client.pem .
    scp hdss7-200:/opt/certs/client-key.pem .
    
Create the config /opt/flannel-v0.11.0/subnet.env
FLANNEL_NETWORK=172.7.0.0/16		# the pod network
FLANNEL_SUBNET=172.7.21.1/24		# this node's pod subnet (172.7.22.1/24 on hdss7-22)
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
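
A quick sanity check (this setup assumes docker on each node was given a bip matching FLANNEL_SUBNET, e.g. 172.7.21.1/24 on hdss7-21):

ip addr show docker0 | grep 172.7.21.1   # docker0 should own the subnet's gateway address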
Create the startup script /opt/flannel-v0.11.0/flanneld.sh
  1. The script (shown for hdss7-21; adjust --public-ip and --iface per node)
    #!/bin/sh
    ./flanneld \
      --public-ip=10.4.7.21 \
      --etcd-endpoints=https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
      --etcd-keyfile=./cert/client-key.pem \
      --etcd-certfile=./cert/client.pem \
      --etcd-cafile=./cert/ca.pem \
      --iface=ens33 \
      --subnet-file=./subnet.env \
      --healthz-port=2401
    
  2. Make it executable and create the log directory
    chmod +x  /opt/flannel-v0.11.0/flanneld.sh
    mkdir -p /data/logs/flanneld
    
Configure etcd: set the host-gw backend
cd /opt/etcd/
./etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'
./etcdctl member list # list the etcd members
./etcdctl get /coreos.com/network/config # view the network config
Create the supervisor configuration
  1. Create /etc/supervisord.d/flannel.ini
    [program:flanneld-7-21]
    command=/opt/flannel/flanneld.sh                             ; the program (relative uses PATH, can take args)
    numprocs=1                                                   ; number of processes copies to start (def 1)
    directory=/opt/flannel                                       ; directory to cwd to before exec (def no cwd)
    autostart=true                                               ; start at supervisord start (default: true)
    autorestart=true                                             ; restart at unexpected quit (default: true)
    startsecs=30                                                 ; number of secs prog must stay running (def. 1)
    startretries=3                                               ; max # of serial start failures (default 3)
    exitcodes=0,2                                                ; 'expected' exit codes for process (default 0,2)
    stopsignal=QUIT                                              ; signal used to kill process (default TERM)
    stopwaitsecs=10                                              ; max num secs to wait b4 SIGKILL (default 10)
    user=root                                                    ; setuid to this UNIX account to run the program
    redirect_stderr=true                                         ; redirect proc stderr to stdout (default false)
    stdout_logfile=/data/logs/flanneld/flanneld.stdout.log       ; stdout log path, NONE for none; default AUTO
    stdout_logfile_maxbytes=64MB                                 ; max # logfile bytes b4 rotation (default 50MB)
    stdout_logfile_backups=4                                     ; # of stdout logfile backups (default 10)
    stdout_capture_maxbytes=1MB                                  ; number of bytes in 'capturemode' (default 0)
    stdout_events_enabled=false                                  ; emit events on stdout writes (default false)
    
  2. Start the flannel plugin (the program name must match the [program:...] header above)
    supervisorctl update
    supervisorctl start flanneld-7-21
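
    Verify it came up (the healthz port was set to 2401 in flanneld.sh):

    supervisorctl status flanneld-7-21   # expect RUNNING
    curl -s http://127.0.0.1:2401/healthz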
    
Flannel's host-gw model

In the host-gw model, flanneld writes a static route on every node for each peer node's pod subnet, pointing at the peer's host IP. There is no encapsulation overhead, but all nodes must sit on the same layer-2 network. (Diagram omitted.)
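
The expected routing-table state on hdss7-21 (a sketch based on the cluster plan above):

route -n | grep 172.7.22
# 172.7.22.0   10.4.7.22   255.255.255.0   UG   0   0   0   ens33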

Flannel's VxLAN model

In the VxLAN model, flanneld encapsulates pod-to-pod traffic in UDP via a flannel.1 VTEP device, so nodes can reach each other across different layer-2 segments at the cost of encapsulation overhead. (Diagram omitted.)

iptables: fixing the cluster's internal SNAT behaviour
  1. Install iptables

    yum install iptables-services -y
    
  2. Start iptables and enable it at boot

    systemctl start iptables
    systemctl enable iptables 
    
  3. Optimize the iptables rules

    iptables-save |grep -i postrouting # inspect the POSTROUTING rules
    iptables -t nat -D POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE # delete the old rule (-D = delete)
    # insert a replacement (-I = insert, -s = source, ! = negate): masquerade traffic sourced from
    # 172.7.21.0/24 only if it is not destined for 172.7.0.0/16 and does not leave via docker0
    iptables -t nat -I POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
    iptables-save > /etc/sysconfig/iptables # persist the rules
    

    On host 10.4.7.21: SNAT is applied only when the source is a docker IP in 172.7.21.0/24, the destination is outside 172.7.0.0/16, and the packet does not leave via the docker0 bridge.

    iptables-save |grep -i reject # inspect the REJECT rules
    iptables -t filter -D INPUT -j REJECT --reject-with icmp-host-prohibited # delete the INPUT chain reject rule
    iptables -t filter -D FORWARD -j REJECT --reject-with icmp-host-prohibited # delete the FORWARD chain reject rule
    iptables-save > /etc/sysconfig/iptables
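
    To confirm SNAT is no longer applied between pods, ping across nodes and watch the source address (the pod name and IPs here are illustrative):

    kubectl exec -it nginx-dp-5dfc689474-tflm4 -n kube-public -- ping -c 1 172.7.22.2
    tcpdump -i ens33 -nn icmp   # on the peer node: source should be 172.7.21.x, not 10.4.7.21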
    
Service discovery
  1. Simply put, service discovery is how services (applications) locate one another.
  2. In a K8S cluster, pod IPs change constantly. How do we keep a stable handle on them?
    1. Abstract a Service resource that, via a label selector, is associated with a set of pods.
    2. Abstract a cluster network whose relatively fixed "cluster IP" gives the service a stable access point.
Deploying the K8S service-discovery plugin - CoreDNS
  1. Prepare the coredns image (run on the ops host hdss7-200)
    docker pull coredns/coredns:1.6.1 # pull the image from the public registry
    docker tag coredns/coredns:1.6.1 harbor.od.com/public/coredns:v1.6.1 # tag it
    docker push harbor.od.com/public/coredns:v1.6.1 # push it to the private registry
    
  2. Configure nginx on the ops host (run on hdss7-200)
    mkdir /data/k8s-yaml
    vi /etc/nginx/conf.d/k8s-yaml.od.com.conf
    server {
        listen       80;
        server_name  k8s-yaml.od.com;
    
        location / {
            autoindex on;
            default_type text/plain;
            root /data/k8s-yaml;
        }
    }
    # after saving and exiting, run:
    nginx -t
    nginx -s reload
    
  3. Configure internal DNS resolution (run on hdss7-11)
    vi /var/named/od.com.zone
    
    $ORIGIN od.com.
    $TTL 600        ; 10 minutes
    @       IN SOA  dns.od.com. dnsadmin.od.com. (
                                    2021082205 ; serial - bump this by 1 on every edit of the file
                                    10800      ; refresh (3 hours)
                                    900        ; retry (15 minutes)
                                    604800     ; expire (1 week)
                                    86400      ; minimum (1 day)
                                    )
                           NS    dns.od.com.
    $TTL 60 ; 1 minute
    dns                 A     10.4.7.11
    harbor              A     10.4.7.200
    k8s-yaml            A     10.4.7.200  ; add this DNS record
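
    After bumping the serial, reload named and verify the new record (assumes a systemd-managed bind, as elsewhere in this series):

    systemctl restart named
    dig -t A k8s-yaml.od.com @10.4.7.11 +short   # expect 10.4.7.200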
    
    
  4. On the ops host, under /data/k8s-yaml/coredns/, create the resource files rbac.yaml, cm.yaml, dp.yaml and svc.yaml (the coredns subdirectory matches the URLs applied in step 5)
    1. rbac.yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: coredns
        namespace: kube-system
        labels:
            kubernetes.io/cluster-service: "true"
            addonmanager.kubernetes.io/mode: Reconcile
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        labels:
          kubernetes.io/bootstrapping: rbac-defaults
          addonmanager.kubernetes.io/mode: Reconcile
        name: system:coredns
      rules:
      - apiGroups:
        - ""
        resources:
        - endpoints
        - services
        - pods
        - namespaces
        verbs:
        - list
        - watch
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        annotations:
          rbac.authorization.kubernetes.io/autoupdate: "true"
        labels:
          kubernetes.io/bootstrapping: rbac-defaults
          addonmanager.kubernetes.io/mode: EnsureExists
        name: system:coredns
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: system:coredns
      subjects:
      - kind: ServiceAccount
        name: coredns
        namespace: kube-system
      
    2. cm.yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: coredns
        namespace: kube-system
      data:
        Corefile: |
          .:53 {
              errors
              log
              health
              ready
              kubernetes cluster.local 192.168.0.0/16	# the service CIDR (must contain the clusterIP set in svc.yaml below)
              forward . 10.4.7.11			# the physical host running the DNS service
              cache 30
              loop
              reload
              loadbalance
             }
      
    3. dp.yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: coredns
        namespace: kube-system
        labels:
          k8s-app: coredns
          kubernetes.io/name: "CoreDNS"
      spec:
        replicas: 1
        selector:
          matchLabels:
            k8s-app: coredns
        template:
          metadata:
            labels:
              k8s-app: coredns
          spec:
            priorityClassName: system-cluster-critical
            serviceAccountName: coredns
            containers:
            - name: coredns
              image: harbor.od.com/public/coredns:v1.6.1
              args:
              - -conf
              - /etc/coredns/Corefile
              volumeMounts:
              - name: config-volume
                mountPath: /etc/coredns
              ports:
              - containerPort: 53
                name: dns
                protocol: UDP
              - containerPort: 53
                name: dns-tcp
                protocol: TCP
              - containerPort: 9153
                name: metrics
                protocol: TCP
              livenessProbe:
                httpGet:
                  path: /health
                  port: 8080
                  scheme: HTTP
                initialDelaySeconds: 60
                timeoutSeconds: 5
                successThreshold: 1
                failureThreshold: 5
            dnsPolicy: Default
            volumes:
              - name: config-volume
                configMap:
                  name: coredns
                  items:
                  - key: Corefile
                    path: Corefile
      
    4. svc.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: coredns
        namespace: kube-system
        labels:
          k8s-app: coredns
          kubernetes.io/cluster-service: "true"
          kubernetes.io/name: "CoreDNS"
      spec:
        selector:
          k8s-app: coredns
        clusterIP: 192.168.0.2		# the DNS service's IP (inside the service CIDR)
        ports:
        - name: dns
          port: 53
          protocol: UDP
        - name: dns-tcp
          port: 53
        - name: metrics
          port: 9153
          protocol: TCP
      
  5. Run on hdss7-21 to create the resources
    kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
    kubectl apply -f http://k8s-yaml.od.com/coredns/cm.yaml
    kubectl apply -f http://k8s-yaml.od.com/coredns/dp.yaml
    kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
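
    Verify CoreDNS is answering (192.168.0.2 is the clusterIP from svc.yaml):

    kubectl get pods -n kube-system -o wide
    dig -t A kubernetes.default.svc.cluster.local @192.168.0.2 +short   # expect the apiserver's clusterIP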
    
Two ways to expose K8S services

K8S DNS gives services automatic discovery inside the cluster; how do we make a service usable and reachable from outside the cluster?

  1. Use a NodePort-type Service. This approach cannot use kube-proxy's ipvs mode, only the iptables mode.
  2. Use an Ingress resource. Ingress can only schedule and expose layer-7 applications, specifically the http and https protocols.
  • Ingress is one of the standard K8S API resource types, and a core resource. It is simply a set of rules, keyed on hostname and URL path, that forward user requests to a specified Service resource.
  • Ingress forwards request traffic from outside the cluster to the inside, thereby "exposing the service".
  • An Ingress controller is the component that listens on a socket on behalf of Ingress resources and routes traffic according to the Ingress rule-matching machinery.
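
As a quick NodePort illustration (a sketch: it reuses the nginx-dp deployment from earlier, and the service name is made up to avoid clashing with the ClusterIP service created before):

  kubectl expose deployment nginx-dp -n kube-public --type=NodePort --port=80 --name=nginx-dp-np
  kubectl get svc nginx-dp-np -n kube-public   # note the allocated nodePort (3xxxx), reachable on every node IP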
Deploying traefik (the ingress controller)
  1. Prepare the traefik image on the ops host hdss7-200
    docker pull traefik:v1.7.2-alpine
    docker tag traefik:v1.7.2-alpine harbor.od.com/public/traefik:v1.7.2
    docker push harbor.od.com/public/traefik:v1.7.2
    
  2. Create the resource manifests
    cd /data/k8s-yaml/
    mkdir traefik
    cd traefik/
    vim rbac.yaml
    
    rbac.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: traefik-ingress-controller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRole
    metadata:
      name: traefik-ingress-controller
    rules:
      - apiGroups:
          - ""
        resources:
          - services
          - endpoints
          - secrets
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - extensions
        resources:
          - ingresses
        verbs:
          - get
          - list
          - watch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: traefik-ingress-controller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: traefik-ingress-controller
    subjects:
    - kind: ServiceAccount
      name: traefik-ingress-controller
      namespace: kube-system
    
  3. ds.yaml
    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: traefik-ingress
      namespace: kube-system
      labels:
        k8s-app: traefik-ingress
    spec:
      template:
        metadata:
          labels:
            k8s-app: traefik-ingress
            name: traefik-ingress
        spec:
          serviceAccountName: traefik-ingress-controller
          terminationGracePeriodSeconds: 60
          containers:
          - image: harbor.od.com/public/traefik:v1.7.2
            name: traefik-ingress
            ports:
            - name: controller
              containerPort: 80
              hostPort: 81
            - name: admin-web
              containerPort: 8080
            securityContext:
              capabilities:
                drop:
                - ALL
                add:
                - NET_BIND_SERVICE
            args:
            - --api
            - --kubernetes
            - --logLevel=INFO
            - --insecureskipverify=true
            - --kubernetes.endpoint=https://10.4.7.10:7443		# the keepalived VIP
            - --accesslog
            - --accesslog.filepath=/var/log/traefik_access.log
            - --traefiklog
            - --traefiklog.filepath=/var/log/traefik.log
            - --metrics.prometheus
    
  4. svc.yaml
    kind: Service
    apiVersion: v1
    metadata:
      name: traefik-ingress-service
      namespace: kube-system
    spec:
      selector:
        k8s-app: traefik-ingress
      ports:
        - protocol: TCP
          port: 80
          name: controller
        - protocol: TCP
          port: 8080
          name: admin-web
    
  5. ingress.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: traefik-web-ui
      namespace: kube-system
      annotations:
        kubernetes.io/ingress.class: traefik
    spec:
      rules:
      - host: traefik.od.com
        http:
          paths:
          - path: /
            backend:
              serviceName: traefik-ingress-service
              servicePort: 8080
    
  6. Apply the manifests
    kubectl apply -f http://k8s-yaml.od.com/traefik/rbac.yaml
    kubectl apply -f http://k8s-yaml.od.com/traefik/ds.yaml
    kubectl apply -f http://k8s-yaml.od.com/traefik/svc.yaml
    kubectl apply -f http://k8s-yaml.od.com/traefik/ingress.yaml
    
  7. Configure an nginx reverse proxy on HDSS7-11 and HDSS7-12 (the hosts running keepalived)
    vi /etc/nginx/conf.d/od.com.conf
    
    upstream default_backend_traefik {
        server 10.4.7.21:81    max_fails=3 fail_timeout=10s;	# node IP + port 81; add a line like this for every node
        server 10.4.7.22:81    max_fails=3 fail_timeout=10s;
    }
    server {
        server_name *.od.com;					# wildcard match: any HTTP service under *.od.com is handed to the ingress
      
        location / {
            proxy_pass http://default_backend_traefik;
            proxy_set_header Host       $http_host;
            proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
        }
    }
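
    After reloading nginx, point a DNS record at the VIP (e.g. traefik.od.com A 10.4.7.10 in od.com.zone) and check that the traefik dashboard answers through the whole chain:

    nginx -t && nginx -s reload
    curl -I -H 'Host: traefik.od.com' http://10.4.7.10/   # expect a response served by traefik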
    