K8s Learning (32): Deploying Loki Log Collection on K8s

Loki is a horizontally scalable, highly available, multi-tenant log aggregation system open-sourced by the Grafana Labs team and inspired by Prometheus. It is written in Go and designed to be cost-effective and easy to operate. Instead of indexing the full text of logs, it indexes only a set of labels per log stream, so the same labels can be used to query both log content and metrics, which greatly reduces the size of the log index. The architecture is simple, consisting of three parts:
1. Loki is the main server, responsible for storing logs and processing queries.
2. promtail is the agent, responsible for collecting logs and sending them to Loki.
3. Grafana provides the UI.
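As an example of the label model: a LogQL query first selects streams by labels (which are indexed) and only then filters the raw text (which is not). A minimal sketch, assuming promtail attaches namespace and app labels as configured later in this article:

{namespace="logging", app="loki"} |= "error"

The {...} selector is resolved against the small label index; the |= "error" filter is applied by scanning only the matching streams at query time.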

I. Loki vs. ELK/EFK
ELK
Pros:
1. Feature-rich, supporting complex operations
Cons:
1. Mainstream ELK (full-text indexing) or EFK stacks are heavyweight
2. Much of Elasticsearch's complex search capability goes unused; the stack is complex to scale, resource-hungry, and hard to operate,
while most queries only care about a time range and a few simple labels (host, service, and so on)
3. Switching back and forth between Kibana and Grafana hurts the user experience
4. Sharding and replicating an inverted index is costly

Loki
1. Minimizes the cost of switching between metrics and logs,
which helps shorten the response time to incidents and improves the user experience
2. Strikes a workable balance between ease of use and expressiveness of the query language
3. More cost-effective

II. Loki Architecture

(Loki architecture diagram)

promtail collects logs and sends them to Loki's Distributor component.
The Distributor validates the incoming log streams and forwards the validated logs to the Ingesters, batched and in parallel.
The Ingesters accept the log streams, build them into chunks, compress the chunks, and store them in the configured storage backend.
The Querier receives HTTP query requests and forwards them to the Ingesters to fetch data still held in memory; the Ingesters return whatever matches.
If the Ingesters return nothing, the Querier loads the data from the backend store, deduplicates it while iterating, executes the query, and returns the result over HTTP.
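To make the write path concrete, here is a minimal sketch that pushes a single log line to the Distributor through the same HTTP push API that promtail uses (it assumes you can reach the loki service on port 3100, e.g. via port-forward or from inside the cluster):

curl -s -X POST http://loki:3100/loki/api/v1/push \
  -H 'Content-Type: application/json' \
  -d '{"streams": [{"stream": {"job": "manual-test"}, "values": [["'"$(date +%s%N)"'", "hello from curl"]]}]}'

The timestamp is nanoseconds since the epoch as a string; the stream object carries the labels under which the line is stored.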

III. Deploying Loki on K8s
Most guides online install Loki with Helm; here we deploy from plain YAML manifests, which suits offline environments better.
Deploying Grafana is not covered here; see the earlier article: https://blog.csdn.net/u011943534/article/details/102906030
Deploying a StorageClass is not covered here either; see the earlier article: https://blog.csdn.net/u011943534/article/details/100887530

1. Prepare the images

docker pull grafana/loki:2.3.0
docker tag grafana/loki:2.3.0 172.16.10.160:80/grafana/loki:2.3.0
docker push 172.16.10.160:80/grafana/loki:2.3.0
docker save -o grafana-loki-2.3.0.tar grafana/loki:2.3.0

docker pull grafana/promtail:2.3.0
docker tag grafana/promtail:2.3.0 172.16.10.160:80/grafana/promtail:2.3.0
docker push 172.16.10.160:80/grafana/promtail:2.3.0
docker save -o grafana-promtail-2.3.0.tar grafana/promtail:2.3.0

The steps above pull the required images, retag and push them to my local private registry, and also save them as tarballs for use elsewhere, which suits offline environments.
(The steps below reference my registry address wherever an image is used; replace it with your own, and replace the StorageClass with your own as well.)
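On the offline hosts, the saved tarballs can be loaded back into the local Docker daemon. Note also that none of the manifests below create the logging namespace they target, so create it first:

docker load -i grafana-loki-2.3.0.tar
docker load -i grafana-promtail-2.3.0.tar
kubectl create namespace logging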

2. Write loki-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki
  namespace: logging
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: loki
  namespace: logging
rules:
- apiGroups:
  - extensions
  resourceNames:
  - loki
  resources:
  - podsecuritypolicies
  verbs:
  - use
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: loki
  namespace: logging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: loki
subjects:
- kind: ServiceAccount
  name: loki
  namespace: logging
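A quick sanity check after applying (see step 8) is to list the objects in the namespace; all three should show a loki entry:

kubectl -n logging get serviceaccount,role,rolebinding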

3. Write loki-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: loki
  namespace: logging
  labels:
    app: loki
data:
  loki.yaml: |
    auth_enabled: false
    ingester:
      chunk_idle_period: 3m      # how long a chunk can sit idle in memory before being flushed, even if it has not reached the maximum chunk size
      chunk_block_size: 262144
      chunk_retain_period: 1m      # how long a chunk is kept in memory after it has been flushed
      max_transfer_retries: 0      # Number of times to try and transfer chunks when leaving before falling back to flushing to the store. Zero = no transfers are done.
      lifecycler:       # configures the ingester lifecycle and where it registers for discovery
        ring:
          kvstore:
            store: inmemory      # backend storage for the ring; consul, etcd, and inmemory are supported
          replication_factor: 1      # number of ingesters to write to and read from; at least 1 (defaults to 3 for redundancy and resilience)
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true      # whether old samples are rejected
      reject_old_samples_max_age: 168h      # maximum age beyond which old samples are rejected
    schema_config:      # configures which index schemas apply from which dates
      configs:
      - from: 2020-10-24      # the date the index schema takes effect; use a past date if this is the only schema_config, otherwise the date you want to switch schemas
        store: boltdb-shipper      # which store the index uses, e.g. cassandra, bigtable, dynamodb, or boltdb
        object_store: filesystem      # which store chunks use, e.g. gcs, s3, inmemory, filesystem, cassandra; defaults to the value of store if omitted
        schema: v11
        index:      # configures how the index is updated and stored
          prefix: index_      # prefix for all period tables
          period: 24h      # table period
    server:
      http_listen_port: 3100
    storage_config:      # configures one or more stores for the index and chunks
      boltdb_shipper:
        active_index_directory: /data/loki/boltdb-shipper-active
        cache_location: /data/loki/boltdb-shipper-cache
        cache_ttl: 24h
        shared_store: filesystem
      filesystem:
        directory: /data/loki/chunks
    chunk_store_config:      # configures how chunks are cached and how long to wait before saving them to the store
      max_look_back_period: 0s      # limits how far back queries may look; disabled by default, and must be <= table_manager.retention_period
    table_manager:
      retention_deletes_enabled: true      # retention switch; enables deletion of expired tables
      retention_period: 48h       # log retention period; must be a multiple of the index/chunk table period (24h here)
    compactor:
      working_directory: /data/loki/boltdb-shipper-compactor
      shared_store: filesystem
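After the ConfigMap is applied, the rendered configuration can be read back to confirm it matches what was written (the backslash escapes the dot inside the key name):

kubectl -n logging get configmap loki -o jsonpath='{.data.loki\.yaml}'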

4. Write loki.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: loki
  namespace: logging
  labels:
    app: loki
spec:
  type: ClusterIP
  ports:
    - port: 3100
      protocol: TCP
      name: http-metrics
      targetPort: http-metrics
  selector:
    app: loki

---
apiVersion: v1
kind: Service
metadata:
  name: loki-outer
  namespace: logging
  labels:
    app: loki
spec:
  type: NodePort
  ports:
    - port: 3100
      protocol: TCP
      name: http-metrics
      targetPort: http-metrics
      nodePort: 32537
  selector:
    app: loki

 
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loki
  namespace: logging
  labels:
    app: loki
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: loki
  serviceName: loki
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: loki
    spec:
      serviceAccountName: loki
      securityContext:
          fsGroup: 10001
          runAsGroup: 10001
          runAsNonRoot: true
          runAsUser: 10001
      initContainers: []
      containers:
        - name: loki
          image: 172.16.10.160:80/grafana/loki:2.3.0
          imagePullPolicy: IfNotPresent
          args:
            - -config.file=/etc/loki/loki.yaml
          volumeMounts:
            - name: config
              mountPath: /etc/loki
            - name: storage
              mountPath: /data
          ports:
            - name: http-metrics
              containerPort: 3100
              protocol: TCP
          livenessProbe:
            httpGet: 
              path: /ready
              port: http-metrics
              scheme: HTTP
            initialDelaySeconds: 45
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet: 
              path: /ready
              port: http-metrics
              scheme: HTTP
            initialDelaySeconds: 45
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          securityContext:
            readOnlyRootFilesystem: true
      terminationGracePeriodSeconds: 4800
      volumes:
        - name: config
          configMap:
            defaultMode: 420
            name: loki
  volumeClaimTemplates:
  - metadata:
      name: storage
      labels:
        app: loki
      annotations:
        volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: "2Gi"

5. Write loki-promtail-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-promtail
  namespace: logging
  labels:
    app: promtail
data:
  promtail.yaml: |
    client:      # configures how promtail connects to the Loki instance
      backoff_config:      # configures how failed requests to Loki are retried
        max_period: 5m 
        max_retries: 10
        min_period: 500ms
      batchsize: 1048576      # maximum batch size, in bytes, sent to Loki
      batchwait: 1s      # maximum time to wait before sending a batch, even if it is not full
      external_labels: {}      # static labels added to every log sent to Loki
      timeout: 10s      # maximum time to wait for the server to respond to a request
    positions:
      filename: /run/promtail/positions.yaml
    server:
      http_listen_port: 3101
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: kubernetes-pods-name
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-app
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        source_labels:
        - __meta_kubernetes_pod_label_name
      - source_labels:
        - __meta_kubernetes_pod_label_app
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-direct-controllers
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: drop
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-indirect-controller
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: keep
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - action: replace
        regex: '([0-9a-z-.]+)-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-static
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: ''
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_label_component
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
        - __meta_kubernetes_pod_container_name
        target_label: __path__
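The net effect of every relabel block above is the same label scheme: a job label of the form <namespace>/<service>, plus namespace, pod, container, and any pod labels. So for a Deployment labeled app=my-app in the default namespace, queries in Grafana would look roughly like this (label values here are illustrative and depend on your pod labels):

{job="default/my-app"}
{namespace="default", container="my-app"} |= "Exception"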

6. Write loki-promtail-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki-promtail
  labels:
    app: promtail
  namespace: logging
 
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: promtail
  name: promtail-clusterrole
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "watch", "list"]
 
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: promtail-clusterrolebinding
  labels:
    app: promtail
subjects:
  - kind: ServiceAccount
    name: loki-promtail
    namespace: logging
roleRef:
  kind: ClusterRole
  name: promtail-clusterrole
  apiGroup: rbac.authorization.k8s.io
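Whether the binding grants what promtail needs can be checked without deploying anything, by impersonating the ServiceAccount; this should print yes once the ClusterRoleBinding is applied:

kubectl auth can-i list pods --as=system:serviceaccount:logging:loki-promtail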

7. Write loki-promtail.yaml

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: loki-promtail
  namespace: logging
  labels:
    app: promtail
spec:
  selector:
    matchLabels:
      app: promtail
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: promtail
    spec:
      serviceAccountName: loki-promtail
      containers:
        - name: promtail
          image: 172.16.10.160:80/grafana/promtail:2.3.0
          imagePullPolicy: IfNotPresent
          args: 
          - -config.file=/etc/promtail/promtail.yaml
          - -client.url=http://loki:3100/loki/api/v1/push
          env: 
          - name: HOSTNAME
            valueFrom: 
              fieldRef: 
                apiVersion: v1
                fieldPath: spec.nodeName
          volumeMounts:
          - mountPath: /etc/promtail
            name: config
          - mountPath: /run/promtail
            name: run
          - mountPath: /var/lib/docker/containers
            name: docker
            readOnly: true
          - mountPath: /var/log/pods
            name: pods
            readOnly: true
          ports:
          - containerPort: 3101
            name: http-metrics
            protocol: TCP
          securityContext:
            readOnlyRootFilesystem: true
            runAsGroup: 0
            runAsUser: 0
          readinessProbe:
            failureThreshold: 5
            httpGet:
              path: /ready
              port: http-metrics
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
        - name: config
          configMap:
            defaultMode: 420
            name: loki-promtail
        - name: run
          hostPath:
            path: /run/promtail
            type: ""
        - name: docker
          hostPath:
            path: /var/lib/docker/containers
        - name: pods
          hostPath:
            path: /var/log/pods
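After applying, one promtail pod should be running per node (including masters, thanks to the toleration). A quick check, plus a peek at the logs of one of the pods:

kubectl -n logging get daemonset loki-promtail
kubectl -n logging logs daemonset/loki-promtail --tail=20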

8. Deploy

kubectl apply -f loki-rbac.yaml
kubectl apply -f loki-configmap.yaml
kubectl apply -f loki.yaml
kubectl apply -f loki-promtail-rbac.yaml
kubectl apply -f loki-promtail-configmap.yaml
kubectl apply -f loki-promtail.yaml
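After everything is applied, Loki's readiness can be verified through the same endpoints the probes use (a sketch using a local port-forward):

kubectl -n logging port-forward svc/loki 3100:3100 &
curl http://127.0.0.1:3100/ready
curl -s http://127.0.0.1:3100/metrics | head

/ready reports ready once the ingester has joined the ring; /metrics confirms the http-metrics port is serving.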

9. Configure the Loki data source in Grafana
Set the HTTP URL to: http://ip:32537
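Before saving the data source, the same NodePort can be probed directly; once promtail has shipped some logs, the labels endpoint should return a non-empty list (replace <node-ip> with one of your node addresses):

curl http://<node-ip>:32537/loki/api/v1/labels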

10. View the logs
In Explore, select Loki as the data source.
