Kubernetes 1.6: Deploying Prometheus and Grafana (with data persistence)


A post on deploying a highly available Kubernetes cluster is in the works. It covers a lot of ground, so it is coming along slowly. Stay tuned.

This article is mainly about having Grafana store its data in MySQL, so that its configuration persists.

Note: most of the YAML files in this article come from GitHub, so if you run into problems, try searching GitHub for a fix first. You are of course also welcome to leave a comment and discuss.

1.0 Preparation

1.1 Environment

  • Operating system:
    CentOS 7
  • Kubernetes version:
    1.6.0
  • Docker version:
    1.12.6

1.2 Image preparation

Deployments often fail because image pulls fail or time out, usually due to slow connections to servers outside China or very large images. So, in keeping with the style of my earlier posts, first pull every image this article needs onto all of the nodes.

The images used in this post are:

gcr.io/google_containers/kube-state-metrics:v0.5.0
giantswarm/tiny-tools
dockermuenster/caddy:0.9.3
prom/node-exporter:v0.14.0
prom/prometheus:v1.7.0

grafana/grafana:4.2.0
phpmyadmin/phpmyadmin@sha256:95b005cf4c5f15ff670a31f576a50db8d164c6692752bda6176af3fea0e60812

2.0 Installing Prometheus

2.1 Create the namespace

First we create a namespace named monitoring. Everything that follows will be installed into this namespace.
namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring

Create it:

kubectl create -f namespace.yaml

Check the result:

[root@k8s01 ~]# kubectl get namespace | grep monitoring
monitoring      Active    1h

2.2 Create node-exporter

File contents: exporter-daemonset.yaml

[root@k8s01 node-exporter]# cat exporter-daemonset.yaml 
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: prometheus-node-exporter
  namespace: monitoring
  labels:
    app: prometheus
    component: node-exporter
spec:
  template:
    metadata:
      name: prometheus-node-exporter
      labels:
        app: prometheus
        component: node-exporter
    spec:
      containers:
      - image: prom/node-exporter:v0.14.0
        name: prometheus-node-exporter
        ports:
        - name: prom-node-exp
          #^ must be an IANA_SVC_NAME (at most 15 characters, ..)
          containerPort: 9100
          hostPort: 9100
      hostNetwork: true
      hostPID: true

File contents: exporter-service.yaml

[root@k8s01 node-exporter]# cat exporter-service.yaml 
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  name: prometheus-node-exporter
  namespace: monitoring
  labels:
    app: prometheus
    component: node-exporter
spec:
  clusterIP: None
  ports:
    - name: prometheus-node-exporter
      port: 9100
      protocol: TCP
  selector:
    app: prometheus
    component: node-exporter
  type: ClusterIP

Create the node-exporter DaemonSet and Service:

kubectl create -f exporter-daemonset.yaml  -f exporter-service.yaml

Once created, confirm success:

kubectl -n monitoring get daemonset,svc
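
Because the DaemonSet uses hostNetwork and hostPort 9100, every node answers directly on that port, which makes for a quick sanity check. A small sketch; <node-ip> is a placeholder for any cluster node's address:

# node-exporter binds host port 9100 on each node
curl -s http://<node-ip>:9100/metrics | head -n 5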

2.3 Create kube-state-metrics

File contents: deployment.yaml

[root@k8s01 kube-state-metrics]# cat deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: monitoring
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: gcr.io/google_containers/kube-state-metrics:v0.5.0
        ports:
        - containerPort: 8080

File contents: service.yaml

[root@k8s01 kube-state-metrics]# cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  name: kube-state-metrics
  namespace: monitoring
  labels:
    app: kube-state-metrics
spec:
  ports:
  - name: kube-state-metrics
    port: 8080
    protocol: TCP
  selector:
    app: kube-state-metrics

Create the kube-state-metrics Deployment and Service:

kubectl create -f deployment.yaml  -f service.yaml 

Once created, confirm success:

kubectl -n monitoring get deployment,svc
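
kube-state-metrics is only reachable inside the cluster, but a local port-forward to one of its pods lets you eyeball the metrics. A sketch:

# grab the name of the first kube-state-metrics pod
POD=$(kubectl -n monitoring get pod -l app=kube-state-metrics -o jsonpath='{.items[0].metadata.name}')
# forward its port 8080 to localhost and fetch a few metric lines
kubectl -n monitoring port-forward "$POD" 8080:8080 &
sleep 2
curl -s http://127.0.0.1:8080/metrics | head -n 5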

2.4 Create node-directory-size-metrics

File contents: daemonset.yaml

[root@k8s01 node-directory-size-metrics]# cat daemonset.yaml 
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-directory-size-metrics
  namespace: monitoring
  annotations:
    description: |
      This `DaemonSet` provides metrics in Prometheus format about disk usage on the nodes.
      The container `read-du` reads in sizes of all directories below /mnt and writes that to `/tmp/metrics`. It only reports directories larger than `100M` for now.
      The other container `caddy` just hands out the contents of that file on request via `http` on `/metrics` at port `9102` which are the defaults for Prometheus.
      These are scheduled on every node in the Kubernetes cluster.
      To choose directories from the node to check, just mount them on the `read-du` container below `/mnt`.
spec:
  template:
    metadata:
      labels:
        app: node-directory-size-metrics
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9102'
        description: |
          This `Pod` provides metrics in Prometheus format about disk usage on the node.
          The container `read-du` reads in sizes of all directories below /mnt and writes that to `/tmp/metrics`. It only reports directories larger than `100M` for now.
          The other container `caddy` just hands out the contents of that file on request on `/metrics` at port `9102` which are the defaults for Prometheus.
          This `Pod` is scheduled on every node in the Kubernetes cluster.
          To choose directories from the node to check just mount them on `read-du` below `/mnt`.
    spec:
      containers:
      - name: read-du
        image: giantswarm/tiny-tools
        imagePullPolicy: Always
        # FIXME threshold via env var
        # The
        command:
        - fish
        - --command
        - |
          touch /tmp/metrics-temp
          while true
            for directory in (du --bytes --separate-dirs --threshold=100M /mnt)
              echo $directory | read size path
              echo "node_directory_size_bytes{path=\"$path\"} $size" \
                >> /tmp/metrics-temp
            end
            mv /tmp/metrics-temp /tmp/metrics
            sleep 300
          end
        volumeMounts:
        - name: host-fs-var
          mountPath: /mnt/var
          readOnly: true
        - name: metrics
          mountPath: /tmp
      - name: caddy
        image: dockermuenster/caddy:0.9.3
        command:
        - "caddy"
        - "-port=9102"
        - "-root=/var/www"
        ports:
        - containerPort: 9102
        volumeMounts:
        - name: metrics
          mountPath: /var/www
      volumes:
      - name: host-fs-var
        hostPath:
          path: /var
      - name: metrics
        emptyDir:
          medium: Memory

Create the node-directory-size-metrics DaemonSet:

kubectl create -f daemonset.yaml 

Once created, confirm success:

kubectl -n monitoring get daemonset
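
The caddy container serves the collected directory sizes on port 9102 inside the pod, so the same port-forward trick verifies it:

POD=$(kubectl -n monitoring get pod -l app=node-directory-size-metrics -o jsonpath='{.items[0].metadata.name}')
kubectl -n monitoring port-forward "$POD" 9102:9102 &
sleep 2
# should print node_directory_size_bytes{...} samples once the first du pass finishes
curl -s http://127.0.0.1:9102/metrics | head -n 5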

2.5 Create the Prometheus ConfigMap

File contents: configmap.yaml

[root@k8s01 prometheus]# cat configmap.yaml 
apiVersion: v1
data:
  prometheus.yaml: |
    global:
      scrape_interval: 10s
      scrape_timeout: 10s
      evaluation_interval: 10s
    rule_files:
      - "/etc/prometheus-rules/*.rules"
    scrape_configs:

      # https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L37
      - job_name: 'kubernetes-nodes'
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - source_labels: [__address__]
            regex: '(.*):10250'
            replacement: '${1}:10255'
            target_label: __address__

      # https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L79
      - job_name: 'kubernetes-endpoints'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
            action: replace
            target_label: __scheme__
            regex: (https?)
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            action: replace
            target_label: __address__
            regex: (.+)(?::\d+);(\d+)
            replacement: $1:$2
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            action: replace
            target_label: kubernetes_name

      # https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L119
      - job_name: 'kubernetes-services'
        metrics_path: /probe
        params:
          module: [http_2xx]
        kubernetes_sd_configs:
          - role: service
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
            action: keep
            regex: true
          - source_labels: [__address__]
            target_label: __param_target
          - target_label: __address__
            replacement: blackbox
          - source_labels: [__param_target]
            target_label: instance
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            target_label: kubernetes_name

      # https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L156
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: (.+):(?:\d+);(\d+)
            replacement: ${1}:${2}
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: kubernetes_pod_name
          - source_labels: [__meta_kubernetes_pod_container_port_number]
            action: keep
            regex: 9\d{3}
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: prometheus-core
  namespace: monitoring

Create the ConfigMap:

kubectl create -f configmap.yaml 

Once created, confirm success:

kubectl -n monitoring get configmap
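
If you want to lint the scrape configuration before Prometheus loads it, the promtool binary shipped in the same image can check it. A sketch, assuming Docker is available on the host; note that promtool may complain about in-cluster file paths (service account certificates, rule files) that do not exist on your workstation:

# dump the rendered config out of the ConfigMap (the dot in the key must be escaped)
kubectl -n monitoring get configmap prometheus-core -o jsonpath='{.data.prometheus\.yaml}' > /tmp/prometheus.yaml
# validate it with the promtool from the Prometheus 1.x image
docker run --rm -v /tmp:/cfg --entrypoint promtool prom/prometheus:v1.7.0 check-config /cfg/prometheus.yaml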

2.6 Create prometheus-core

File contents: deployment.yaml. Note that this Deployment mounts a ConfigMap named prometheus-rules, which we only create in section 2.7; the pod cannot start until that ConfigMap exists.

[root@k8s01 prometheus]# cat deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus-core
  namespace: monitoring
  labels:
    app: prometheus
    component: core
spec:
  replicas: 1
  template:
    metadata:
      name: prometheus-main
      labels:
        app: prometheus
        component: core
    spec:
      serviceAccountName: prometheus-k8s
      containers:
      - name: prometheus
        image: prom/prometheus:v1.7.0
        args:
          - '-storage.local.retention=12h'
          - '-storage.local.memory-chunks=500000'
          - '-config.file=/etc/prometheus/prometheus.yaml'
          - '-alertmanager.url=http://alertmanager:9093/'
        ports:
        - name: webui
          containerPort: 9090
        resources:
          requests:
            cpu: 500m
            memory: 500M
          limits:
            cpu: 500m
            memory: 500M
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus
        - name: rules-volume
          mountPath: /etc/prometheus-rules
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-core
      - name: rules-volume
        configMap:
          name: prometheus-rules

Create prometheus-core:

kubectl create -f deployment.yaml 

Once created, confirm success:

kubectl -n monitoring get deployment

2.7 Create prometheus-rules

File contents: prometheus-rules.yaml

[root@k8s01 prometheus]# cat prometheus-rules.yaml 
apiVersion: v1
data:
  cpu-usage.rules: |
    ALERT NodeCPUUsage
      IF (100 - (avg by (instance) (irate(node_cpu{name="node-exporter",mode="idle"}[5m])) * 100)) > 75
      FOR 2m
      LABELS {
        severity="page"
      }
      ANNOTATIONS {
        SUMMARY = "{{$labels.instance}}: High CPU usage detected",
        DESCRIPTION = "{{$labels.instance}}: CPU usage is above 75% (current value is: {{ $value }})"
      }
  instance-availability.rules: |
    ALERT InstanceDown
      IF up == 0
      FOR 1m
      LABELS { severity = "page" }
      ANNOTATIONS {
        summary = "Instance {{ $labels.instance }} down",
        description = "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minute.",
      }
  low-disk-space.rules: |
    ALERT NodeLowRootDisk
      IF ((node_filesystem_size{mountpoint="/root-disk"} - node_filesystem_free{mountpoint="/root-disk"} ) / node_filesystem_size{mountpoint="/root-disk"} * 100) > 75
      FOR 2m
      LABELS {
        severity="page"
      }
      ANNOTATIONS {
        SUMMARY = "{{$labels.instance}}: Low root disk space",
        DESCRIPTION = "{{$labels.instance}}: Root disk usage is above 75% (current value is: {{ $value }})"
      }

    ALERT NodeLowDataDisk
      IF ((node_filesystem_size{mountpoint="/data-disk"} - node_filesystem_free{mountpoint="/data-disk"} ) / node_filesystem_size{mountpoint="/data-disk"} * 100) > 75
      FOR 2m
      LABELS {
        severity="page"
      }
      ANNOTATIONS {
        SUMMARY = "{{$labels.instance}}: Low data disk space",
        DESCRIPTION = "{{$labels.instance}}: Data disk usage is above 75% (current value is: {{ $value }})"
      }
  mem-usage.rules: |
    ALERT NodeSwapUsage
      IF (((node_memory_SwapTotal-node_memory_SwapFree)/node_memory_SwapTotal)*100) > 75
      FOR 2m
      LABELS {
        severity="page"
      }
      ANNOTATIONS {
        SUMMARY = "{{$labels.instance}}: Swap usage detected",
        DESCRIPTION = "{{$labels.instance}}: Swap usage usage is above 75% (current value is: {{ $value }})"
      }

    ALERT NodeMemoryUsage
      IF (((node_memory_MemTotal-node_memory_MemFree-node_memory_Cached)/(node_memory_MemTotal)*100)) > 75
      FOR 2m
      LABELS {
        severity="page"
      }
      ANNOTATIONS {
        SUMMARY = "{{$labels.instance}}: High memory usage detected",
        DESCRIPTION = "{{$labels.instance}}: Memory usage is above 75% (current value is: {{ $value }})"
      }
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: prometheus-rules
  namespace: monitoring

Create prometheus-rules:

kubectl create -f prometheus-rules.yaml 
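
Once this ConfigMap exists, the prometheus-core pod from section 2.6 can finally start. Later edits to the rules do not need a pod restart: Prometheus 1.x reloads its configuration on a POST to /-/reload. A sketch, using the NodePort that shows up in section 2.10 (allow a minute or two for the updated ConfigMap to propagate into the pod first):

curl -XPOST http://<node-ip>:30476/-/reload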

2.8 Create the Prometheus Service

File contents: service.yaml

[root@k8s01 prometheus]# cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    app: prometheus
    component: core
  annotations:
    prometheus.io/scrape: 'true'
spec:
  type: NodePort
  ports:
    - port: 9090
      protocol: TCP
      name: webui
  selector:
    app: prometheus
    component: core

Create the Prometheus Service:

kubectl create -f service.yaml 

Once created, confirm success:

kubectl -n monitoring get svc

2.9 Create RBAC

File contents: rbac.yaml

[root@k8s01 grafana]# cat rbac.yaml 
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
- apiGroups: [""]
  resources:
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  verbs: ["list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs: ["list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-k8s
  namespace: monitoring

Create the RBAC objects:

kubectl create -f rbac.yaml
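
With the bindings in place you can spot-check the granted permissions by impersonating the service accounts; kubectl auth can-i is available in kubectl 1.6:

# both commands should print "yes"
kubectl auth can-i list pods --as=system:serviceaccount:monitoring:prometheus-k8s
kubectl auth can-i list deployments.extensions --as=system:serviceaccount:monitoring:kube-state-metrics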

2.10 A quick checkpoint

By this point Prometheus is installed and already collecting data. Next we run a few small tests to verify the installation.

Check the pod status with the following command:

kubectl -n monitoring get pod

The output should show kube-state-metrics, prometheus-core, and prometheus-node-exporter all in the Running state.

Then check the services and their node ports:

[root@k8s01 prometheus]# kubectl -n monitoring  get svc 
NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kube-state-metrics         10.254.130.125   <none>        8080/TCP         23h
prometheus                 10.254.105.122   <nodes>       9090:30476/TCP   23h
prometheus-node-exporter   None             <none>        9100/TCP         23h

Note: for this test to work, the RBAC objects from section 2.9 (the same rbac.yaml as in section 3.2.1) must already be applied.

As the output shows, Prometheus is exposed on node port 30476. Open http://xx.xxx.xxx.xx:30476/ in a browser, where xx.xxx.xxx.xx is the IP address of any node in the k8s cluster. The UI looks like this:

(screenshot: Prometheus web UI)

Pick a metric under "insert metric at cursor", for example node_load15, and click Execute. The resulting graph:

(screenshot: node_load15 graphed in Prometheus)
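
The same query can also be issued against Prometheus's HTTP API, which is handy for scripted checks:

# returns a JSON vector with one node_load15 sample per node
curl -s 'http://<node-ip>:30476/api/v1/query?query=node_load15'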

Take a breather

Prometheus is now fully installed. Next we install Grafana, the most troublesome part of this article and a problem I wrestled with for several weeks. This may not be the best solution, but it has been running in production without any issues.

3.0 Installing Grafana (MySQL)

When I first installed Grafana I used the default configuration from GitHub, but whenever the pod had a problem, every manually added setting was lost. I tried mounting the config file over NFS and creating a persistent volume for it, neither of which worked. Finally I tried storing the configuration in MySQL, and that actually does the job.
So in this part of the deployment we install both MySQL and Grafana.

If you only need a test installation, a single-node MySQL is much simpler. Here we want to stay closer to production, so we install a MySQL cluster.

Note: the YAML files in this part all come from GitHub as well. If anything here raises questions, look for the corresponding fix on GitHub, or leave a comment to discuss.

3.1 Installing MySQL in K8s

3.1.1 Create the mysql namespace

File contents: 00namespace.yml

[root@k8s01 mysql]# cat 00namespace.yml 
---
apiVersion: v1
kind: Namespace
metadata:
  name: mysql

Create the mysql namespace:

kubectl create -f 00namespace.yml

Check the result:

kubectl get namespace | grep mysql
3.1.2 Create the persistent volumes (PV)

Note: we are using an NFS server here.
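
The PVs below point at three directories exported from 192.168.200.72, so those directories must exist and be exported before the PVs are created. A minimal sketch to run on the NFS server; the export options are a common default, not the only choice, so adjust them to your environment:

# on 192.168.200.72
mkdir -p /opt/mariadb/mariadb00 /opt/mariadb/mariadb01 /opt/mariadb/mariadb02
echo '/opt/mariadb *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra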

File contents: 11pv.yml

[root@k8s01 mysql]# cat 11pv.yml 
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  name: mysql-mariadb-0
  namespace: mysql
  labels: 
    app: mariadb
    podindex: "0"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /opt/mariadb/mariadb00
    server: 192.168.200.72

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-mariadb-1
  namespace: mysql
  labels:
    app: mariadb
    podindex: "1"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /opt/mariadb/mariadb01
    server: 192.168.200.72

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-mariadb-2
  namespace: mysql
  labels:
    app: mariadb
    podindex: "2"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /opt/mariadb/mariadb02
    server: 192.168.200.72

Create the PVs:

kubectl create -f 11pv.yml 

Check the result:

kubectl get pv
3.1.3 Create the persistent volume claims (PVC)

File contents: 10pvc.yml

[root@k8s01 mysql]# cat 10pvc.yml 
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-mariadb-0
  namespace: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      app: mariadb
      podindex: "0"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-mariadb-1
  namespace: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      app: mariadb
      podindex: "1"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-mariadb-2
  namespace: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      app: mariadb
      podindex: "2"

Create the PVCs:

kubectl create -f 10pvc.yml 

Check the result:

kubectl get pvc
3.1.4 Create the mariadb (headless) Service

File contents: 20mariadb-service.yml

[root@k8s01 mysql]# cat 20mariadb-service.yml 
# the "Headless Service, used to control the network domain"
---
apiVersion: v1
kind: Service
metadata:
  name: mariadb
  namespace: mysql
spec:
  clusterIP: None
  selector:
    app: mariadb
  ports:
    - port: 3306
      name: mysql
    - port: 4444
      name: sst
    - port: 4567
      name: replication
    - protocol: UDP
      port: 4567
      name: replicationudp
    - port: 4568
      name: ist

Create the mariadb Service:

kubectl create -f 20mariadb-service.yml 

Check the result:

kubectl -n mysql get svc
3.1.5 Create the mysql Service

File contents: 30mysql-service.yml

[root@k8s01 mysql]# cat 30mysql-service.yml 
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  selector:
    app: mariadb

Create the mysql Service:

kubectl create -f 30mysql-service.yml

Check the result:

kubectl -n mysql get svc
3.1.6 Create the database configuration file

File contents: galera.cnf

[root@k8s01 conf-d]# cat galera.cnf 
# this is read by the standalone daemon and embedded servers
[server]

# this is only for the mysqld standalone daemon
[mysqld]

#
# * Galera-related settings
#
# https://mariadb.com/kb/en/mariadb/galera-cluster-system-variables/
#
[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider="/usr/lib/galera/libgalera_smm.so"
wsrep_cluster_address="gcomm://mariadb-0.mariadb,mariadb-1.mariadb,mariadb-2.mariadb"
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep-sst-method=rsync

#
# Allow server to accept connections on all interfaces.
#
bind-address=0.0.0.0
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0

# this is only for embedded server
[embedded]

# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]

# This group is only read by MariaDB-10.1 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.1]

Create the conf-d ConfigMap:

kubectl create configmap "conf-d" --from-file="galera.cnf" --namespace=mysql

Check the result:

kubectl -n mysql get configmap
3.1.7 Set the MySQL root password

Note: this command needs one edit. Replace the "XXXXX" below with your chosen root password.

kubectl create secret generic mysql-secret --namespace=mysql --from-literal=rootpw=XXXXX
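
You can confirm the secret round-trips correctly; the output should be the password you just set:

kubectl -n mysql get secret mysql-secret -o jsonpath='{.data.rootpw}' | base64 -d
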
3.1.8 Create the bootstrap mariadb node

Be careful with this step: after creating the StatefulSet, wait until the pod reaches the Running state before moving on to the next step.
File contents: 50mariadb.yml

[root@k8s01 mysql]# cat 50mariadb.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mariadb
  namespace: mysql
spec:
  serviceName: "mariadb"
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mariadb
          image: mariadb:10.1.22@sha256:21afb9ab191aac8ced2e1490ad5ec6c0f1c5704810d73451dd124670bcacfb14
          ports:
            - containerPort: 3306
              name: mysql
            - containerPort: 4444
              name: sst
            - containerPort: 4567
              name: replication
            - containerPort: 4567
              protocol: UDP
              name: replicationudp
            - containerPort: 4568
              name: ist
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: rootpw
            - name: MYSQL_INITDB_SKIP_TZINFO
              value: "yes"
          args:
            - --character-set-server=utf8mb4
            - --collation-server=utf8mb4_unicode_ci
            # Remove after first replicas=1 create
            - --wsrep-new-cluster
          volumeMounts:
            - name: mysql
              mountPath: /var/lib/mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            - name: initdb
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: conf
          configMap:
            name: conf-d
        - name: initdb
          emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: mysql
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Create mariadb:

kubectl create -f 50mariadb.yml

Check the result:

kubectl -n mysql get statefulset
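
Before moving on, make sure mariadb-0 is Running and has finished bootstrapping the Galera cluster. Its log should contain lines along the lines of "Synced with group" and "ready for connections":

kubectl -n mysql get pod mariadb-0
kubectl -n mysql logs mariadb-0 | grep -iE 'synced|ready for connections'
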
3.1.9 Scale out the remaining mariadb nodes

Before running this step, confirm that the previous step (3.1.8) has fully completed.
File contents: 50mariadb.yml.unbootstrap.yml

[root@k8s01 mysql]# cat 50mariadb.yml.unbootstrap.yml 
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mariadb
  namespace: mysql
spec:
  serviceName: "mariadb"
  replicas: 3
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mariadb
          image: mariadb:10.1.22@sha256:21afb9ab191aac8ced2e1490ad5ec6c0f1c5704810d73451dd124670bcacfb14
          ports:
            - containerPort: 3306
              name: mysql
            - containerPort: 4444
              name: sst
            - containerPort: 4567
              name: replication
            - containerPort: 4567
              protocol: UDP
              name: replicationudp
            - containerPort: 4568
              name: ist
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: rootpw
            - name: MYSQL_INITDB_SKIP_TZINFO
              value: "yes"
          args:
            - --character-set-server=utf8mb4
            - --collation-server=utf8mb4_unicode_ci
            # Remove after first replicas=1 create
#            - --wsrep-new-cluster
          volumeMounts:
            - name: mysql
              mountPath: /var/lib/mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            - name: initdb
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: conf
          configMap:
            name: conf-d
        - name: initdb
          emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: mysql
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Create mariadb. Note that this uses apply, not create; using create would fail with an error that the StatefulSet already exists:

kubectl apply -f 50mariadb.yml.unbootstrap.yml 

Check the result:

kubectl -n mysql get statefulset
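
Once all three pods are Running, it is worth confirming that the Galera cluster actually formed. Enter the root password at the prompt; wsrep_cluster_size should report 3:

kubectl -n mysql exec -it mariadb-0 -- mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
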
3.1.10 Create the phpMyAdmin Service

Hang in there, we are almost done (my hands are worn out from all this typing). First we create the phpMyAdmin service.

File contents: 30myadmin-service.yml

[root@k8s01 myadmin]# cat 30myadmin-service.yml 
apiVersion: v1
kind: Service
metadata:
  name: myadmin
  namespace: mysql
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30080
  selector:
    app: myadmin

Create the phpMyAdmin Service:

kubectl create -f 30myadmin-service.yml 

Check the result:

 kubectl -n mysql get svc
3.1.11 Create phpMyAdmin

File contents: 50myadmin.yml

[root@k8s01 myadmin]# cat 50myadmin.yml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: myadmin
  namespace: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin/phpmyadmin@sha256:95b005cf4c5f15ff670a31f576a50db8d164c6692752bda6176af3fea0e60812
          ports:
            - containerPort: 80
          env:
            - name: PMA_HOST
              value: mysql

Create phpMyAdmin:

kubectl create -f  50myadmin.yml 

Check the result:

 kubectl -n mysql get ReplicationController
3.1.12 Checkpoint

MySQL and phpMyAdmin are now up, so take a short break here. Next we log into phpMyAdmin for a bit of setup, then connect Grafana to MySQL so that Grafana's configuration is stored there, and we are done.

3.1.13 Configure phpMyAdmin

First, find phpMyAdmin's node port with the following command:

[root@k8s01 myadmin]# kubectl get svc -n mysql
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
mariadb   None            <none>        3306/TCP,4444/TCP,4567/TCP,4567/UDP,4568/TCP   51d
myadmin   10.254.76.61    <nodes>       80:30080/TCP                                   50d
mysql     10.254.81.246   <none>        3306/TCP                                       44d

As the output shows, myadmin is exposed on node port 30080, so log in on port 30080. The page looks like this:

(screenshot: phpMyAdmin login page)

After logging in, use the UI to create a database named grafana and set up a user and password.

(screenshot: creating the grafana database in phpMyAdmin)
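
If you prefer the command line over phpMyAdmin, the same database can be created directly in the mariadb pod. A sketch; the grafana.ini below connects as root, so creating a dedicated user is optional:

kubectl -n mysql exec -it mariadb-0 -- mysql -uroot -p -e "CREATE DATABASE grafana CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"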

3.1.14 MySQL setup complete

The database is really only a supporting piece here, yet it took up most of the space; there was no way around that. Treat it as a bonus document on running a MySQL cluster on K8s.
Now we are ready to configure Grafana, which is what we actually came for.

3.2 Installing Grafana

3.2.1 Configure RBAC

Without RBAC configured, the earlier components are restricted in what they can access across namespaces. The rbac.yaml used here is byte-for-byte the same file we already created in section 2.9, so its contents are not repeated. If you have not applied it yet, do so now:

kubectl create -f rbac.yaml

3.2.2 Create the Grafana Service

File contents: grafana-svc.yaml

[root@k8s01 grafana]# cat grafana-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
    - port: 3000
  selector:
    app: grafana
    component: core

Create the Grafana Service:

kubectl create -f grafana-svc.yaml 

Check the result:

kubectl -n monitoring get svc 
3.2.3 Create Grafana's configuration file

Configuration file contents: grafana.ini

[root@k8s01 production_yaml]# cat grafana.ini 
##################### Grafana Configuration Example #####################
#
# Everything has defaults so you only need to uncomment things you want to
# change

# possible values : production, development
; app_mode = production

# instance name, defaults to HOSTNAME environment variable value or hostname if HOSTNAME var is empty
; instance_name = ${HOSTNAME}

#################################### Paths ####################################
[paths]
# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)
#
;data = /var/lib/grafana
#
# Directory where grafana can store logs
#
;logs = /var/log/grafana
#
# Directory where grafana will automatically scan and look for plugins
#
;plugins = /var/lib/grafana/plugins

#
#################################### Server ####################################
[server]
# Protocol (http or https)
;protocol = http

# The ip address to bind to, empty will bind to all interfaces
;http_addr =

# The http port  to use
;http_port = 3000

# The public facing domain name used to access grafana from a browser
;domain = localhost

# Redirect to correct domain if host header does not match domain
# Prevents DNS rebinding attacks
;enforce_domain = false

# The full public facing url you use in browser, used for redirects and emails
# If you use reverse proxy and sub path specify full url (with sub path)
;root_url = http://localhost:3000

# Log web requests
;router_logging = false

# the path relative working path
;static_root_path = public

# enable gzip
;enable_gzip = false

# https certs & key file
;cert_file =
;cert_key =

#################################### Database ####################################
[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as one string using the url property.

# Either "mysql", "postgres" or "sqlite3", it's your choice
;type = sqlite3
type = mysql
host = mariadb.mysql.svc.cluster.local:3306
name = grafana
user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
password = 9EEihZ6BfyP24k1zCW3S

# Use either URL or the previous fields to configure the database
# Example: mysql://user:secret@host:port/database
;url =

# For "postgres" only, either "disable", "require" or "verify-full"
;ssl_mode = disable

# For "sqlite3" only, path relative to data_path setting
;path = grafana.db

# Max conn setting default is 0 (mean not set)
;max_conn =
;max_idle_conn =
;max_open_conn =


#################################### Session ####################################
[session]
# Either "memory", "file", "redis", "mysql", "postgres", default is "file"
;provider = file

# Provider config options
# memory: not have any config yet
# file: session dir path, is relative to grafana data_path
# redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=grafana`
# mysql: go-sql-driver/mysql dsn config string, e.g. `user:password@tcp(127.0.0.1:3306)/database_name`
# postgres: user=a password=b host=localhost port=5432 dbname=c sslmode=disable
;provider_config = sessions

# Session cookie name
;cookie_name = grafana_sess

# If you use session in https only, default is false
;cookie_secure = false

# Session life time, default is 86400
;session_life_time = 86400

#################################### Data proxy ###########################
[dataproxy]

# This enables data proxy logging, default is false
;logging = false


#################################### Analytics ####################################
[analytics]
# Server reporting, sends usage counters to stats.grafana.org every 24 hours.
# No ip addresses are being tracked, only simple counters to track
# running instances, dashboard and error counts. It is very helpful to us.
# Change this option to false to disable reporting.
;reporting_enabled = true

# Set to false to disable all checks to https://grafana.net
# for new versions (grafana itself and plugins), the check is used
# in some UI views to notify that grafana or plugin update exists
# This option does not cause any auto updates, nor send any information
# only a GET request to http://grafana.net to get latest versions
;check_for_updates = true

# Google Analytics universal tracking code, only enabled if you specify an id here
;google_analytics_ua_id =

#################################### Security ####################################
[security]
# default admin user, created on startup
;admin_user = admin

# default admin password, can be changed before first start of grafana,  or in profile settings
;admin_password = admin

# used for signing
;secret_key = SW2YcwTIb9zpOOhoPsMm

# Auto-login remember days
;login_remember_days = 7
;cookie_username = grafana_user
;cookie_remember_name = grafana_remember

# disable gravatar profile images
;disable_gravatar = false

# data source proxy whitelist (ip_or_domain:port separated by spaces)
;data_source_proxy_whitelist =

[snapshots]
# snapshot sharing options
;external_enabled = true
;external_snapshot_url = https://snapshots-origin.raintank.io
;external_snapshot_name = Publish to snapshot.raintank.io

# remove expired snapshot
;snapshot_remove_expired = true

# remove snapshots after 90 days
;snapshot_TTL_days = 90

#################################### Users ####################################
[users]
# disable user signup / registration
;allow_sign_up = true

# Allow non admin users to create organizations
;allow_org_create = true

# Set to true to automatically assign new users to the default organization (id 1)
;auto_assign_org = true

# Default role new users will be automatically assigned (if disabled above is set to true)
;auto_assign_org_role = Viewer

# Background text for the user field on the login page
;login_hint = email or username

# Default UI theme ("dark" or "light")
;default_theme = dark

[auth]
# Set to true to disable (hide) the login form, useful if you use OAuth, defaults to false
;disable_login_form = false

#################################### Anonymous Auth ##########################
[auth.anonymous]
# enable anonymous access
;enabled = false

# specify organization name that should be used for unauthenticated users
;org_name = Main Org.

# specify role for unauthenticated users
;org_role = Viewer

#################################### Github Auth ##########################
[auth.github]
;enabled = false
;allow_sign_up = true
;client_id = some_id
;client_secret = some_secret
;scopes = user:email,read:org
;auth_url = https://github.com/login/oauth/authorize
;token_url = https://github.com/login/oauth/access_token
;api_url = https://api.github.com/user
;team_ids =
;allowed_organizations =

#################################### Google Auth ##########################
[auth.google]
;enabled = false
;allow_sign_up = true
;client_id = some_client_id
;client_secret = some_client_secret
;scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email
;auth_url = https://accounts.google.com/o/oauth2/auth
;token_url = https://accounts.google.com/o/oauth2/token
;api_url = https://www.googleapis.com/oauth2/v1/userinfo
;allowed_domains =

#################################### Generic OAuth ##########################
[auth.generic_oauth]
;enabled = false
;name = OAuth
;allow_sign_up = true
;client_id = some_id
;client_secret = some_secret
;scopes = user:email,read:org
;auth_url = https://foo.bar/login/oauth/authorize
;token_url = https://foo.bar/login/oauth/access_token
;api_url = https://foo.bar/user
;team_ids =
;allowed_organizations =

#################################### Grafana.net Auth ####################
[auth.grafananet]
;enabled = false
;allow_sign_up = true
;client_id = some_id
;client_secret = some_secret
;scopes = user:email
;allowed_organizations =

#################################### Auth Proxy ##########################
[auth.proxy]
;enabled = false
;header_name = X-WEBAUTH-USER
;header_property = username
;auto_sign_up = true
;ldap_sync_ttl = 60
;whitelist = 192.168.1.1, 192.168.2.1

#################################### Basic Auth ##########################
[auth.basic]
;enabled = true

#################################### Auth LDAP ##########################
[auth.ldap]
;enabled = false
;config_file = /etc/grafana/ldap.toml
;allow_sign_up = true

#################################### SMTP / Emailing ##########################
[smtp]
;enabled = false
;host = localhost:25
;user =
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
;password =
;cert_file =
;key_file =
;skip_verify = false
;from_address = admin@grafana.localhost
;from_name = Grafana

[emails]
;welcome_email_on_sign_up = false

#################################### Logging ##########################
[log]
# Either "console", "file", "syslog". Default is console and  file
# Use space to separate multiple modes, e.g. "console file"
;mode = console file

# Either "trace", "debug", "info", "warn", "error", "critical", default is "info"
;level = info

# optional settings to set different levels for specific loggers. Ex filters = sqlstore:debug
;filters =


# For "console" mode only
[log.console]
;level =

# log line format, valid options are text, console and json
;format = console

# For "file" mode only
[log.file]
;level =

# log line format, valid options are text, console and json
;format = text

# This enables automated log rotate(switch of following options), default is true
;log_rotate = true

# Max line number of single file, default is 1000000
;max_lines = 1000000

# Max size shift of single file, default is 28 means 1 << 28, 256MB
;max_size_shift = 28

# Segment log daily, default is true
;daily_rotate = true

# Expired days of log file(delete after max days), default is 7
;max_days = 7

[log.syslog]
;level =

# log line format, valid options are text, console and json
;format = text

# Syslog network type and address. This can be udp, tcp, or unix. If left blank, the default unix endpoints will be used.
;network =
;address =

# Syslog facility. user, daemon and local0 through local7 are valid.
;facility =

# Syslog tag. By default, the process' argv[0] is used.
;tag =


#################################### AMQP Event Publisher ##########################
[event_publisher]
;enabled = false
;rabbitmq_url = amqp://localhost/
;exchange = grafana_events

;#################################### Dashboard JSON files ##########################
[dashboards.json]
;enabled = false
;path = /var/lib/grafana/dashboards

#################################### Alerting ############################
[alerting]
# Disable alerting engine & UI features
;enabled = true
# Makes it possible to turn off alert rule execution but alerting UI is visible
;execute_alerts = true

#################################### Internal Grafana Metrics ##########################
# Metrics available at HTTP API Url /api/metrics
[metrics]
# Disable / Enable internal metrics
;enabled           = true

# Publish interval
;interval_seconds  = 10

# Send internal metrics to Graphite
[metrics.graphite]
# Enable by setting the address setting (ex localhost:2003)
;address =
;prefix = prod.grafana.%(instance_name)s.

#################################### Internal Grafana Metrics ##########################
# Url used to import dashboards directly from Grafana.net
[grafana_net]
;url = https://grafana.net

#################################### External image storage ##########################
[external_image_storage]
# Used for uploading images to public servers so they can be included in slack/email messages.
# you can choose between (s3, webdav)
;provider =

[external_image_storage.s3]
;bucket_url =
;access_key =
;secret_key =

[external_image_storage.webdav]
;url =
;username =
;password =

Pay attention to the following lines in this file:

type = mysql
The storage backend type.
host = mariadb.mysql.svc.cluster.local:3306
The database's address inside k8s.
name = grafana
The database we created in phpMyAdmin.
user = root
The database user.
password = XXXXX
The database password, the "XXXXX" we set in section 3.1.7. (The sample file above still shows my own value; be sure to replace it with yours.)

After editing the configuration file, save it, then create the grafana-etc ConfigMap with the following command:

kubectl create configmap "grafana-etc" --from-file=grafana.ini --namespace=monitoring

When done, check the result:

kubectl -n monitoring get configmap
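
One note for later: kubectl create configmap refuses to overwrite an existing ConfigMap. If you need to push an updated grafana.ini afterwards, one way is to regenerate the manifest and replace it:

kubectl create configmap grafana-etc --from-file=grafana.ini --namespace=monitoring --dry-run -o yaml | kubectl replace -f -
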
3.2.4 Create the Grafana Deployment

File contents: grafana-deploy.yaml

[root@k8s01 prometheus]# cat grafana-deploy.yaml 
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana-core
  namespace: monitoring
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:4.2.0
        name: grafana-core
        imagePullPolicy: IfNotPresent
        # env:
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth with the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
          # does not really work, because of template variables in exported dashboards:
          # - name: GF_DASHBOARDS_JSON_ENABLED
          #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        volumeMounts:
        - name: grafana-etc-volume
          mountPath: /etc/grafana/
          #readOnly: true
      volumes:
        - name: grafana-etc-volume
          configMap:
            name: grafana-etc
            items:
            - key: grafana.ini
              path: grafana.ini

Create the Grafana Deployment:

kubectl create -f grafana-deploy.yaml 
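
Once the pod is up, you can verify that Grafana really is writing to MariaDB instead of its local sqlite file: on first start it creates its schema (tables such as dashboard, data_source, user) in the grafana database. Enter the root password at the prompt:

kubectl -n mysql exec -it mariadb-0 -- mysql -uroot -p grafana -e "SHOW TABLES;"
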
3.2.5 Wrap-up

All of the configuration is now complete. What remains is setting up Grafana itself; see the data source sketch below. Dashboards can be imported from the Grafana website, and if you really cannot figure out how to import one, leave a comment.
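
Grafana 4.x also exposes an HTTP API, so the Prometheus data source can be added without clicking through the UI. A sketch, assuming the default admin/admin credentials and the node port 32550 that shows up in the next section:

# register the in-cluster Prometheus service as a proxied data source
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST http://<node-ip>:32550/api/datasources \
  -d '{"name":"prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy","isDefault":true}'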

3.2.6 Testing Grafana

Find Grafana's node port:

[root@k8s01 grafana]# kubectl -n monitoring get svc 
NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
grafana                    10.254.7.31      <nodes>       3000:32550/TCP   9h
kube-state-metrics         10.254.130.125   <none>        8080/TCP         1d
prometheus                 10.254.105.122   <nodes>       9090:30476/TCP   1d
prometheus-node-exporter   None             <none>        9100/TCP         1d

As the output shows, Grafana is exposed on node port 32550. Log in through the UI.

(screenshot: Grafana login page)

Default user: admin. Default password: admin.

View the monitoring data:

(screenshot: Grafana dashboards showing Prometheus data)

This took basically a whole day to write, but it is finally done.

You know how it goes: a tip is the best way to support the blogger. :)
And of course, if anything goes wrong during your testing, leave a comment and we can work it out together.

Your support is this blogger's greatest motivation.
