Docker-compose Setup for Self-hosting Development & Deployment Tools

Last week I wrote about my self-hosted Sentry install in 3 Docker containers. This week I want to bring you the rest of my self-hosted tools for developers, all rolled into a convenient docker-compose.yml.

Contents
Version Control (GitLab)
Code Analysis (SonarQube)
Email (exim4)
Code Search (Etsy Hound)
Visualization (Grafana)
User Error Monitoring (Sentry)
System Monitoring (Prometheus)
Log Monitoring (ELK (Elasticsearch, Logstash, and Kibana))
Docker Web GUI (Portainer)
All Services Rolled Up
About The Configurations Shown
The configuration files described and shown below are exactly what I’m running as of 2018-04-05 (with personal details removed, of course), so they may need to be adjusted slightly to your own preferences. In particular, you will likely have to change the volumes for all the containers. I have them set to /srv/$SERVICE_NAME for bulk data and /srv/configs/$SERVICE_NAME for configuration files. The idea is that I could move my docker-compose file and configs directory to a new computer and immediately have this same stack up and running. Some bulk data might be lost, but I don’t consider any of it critical. The docker-compose file uses version 3 of the compose format.

Version Control (GitLab)
One of the most important developer tools is a version control system. Most developers use Git, especially via GitHub. Those are of course great tools, but I also run my own GitLab server. As the name suggests, GitLab uses the same Git underneath, but it adds some more powerful features, chief among them CI/CD. GitLab has a fairly simple continuous integration/continuous deployment system built in. It uses a .gitlab-ci.yml file placed in your repository to define the operations to perform, and a separate Docker container running GitLab Runner spawns additional containers that execute the tasks you define in that file. In addition to CI/CD, since you’re running your own server you can have as many private repositories as you want and even share private repositories with selected contributors.
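To give a feel for the format, here’s a minimal .gitlab-ci.yml sketch (the job names and scripts are placeholders, not taken from one of my actual projects):

stages:
  - test
  - build

test_job:
  stage: test
  image: ubuntu:artful
  script:
    - ./run_tests.sh   # placeholder: whatever runs your test suite

build_job:
  stage: build
  image: ubuntu:artful
  script:
    - ./build.sh       # placeholder: whatever builds your artifacts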

Compose
The docker-compose section for my GitLab setup is shown below. Note that the GitLab Runner needs some additional setup, in particular obtaining a registration token. Here are GitLab’s instructions for the registration process.

##### GitLab Stack #####
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    restart: always
    container_name: gitlab
    hostname: # YOUR HOSTNAME ex. git.example.com
    links:
      - smtp
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url '# YOUR URL ex. https://git.example.com #';
        gitlab_rails['gitlab_email_from'] = '# YOUR EMAIL ADDRESS #';
        gitlab_rails['gitlab_email_reply_to'] = '# YOUR EMAIL ADDRESS #';
        gitlab_rails['smtp_enable'] = 'true';
        gitlab_rails['smtp_address'] = 'smtp';
    ports:
      - '180:80'
    volumes:
      - '/srv/configs/gitlab/gitlab:/etc/gitlab'
      - '/srv/gitlab/logs:/var/log/gitlab'
      - '/srv/gitlab/data:/var/opt/gitlab'
##### End GitLab #####
##### GitLab CI/CD Runner #####
  gitlab-runner:
    image: 'gitlab/gitlab-runner:latest'
    restart: always
    container_name: gitlab-runner
    links:
      - gitlab
    environment:
      - CI_SERVER_URL=http://gitlab/
      - RUNNER_NAME=local-docker-runner
      - REGISTER_NON_INTERACTIVE=true
      - REGISTRATION_TOKEN=# YOUR REGISTRATION TOKEN FROM GITLAB #
      - RUNNER_EXECUTOR=docker
      - DOCKER_IMAGE=ubuntu:artful
      - REGISTER_LOCKED=false
    volumes:
      - /srv/configs/gitlab/gitlab-runner:/etc/gitlab-runner
      - /srv/gitlab-runner/home:/home/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
##### End GitLab CI/CD Runner #####
##### End GitLab Stack #####

Code Analysis (SonarQube)
Code that just works isn’t good enough; you ought to enforce some guidelines on code style to avoid potential problems. One tool for this is SonarQube, a static analysis tool. That means it simply looks at your source code and runs a multitude of different rulesets against it, looking for issues. It has support for all the popular languages, and you can customize the rules that it enforces.
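Analyses are usually triggered from a build or CI job rather than by the server itself. As a rough sketch only (it assumes the sonarsource/sonar-scanner-cli image and a SONAR_TOKEN CI variable, neither of which appears in my setup below), a GitLab CI job might look like:

sonarqube_scan:
  stage: test
  image: sonarsource/sonar-scanner-cli   # assumed scanner image; any sonar-scanner install works
  script:
    - >
      sonar-scanner
      -Dsonar.host.url=http://sonar.example.com:780
      -Dsonar.projectKey=my-project
      -Dsonar.sources=.
      -Dsonar.login=$SONAR_TOKEN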

Compose
To use SonarQube you will need a MySQL or other supported database server (the configuration below shows MySQL/MariaDB). You can run MySQL in another container (see the sketch after the compose snippet) or perhaps on a separate database server. Fill in the configuration below with the database information, and that should be all the setup required.

##### Sonarqube Static Code Analysis #####
  sonarqube:
    container_name: sonarqube
    image: 'sonarqube:latest'
    restart: always
    links:
      - smtp
    ports:
      - 780:9000
    environment:
      - SONARQUBE_JDBC_URL=jdbc:mysql://# MYSQL HOST #:3306/# MYSQL DATABASE #?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
      - SONARQUBE_JDBC_USERNAME=# MYSQL USERNAME #
      - SONARQUBE_JDBC_PASSWORD=# MYSQL PASSWORD #
    volumes:
      - /srv/sonarqube/conf:/opt/sonarqube/conf
      - /srv/sonarqube/data:/opt/sonarqube/data
      - /srv/sonarqube/extensions:/opt/sonarqube/extensions
      - /srv/sonarqube/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
##### End Sonarqube Static Code Analysis #####
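If you’d rather keep the database in the same compose file, a companion MySQL container could look roughly like this (the service name, credentials, and volume path are placeholders; you’d also add it to the sonarqube service’s links and point SONARQUBE_JDBC_URL at sonarqube-db):

##### SonarQube Database (sketch) #####
  sonarqube-db:
    container_name: sonarqube-db
    image: 'mysql:5.7'
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=# A ROOT PASSWORD #
      - MYSQL_DATABASE=sonar
      - MYSQL_USER=sonar
      - MYSQL_PASSWORD=# A PASSWORD #
    volumes:
      - /srv/sonarqube/mysql:/var/lib/mysql
##### End SonarQube Database #####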

Email (exim4)
Many of the containers described in this article can email you alerts about issues if you configure them to. A really simple way to do this, if you don’t have your own mail server, is to just use your existing Gmail account.

Compose

##### SMTP Email #####
  smtp:
    image: 'tianon/exim4:latest'
    restart: always
    environment:
      GMAIL_USER: # YOUR GMAIL USERNAME #
      GMAIL_PASSWORD: # YOUR GMAIL PASSWORD #
##### End SMTP Email #####

Code Search (Etsy Hound)
Occasionally you may find that you’re writing something you know you’ve written before, but you just can’t seem to find which project or file it’s in. Hound is a very simple code search tool that indexes your repositories and allows you to search them using regular expressions. Hound requires a bit of configuration in config.json, which you can learn about here.

Compose

  hound:
    container_name: hound
    image: 'etsy/hound:latest'
    restart: always
    ports:
      - 580:6080
    volumes:
      - /srv/configs/hound/config.json:/data/config.json
      - /srv/hound/data:/data/data

Example Hound Configuration
Here is an example configuration for Hound; just put in your GitHub URLs and project names.

{
  "max-concurrent-indexers" : 5,
  "dbpath" : "data",
  "repos" : {
    "graphPlayground" : {
      "url" : "https://github.com/MikeDombo/graphPlayground.git",
      "enable-push-updates" : true
    }
  }
}
Visualization (Grafana)
If any of your projects generate statistics or write into a database, then maybe you’d like a simple dashboard to visualize them. Grafana is the best way to do this without writing it yourself. It allows you to hook up many different data sources, including MySQL, Graphite, Prometheus, and more, and then show the data in appealing graphs, tables, etc. Grafana ships with some dashboards, and there are more community-generated dashboards on their site. Most of the configuration you’ll do with Grafana is setting up a dashboard the way you like it.

Compose

##### Grafana Dashboard #####
  grafana:
    container_name: grafana
    image: 'grafana/grafana:latest'
    restart: always
    links:
      - smtp
    ports:
      - 680:3000
    environment:
      - GF_SERVER_ENABLE_GZIP=true
      - GF_SERVER_ROOT_URL=%(protocol)s://%(domain)s/
      - GF_SERVER_DOMAIN=# DOMAIN ex. graphs.example.com #
      - GF_SMTP_ENABLED=true
      - GF_SMTP_HOST=smtp
      - GF_AUTH_ORG_NAME=anon_org
      - GF_AUTH_ANONYMOUS_ENABLED=true
    volumes:
      - /srv/configs/grafana:/var/lib/grafana
##### End Grafana Dashboard #####
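Most of that setup happens in the web UI, but on Grafana 5 or newer you can also provision data sources from a file. A rough sketch (you would additionally mount a provisioning directory, e.g. /srv/configs/grafana-provisioning:/etc/grafana/provisioning, which my compose above doesn’t do):

# provisioning/datasources/prometheus.yml (sketch)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true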

User Error Monitoring (Sentry)
See my post from last week to configure a self-hosted Sentry to collect errors that users encounter while using your applications.

System Monitoring (Prometheus)
Prometheus is one of several popular time-series databases. I use it to collect load, network, and other statistics from servers and Docker containers. I have it configured with Google’s cAdvisor and Prometheus’s node-exporter to gather stats on containers and hosts respectively. I then use Grafana to visualize the data from Prometheus.

Compose

##### Prometheus Monitoring Stack #####
  prometheus:
    container_name: prometheus
    image: 'prom/prometheus:latest'
    restart: always
    links:
      - grafana
      - cadvisor
      - node-exporter
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    volumes:
      - /srv/configs/prometheus:/etc/prometheus
      - /srv/prometheus:/prometheus
# Monitoring for this host #
  node-exporter:
    image: prom/node-exporter
    container_name: prometheus_node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - --collector.filesystem.ignored-mount-points
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($|/)"
    restart: always
# Docker container monitoring #
  cadvisor:
    image: google/cadvisor
    restart: always
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
##### End Prometheus Monitoring Stack #####

Prometheus Configuration
Here is the prometheus.yml configuration that I use to get node-exporter and cAdvisor data into Prometheus. Put it at /srv/configs/prometheus/prometheus.yml if you’re using my docker-compose file from above.

# my global config
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, scrape targets every 15 seconds.
  # scrape_timeout is set to the global default (10s).

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label job=<job_name> to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node-exporter'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['node-exporter:9100']
  - job_name: 'cadvisor'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['cadvisor:8080']
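If you run node-exporter on other machines as well, you can scrape them by appending more jobs; for example (the hostname is just a placeholder):

  - job_name: 'remote-node'
    scrape_interval: 5s
    static_configs:
      - targets: ['other-host.example.com:9100']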
Log Monitoring (ELK (Elasticsearch, Logstash, and Kibana))
To store and search through logs, including Apache access and error logs and Linux system logs, I use the ELK stack from Elastic.co. For configuration, you’ll have to set up Filebeat or some other way to get logs from your servers into ELK (there’s a Filebeat sketch at the end of this section).

Compose

##### ELK Stack #####
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.0
    container_name: elasticsearch
    restart: always
    volumes:
      - /srv/configs/elk/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - /srv/elk/elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  logstash:
    image: docker.elastic.co/logstash/logstash-oss:6.3.0
    container_name: logstash
    restart: always
    volumes:
      - /srv/configs/elk/logstash/config:/usr/share/logstash/config:ro
      - /srv/configs/elk/logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.3.0
    container_name: kibana
    restart: always
    volumes:
      - /srv/configs/elk/kibana/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
##### End ELK Stack #####

networks:
##### ELK Stack Network #####
  elk:
    driver: bridge
##### End ELK Stack Network #####

Configuration
I use the default Elasticsearch and Logstash configuration, and the following for Kibana. I’m also including here a pipeline/logstash.conf, which has rules for Apache access and error logs, PHP errors, and syslog.

Kibana.yml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.url: http://elasticsearch:9200
pipeline/logstash.conf
input {
  beats {
    port => 5000
    ssl => false
  }
}

# PHP errors
filter {
  if "php_error" in [tags] {
    grok {
      match => { "message" => "^\[(?<logtime>%{MONTHDAY}-%{MONTH}-%{YEAR} %{TIME} (%{TZ}|(\w+/\w+)))\] ?%{GREEDYDATA:message}" }
      overwrite => [ "message" ]
    }
    date {
      match => [ "logtime", "d-MMM-yyyy HH:mm:ss ZZZ" ]
      remove_field => [ "logtime" ]
    }
  }
}

# Apache access and error
filter {
  if "apache_access" in [tags] {
    grok {
      match => [
        "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}",
        "message" , "%{COMMONAPACHELOG}+%{GREEDYDATA:extra_fields}"
      ]
      overwrite => [ "message" ]
    }
    mutate {
      convert => ["response", "integer"]
      convert => ["bytes", "integer"]
      convert => ["responsetime", "float"]
    }
    geoip {
      source => "clientip"
      target => "geoip"
      add_tag => [ "apache-geoip" ]
    }
    date {
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
      remove_field => [ "timestamp" ]
    }
    useragent {
      source => "agent"
    }
  }

  if "apache_error" in [tags] {
    grok {
      match => [ "message", "%{HTTPD_ERRORLOG}" ]
      overwrite => ["message"]
    }
    if !("_grokparsefailure" in [tags]) {
      geoip {
        source => "clientip"
      }
    }
  }
}

# Syslog
filter {
  if "syslog" in [tags] {
    grok {
      match => [ "message", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:message}" ]
      overwrite => ["message"]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
      timezone => "America/New_York"
    }
  }
}

# Removing annoying tag
filter {
  if "beats_input_codec_plain_applied" in [tags] {
    mutate {
      remove_tag => ["beats_input_codec_plain_applied"]
    }
  }
}

# Output
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    sniffing => true
    manage_template => false
    document_type => "%{[@metadata][type]}"
  }
}
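The tags these filters check (php_error, apache_access, apache_error, syslog) are applied on the shipping side. As a sketch, a matching filebeat.yml on a monitored host might look like this (Filebeat 6.x syntax; the paths and Logstash hostname are placeholders):

filebeat.inputs:
  - type: log
    paths:
      - /var/log/apache2/access.log
    tags: ["apache_access"]
  - type: log
    paths:
      - /var/log/apache2/error.log
    tags: ["apache_error"]
  - type: log
    paths:
      - /var/log/syslog
    tags: ["syslog"]

output.logstash:
  hosts: ["logs.example.com:5000"]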
Docker Web GUI (Portainer)
The simplest of all configurations, Portainer is a web app that enables you to manage your Docker containers.

Compose

##### Portainer Docker Web GUI #####
  portainer:
    container_name: portainer
    image: 'portainer/portainer:latest'
    restart: always
    ports:
      - '480:9000'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /srv/configs/portainer:/data
##### End Portainer Docker Web GUI #####

All Services Rolled Up
For your convenience, here’s the whole docker-compose file that I described in parts above. Many of the containers won’t run as-is; they’ll require a bit more configuration first.

Compose
version: '3'

services:
##### GitLab Stack #####
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    restart: always
    container_name: gitlab
    hostname: # YOUR HOSTNAME ex. git.example.com
    links:
      - smtp
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url '# YOUR URL ex. https://git.example.com #';
        gitlab_rails['gitlab_email_from'] = '# YOUR EMAIL ADDRESS #';
        gitlab_rails['gitlab_email_reply_to'] = '# YOUR EMAIL ADDRESS #';
        gitlab_rails['smtp_enable'] = 'true';
        gitlab_rails['smtp_address'] = 'smtp';
    ports:
      - '180:80'
    volumes:
      - '/srv/configs/gitlab/gitlab:/etc/gitlab'
      - '/srv/gitlab/logs:/var/log/gitlab'
      - '/srv/gitlab/data:/var/opt/gitlab'
##### End GitLab #####
##### GitLab CI/CD Runner #####
  gitlab-runner:
    image: 'gitlab/gitlab-runner:latest'
    restart: always
    container_name: gitlab-runner
    links:
      - gitlab
    environment:
      - CI_SERVER_URL=http://gitlab/
      - RUNNER_NAME=local-docker-runner
      - REGISTER_NON_INTERACTIVE=true
      - REGISTRATION_TOKEN=# YOUR REGISTRATION TOKEN FROM GITLAB #
      - RUNNER_EXECUTOR=docker
      - DOCKER_IMAGE=ubuntu:artful
      - REGISTER_LOCKED=false
    volumes:
      - /srv/configs/gitlab/gitlab-runner:/etc/gitlab-runner
      - /srv/gitlab-runner/home:/home/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
##### End GitLab CI/CD Runner #####
##### End GitLab Stack #####

##### Sonarqube Static Code Analysis #####
  sonarqube:
    container_name: sonarqube
    image: 'sonarqube:latest'
    restart: always
    links:
      - smtp
    ports:
      - 780:9000
    environment:
      - SONARQUBE_JDBC_URL=jdbc:mysql://# MYSQL HOST #:3306/# MYSQL DATABASE #?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
      - SONARQUBE_JDBC_USERNAME=# MYSQL USERNAME #
      - SONARQUBE_JDBC_PASSWORD=# MYSQL PASSWORD #
    volumes:
      - /srv/sonarqube/conf:/opt/sonarqube/conf
      - /srv/sonarqube/data:/opt/sonarqube/data
      - /srv/sonarqube/extensions:/opt/sonarqube/extensions
      - /srv/sonarqube/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
##### End Sonarqube Static Code Analysis #####

##### SMTP Email #####
  smtp:
    image: 'tianon/exim4:latest'
    restart: always
    environment:
      GMAIL_USER: # YOUR GMAIL USERNAME #
      GMAIL_PASSWORD: # YOUR GMAIL PASSWORD #
##### End SMTP Email #####

##### Hound Code Search #####
  hound:
    container_name: hound
    image: 'etsy/hound:latest'
    restart: always
    ports:
      - 580:6080
    volumes:
      - /srv/configs/hound/config.json:/data/config.json
      - /srv/hound/data:/data/data
##### End Hound Code Search #####

##### Grafana Dashboard #####
  grafana:
    container_name: grafana
    image: 'grafana/grafana:latest'
    restart: always
    links:
      - smtp
    ports:
      - 680:3000
    environment:
      - GF_SERVER_ENABLE_GZIP=true
      - GF_SERVER_ROOT_URL=%(protocol)s://%(domain)s/
      - GF_SERVER_DOMAIN=# DOMAIN ex. graphs.example.com #
      - GF_SMTP_ENABLED=true
      - GF_SMTP_HOST=smtp
      - GF_AUTH_ORG_NAME=anon_org
      - GF_AUTH_ANONYMOUS_ENABLED=true
    volumes:
      - /srv/configs/grafana:/var/lib/grafana
##### End Grafana Dashboard #####

##### Sentry Stack #####
  sentry-base:
    image: 'sentry:latest'
    container_name: sentry-base
    restart: always
    depends_on:
      - sentry-redis
      - sentry-postgres
    links:
      - sentry-redis
      - sentry-postgres
    ports:
      - 880:9000
    env_file:
      - sentry.env
    volumes:
      - /srv/configs/sentry/sentry:/var/lib/sentry/files
  sentry-cron:
    image: 'sentry:latest'
    container_name: sentry-cron
    restart: always
    depends_on:
      - sentry-redis
      - sentry-postgres
    links:
      - sentry-redis
      - sentry-postgres
    command: "sentry run cron"
    env_file:
      - sentry.env
    volumes:
      - /srv/configs/sentry/sentry:/var/lib/sentry/files
  sentry-worker:
    image: 'sentry:latest'
    container_name: sentry-worker
    restart: always
    depends_on:
      - sentry-redis
      - sentry-postgres
    links:
      - sentry-redis
      - sentry-postgres
    command: "sentry run worker"
    env_file:
      - sentry.env
    volumes:
      - /srv/configs/sentry/sentry:/var/lib/sentry/files
  sentry-redis:
    image: 'redis:alpine'
    container_name: sentry-redis
    restart: always
  sentry-postgres:
    image: 'postgres:latest'
    container_name: sentry-postgres
    restart: always
    environment:
      POSTGRES_USER: sentry
      POSTGRES_PASSWORD: sentry
      POSTGRES_DB: sentry 
    volumes:
      - /srv/configs/sentry/postgres:/var/lib/postgresql/data
##### End Sentry Stack #####

##### Prometheus Monitoring Stack #####
  prometheus:
    container_name: prometheus
    image: 'prom/prometheus:latest'
    restart: always
    links:
      - grafana
      - cadvisor
      - node-exporter
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    volumes:
      - /srv/configs/prometheus:/etc/prometheus
      - /srv/prometheus:/prometheus
# Monitoring for this host #
  node-exporter:
    image: prom/node-exporter
    container_name: prometheus_node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command: 
      - '--path.procfs=/host/proc' 
      - '--path.sysfs=/host/sys'
      - --collector.filesystem.ignored-mount-points
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($|/)"
    restart: always
# Docker container monitoring #
  cadvisor:
    image: google/cadvisor
    restart: always
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
##### End Prometheus Monitoring Stack #####

##### ELK Stack #####
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.0
    container_name: elasticsearch
    restart: always
    volumes:
      - /srv/configs/elk/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - /srv/elk/elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  logstash:
    image: docker.elastic.co/logstash/logstash-oss:6.3.0
    container_name: logstash
    restart: always
    volumes:
      - /srv/configs/elk/logstash/config:/usr/share/logstash/config:ro
      - /srv/configs/elk/logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.3.0
    container_name: kibana
    restart: always
    volumes:
      - /srv/configs/elk/kibana/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
##### End ELK Stack #####

##### Portainer Docker Web GUI #####
  portainer:
    container_name: portainer
    image: 'portainer/portainer:latest'
    restart: always
    ports:
      - '480:9000'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /srv/configs/portainer:/data
##### End Portainer Docker Web GUI #####

networks:
##### ELK Stack Network #####
  elk:
    driver: bridge
##### End ELK Stack Network #####

from https://mikedombrowski.com/2018/04/docker-compose-setup-for-self-hosting-development-tools/

Reposted from: https://www.cnblogs.com/joe-yang/p/9278549.html
