Installing InfluxDB on Windows (without Docker), with notes

1. Official InfluxDB download page:
https://portal.influxdata.com/downloads
--------------------------------------------------------------
Direct link to the Windows 1.8.4 build:

https://dl.influxdata.com/influxdb/releases/influxdb-1.8.4_windows_amd64.zip

2. Unzip the archive to your installation drive. The extracted directory looks like this:

(screenshot: extracted InfluxDB directory)
3. Edit influxdb.conf. The relevant settings for 1.8.4 are shown below; change every path that starts with E: to your own install location.

# Path settings:
dir = "E:/influxdb/influxdb-1.8.4-1/meta"      # under [meta]
dir = "E:/influxdb/influxdb-1.8.4-1/data"      # under [data]
wal-dir = "E:/influxdb/influxdb-1.8.4-1/wal"   # under [data]

[http]
  # Determines whether the HTTP endpoint is enabled.
  enabled = true

  # The bind address used by the HTTP service (port 8086 by default).
  bind-address = ":8086"

  # Determines whether user authentication is enabled over HTTP/HTTPS.
  auth-enabled = true

The complete configuration file:

### Welcome to the InfluxDB configuration file.

# The values in this file override the default values used by the system if
# a config option is not specified. The commented out lines are the configuration
# field and the default value used. Uncommenting a line and changing the value
# will change the value used at runtime when the process is restarted.

# Once every 24 hours InfluxDB will report usage data to usage.influxdata.com
# The data includes a random ID, os, arch, version, the number of series and other
# usage data. No data from user databases is ever transmitted.
# Change this option to true to disable reporting.
# reporting-disabled = false

# Bind address to use for the RPC service for backup and restore.
# bind-address = "127.0.0.1:8088"

###
### [meta]
###
### Controls the parameters for the Raft consensus group that stores metadata
### about the InfluxDB cluster.
###

[meta]
  # Where the metadata/raft database is stored
  dir = "E:/influxdb/influxdb-1.8.4-1/meta"

  # Automatically create a default retention policy when creating a database.
  # retention-autocreate = true

  # If log messages are printed for the meta service
  # logging-enabled = true

###
### [data]
###
### Controls where the actual shard data for InfluxDB lives and how it is
### flushed from the WAL. "dir" may need to be changed to a suitable place
### for your system, but the WAL settings are an advanced configuration. The
### defaults should work for most systems.
###

[data]
  # The directory where the TSM storage engine stores TSM files.
  dir = "E:/influxdb/influxdb-1.8.4-1/data"

  # The directory where the TSM storage engine stores WAL files.
  wal-dir = "E:/influxdb/influxdb-1.8.4-1/wal"

  # The amount of time that a write will wait before fsyncing.  A duration
  # greater than 0 can be used to batch up multiple fsync calls.  This is useful for slower
  # disks or when WAL write contention is seen.  A value of 0s fsyncs every write to the WAL.
  # Values in the range of 0-100ms are recommended for non-SSD disks.
  # wal-fsync-delay = "0s"


  # The type of shard index to use for new shards.  The default is an in-memory index that is
  # recreated at startup.  A value of "tsi1" will use a disk based index that supports higher
  # cardinality datasets.
  # index-version = "inmem"

  # Trace logging provides more verbose output around the tsm engine. Turning
  # this on can provide more useful output for debugging tsm engine issues.
  # trace-logging-enabled = false

  # Whether queries should be logged before execution. Very useful for troubleshooting, but will
  # log any sensitive data contained within a query.
  # query-log-enabled = true

  # Validates incoming writes to ensure keys only have valid unicode characters.
  # This setting will incur a small overhead because every key must be checked.
  # validate-keys = false

  # Settings for the TSM engine

  # CacheMaxMemorySize is the maximum size a shard's cache can
  # reach before it starts rejecting writes.
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # cache-max-memory-size = "1g"

  # CacheSnapshotMemorySize is the size at which the engine will
  # snapshot the cache and write it to a TSM file, freeing up memory
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # cache-snapshot-memory-size = "25m"

  # CacheSnapshotWriteColdDuration is the length of time at
  # which the engine will snapshot the cache and write it to
  # a new TSM file if the shard hasn't received writes or deletes
  # cache-snapshot-write-cold-duration = "10m"

  # CompactFullWriteColdDuration is the duration at which the engine
  # will compact all TSM files in a shard if it hasn't received a
  # write or delete
  # compact-full-write-cold-duration = "4h"

  # The maximum number of concurrent full and level compactions that can run at one time.  A
  # value of 0 results in 50% of runtime.GOMAXPROCS(0) used at runtime.  Any number greater
  # than 0 limits compactions to that value.  This setting does not apply
  # to cache snapshotting.
  # max-concurrent-compactions = 0

  # CompactThroughput is the rate limit in bytes per second that we
  # will allow TSM compactions to write to disk. Note that short bursts are allowed
  # to happen at a possibly larger value, set by CompactThroughputBurst
  # compact-throughput = "48m"

  # CompactThroughputBurst is the rate limit in bytes per second that we
  # will allow TSM compactions to write to disk.
  # compact-throughput-burst = "48m"

  # If true, then the mmap advise value MADV_WILLNEED will be provided to the kernel with respect to
  # TSM files. This setting has been found to be problematic on some kernels, and defaults to off.
  # It might help users who have slow disks in some cases.
  # tsm-use-madv-willneed = false

  # Settings for the inmem index

  # The maximum series allowed per database before writes are dropped.  This limit can prevent
  # high cardinality issues at the database level.  This limit can be disabled by setting it to
  # 0.
  # max-series-per-database = 1000000

  # The maximum number of tag values per tag that are allowed before writes are dropped.  This limit
  # can prevent high cardinality tag values from being written to a measurement.  This limit can be
  # disabled by setting it to 0.
  # max-values-per-tag = 100000

  # Settings for the tsi1 index

  # The threshold, in bytes, when an index write-ahead log file will compact
  # into an index file. Lower sizes will cause log files to be compacted more
  # quickly and result in lower heap usage at the expense of write throughput.
  # Higher sizes will be compacted less frequently, store more series in-memory,
  # and provide higher write throughput.
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # max-index-log-file-size = "1m"

  # The size of the internal cache used in the TSI index to store previously 
  # calculated series results. Cached results will be returned quickly from the cache rather
  # than needing to be recalculated when a subsequent query with a matching tag key/value 
  # predicate is executed. Setting this value to 0 will disable the cache, which may
  # lead to query performance issues.
  # This value should only be increased if it is known that the set of regularly used 
  # tag key/value predicates across all measurements for a database is larger than 100. An
  # increase in cache size may lead to an increase in heap usage.
  series-id-set-cache-size = 100

###
### [coordinator]
###
### Controls the clustering service configuration.
###

[coordinator]
  # The default time a write request will wait until a "timeout" error is returned to the caller.
  # write-timeout = "10s"

  # The maximum number of concurrent queries allowed to be executing at one time.  If a query is
  # executed and exceeds this limit, an error is returned to the caller.  This limit can be disabled
  # by setting it to 0.
  # max-concurrent-queries = 0

  # The maximum time a query is allowed to execute before being killed by the system.  This limit
  # can help prevent runaway queries.  Setting the value to 0 disables the limit.
  # query-timeout = "0s"

  # The time threshold when a query will be logged as a slow query.  This limit can be set to help
  # discover slow or resource intensive queries.  Setting the value to 0 disables the slow query logging.
  # log-queries-after = "0s"

  # The maximum number of points a SELECT can process.  A value of 0 will make
  # the maximum point count unlimited.  This will only be checked every second so queries will not
  # be aborted immediately when hitting the limit.
  # max-select-point = 0

  # The maximum number of series a SELECT can run.  A value of 0 will make the maximum series
  # count unlimited.
  # max-select-series = 0

  # The maximum number of GROUP BY time buckets a SELECT can create.  A value of zero will make the maximum
  # number of buckets unlimited.
  # max-select-buckets = 0

###
### [retention]
###
### Controls the enforcement of retention policies for evicting old data.
###

[retention]
  # Determines whether retention policy enforcement is enabled.
  # enabled = true

  # The interval of time when retention policy enforcement checks run.
  # check-interval = "30m"

###
### [shard-precreation]
###
### Controls the precreation of shards, so they are available before data arrives.
### Only shards that, after creation, will have both a start- and end-time in the
### future, will ever be created. Shards are never precreated that would be wholly
### or partially in the past.

[shard-precreation]
  # Determines whether shard pre-creation service is enabled.
  # enabled = true

  # The interval of time when the check to pre-create new shards runs.
  # check-interval = "10m"

  # The default period ahead of the endtime of a shard group that its successor
  # group is created.
  # advance-period = "30m"

###
### [monitor]
###
### Controls the system self-monitoring, statistics and diagnostics.
###
### The internal database for monitoring data is created automatically
### if it does not already exist. The target retention within this database
### is called 'monitor' and is also created with a retention period of 7 days
### and a replication factor of 1, if it does not exist. In all cases
### this retention policy is configured as the default for the database.

[monitor]
  # Whether to record statistics internally.
  # store-enabled = true

  # The destination database for recorded statistics
  # store-database = "_internal"

  # The interval at which to record statistics
  # store-interval = "10s"

###
### [http]
###
### Controls how the HTTP endpoints are configured. These are the primary
### mechanism for getting data into and out of InfluxDB.
###

[http]
  # Determines whether HTTP endpoint is enabled.
    enabled = true

  # Determines whether the Flux query endpoint is enabled.
  # flux-enabled = false

  # Determines whether the Flux query logging is enabled.
  # flux-log-enabled = false

  # The bind address used by the HTTP service.
    bind-address = ":8086"

  # Determines whether user authentication is enabled over HTTP/HTTPS.
    auth-enabled = true

  # The default realm sent back when issuing a basic auth challenge.
  # realm = "InfluxDB"

  # Determines whether HTTP request logging is enabled.
  # log-enabled = true

  # Determines whether the HTTP write request logs should be suppressed when the log is enabled.
  # suppress-write-log = false

  # When HTTP request logging is enabled, this option specifies the path where
  # log entries should be written. If unspecified, the default is to write to stderr, which
  # intermingles HTTP logs with internal InfluxDB logging.
  #
  # If influxd is unable to access the specified path, it will log an error and fall back to writing
  # the request log to stderr.
  # access-log-path = ""

  # Filters which requests should be logged. Each filter is of the pattern NNN, NNX, or NXX where N is
  # a number and X is a wildcard for any number. To filter all 5xx responses, use the string 5xx.
  # If multiple filters are used, then only one has to match. The default is to have no filters which
  # will cause every request to be printed.
  # access-log-status-filters = []

  # Determines whether detailed write logging is enabled.
  # write-tracing = false

  # Determines whether the pprof endpoint is enabled.  This endpoint is used for
  # troubleshooting and monitoring.
  # pprof-enabled = true

  # Enables authentication on pprof endpoints. Users will need admin permissions
  # to access the pprof endpoints when this setting is enabled. This setting has
  # no effect if either auth-enabled or pprof-enabled are set to false.
  # pprof-auth-enabled = false

  # Enables a pprof endpoint that binds to localhost:6060 immediately on startup.
  # This is only needed to debug startup issues.
  # debug-pprof-enabled = false

  # Enables authentication on the /ping, /metrics, and deprecated /status
  # endpoints. This setting has no effect if auth-enabled is set to false.
  # ping-auth-enabled = false

  # Determines whether HTTPS is enabled.
  # https-enabled = false

  # The SSL certificate to use when HTTPS is enabled.
  # https-certificate = "/etc/ssl/influxdb.pem"

  # Use a separate private key location.
  # https-private-key = ""

  # The JWT auth shared secret to validate requests using JSON web tokens.
  # shared-secret = ""

  # The default chunk size for result sets that should be chunked.
  # max-row-limit = 0

  # The maximum number of HTTP connections that may be open at once.  New connections that
  # would exceed this limit are dropped.  Setting this value to 0 disables the limit.
  # max-connection-limit = 0

  # Enable http service over unix domain socket
  # unix-socket-enabled = false

  # The path of the unix domain socket.
  # bind-socket = "/var/run/influxdb.sock"

  # The maximum size of a client request body, in bytes. Setting this value to 0 disables the limit.
  # max-body-size = 25000000

  # The maximum number of writes processed concurrently.
  # Setting this to 0 disables the limit.
  # max-concurrent-write-limit = 0

  # The maximum number of writes queued for processing.
  # Setting this to 0 disables the limit.
  # max-enqueued-write-limit = 0

  # The maximum duration for a write to wait in the queue to be processed.
  # Setting this to 0 or setting max-concurrent-write-limit to 0 disables the limit.
  # enqueued-write-timeout = 0

	# User supplied HTTP response headers
	#
	# [http.headers]
	#   X-Header-1 = "Header Value 1"
	#   X-Header-2 = "Header Value 2"

###
### [logging]
###
### Controls how the logger emits logs to the output.
###

[logging]
  # Determines which log encoder to use for logs. Available options
  # are auto, logfmt, and json. auto will use a more user-friendly
  # output format if the output terminal is a TTY, but the format is not as
  # easily machine-readable. When the output is a non-TTY, auto will use
  # logfmt.
  # format = "auto"

  # Determines which level of logs will be emitted. The available levels
  # are error, warn, info, and debug. Logs that are equal to or above the
  # specified level will be emitted.
  # level = "info"

  # Suppresses the logo output that is printed when the program is started.
  # The logo is always suppressed if STDOUT is not a TTY.
  # suppress-logo = false

###
### [subscriber]
###
### Controls the subscriptions, which can be used to fork a copy of all data
### received by the InfluxDB host.
###

[subscriber]
  # Determines whether the subscriber service is enabled.
  # enabled = true

  # The default timeout for HTTP writes to subscribers.
  # http-timeout = "30s"

  # Allows insecure HTTPS connections to subscribers.  This is useful when testing with self-
  # signed certificates.
  # insecure-skip-verify = false

  # The path to the PEM encoded CA certs file. If the empty string, the default system certs will be used
  # ca-certs = ""

  # The number of writer goroutines processing the write channel.
  # write-concurrency = 40

  # The number of in-flight writes buffered in the write channel.
  # write-buffer-size = 1000


###
### [[graphite]]
###
### Controls one or many listeners for Graphite data.
###

[[graphite]]
  # Determines whether the graphite endpoint is enabled.
  # enabled = false
  # database = "graphite"
  # retention-policy = ""
  # bind-address = ":2003"
  # protocol = "tcp"
  # consistency-level = "one"

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # number of batches that may be pending in memory
  # batch-pending = 10

  # Flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "1s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # udp-read-buffer = 0

  ### This string joins multiple matching 'measurement' values providing more control over the final measurement name.
  # separator = "."

  ### Default tags that will be added to all metrics.  These can be overridden at the template level
  ### or by tags extracted from metric
  # tags = ["region=us-east", "zone=1c"]

  ### Each template line requires a template pattern.  It can have an optional
  ### filter before the template and separated by spaces.  It can also have optional extra
  ### tags following the template.  Multiple tags should be separated by commas and no spaces
  ### similar to the line protocol format.  There can be only one default template.
  # templates = [
  #   "*.app env.service.resource.measurement",
  #   # Default template
  #   "server.*",
  # ]

###
### [collectd]
###
### Controls one or many listeners for collectd data.
###

[[collectd]]
  # enabled = false
  # bind-address = ":25826"
  # database = "collectd"
  # retention-policy = ""
  #
  # The collectd service supports either scanning a directory for multiple types
  # db files, or specifying a single db file.
  # typesdb = "/usr/local/share/collectd"
  #
  # security-level = "none"
  # auth-file = "/etc/collectd/auth_file"

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # Number of batches that may be pending in memory
  # batch-pending = 10

  # Flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "10s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # read-buffer = 0

  # Multi-value plugins can be handled two ways.
  # "split" will parse and store the multi-value plugin data into separate measurements
  # "join" will parse and store the multi-value plugin as a single multi-value measurement.
  # "split" is the default behavior for backward compatibility with previous versions of influxdb.
  # parse-multivalue-plugin = "split"
###
### [opentsdb]
###
### Controls one or many listeners for OpenTSDB data.
###

[[opentsdb]]
  # enabled = false
  # bind-address = ":4242"
  # database = "opentsdb"
  # retention-policy = ""
  # consistency-level = "one"
  # tls-enabled = false
  # certificate= "/etc/ssl/influxdb.pem"

  # Log an error for every malformed point.
  # log-point-errors = true

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Only point
  # metrics received over the telnet protocol undergo batching.

  # Flush if this many points get buffered
  # batch-size = 1000

  # Number of batches that may be pending in memory
  # batch-pending = 5

  # Flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "1s"

###
### [[udp]]
###
### Controls the listeners for InfluxDB line protocol data via UDP.
###

[[udp]]
  # enabled = false
  # bind-address = ":8089"
  # database = "udp"
  # retention-policy = ""

  # InfluxDB precision for timestamps on received points ("" or "n", "u", "ms", "s", "m", "h")
  # precision = ""

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # Number of batches that may be pending in memory
  # batch-pending = 10

  # Will flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "1s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # read-buffer = 0

###
### [continuous_queries]
###
### Controls how continuous queries are run within InfluxDB.
###

[continuous_queries]
  # Determines whether the continuous query service is enabled.
  # enabled = true

  # Controls whether queries are logged when executed by the CQ service.
  # log-enabled = true

  # Controls whether queries are logged to the self-monitoring data store.
  # query-stats-enabled = false

  # interval for how often continuous queries will be checked if they need to run
  # run-interval = "1s"

###
### [tls]
###
### Global configuration settings for TLS in InfluxDB.
###

[tls]
  # Determines the available set of cipher suites. See https://golang.org/pkg/crypto/tls/#pkg-constants
  # for a list of available ciphers, which depends on the version of Go (use the query
  # SHOW DIAGNOSTICS to see the version of Go used to build InfluxDB). If not specified, uses
  # the default settings from Go's crypto/tls package.
  # ciphers = [
  #   "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
  #   "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
  # ]

  # Minimum version of the tls protocol that will be negotiated. If not specified, uses the
  # default settings from Go's crypto/tls package.
  # min-version = "tls1.2"

  # Maximum version of the tls protocol that will be negotiated. If not specified, uses the
  # default settings from Go's crypto/tls package.
  # max-version = "tls1.3"

4. Apply the configuration and open a database connection (two scenarios). Note that influxd.exe must be left running.

  1. Without authentication
    Set auth-enabled = false in the configuration above.
    Double-click influxd.exe to start the server (the default configuration needs no authentication), then double-click influx.exe to work with the database. You can also do everything from the command line, as in the authenticated case below.
  2. With authentication
    Set auth-enabled = true in the configuration above.
    Then, from the command line:
## Start the server with the specified config file (this command can also be saved as a .bat file; see the sketch below)
influxd.exe -config influxdb.conf
## Open the database CLI
influx
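As mentioned above, the start command can be wrapped in a batch file so you don't have to retype it; a minimal sketch follows (the file name start_influxdb.bat and the install path are assumptions based on the paths used in this post). Once influxd.exe is running, you can confirm the HTTP service is up by requesting the /ping endpoint, which should return 204 No Content.

:: start_influxdb.bat -- hypothetical helper script; adjust the path to your own install
cd /d E:\influxdb\influxdb-1.8.4-1
influxd.exe -config influxdb.conf

## Quick health check (curl ships with recent Windows 10/11; a browser works too)
curl -i http://localhost:8086/ping
## Expected response: HTTP/1.1 204 No Content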

5. Creating a database

1) InfluxDB concepts versus a traditional relational database:

database      -> a database
measurement   -> a table
point         -> a row in a table

How InfluxDB data is structured: a point consists of a timestamp (time), fields, and tags.

time    the timestamp of each record; the primary index, generated automatically
fields  the recorded values (attributes that are not indexed)
tags    attributes that are indexed

series: the set of points that share the same measurement, tag set, and retention policy.
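Using the write command that appears later in this post, the parts of a point map out as follows:

cpu,host=serverA,region=us_west value=0.64
measurement: cpu
tags:        host=serverA, region=us_west
fields:      value=0.64
time:        omitted here, so the server assigns the current timestamp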

2) If you do not need authentication, skip this step.
User management commands (once a user exists, every login requires authentication):

# List users

SHOW USERS

# Create a regular user

CREATE USER "username" WITH PASSWORD 'password'

# Create a user with admin privileges (recommended for the first user)

CREATE USER "username" WITH PASSWORD 'password' WITH ALL PRIVILEGES

# Delete a user

DROP USER "username"
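When auth-enabled = true and no users exist yet, InfluxDB 1.x still accepts the creation of the first admin user without credentials, so one way to bootstrap the admin account is over the HTTP API (admin/admin here is only a placeholder):

curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE USER admin WITH PASSWORD 'admin' WITH ALL PRIVILEGES"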

3) List the existing databases:

show databases   // list all databases

If the command is rejected with an authentication error, that is because auth-enabled = true; authenticate first with the auth command, which prompts for a username and password:
auth
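Alternatively, the credentials can be supplied when launching the CLI via the 1.x client's -username and -password flags (admin/admin is again just the example account):

influx -username admin -password admin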
4) Create a database:

CREATE DATABASE "carNerMonitor"


5) Before querying a measurement or inserting data, select the database to work in:

use testDB   // switch to a database

InfluxDB has no explicit "create table" step: a measurement is created automatically the first time data is inserted into it. SHOW MEASUREMENTS lists all measurements, much like show tables; in MySQL.

INSERT cpu,host=serverA,region=us_west value=0.64   // insert a point into the cpu measurement

SELECT * FROM cpu ORDER BY time DESC LIMIT 3   // query the three most recent points

SELECT * FROM /.*/ LIMIT 1   // query using a regular expression

delete from cpu where time=1480235366557373922   // delete a specific point

DROP MEASUREMENT "measurementName"   // drop a measurement
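The same operations are also available from application code over the HTTP API. Below is a minimal sketch using the Python client for InfluxDB 1.x (pip install influxdb); the admin/admin credentials, the testDB database, and the cpu measurement are just the examples used above, so adjust them to your setup.

# Minimal sketch, assuming InfluxDB 1.8.4 with auth-enabled = true
# and the admin user created earlier.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086,
                        username="admin", password="admin",
                        database="testDB")
client.create_database("testDB")  # harmless if the database already exists

# Write one point into the "cpu" measurement (same data as the INSERT above).
client.write_points([{
    "measurement": "cpu",
    "tags": {"host": "serverA", "region": "us_west"},
    "fields": {"value": 0.64},
}])

# Read back the three most recent points.
result = client.query("SELECT * FROM cpu ORDER BY time DESC LIMIT 3")
for point in result.get_points():
    print(point)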


6. Chronograf
For a web-based management UI, download Chronograf from:
https://portal.influxdata.com/downloads
Unzip it, double-click the exe, and open http://localhost:8888/ in a browser. When connecting it to InfluxDB you will be asked for a username and password; use the InfluxDB account created above (admin/admin in this example).

Notes on queries

1. Time zones in query results
InfluxDB stores timestamps in UTC. If you insert data at 18:00 in UTC+8, the query result will show the time as 10:00.
Appending tz('Asia/Shanghai') to the query converts the displayed time, for example:
select * from test where time > now() - 5m tz('Asia/Shanghai')

Note: on some Windows machines the tz() clause does not work, which is one more reason to prefer installing InfluxDB on CentOS.
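If you have to stay on Windows and tz() is not available, a workaround is to leave the stored timestamps in UTC and convert them on the client side. A minimal sketch in Python, reusing the hypothetical client connection from the example above:

# Minimal sketch: ask for epoch-second timestamps and convert them to UTC+8
# on the client, instead of relying on the server-side tz() clause.
from datetime import datetime, timezone, timedelta
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086,
                        username="admin", password="admin", database="testDB")
cst = timezone(timedelta(hours=8))  # UTC+8, China Standard Time

result = client.query("SELECT * FROM cpu ORDER BY time DESC LIMIT 3", epoch="s")
for point in result.get_points():
    local_time = datetime.fromtimestamp(point["time"], tz=cst)
    print(local_time, point)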