Creating Monitoring Dashboards

EXPEDIA GROUP TECHNOLOGY — SOFTWARE

Recently our teams at Hotels.com™, part of Expedia Group™, started moving from Graphite to an internal metrics platform that is based on Prometheus. We saw this as an opportunity to improve our observability and, among others, we provided a set of simple guidelines to help with the migration.

We believe these guidelines would be useful to the community and hence we share them in this blog post. Some of the examples apply to our tech stack (i.e. Spring Boot, Micrometer, Kubernetes) but the idea is the same for other technologies and libraries.

Purpose of this Guide

Having meaningful and carefully crafted monitoring dashboards for your services is of utmost importance. The purpose of this guide is to:

  • Provide you with a handful of useful resources around monitoring
  • Promote best practices on monitoring metrics and dashboards
  • Help you create Grafana dashboards based on Prometheus metrics

If you want to learn more about monitoring and best practices, we suggest you read the following resources by Google:

Site Reliability Engineering, How Google runs production systems (Chapter 6 — Monitoring Distributed Systems)

The Site Reliability Workbook, Practical ways to implement SRE (Chapter 4 — Monitoring)

Principles

Below is a non-exhaustive list of principles to keep in mind in the context of observability, which also apply to dashboards:

  • Keep it simple, avoid creating complex dashboards that you will never use or alerts that can trigger false-positive notifications.
  • Keep it consistent, use consistent and meaningful names in your dashboards and alerts.
  • Use logs, metrics, and traces wisely and in conjunction with each other.
  • Avoid high-cardinality metrics.
  • Avoid complex and slow queries in your dashboards.

What to Monitor

Core Metrics

As a first set of metrics, you should look into monitoring the 4 golden signals as defined by Google, or follow the RED method, which is more relevant to micro-services.

Latency (Duration)

This could take the form of percentiles (e.g. p90, p99). Be aware of failed requests, which can result in misleading calculations.

Traffic (Rate)

An example of this would be the number of requests per second (RPS).

Errors

This will depend on what you consider an error for your service or system. A typical metric could be the rate of non-2XX status code responses.

Saturation

Saturation shows how overloaded your service or system is. This could be monitoring the number of elements in a queue. You may also want to look into utilisation which reflects how busy the service is. An example of that is monitoring the busy threads.
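
As a sketch, assuming your application reports Micrometer's Tomcat metrics (tomcat_threads_busy_threads and tomcat_threads_config_max_threads), thread utilisation could be plotted as:

tomcat_threads_busy_threads{app="test-app"} / tomcat_threads_config_max_threads{app="test-app"}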

Business Metrics

Ideally, you need to discuss and decide on this set of metrics with your product owner as they are based on business needs. Business metrics could be custom metrics reported by one or more services.

Indicative examples are listed below:

  • A team responsible for sign-ins would need to report metrics for sign-in attempts, failed attempts due to invalid passwords, or even sign-ins coming from different channels but still hitting the same endpoint (see the sketch after this list).
  • A team owning the autocomplete functionality across multiple brands would need to monitor the number of requests and error rates per brand.
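
For the sign-in example in the first bullet, a hedged sketch over a hypothetical custom counter (the metric name signin_attempts_total and its channel and result labels are illustrative, not metrics our services actually expose) could plot failed attempts per channel:

sum by (channel) (rate(signin_attempts_total{app="test-app", result="failed"}[5m]))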

Dependencies Metrics

In a micro-services architecture, there could be many external calls from your service to other services. These calls are usually wrapped with Hystrix or other Circuit Breaker libraries. Monitoring core metrics (traffic, latencies, errors) for these calls is very important.

Connection Pools & Thread Pools Metrics

Having a dashboard that displays metrics for Tomcat threads, Circuit Breaker thread pools and HTTP client connection pools for 3rd party calls is useful.
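
As an example, assuming Micrometer's HikariCP binder is in use (it exposes hikaricp_connections_active and hikaricp_connections_max with a pool tag), connection pool utilisation could be sketched as:

hikaricp_connections_active{app="test-app"} / hikaricp_connections_max{app="test-app"}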

JVM Metrics

Useful metrics for JVM applications include memory and CPU, GC, or even memory pools. We suggest re-using the JVM (Micrometer) Grafana dashboard.
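
For instance, assuming Micrometer's JVM binder is on the classpath, heap and non-heap usage per memory area can be plotted with:

sum by (area) (jvm_memory_used_bytes{app="test-app"})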

Infrastructure Metrics

Many services rely on infrastructures such as a cache, a database, or a queue. Even if your team does not own these components, monitoring them can help you identify the root cause of an issue. Although the 4 golden signals apply to most infrastructure systems, these systems can also have extra characteristics you need to monitor (e.g. the size of the queue or cache hits/misses).

Platform Metrics

In addition to infrastructure metrics you may need to monitor Platform metrics, such as ones provided by Kubernetes or by the Service Mesh (e.g. Istio). Usually incident response and SRE teams look into such dashboards to have the big picture and to achieve faster Mean Time to Detect (MTTD) and Mean Time To Recover (MTTR).
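
As a hedged sketch, assuming Istio's standard istio_requests_total metric with its destination_app and response_code labels, server errors observed by the mesh for a service could be plotted as:

sum(rate(istio_requests_total{destination_app="test-app", response_code=~"5.."}[1m]))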

Prometheus Best Practices

The open-source community has come up with a set of best practices on metric names and labels which we encourage you to follow.

Be super careful with high-cardinality metrics. As stated in the docs:

Every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values.

Popular metrics libraries may have mechanisms in place to prevent this issue. For example, Micrometer provides the maximumAllowableTags method through its Meter Filters. Recent versions of Spring Boot Actuator use this by default for URI tags; they expose the management.metrics.web.client.max-uri-tags property with a default value of 100 (you may need to decrease that value though). If your library doesn't provide this out-of-the-box you will need to implement this logic.
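
To spot which of your metrics have the most time series, an ad-hoc sketch (better run in the Prometheus UI than in a dashboard, as it scans every series for the app) could be:

topk(10, count by (__name__) ({app="test-app"}))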

Let’s now look at practical examples you can re-use. Before we dive deep into queries, understanding the Prometheus format is crucial.

Understanding the Prometheus Format

If you hit the /prometheus endpoint under which your application exposes Prometheus metrics, you will see a set of metrics:

# HELP resilience4j_circuitbreaker_calls_seconds Total number of successful calls
# TYPE resilience4j_circuitbreaker_calls_seconds summary
resilience4j_circuitbreaker_calls_seconds{app="test-app",kind="successful",name="greetings",quantile="0.95",} 0.738197504
resilience4j_circuitbreaker_calls_seconds{app="test-app",kind="successful",name="greetings",quantile="0.99",} 0.738197504
[...]
# HELP http_server_requests_seconds  
# TYPE http_server_requests_seconds summary
http_server_requests_seconds{app="test-app",client="client1",exception="None",method="GET",status="200",uri="/api/v1/hello-world",quantile="0.95",} 0.771751936
http_server_requests_seconds_count{app="test-app",client="client1",exception="None",method="GET",status="200",uri="/api/v1/hello-world",} 1.0
http_server_requests_seconds_count{app="test-app",client="client2",exception="None",method="GET",status="200",uri="/api/v1/hello-world",} 1.0
[...]

Taking the last two lines as an example, the name of the metric is http_server_requests_seconds_count and they both contain a set of labels such as the application name app, the endpoint uri, etc. In this case, the only difference is the client.

This is a representation of a single metric across multiple dimensions, by using labels. Having these multiple dimensions allows us to run powerful queries that could span across multiple URLs, AWS regions, and even across different applications.
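
For instance, reusing the labels from the sample above, a single query can compare the same endpoint across applications:

sum by (app) (rate(http_server_requests_seconds_count{uri="/api/v1/hello-world"}[1m]))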

Queries

Now that we have a basic understanding of the metrics format we can look into useful queries. This section includes very basic examples but you can use them as a starting point.

Rate

RPS — Overall

The following query shows the Requests Per Second (RPS) across all endpoints:

sum(rate(http_server_requests_seconds_count{app="test-app"}[1m]))
  • http_server_requests_seconds_count stores the count of HTTP requests.
  • app is a label that reflects the name of the application. You can use a regex and the '=~' operator for a set of applications (see the example after this list).
  • We append the time selector [1m] which translates the instant vector into a range vector (over the last minute).
  • Up to this point, we have a range vector which we need to transform into an instant vector in order for it to be displayed. We do this by applying the rate function, which shows the per-second increase.
  • Finally, we aggregate the results using the sum aggregation operator.
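
As mentioned in the list above, the '=~' operator lets a single query cover a set of applications; a minimal sketch (the second application name is hypothetical) would be:

sum(rate(http_server_requests_seconds_count{app=~"test-app|other-app"}[1m]))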

Figure 4: Visualising the overall RPS.

If you want to display a single number, you can use the Singlestat visualisation (or the Stat panel in recent versions of Grafana).

Figure 5: Singlestat visualisation of the overall RPS.

RPS — Aggregations

Often you need to aggregate results per label. For example, plot the RPS per Kubernetes pod, per endpoint, or even per client.

To show the RPS per pod:

sum by (pod_name) (rate (http_server_requests_seconds_count{app="test-app"}[1m]))
Figure 7: Visualising the RPS by Kubernetes pod.

For the RPS per endpoint and client you can use the uri and client labels respectively. In these cases, as mentioned earlier in this guide, you need to be mindful of high-cardinality issues.
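
For example, the per-endpoint variant only changes the grouping label:

sum by (uri) (rate(http_server_requests_seconds_count{app="test-app"}[1m]))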

Duration

To show the latency (e.g. p99) per endpoint you can use the following query:

max by (uri)(http_server_requests_seconds{app="test-app", quantile="0.99"})
  • http_server_requests_seconds stores the latency of HTTP requests.
  • quantile=0.99 gives the p99. You can read more about quantiles.
  • Finally, we aggregate the results per endpoint using the max aggregator.

Figure 9: Visualising the duration/latency per uri.

Note that you can calculate quantiles from both histograms and summaries.
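
For histograms, assuming your application publishes buckets (e.g. http_server_requests_seconds_bucket), a p99 sketch using histogram_quantile would be:

histogram_quantile(0.99, sum by (le, uri) (rate(http_server_requests_seconds_bucket{app="test-app"}[5m])))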

Quantiles may not be reported by default by your application. For example, for Spring Boot applications you need to set the management.metrics.distribution.percentiles property for this. We recommend reporting the p50, p75, p90, p95, p99, p999 percentiles and defining a variable for these in your dashboards.

If you want to include or exclude particular endpoints you can do this with the uri label. For instance uri=~"/api/v1/.*" will only plot endpoints under the /api/v1/ path, while uri!~"/swagger.*" will exclude the Swagger endpoints.
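
Since PromQL accepts multiple matchers on the same label, both filters can be combined in a single query, for example:

max by (uri)(http_server_requests_seconds{app="test-app", quantile="0.99", uri=~"/api/v1/.*", uri!~"/swagger.*"})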

Failed requests are not representative examples of latency as they could fail fast (e.g. 500) or take a lot of time to complete if a timeout is not in place or is mis-configured. We recommend visualising latencies for successful requests and, if needed, having another panel for tracking latencies for failed requests.
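
A sketch for such a panel, filtering on the status label shown in the Prometheus format above, could be:

max by (uri)(http_server_requests_seconds{app="test-app", quantile="0.99", status=~"2.."})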

Errors

The simplest way to visualise your errors is by using a Stat panel, similar to Figure 5.

sum (rate(http_server_requests_seconds_count{app="test-app", status=~"5.."}[1m]))

Success Rates

A more descriptive way would be to visualise success rates per endpoint:

100 * sum by (uri) (rate(http_server_requests_seconds_count{app="test-app",status="200"}[1h])) / sum by (uri) (rate (http_server_requests_seconds_count{app="test-app",status=~".+"}[1h]))
Figure 12: Visualising success rates (200s) per uri.

Dependencies

It is important to be able to identify issues with your dependencies. The same signals can be used to monitor such calls. You can get these metrics from Circuit Breaker libraries such as Hystrix or Resilience4J.

To check which metrics are exposed by your Circuit Breaker you can either go through the documentation or hit your /prometheus endpoint.

Hystrix uses keys (in particular command keys and command group keys) to identify and group commands. These are available as key and group labels when using the Hystrix metrics publisher. The result of the call is stored in the event label.

Resilience4J exposes the name, state, and kind labels as documented. The name is used to identify the call while the kind is the result.

RPS

The following queries return the RPS per Kubernetes pod for a selected key/name:

Hystrix

sum by (pod_name,event) (rate(hystrix_execution_total{app="test-app",key="$key"}[2m]))

Resilience4J

sum by (pod_name, kind) (rate(resilience4j_circuitbreaker_calls_seconds_count{app="test-app",name="$name"}[2m]))

Latency

To plot the latency for a selected quantile (e.g. 0.99):

Hystrix

max by(pod_name)(hystrix_latency_execution_seconds{app="test-app",key="$key",quantile="$quantile"})

Resilience4J

max by(pod_name,kind)(resilience4j_circuitbreaker_calls_seconds{app="test-app",name="$name",quantile="$quantile"})

Errors

Finally, for errors:

Hystrix

sum by (pod_name) (rate (hystrix_execution_total{app="test-app",key="$key",event!="success"}[2m])) / sum by (pod_name) (rate (hystrix_execution_total{app="test-app",key="$key"}[2m]))

Resilience4J

sum by (pod_name) (resilience4j_circuitbreaker_failure_rate{app="test-app",name="$name"})

Building panels for dependencies manually is time-consuming. We recommend using Grafana’s Repeat panel feature.

For this you first need to define a variable for your keys/names:

Figure 19: Defining a variable for your Hystrix keys.
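
With a Prometheus datasource, such a variable can be populated with a label_values query; a sketch for the Hystrix key label used earlier would be:

label_values(hystrix_execution_total{app="test-app"}, key)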

You can then select a single value for the key/name from the dropdown list, create one panel, and use the Repeating option under the General settings of your panel.

Once you click on the All option, Grafana will render multiple panels, one for each dependency.

Dashboard Guidelines

We encourage our teams to create their dashboards inside a folder and to use the same folder at least for dashboards related to the same service. The name of the folder could match that of the project, or reflect the pillar, family name, etc.

We also strongly recommend the use of tags. Tags are helpful when searching for dashboards and allow you to add links to other dashboards or URLs.

The taxonomy depends on many factors, including the structure of a company but the following categories are usually company-agnostic:

  • Business area (for us that would be “search”, “lodging”, etc.)
  • Family or tech pillar name
  • Technology name (e.g. micrometer, dropwizard, elasticache)
  • Service/Infrastructure name

Recent Grafana versions support Dashboard Links, Panel Links, and Data Links. These could be either links to other dashboards or links to useful URLs. They rely on tags and once links have been created they will be available on your dashboard’s page.

Figure 20: Dashboard Links to other dashboards and external monitoring systems.

On top of Dashboard Links we suggest using Panel Links. These could be links to monitoring systems used for logging (e.g. Splunk) or distributed tracing (e.g. Haystack) that redirect to a particular search associated with the service and the panel.

Figure 21: Panel Links to other dashboards and external monitoring systems.

Templating is another key feature of Grafana, which allows you to avoid duplication by using variables instead of hard-coded values. We have seen that feature in the queries we used earlier for Hystrix and Resilience4J metrics. You can define variables for the datasource, the application name, the Kubernetes pods, or even the percentiles you want to plot metrics for. The values of these variables show up as dropdown lists, and you can use the selected values in your queries.

Last but not least, annotations enable you to mark points with events. This is handy for correlating metrics with events such as deployments or A/B tests and we highly recommend using it.

Conclusion

In this article we went through best practices on monitoring metrics and dashboards and showed you how to create Grafana dashboards based on Prometheus metrics. These examples can be used as a starting point to craft more complex queries and visualisations. However, always keep in mind that less is more, and simple is better than complex!

Note: Thanks to Vinod Canumalla and Fabian Piau for reviewing the blogpost.

Learn more about technology at Expedia Group

Translated from: https://medium.com/expedia-group-tech/creating-monitoring-dashboards-1f3fbe0ae1ac
