Project Lighthouse Part 2: Measurement with Anonymized Data

A two-part series on how we will measure discrepancies in Airbnb guest acceptance rates using anonymized demographic data. This second part looks at the framework we use to understand the impact of anonymization on the precision of our acceptance rate estimates. (Read Part 1 here)

Introduction

In June, the Airbnb Anti-Discrimination product team announced Project Lighthouse, an initiative with the goal to measure and combat discrimination when booking or hosting on Airbnb. We launched this project in partnership with Color Of Change, the nation’s largest online racial justice organization with millions of members, as well as with guidance from other leading civil rights and privacy rights organizations.

At the core of Project Lighthouse is a novel system to measure discrepancies in people’s experiences on the Airbnb platform that could be a result of discrimination and bias. This system is built to measure these discrepancies with perceived race data that is not linked to individual Airbnb accounts. By conducting this analysis, we can understand the state of our platform with respect to inclusion, and begin to develop and evaluate interventions that lead to more equitable outcomes on Airbnb’s platform.

In the first post of this series, we provided some broader context on Project Lighthouse and introduced the privacy model of p-sensitive k-anonymity, which is one of the tools we use to protect our community’s data. In this post, we will focus on how we evaluate the effectiveness of the resulting anonymized data for measuring the impact of our product team’s interventions.

These blog posts are intended to serve as an introduction to the methodology underlying Project Lighthouse, which is described in greater detail in our technical paper. By publicly sharing our methodology, we hope to help other technology companies systematically measure and reduce discrimination on their platforms.

How we use A/B testing

Like most other product teams in the technology industry, the Airbnb Anti-Discrimination team runs A/B tests, where users are randomly assigned to a control or treatment group, to measure the impact of its interventions. For example, we could use an A/B test to measure the impact of promoting Instant Book on booking conversion rates. Similarly, we could also use an A/B test to understand the impact of obscuring guest profile photos until a booking is confirmed on metrics like booking and cancellation rates.

Some of the topics we are most interested in understanding are whether a gap in acceptance rates exists between demographic groups, and whether a particular intervention affects that gap. We can answer both questions by using A/B testing.

Consider a hypothetical A/B test that we analyze for impact on guests perceived as “race 1” and “race 2”.¹ In such a test, all users are randomly assigned to either the control or treatment group; one’s perceived race does not affect this assignment. Perceived race only becomes relevant when we analyze the test’s impact, after the A/B test has concluded. Figure 1 shows some possible results from this hypothetical test. In this example, the acceptance rate in the control group is 67% for guests perceived as “race 1” and 75% for guests perceived as “race 2”. The baseline difference in acceptance rates can be found by comparing acceptance rates in the control group, and is 75% − 67% = 8 percentage points.

Suppose the intervention increases the acceptance rate for guests perceived as “race 1” to 70% and leaves the acceptance rate for guests perceived as “race 2” unchanged at 75%. The gap in acceptance rates within the treatment group is calculated as 75% − 70% = 5 percentage points. Therefore, we conclude that the intervention has reduced the gap in acceptance rates between guests perceived as “race 1” and “race 2” by 8 − 5 = 3 percentage points.

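To make the arithmetic concrete, here is a minimal sketch of this difference-in-differences calculation in Python, using the hypothetical rates from the Figure 1 example (not real data):

```python
# Hypothetical acceptance rates from the Figure 1 example (not real data).
control = {"race 1": 0.67, "race 2": 0.75}
treatment = {"race 1": 0.70, "race 2": 0.75}

baseline_gap = control["race 2"] - control["race 1"]       # 8 percentage points
treatment_gap = treatment["race 2"] - treatment["race 1"]   # 5 percentage points
gap_reduction = baseline_gap - treatment_gap                # 3 percentage points

print(f"Baseline gap: {baseline_gap:.0%}, treatment gap: {treatment_gap:.0%}, "
      f"reduction: {gap_reduction:.0%}")
```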

Figure 1: Example of an A/B test’s potential impact on guest acceptance rates

Measurement objective and statistical power

As discussed in our previous post, we utilize the privacy model of p-sensitive k-anonymity to protect user data while computing potential gaps in the Airbnb experience (for this example, acceptance rates) between different demographic groups. Enforcing this privacy model sometimes requires us to modify data by changing or removing values from analysis. This can diminish our ability to accurately estimate metrics, such as acceptance rates, and the impact of our A/B tests on them.

To ensure that we can accurately measure the impact of our interventions, we conducted a study of the impact of anonymization on data utility, the usefulness of data for analysis. More precisely, we were concerned with statistical power, the probability that we observe a statistically significant change in a metric when an intervention is effective. Both larger sample sizes and effect sizes generally lead to more statistical power. Having adequate statistical power for our tests allows us to measure the impact of our interventions with confidence.

We focused our efforts on understanding how enforcing p-sensitive k-anonymity might affect the statistical power of our A/B tests. Our goal in this analysis is to understand how changing certain parameters, such as the value of k chosen when enforcing k-anonymity, affects statistical power when we measure the impact of our interventions on reservation acceptance rates by demographic group.

Simulation setup

The main tool that we use to understand the impact of anonymization on measurement is a simulation-based power analysis. To better understand how such an analysis works, let’s first go over how we would statistically analyze an A/B test’s impact on differences in acceptance rates between demographic groups.

Suppose, for the sake of discussion only, we had a dataset where each row represented a reservation request on Airbnb and the columns were:

  • accept: 1 if the reservation request was accepted, 0 otherwise
  • treatment: whether the guest was in the control or treatment group of the A/B test; we can encode this to take the value of 0 in the control group and 1 in the treatment group²
  • perceived race: the guest’s perceived race; we can encode this to take the value of 0 for guests perceived to be “race 1” and 1 for guests perceived to be “race 2”³

Then, we could run a linear regression of the form:

accept = a + b_obs · R + c_obs · T + d_obs · R · T + error

where R is the perceived race indicator (0 for “race 1”, 1 for “race 2”) and T is the treatment indicator (0 for control, 1 for treatment).

Here, the coefficient a would be the acceptance rate for guests perceived to be “race 1” in the control group, a + b_obs would be the acceptance rate for guests perceived to be “race 2” in the control group, and a + c_obs would be the acceptance rate for guests perceived to be “race 1” in the treatment group. For the purposes of our analysis, we are primarily concerned with the coefficient d_obs,⁴ which gives us the A/B test’s impact on the difference in acceptance rates between guests perceived as “race 1” and “race 2”.

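To make this concrete, here is a minimal sketch of how d_obs could be estimated with an ordinary least squares regression and an interaction term, assuming the hypothetical reservation-level data described above sits in a pandas DataFrame with columns accept, treatment, and race (the column names are illustrative, not an actual schema):

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_acceptance_regression(df: pd.DataFrame):
    """Fit accept = a + b_obs*race + c_obs*treatment + d_obs*race*treatment + error.

    Expects columns: accept (0/1), treatment (0 = control, 1 = treatment),
    race (0 for guests perceived as "race 1", 1 for "race 2").
    """
    model = smf.ols("accept ~ race + treatment + race:treatment", data=df).fit()
    d_obs = model.params["race:treatment"]      # estimated change in the gap
    p_value = model.pvalues["race:treatment"]   # two-sided p-value for d_obs
    return d_obs, p_value
```

A statistically significant d_obs (for example, a p-value below 0.05) is what we would record as “detecting an effect” in the power analysis described below.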

Suppose we also could conduct many A/B tests where we knew the true impact of the intervention on acceptance rates. We could then estimate statistical power by running the above regression after each test and recording whether d_obs was statistically significantly different from zero. That is, we would estimate the “probability of finding a statistically significant effect” by the “fraction of tests where we found a statistically significant effect”. While we are not able to do this analysis with actual⁵ A/B tests, we can simulate data and follow a similar process. This is the idea at the heart of a simulation-based power analysis.

The core step of a simulation-based power analysis is the simulation of a single A/B test. To do this, we generate a synthetic dataset where each row represents a hypothetical reservation request. We randomly generate perceived race labels and control/treatment group assignments for each row. We model acceptance as a Bernoulli random variable, with the probability of acceptance, p, given as:

p = a_1 + d_true · T   for guests perceived as “race 1”
p = a_2   for guests perceived as “race 2”

Continuing with the example in Figure 1, we set a_1 = 0.67, a_2 = 0.75, and d_true = 0.03.⁶ Here, T = 0 if the user is in the control group of the A/B test and T = 1 if they are in the treatment group. This gives us the following acceptance rates:

Guests perceived as “race 1”: 67% (control), 70% (treatment)
Guests perceived as “race 2”: 75% (control), 75% (treatment)
Table 1: acceptance rates for one hypothetical run of a simulation

We can then anonymize this dataset and analyze the anonymized data using the regression detailed above, recording our results. Repeating this process many times (at least 1,000, in our case) allows us to estimate the statistical power of our tests.

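The sketch below illustrates this loop under the setup above: it generates synthetic reservation requests with the Figure 1 parameters, fits the regression from the previous sketch, and reports the fraction of runs with a statistically significant d_obs. The anonymization step is omitted for brevity (in the actual analysis each simulated dataset would be made p-sensitive k-anonymous before the regression), and all names are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulate_ab_test(n, a_1, a_2, d_true, rng):
    """Generate one synthetic A/B test with n hypothetical reservation requests."""
    race = rng.integers(0, 2, size=n)       # 0 = perceived "race 1", 1 = perceived "race 2"
    treatment = rng.integers(0, 2, size=n)  # 0 = control, 1 = treatment
    # Acceptance probability: a_1 (+ d_true if treated) for "race 1", a_2 for "race 2".
    p = np.where(race == 0, a_1 + d_true * treatment, a_2)
    accept = rng.binomial(1, p)
    return pd.DataFrame({"accept": accept, "treatment": treatment, "race": race})

def estimate_power(n, d_true, n_sims=1000, a_1=0.67, a_2=0.75, alpha=0.05, seed=0):
    """Fraction of simulated tests where d_obs is statistically significant."""
    rng = np.random.default_rng(seed)
    significant = 0
    for _ in range(n_sims):
        df = simulate_ab_test(n, a_1, a_2, d_true, rng)
        # In the real pipeline, the simulated data would be anonymized here
        # (p-sensitive k-anonymity) before fitting the regression.
        fit = smf.ols("accept ~ race + treatment + race:treatment", data=df).fit()
        if fit.pvalues["race:treatment"] < alpha:
            significant += 1
    return significant / n_sims

# Example: estimated power to detect a 3-percentage-point change in the gap
# with 200,000 simulated reservation requests (slow; reduce n_sims to experiment).
# print(estimate_power(n=200_000, d_true=0.03, n_sims=200))
```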

Simulation results

One of the benefits of a simulation-based power analysis is that we can vary different aspects of our hypothetical experiment setups to understand their impact on data utility. In our case, we are interested in understanding the impact of the following factors:

  • The value of k, used in enforcing p-sensitive k-anonymity.
  • The value of N, the number of reservation requests in the A/B test.
  • The intervention’s efficacy in reducing the difference in acceptance rates between guests perceived as “race 1” and “race 2”. We will call this the true effect size (d_true in the previous section).

To this end, we can fix k, N and the true effect size and run the simulation 1,000 times to get a distribution of the observed effect size (d_obs in the previous section). We can then repeat this exercise for different values of k, N and the true effect size to study how they affect the distribution of d_obs. For example, we can compute the fraction of simulation runs where we detect a statistically significant effect, and use that as our estimate of statistical power.

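Assuming the estimate_power sketch above, the sweep over N and the true effect size might look like the following; sweeping over k would additionally require anonymizing each simulated dataset with that value of k before fitting the regression, which is omitted here. The grid values are illustrative only:

```python
import itertools

sample_sizes = [100_000, 200_000, 400_000]   # illustrative values of N
true_effects = [0.01, 0.02, 0.03, 0.04]      # illustrative values of d_true

# Estimated statistical power for each (N, d_true) combination.
power_estimates = {
    (n, d): estimate_power(n=n, d_true=d, n_sims=1000)
    for n, d in itertools.product(sample_sizes, true_effects)
}
```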

Figure 2 summarizes the main results of this analysis. The horizontal axis represents the true effect size (d_true), while the vertical axis represents our simulation-based estimate of statistical power. Each line represents the relationship between true effect size and statistical power for a specific value of k. We also shade the area in the graph where statistical power is below 80%, as it is a best practice to run A/B tests which have at least 80% power.

Figure 2: Simulation results (statistical power)

The first thing that we notice in Figure 2 is that statistical power increases with the true effect size. This is our empirical evidence that it is “easier to detect larger effects”, and a useful sanity check to do in any simulation-based power analysis. Secondly, we see that enforcing anonymity leads to a mild decrease in statistical power, depending on the value of k. For k = 5 or 10, this decrease is within 5–10 percent relative to identifiable data (k = 1). On the other hand, for k = 100, the relative decrease is 10–20 percent, depending on the true effect size.

Another way to look at these results is to analyze the minimum detectable effect: the smallest true effect size for which we have 80% power, for various values of k and N. Figure 3 plots the sample size on the horizontal axis and the minimum detectable effect on the vertical axis. The different lines demarcate different values of k.

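Given a grid of power estimates like the one sketched above, the minimum detectable effect for a particular sample size can be read off as the smallest true effect size whose estimated power reaches 80%. A minimal sketch, assuming the power_estimates dictionary from the previous snippet:

```python
def minimum_detectable_effect(power_estimates, n, target_power=0.80):
    """Smallest simulated d_true with estimated power >= target_power at sample size n."""
    adequate = [d for (n_i, d), power in power_estimates.items()
                if n_i == n and power >= target_power]
    return min(adequate) if adequate else None
```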

Figure 3: Simulation results (minimum detectable effect)

Similar to Figure 2, Figure 3 shows that there is an increase in minimum detectable effect that gets larger as k increases. A higher minimum detectable effect is undesirable since it means that we can only detect a larger change with the same amount of statistical power. However, the figure shows how increasing the sample size can compensate for this. For example, we need to analyze 200,000 reservation requests in an A/B test to detect an effect size of 1.75 percentage points with identifiable data (k = 1). When we use p-sensitive k-anonymous data, with k = 5, this increases to 250,000 reservation requests. Practically speaking, this means that we can run our A/B tests for longer so that they include more reservation requests, leading them to have adequate statistical power.

Conclusion

In summary, our simulation-based power analysis demonstrates that we can use p-sensitive k-anonymous data to measure the impact of our interventions to reduce discrepancies in the Airbnb experience by guest perceived race. While enforcing anonymity leads to up to a 20% decrease in statistical power, depending on the value of k, running tests for longer to obtain larger sample sizes can compensate for this.

It is important to note that our A/B test analysis workflow is notably different from that employed more generally in the technology industry. Each analysis we conduct now requires a significant amount of pre-work to ensure that we have p-sensitive k-anonymous data. We also run A/B tests for longer than we would have if we used identifiable data.

Nevertheless, our findings show that it is possible to audit online platforms for large-scale gaps in the user experience while at the same time protecting our community’s privacy. We hope that our work can serve as a resource for other technology companies who would also like to systematically measure and reduce discrimination on their platforms. Our publicly-available technical paper describes the topics covered in these posts, as well as our methods for enforcing p-sensitive k-anonymity in more detail. Our landing page has a more general overview of Project Lighthouse.

Project Lighthouse represents the collaborative work of many people both within and external to Airbnb. The Airbnb Anti-Discrimination team is: Sid Basu, Ruthie Berman, Adam Bloomston, John Campbell, Anne Diaz, Nanako Era, Benjamin Evans, Sukhada Palkar, and Skyler Wharton. Within Airbnb, Project Lighthouse also represents the work of Crystal Brown, Zach Dunn, Janaye Ingram, Brendon Lynch, Margaret Richardson, Ann Staggs, Laura Rillos, and Julie Wenah. We would also like to extend a special thanks to Laura Murphy and Conrad Miller for their continuing support and guidance throughout the project.

We know that bias, discrimination, and systemic inequities are complex and longstanding problems. Addressing them requires continued attention, adaptation, and collaboration. We encourage our peers in the technology industry to join us in this fight, and to help push us all collectively towards a world where everyone can belong.

This analysis is currently being conducted in our United States community. Perceived race data used in Project Lighthouse is not linked to individual Airbnb accounts. Additionally, the data collected for Project Lighthouse will be handled in a way that protects people’s privacy and will be used exclusively for anti-discrimination work. You can read more about this in the first blog post or in the Airbnb resource center.

Footnotes

[1] I’m using the fictional labels “race 1” and “race 2” for the sake of exposition. The framework presented here can be extended to analyze gaps in acceptance rates between multiple (>2) perceived racial identities.

[2] This encoding can be reversed to be 1 for the control group and 0 for the treatment group without loss of generality. The only effect on the analysis would be that c_obs would become the control minus treatment acceptance rate, instead of the treatment minus control acceptance rate.

[3] Similar to the treatment variable, this encoding can also be reversed.

[4] Here, obs is shorthand for “observed”.

[5] There are several reasons why we cannot do this with actual A/B tests. Firstly, to do so would require us having product interventions where we knew what the exact true impact on the difference in acceptance rates was, which we don’t. Secondly, the procedure we outline would require individual-level perceived race labels, which would violate our privacy commitments.

[6] To relate these values to the regression equation above, a estimates a_1 and a + b_obs estimates a_2.

Source: https://medium.com/airbnb-engineering/project-lighthouse-part-2-measurement-with-anonymized-data-69fb01eac88
