优化Derby数据库

在matrix上的翻译文章

原文地址


Tuning Derby

优化Derby数据库

by Dejan Bosanac

 

作者Dejan Bosanac


01/31/2007

There is a big difference in the behavior of a database when it is populated with a small amount of test inputs and when it holds a large amount of data. Usually, you would not address these database performance issues early in the development process, but when the time comes, you should take some action to ensure that the application is working correctly with large amounts of data.

数据库在操作少量测试数据和大量数据的时候,表现行为上有很大的差异。通常,在开发过程前期,人们不会关注数据库性能的问题,但是随着时间的发展,人们必须采取一些措施来保证数据库在大量数据的情况下正常工作。

The all-Java open-source database Derby is no exception, so you'll have to make sure it will not be a bottleneck to your application. Although you can find comprehensive material on this topic among Derby 's manuals, I would like to focus on certain issues in more detail and give some examples from my own experience. I will focus on application performances related to selecting data from large tables.

Derby这个完全用Java开发的开源数据库也不例外,因此你必须保证它不会成为你程序的瓶颈。尽管你可以在Derby的手册中找到关于这个话题的全面资料,我还是想更详尽地讨论其中的某些问题,并基于我自己的经验给出一些例子。本文将着重讨论由从大数据表中查询数据而引起的程序性能问题。

First of all, there are various tips on how you should tune Derby properties such as page size and the size of the cache. Playing with these parameters can help you improve performance to some degree, but usually the bigger problem lies in your application and database design, so you should focus on these issues first and leave Derby properties for the end.

首先,有很多关于如何调整Derby属性(诸如页面大小和缓存大小等)的技巧。调整这些参数可以在一定程度上改善性能,但通常更大的问题来自于你的程序和数据库设计,因此应该首先关注这些问题,最后再来考虑Derby的属性。

In the following sections, I will cover some techniques that can help you optimize problematic parts of your application. But as with all other performance-tuning activities, measure and positively identify problems before optimizing.

在接下来的段落里,我将介绍一些能够优化程序中有问题部分的技术。但是,和其他性能优化操作一样,我们需要在优化前先测量并确认问题所在。
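As a rough illustration of what "measuring" can mean here, the following is a minimal JDBC timing sketch that is not part of the original article; the connection URL and the tbl table are the same placeholders used in the examples below, and in a real application you would time the actual queries your pages issue.

import java.sql.*;

public class QueryTimer {
    public static void main(String[] args) throws Exception {
        // Load the Derby client driver, as in the article's examples.
        Class.forName("org.apache.derby.jdbc.ClientDriver").newInstance();

        long start = System.currentTimeMillis();
        Connection connection =
            DriverManager.getConnection("jdbc:derby://localhost:1527/testDb");
        Statement stmt = connection.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM tbl");

        int rows = 0;
        while (rs.next()) {
            rows++;   // walk the whole result set so the query really executes
        }

        rs.close();
        stmt.close();
        connection.close();

        System.out.println("Fetched " + rows + " rows in "
                + (System.currentTimeMillis() - start) + " ms");
    }
}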

A Simple Example

一个简单的例子

Let's start with a simple example: We have a "search"/"list" page in our web application that has to deal with a table of nearly 100,000 rows, and let's say that the table is not trivial (i.e., that it has at least 10 columns). I will write an example in plain JDBC so we can focus on database and JDBC issues. The principles explained in this article should be applicable to all Object-Relation mapping tools as well.

让我们从一个简单的例子开始:假设我们的Web程序中有一个“search/list”页面,需要处理一个接近100000行的表,并且这个表并不简单(也就是说,它至少有10栏)。我将用纯JDBC来写例子,这样我们可以专注于数据库和JDBC的问题。本文介绍的这些原则同样适用于所有的对象关系映射工具。

In order to give your users the ability to list a large table, you would normally start with the simple query:

为了使得用户能够列出一个大的表,通常使用下面简单的查询语句。

select * from tbl

The resulting JDBC code snippet for this would be similar to the following:

对应的JDBC语句如下:

Class.forName("org.apache.derby.jdbc.ClientDriver").newInstance();
Connection connection =
    DriverManager.getConnection("jdbc:derby://localhost:1527/testDb;");
Statement stmt = connection.createStatement();
ResultSet rs = stmt.executeQuery("select * from tbl");

ArrayList allResults = new ArrayList();
while (rs.next()) {
    // Object-Relation mapping code to populate your
    // object from the result set row
    DomainObject domainObject = populate(rs);
    allResults.add(domainObject);
}

System.out.println("Results Size: " + allResults.size());

Here, we encounter our first problem. Trying to execute code like this and populate 100,000 (or even more) domain objects will almost certainly lead to a java.lang.OutOfMemoryError as Java runs out of heap space. So for starters, we have to find a way to make this code just work.

在这里,我们碰到了第一个问题。执行这样的代码去生成100000个(甚至更多)domain对象,几乎肯定会因为Java耗尽堆(heap)空间而抛出“java.lang.OutOfMemoryError”。因此,第一步,我们必须先找到一个让这段代码能够工作的方法。

Paging Result Sets

分页Result Sets

As the amount of data in your application grows, the first thing you will want to do is add paging support for certain pages (or views in general). As you saw in the introductory example, simple queries that try to fetch large result sets can easily produce OutOfMemoryErrors.

随着程序中数据量的增长,你首先要做的事就是为特定的页面(或者广义上的视图)提供分页支持。正如你在这个介绍性的例子中看到的,简单地去获取庞大的result set很容易导致OutOfMemoryError错误。

Many database servers support specialized SQL constructs that can be used to retrieve a specified subset of query results. For example, in MySQL you'll find the LIMIT and OFFSET keywords, which can be used in SELECT queries. So if you execute a query like this:

许多数据库服务器支持特定的SQL结构,可以用来获取查询结果的特定子集。例如,MySQL提供了LIMIT和OFFSET关键字,它们可以用在SELECT查询中。因此,如果你执行类似下面的查询:

select * from tbl LIMIT 50 OFFSET 100

your result set will contain 50 rows starting from the 100th result, even if the original query returned 100,000 rows. Many other database vendors provide similar functionality through different constructs. Unfortunately, Derby does not provide such functionality, so you have to stay with the original select * from tbl query and implement a paging mechanism on the application level. Let's look at the following example:

你的结果集将包含从第100个结果开始的50行,即使原先的查询返回了100000行。许多其他的数据库提供商通过不同的结构提供了相似的功能。不幸的是,Derby并没有提供这样的功能,所以你必须继续使用原先的“select * from tbl”查询语句,然后在应用程序中实现一个分页的机制。让我们来看下面的例子:

Class.forName("org.apache.derby.jdbc.ClientDriver").newInstance();
Connection connection =
    DriverManager.getConnection("jdbc:derby://localhost:1527/testDb;");
Statement stmt = connection.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM tbl");

ArrayList allResults = new ArrayList();
int i = 0;
while (rs.next()) {
    if (i > 50 && i <= 100) {
        // O-R mapping code to populate your object from the result set row
        DomainObject domainObject = populate(rs);
        allResults.add(domainObject);
    }
    i++;
}

System.out.println("Results Size: " + allResults.size());

With these extra few lines, we have provided "paging" functionality. Although all result sets are being fetched from the database server, only the rows of interest are actually mapped to Java objects. Now we are safe from the OutOfMemoryError problem that we had before and can be sure that this code will actually work with large tables.

通过这些额外的语句,我们提供了“分页”的功能。尽管所有的结果都从数据库服务器中取出了,但是只有那些我们感兴趣的行才真正的映射到了Java的对象中。现在我们避免了先前碰到的“OutOfMemoryError”的问题了,这样保证了我们的程序可以真正的工作在大的数据表上。

But still, with this solution the database will scan through the whole table and return all rows, and that is certainly a time consuming task. For my example database, this operation takes up to ten seconds to execute, which is certainly not acceptable behavior for the application.

然而,采用这个方案,数据库仍然会扫描整个表并返回所有的行,这显然是一个非常耗时的操作。对于我的示例数据库来说,这个操作要花费多达10秒的时间,这对应用程序来说显然是不可接受的。

So, we have to come up with a solution; we do not want to retrieve all database rows but only those of our current interest (or at least the minimal possible subset of all rows). The trick we'll use here is to explicitly tell the JDBC driver how many rows we need. We can do this by using the setMaxRows() method of the java.sql.Statement interface. Let's look at this example:

因此,我们必须想出一个解决方案:我们不想取回所有的数据库行,而只要那些我们当前感兴趣的行(或者至少是尽可能小的行子集)。我们这里使用的技巧就是显式地告诉JDBC驱动我们需要多少行。我们可以使用java.sql.Statement接口的setMaxRows()方法来完成这个任务。看一下下面的例子:

Class.forName("org.apache.derby.jdbc.ClientDriver").newInstance();
Connection connection =
    DriverManager.getConnection("jdbc:derby://localhost:1527/testDb;");
Statement stmt = connection.createStatement();
stmt.setMaxRows(101);
ResultSet rs = stmt.executeQuery("SELECT * FROM tbl");

ArrayList allResults = new ArrayList();
int i = 0;
while (rs.next()) {
    if (i > 50 && i <= 100) {
        // O-R mapping code to populate your object from the result set row
        DomainObject domainObject = populate(rs);
        allResults.add(domainObject);
    }
    i++;
}

System.out.println("Results Size: " + allResults.size());

Notice that we have set the max rows value to the last row that we need (incremented by one). So, with this solution we didn't fetch only the 50 rows that we wanted, but first fetched a hundred rows and then filtered them down to the 50 rows of interest. Unfortunately, there is no way to tell the JDBC driver to start with a certain row, so we must specify the maximum row of the page that will be displayed. This means that performance will be good for early pages and drop as the user browses further into the results. The good news is that in most cases, the user will not go far, but will usually either find what he's looking for in the first few pages or refine the search query. In my environment, execution time dropped from 8 seconds to 0.8 seconds for the above example.

值得注意的是,我们把最大行数设置为我们需要的最后一行(再加1)。因此,采用这个方案,我们并不是只取了我们想要的那50行,而是先取出前一百行,再从中筛选出我们感兴趣的50行。不幸的是,我们没有办法告诉JDBC驱动从某一个具体的行开始,因此我们必须指定要显示页面的最大行号。这就意味着前几页的性能会很好,但随着用户往后翻页,性能也会下降。好消息是,在大多数情形下,用户不会浏览得太深,通常会在前几页就找到要找的内容,或者改进查询条件。在我的环境中,上述例子的执行时间从8秒降到了0.8秒。
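To keep this logic out of the page/view code, the whole pattern can be wrapped in a small helper method. The sketch below is not from the original article: fetchPage is a hypothetical name, DomainObject and populate() are the placeholders used above, pages are assumed to start at 0, and java.sql.* and java.util.* imports are assumed. It simply combines setMaxRows() with the row-skipping loop so callers only pass a page number and a page size.

// Hypothetical paging helper built on the setMaxRows() technique shown above.
public List fetchPage(Connection connection, int pageNumber, int pageSize)
        throws SQLException {
    int first = pageNumber * pageSize;   // index of the first row we want
    int last = first + pageSize;         // index one past the last row we want

    Statement stmt = connection.createStatement();
    stmt.setMaxRows(last);               // never fetch beyond the current page
    ResultSet rs = stmt.executeQuery("SELECT * FROM tbl");

    List results = new ArrayList();
    int i = 0;
    while (rs.next()) {
        if (i >= first && i < last) {
            results.add(populate(rs));   // map only the rows on this page
        }
        i++;
    }
    rs.close();
    stmt.close();
    return results;
}

With this sketch, a call such as fetchPage(connection, 2, 50) would return the third page of 50 rows while letting the driver stop fetching after 150 rows.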

 

This was an easy example of how to browse the whole table. But when you add certain WHERE conditions and ordering instructions to your query, things can change dramatically. In the following section, I will explain why this happens and how we can ensure acceptable application behavior in those cases.

这是一个浏览整个表的简单例子。但是当查询语句中加入了特定的WHERE条件和排序指令时,情况会发生巨大的变化。在接下来的部分里,我将解释为什么会发生这种情况,以及在那些情况下我们如何保证程序获得可接受的性能。

Make Sure Indexes Are Used (Avoid Table Scans)

确保使用索引(避免全表扫描)

Indexes are a very important concept in database design. Since the scope of this article is limited, I will not go into detail about indexing theory. Briefly though, indexes are special database structures that allow quick access to table rows. They are usually created in relation to one or more columns, and since they are much smaller than the whole table, their primary use is to enable quick searching of values in a column (or columns).

索引是数据库设计中一个非常重要的概念。由于本文篇幅有限,我不会详细介绍索引理论。简单来说,索引是一种特殊的数据库结构,能够对表中的行进行快速访问。索引通常是针对一栏或多栏创建的,由于它们比整个表小得多,它们的主要用处就是快速搜索某一栏(或多栏)中的值。

Derby automatically creates indexes for primary and foreign key columns and for columns that have unique constraints on them. For everything else, we must explicitly create indexes. In the following section, we'll go through a few examples and explain where and how indexes can be helpful.

Derby自动的为主键和外键的栏以及具有唯一性限制的栏创建索引。对于其他任何栏,我们必须显式的创建索引。在接下来的段落中,我们将研究一些例子来介绍索引在什么时候有用以及为什么有用。

But first, we have to make some preparations. Before we can start tuning performance, we need to be able to see what is going on in the database when our query is executing. For that purpose, Derby provides the derby.language.logQueryPlan parameter. When this parameter is set, Derby will log the query plan for all executed queries in the derby.log file (located in the derby.system.home folder). You can achieve this through the appropriate derby.properties file or by executing the following Java statement:

但是首先,我们必须做一些准备。在我们开始优化之前,我们需要能够了解我们执行查询操作的时候数据库中发生了什么。Derby提供了derby.language.logQueryPlan这个参数。如果设置了这个参数,Derby将会把所有执行的查询的查询计划(query plan)记录在derby.log这个文件中(这个文件在derby.system.home文件夹中)。我们可以在启动服务器之前通过合适的derby.properties文件或者执行如下的java语句来设置该参数

System.setProperty("derby.language.logQueryPlan", "true");

before you start the server.
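For reference, the derby.properties alternative is just a plain Java properties file placed in the derby.system.home directory; a minimal sketch containing only this setting would look like:

# derby.properties (placed in the derby.system.home directory)
derby.language.logQueryPlan=true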

By examining the query plan, we can see whether Derby uses indexing for some queries or performs a full table scan, which can be a time-consuming operation.

通过检查查询计划,我们可以看到Derby在执行某个查询时是使用了索引,还是进行了全表扫描——后者可能是一个非常耗时的操作。

Now that we have our environment set, we can proceed with the example. Let's say that in our previously used tbl example table, we have an unindexed column called owner. Because the sorting of the result is the usual suspect for poor query performance, I will illustrate all performance-tuning examples on problems related to sorting. Now, if we wanted to modify the previous example to sort our results by the value of this column, we would change our query to something like this:

既然我们已经设置好了环境,就可以继续我们的例子了。假设在我们先前使用的示例表tbl中,有一个没有索引的栏叫做owner。由于对结果排序通常是查询性能低下的常见原因,我将用与排序相关的问题来演示所有性能优化的例子。现在,如果我们想修改先前的例子,让结果按这一栏的值排序,就需要把查询语句改成如下的样子:

SELECT * FROM tbl ORDER BY owner

If we now run our example with this query instead of the original one, the execution time will be an order of magnitude higher than before. Despite the fact that we paginated the results and dealt carefully with the number of rows to be fetched, the total execution time will again be about 8 seconds.

如果我们用这个查询语句代替先前的语句来运行例子,执行时间将比先前高出一个数量级。尽管我们对结果进行了分页,并小心地控制了要获取的行数,总的执行时间仍然会达到8秒左右。

If we look at the query execution plan in the derby.log file, we can easily spot the problem:

如果我们查看derby.log文件中查询执行计划,我们可以轻易的发现问题:

Table Scan ResultSet for TBL at read committed isolation
level using instantaneous share row locking chosen
by the optimizer

This means that Derby performed look-up throughout the entire table in order to sort the row set. Now, what can we do to improve this situation? The answer is simple: create an index on this column. We can do that by issuing the following SQL statement:

这意味着Derby为了对行集排序,扫描了整个表。那么我们可以做些什么来改善这种情况呢?答案很简单:在这一栏上创建一个索引。我们可以通过如下的SQL语句来创建:

CREATE INDEX tbl_owner ON tbl(owner)

If we now repeat our previous example, we should get a result similar to the one we got without ordering (under one second in my case).

如果我们重复我们先前的例子,我们将得到一个和我们没有做排序前的那个例子相似的结果(在我的机器上是不到1秒)。

Also, if you look into derby.log now, you will see a line like this (instead of a line like the previous one):

同样,如果你现在查看derby.log,你将看到类似下面这样的一行(而不是先前那样的):

Index Scan ResultSet for TBL using index TBL_OWNER
at read committed isolation level using share row locking
chosen by the optimizer

which means you can be sure that Derby used our newly created index to get the appropriate rows.

这就意味着我们可以确保Derby使用了刚创建的索引来获取合适的行。

Use Appropriate Index Order

使用合适的索引顺序

We have seen how indexes helped us improve performances of sorting data by a column value. But what would happen if we tried to reverse the order of sorting? For example, let's say that we want to sort our example data by owner column but in descending order. In that case, our original query would be something like this:

我们已经看到了索引如何帮助我们改善按某一栏的值排序时的性能。但是如果我们反转排序的顺序,会发生什么呢?例如,假设我们想根据owner栏对示例数据进行降序排序。在这种情况下,我们原先的查询就会变成如下的语句:

SELECT * FROM tbl ORDER BY owner DESC

Notice the added DESC keyword, which sorts our result set in descending order. If we run our example with this modified query, you'll notice that the execution time increases to the previous rate of 8 to 9 seconds. Also, in the logfile, you will notice that the full table scan was performed in this case.

注意这里增加的DESC关键字,它将按降序对结果集排序。如果用这个修改过的查询来运行例子,你会发现执行时间又回到了先前的8到9秒。并且,在日志文件中,你将会发现这次又执行了全表扫描。

The solution is to create a descending index for the column in question. For our owner column, we can do that with the following SQL statement:

解决的方法就是为这一栏创建一个降序的索引。对于我们的owner栏,我们执行如下的SQL语句。

CREATE INDEX tbl_owner_desc ON tbl(owner desc)

Now we have two indexes for this column (in both directions), so our query will be executed with acceptable performances this time. Notice the following line in the query log:

现在我们对这一栏有两个索引了(升序和降序各一个),因此这次查询可以以可接受的性能执行了。注意查询日志中的这一行:

Index Scan ResultSet for TBL using index TBL_OWNER_DESC
at read committed isolation level using share row locking
chosen by the optimizer

which confirms that our newly created index was used. So, in case you often use queries that sort results in descending order, you may think of creating a suitable index to achieve better performances.

这证实了新建的索引确实被使用了。因此,如果你经常使用按降序排序结果的查询,可以考虑创建一个合适的索引来获得更好的性能。

Recreate Indexes

重建索引

Over time, index pages can fragment, which can cause serious performance degradation. For example, let's say we have an index, created some time ago, on the time_create column of our tbl table.

随着时间的推移,索引页会产生碎片,这可能导致严重的性能下降。例如,假设我们在tbl表的time_create栏上有一个很久以前创建的索引。

If we execute the query:

如果我们执行如下的查询

SELECT   *   FROM  tbl  ORDER   BY  time_create

we can get poor performance, much as if we didn't have an index at all. If we look into the query plan log, we can find the source of our problem. You will see that index scan has been used, but you can usually find a line similar to the following one in the log:

我们可能得到很差的性能,就好像我们根本没有索引一样。如果我们查看查询计划日志,就能找到问题的根源。你会看到确实使用了索引扫描,但通常还能在日志中找到类似下面这样的一行:

Number of pages visited = 1210

This means the database performed a lot of IO operations during index search, which is the main bottleneck of this query execution.

这意味着数据库在索引查询过程中执行了大量的IO操作,这就是这个查询过程的瓶颈所在。

The solution in this case is to recreate the index (i.e., drop and create it again). This will make the index defragmented again and save us from a lot of IO operations. We can do this by issuing the following SQL statements:

这种情况的解决方法就是重建索引(也就是先drop再重新创建)。这将对索引进行碎片整理,从而省去大量的IO操作。我们可以通过下面的SQL语句来重建索引:

DROP INDEX tbl_time_create

CREATE INDEX tbl_time_create ON tbl(time_create)

You'll notice that execution time drops to an acceptable value (under one second). Also, in the log file you will now find the following line:

你将发现执行时间又降到一个可接受的值(1秒以内)。

同样,你在日志文件中将发现如下的行:

Number of pages visited = 5

As you can see, the execution time dropped significantly because the database had to perform only a few IO operations.

正如你看到的,由于数据库只需要执行很少的IO操作,执行时间明显下降了。

So, the general rule of thumb is to make your application recreate indexes on a regular basis. It's best to schedule a background job in your application to do this every now and then.

因此,通常的经验法则就是让你的程序定期重建索引。最好在程序中安排一个后台任务,每隔一段时间自动完成这项工作。
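How you schedule such a job depends on your application; the following is only a sketch, not from the original article, using java.util.concurrent.ScheduledExecutorService, the tbl_time_create index from the example above, and a hypothetical getConnection() placeholder for however your application obtains JDBC connections.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IndexMaintenanceJob {

    // Recreate the example index once a day (the interval is arbitrary here).
    public void start() {
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    Connection connection = getConnection();
                    Statement stmt = connection.createStatement();
                    stmt.executeUpdate("DROP INDEX tbl_time_create");
                    stmt.executeUpdate(
                        "CREATE INDEX tbl_time_create ON tbl(time_create)");
                    stmt.close();
                    connection.close();
                } catch (Exception e) {
                    e.printStackTrace();   // log and keep the scheduler alive
                }
            }
        }, 1, 1, TimeUnit.DAYS);
    }

    // Placeholder: in a real application this would come from a connection pool.
    private Connection getConnection() throws Exception {
        return DriverManager.getConnection(
            "jdbc:derby://localhost:1527/testDb");
    }
}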

Multiple-Column Indexes

多栏索引

Thus far, we have concentrated on simple, single-column indexes and simple queries that can be tuned in this way. Single-column indexes created on owner and time_create columns will help us with the queries we'll use to filter or sort their values. Even the following query:

到目前为止,我们一直专注于简单的单栏索引,以及可以用这种方式优化的简单查询。在owner和time_create栏上创建的单栏索引,可以帮助我们对这些栏的值进行过滤或排序。即使是下面这样的查询也会有可接受的性能:

 

SELECT * FROM tbl WHERE owner = 'dejan'
AND time_create > '2006-01-01 00:00:00'
ORDER BY time_create

will have acceptable performances. But if you try to execute this query:

但是如果你尝试执行如下的查询:

SELECT * FROM tbl WHERE owner = 'dejan' ORDER BY time_create

you will get a very long execution time. This is because of the extra sorting step that the database needs to perform in order to sort data.

执行时间就会变得非常长。这是因为数据库为了对数据排序,需要执行额外的排序步骤。

The solution for these types of queries is to create an index that will cover both the owner and time_create columns. We can achieve this by executing the following query:

解决这类查询的办法就是创建一个同时覆盖owner和time_create两栏的索引。我们可以通过执行下面的语句来创建这个索引:

CREATE INDEX tbl_owner_time_create ON tbl(owner, time_create)

With this index in use, the query performance will dramatically improve. Now, notice the following lines in the analyzer log:

通过使用这个索引,查询的性能将会得到很大的改善。现在,注意下面的分析日志:

Index Scan ResultSet for TBL using index TBL_OWNER_TIME_CREATE
at read committed isolation level using share row locking
chosen by the optimizer

We have helped the database by letting it use a handy index to quickly find already sorted data.

我们通过使用一个便利的索引来使得数据库可以快速的找到已经排好序的数据。

The important thing to notice in this example is that column order in the CREATE INDEX statement is very important. Multiple-column indexes are optimizable by the first column defined during index creation. So, if we had created the following index:

这个例子中值得注意的是,CREATE INDEX语句中栏的顺序非常重要。多栏索引只能根据创建索引时定义的第一栏来进行优化。因此,如果我们创建的是如下的索引:

CREATE INDEX tbl_time_create_owner ON tbl(time_create, owner)

instead of one we used previously, we wouldn't see any performance benefits. That is because the Derby optimizer could not consider this index as the best execution path and it would simply be ignored.

而不是先前使用的那个索引,我们将看不到任何性能上的改善。这是因为Derby的优化器不会把这个索引当作最佳的执行路径,它会被直接忽略。

Index Drawbacks

索引的缺点

Indexes can help us improve performance when data selection is in question. But they slow down database insert and delete operations, and possibly updates. Since we not only have the table structure, but also various index structures, it takes longer for the database to maintain all these structures when data changes.

索引可以帮助我们在查询数据时改善性能。但是,它们会减慢数据库的插入和删除操作,可能还有更新操作。因为我们不仅有表结构,还有各种索引结构,当数据发生变化时,数据库需要更长的时间来维护所有这些结构。

For example, when we are inserting a row in a table, the database must update all indexes related to columns of that table. That means that it has to insert an indexed column value in the right place in the appropriate index, and that takes time. The same thing happens when you delete a certain row, because the index must be kept ordered. Update actions affect indexes only when you update indexed columns, since the database must relocate those entries in order to keep indexes sorted.

例如,当我们往表中插入一行数据的时候,数据库必须更新这个表相关各栏上的所有索引。也就是说,它必须把被索引栏的值插入到相应索引中正确的位置,这是要花时间的。删除某一行时也会发生同样的事情,因为索引必须保持有序。而更新操作只有在你更新了被索引的栏时才会影响索引,因为数据库必须重新安放这些索引项来保持索引有序。

So, the point is to optimize database and application design according to your needs. Don't index every column; you might not use those indexes, and you might need to optimize your database for fast inserting of data. Measure your performance early and identify bottlenecks; only then should you try to implement some of the techniques provided in this article.

因此,关键在于根据你的需要来优化数据库和程序设计。不要为每一栏都创建索引:你可能根本用不到这些索引,而且你也许需要优化数据库以便快速插入数据。尽早测量性能并找出瓶颈,只有在那之后才应该尝试使用本文提供的这些技术。

Conclusion

结论

In this article we have focused on just a small subset of performance-related issues you can find in everyday development tasks. Most of the principles shown here could be used (with some modifications) to any relational database system available. There are many other techniques that can help you improve the performance of your application. Caching is certainly one of the most effective and widely used approaches. There are many caching solutions for Java developers (some of them, such as OSCache or EHCache, have open source licenses) that could serve as a buffer between the application and database and thus improve overall application performance. Also, many object-relation frameworks used in Java projects (such as Hibernate) have built-in caching capabilities, so you should consider those solutions as well, but that's the material for another discussion.

在本文中,我们只研究了日常开发任务中会遇到的性能问题中的一小部分。这里展示的大多数原则(稍加修改后)也适用于任何其他的关系数据库系统。还有很多其他的技术可以帮助你改善程序的性能。缓存当然是最有效、应用最广泛的方法之一。对于Java开发者来说,有许多缓存解决方案(其中一些,如OSCache或EHCache,采用开源许可),它们可以充当程序和数据库之间的缓冲层,从而提高整个程序的性能。同样,Java项目中使用的许多对象关系映射框架(如Hibernate)都拥有内置的缓存能力,所以你也应该考虑这些方案,不过那是另一个讨论的话题了。

Resources

资源

  • Derby home page
  • Derby 的主页

Dejan Bosanac is a software developer, technology consultant and author. He is focused on the integration and interoperability of different technologies, especially the ones related to Java and the Web.

Dejan Bosanac是一位软件开发者、技术顾问和作者。他关注不同技术之间的集成与互操作,尤其是与Java和Web相关的技术。

 