Debugging the kernel using Ftrace

Original article: http://lwn.net/Articles/365835/


See also this related article: http://www.linuxforu.com/2010/11/kernel-tracing-with-ftrace-part-1/



Ftrace is a tracing utility built directly into the Linux kernel. Many distributions already have various configurations of Ftrace enabled in their most recent releases. One of the benefits that Ftrace brings to Linux is the ability to see what is happening inside the kernel. As such, this makes finding problem areas or simply tracking down that strange bug more manageable.

Ftrace's ability to show the events that lead up to a crash gives a better chance of finding exactly what caused it and can help the developer in creating the correct solution. This article is a two part series that will cover various methods of using Ftrace for debugging the Linux kernel. This first part will talk briefly about setting up Ftrace, using the function tracer, writing to the Ftrace buffer from within the kernel, and various ways to stop the tracer when a problem is detected.

Ftrace was derived from two tools. One was the "latency tracer" by Ingo Molnar used in the -rt tree. The other was my own "logdev" utility that had its primary use on debugging the Linux kernel. This article will mostly describe features that came out of logdev, but will also look at the function tracer that originated in the latency tracer.

Setting up Ftrace

Currently the API to interface with Ftrace is located in the Debugfs file system. Typically, that is mounted at /sys/kernel/debug. For easier accessibility, I usually create a /debug directory and mount it there. Feel free to choose your own location for Debugfs.

When Ftrace is configured, it will create its own directory called tracing within the Debugfs file system. This article will reference those files in that directory as though the user first changed directory to the Debugfs tracing directory to avoid any confusion as to where the Debugfs file system has been mounted.

    [~]# cd /sys/kernel/debug/tracing
    [tracing]#

This article is focusing on using Ftrace as a debugging tool. Some configurations for Ftrace are used for other purposes, like finding latency or analyzing the system. For the purpose of debugging, the kernel configuration parameters that should be enabled are:

    CONFIG_FUNCTION_TRACER
    CONFIG_FUNCTION_GRAPH_TRACER
    CONFIG_STACK_TRACER
    CONFIG_DYNAMIC_FTRACE

Function tracing - no modification necessary

One of the most powerful tracers of Ftrace is the function tracer. It uses the -pg option of gcc to have every function in the kernel call a special function "mcount()". That function must be implemented in assembly because the call does not follow the normal C ABI.

When CONFIG_DYNAMIC_FTRACE is configured, the call is converted to a NOP at boot time to keep the system running at 100% performance. During compilation the mcount() call-sites are recorded. That list is used at boot time to convert those sites to NOPs. Since NOPs are pretty useless for tracing, the list is saved to convert the call-sites back into trace calls when the function (or function graph) tracer is enabled.

It is highly recommended to enable CONFIG_DYNAMIC_FTRACE because of this performance enhancement. In addition, CONFIG_DYNAMIC_FTRACE gives the ability to filter which functions should be traced. Note, even though the NOPs do not show any impact in benchmarks, the addition of frame pointers that come with the -pg option has been known to cause a slight overhead.
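As an example of the filtering mentioned above, with CONFIG_DYNAMIC_FTRACE enabled the tracing directory exposes a set_ftrace_filter file that accepts glob patterns; the pattern below is only an illustration of limiting the function tracer to the hrtimer functions:

    [tracing]# echo 'hrtimer_*' > set_ftrace_filter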

If debugfs is not mounted, you can issue the following command:

    # mount -t debugfs nodev /sys/kernel/debug

To find out which tracers are available, simply cat the available_tracers file in the tracing directory:
    [tracing]# cat available_tracers 
    function_graph function sched_switch nop

To enable the function tracer, just echo "function" into the current_tracer file.

    [tracing]# echo function > current_tracer
    [tracing]# cat current_tracer
    function

    [tracing]# cat trace | head -10
    # tracer: function
    #
    #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
    #              | |       |          |         |
                bash-16939 [000]  6075.461561: mutex_unlock <-tracing_set_tracer
              <idle>-0     [001]  6075.461561: _spin_unlock_irqrestore <-hrtimer_get_next_event
              <idle>-0     [001]  6075.461562: rcu_needs_cpu <-tick_nohz_stop_sched_tick
                bash-16939 [000]  6075.461563: inotify_inode_queue_event <-vfs_write
              <idle>-0     [001]  6075.461563: mwait_idle <-cpu_idle
                bash-16939 [000]  6075.461563: __fsnotify_parent <-vfs_write

The header explains the format of the output pretty well. The first two items are the traced task name and PID. The CPU that the trace was executed on is within the brackets. The timestamp is the time since boot, followed by the function name. The function in this case is the function being traced with its parent following the "<-" symbol.

This information is quite powerful and shows the flow of functions nicely. But it can be a bit hard to follow. The function graph tracer, created by Frederic Weisbecker, traces both the entry and exit of a function, which gives the tracer the ability to know the depth of functions that are called. The function graph tracer makes the flow of execution within the kernel much easier to follow with the human eye:

    [tracing]# echo function_graph > current_tracer 
    [tracing]# cat trace | head -20
    # tracer: function_graph
    #
    # CPU  DURATION                  FUNCTION CALLS
    # |     |   |                     |   |   |   |
     1)   1.015 us    |        _spin_lock_irqsave();
     1)   0.476 us    |        internal_add_timer();
     1)   0.423 us    |        wake_up_idle_cpu();
     1)   0.461 us    |        _spin_unlock_irqrestore();
     1)   4.770 us    |      }
     1)   5.725 us    |    }
     1)   0.450 us    |    mutex_unlock();
     1) + 24.243 us   |  }
     1)   0.483 us    |  _spin_lock_irq();
     1)   0.517 us    |  _spin_unlock_irq();
     1)               |  prepare_to_wait() {
     1)   0.468 us    |    _spin_lock_irqsave();
     1)   0.502 us    |    _spin_unlock_irqrestore();
     1)   2.411 us    |  }
     1)   0.449 us    |  kthread_should_stop();
     1)               |  schedule() {

This gives the start and end of a function denoted with the C-like annotation of "{" to start a function and "}" at the end. Leaf functions, which do not call other functions, simply end with a ";". The DURATION column shows the time spent in the corresponding function. The function graph tracer records the time the function was entered and exited and reports the difference as the duration. These numbers only appear with the leaf functions and the "}" symbol. Note that this time also includes the overhead of all functions within a nested function as well as the overhead of the function graph tracer itself. The function graph tracer hijacks the return address of the function in order to insert a trace callback for the function exit. This breaks the CPU's branch prediction and causes a bit more overhead than the function tracer. The closest true timings only occur for the leaf functions.

The lonely "+" that is there is an annotation marker. When the duration is greater than 10 microseconds, a "+" is shown. If the duration is greater than 100 microseconds a "!" will be displayed.

Using trace_printk()

printk() is the king of all debuggers, but it has a problem. If you are debugging a high volume area such as the timer interrupt, the scheduler, or the network, printk() can lead to bogging down the system or can even create a live lock. It is also quite common to see a bug "disappear" when adding a few printk()s. This is due to the sheer overhead that printk() introduces.

Ftrace introduces a new form of printk() called trace_printk(). It can be used just like printk(), and can also be used in any context (interrupt code, NMI code, and scheduler code). What is nice about trace_printk() is that it does not output to the console. Instead it writes to the Ftrace ring buffer and can be read via the trace file.

Writing into the ring buffer with trace_printk() only takes around a tenth of a microsecond or so. But using printk(), especially when writing to the serial console, may take several milliseconds per write. The performance advantage of trace_printk() lets you record the most sensitive areas of the kernel with very little impact.

For example you can add something like this to the kernel or module:

    trace_printk("read foo %d out of bar %p\n", bar->foo, bar);

Then by looking at the trace file, you can see your output.

    [tracing]# cat trace
    # tracer: nop
    #
    #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
    #              | |       |          |         |
               <...>-10690 [003] 17279.332920: : read foo 10 out of bar ffff880013a5bef8

The above example was done by adding a module that actually had a foo and bar construct.

trace_printk() output will appear in any tracer, even the function and function graph tracers.

    [tracing]# echo function_graph > current_tracer
    [tracing]# insmod ~/modules/foo.ko
    [tracing]# cat trace
    # tracer: function_graph
    #
    # CPU  DURATION                  FUNCTION CALLS
    # |     |   |                     |   |   |   |
     3) + 16.283 us   |      }
     3) + 17.364 us   |    }
     3)               |    do_one_initcall() {
     3)               |      /* read foo 10 out of bar ffff88001191bef8 */
     3)   4.221 us    |    }
     3)               |    __wake_up() {
     3)   0.633 us    |      _spin_lock_irqsave();
     3)   0.538 us    |      __wake_up_common();
     3)   0.563 us    |      _spin_unlock_irqrestore();

Yes, the trace_printk() output looks like a comment in the function graph tracer.

Starting and stopping the trace

Obviously there are times where you only want to trace a particular code path. Perhaps you only want to trace what is happening when you run a specific test. The file tracing_on is used to disable the ring buffer from recording data:

    [tracing]# echo 0 > tracing_on

This will disable the Ftrace ring buffer from recording. Everything else still happens with the tracers and they will still incur most of their overhead. They do notice that the ring buffer is not recording and will not attempt to write any data, but the calls that the tracers make are still performed.

To re-enable the ring buffer, simply write a '1' into that file:

    [tracing]# echo 1 > tracing_on

Note, it is very important that you have a space between the number and the greater than sign ">". Otherwise you may be writing standard input or output into that file.

    [tracing]# echo 0> tracing_on   /* this will not work! */

A common run might be:

    [tracing]# echo 0 > tracing_on
    [tracing]# echo function_graph > current_tracer
    [tracing]# echo 1 > tracing_on; run_test; echo 0 > tracing_on

The first line disables the ring buffer from recording any data. The next enables the function graph tracer. The overhead of the function graph tracer is still present but nothing will be recorded into the trace buffer. The last line enables the ring buffer, runs the test program, then disables the ring buffer. This narrows the data stored by the function graph tracer to include mostly just the data accumulated by the run_test program.

What's next?

The next article will continue the discussion on debugging the kernel with Ftrace. The method above to disable the tracing may not be fast enough. The latency between the end of the program run_test and echoing the 0 into the tracing_on file may cause the ring buffer to overflow and lose the relevant data. I will discuss other methods to stop tracing a bit more efficiently, how to debug a crash, and looking at what functions in the kernel are stack hogs. The best way to find out more is to enable Ftrace and just play with it. You can learn a lot about how the kernel works by just following the function graph tracer.


-------

The Ftrace tracing utility has many different features that will assist in tracking down Linux kernel problems. The previous article discussed setting up Ftrace, using the function and function graph tracers, using trace_printk(), and a simple way to stop the recording of a trace from user space. This installment will touch on how user space can interact with Ftrace, faster ways of stopping the trace, debugging a crash, and finding what kernel functions are the biggest stack hogs.

Trace Markers

Seeing what happens inside the kernel gives the user a better understanding of how their system works. But sometimes there needs to be coordination between what is happening in user space and what is happening inside the kernel. The timestamps that are shown in the traces are all relative to what is happening within the trace, but they do not correspond well with wall time.

To help synchronize between the actions in user space and kernel space, the trace_marker file was created. It provides a way to write into the Ftrace ring buffer from user space. This marker will then appear in the trace to give a location in the trace of where a specific event occurred.

    [tracing]# echo hello world > trace_marker
    [tracing]# cat trace
    # tracer: nop
    #
    #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
    #              | |       |          |         |
               <...>-3718  [001]  5546.183420: 0: hello world

The <...> indicates that the name of the task that wrote the marker was not recorded. Future releases may fix this.

Starting, Stopping and Recording in a Program

The tracing_on and trace_marker files work very well to trace the activities of an application if the source of the application is available. If there is a problem within the application and you need to find out what is happening inside the kernel at a particular location of the application, these two files come in handy.

At the start of the application, you can open these files to have the file descriptors ready:

    int trace_fd = -1;
    int marker_fd = -1;

    int main(int argc, char **argv)
    {
	    char *debugfs;
	    char path[256];
	    [...]

	    debugfs = find_debugfs();
	    if (debugfs) {
		    strcpy(path, debugfs);
		    strcat(path,"/tracing/tracing_on");
		    trace_fd = open(path, O_WRONLY);
		    if (trace_fd >= 0)
			    write(trace_fd, "1", 1);

		    strcpy(path, debugfs);
		    strcat(path,"/tracing/trace_marker");
		    marker_fd = open(path, O_WRONLY);

Then, at some critical location in the code, markers can be placed to show where the application currently is:

    if (marker_fd >= 0)
	    write(marker_fd, "In critical area\n", 17);

    if (critical_function() < 0) {
	    /* we failed! */
	    if (trace_fd >= 0)
		    write(trace_fd, "0", 1);
    }

In looking at the example, first you see a function called "find_debugfs()". The proper location to mount the debug file system is at /sys/kernel/debug but a robust tool should not depend on the debug file system being mounted there. An example of find_debugfs() was linked from the original article. The file descriptors are initialized to -1 to allow this code to work both with and without a tracing-enabled kernel.
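As a rough illustration (not the article's version), find_debugfs() could simply scan /proc/mounts for a debugfs entry; this sketch assumes <stdio.h> and <string.h> are included along with whatever headers the rest of the program needs:

    static char debugfs_path[256];

    /*
     * Scan /proc/mounts for a debugfs entry and return its mount point,
     * or NULL if debugfs is not mounted. Each line of /proc/mounts has
     * the form: device mountpoint fstype options dump pass
     */
    static char *find_debugfs(void)
    {
            char fstype[64];
            FILE *fp;

            fp = fopen("/proc/mounts", "r");
            if (!fp)
                    return NULL;

            while (fscanf(fp, "%*s %255s %63s %*s %*d %*d",
                          debugfs_path, fstype) == 2) {
                    if (strcmp(fstype, "debugfs") == 0) {
                            fclose(fp);
                            return debugfs_path;
                    }
            }

            fclose(fp);
            return NULL;
    }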

When the problem is detected, writing the ASCII character "0" into the trace_fd file descriptor stops tracing. As discussed in part 1, this only disables the recording into the Ftrace ring buffer, but the tracers are still incurring overhead.

When using the initialization code above, tracing will be enabled at the beginning of the application because the tracer runs in overwrite mode. That is, when the trace buffer fills up, it will remove the old data and replace it with the new. Since only the most recent trace information is relevant when the problem occurs there is no need to stop and start the tracing during the normal running of the application. The tracer only needs to be disabled when the problem is detected so the trace will have the history of what led up to the error. If interval tracing is needed within the application, it can write an ASCII "1" into the trace_fd to enable the tracing.
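For instance, the application could bracket just the interval of interest; do_interesting_work() below is only a placeholder for whatever code is being examined:

    if (trace_fd >= 0)
	    write(trace_fd, "1", 1);	/* resume recording */

    do_interesting_work();

    if (trace_fd >= 0)
	    write(trace_fd, "0", 1);	/* stop recording again */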

Here is an example of a simple program called simple_trace.c that uses the initialization process described above:

    req.tv_sec = 0;
    req.tv_nsec = 1000;
    write(marker_fd, "before nano\n", 12);
    nanosleep(&req, NULL);
    write(marker_fd, "after nano\n", 11);
    write(trace_fd, "0", 1);

(No error checking was added due to this being a simple program for example purposes only.)
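The example builds as an ordinary C program; nothing Ftrace-specific is needed at compile time (the output name is just a placeholder):

    [~]# gcc -o simple_trace simple_trace.c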

Here is the process to trace this simple program:

    [tracing]# echo 0 > tracing_on
    [tracing]# echo function_graph > current_tracer
    [tracing]# ~/simple_trace
    [tracing]# cat trace

The first line disables tracing because the program will enable it at startup. Next the function graph tracer is selected. The program is executed, which results in the following trace. Note that the output can be a little verbose so much of it has been cut and replaced with [...]:

    [...]
     0)               |      __kmalloc() {
     0)   0.528 us    |        get_slab();
     0)   2.271 us    |      }
     0)               |      /* before nano */
     0)               |      kfree() {
     0)   0.475 us    |        __phys_addr();
     0)   2.062 us    |      }
     0)   0.608 us    |      inotify_inode_queue_event();
     0)   0.485 us    |      __fsnotify_parent();
    [...]
     1)   0.523 us    |          _spin_unlock();
     0)   0.495 us    |    current_kernel_time();
     1)               |          it_real_fn() {
     0)   1.602 us    |  }
     1)   0.728 us    |            __rcu_read_lock();
     0)               |  sys_nanosleep() {
     0)               |    hrtimer_nanosleep() {
     0)   0.526 us    |      hrtimer_init();
     1)   0.418 us    |            __rcu_read_lock();
     0)               |      do_nanosleep() {
     1)   1.114 us    |            _spin_lock_irqsave();
    [...]
     0)               |      __kmalloc() {
     1)   2.760 us    |  }
     0)   0.556 us    |        get_slab();
     1)               |  mwait_idle() {
     0)   1.851 us    |      }
     0)               |      /* after nano */
     0)               |      kfree() {
     0)   0.486 us    |        __phys_addr();

Notice that the writes to trace_marker show up as comments in the function graph tracer.

The first column here represents the CPU. When we have the CPU traces interleaved like this, it may become hard to read the trace. The tool grep can easily filter this, or the per_cpu trace files may be used. The per_cpu trace files are located in the debugfs tracing directory under per_cpu.

    [tracing]# ls per_cpu
    cpu0  cpu1  cpu2  cpu3  cpu4  cpu5  cpu6  cpu7

There exists a trace file in each one of these CPU directories that only shows the trace for that CPU.

To get a nice view of the function graph tracer without the interference of other CPUs, just look at per_cpu/cpu0/trace.

    [tracing]# cat per_cpu/cpu0/trace
     0)               |      __kmalloc() {
     0)   0.528 us    |        get_slab();
     0)   2.271 us    |      }
     0)               |      /* before nano */
     0)               |      kfree() {
     0)   0.475 us    |        __phys_addr();
     0)   2.062 us    |      }
     0)   0.608 us    |      inotify_inode_queue_event();
     0)   0.485 us    |      __fsnotify_parent();
     0)   0.488 us    |      inotify_dentry_parent_queue_event();
     0)   1.106 us    |      fsnotify();
    [...]
     0)   0.721 us    |    _spin_unlock_irqrestore();
     0)   3.380 us    |  }
     0)               |  audit_syscall_entry() {
     0)   0.495 us    |    current_kernel_time();
     0)   1.602 us    |  }
     0)               |  sys_nanosleep() {
     0)               |    hrtimer_nanosleep() {
     0)   0.526 us    |      hrtimer_init();
     0)               |      do_nanosleep() {
     0)               |        hrtimer_start_range_ns() {
     0)               |          __hrtimer_start_range_ns() {
     0)               |            lock_hrtimer_base() {
     0)   0.866 us    |              _spin_lock_irqsave();
    [...]
     0)               |      __kmalloc() {
     0)               |        get_slab() {
     0)   1.851 us    |      }
     0)               |      /* after nano */
     0)               |      kfree() {
     0)   0.486 us    |        __phys_addr();

Disabling the Tracer Within the Kernel

During the development of a kernel driver there may exist strange errors that occur during testing. Perhaps the driver gets stuck in a sleep state and never wakes up. Trying to disable the tracer from user space when a kernel event occurs is difficult and usually results in a buffer overflow and loss of the relevant information before the user can stop the trace.

There are two functions that work well inside the kernel: tracing_on() and tracing_off(). These two act just like echoing "1" or "0" respectively into the tracing_on file. If there is some condition that can be checked for inside the kernel, then the tracer may be stopped by adding something like the following:

    if (test_for_error())
	    tracing_off();

Next, add several trace_printk()s (see part 1), recompile, and boot the kernel. You can then enable the function or function graph tracer and just wait for the error condition to happen. Examining the tracing_on file will let you know when the error condition occurred. It will switch from "1" to "0" when the kernel calls tracing_off().
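Put together, the instrumentation in a driver might look something like this sketch, where state and expected are hypothetical values from the driver being debugged:

    trace_printk("foo: state=%d expected=%d\n", state, expected);
    if (state != expected) {
	    /* something went wrong; keep the history leading up to it */
	    trace_printk("foo: unexpected state, stopping trace\n");
	    tracing_off();
    }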

After examining the trace, or saving it off in another file with:

    cat trace > ~/trace.sav

you can continue the trace to examine another hit. To do so, just echo "1" into tracing_on, and the trace will continue. This is also useful if the condition that triggers the tracing_off() call can be triggered legitimately. If the condition was triggered by normal operation, just restart the trace by echoing a "1" back into tracing_on and hopefully the next time the condition is hit will be because of the abnormality.

ftrace_dump_on_oops

There are times that the kernel will crash and examining the memory and state of the crash is more of a CSI science than a program debugging science. Using kdump/kexec with the crash utility is a valuable way to examine the state of the system at the point of the crash, but it does not let you see what has happened prior to the event that caused the crash.

Having Ftrace configured and enabling ftrace_dump_on_oops in the kernel boot parameters, or by echoing a "1" into /proc/sys/kernel/ftrace_dump_on_oops, will enable Ftrace to dump to the console the entire trace buffer in ASCII format on oops or panic. Having the console output to a serial log makes debugging crashes much easier. You can now trace back the events that led up to the crash.

Dumping to the console may take a long time since the default Ftrace ring buffer is over a megabyte per CPU. To shrink the size of the ring buffer, write the number of kilobytes you want the ring buffer to be to buffer_size_kb. Note that the value is per CPU, not the total size of the ring buffer.

    [tracing]# echo 50 > buffer_size_kb

The above will shrink the Ftrace ring buffer down to 50 kilobytes per CPU.

You can also trigger a dump of the Ftrace buffer to the console with sysrq-z.
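If a console keyboard is not handy, the same dump can typically be requested through the sysrq proc interface, assuming the sysrq facility is enabled on the system:

    [tracing]# echo z > /proc/sysrq-trigger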

To choose a particular location for the kernel dump, the kernel may call ftrace_dump() directly. Note, this may permanently disable Ftrace and a reboot may be necessary to enable it again. This is because ftrace_dump() reads the buffer. The buffer is made to be written to in all contexts (interrupt, NMI, scheduling) but the reading of the buffer requires locking. To be able to perform ftrace_dump() the locking is disabled and the buffer may end up being corrupted after the output.

    /*
     * The following code will lock up the box, so we dump out the
     * trace before we hit that location.
     */
    ftrace_dump();

    /* code that locks up */

Stack Tracing

The final topic to discuss is the ability to examine the size of the kernel stack and how much stack space each function is using. Enabling the stack tracer (CONFIG_STACK_TRACER) will show where the biggest use of the stack takes place.

The stack tracer is built from the function tracer infrastructure. It does not use the Ftrace ring buffer, but it does use the function tracer to hook into every function call. Because it uses the function tracer infrastructure, it does not add overhead when not enabled. To enable the stack tracer, echo 1 into /proc/sys/kernel/stack_tracer_enabled. To see the max stack size during boot up, add "stacktrace" to the kernel boot parameters.
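For example, a boot loader entry might pass it along with the other kernel parameters (the image name and root device below are only placeholders):

    kernel /boot/vmlinuz ro root=/dev/sda1 stacktrace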

The stack tracer checks the size of the stack at every function call. If it is greater than the last recorded maximum, it records the stack trace and updates the maximum with the new size. To see the current maximum, look at the stack_max_size file.

    [tracing]# echo 1 > /proc/sys/kernel/stack_tracer_enabled
    [tracing]# cat stack_max_size
    2928
    [tracing]# cat stack_trace
            Depth    Size   Location    (34 entries)
            -----    ----   --------
      0)     2952      16   mempool_alloc_slab+0x15/0x17
      1)     2936     144   mempool_alloc+0x52/0x104
      2)     2792      16   scsi_sg_alloc+0x4a/0x4c [scsi_mod]
      3)     2776     112   __sg_alloc_table+0x62/0x103
    [...]
     13)     2072      48   __elv_add_request+0x98/0x9f
     14)     2024     112   __make_request+0x43e/0x4bb
     15)     1912     224   generic_make_request+0x424/0x471
     16)     1688      80   submit_bio+0x108/0x115
     17)     1608      48   submit_bh+0xfc/0x11e
     18)     1560     112   __block_write_full_page+0x1ee/0x2e8
     19)     1448      80   block_write_full_page_endio+0xff/0x10e
     20)     1368      16   block_write_full_page+0x15/0x17
     21)     1352      16   blkdev_writepage+0x18/0x1a
     22)     1336      32   __writepage+0x1a/0x40
     23)     1304     304   write_cache_pages+0x241/0x3c1
     24)     1000      16   generic_writepages+0x27/0x29
    [...]
     30)      424      64   bdi_writeback_task+0x3f/0xb0
     31)      360      48   bdi_start_fn+0x76/0xd7
     32)      312     128   kthread+0x7f/0x87
     33)      184     184   child_rip+0xa/0x20

Not only does this give you the size of the maximum stack found, it also shows the breakdown of the stack sizes used by each function. Notice that write_cache_pages had the biggest stack with 304 bytes being used, followed by generic_make_request with 224 bytes of stack.

To reset the maximum, echo "0" into the stack_max_size file.

    [tracing]# echo 0 > stack_max_size

Keeping this running for a while will show where the kernel is using a bit too much stack. But remember that the stack tracer only has no overhead while it is not enabled. When it is running you may notice a bit of a performance degradation.
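When you have gathered enough data, the stack tracer can be switched back off the same way it was enabled:

    [tracing]# echo 0 > /proc/sys/kernel/stack_tracer_enabled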

Note that the stack tracer will not trace the max stack size when the kernel is using a separate stack. Because interrupts have their own stack, it will not trace the stack usage there. The reason is that currently there is no easy way to quickly see what the top of the stack is when the stack is something other than the current task's stack. When using split stacks, a process stack may be two pages but the interrupt stack may only be one. This may be fixed in the future, but keep this in mind when using the stack tracer.

Conclusion

Ftrace is a very powerful tool and easy to configure. No extra tools are necessary. Everything that was shown in this tutorial can be used on embedded devices that only have Busybox installed. Taking advantage of the Ftrace infrastructure should cut the time needed to debug that hard-to-find race condition. I seldom use printk() any more because the function and function graph tracers along with trace_printk() and tracing_off() have become my main tools for debugging the Linux kernel.

