Introducing the GPUImage framework - adding filters & effects to video and images

Filtering with GPUImage

There isn't much open-source filter code for iOS; GPUImage is essentially the most capable option, and it is both simple to use and powerful.
It covers the basic needs: filters for the still camera, live video capture, and static images.

Live filter:
Use GPUImageStillCamera as the camera, and add it as a source feeding a GPUImageView:
_stillCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
Be sure to use a smaller preset such as AVCaptureSessionPreset640x480 in initWithSessionPreset:; otherwise you will run into memory warnings.
[_stillCamera addTarget:self.imageView];

To add a filter, insert it into the chain:
_cropFilter = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.0f, 0.0f, 1.0f, 0.96f)];
[_stillCamera addTarget:_cropFilter];
[_cropFilter addTarget:self.imageView];
For multiple filters, chain them one after another:
[_stillCamera addTarget:_cropFilter1];
[_cropFilter1 addTarget:_cropFilter2];
[_cropFilter2 addTarget:self.imageView];

To produce a UIImage after taking a photo:
[_stillCamera capturePhotoAsImageProcessedUpToFilter:lastFilter
                               withCompletionHandler:^(UIImage *processedImage, NSError *error) {
    // use the filtered UIImage here (display it, write it to disk, etc.)
}];

Here lastFilter corresponds to finalFilterInChain; in other words, pass in the last filter that was added to the chain with addTarget:.
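
Putting it together, a minimal sketch (assuming _stillCamera, _cropFilter, and self.imageView are set up as above; saving to the photo library is only an illustration):

[_stillCamera addTarget:_cropFilter];
[_cropFilter addTarget:self.imageView];
[_stillCamera startCameraCapture]; // start streaming frames through the filter chain

// later, e.g. from a shutter-button action:
[_stillCamera capturePhotoAsImageProcessedUpToFilter:_cropFilter
                               withCompletionHandler:^(UIImage *processedImage, NSError *error) {
    if (processedImage != nil) {
        UIImageWriteToSavedPhotosAlbum(processedImage, nil, nil, nil); // illustrative: save to Photos
    }
}];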

Static filter:
First load the source image into a GPUImagePicture:
_staticPicture = [[GPUImagePicture alloc] initWithImage:selectedImage smoothlyScaleOutput:NO];
Adding filters works the same way as above.
When you need to generate the new image, call:
[_staticPicture processImage];
UIImage *filteredImage = [filter imageFromCurrentlyProcessedOutput];
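
A minimal end-to-end sketch (assuming selectedImage is a UIImage; the sepia filter is just an example choice):

GPUImagePicture *staticPicture = [[GPUImagePicture alloc] initWithImage:selectedImage smoothlyScaleOutput:NO];
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];

[staticPicture addTarget:filter];   // the picture must feed the filter before processing
[staticPicture processImage];
UIImage *filteredImage = [filter imageFromCurrentlyProcessedOutput];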

It's very simple.


A brief overview of GPUImage

I. Introduction

GPUImage is an open-source project by Brad Larson hosted on GitHub. It implements image filters and real-time camera filters. Its advantage is not just the number of filters available, but that processing runs on the GPU, which performs better than doing the same work on the CPU.

II. Classes

1. Input sources

The Sources folder contains GPUImageVideoCamera (live camera video input), GPUImageStillCamera (camera still-photo input), GPUImagePicture (static image input), and GPUImageMovie (movie file input).

2. Pipeline

GPUImageFilterPipeline takes an input source, runs it through an ordered group of filters, and renders the result onto an output view.
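
For example, a minimal sketch (the image name, filters, and view frame are just placeholders you would supply yourself):

UIImage *sourceImage = [UIImage imageNamed:@"Sample.jpg"];   // any bundled image
GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:sourceImage];
GPUImageView *outputView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)];

NSArray *filters = @[[[GPUImageSepiaFilter alloc] init],
                     [[GPUImageVignetteFilter alloc] init]];
GPUImageFilterPipeline *pipeline =
    [[GPUImageFilterPipeline alloc] initWithOrderedFilters:filters
                                                      input:picture
                                                     output:outputView];
[picture processImage];   // pushes the image through the pipeline into the view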

3. Filters

3.1 Color adjustment filters

GPUImageBrightnessFilter // brightness
GPUImageExposureFilter // exposure
GPUImageContrastFilter // contrast
GPUImageSaturationFilter // saturation
GPUImageGammaFilter // gamma
GPUImageLevelsFilter
GPUImageColorMatrixFilter
GPUImageRGBFilter
GPUImageHueFilter
GPUImageToneCurveFilter
GPUImageHighlightShadowFilter
GPUImageLookupFilter
GPUImageAmatorkaFilter
GPUImageMissEtikateFilter
GPUImageSoftEleganceFilter
GPUImageColorInvertFilter
GPUImageGrayscaleFilter
GPUImageMonochromeFilter
GPUImageFalseColorFilter
GPUImageHazeFilter
GPUImageSepiaFilter
GPUImageOpacityFilter
GPUImageSolidColorGenerator
GPUImageLuminanceThresholdFilter
GPUImageAdaptiveThresholdFilter
GPUImageAverageLuminanceThresholdFilter
GPUImageHistogramFilter
GPUImageHistogramGenerator
GPUImageAverageColor
GPUImageLuminosity
GPUImageChromaKeyFilter

3.2 Image processing filters

GPUImageTransformFilter // transform
GPUImageCropFilter // crop
GPUImageLanczosResamplingFilter
GPUImageSharpenFilter
GPUImageUnsharpMaskFilter
GPUImageFastBlurFilter
GPUImageSingleComponentFastBlurFilter
GPUImageGaussianBlurFilter
GPUImageSingleComponentGaussianBlurFilter
GPUImageGaussianSelectiveBlurFilter
GPUImageGaussianBlurPositionFilter
GPUImageMedianFilter
GPUImageBilateralFilter
GPUImageTiltShiftFilter
GPUImageBoxBlurFilter
GPUImage3x3ConvolutionFilter
GPUImageSobelEdgeDetectionFilter
GPUImageThresholdEdgeDetectionFilter
GPUImageCannyEdgeDetectionFilter
GPUImageHarrisCornerDetectionFilter
GPUImageNobleCornerDetectionFilter
GPUImageShiTomasiCornerDetectionFilter
GPUImageNonMaximumSuppressionFilter
GPUImageXYDerivativeFilter
GPUImageCrosshairGenerator
GPUImageDilationFilter
GPUImageRGBDilationFilter
GPUImageErosionFilter
GPUImageRGBErosionFilter
GPUImageOpeningFilter
GPUImageRGBOpeningFilter
GPUImageClosingFilter
GPUImageRGBClosingFilter
GPUImageLocalBinaryPatternFilter
GPUImageLowPassFilter
GPUImageHighPassFilter
GPUImageMotionDetector
GPUImageHoughTransformLineDetector
GPUImageLineGenerator
GPUImageMotionBlurFilter
GPUImageZoomBlurFilter

3.3 Blend modes

GPUImageChromaKeyBlendFilter
GPUImageDissolveBlendFilter
GPUImageMultiplyBlendFilter
GPUImageAddBlendFilter
GPUImageSubtractBlendFilter
GPUImageDivideBlendFilter
GPUImageOverlayBlendFilter
GPUImageDarkenBlendFilter
GPUImageLightenBlendFilter
GPUImageColorBurnBlendFilter
GPUImageColorDodgeBlendFilter
GPUImageScreenBlendFilter
GPUImageExclusionBlendFilter
GPUImageDifferenceBlendFilter
GPUImageHardLightBlendFilter
GPUImageSoftLightBlendFilter
GPUImageAlphaBlendFilter
GPUImageSourceOverBlendFilter
GPUImageColorBurnBlendFilter
GPUImageColorDodgeBlendFilter
GPUImageNormalBlendFilter
GPUImageColorBlendFilter
GPUImageHueBlendFilter
GPUImageSaturationBlendFilter
GPUImageLuminosityBlendFilter
GPUImageLinearBurnBlendFilter
GPUImagePoissonBlendFilter
GPUImageMaskFilter

3.4 Visual effects

GPUImagePixellateFilter
GPUImagePolarPixellateFilter
GPUImagePolkaDotFilter
GPUImageHalftoneFilter
GPUImageCrosshatchFilter
GPUImageSketchFilter
GPUImageThresholdSketchFilter
GPUImageToonFilter
GPUImageSmoothToonFilter
GPUImageEmbossFilter
GPUImagePosterizeFilter
GPUImageSwirlFilter
GPUImageBulgeDistortionFilter
GPUImagePinchDistortionFilter
GPUImageStretchDistortionFilter
GPUImageSphereRefractionFilter
GPUImageGlassSphereFilter
GPUImageVignetteFilter
GPUImageKuwaharaFilter
GPUImageKuwaharaRadius3Filter
GPUImagePerlinNoiseFilter
GPUImageCGAColorspaceFilter
GPUImageMosaicFilter
GPUImageJFAVoronoiFilter
GPUImageVoronoiConsumerFilter

3.5 Outputs

The Outputs folder contains GPUImageView, the commonly used output view, and GPUImageMovieWriter, which re-encodes filtered output to a movie file.

III. Usage

1. Drag the GPUImage project into the project that needs image filtering, and link the following frameworks:

  • CoreMedia
  • CoreVideo
  • OpenGLES
  • AVFoundation
  • QuartzCore

2. In any class that uses GPUImage, add #import "GPUImage.h".

3. Create an input source, for example:

GPUImagePicture *staticPicture = [[GPUImagePicture alloc] initWithImage:stillImage smoothlyScaleOutput:YES];

4. Create a filter, for example:

GPUImageFalseColorFilter *filter = [[GPUImageFalseColorFilter alloc] init];

5. Create an output view, for example:

GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];

You can also use one of your existing views as the output by casting it, e.g. (GPUImageView *)self.view; note that the view's class must actually be GPUImageView (for example, set in the storyboard or xib) for this to work.
6. Create the pipeline, for example:

GPUImageFilterPipeline *pipeline = [[GPUImageFilterPipeline alloc] initWithOrderedFilters:arrayTemp input:staticPicture output:(GPUImageView *)self.view];

7. Process the still image, or start camera capture for a camera source:

[staticPicture processImage];        // for a still-image source
[videoCamera startCameraCapture];    // for a camera source (a GPUImageVideoCamera instance)

IV. Download

Download from GitHub.

Introducing the GPUImage framework

[Image: SecondConf logo]

I'd like to introduce a new open source framework that I've written, called GPUImage. The GPUImage framework is a BSD-licensed iOS library (for which the source code can be found on Github) that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a slightly simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as facial detection.

UPDATE (4/15/2012): I've disabled comments, because they were getting out of hand. If you wish to report an issue with the project, or request a feature addition, go to its GitHub page. If you want to ask a question about it, contact me at the email address in the footer of this page, or post in the new forum I have set up for the project.

About a year and a half ago, I gave a talk at SecondConf where I demonstrated the use of OpenGL ES 2.0 shaders to process live video. The subsequent writeup and sample code that came out of that proved to be fairly popular, and I've heard from a number of people who have incorporated that video processing code into their iOS applications. However, the amount of code around the OpenGL ES 2.0 portions of that example made it difficult to customize and reuse. Since much of this code was just scaffolding for interacting with OpenGL ES, it could stand to be encapsulated in an easier to use interface.

[Image: Example of four types of video filters]

Since then, Apple has ported some of their Core Image framework from the Mac to iOS. Core Image provides an interface for doing filtering of images and video on the GPU. Unfortunately, the current implementation on iOS has some limitations. The largest of these is the fact that you can't write your own custom filters based on their kernel language, like you can on the Mac. This severely restricts what you can do with the framework. Other downsides include a somewhat more complex interface and a lack of iOS 4.0 support. Others have complained about some performance overhead, but I've not benchmarked this myself.

Because of the lack of custom filters in Core Image, I decided to convert my video filtering example into a simple Objective-C image and video processing framework. The key feature of this framework is its support for completely customizable filters that you write using the OpenGL Shading Language. It also has a straightforward interface (which you can see some examples of below) and support for iOS 4.0 as a target.

Note that this framework is built around OpenGL ES 2.0, so it will only work on devices that support this API. This means that this framework will not work on the original iPhone, iPhone 3G, and 1st and 2nd generation iPod touches. All other iOS devices are supported.

The following is my first pass of documentation for this framework, an up-to-date version of which can be found within the framework repository on GitHub:

General architecture

GPUImage uses OpenGL ES 2.0 shaders to perform image and video manipulation much faster than could be done in CPU-bound routines. It hides the complexity of interacting with the OpenGL ES API in a simplified Objective-C interface. This interface lets you define input sources for images and video, attach filters in a chain, and send the resulting processed image or video to the screen, to a UIImage, or to a movie on disk.

Images or frames of video are uploaded from source objects, which are subclasses of GPUImageOutput. These include GPUImageVideoCamera (for live video from an iOS camera) and GPUImagePicture (for still images). Source objects upload still image frames to OpenGL ES as textures, then hand those textures off to the next objects in the processing chain.

Filters and other subsequent elements in the chain conform to the GPUImageInput protocol, which lets them take in the supplied or processed texture from the previous link in the chain and do something with it. Objects one step further down the chain are considered targets, and processing can be branched by adding multiple targets to a single output or filter.

For example, an application that takes in live video from the camera, converts that video to a sepia tone, then displays the video onscreen would set up a chain looking something like the following:

GPUImageVideoCamera -> GPUImageSepiaFilter -> GPUImageView
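
A minimal sketch of that chain in code (the view frames are just placeholder values), with a second target added to the camera to show how processing can branch:

GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
GPUImageView *sepiaView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 240.0)];
GPUImageView *rawView   = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 240.0, 320.0, 240.0)];
// (add both views to your view hierarchy so they are visible)

[videoCamera addTarget:sepiaFilter];   // camera -> sepia filter
[sepiaFilter addTarget:sepiaView];     // sepia filter -> view
[videoCamera addTarget:rawView];       // branch: the same camera output also feeds an unfiltered view

[videoCamera startCameraCapture];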

A small number of filters are built in:

  • GPUImageBrightnessFilter
  • GPUImageContrastFilter
  • GPUImageSaturationFilter
  • GPUImageGammaFilter
  • GPUImageColorMatrixFilter
  • GPUImageColorInvertFilter
  • GPUImageSepiaFilter: Simple sepia tone filter
  • GPUImageDissolveBlendFilter
  • GPUImageMultiplyBlendFilter
  • GPUImageOverlayBlendFilter
  • GPUImageDarkenBlendFilter
  • GPUImageLightenBlendFilter
  • GPUImageRotationFilter: This lets you rotate an image left or right by 90 degrees, or flip it horizontally or vertically
  • GPUImagePixellateFilter: Applies a pixellation effect on an image or video, with the fractionalWidthOfAPixel property controlling how large the pixels are, as a fraction of the width and height of the image (see the sketch just after this list)
  • GPUImageSobelEdgeDetectionFilter: Performs edge detection, based on a Sobel 3x3 convolution
  • GPUImageSketchFilter: Converts video to a sketch, and is the inverse of the edge detection filter
  • GPUImageToonFilter
  • GPUImageSwirlFilter
  • GPUImageVignetteFilter
  • GPUImageKuwaharaFilter: Converts the video to an oil painting, but is very slow right now

but you can easily write your own custom filters using the C-like OpenGL Shading Language, as described below.
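
As a quick illustration of configuring one of these built-in filters (a hedged sketch; the 0.05 value is only an example), setting the pixel size on the pixellation filter looks like:

GPUImagePixellateFilter *pixellateFilter = [[GPUImagePixellateFilter alloc] init];
pixellateFilter.fractionalWidthOfAPixel = 0.05; // each pixel block is 5% of the image width/height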

Adding the framework to your iOS project

Once you have the latest source code for the framework, it's fairly straightforward to add it to your application. Start by dragging the GPUImage.xcodeproj file into your application's Xcode project to embed the framework in your project. Next, go to your application's target and add GPUImage as a Target Dependency. Finally, you'll want to drag the libGPUImage.a library from the GPUImage framework's Products folder to the Link Binary With Libraries build phase in your application's target.

GPUImage needs a few other frameworks to be linked into your application, so you'll need to add the following as linked libraries in your application target:

  • CoreMedia
  • CoreVideo
  • OpenGLES
  • AVFoundation
  • QuartzCore

You'll also need to find the framework headers, so within your project's build settings set the Header Search Paths to the relative path from your application to the framework/ subdirectory within the GPUImage source directory. Make this header search path recursive.

To use the GPUImage classes within your application, simply include the core framework header using the following:

#import "GPUImage.h"

As a note: if you run into the error "Unknown class GPUImageView in Interface Builder" or the like when trying to build an interface with Interface Builder, you may need to add -ObjC to your Other Linker Flags in your project's build settings.

Performing common tasks

Filtering live video

To filter live video from an iOS device's camera, you can use code like the following:

GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];
 
// Add the view somewhere so it's visible
 
[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];
 
[videoCamera startCameraCapture];

This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.

Processing a still image

There are a couple of ways to process a still image and create a result. The first way you can do this is by creating a still image source object and manually creating a filter chain:

UIImage *inputImage = [UIImage imageNamed:@"Lambeau.jpg"];
 
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *stillImageFilter = [[GPUImageSepiaFilter alloc] init];
 
[stillImageSource addTarget:stillImageFilter];
[stillImageSource processImage];
 
UIImage *currentFilteredVideoFrame = [stillImageFilter imageFromCurrentlyProcessedOutput];

For single filters that you wish to apply to an image, you can simply do the following:

GPUImageSepiaFilter *stillImageFilter2 = [[GPUImageSepiaFilter alloc] init];
UIImage *quickFilteredImage = [stillImageFilter2 imageByFilteringImage:inputImage];

Writing a custom filter

One significant advantage of this framework over Core Image on iOS (as of iOS 5.0) is the ability to write your own custom image and video processing filters. These filters are supplied as OpenGL ES 2.0 fragment shaders, written in the C-like OpenGL Shading Language.

A custom filter is initialized with code like

GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];

where the extension used for the fragment shader is .fsh. Additionally, you can use the -initWithFragmentShaderFromString: initializer to provide the fragment shader as a string, if you would not like to ship your fragment shaders in your application bundle.
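
For instance, a hedged sketch of the string-based initializer (the shader here is just a pass-through that copies the input pixel unchanged):

NSString *passthroughShaderString =
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D inputImageTexture;\n"
    @"void main()\n"
    @"{\n"
    @"    gl_FragColor = texture2D(inputImageTexture, textureCoordinate);\n"
    @"}";

GPUImageFilter *inlineFilter =
    [[GPUImageFilter alloc] initWithFragmentShaderFromString:passthroughShaderString];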

Fragment shaders perform their calculations for each pixel to be rendered at that filter stage. They do this using the OpenGL Shading Language (GLSL), a C-like language with additions specific to 2-D and 3-D graphics. An example of a fragment shader is the following sepia-tone filter:

varying highp vec2 textureCoordinate;
 
uniform sampler2D inputImageTexture;
 
void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 outputColor;
    outputColor.r = (textureColor.r * 0.393) + (textureColor.g * 0.769) + (textureColor.b * 0.189);
    outputColor.g = (textureColor.r * 0.349) + (textureColor.g * 0.686) + (textureColor.b * 0.168);    
    outputColor.b = (textureColor.r * 0.272) + (textureColor.g * 0.534) + (textureColor.b * 0.131);
 
	gl_FragColor = outputColor;
}

For an image filter to be usable within the GPUImage framework, the first two lines that take in the textureCoordinate varying (for the current coordinate within the texture, normalized to 1.0) and the inputImageTexture varying (for the actual input image frame texture) are required.

The remainder of the shader grabs the color of the pixel at this location in the passed-in texture, manipulates it in such a way as to produce a sepia tone, and writes that pixel color out to be used in the next stage of the processing pipeline.

One thing to note when adding fragment shaders to your Xcode project is that Xcode thinks they are source code files. To work around this, you'll need to manually move your shader from the Compile Sources build phase to the Copy Bundle Resources one in order to get the shader to be included in your application bundle.

Filtering and re-encoding a movie

Movies can be loaded into the framework via the GPUImageMovie class, filtered, and then written out using a GPUImageMovieWriter. GPUImageMovieWriter is also fast enough to record video in realtime from an iPhone 4's camera at 640x480, so a direct filtered video source can be fed into it.

The following is an example of how you would load a sample movie, pass it through a pixellation and rotation filter, then record the result to disk as a 480 x 640 h.264 movie:

movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];
pixellateFilter = [[GPUImagePixellateFilter alloc] init];
GPUImageRotationFilter *rotationFilter = [[GPUImageRotationFilter alloc] initWithRotation:kGPUImageRotateRight];
 
[movieFile addTarget:rotationFilter];
[rotationFilter addTarget:pixellateFilter];
 
NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
unlink([pathToMovie UTF8String]);
NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
 
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];
[pixellateFilter addTarget:movieWriter];
 
[movieWriter startRecording];
[movieFile startProcessing];

Once recording is finished, you need to remove the movie recorder from the filter chain and close off the recording using code like the following:

[pixellateFilter removeTarget:movieWriter];
[movieWriter finishRecording];

A movie won't be usable until it has been finished off, so if this is interrupted before this point, the recording will be lost.

Sample applications

Several sample applications are bundled with the framework source. Most are compatible with both iPhone and iPad-class devices. They attempt to show off various aspects of the framework and should be used as the best examples of the API while the framework is under development. These include:

ColorObjectTracking

A version of my ColorTracking example ported across to use GPUImage, this application uses color in a scene to track objects from a live camera feed. The four views you can switch between include the raw camera feed, the camera feed with pixels matching the color threshold in white, the processed video where positions are encoded as colors within the pixels passing the threshold test, and finally the live video feed with a dot that tracks the selected color. Tapping the screen changes the color to track to match the color of the pixels under your finger. Tapping and dragging on the screen makes the color threshold more or less forgiving. This is most obvious on the second, color thresholding view.

SimpleImageFilter

A bundled JPEG image is loaded into the application at launch, a filter is applied to it, and the result rendered to the screen. Additionally, this sample shows two ways of taking in an image, filtering it, and saving it to disk.

MultiViewFilterExample

From a single camera feed, four views are populated with realtime filters applied to camera. One is just the straight camera video, one is a preprogrammed sepia tone, and two are custom filters based on shader programs.

FilterShowcase

This demonstrates every filter supplied with GPUImage.

BenchmarkSuite

This is used to test the performance of the overall framework by testing it against CPU-bound routines and Core Image. Benchmarks involving still images and video are run against all three, with results displayed in-application.

Things that need work

This is just a first release, and I'll keep working on this to add more functionality. I also welcome any and all help with enhancing this. Right off the bat, these are missing elements I can think of:

  • Images that exceed 2048 pixels wide or high currently can't be processed on devices older than the iPad 2 or iPhone 4S.
  • Currently, it's difficult to create a custom filter with additional attribute inputs and a modified vertex shader.
  • Many common filters aren't built into the framework yet.
  • Video capture and processing should be done on a background GCD serial queue.
  • I'm sure that there are many optimizations that can be made on the rendering pipeline.
  • The aspect ratio of the input video is not maintained, but stretched to fill the final image.
  • Errors in shader setup and other failures need to be explained better, and the framework needs to be more robust when encountering odd situations.

Hopefully, people will find this to be helpful in doing fast image and video processing within their iOS applications.
