Implementing the iOS 7 Frosted-Glass (Blur) Effect

First, a look at the effect:


Core code:

//Apply a blur effect: image is the source image, blur is the blur level
- (UIImage *)blurryImage:(UIImage *)image withBlurLevel:(CGFloat)blur {
    //Clamp the blur level to a sensible range
    if ((blur < 0.1f) || (blur > 2.0f)) {
        blur = 0.5f;
    }
    
    //boxSize must be an odd number greater than 0, so force it odd
    int boxSize = (int)(blur * 100);
    boxSize -= (boxSize % 2) + 1;
    NSLog(@"boxSize:%i", boxSize);
    //Image processing
    CGImageRef img = image.CGImage;
    //Requires #import <Accelerate/Accelerate.h>
    /*
     The Accelerate framework contains C APIs for vector and matrix math, digital signal processing, large number handling, and image processing.
     */
    
    //vImage buffers: input buffer and output buffer
    vImage_Buffer inBuffer, outBuffer;
    vImage_Error error;
    //Pixel buffer for the output
    void *pixelBuffer;
    
    //Data provider: an opaque type that supplies Quartz with data
    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    //Copy of the provider's data
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
    
    //Width, height, bytes per row, and data pointer
    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);
    inBuffer.data = (void*)CFDataGetBytePtr(inBitmapData);
    
    //Pixel buffer: bytes per row * image height
    pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
    
    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(img);
    outBuffer.height = CGImageGetHeight(img);
    outBuffer.rowBytes = CGImageGetBytesPerRow(img);
    
    
    // A third, intermediate buffer: running the box convolution three times approximates a Gaussian blur and gives a smoother result
    void *pixelBuffer2 = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
    vImage_Buffer outBuffer2;
    outBuffer2.data = pixelBuffer2;
    outBuffer2.width = CGImageGetWidth(img);
    outBuffer2.height = CGImageGetHeight(img);
    outBuffer2.rowBytes = CGImageGetBytesPerRow(img);
    
    //Convolves a region of interest within an ARGB8888 source image by an implicit M x N kernel that has the effect of a box filter.
    error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer2, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
    error = vImageBoxConvolve_ARGB8888(&outBuffer2, &inBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
    error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
    
    
    if (error) {
        NSLog(@"error from convolution %ld", error);
    }
    
//    NSLog(@"字节组成部分:%zu",CGImageGetBitsPerComponent(img));
    //颜色空间DeviceRGB
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    //用图片创建上下文,CGImageGetBitsPerComponent(img),7,8
    CGContextRef ctx = CGBitmapContextCreate(
                                             outBuffer.data,
                                             outBuffer.width,
                                             outBuffer.height,
                                             8,
                                             outBuffer.rowBytes,
                                             colorSpace,
                                             CGImageGetBitmapInfo(image.CGImage));
    
    //Rebuild the processed image from the bitmap context
    CGImageRef imageRef = CGBitmapContextCreateImage (ctx);
    UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
    
    //Clean up; note that colorSpace must be released exactly once (releasing it twice crashes)
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    
    free(pixelBuffer);
    free(pixelBuffer2);
    CFRelease(inBitmapData);
    
    CGImageRelease(imageRef);
    
    return returnImage;
}
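For reference, here is a minimal usage sketch (the helpers snapshotOfView: and showFrostedOverlayOnView: are my own names, not part of the original demo): render a view into a UIImage, blur it with the method above, and lay the result over the view.

//Render a view into a UIImage so it can be blurred (hypothetical helper, not in the demo)
- (UIImage *)snapshotOfView:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}

//Blur the snapshot and lay it over the view to get the frosted-glass look
- (void)showFrostedOverlayOnView:(UIView *)view {
    UIImage *snapshot = [self snapshotOfView:view];
    UIImage *blurred = [self blurryImage:snapshot withBlurLevel:0.5f];
    UIImageView *overlay = [[UIImageView alloc] initWithImage:blurred];
    overlay.frame = view.bounds;
    [view addSubview:overlay];
}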

Demo code download: http://download.csdn.net/detail/rhljiayou/6000293

Update (2015-01-29):

Some readers reported that the demo crashes. I checked, and it does: the color space was being released one time too many. Thanks to the reader who pointed this out:

Remove the extra CGColorSpaceRelease(colorSpace); call.

Crash log: Assertion failed: (!space->is_singleton), function color_space_dealloc, file ColorSpaces/color-space

Another reader shared an alternative implementation:

//Requires #import <CoreImage/CoreImage.h>
- (UIImage *)applyBlurRadius:(CGFloat)radius toImage:(UIImage *)image
{
    if (radius < 0) radius = 0;
    
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
    
    // Set up the Gaussian blur filter
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:[NSNumber numberWithFloat:radius] forKey:@"inputRadius"];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
    
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return returnImage;
}

This also works, but it has a few issues.

Core Image is an Objective-C framework introduced in iOS 5. It is very efficient on iOS, but filter and rendering operations can still affect the main thread.

For example, my demo applies the filter on every slider change, so the blur value changes very quickly. Before using a new slider value you must check whether the filter is still busy; if it is still rendering the previous image, hold off on this update until that render finishes (see the sketch below). Getting this wrong often leads to crashes and memory leaks.
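A minimal sketch of that idea, assuming a rendering flag, an originalImage property, and a blurredImageView outlet (all hypothetical, not from the demo): run the Core Image blur off the main thread and drop slider values while a render is still in flight.

//Hypothetical slider callback; self.rendering, self.originalImage and
//self.blurredImageView are assumed properties, not part of the original demo
- (IBAction)blurSliderChanged:(UISlider *)slider {
    if (self.rendering) {
        return; //a previous render is still running, drop this value
    }
    self.rendering = YES;
    CGFloat radius = slider.value;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        //Do the expensive Core Image work off the main thread
        UIImage *blurred = [self applyBlurRadius:radius toImage:self.originalImage];
        dispatch_async(dispatch_get_main_queue(), ^{
            //Update the UI back on the main thread
            self.blurredImageView.image = blurred;
            self.rendering = NO;
        });
    });
}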

You can consult the Core Image documentation for more on this. OK.

Reposted from: https://my.oschina.net/u/139546/blog/374097
