iOS Dev Intro - Image Resizing



http://nshipster.com/image-resizing/


Image Resizing Techniques

Written by Mattt Thompson — September 15th, 2014 (revised)

Since time immemorial, iOS developers have been perplexed by a singular question: “How do you resize an image?”. It is a question of beguiling clarity, spurred on by a mutual mistrust of developer and platform. A thousand code samples litter web search results, each claiming to be the One True Solution, and all the others false prophets.

It’s embarrassing, really.

This week’s article endeavors to provide a clear explanation of the various approaches to image resizing on iOS (and OS X, making the appropriate UIImage → NSImage conversions), using empirical evidence to offer insights into the performance characteristics of each approach, rather than simply prescribing any one way for all situations.

Before reading any further, please note the following:

When setting a UIImage on a UIImageView, manual resizing is unnecessary for the vast majority of use cases. Instead, one can simply set the contentMode property to either .ScaleAspectFit to ensure that the entire image is visible within the frame of the image view, or .ScaleAspectFill to have the entire image view filled by the image, cropping as necessary from the center.

imageView.contentMode = .ScaleAspectFit
imageView.image = image

Determining Scaled Size

Before doing any image resizing, one must first determine the target size to scale to.

Scaling by Factor

The simplest way to scale an image is by a constant factor. Generally, this involves dividing by a whole number to reduce the original size (rather than multiplying by a whole number to magnify).

A new CGSize can be computed by scaling the width and height components individually:

let size = CGSize(width: image.size.width / 2, height: image.size.height / 2)

…or by applying a CGAffineTransform:

let size = CGSizeApplyAffineTransform(image.size, CGAffineTransformMakeScale(0.5, 0.5))

Scaling by Aspect Ratio

It’s often useful to scale the original size so that it fits within a rectangle without changing the original aspect ratio. AVMakeRectWithAspectRatioInsideRect is a useful function found in the AVFoundation framework that takes care of that calculation for you:

import AVFoundation
let rect = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds)

Resizing Images

There are a number of different approaches to resizing an image, each with different capabilities and performance characteristics.

UIGraphicsBeginImageContextWithOptions & UIImage -drawInRect:

The highest-level APIs for image resizing can be found in the UIKit framework. Given a UIImage, a temporary graphics context can be used to render a scaled version, using UIGraphicsBeginImageContextWithOptions() and UIGraphicsGetImageFromCurrentImageContext():

let image = UIImage(contentsOfFile: self.URL.path!)!

let size = CGSizeApplyAffineTransform(image.size, CGAffineTransformMakeScale(0.5, 0.5))
let hasAlpha = false
let scale: CGFloat = 0.0 // Automatically use scale factor of main screen

UIGraphicsBeginImageContextWithOptions(size, !hasAlpha, scale)
image.drawInRect(CGRect(origin: CGPointZero, size: size))

let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

UIGraphicsBeginImageContextWithOptions() creates a temporary rendering context into which the original is drawn. The first argument, size, is the target size of the scaled image. The second argument, isOpaque, is used to determine whether an alpha channel is rendered. Setting this to false for images without transparency (i.e., without an alpha channel) may result in an image with a pink hue. The third argument, scale, is the display scale factor. When set to 0.0, the scale factor of the main screen is used, which for Retina displays is 2.0 or higher (3.0 on the iPhone 6 Plus).
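
If the same resize happens in more than one place, those calls can be folded into a small helper. The following is a minimal sketch, assuming an opaque image and the main screen’s scale factor; the extension and the resizedImageWithSize(_:) name are illustrative, not part of UIKit:

extension UIImage {
    // Renders the receiver into an opaque bitmap context at the given
    // point size, using the main screen's scale factor for the backing store.
    func resizedImageWithSize(size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, true, 0.0)
        defer { UIGraphicsEndImageContext() }
        drawInRect(CGRect(origin: CGPointZero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}

// Call site:
let thumbnail = image.resizedImageWithSize(size)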

CGBitmapContextCreate & CGContextDrawImage

Core Graphics / Quartz 2D offers a lower-level set of APIs that allow for more advanced configuration. Given a CGImage, a temporary bitmap context is used to render the scaled image, using CGBitmapContextCreate() and CGBitmapContextCreateImage():

let cgImage = UIImage(contentsOfFile: self.URL.path!)!.CGImage

let width = CGImageGetWidth(cgImage) / 2
let height = CGImageGetHeight(cgImage) / 2
let bitsPerComponent = CGImageGetBitsPerComponent(cgImage)
let bytesPerRow = CGImageGetBytesPerRow(cgImage)
let colorSpace = CGImageGetColorSpace(cgImage)
let bitmapInfo = CGImageGetBitmapInfo(cgImage)

let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo.rawValue)

CGContextSetInterpolationQuality(context, kCGInterpolationHigh)

CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), cgImage)

let scaledImage = CGBitmapContextCreateImage(context).flatMap { UIImage(CGImage: $0) }

CGBitmapContextCreate takes several arguments to construct a context with the desired dimensions and amount of memory for each channel within a given color space. In the example, these values are fetched from the CGImage. Next, CGContextSetInterpolationQuality allows the context to interpolate pixels at various levels of fidelity. In this case, kCGInterpolationHigh is passed for best results. CGContextDrawImage draws the image at a given size and position, which makes it possible to crop the image on a particular edge or to fit a set of image features, such as faces. Finally, CGBitmapContextCreateImage creates a CGImage from the context.
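
As a sketch of that cropping behavior, the halved width and height computed above can be drawn into a shorter context at a vertical offset, so the excess is cropped equally from the top and bottom (the 2:1 letterbox target here is an arbitrary illustration, not from the original example):

// Scale into a context that is only half as tall as the scaled image;
// the draw rect is offset vertically so the overflow is cropped equally
// from the top and bottom edges.
let cropHeight = height / 2
let cropContext = CGBitmapContextCreate(nil, width, cropHeight, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo.rawValue)

let verticalOffset = (CGFloat(cropHeight) - CGFloat(height)) / 2.0
CGContextDrawImage(cropContext, CGRect(x: 0.0, y: verticalOffset, width: CGFloat(width), height: CGFloat(height)), cgImage)

let croppedImage = CGBitmapContextCreateImage(cropContext).flatMap { UIImage(CGImage: $0) }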

CGImageSourceCreateThumbnailAtIndex

Image I/O is a powerful, yet lesser-known framework for working with images. Independent of Core Graphics, it can read and write between many different formats, access photo metadata, and perform common image processing operations. The framework offers the fastest image encoders and decoders on the platform, with advanced caching mechanisms and even the ability to load images incrementally.

CGImageSourceCreateThumbnailAtIndex offers a concise API with different options than found in equivalent Core Graphics calls:

import ImageIO

if let imageSource = CGImageSourceCreateWithURL(self.URL, nil) {
    let options: [NSString: NSObject] = [
        kCGImageSourceThumbnailMaxPixelSize: max(size.width, size.height) / 2.0,
        kCGImageSourceCreateThumbnailFromImageAlways: true
    ]

    let scaledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options).flatMap { UIImage(CGImage: $0) }
}

Given a CGImageSource and set of options, CGImageSourceCreateThumbnailAtIndex creates a thumbnail image. Resizing is accomplished by the kCGImageSourceThumbnailMaxPixelSize option. Specifying the maximum dimension divided by a constant factor scales the image while maintaining the original aspect ratio. By specifying either kCGImageSourceCreateThumbnailFromImageIfAbsent or kCGImageSourceCreateThumbnailFromImageAlways, Image I/O will automatically cache the scaled result for subsequent calls.
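
For example, to generate a thumbnail only when the file doesn’t already contain one, and to honor the image’s EXIF orientation, one might pass kCGImageSourceCreateThumbnailFromImageIfAbsent together with kCGImageSourceCreateThumbnailWithTransform; the 640-pixel maximum below is an arbitrary choice for illustration:

if let imageSource = CGImageSourceCreateWithURL(self.URL, nil) {
    let options: [NSString: NSObject] = [
        kCGImageSourceCreateThumbnailFromImageIfAbsent: true, // reuse an embedded thumbnail when present
        kCGImageSourceCreateThumbnailWithTransform: true,     // apply the EXIF orientation to the result
        kCGImageSourceThumbnailMaxPixelSize: 640
    ]

    let thumbnail = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options).flatMap { UIImage(CGImage: $0) }
}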

Lanczos Resampling with Core Image

Core Image provides built-in Lanczos resampling functionality with the CILanczosScaleTransform filter. Although arguably a higher-level API than UIKit, the pervasive use of key-value coding in Core Image makes it unwieldy.

That said, at least the pattern is consistent. The process of creating a transform filter, configuring it, and rendering an output image is just like any other Core Image workflow:

let image = CIImage(contentsOfURL: self.URL)

let filter = CIFilter(name: "CILanczosScaleTransform")!
filter.setValue(image, forKey: "inputImage")
filter.setValue(0.5, forKey: "inputScale")
filter.setValue(1.0, forKey: "inputAspectRatio")
let outputImage = filter.valueForKey("outputImage") as! CIImage

let context = CIContext(options: [kCIContextUseSoftwareRenderer: false])
let scaledImage = UIImage(CGImage: context.createCGImage(outputImage, fromRect: outputImage.extent))

CILanczosScaleTransform accepts an inputImage, inputScale, and inputAspectRatio, all of which are pretty self-explanatory. A CIContext is used to create a UIImage by way of a CGImageRef intermediary representation, since UIImage(CIImage:) doesn’t often work as expected.

Creating a CIContext is an expensive operation, so a cached context should always be used for repeated resizing. A CIContext can be created using either the GPU or the CPU (much slower) for rendering—use the kCIContextUseSoftwareRenderer key in the options dictionary to specify which.
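
A minimal sketch of that caching, using a hypothetical ImageScaler type to hold a single shared, GPU-backed context:

struct ImageScaler {
    // Created once and reused for every subsequent createCGImage call.
    static let sharedContext = CIContext(options: [kCIContextUseSoftwareRenderer: false])
}

// Then, for each resize, reuse the shared context instead of creating a new one:
let rendered = UIImage(CGImage: ImageScaler.sharedContext.createCGImage(outputImage, fromRect: outputImage.extent))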

vImage in Accelerate

The Accelerate framework includes vImage, a suite of image-processing functions, among them a set for scaling image buffers. These lower-level APIs promise high performance with low power consumption, but at the cost of managing the buffers yourself. The following is a Swift version of a method suggested by Nyx0uf on GitHub:

let image = UIImage(contentsOfFile: self.URL.path!)!
let cgImage = image.CGImage!

// create a source buffer
var format = vImage_CGImageFormat(bitsPerComponent: 8, bitsPerPixel: 32, colorSpace: nil, 
    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.First.rawValue), 
    version: 0, decode: nil, renderingIntent: CGColorRenderingIntent.RenderingIntentDefault)
var sourceBuffer = vImage_Buffer()
defer {
    sourceBuffer.data.dealloc(Int(sourceBuffer.height) * Int(sourceBuffer.rowBytes))
}

var error = vImageBuffer_InitWithCGImage(&sourceBuffer, &format, nil, cgImage, numericCast(kvImageNoFlags))
guard error == kvImageNoError else { return nil }

// create a destination buffer
let scale = UIScreen.mainScreen().scale
let destWidth = Int(image.size.width * 0.5 * scale)
let destHeight = Int(image.size.height * 0.5 * scale)
let bytesPerPixel = CGImageGetBitsPerPixel(image.CGImage) / 8
let destBytesPerRow = destWidth * bytesPerPixel
let destData = UnsafeMutablePointer<UInt8>.alloc(destHeight * destBytesPerRow)
defer {
    destData.dealloc(destHeight * destBytesPerRow)
}
var destBuffer = vImage_Buffer(data: destData, height: vImagePixelCount(destHeight), width: vImagePixelCount(destWidth), rowBytes: destBytesPerRow)

// scale the image
error = vImageScale_ARGB8888(&sourceBuffer, &destBuffer, nil, numericCast(kvImageHighQualityResampling))
guard error == kvImageNoError else { return nil }

// create a CGImage from vImage_Buffer
let destCGImage = vImageCreateCGImageFromBuffer(&destBuffer, &format, nil, nil, numericCast(kvImageNoFlags), &error)?.takeRetainedValue()
guard error == kvImageNoError else { return nil }

// create a UIImage
let scaledImage = destCGImage.flatMap { UIImage(CGImage: $0, scale: 0.0, orientation: image.imageOrientation) }

The Accelerate APIs used here clearly operate at a lower level than the other resizing methods. To use this method, you first create a source buffer from your CGImage using a vImage_CGImageFormat with vImageBuffer_InitWithCGImage(). The destination buffer is allocated at the desired image resolution, then vImageScale_ARGB8888 does the actual work of resizing the image. Managing your own buffers when operating on images larger than your app’s memory limit is left as an exercise for the reader.
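
Note that the guard statements and their return nil above assume the snippet lives inside a function that returns an optional image; a hypothetical enclosing signature (illustrative, not from the original post) might look like this:

func resizedImageWithVImage(image: UIImage) -> UIImage? {
    // The vImage code above goes here, with `image` supplying
    // image.CGImage, image.size, and image.imageOrientation,
    // and `scaledImage` becoming the return value.
    return nil // placeholder so the stub compiles
}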


Performance Benchmarks

So how do these various approaches stack up to one another?

Here are the results of a set of performance benchmarks done on an iPhone 6 running iOS 8.4, via this project:

JPEG

Loading, scaling, and displaying a large, high-resolution (12000 ⨉ 12000 px, 20 MB JPEG) source image from NASA Visible Earth at 1/10th the size:

Operation          Time (sec)   σ
UIKit              0.612        14%
Core Graphics ¹    0.266        3%
Image I/O          0.255        2%
Core Image ²       3.703        33%
vImage ³           n/a          n/a

PNG

Loading, scaling, and displaying a reasonably large (1024 ⨉ 1024 px, 1 MB PNG) rendering of the Postgres.app icon at 1/10th the size:

Operation          Time (sec)   σ
UIKit              0.044        30%
Core Graphics ⁴    0.036        10%
Image I/O          0.038        11%
Core Image ⁵       0.053        68%
vImage             0.050        25%

¹ ⁴ Results were consistent across different values of CGInterpolationQuality, with negligible differences in performance benchmarks.

² ⁵ Setting kCIContextUseSoftwareRenderer to true in the options passed on CIContext creation yielded results an order of magnitude slower than base results.

³ The size of the NASA Visible Earth image was larger than could be processed in a single pass on the device.

Conclusions

  • UIKit, Core Graphics, and Image I/O all perform well for scaling operations on most images.
  • Core Image is outperformed for image scaling operations. In fact, it is specifically recommended in the Performance Best Practices section of the Core Image Programming Guide to use Core Graphics or Image I/O functions to crop or downsample images beforehand.
  • For general image scaling without any additional functionality, UIGraphicsBeginImageContextWithOptions is probably the best option (see the sketch after this list).
  • If image quality is a consideration, consider using CGBitmapContextCreate in combination with CGContextSetInterpolationQuality.
  • When scaling images with the intended purpose of displaying thumbnails, CGImageSourceCreateThumbnailAtIndex offers a compelling solution for both rendering and caching.
  • Unless you’re already working with vImage, the extra work to use the low-level Accelerate framework for resizing doesn’t pay off.
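
Putting the general recommendation into practice: the aspect-ratio calculation from AVFoundation composes naturally with the UIKit approach. A minimal sketch, reusing the illustrative resizedImageWithSize(_:) helper from the UIKit section (the 300 pt bounding box is an arbitrary example):

import AVFoundation

// Fit the image into a 300 × 300 pt box without distorting it, then
// render the fitted size with UIGraphicsBeginImageContextWithOptions
// via the helper sketched earlier.
let boundingRect = CGRect(x: 0, y: 0, width: 300, height: 300)
let fittedSize = AVMakeRectWithAspectRatioInsideRect(image.size, boundingRect).size
let thumbnail = image.resizedImageWithSize(fittedSize)
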
nsmutablehipster

Questions? Corrections? Issues and pull requests are always welcome — NSHipster is made better by readers like you.

This article uses Swift version 2.0 and was last reviewed on September 30, 2015. Find status information for all articles on the status page.

September 15th, 2014

Original publication.

September 30th, 2015

Revised for Swift 2.0, vImage method added.

