I. Reading images
1. From the app bundle (resources)
- UIImage* image=[UIImage imageNamed:@"1.jpg"];
2. From the network
- NSURL *url=[NSURL URLWithString:@"http://www.sinaimg.cn/qc/photo_auto/chezhan/2012/50/00/15/80046_950.jpg"];
- UIImage *imgFromUrl =[[UIImage alloc]initWithData:[NSData dataWithContentsOfURL:url]];
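Note that dataWithContentsOfURL: blocks the calling thread, so in practice the download should happen off the main thread. A minimal GCD-based sketch, assuming `url` and an `imageView` already exist:

```objc
// download on a background queue, then update the UI on the main thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *data = [NSData dataWithContentsOfURL:url];
    UIImage *img = [[UIImage alloc] initWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = img;
        [img release]; // MRC, matching the rest of this document
    });
});
```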
3. From the local file system
- //read a local image that is not in the app bundle
- NSString *aPath3=[NSString stringWithFormat:@"%@/Documents/%@.jpg",NSHomeDirectory(),@"test"];
- UIImage *imgFromUrl3=[[UIImage alloc]initWithContentsOfFile:aPath3];
- UIImageView* imageView3=[[UIImageView alloc]initWithImage:imgFromUrl3];
4. Reading an image back from the current graphics context
- //add ImageIO.framework and #import <ImageIO/ImageIO.h>
- CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
- CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
- CGContextRef ctx = UIGraphicsGetCurrentContext();
- CGContextSaveGState(ctx);
- //two ways to transform the CTM:
- //CGContextConcatCTM(ctx, CGAffineTransformMakeScale(0.2, -0.2));
- //CGContextScaleCTM(ctx, 1, -1);
- //note: the y-axis must be flipped; ctx is used as the image source
- //CGBitmapContextCreateImage requires ctx to be a bitmap context (e.g. one begun with UIGraphicsBeginImageContext)
- CGImageRef capture = CGBitmapContextCreateImage(ctx);
- CGContextDrawImage(ctx, CGRectMake(160, 0, 160, 230), [image CGImage]);
- CGContextDrawImage(ctx, CGRectMake(160, 230, 160, 230), img);
- CGImageRef capture2 = CGBitmapContextCreateImage(ctx);
5. Reading the image with Quartz's CGImageSourceRef
- CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
- CGImageRef img= CGImageSourceCreateImageAtIndex(source,0,NULL);
II. Saving images
1. Convert to NSData to save the image (imgFromUrl is a UIImage)
- //save the image; both ways of building the path work
- //NSArray*paths=NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
- //NSString*documentsDirectory=[paths objectAtIndex:0];
- //NSString*aPath=[documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%@.jpg",@"test"]];
- NSString *aPath=[NSString stringWithFormat:@"%@/Documents/%@.jpg",NSHomeDirectory(),@"test"];
- NSData *imgData = UIImageJPEGRepresentation(imgFromUrl, 0); //compressionQuality 0 = smallest file, lowest quality
- [imgData writeToFile:aPath atomically:YES];
2. Use Quartz's CGImageDestinationRef to write the image out. This approach is uncommon, so it is not covered here; see Apple's Quartz 2D Programming Guide for details.
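For reference, a minimal sketch of the CGImageDestinationRef route (assuming a UIImage named `image` and a destination path `aPath`; requires ImageIO.framework and, for kUTTypeJPEG, MobileCoreServices):

```objc
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h> // for kUTTypeJPEG

NSURL *outURL = [NSURL fileURLWithPath:aPath];
CGImageDestinationRef dest =
    CGImageDestinationCreateWithURL((CFURLRef)outURL, kUTTypeJPEG, 1, NULL);
if (dest) {
    // optional properties, e.g. JPEG compression quality
    NSDictionary *props =
        [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.8f]
                                    forKey:(NSString *)kCGImageDestinationLossyCompressionQuality];
    CGImageDestinationAddImage(dest, [image CGImage], (CFDictionaryRef)props);
    CGImageDestinationFinalize(dest); // actually writes the file
    CFRelease(dest);
}
```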
III. Drawing images (draw/painting)
1. Add a UIImageView to the view hierarchy
- UIImageView* imageView=[[UIImageView alloc]initWithImage:image];
- imageView.frame=CGRectMake(0, 0, 320, 480);
- [self addSubview:imageView];
- [imageView release];
2. Draw with UIImage's drawAtPoint:/drawInRect: (only valid inside a current graphics context, e.g. in drawRect:)
- [image4 drawAtPoint:CGPointMake(100, 0)];
3. Draw with CGContextDrawImage
- CGContextDrawImage(ctx, CGRectMake(160, 0, 160, 230), [image CGImage]);
- //note: Quartz's y-axis is flipped relative to UIKit's, so the image draws upside down unless the CTM is adjusted
4.CGLayer
This is Apple's recommended approach for offscreen drawing. It can be preferable to a bitmap context because it appears to take advantage of the iPhone's graphics hardware for acceleration.
- CGLayerRef cg=CGLayerCreateWithContext(ctx, CGSizeMake(320, 480), NULL);
- //the CGLayer's own context must be used as the cache context; this step is required
- CGContextRef layerContext=CGLayerGetContext(cg);
- CGContextDrawImage(layerContext, CGRectMake(160, 230, 160, 230), img);
- CGContextDrawLayerAtPoint(ctx, CGPointMake(0, 0), cg);
5. A CALayer's contents
- UIImage* image=[UIImage imageNamed:@"1.jpg"];
- CALayer *ly=[CALayer layer];
- ly.frame=CGRectMake(0, 0, 320, 460);
- ly.contents=[image CGImage];
- [self.layer addSublayer:ly];
IV. Miscellaneous
1. Converting between CGImage and UIImage
This lets you switch between UIKit and Quartz types at any time and process images in whichever framework you know best.
CGImageRef cgImage = [uiImage CGImage];
UIImage* uiImage=[UIImage imageWithCGImage:cgImage];
2. A pitfall with UIImage's resizableImageWithCapInsets:
Take a 44x29 image with Insets = UIEdgeInsetsMake(10, 10, 10, 10). The non-@2x version behaves correctly, but an image of the same pixel size used as @2x stutters badly during transitions, presumably due to repeated redrawing. The surface cause is that the insets are specified in points: when stretching at @2x, the reserved pixels are 20 at the top and another 20 at the bottom, yet the image is only 29 pixels tall, so the geometry is invalid. Setting the insets to UIEdgeInsetsMake(5, 10, 5, 10) makes it behave normally, so watch out for this.
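A minimal sketch of the fix described above (the asset name is an assumption):

```objc
// For a 44x29-pixel @2x asset, halve the vertical insets so that the
// top + bottom caps stay within the image's actual height.
UIImage *bubble = [UIImage imageNamed:@"bubble"]; // picks up bubble@2x.png on retina
UIImage *stretchable =
    [bubble resizableImageWithCapInsets:UIEdgeInsetsMake(5, 10, 5, 10)];
```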
3. Notes on animated images
After setting animationImages you must call startAnimating; the animation does not start on its own.
Also, this approach is a poor fit when loading a large number of animation frames, because decoding so many images at once can easily exhaust memory. The method below can be used instead (I have not tried it myself); it simply switches the frames by hand rather than using the system API.
- imgV = [[UIImageView alloc]initWithFrame:CGRectMake(40, 40, 128, 128)];
- [self.window addSubview:imgV];
- [self performSelectorInBackground:@selector(playAnim) withObject:nil];
- [imgV release];
- -(void)playAnim{
-     for (int i=0; i<101; i++){
-         usleep(100000); // ~10 frames per second
-         UIImage *image=[[UIImage alloc]initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"%d",i+1] ofType:@"tiff"]];
-         [self performSelectorOnMainThread:@selector(changeImage:) withObject:image waitUntilDone:YES];
-         [image release]; // performSelectorOnMainThread retains its argument
-     }
- }
- -(void)changeImage:(UIImage*)image{
-     imgV.image=image;
- }
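For contrast, the standard UIImageView animation API looks like this (a sketch; the frame images are assumed to exist in the bundle):

```objc
NSMutableArray *frames = [NSMutableArray array];
for (int i = 1; i <= 10; i++) {
    [frames addObject:[UIImage imageNamed:[NSString stringWithFormat:@"frame%d.png", i]]];
}
UIImageView *animView = [[UIImageView alloc] initWithFrame:CGRectMake(40, 40, 128, 128)];
animView.animationImages = frames;   // note: all frames are held in memory at once
animView.animationDuration = 1.0;    // seconds for one full cycle
animView.animationRepeatCount = 0;   // 0 = repeat forever
[animView startAnimating];           // required; it will not start on its own
```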
4. Setting a UIImage on a UIControl
The problem: a tiny close ("x") button needs to respond to taps over a much larger area. This is actually simple:
- UIImage *bg=[UIImage imageNamed:@"heizi1.jpg"];
- //the image is larger than the tap area, so just scale it down
- bg=[self scaleImage:bg ToSize:(CGSize){100,100}];
- UIButton* button = [[UIButton alloc]initWithFrame:CGRectMake(0, 0, 200, 200)];
- //an image larger than the button is stretched; a smaller one is centered
- [button setImage:bg forState:UIControlStateNormal];
One more note: scaling at runtime costs performance, so for an icon like this it is better to ship two properly sized sets of images.
Image-scaling code:
- -(UIImage *)scaleImage:(UIImage *)img ToSize:(CGSize)itemSize{
-     UIGraphicsBeginImageContext(itemSize);
-     CGRect imageRect = CGRectMake(0, 0, itemSize.width, itemSize.height);
-     [img drawInRect:imageRect];
-     UIImage *i = UIGraphicsGetImageFromCurrentImageContext();
-     UIGraphicsEndImageContext();
-     return i;
- }
Snapshotting a UIView into an image:
- #import <QuartzCore/QuartzCore.h>
- -(UIImage *)getImageFromView:(UIView *)orgView{
- UIGraphicsBeginImageContext(orgView.bounds.size);
- [orgView.layer renderInContext:UIGraphicsGetCurrentContext()];
- UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
- UIGraphicsEndImageContext();
- return image;
- }
Several ways to take a screenshot
1.
UIGraphicsBeginImageContextWithOptions(pageView.page.bounds.size, YES, zoomScale);
[pageView.page.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *uiImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
2.
- (UIImage *) glToUIImage {
    DWScrollView *pageView = [self getActivePageView];
    pageView.page.backgroundColor = [UIColor clearColor];
    // self.backgroundColor=[UIColor clearColor];
    NSInteger myDataLength = 320 * 308 * 4;
    // allocate a buffer and read the GL pixels into it
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 308, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // GL renders "upside down", so swap top to bottom into a new buffer.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < 308; y++) {
        for (int x = 0; x < 320 * 4; x++) {
            if (buffer[y * 4 * 320 + x] == 0)
                buffer2[(307 - y) * 320 * 4 + x] = 1;
            else
                buffer2[(307 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    free(buffer); // the unflipped buffer is no longer needed
    // make a data provider from the flipped buffer
    // (buffer2 is leaked here; a CGDataProvider release callback would be needed to free it safely)
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the CGImage
    CGImageRef imageRef = CGImageCreate(320, 308, bitsPerComponent, bitsPerPixel,
                                        bytesPerRow, colorSpaceRef, bitmapInfo,
                                        provider, NULL, NO, renderingIntent);
    // then make the UIImage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    UIImageWriteToSavedPhotosAlbum(myImage, nil, nil, nil);
    return myImage;
}
3.
// grab the screen from a GL context
- (void)grabScreen {
    unsigned char buffer[320*480*4];
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, 320*480*4, NULL);
    CGImageRef iref = CGImageCreate(320, 480, 8, 32, 320*4,
                                    CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault,
                                    ref, NULL, true, kCGRenderingIntentDefault);
    CGFloat width = CGImageGetWidth(iref);
    CGFloat height = CGImageGetHeight(iref);
    size_t length = width * height * 4;
    uint32_t *pixels = (uint32_t *) malloc(length);
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, 320*4,
                                                 CGImageGetColorSpace(iref),
                                                 kCGImageAlphaLast | kCGBitmapByteOrder32Big);
    // flip vertically: GL and Quartz disagree on the y-axis
    CGContextTranslateCTM(context, 0.0, height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), iref);
    CGImageRef outputRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:outputRef];
    UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
    CGImageRelease(outputRef);
    CGContextRelease(context);
    CGImageRelease(iref);
    CGDataProviderRelease(ref);
    free(pixels);
}
4.
// note: UIGetScreenImage is a private API; App Store apps should not rely on it
CGImageRef UIGetScreenImage();
void SaveScreenImage(NSString *path)
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
CGImageRef cgImage = UIGetScreenImage();
void *imageBytes = NULL;
if (cgImage == NULL) {
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
imageBytes = malloc(320 * 480 * 4);
CGContextRef
context = CGBitmapContextCreate(imageBytes, 320, 480, 8, 320 * 4,
colorspace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorspace);
for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
CGRect bounds = [window bounds];
CALayer *layer = [window layer];
CGContextSaveGState(context);
if ([layer contentsAreFlipped]) {
CGContextTranslateCTM(context, 0.0f, bounds.size.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
}
[layer renderInContext:(CGContextRef)context];
CGContextRestoreGState(context);
}
cgImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
}
NSData *pngData = UIImagePNGRepresentation([UIImage imageWithCGImage:cgImage]);
CGImageRelease(cgImage);
if (imageBytes)
free(imageBytes);
[pngData writeToFile:path atomically:YES];
[pool release];
}
5.
+ (UIImage *)imageWithScreenContents
{
CGImageRef cgScreen = UIGetScreenImage();
if (cgScreen) {
UIImage *result = [UIImage imageWithCGImage:cgScreen];
CGImageRelease(cgScreen);
return result;
}
return nil;
}
Compositing two images into one in code:
- (UIImage *)addImage:(UIImage *)image1 toImage:(UIImage *)image2 {
UIGraphicsBeginImageContext(image1.size);
// Draw image1
[image1 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
// Draw image2
[image2 drawInRect:CGRectMake(0, 0, image2.size.width, image2.size.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultingImage;
}
On iOS 6 the screenshot comes out black; the fix is as follows:
// before: the backbuffer is not preserved, so reading pixels after presentation yields black
EAGLView *glView = [EAGLView viewWithFrame:[window bounds]
                               pixelFormat:kEAGLColorFormatRGB565 // kEAGLColorFormatRGBA8
                               depthFormat:0]; // GL_DEPTH_COMPONENT16_OES
// after: create the view with preserveBackbuffer:YES
EAGLView *glView = [EAGLView viewWithFrame:[window bounds]
                               pixelFormat:kEAGLColorFormatRGB565 // kEAGLColorFormatRGBA8
                               depthFormat:0 // GL_DEPTH_COMPONENT16_OES
                        preserveBackbuffer:YES
                                sharegroup:nil
                             multiSampling:NO
                           numberOfSamples:0];
+ (UIImage *)makeaShot {
    [CCDirector sharedDirector].nextDeltaTimeZero = YES;
    CGSize winSize = [CCDirector sharedDirector].winSize;
    CCLayerColor *whitePage = [CCLayerColor layerWithColor:ccc4(255, 255, 255, 0)
                                                     width:winSize.width
                                                    height:winSize.height];
    whitePage.position = ccp(winSize.width / 2, winSize.height / 2);
    CCRenderTexture *rtx = [CCRenderTexture renderTextureWithWidth:winSize.width
                                                            height:winSize.height];
    [rtx begin];
    [whitePage visit];
    [[[CCDirector sharedDirector] runningScene] visit];
    [rtx end];
    return [rtx getUIImageFromBuffer];
}

- (UIImage *)screenshotWithStartNode:(CCNode *)startNode {
    [CCDirector sharedDirector].nextDeltaTimeZero = YES;
    CGSize winSize = [CCDirector sharedDirector].winSize;
    CCRenderTexture *rtx = [CCRenderTexture renderTextureWithWidth:winSize.width
                                                            height:winSize.height];
    [rtx begin];
    [startNode visit];
    [rtx end];
    return [rtx getUIImage];
}

// usage: capture starting from the running scene's first child
CCScene *scene = [[CCDirector sharedDirector] runningScene];
CCNode *n = [scene.children objectAtIndex:0];
UIImage *img = [self screenshotWithStartNode:n];