# Processing Images


Core Image has three classes that support image processing on iOS and OS X:


- CIFilter is a mutable object that represents an effect. A filter object has at least one input parameter and produces an output image. A filter operates on an image's pixels, using key-value settings to control the specifics of the effect.

- CIImage is an immutable object that represents an image. You can synthesize image data or provide it from a file or the output of another CIFilter object.

- CIContext is an object through which Core Image draws the results produced by a filter. A Core Image context can be based on the CPU or the GPU.

The remainder of this chapter provides all the details you need to use Core Image filters and the CIFilter, CIImage, and CIContext classes on iOS and OS X.


## Overview

Processing an image is straightforward, as shown in Listing 1-1. This example uses Core Image methods specific to iOS; see below for the corresponding OS X methods. Each numbered step in the listing is described in more detail following the listing.


Note: To use Core Image in your app, you should add the framework to your Xcode project (CoreImage.framework on iOS or QuartzCore.framework on OS X) and import the corresponding header in your source code files.

Listing 1-1 The basics of applying a filter to an image

CIContext *context = [CIContext contextWithOptions:nil];               // 1

CIImage *image = [CIImage imageWithContentsOfURL:myURL];               // 2

CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];           // 3 CISepiaTone is the filter name of the sepia-tone effect

[filter setValue:image forKey:kCIInputImageKey];                       // The source image to process

[filter setValue:@0.8f forKey:kCIInputIntensityKey];                   // The filter's parameter name and its value

CIImage *result = [filter valueForKey:kCIOutputImageKey];              // 4 The filtered result

CGRect extent = [result extent];

CGImageRef cgImage = [context createCGImage:result fromRect:extent];   // 5


Here’s what the code does:

1. Create a CIContext object. The contextWithOptions: method is only available on iOS. For details on other methods for iOS and OS X, see Table 1-2.

2. Create a CIImage object. You can create a CIImage from a variety of sources, such as a URL. See Creating a CIImage Object for more options.

3. Create the filter and set values for its input parameters. There are more compact ways to set values than shown here. See Creating a CIFilter Object and Setting Values.

4. Get the output image. The output image is a recipe for how to produce the image. The image is not yet rendered. See Getting the Output Image.

5. Render the CIImage to a Core Graphics image that is ready for display or saving to a file.

Important: Some Core Image filters produce images of infinite extent, such as those in the CICategoryTileEffect category. Prior to rendering, infinite images must either be cropped (CICrop filter) or you must specify a rectangle of finite dimensions for rendering the image.

## The Built-in Filters

Core Image comes with dozens of built-in filters ready to support image processing in your app. Core Image Filter Reference lists these filters, their characteristics, their iOS and OS X availability, and shows a sample image produced by each filter. Because the list of built-in filters can change, Core Image provides methods that let you query the system for the available filters (see Querying the System for Filters).
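A minimal sketch of such a query, using the filterNamesInCategory: class method and the kCICategoryBuiltIn category constant:

```objc
// Ask the system for every built-in filter name.
NSArray *filterNames = [CIFilter filterNamesInCategory:kCICategoryBuiltIn];
for (NSString *name in filterNames) {
    NSLog(@"%@", name);   // e.g. CISepiaTone, CIHueAdjust, ...
}
```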


A filter category specifies the type of effect (blur, distortion, generator, and so forth) or its intended use (still images, video, nonsquare pixels, and so on). A filter can be a member of more than one category. A filter also has a display name, which is the name to show to users, and a filter name, which is the name you must use to access the filter programmatically.


Most filters have one or more input parameters that let you control how processing is done. Each input parameter has an attribute class that specifies its data type, such as NSNumber. An input parameter can optionally have other attributes, such as its default value, the allowable minimum and maximum values, the display name for the parameter, and any other attributes that are described in CIFilter Class Reference.


For example, the CIColorMonochrome filter has three input parameters: the image to process, a monochrome color, and the color intensity. You supply the image and have the option to set a color and its intensity. Most filters, including the CIColorMonochrome filter, have default values for each nonimage input parameter. Core Image uses the default values to process your image if you choose not to supply your own values for the input parameters.
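Relying on those defaults, a filter such as CIColorMonochrome can be applied with nothing but an input image. A sketch (myCIImage stands for an existing CIImage in your app):

```objc
CIFilter *mono = [CIFilter filterWithName:@"CIColorMonochrome"];
// On OS X, call [mono setDefaults] first; on iOS the defaults are already set.
[mono setValue:myCIImage forKey:kCIInputImageKey];
// inputColor and inputIntensity keep their default values.
CIImage *monoResult = [mono valueForKey:kCIOutputImageKey];
```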


Filter attributes are stored as key-value pairs. The key is a constant that identifies the attribute and the value is the setting associated with the key. Core Image attribute values are typically one of the data types listed in Table 1-1.


Core Image uses key-value coding, which means you can get and set values for the attributes of a filter by using the methods provided by the NSKeyValueCoding protocol. (For more information, see Key-Value Coding Programming Guide.)
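In practice, that means a filter's parameters can be read and written with valueForKey: and setValue:forKey:. A short sketch (the filter and values here are illustrative):

```objc
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:@0.5f forKey:@"inputIntensity"];              // set an attribute by key
NSNumber *intensity = [sepia valueForKey:@"inputIntensity"];  // read it back
```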


## Creating a Core Image Context

To render the image, you need to create a Core Image context and then use that context to draw the output image. A Core Image context represents a drawing destination. The destination determines whether Core Image uses the GPU or the CPU for rendering. Table 1-2 lists the various methods you can use for specific platforms and renderers.


## Creating a Core Image Context on iOS When You Don’t Need Real-Time Performance

If your app doesn’t require real-time display, you can create a CIContext object as follows:

CIContext *context = [CIContext contextWithOptions:nil];
This method can use either the CPU or GPU for rendering. To specify which to use, set up an options dictionary and add the key kCIContextUseSoftwareRenderer with the appropriate Boolean value for your app. CPU rendering is slower than GPU rendering. But in the case of GPU rendering, the resulting image is not displayed until after it is copied back to CPU memory and converted to another image type such as a UIImage object.
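For example, the options dictionary for a CPU-only (software) renderer might look like this sketch:

```objc
// Force the software renderer; pass @NO (or nil options) to allow GPU rendering.
NSDictionary *options = @{ kCIContextUseSoftwareRenderer : @YES };
CIContext *cpuContext = [CIContext contextWithOptions:options];
```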


## Creating a Core Image Context on iOS When You Need Real-Time Performance

If your app supports real-time image processing, you should create a CIContext object from an EAGL context rather than using contextWithOptions: and specifying the GPU. The advantage is that the rendered image stays on the GPU and never gets copied back to CPU memory. First you need to create an EAGL context:


EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
Then use the method contextWithEAGLContext: as shown in Listing 1-2 to create a CIContext object.


You should turn off color management by supplying null for the working color space. Color management slows down performance. You’ll want to use color management for situations that require color fidelity. But in a real-time app, color fidelity is often not a concern. (See Does Your App Need Color Management?)


Listing 1-2 Creating a CIContext on iOS for real-time performance

NSDictionary *options = @{ kCIContextWorkingColorSpace : [NSNull null] };

CIContext *myContext = [CIContext contextWithEAGLContext:myEAGLContext options:options];


## Creating a Core Image Context from a CGContext on OS X

You can create a Core Image context from a Quartz 2D graphics context using code similar to that shown in Listing 1-3, which is an excerpt from the drawRect: method in a Cocoa app. You get the current NSGraphicsContext, convert that to a Quartz 2D graphics context (CGContextRef), and then provide the Quartz 2D graphics context as an argument to the contextWithCGContext:options: method of the CIContext class. For information on Quartz 2D graphics contexts, see Quartz 2D Programming Guide.


Listing 1-3 Creating a Core Image context from a Quartz 2D graphics context

context = [CIContext contextWithCGContext:
              [[NSGraphicsContext currentContext] graphicsPort]
                                   options: nil];

## Creating a Core Image Context from an OpenGL Context on OS X

The code in Listing 1-4 shows how to set up a Core Image context from the current OpenGL graphics context. It’s important that the pixel format for the context includes the NSOpenGLPFANoRecovery constant as an attribute. Otherwise Core Image may not be able to create another context that shares textures with this one. You must also make sure that you pass a pixel format whose data type is CGLPixelFormatObj, as shown in Listing 1-4. For more information on pixel formats and OpenGL, see OpenGL Programming Guide for Mac.


Listing 1-4 Creating a Core Image context from an OpenGL graphics context

const NSOpenGLPixelFormatAttribute attr[] = {
NSOpenGLPFAAccelerated,
NSOpenGLPFANoRecovery,
NSOpenGLPFAColorSize, 32, 0
};

NSOpenGLPixelFormat *pf = [[NSOpenGLPixelFormat alloc] initWithAttributes:attr];

CIContext *myCIContext = [CIContext contextWithCGLContext: CGLGetCurrentContext()
pixelFormat: [pf CGLPixelFormatObj]
options: nil];

## Creating a Core Image Context from an NSGraphicsContext on OS X

The CIContext method of the NSGraphicsContext class returns a CIContext object that you can use to render into the NSGraphicsContext object. The CIContext object is created on demand and remains in existence for the lifetime of its owning NSGraphicsContext object. You create the Core Image context using a line of code similar to the following:

[[NSGraphicsContext currentContext] CIContext];


## Creating a CIImage Object

Core Image filters process Core Image images (CIImage objects). Table 1-3 lists the methods that create a CIImage object. The method you use depends on the source of the image. Keep in mind that a CIImage object is really an image recipe; Core Image doesn’t actually produce any pixels until it’s called on to render results to a destination.


Note: In OS X v10.5 and later, you can supply RAW image data directly to a filter. See “RAW Image Options” in CIFilter Class Reference.
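A sketch of a few of the creation methods (the source objects myURL, myImageData, and myCGImage are assumed to exist in your app):

```objc
CIImage *fromURL  = [CIImage imageWithContentsOfURL:myURL];  // file on disk
CIImage *fromData = [CIImage imageWithData:myImageData];     // e.g. JPEG or PNG data in memory
CIImage *fromCG   = [CIImage imageWithCGImage:myCGImage];    // existing Quartz image
```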

## Creating a CIFilter Object and Setting Values

The filterWithName: method creates a filter whose type is specified by the name argument. The name argument is a string whose value must match exactly the filter name of a built-in filter (see The Built-in Filters). You can obtain a list of filter names by following the instructions in Querying the System for Filters, or you can look up a filter name in Core Image Filter Reference.


On iOS, the input values for a filter are set to default values when you call the filterWithName: method.


On OS X, the input values for a filter are undefined when you first create it, which is why you either need to call the setDefaults method to set the default values or supply values for all input parameters at the time you create the filter by calling the method filterWithName:withInputParameters:. If you call setDefaults, you can call setValue:forKey: later to change the input parameter values.

If you don’t know the input parameters for a filter, you can get an array of them using the method inputKeys. (Or, you can look up the input parameters for most of the built-in filters in Core Image Filter Reference.) Filters, except for generator filters, require an input image. Some require two or more images or textures. Set a value for each input parameter whose default value you want to change by calling the method setValue:forKey:.
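A quick sketch of that kind of introspection:

```objc
CIFilter *filter = [CIFilter filterWithName:@"CIHueAdjust"];
NSLog(@"input keys: %@", [filter inputKeys]);   // e.g. inputImage, inputAngle
NSLog(@"attributes: %@", [filter attributes]);  // per-parameter type, min, max, default
```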


Let’s look at an example of setting up a filter to adjust the hue of an image. The filter’s name is CIHueAdjust. As long as you enter the name correctly, you’ll be able to create the filter with this line of code:


hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];
Defaults are set for you on iOS but not on OS X. When you create a filter on OS X, it’s advisable to immediately set the input values. In this case, set the defaults:

[hueAdjust setDefaults];
This filter has two input parameters: the input image and the input angle. The input angle for the hue adjustment filter refers to the location of the hue in the HSV and HLS color spaces. This is an angular measurement that can vary from 0.0 to 2 pi. A value of 0 indicates the color red; the color green corresponds to 2/3 pi radians, and the color blue is 4/3 pi radians.


Next you’ll want to specify an input image and an angle. The image can be one created from any of the methods listed in Creating a CIImage Object. Figure 1-1 shows the unprocessed image.


The floating-point value in this code specifies a rose-colored hue:

[hueAdjust setValue: myCIImage forKey: kCIInputImageKey];

[hueAdjust setValue: @2.094f forKey: kCIInputAngleKey];
Tip: A filter’s documentation is built in. You can find out programmatically its input parameters as well as the minimum and maximum values for each input parameter. See Querying the System for Filters.


The following code shows a more compact way to create a filter and set values for input parameters:


hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"
                 withInputParameters:@{
    kCIInputImageKey: myCIImage,
    kCIInputAngleKey: @2.094f,
}];
You can supply as many input parameters as you’d like in the dictionary.


## Getting the Output Image

You get the output image by retrieving the value for the outputImage key:

CIImage *result = [hueAdjust valueForKey: kCIOutputImageKey];
Core Image does not perform any image processing until you call a method that actually renders the image (see Rendering the Resulting Output Image). When you request the output image, Core Image assembles the calculations that it needs to produce an output image and stores those calculations (that is, the image “recipe”) in a CIImage object. The actual image is only rendered (and hence, the calculations performed) if there is an explicit call to one of the image-drawing methods. See Rendering the Resulting Output Image.


Deferring processing until rendering time makes Core Image fast and efficient. At rendering time, Core Image can see if more than one filter needs to be applied to an image. If so, it automatically concatenates multiple “recipes” into one operation, which means each pixel is processed only once rather than many times. Figure 1-2 illustrates a multiple-operations workflow that Core Image can make more efficient. The final image is a scaled-down version of the original. For the case of a large image, applying color adjustment before scaling down the image requires more processing power than scaling down the image and then applying color adjustment. By waiting until render time to apply filters, Core Image can determine that it is more efficient to perform these operations in reverse order.


## Rendering the Resulting Output Image

Rendering the resulting output image triggers the processor-intensive operations (either GPU or CPU, depending on the context you set up). Several CIContext methods are available for rendering, including drawImage:inRect:fromRect: and createCGImage:fromRect:, both of which appear in the listings in this chapter.
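On iOS, a sketch of the final rendering step might wrap the CGImage from Listing 1-1 in a UIImage for display (context and result are the objects from that listing):

```objc
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);  // the UIImage keeps what it needs
```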


  To render the image discussed in Creating a CIFilter Object and Setting Values, you can use this line of code on OS X to draw the result onscreen:

[myContext drawImage:result inRect:destinationRect fromRect:contextRect];

  The original image from this example (shown in Figure 1-1) now appears in its processed form, as shown in Figure 1-3.


## Maintaining Thread Safety

CIContext and CIImage objects are immutable, which means each can be shared safely among threads. Multiple threads can use the same GPU or CPU CIContext object to render CIImage objects. However, this is not the case for CIFilter objects, which are mutable. A CIFilter object cannot be shared safely among threads. If your app is multithreaded, each thread must create its own CIFilter objects. Otherwise, your app could behave unexpectedly.
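A sketch of that rule with GCD (imageURLs is a hypothetical NSArray of file URLs):

```objc
// One immutable CIContext is shared; each unit of work creates its own CIFilter.
CIContext *sharedContext = [CIContext contextWithOptions:nil];
dispatch_apply([imageURLs count], dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t i) {
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];  // per-task filter
    [sepia setValue:[CIImage imageWithContentsOfURL:imageURLs[i]] forKey:kCIInputImageKey];
    CIImage *out = [sepia valueForKey:kCIOutputImageKey];
    CGImageRef cg = [sharedContext createCGImage:out fromRect:[out extent]];
    // ... use cg, then:
    CGImageRelease(cg);
});
```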



## Chaining Filters

You can create amazing effects by chaining filters—that is, using the output image from one filter as input to another filter. Let’s see how to apply two more filters to the image shown in Figure 1-3: gloom (CIGloom) and bump distortion (CIBumpDistortion).


  The gloom filter does just that; it makes an image gloomy by dulling its highlights. Notice that the code in Listing 1-5 is very similar to that shown in Creating a CIFilter Object and Setting Values. It creates a filter and sets default values for the gloom filter. This time, the input image is the output image from the hue adjustment filter. It’s that easy to chain filters together!


Listing 1-5 Creating, setting up, and applying a gloom filter

CIFilter *gloom = [CIFilter filterWithName:@"CIGloom"];

[gloom setDefaults];                                        // 1

[gloom setValue: result forKey: kCIInputImageKey];

[gloom setValue: @25.0f forKey: kCIInputRadiusKey];         // 2

[gloom setValue: @0.75f forKey: kCIInputIntensityKey];      // 3

result = [gloom valueForKey: kCIOutputImageKey];            // 4

Here’s what the code does:

1. Sets default values. You must set defaults on OS X. On iOS you do not need to set default values because they are set automatically.

2. Sets the input radius to 25. The input radius specifies the extent of the effect, and can vary from 0 to 100 with a default value of 10. Recall that you can find the minimum, maximum, and default values for a filter programmatically by retrieving the attribute dictionary for the filter.

3. Sets the input intensity to 0.75. The input intensity is a scalar value that specifies a linear blend between the filter output and the original image. The minimum is 0.0, the maximum is 1.0, and the default value is 1.0.

4. Requests the output image, but does not draw the image.

The code requests the output image but does not draw the image. Figure 1-4 shows what the image would look like if you drew it at this point, after processing it with both the hue adjustment and gloom filters.


Figure 1-4 The image after applying the hue adjustment and gloom filters

The image after applying the hue adjustment and gloom filters

The bump distortion filter (CIBumpDistortion) creates a bulge in an image that originates at a specified point. Listing 1-6 shows how to create, set up, and apply this filter to the output image from the previous filter, the gloom filter. The bump distortion takes three parameters: a location that specifies the center of the effect, the radius of the effect, and the input scale.


Listing 1-6 Creating, setting up, and applying the bump distortion filter

CIFilter *bumpDistortion = [CIFilter filterWithName:@"CIBumpDistortion"];    // 1

[bumpDistortion setDefaults];                                                // 2

[bumpDistortion setValue: result forKey: kCIInputImageKey];

[bumpDistortion setValue: [CIVector vectorWithX:200 Y:150] forKey: kCIInputCenterKey];                              // 3

[bumpDistortion setValue: @100.0f forKey: kCIInputRadiusKey];                // 4

[bumpDistortion setValue: @3.0f forKey: kCIInputScaleKey];                   // 5

result = [bumpDistortion valueForKey: kCIOutputImageKey];


Here’s what the code does:

1. Creates the filter by providing its name.

2. On OS X, sets the default values (not necessary on iOS).

3. Sets the center of the effect to the center of the image (it doesn’t have to be the center).

4. Sets the radius of the bump to 100 pixels.

5. Sets the input scale to 3. The input scale specifies the direction and the amount of the effect. The default value is –0.5. The range is –10.0 through 10.0. A value of 0 specifies no effect. A negative value creates an outward bump; a positive value creates an inward bump.


Figure 1-5 shows the final rendered image.

Figure 1-5 The image after applying the hue adjustment along with the gloom and bump distortion filters

The image after applying the hue adjustment, gloom, and bump distortion filters

## Using Transition Effects

Transitions are typically used between images in a slide show or to switch from one scene to another in video. These effects are rendered over time and require that you set up a timer. The purpose of this section is to show how to set up the timer. You’ll learn how to do this by setting up and applying the copy machine transition filter (CICopyMachine) to two still images. The copy machine transition creates a light bar similar to what you see in a copy machine or image scanner. The light bar sweeps from left to right across the initial image to reveal the target image. Figure 1-6 shows what this filter looks like before, partway through, and after the transition from an image of ski boots to an image of a skier. (To learn more about specific input parameters of the CICopyMachine filter, see Core Image Filter Reference.)


Figure 1-6 A copy machine transition from ski boots to a skier

A copy machine transition from ski boots to a skier

Transition filters require the following tasks:

1. Create Core Image images (CIImage objects) to use for the transition.

2. Set up and schedule a timer.

3. Create a CIContext object.

4. Create a CIFilter object for the filter to apply to the image.

5. On OS X, set the default values for the filter.

6. Set the filter parameters.

7. Set the source and the target images to process.

8. Calculate the time.

9. Apply the filter.

10. Draw the result.

11. Repeat steps 8–10 until the transition is complete.

You’ll notice that many of these tasks are the same as those required to process an image using a filter other than a transition filter. The difference, however, is the timer used to repeatedly draw the effect at various intervals throughout the transition.


The awakeFromNib method, shown in Listing 1-7, gets two images (boots.jpg and skier.jpg) and sets them as the source and target images. Using the NSTimer class, a timer is set to repeat every 1/30 second. Note the variables thumbnailWidth and thumbnailHeight. These are used to constrain the rendered images to the view set up in Interface Builder.



Note: The NSAnimation class, introduced in OS X v10.4, implements timing for animation on OS X. The NSAnimation class allows you to set up multiple slide shows whose transitions are synchronized to the same timing device. For more information see the documents NSAnimation Class Reference and Animation Programming Guide for Cocoa.


Listing 1-7 Getting images and setting up a timer

- (void)awakeFromNib
{
    NSTimer    *timer;
    NSURL      *url;

    thumbnailWidth  = 340.0;
    thumbnailHeight = 240.0;

    url = [NSURL fileURLWithPath: [[NSBundle mainBundle]
                pathForResource: @"boots" ofType: @"jpg"]];
    [self setSourceImage: [CIImage imageWithContentsOfURL: url]];

    url = [NSURL fileURLWithPath: [[NSBundle mainBundle]
                pathForResource: @"skier" ofType: @"jpg"]];
    [self setTargetImage: [CIImage imageWithContentsOfURL: url]];

    timer = [NSTimer scheduledTimerWithTimeInterval: 1.0/30.0
                target: self
                selector: @selector(timerFired:)
                userInfo: nil
                repeats: YES];

    base = [NSDate timeIntervalSinceReferenceDate];
    [[NSRunLoop currentRunLoop] addTimer: timer forMode: NSDefaultRunLoopMode];
    [[NSRunLoop currentRunLoop] addTimer: timer forMode: NSEventTrackingRunLoopMode];
}
You set up a transition filter just as you’d set up any other filter. Listing 1-8 uses the filterWithName: method to create the filter. It then calls setDefaults to initialize all input parameters. The code sets the extent to correspond with the thumbnail width and height declared in the awakeFromNib method, shown in Listing 1-7.

The routine uses the thumbnail variables to specify the center of the effect. For this example, the center of the effect is the center of the image, but it doesn’t have to be.


Listing 1-8 Setting up the transition filter

- (void)setupTransition

{

CGFloat w = thumbnailWidth;

CGFloat h = thumbnailHeight;

CIVector *extent = [CIVector vectorWithX: 0  Y: 0  Z: w  W: h];

transition  = [CIFilter filterWithName: @"CICopyMachineTransition"];

// Set defaults on OS X; not necessary on iOS.

[transition setDefaults];

[transition setValue: extent forKey: kCIInputExtentKey];

}

The drawRect: method for the copy machine transition effect is shown in Listing 1-9. This method sets up a rectangle that’s the same size as the view and then sets up a floating-point value for the rendering time. If the CIContext object hasn’t already been created, the method creates one. If the transition is not yet set up, the method calls the setupTransition method (see Listing 1-8). Finally, the method calls the drawImage:inRect:fromRect: method, passing the image that should be shown for the rendering time. The imageForTransition: method, shown in Listing 1-10, applies the filter and returns the appropriate image for the rendering time.

Listing 1-9  The drawRect: method for the copy machine transition effect

- (void)drawRect: (NSRect)rectangle

{
CGRect  cg = CGRectMake(NSMinX(rectangle), NSMinY(rectangle), NSWidth(rectangle), NSHeight(rectangle));

CGFloat t = 0.4 * ([NSDate timeIntervalSinceReferenceDate] - base);

if (context == nil) {

context = [CIContext contextWithCGContext:
[[NSGraphicsContext currentContext] graphicsPort]
options: nil];

}

if (transition == nil) {
[self setupTransition];   // See Listing 1-8
}

[context drawImage: [self imageForTransition: t + 0.1] inRect: cg  fromRect: cg];
}

The imageForTransition: method figures out, based on the rendering time, which is the source image and which is the target image. It’s set up to allow a transition to repeatedly loop back and forth. If your app applies a transition that doesn’t loop, it would not need the if-else construction shown in Listing 1-10.



The routine sets the inputTime value based on the rendering time passed to the imageForTransition: method. It applies the transition, passing the output image from the transition to the crop filter (CICrop). Cropping ensures the output image fits in the view rectangle. The routine returns the cropped transition image to the drawRect: method, which then draws the image.


Listing 1-10 Applying the transition filter

- (CIImage *)imageForTransition: (float)t

{
// Remove the if-else construct if you don't want the transition to loop

if (fmodf(t, 2.0) < 1.0f) {

[transition setValue: sourceImage  forKey: kCIInputImageKey];

[transition setValue: targetImage  forKey: kCIInputTargetImageKey];

} else {

[transition setValue: targetImage  forKey: kCIInputImageKey];

[transition setValue: sourceImage  forKey: kCIInputTargetImageKey];

}

[transition setValue: @( 0.5 * (1 - cos(fmodf(t, 1.0f) * M_PI)) )  forKey: kCIInputTimeKey];  // Set the rendering time

CIFilter *crop = [CIFilter filterWithName: @"CICrop"
        withInputParameters: @{
            kCIInputImageKey: [transition valueForKey: kCIOutputImageKey],
            @"inputRectangle": [CIVector vectorWithX: 0  Y: 0  Z: thumbnailWidth  W: thumbnailHeight],
        }];

return [crop valueForKey: kCIOutputImageKey];
}
Each time the timer that you set up fires, the display must be updated. Listing 1-11 shows a timerFired: routine that does just that.


Listing 1-11 Using the timer to update the display

- (void)timerFired: (id)sender
{
[self setNeedsDisplay: YES];
}
Finally, Listing 1-12 shows the housekeeping that needs to be performed if your app switches the source and target images, as the example in Listing 1-10 does.


Listing 1-12 Setting source and target images

- (void)setSourceImage: (CIImage *)source
{
sourceImage = source;
}

- (void)setTargetImage: (CIImage *)target
{
targetImage = target;
}


## Applying a Filter to Video

  Core Image and Core Video can work together to achieve a variety of effects. For example, you can use a color correction filter on a video shot under water to correct for the fact that water absorbs red light faster than green and blue light. There are many more ways you can use these technologies together.


Follow these steps to apply a Core Image filter to a video displayed using Core Video on OS X:

1. When you subclass NSView to create a view for the video, declare a CIFilter object in the interface, similar to what’s shown in this code:

@interface MyVideoView : NSView

{

NSRecursiveLock     *lock;

QTMovie             *qtMovie;

QTVisualContextRef  qtVisualContext;

CVImageBufferRef    currentFrame;

CIFilter            *effectFilter;

id                  delegate;

}
2. When you initialize the view with a frame, you create a CIFilter object for the filter and set the default values using code similar to the following:

    effectFilter = [CIFilter filterWithName:@"CILineScreen"];

[effectFilter setDefaults];
This example uses the Core Image filter CILineScreen, but you’d use whatever is appropriate for your app.

3. Set the filter input parameters, except for the input image.

4. Each time you render a frame, you need to set the input image and draw the output image. Your renderCurrentFrame routine would look similar to the following. To avoid interpolation, this example uses integral coordinates when it draws the output.

- (void)renderCurrentFrame
{
NSRect frame = [self frame];

if (currentFrame) {

CIImage *inputImage = [CIImage imageWithCVImageBuffer:currentFrame];

CGRect imageRect = [inputImage extent];

CGFloat x = (frame.size.width - imageRect.size.width) * 0.5;

CGFloat y = (frame.size.height - imageRect.size.height) * 0.5;

[effectFilter setValue:inputImage forKey:kCIInputImageKey];

[[[NSGraphicsContext currentContext] CIContext] drawImage:[effectFilter valueForKey:kCIOutputImageKey]
        atPoint:CGPointMake(floor(x), floor(y))
        fromRect:imageRect];
}

}