Where Skia and Android terminology differ (eg: “Sk” prefixes and capitalization) I've used the Android terminology.
How and Why I Wrote This
I wrote this overview because I've been doing some Android development recently, and I was getting frustrated by the fact that the documentation for android.graphics, particularly when it comes to all of the things that can be set in a Paint object, is extremely sparse. I Googled, and I asked a question on Stack Overflow, but I couldn't find anything that explained this stuff to my satisfaction.
This overview is based on reading what little documentation exists (often “between the lines”), doing lots of experiments to see how fringe cases work, poring over the code, and doing even more experiments to verify that I was reading the code correctly. I started writing it as notes for myself, but I figured others might benefit as well so I decided to post it here.
Caveats
I say this is a “conceptual” overview because it does not always explain the actual implementation. The implementation is riddled with special cases that attempt to avoid doing work that isn't necessary. (I remember hearing some quote along the lines of “the fastest way to do something is to not do it at all”.) Understanding the implementation details of all of these special cases is unnecessary to understanding the actual end-result, so I've focused on the most general path through the pipeline. I actually avoided looking at the details of a lot of the special-case code, so if this code contains behavioral inconsistencies I won't have seen them.
Also, there are cases, particularly in the Shading and Transfer sections, where the algorithm described here is far less efficient but easier to visualize (and, I hope, understand) than the actual implementation. For example, I describe Shading as a separate phase that produces an image containing the source colors and Transfer as a phase producing an image with intermediate colors. In reality these two “phases” are interleaved such that only a small set (often just one) of the pixels from each of these virtual images actually “exists” at any instant in time. There is also short-circuiting in this code such that the source and intermediate colors aren't computed at all for pixels where the mask is fully transparent (0x00).
This does mean that this overview can't give one an entirely accurate understanding of the performance (speed and/or memory) of various operations in the pipeline. For that it would be better to perform experiments and profile.
Also keep in mind that because this is documenting what is arguably “undocumented behavior” it's hard to say how much of what is described here is stuff that's guaranteed versus implementation detail, or even outright bugs. I've used some judgement in determining where to put the boundaries between phases (all of that optimization blurs the lines) based on what I think is a “reasonable API” and I've also tried to point out when I think a particular behavior I've discovered looks more like a bug than a feature to rely on.
There are still a number of cases where I'd like to do some more experimentation to verify that my reading of the code is correct and I've tried to indicate those below.
Entering the Pipeline
The pipeline is invoked each time a Canvas.drawSomething method that takes a Paint object is called.
Most of these drawing operations start at the first phase, Path Generation. There are two exceptions, however:
- drawPaint skips Path Generation entirely, and Rasterization consists of producing a solid opaque mask.
- drawBitmap has different behavior depending on the supplied Bitmap's configuration.
In the case of an ALPHA_8 Bitmap, Path Generation and Rasterization are both skipped and the supplied Bitmap is used as the mask.
For other Bitmap configurations the Shader is temporarily replaced with a BitmapShader in CLAMP mode. This means that setting a Shader to be used with a drawBitmap call with a non-ALPHA_8 Bitmap is pointless. The pipeline is then executed as though drawRect had been called with a rectangle equal to the bounding box of the Bitmap.
The large inconsistency between these two behaviors seems like a bug. I want to test this more to be sure my reading of the code is correct.
Overall Structure
At the top of the diagram are the two main inputs to the pipeline: the parameters to the draw method that was called (really multiple inputs) and the “destination” image — the Bitmap connected to the Canvas.
There are four main phases in the pipeline. The details of these will be covered below. While there are exceptions, all of the phases (mostly) follow this pattern: there are two or more sub-phases, the first of which computes an intermediate result, while the later ones “massage” this intermediate result. These later sub-phases often default to the identity function. ie: they usually leave the intermediate result alone unless explicitly told to do otherwise by setting properties on the Paint.
Path Generation
The output of the first phase is a Path.
This phase has three sub-phases:
- An initial Path is constructed based on the draw* method that was called. In the case of drawPath, this is simply the Path supplied by the client. In the case of drawOval or drawRect, the output is a Path containing the corresponding primitive.
- If the Paint has a PathEffect, it is used to produce a new path based on the initial Path. The PathEffect is essentially a function that takes a Path as its input and returns a Path.
If no PathEffect is set then the initial Path is passed on to the next phase unmodified. That is, the default PathEffect is the identity function.
PathEffect implementations include CornerPathEffect, which rounds the corners of the Path, and DashPathEffect, which converts the Path into a series of “dashes”.
One interesting quirk: if the Paint object's style is FILL_AND_STROKE the PathEffect is “lied to” and told that it's FILL. This matters because PathEffect implementations may alter their behavior depending on settings in the Paint. For example, DashPathEffect won't do anything if it is told the style is FILL.
- The final sub-phase is “stroking”. If the Paint.Style is FILL this does nothing to the Path. If the style is STROKE then a new “stroked” Path is generated. This stroked Path is a Path that encloses the boundary of the input Path, respecting the various stroke properties of the Paint (strokeCap, strokeJoin, strokeMiter, strokeWidth). The idea is that later phases of the pipeline will always fill the Path they are given, and so the stroking process converts Paths into their filled equivalents. If the style is FILL_AND_STROKE the result Path is the stroked Path concatenated to the original Path.
The method Paint.getFillPath() can be used to run the later sub-phases of this phase on a Path object. As far as I can tell this is the only significant part of the pipeline that can be run in isolation.
Rasterization
Rasterization is the process of determining the set of pixels that will be drawn to. This is accomplished by generating a “mask”, which is an alpha-channel image. Opaque (0xFF) pixels on this mask indicate areas we want to draw to at “full strength”, transparent (0x00) areas are areas we don't want to draw to at all, and partially transparent areas will be drawn to at “partial strength”. This is explained more at the end of the final phase. (When visualizing this process I find that it helps to think of opaque as white and transparent as black.)
Rasterization has two completely different behaviors depending on whether a Rasterizer has been set on the Paint.
If no Rasterizer has been set then the default rasterization process is used:
- The Path is scan-converted based on parameters from the Paint (eg: the style property) and the Path (eg: the fillType property) to produce an initial mask.
Pixels “inside” the Path will become opaque, those “outside” will be left transparent, and those on the boundary may become partially transparent (for anti-aliasing). The mask will end up containing an opaque silhouette of the object.
The Path object's fillType determines the rule used to decide which pixels are inside versus outside. See Wikipedia's article on the non-zero winding rule for a good explanation of these different rules.
- If there is a MaskFilter set, then the initial mask is transformed by the MaskFilter. The MaskFilter is essentially a function that takes a mask (an ALPHA_8 Bitmap) as input and returns a mask as output. For example, a BlurMaskFilter will blur the mask image.
If no MaskFilter is set then the initial mask is passed on to the next phase unmodified. That is, the default MaskFilter is the identity function.
If a Rasterizer is set on the Paint then, instead of the above two steps, the Rasterizer creates the mask from the Path. The MaskFilter is not invoked after the Rasterizer. (This seems like a bug, but I've verified this behavior experimentally.)
The only Rasterizer implementation in Android is LayerRasterizer. LayerRasterizer makes it possible to create multiple “layers”, each with its own Paint and offset (translation). This means that when n LayerRasterizer layers are present there are n + 1 Paint objects in use: the “top-level” Paint (passed to the draw* method) and an additional n Paint objects, one for each layer.
LayerRasterizer takes the Path and, for each layer, runs the Path through the pipeline of that layer's Paint starting at the PathEffect step and rendering to the mask. This has some interesting consequences:
- Each layer can have its own PathEffect. These are applied to the Path that was generated by the top-level PathEffect (if one was set). So if the PathEffect of the top-level Paint is set to a CornerPathEffect and a layer's PathEffect is set to a DashPathEffect, that layer will render a dashed shape with rounded corners.
- Each layer can have its own Rasterizer. Rasterization can thus be recursive.
- Each layer can have its own MaskFilter. This MaskFilter applies to a separate mask in the sub-pipeline. Remember, the entire pipeline is being run again. For example, if there are two layers and one has a BlurMaskFilter, the output of the other layer will not be blurred, regardless of the order of the layers.
- The destination Bitmap of this sub-pipeline is an alpha bitmap, so only the alpha-channel component of the Shading and Transfer phases has any relevance.
Also note that LayerRasterizer does not make use of the MaskFilter in the top-level Paint. Since the top-level MaskFilter is not invoked after invoking the Rasterizer, there is no point in setting a MaskFilter on a Paint if the Rasterizer has been set to a LayerRasterizer. (Perhaps other Rasterizer implementations could make use of the top-level MaskFilter, but LayerRasterizer is the only implementation included with Android.)
Shading
Shading is the process of determining the “source colors” for each pixel. A color consists of red, green, blue, and alpha components, each of which ranges from 0 to 1. (In practice these are generally represented as bytes from 0x00 to 0xFF.)
Like the previous phases, Shading also has two sub-phases:
- An initial “source” image is generated by the Shader. If no Shader has been set, it's as if a Shader that produces a single solid color (the Paint's color) was used.
The Shader does not get the mask, the Path, or the destination image as inputs.
- If a ColorFilter has been set then the colors in the source image are transformed by this ColorFilter.
The only inputs to the ColorFilter during the pipeline are ARGB colors. The ColorFilter does not get the mask, the Path, the destination image, or the coordinates of the pixel whose color it is transforming, as inputs.
Transfer
Transfer is the process of actually transferring color to the destination Bitmap. The transfer phase has the following inputs:

- The mask generated by Rasterization.
- The “source color” for each pixel as determined by Shading.
- The destination bitmap, which supplies the “destination color” for each pixel.
- The transfer mode (Xfermode).
Once again, there are two sub-phases:
- An intermediate image is generated from the source image and destination image. For each (x, y) coordinate the corresponding source and destination colors are passed to a function determined by the Xfermode. This function takes the source color and destination color and returns the color for the intermediate image's pixel at (x, y).
Note that the mask is not used in this sub-phase. In particular, the source alpha comes from the Shader, and the destination alpha comes from the destination image.
If an Xfermode hasn't been set on the Paint then the behavior is as though it was set to a PorterDuffXfermode with mode SRC_OVER.
. -
The second sub-phase takes the intermediate image, the destination image, and the mask as inputs and modifies the destination image. It doesnot use the
XferMode
.The intermediate image is blended with the destination image through the mask. Blending means that each pixel in the destination image will become a weighted average (or equivalently, linear interpolation) of that pixel's original color and the corresponding pixel in the intermediate image. The opacity of the corresponding mask pixel is the weight of the intermediate color, and its transparency is the weight of the original destination color.
In other words, a pixel that is transparent (0x00) in the mask will be left unaltered in the destination, a pixel that is opaque (0xFF) in the mask will be completely overwritten by the corresponding pixel in the intermediate image, and a partially transparent mask pixel will result in a destination pixel color that is proportionately between its original color and the color of the corresponding intermediate image pixel.
This is the final phase. The pipeline is now complete.
More on Porter Duff Transfer Modes
The most commonly used transfer modes are instances of PorterDuffXfermode. The behavior of a PorterDuffXfermode is determined by its PorterDuff.Mode. The documentation for each PorterDuff.Mode (except OVERLAY) shows the function that is applied to the source and destination colors to obtain the intermediate color. For example, SRC_OVER is documented as:
- [Sa + (1 - Sa)*Da, Rc = Sc + (1 - Sa)*Dc]
This means:
- Ra = Sa + (1 - Sa) * Da
- Rr = Sr + (1 - Sa) * Dr
- Rg = Sg + (1 - Sa) * Dg
- Rb = Sb + (1 - Sa) * Db
Where Rx, Sx, and Dx are the intermediate (result), source, and destination values of the x color component.
Some additional notes on the PorterDuff.Mode documentation:
- The documentation uses “Sc” and “Dc” rather than describing the red, green, and blue components separately. This is because Porter-Duff transfer modes always treat the non-alpha channels the same way, and each of these channels is unaffected by all other channels except for alpha.
- SRC_OVER and DST_OVER are the only two modes that have the left hand side of this equation, “Rc”, in their documentation. I'm guessing this inconsistency is a copy-and-paste error.
- The alpha channel is always unaffected by non-alpha channels. That is, Ra is always a function of only Sa and Da.
- The documentation for ADD refers to a “Saturate” function. This is just clamping to the range [0, 1]. (I don't know why they use such an odd name for clamping, especially since “saturation” usually refers to an entirely unrelated concept when talking about colors.)
- The definition of many of these modes, including OVERLAY, can be found in the SVG Compositing Specification. The Skia code actually links to (an older version of) this document. It has some good diagrams, too.
References
- The android.graphics documentation.
- This answer to “Android Edit Bitmap Channels” on Stack Overflow. Seeing this answer motivated me to learn more about how the pipeline actually works.
- The Android codebase. Since the documentation was so sparse and there didn't seem to be much information elsewhere, I looked to the source. My initial look stopped short when I realized everything was just a wrapper around “native” code.
- Skia documentation, particularly SkPaint. Skia is the vast bulk of the “native” (C++) code involved.
- “How do the pieces of Android's (2D) Canvas drawing pipeline fit together?”, a question I asked on Stack Overflow. One member of the Android team actually responded, but didn't really provide the details I was looking for.
- The Skia codebase. The code for SkCanvas::drawPath is a good place to start.
- SVG Compositing Specification: W3C Working Draft 30 April 2009. This document is mentioned in the Skia code.
- SVG Compositing Specification: W3C Working Draft 15 March 2011. This document supersedes the one mentioned in the Skia code. I believe the relevant bits still apply, but there's more detailed explanation and some good diagrams.