Source: http://lwn.net/Articles/218798/


This is the fifth article in the irregular LWN series on writing video drivers for Linux. Those who have not yet read the introductory article may want to start there.

Before any application can work with a video device, it must come to an understanding with the driver about how video data will be formatted. This negotiation can be a rather complex process, resulting from the facts that (1) video hardware varies widely in the formats it can handle, and (2) performing format transformations in the kernel is frowned upon. So the application must be able to find out what formats are supported by the hardware and set up a configuration which is workable for everybody involved. This article will cover the basics of how formats are described; the next installment will get into the API implemented by V4L2 drivers to negotiate formats with applications.


Colorspaces

A colorspace is, in broad terms, the coordinate system used to describe colors. There are several of them defined by the V4L2 specification, but only two are used in any broad way. They are:


  • V4L2_COLORSPACE_SRGB. The [red, green, blue] tuples familiar to many developers are covered under this colorspace. They provide a simple intensity value for each of the primary colors which, when mixed together, create the illusion of a wide range of colors. There are a number of ways of representing RGB values, as we will see below.

    This colorspace also covers the set of YUV and YCbCr representations. This representation derives from the need for early color television signals to be displayable on monochrome TV sets. So the Y (or "luminance") value is a simple brightness value; when displayed alone, it yields a grayscale image. The U and V (or Cb and Cr) "chrominance" values describe the blue and red components of the color; green can be derived by subtracting those components from the luminance. Conversion between YUV and RGB is not entirely straightforward, however; there are several formulas to choose from (one common one is sketched after this list).

    Note that YUV and YCbCr are not exactly the same thing, though the terms are often used interchangeably.


  • V4L2_COLORSPACE_SMPTE170M is for analog color representations used in NTSC or PAL television signals. TV tuners will often produce data in this colorspace.
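Since the exact conversion formula matters, here is a minimal user-space sketch of one common choice, the ITU-R BT.601 conversion with full-range values. Nothing in V4L2 mandates these particular coefficients, and the clamp8() and ycbcr_to_rgb() helpers are purely illustrative:

    /* Clamp an intermediate value into the 0..255 range. */
    static unsigned char clamp8(int v)
    {
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }

    /* YCbCr -> RGB with ITU-R BT.601 coefficients, scaled by 65536 for
     * fixed-point arithmetic; Cb and Cr are centered on 128. */
    static void ycbcr_to_rgb(unsigned char y, unsigned char cb, unsigned char cr,
                             unsigned char *r, unsigned char *g, unsigned char *b)
    {
        int d = cb - 128;
        int e = cr - 128;

        *r = clamp8(y + ((91881 * e) >> 16));              /* Y + 1.402*Cr */
        *g = clamp8(y - ((22554 * d + 46802 * e) >> 16));  /* Y - 0.344*Cb - 0.714*Cr */
        *b = clamp8(y + ((116130 * d) >> 16));             /* Y + 1.772*Cb */
    }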

Quite a few other colorspaces exist; most of them are variants of television-related standards. See this page from the V4L2 specification for the full list.


Packed and planar

As we have seen, pixel values are expressed as tuples, usually consisting of RGB or YUV values. There are two commonly-used ways of organizing those tuples into an image:


  • Packed formats store all of the values for one pixel together in memory.


  • Planar formats separate each component out into a separate array. Thus a planar YUV format will have all of the Y values stored contiguously in one array, the U values in another, and the V values in a third. The planes are usually stored contiguously in a single buffer, but it does not have to be that way.

Packed formats might be more commonly used, especially with RGB formats, but both types can be generated by hardware and requested by applications. If the video device supports both packed and planar formats, the driver should make them both available to user space.
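To make the difference concrete, here is a small sketch (ordinary C, not driver code) of how a single pixel is located in each layout; the buffer pointers and the stride (bytes per line) are assumed to come from elsewhere:

    /* Packed: the three bytes of a 24-bit RGB pixel sit together. */
    static unsigned char *rgb24_pixel(unsigned char *buf, unsigned int stride,
                                      unsigned int x, unsigned int y)
    {
        return buf + y * stride + 3 * x;
    }

    /* Planar: each component lives in its own array. */
    struct yuv_planes {
        unsigned char *y, *u, *v;
    };

    static unsigned char luma_at(const struct yuv_planes *p, unsigned int stride,
                                 unsigned int x, unsigned int y)
    {
        return p->y[y * stride + x];
    }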


Fourcc codes

Color formats are described within the V4L2 API using the venerable "fourcc" code mechanism. These codes are 32-bit values, generated from four ASCII characters. As such, they have the advantages of being easily passed around and being human-readable. When a color format code reads, for example, 'RGB4', there is no need to go look it up in a table.
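The kernel builds these codes with the v4l2_fourcc() macro from <linux/videodev2.h>, reproduced here in user-space form (with unsigned int standing in for __u32); the print_fourcc() helper is an illustrative addition:

    #include <stdio.h>

    /* Pack four ASCII characters into a 32-bit code, least significant
     * character first - this is how <linux/videodev2.h> defines it. */
    #define v4l2_fourcc(a, b, c, d) \
        ((unsigned int)(a) | ((unsigned int)(b) << 8) | \
         ((unsigned int)(c) << 16) | ((unsigned int)(d) << 24))

    /* Turning a code back into readable text is just as easy;
     * print_fourcc(v4l2_fourcc('R', 'G', 'B', '4')) prints "RGB4". */
    static void print_fourcc(unsigned int fmt)
    {
        printf("%c%c%c%c\n", fmt & 0xff, (fmt >> 8) & 0xff,
               (fmt >> 16) & 0xff, (fmt >> 24) & 0xff);
    }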

Note that fourcc codes are used in a lot of different settings, some of which predate Linux; the MPlayer application, for example, uses them internally. The fourcc mechanism only describes how the codes are formed, however, and says nothing about which codes are actually used - MPlayer has a translation function for converting between its fourcc codes and those used by V4L2.

RGB formats

In the format descriptions shown below, bytes are always listed in memory order - least significant bytes first on a little-endian machine. The least significant bit of each byte is on the right; for each color field, the lighter-shaded bit is the most significant.

[Figure: byte layouts of the RGB pixel formats, ending with the Bayer format.]

When formats with empty space (shown in gray, above) are used, applications may use that space for an alpha (transparency) value.
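As a worked example of the memory-order rule, here is one way to pack a pixel in the 16-bit RGB565 layout (five bits of red, six of green, five of blue). The bit positions follow the V4L2 specification, but the helper itself is only a sketch:

    /* Pack 8-bit components into a 16-bit RGB565 pixel, storing the low
     * byte first as a little-endian machine would lay it out. */
    static void pack_rgb565(unsigned char r, unsigned char g, unsigned char b,
                            unsigned char buf[2])
    {
        unsigned int v = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);

        buf[0] = v & 0xff;
        buf[1] = (v >> 8) & 0xff;
    }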

The final format above is the "Bayer" format, which is generally something very close to the real data from the sensor found in most cameras. There are green values for every pixel, but blue and red only for every other pixel. Essentially, green carries the more important intensity information, with red and blue being interpolated across the pixels where they are missing. This is a pattern we will see again with the YUV formats.
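For the curious, this little sketch shows which color a given pixel carries in the BGGR arrangement used by V4L2_PIX_FMT_SBGGR8 (other Bayer variants just shift the grid by one pixel); the enum and helper are invented for illustration:

    /* In a BGGR mosaic, row 0 runs B G B G ... and row 1 runs G R G R ... */
    enum bayer_color { BAYER_RED, BAYER_GREEN, BAYER_BLUE };

    static enum bayer_color sbggr8_color(unsigned int x, unsigned int y)
    {
        if ((x ^ y) & 1)
            return BAYER_GREEN;   /* green on every other pixel of every row */
        return (y & 1) ? BAYER_RED : BAYER_BLUE;
    }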


YUV formats

The packed YUV formats will be shown first. The key for reading this table is:

[Figure: key and byte layouts for the packed YUV formats.]

There are several planar YUV formats in use as well. Drawing them all out does not help much, so we'll go with one example. The commonly-used "YUV 4:2:2" format (V4L2_PIX_FMT_YUV422P, fourcc 422P) uses three separate arrays. A 4x4 image would be represented like this:

[Figure: the separate Y, U, and V arrays of a 4x4 YUV 4:2:2 image.]
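When the three planes share one contiguous buffer, locating them takes only a little arithmetic; this sketch assumes the usual Y-U-V order with no padding between planes:

    /* Find the planes of a contiguous YUV 4:2:2 planar buffer; the chroma
     * planes are half as wide as the Y plane but just as tall. */
    struct yuv422p {
        unsigned char *y, *u, *v;
    };

    static void yuv422p_planes(unsigned char *buf, unsigned int width,
                               unsigned int height, struct yuv422p *p)
    {
        unsigned long y_size = (unsigned long)width * height;
        unsigned long c_size = (unsigned long)(width / 2) * height;

        p->y = buf;
        p->u = buf + y_size;
        p->v = buf + y_size + c_size;
    }

For the 4:2:0 formats listed below, the chroma planes shrink vertically as well, so c_size becomes (width / 2) * (height / 2).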

As with the Bayer format, YUV 4:2:2 has one U and one V value for every other Y value; displaying the image requires interpolating across the missing values. The other planar YUV formats are:


  • V4L2_PIX_FMT_YUV420: the YUV 4:2:0 format, with one U and one V value for every four Y values. U and V must be interpolated in both the horizontal and vertical directions. The planes are stored in Y-U-V order, as with the example above.


  • V4L2_PIX_FMT_YVU420: like YUV 4:2:0, except that the positions of the U and V arrays are swapped.


  • V4L2_PIX_FMT_YUV410: A single U and V value for each sixteen Y values. The arrays are in the order Y-U-V.


  • V4L2_PIX_FMT_YVU410: A single U and V value for each sixteen Y values. The arrays are in the order Y-V-U.

A few other YUV formats exist, but they are rarely used; see this page for the full list.


Other formats

A couple of formats which might be useful for some drivers are:


  • V4L2_PIX_FMT_JPEG: a vaguely-defined JPEG stream; a little more information can be found here.


  • V4L2_PIX_FMT_MPEG: an MPEG stream. There are a few variants on the MPEG stream format; controlling these streams will be discussed in a future installment.

There are a number of other, miscellaneous formats, some of them proprietary; this page has a list of them.


Describing formats

Now that we have an understanding of color formats, we can take a look at how the V4L2 API describes image formats in general. The key structure here is struct v4l2_pix_format (defined in <linux/videodev2.h>), which contains these fields (a filled-in example follows the list):


  • __u32 width: the width of the image in pixels.


  • __u32 height: the height of the image in pixels.


  • __u32 pixelformat: the fourcc code describing the image format.


  • enum v4l2_field field: many image sources will interlace the data - transferring all of the even scan lines first, followed by the odd lines. Real camera devices normally do not do interlacing. The V4L2 API allows the application to work with interlaced fields in a surprising number of ways. Common values include V4L2_FIELD_NONE (fields are not interlaced), V4L2_FIELD_TOP (top field only), or V4L2_FIELD_ANY (don't care). See this page for a full list.


  • __u32 bytesperline: the number of bytes between two adjacent scan lines. It includes any padding the device may require. For planar formats, this value describes the largest (Y) plane.


  • __u32 sizeimage: the size of the buffer required to hold the full image.


  • enum v4l2_colorspace colorspace: the colorspace being used.
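As a purely illustrative example, an application wanting 640x480 video in the packed YUYV format (and accepting non-interlaced frames) might fill the structure like this; all of the specific values here are arbitrary choices:

    #include <linux/videodev2.h>

    struct v4l2_pix_format fmt = {
        .width        = 640,
        .height       = 480,
        .pixelformat  = V4L2_PIX_FMT_YUYV,   /* packed YUV 4:2:2 */
        .field        = V4L2_FIELD_NONE,     /* no interlacing */
        .bytesperline = 640 * 2,             /* YUYV averages two bytes per pixel */
        .sizeimage    = 640 * 2 * 480,
        .colorspace   = V4L2_COLORSPACE_SRGB,
    };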

All together, these parameters describe a buffer of video data in a reasonably complete manner. An application can fill out a v4l2_pix_format structure asking for just about any sort of format that a user-space developer can imagine. On the driver side, however, things have to be restrained to the formats the hardware can work with. So every V4L2 application must go through a negotiation process with the driver in an attempt to arrive at an image format that is both supported by the hardware and adequate for the application's needs. The next installment in this series will describe how this negotiation works from the device driver's point of view.