Colorspace support in FFmpeg

What is a colorspace and why should you care

A colorspace describes how you display an array of pixel values on screen. For example, a colorspace tells us what type of color components exist within the array of pixels (e.g. RGB or YUV), and how to translate the pixel values in each color component to photons on the screen.

The difference between YUV and RGB should be relatively obvious: RGB splits each pixel value into red, green and blue color components, whereas YUV is an alternative representation that separates a pixel value into a luma (brightness) component and two chroma (color difference) components.

The conversion between the YUV pixel buffer representation and the visual representation depends on the type of YUV in the buffers, which is essentially device-dependent. Examples are BT.601 (“standard definition” or SD), BT.709 (“high definition” or HD) and BT.2020 (“ultra-high definition” or UHD). These standards describe not just how to convert the YUV signal to RGB, but also how the RGB signal should be represented in terms of photon emission in a device-independent way.

How does FFmpeg identify colorspaces

In practical terms, the things you care about are 1) whether the pixel buffer contains RGB, YUV or some other type of signal, and the bit depth of that signal; 2) whether the (YUV) signal is full range or restricted range; 3) the transformation matrix between YUV and RGB; 4) the linearization function from RGB to a linear RGB signal; and 5) the conversion matrix between linearized RGB and the device-independent XYZ colorspace.

FFmpeg stores all these properties in the AVFrame struct. The RGB/YUV format and bit depth are stored in AVFrame->format; the signal range is stored in AVFrame->color_range; the YUV/RGB transformation matrix is stored in AVFrame->colorspace; the linearization function is called the transfer characteristics and is stored in AVFrame->color_trc; lastly, the RGB/XYZ matrix is stored in AVFrame->color_primaries.

How do you convert between colorspaces in FFmpeg

Conversion between RGB and YUV is typically done using swscale. Conversion between color properties (matrix, primaries, transfer characteristics) can be done using the colorspace or colormatrix video filters; there is also the zscale filter, which uses the external zimg library. colorspace and colormatrix compare as follows:

  • they both do only YUV-to-YUV colorspace conversion; YUV-to-RGB or scaling requires swscale.
  • colormatrix supports only 8-bit pixel formats; colorspace also supports 10- and 12-bit content.
  • colormatrix does not do gamma/primaries correction, whereas colorspace does (with an option to disable it if you want a faster conversion).
  • colormatrix is C only, whereas colorspace has x86 SIMD optimizations (i.e. it is faster).

Read each filter’s documentation for the exact usage details. The easiest way to use these filters is to ensure that the input AVFrame has all the relevant struct members set to the correct values; then set the target colorspace properties as filter options, and the filter will output converted frames.

To convert between RGB and YUV, or to scale, use swscale and set the correct color details using sws_setColorspaceDetails().

Last modified on Apr 26, 2016, 2:41:33 AM