Version 3 (modified by gdgsdg123, 14 months ago)


Colorspace support in FFmpeg

What is colorspace? Why should we care?

Colorspace describes how an array of pixel values should be displayed on the screen.

For example, it specifies the format of the pixel array (e.g. RGB or YUV) and how the value of each color component should be translated into the light emitted by the screen. (In other words, picking a colorspace at random is unlikely to display correctly...)


The difference between RGB and YUV should be rather obvious:

  • RGB separates a color pixel value into 3 components: Red, Green and Blue (hence the name).
  • YUV is an alternative representation that splits a color pixel value into luminance (Y, or brightness) and chroma (UV, or color differences). (Note: YUV still represents color with 3 components.)



The conversion between a YUV pixel buffer and its visual representation depends on the exact type of YUV stored in the buffer, which is essentially device-dependent.

Examples of such standards are ITU-R BT.601 (SD), BT.709 (HD) and BT.2020 (UHD).

These standards describe not just how to convert a YUV signal to RGB, but also how an RGB signal should be represented in terms of photon emission, in a device-independent way.

How does FFmpeg identify colorspaces?

In practical terms, the things you care about are:

  1. Whether the pixel buffer contains RGB, YUV or some other type of signal, and at what bit-depth.
  2. Whether the signal is full range or restricted range. (YUV only; usually not an issue for other signal types.)
  3. The transformation matrix between YUV and RGB.
  4. The linearization function from RGB to a linear RGB signal.
  5. The conversion matrix between the linearized RGB and the device-independent XYZ colorspace.


FFmpeg stores all these properties in the AVFrame struct:

  • The format (type and bit-depth), in AVFrame->format
  • The signal range, in AVFrame->color_range
  • The YUV/RGB transformation matrix, in AVFrame->colorspace
  • The linearization function (a.k.a. transfer characteristics), in AVFrame->color_trc
  • The RGB/XYZ matrix, in AVFrame->color_primaries

How to convert between colorspaces using FFmpeg?

Conversion between RGB and YUV is typically done using swscale. Conversion between different color properties (bit-depth, range, matrix, transfer characteristics, primaries) can be done using the colorspace or colormatrix video filter. There is also a filter based on the external library zscale, which covers both of the aforementioned purposes and seems to be a more reliable choice, avoiding the swscale hazards.


The video filters colorspace and colormatrix relate as follows:

  • They both do only YUV-to-YUV colorspace conversion; YUV-to-RGB conversion and scaling require swscale.
  • colormatrix supports only 8bpc (8 bits per component) pixel formats, whereas colorspace also supports 10bpc and 12bpc.
  • colormatrix does not apply gamma (primaries) correction, whereas colorspace does (it has an option fast=1 to disable this if you want faster conversion, or output compatible with that produced by colormatrix). (Note: with fast=0 (the default) it sometimes seems to produce significantly worse quality (discoloration)... gamma miscorrection?)
  • colormatrix is written in C only, whereas colorspace uses x86 SIMD (i.e. it's faster).


In any case, the major practical difference is that colormatrix produces poor quality for anything above 8bpc, while colorspace produces something decent, at least at 10bpc. (At 8bpc they both produce similarly bad quality, possibly due to the algorithms truncating instead of rounding when approximating colors.)
Even at 8bpc, colorspace still seems to produce slightly better quality than colormatrix, although converting at 10bpc first and then going 10bpc -> 8bpc is the better approach, if you don't mind dithering. (Dithering is enforced in swscale's YUV 10bpc -> 8bpc conversion.)




Read each filter's documentation for the exact usage details.

The easiest way to use these filters is to ensure that the input AVFrames have all the relevant struct members set to the appropriate values (otherwise you have to specify them manually as filter arguments).
Then set the target color properties on the video filter, and it will output the converted frames.
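For example, converting material tagged as 625-line BT.601 to BT.709 might look like this with either filter (a sketch; see each filter's documentation for the full list of option names and values):

```
colorspace=all=bt709:iall=bt601-6-625
colormatrix=bt601:bt709
```

With colorspace, the iall/ispace/irange/... options are only needed when the input frames are not already tagged correctly.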


To convert between RGB and YUV or to scale using swscale, set the appropriate color properties on the swscale context using sws_setColorspaceDetails().
