RGB <-> YUV

As I wrote in a previous post, a video consists of a sequence of frames, and a frame consists of pixels.

A pixel is represented by a color, and a color can be represented in various forms. Each such form of representation is called a color space.

The most popular color space is RGB (Red, Green, Blue). Each channel usually takes 8 bits, so a pixel needs 24 bits.
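For example, one such 24-bit pixel can be modeled in C like this (just an illustration, not ffmpeg's actual pixel representation):

```c
#include <stdint.h>
#include <stdio.h>

/* One RGB pixel: 8 bits per channel, 24 bits in total. */
struct rgb_pixel {
    uint8_t r, g, b;
};

int main(void)
{
    struct rgb_pixel white = {255, 255, 255};
    printf("R=%u G=%u B=%u\n", white.r, white.g, white.b);
    return 0;
}
```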

However, the YUV (Y = luminance, UV = chrominance) color space is usually used in video. One reason is historical: YUV kept color broadcasts backward compatible with old black-and-white TVs, which only use the luminance signal.

Another reason is that YUV is useful for compression. This article says that the human visual system is much more sensitive to luminance (black and white) than to chrominance. Codecs such as H.264 therefore discard part of the chrominance information (chroma subsampling) to reduce the size of a video as much as possible without visibly losing quality.
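To get a feel for the savings, here is a small sketch that compares the per-frame buffer sizes of packed RGB and 4:2:0-subsampled YUV using ffmpeg's libavutil (the 1920x1080 resolution is just an example):

```c
#include <stdio.h>
#include <libavutil/imgutils.h>
#include <libavutil/pixfmt.h>

int main(void)
{
    int w = 1920, h = 1080;

    /* RGB24: 3 bytes per pixel -> 6,220,800 bytes per frame */
    int rgb = av_image_get_buffer_size(AV_PIX_FMT_RGB24, w, h, 1);

    /* YUV420P: a full-resolution Y plane plus quarter-resolution
     * U and V planes -> 1.5 bytes per pixel, 3,110,400 bytes */
    int yuv = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, w, h, 1);

    printf("RGB24:   %d bytes\nYUV420P: %d bytes\n", rgb, yuv);
    return 0;
}
```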

However, you sometimes want a frame in the RGB color space. In that case, you need to convert the frame from YUV to RGB.

You can do this with the sws_scale function from ffmpeg's libswscale.

Here is some sample code for this.

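Below is a minimal sketch of the idea, assuming the input frame is YUV420P and skipping error handling; the helper name zero_red is hypothetical:

```c
#include <libavutil/frame.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>

/* Minimal sketch: the helper name zero_red is hypothetical, the input
 * frame is assumed to be YUV420P, and error checks are omitted. */
static void zero_red(AVFrame *frame)
{
    struct SwsContext *to_rgb = sws_getContext(
        frame->width, frame->height, AV_PIX_FMT_YUV420P,
        frame->width, frame->height, AV_PIX_FMT_RGB24,
        SWS_BILINEAR, NULL, NULL, NULL);
    struct SwsContext *to_yuv = sws_getContext(
        frame->width, frame->height, AV_PIX_FMT_RGB24,
        frame->width, frame->height, AV_PIX_FMT_YUV420P,
        SWS_BILINEAR, NULL, NULL, NULL);

    AVFrame *rgb = av_frame_alloc();
    rgb->width  = frame->width;
    rgb->height = frame->height;
    rgb->format = AV_PIX_FMT_RGB24;
    av_frame_get_buffer(rgb, 0);

    /* 1. convert the frame from YUV to RGB */
    sws_scale(to_rgb, (const uint8_t * const *)frame->data, frame->linesize,
              0, frame->height, rgb->data, rgb->linesize);

    /* 2. set every red byte to 0 (RGB24 packs each pixel as R,G,B) */
    for (int y = 0; y < rgb->height; y++) {
        uint8_t *row = rgb->data[0] + y * rgb->linesize[0];
        for (int x = 0; x < rgb->width; x++)
            row[x * 3] = 0;
    }

    /* 3. convert back from RGB to YUV, writing into the original frame
     * (in a real filter, make sure the frame is writable first) */
    sws_scale(to_yuv, (const uint8_t * const *)rgb->data, rgb->linesize,
              0, rgb->height, frame->data, frame->linesize);

    av_frame_free(&rgb);
    sws_freeContext(to_rgb);
    sws_freeContext(to_yuv);
}
```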

Assume this is written in the filter_frame function of your custom filter. The code does the following:

  1. convert the frame from YUV to RGB
  2. set every red byte to 0
  3. convert the frame back from RGB to YUV

After applying this filter, the video becomes bluish, because every pixel is left with only its green and blue components.