
FFmpeg Filtering Guide

FFmpeg has access to many filters and more are added on a regular basis. To see which filters are available with your build, run ffmpeg -filters.
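
For example (a minimal sketch; the per-filter help form is only available in more recent builds):

ffmpeg -filters                   # list every filter compiled into this build
ffmpeg -h filter=scale            # show the options of a single filter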

Documentation

Refer to the FFmpeg documentation for each filter's documentation and examples. This wiki page is for user-contributed examples and tips.

Contributions to this page are encouraged.

Examples

Scaling

Starting with something simple. Resize a 640x480 input to a 320x240 output.

ffmpeg -i input -vf scale=iw/2:-1 output

iw is input width. In this example the input width is 640. 640/2 = 320. The -1 tells the scale filter to preserve the aspect ratio of the output, so in this example the scale filter will choose a value of 240. See the FFmpeg documentation for additional information.
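
If you want a specific output width instead, give it explicitly and let the height follow (a sketch; the -2 form depends on your build and behaves like -1 but also keeps the computed dimension divisible by 2, which some encoders require):

ffmpeg -i input -vf scale=320:-1 output   # 320 wide, height chosen to preserve the aspect ratio
ffmpeg -i input -vf scale=320:-2 output   # as above, but the height is rounded to an even number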

Speed up your video

See How to speed up / slow down a video for the syntax (the setpts filter), time lapse, etc.
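
A minimal sketch of the setpts approach (video only; the audio stream would need the atempo filter as well, or can be dropped with -an):

ffmpeg -i input -vf "setpts=0.5*PTS" -an output   # play the video at double speed
ffmpeg -i input -vf "setpts=2.0*PTS" -an output   # play the video at half speed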

Filtergraph, Chain, Filter relationship

What follows the -vf in an ffmpeg command line is a Filtergraph description. This filtergraph may contain a number of chains, each of which may contain a number of filters.

Whilst a full filtergraph description can be complicated, simple graphs can be written in a shortened form provided ambiguity is avoided.

Remember that filters in a chain are separated by commas ",", and chains by a semicolon ";". If an input or output is not specified, it is assumed to come from the preceding item in the chain, or to be sent to the following one.

The following are equivalent:-

ffmpeg -i input -vf [in]scale=iw/2:-1[out] output
ffmpeg -i input -vf scale=iw/2:-1 output                                      # the input and output are implied without ambiguity

As are:-

ffmpeg -i input -vf [in]yadif=0:0:0[middle];[middle]scale=iw/2:-1[out] output # 2 chains form, one filter per chain, chains linked by the [middle] pad
ffmpeg -i input -vf [in]yadif=0:0:0,scale=iw/2:-1[out] output                 # 1 chain form, with 2 filters in the chain, linking implied
ffmpeg -i input -vf yadif=0:0:0,scale=iw/2:-1  output                         # the input and output are implied without ambiguity
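
Labelled pads become necessary as soon as a graph has more than one input or output; such graphs use -filter_complex and -map rather than -vf. A sketch, with illustrative filters and output names:

ffmpeg -i input -filter_complex "[0:v]split[a][b];[a]scale=iw/2:-1[half];[b]hflip[flipped]" \
       -map "[half]" half.mp4 -map "[flipped]" flipped.mp4    # one input split into two differently filtered outputs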

Multiple input overlay in 2x2 grid

Here four inputs are filtered together using the -filter_complex option. In this case all of the inputs are "-f lavfi -i testsrc", but they could be any other inputs. Within the filtergraph the first input is padded to twice its width and height (adding space to the right and bottom), and the other three inputs are individually filtered with negate, hflip, and edgedetect. The overlay filter is then used several times to place those three inputs on top of the first one. The offsets given to each overlay (w and h refer to the width and height of the overlaid input) arrange the inputs into a 2x2 grid.

ffmpeg -f lavfi -i testsrc -f lavfi -i testsrc -f lavfi -i testsrc -f lavfi -i testsrc -filter_complex "[0:0]pad=iw*2:ih*2[a];[1:0]negate[b];[2:0]hflip[c];[3:0]edgedetect[d];[a][b]overlay=w[x];[x][c]overlay=0:h[y];[y][d]overlay=w:h" -y -c:v ffv1 -t 5 multiple_input_grid.avi

Escaping characters

As described in the documentation, it can be necessary to escape commas "," that need to appear in some arguments, for example the select filter:-

ffmpeg -i input -vf select='eq(pict_type\,I)' output                         #to select only I frames

However, an alternative, which also allows white space within the filtergraph and can make complex graphs easier to read, is to enclose the whole filtergraph within double quotes " ", thus:

ffmpeg -i input -vf "select=eq(pict_type,I)" output                          #to select only I frames
ffmpeg -i input -vf "yadif=0:-1:0, scale=iw/2:-1" output                     # deinterlace then resize

Note that the examples given in the documentation mix and match the use of "full quoting" and "\" escaping, and that use of unusual shells may upset escaping.

Burnt in Timecode

PAL 25fps non drop frame

ffmpeg -i in.mp4 -vf "drawtext=fontfile=/usr/share/fonts/truetype/DroidSans.ttf: timecode='09\:57\:00\:00': r=25: \
x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1" -an -y out.mp4

NTSC 30fps drop frame

(change the : to a ; before the frame count)

ffmpeg -i in.mp4 -vf "drawtext=fontfile=/usr/share/fonts/truetype/DroidSans.ttf: timecode='09\:57\:00\;00': r=30: \
x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1" -an -y out.mp4

Scripting your command line parameters

If building complex filtergraphs the command line can get very messy so it can help to break things down into manageable pieces. However one needs to be careful when joining them all together to avoid issues due to your shell and escaped characters.

The following example shows a sample bash script containing a filtergraph of one chain with three filters; yadif, scale and drawtext.

#!/bin/bash
# ffmpeg test script

path="/path/to/file/"

in_file="in.mp4"
out_file="out.mp4"

cd "$path"

filter="-vf \"yadif=0:-1:0, scale=400:226, drawtext=fontfile=/usr/share/fonts/truetype/DroidSans.ttf: \
text='tod- %X':x=(w-text_w)/2:y=H-60 :fontcolor=white :box=1:boxcolor=0x00000000@1\""
codec="-vcodec libx264  -pix_fmt yuv420p -b:v 700k -r 25 -maxrate 700k -bufsize 5097k"

command_line="ffmpeg -i $in_file $filter $codec -an $out_file"

echo $command_line
eval $command_line
exit

Note that the double quotes " around the whole filtergraph have been escaped as \" and that the filtergraph spans more than one line. The echo command shows the full command as it is executed, which is useful for debugging.

The eval invocation of the $command_line variable is required; without it the embedded escaped quotes are lost. Other shells may behave differently.
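
If you would rather avoid eval entirely, an alternative sketch (assuming bash) is to build the argument list as an array; the shell then keeps the filtergraph as a single argument with no escaped quotes at all:

#!/bin/bash
# the filtergraph is stored as one array element, so no \" escaping is needed
filter=(-vf "yadif=0:-1:0, scale=400:226")
codec=(-vcodec libx264 -pix_fmt yuv420p -b:v 700k -r 25 -maxrate 700k -bufsize 5097k)

ffmpeg -i in.mp4 "${filter[@]}" "${codec[@]}" -an -y out.mp4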

List of Filters

Filters bundled with libavfilter as of version 3.15.102 (as configured with --enable-gpl). Filters relying on external libraries, such as frei0r, are not listed here. Remember, you can get documentation for each one on the FFmpeg documentation page, for instance aconvert's documentation, etc.

aconvert         A->A       Convert the input audio to sample_fmt:channel_layout.
afifo            A->A       Buffer input frames and send them when they are requested.
aformat          A->A       Convert the input audio to one of the specified formats.
amerge           |->A       Merge two audio streams into a single multi-channel stream.
amix             |->A       Audio mixing.
anull            A->A       Pass the source unchanged to the output.
aresample        A->A       Resample audio data.
asetnsamples     A->A       Set the number of samples for each output audio frames.
asetpts          A->A       Set PTS for the output audio frame.
asettb           A->A       Set timebase for the audio output link.
ashowinfo        A->A       Show textual information for each audio frame.
asplit           A->|       Pass on the audio input to N audio outputs.
astreamsync      AA->AA     Copy two streams of audio data in a configurable order.
atempo           A->A       Adjust audio tempo.
channelmap       A->A       Remap audio channels.
channelsplit     A->|       Split audio into per-channel streams
earwax           A->A       Widen the stereo image.
join             |->A       Join multiple audio streams into multi-channel output
pan              A->A       Remix channels with coefficients (panning).
silencedetect    A->A       Detect silence.
volume           A->A       Change input volume.
volumedetect     A->A       Detect audio volume.
aevalsrc         |->A       Generate an audio signal generated by an expression.
anullsrc         |->A       Null audio source, return empty audio frames.
abuffersink      A->|       Buffer audio frames, and make them available to the end of the filter graph.
anullsink        A->|       Do absolutely nothing with the input audio.
ffabuffersink    A->|       Buffer audio frames, and make them available to the end of the filter graph.
alphaextract     V->V       Extract an alpha channel as a grayscale image component.
alphamerge       VV->V      Copy the luma value of the second input into the alpha channel of the first input.
bbox             V->V       Compute bounding box for each frame.
blackdetect      V->V       Detect video intervals that are (almost) black.
blackframe       V->V       Detect frames that are (almost) black.
boxblur          V->V       Blur the input.
colormatrix      V->V       Color matrix conversion
copy             V->V       Copy the input video unchanged to the output.
crop             V->V       Crop the input video to width:height:x:y.
cropdetect       V->V       Auto-detect crop size.
decimate         V->V       Remove near-duplicate frames.
delogo           V->V       Remove logo from input video.
deshake          V->V       Stabilize shaky video.
drawbox          V->V       Draw a colored box on the input video.
edgedetect       V->V       Detect and draw edge.
fade             V->V       Fade in/out input video.
fieldorder       V->V       Set the field order.
fifo             V->V       Buffer input images and send them when they are requested.
format           V->V       Convert the input video to one of the specified pixel formats.
fps              V->V       Force constant framerate
framestep        V->V       Select one frame every N frames.
gradfun          V->V       Debands video quickly using gradients.
hflip            V->V       Horizontally flip the input video.
hqdn3d           V->V       Apply a High Quality 3D Denoiser.
hue              V->V       Adjust the hue and saturation of the input video.
idet             V->V       Interlace detect Filter.
lut              V->V       Compute and apply a lookup table to the RGB/YUV input video.
lutrgb           V->V       Compute and apply a lookup table to the RGB input video.
lutyuv           V->V       Compute and apply a lookup table to the YUV input video.
mp               V->V       Apply a libmpcodecs filter to the input video.
negate           V->V       Negate input video.
noformat         V->V       Force libavfilter not to use any of the specified pixel formats for the input to the next filter.
null             V->V       Pass the source unchanged to the output.
overlay          VV->V      Overlay a video source on top of the input.
pad              V->V       Pad input image to width:height[:x:y[:color]] (default x and y: 0, default color: black).
pixdesctest      V->V       Test pixel format definitions.
removelogo       V->V       Remove a TV logo based on a mask image.
scale            V->V       Scale the input video to width:height size and/or convert the image format.
select           V->V       Select frames to pass in output.
setdar           V->V       Set the frame display aspect ratio.
setfield         V->V       Force field for the output video frame.
setpts           V->V       Set PTS for the output video frame.
setsar           V->V       Set the pixel sample aspect ratio.
settb            V->V       Set timebase for the video output link.
showinfo         V->V       Show textual information for each video frame.
slicify          V->V       Pass the images of input video on to next video filter as multiple slices.
smartblur        V->V       Blur the input video without impacting the outlines.
split            V->|       Pass on the input video to N outputs.
super2xsai       V->V       Scale the input by 2x using the Super2xSaI pixel art algorithm.
swapuv           V->V       Swap U and V components.
thumbnail        V->V       Select the most representative frame in a given sequence of consecutive frames.
tile             V->V       Tile several successive frames together.
tinterlace       V->V       Perform temporal field interlacing.
transpose        V->V       Transpose input video.
unsharp          V->V       Sharpen or blur the input video.
vflip            V->V       Flip the input video vertically.
yadif            V->V       Deinterlace the input image.
cellauto         |->V       Create pattern generated by an elementary cellular automaton.
color            |->V       Provide an uniformly colored input.
life             |->V       Create life.
mandelbrot       |->V       Render a Mandelbrot fractal.
mptestsrc        |->V       Generate various test pattern.
nullsrc          |->V       Null video source, return unprocessed video frames.
rgbtestsrc       |->V       Generate RGB test pattern.
smptebars        |->V       Generate SMPTE color bars.
testsrc          |->V       Generate test pattern.
buffersink       V->|       Buffer video frames, and make them available to the end of the filter graph.
ffbuffersink     V->|       Buffer video frames, and make them available to the end of the filter graph.
nullsink         V->|       Do absolutely nothing with the input video.
concat           |->|       Concatenate audio and video streams.
showspectrum     A->V       Convert input audio to a spectrum video output.
showwaves        A->V       Convert input audio to a video output.
amovie           |->|       Read audio from a movie source.
movie            |->|       Read from a movie source.
buffer           |->V       Buffer video frames, and make them accessible to the filterchain.
abuffer          |->A       Buffer audio frames, and make them accessible to the filterchain.
buffersink_old   V->|       Buffer video frames, and make them available to the end of the filter graph.
abuffersink_old  A->|       Buffer audio frames, and make them available to the end of the filter graph.

Other Filter Examples

Developing your own Filters

A good way to start is to follow the examples set by the existing filters; studying their source helps a lot.
