Using libav*

Version 9 (modified by saste, 6 years ago)

FFmpeg itself is composed of several libraries that can be used individually, and outside of FFmpeg, for instance in your own program. These are:

  • libavutil contains various routines used to simplify programming, including random number generators, data structures, mathematics routines, core multimedia utilities, and much more.
  • libavcodec provides a decoding and encoding API, and all the supported codecs.
  • libavformat provides a demuxing and muxing API, and all the supported muxers and demuxers.
  • libavdevice provides an interface for grabbing from input devices (e.g. webcams or line-in audio) and rendering to output devices, and all the supported input and output devices.
  • libswscale provides a scaling and (raw pixel) format conversion API, with high-speed, assembly-optimized versions of several scaling routines.
  • libavfilter provides an audio and video filtering API, and all the supported filters.
  • libpostproc provides video postprocessing routines.
  • libswresample provides an audio resampling, rematrixing and sample format conversion API, and many high-quality optimized routines.
  • libavresample provides an alternative audio resampling and conversion API, originating in the Libav fork (libswresample is the preferred choice in FFmpeg).

Getting started

There is not much "web based" official documentation for using these libraries.

Check doc/examples; the Doxygen documentation is also fairly complete and should work as a reference (for example, the example code is browsable through Doxygen).

In general, you must have the appropriate library compiled and available on your machine: if you use packages, something like libswscale-dev must be installed; otherwise configure, build, and install FFmpeg yourself using the --enable-shared configure option. Then include the appropriate header file in your C code and link against that library, e.g. "gcc input.c -lswscale", during the link phase.


The web has a few tutorials, some of which are out of date. The files in doc/examples usually use the latest API, however, and should be more trustworthy.

Using libavformat and libavcodec by Martin Böhme is a good overview of the FFmpeg APIs, though quite outdated.

An FFmpeg and SDL Tutorial by Stephen Dranger explains how to write a video player based on FFmpeg (updated source code for that tutorial, by Michael Penkov, is also available).



See also each library's Doxygen documentation, in addition to the tutorials above.

Determining the right values to pass to AVCodecContext:

One user shared this advice for determining all the correct values:

[An] approach to figuring this out is:

  1. Come up with the ffmpeg command line which does what you want.
  2. Use gdb to run the ffmpeg_g binary (the debug build of ffmpeg), set a breakpoint on avcodec_encode_audio2() (or whichever function you need), and inspect the values the ffmpeg tool uses for the AVPacket and for the (audio or otherwise) related fields in AVCodecContext.
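A sketch of such a gdb session (the input file, codec, and options below are invented placeholders, not from the original advice):

```
$ gdb --args ./ffmpeg_g -i input.wav -c:a aac output.m4a
(gdb) break avcodec_encode_audio2
(gdb) run
(gdb) print avctx->sample_rate
(gdb) print avctx->channels
(gdb) print *avctx
```

Whatever values appear there are the ones your own code should set before calling the same function.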


The header files also serve as documentation; for resampling in particular, you may find documentation under the Doxygen group "Audio resampling" in libavcodec.


If you have problems, one place to get help is the libav-user mailing list; its description reads: "This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter." IRC might work as well.