Many platforms offer access to dedicated hardware to perform a range of video-related tasks. Using such hardware allows some operations like decoding, encoding or filtering to be completed faster or using less of other resources (particularly CPU), but may give different or inferior results, or impose additional restrictions which are not present when using software only. On PC-like platforms, video hardware is typically integrated into a GPU (from AMD, Intel or NVIDIA), while on mobile SoC-type platforms it is generally an independent IP core (many different vendors).
Hardware decoders will generate equivalent output to software decoders, but may use less power and CPU to do so. Feature support varies – for more complex codecs with many different profiles, hardware decoders rarely implement all of them (for example, hardware decoders tend not to implement anything beyond YUV 4:2:0 at 8-bit depth for H.264). A common feature of many hardware decoders is the ability to generate output in hardware surfaces suitable for use by other components (with discrete graphics cards, this means surfaces in the memory on the card rather than in system memory) – this is often useful for playback, as no further copying is required before rendering the output, and in some cases it can also be used with encoders supporting hardware surface input to avoid any copying at all in transcode cases.
Hardware encoders typically generate output of significantly lower quality than good software encoders like x264, but are generally faster and do not use much CPU resource. (That is, they require a higher bitrate to make output with the same perceptual quality, or they make output with a lower perceptual quality at the same bitrate.)
Systems with decode and/or encode capability may also offer access to other related filtering features. Scaling and deinterlacing are common; other postprocessing may be available depending on the system. Where hardware surfaces are usable, these filters will generally act on them rather than on normal frames in system memory.
Many different APIs of varying standardisation status are available. FFmpeg offers access to many of these, with varying support.
Platform API Availability

| API                | Linux AMD | Linux Intel | Linux NVIDIA | Windows AMD | Windows Intel | Windows NVIDIA | Android | Apple | Raspberry Pi | Other |
| Direct3D 9 (DXVA2) | N | N | N | Y | Y | Y | N | N | N | N |

Y: Usable. N: Not usable. P: Partial support (some devices / some features).
FFmpeg API Implementation Status

| API | Decoder: Internal | Decoder: Standalone | Decoder: Hardware output | Encoder: Standalone | Encoder: Hardware input | Filtering | Hardware context | Usable from ffmpeg CLI |
| Direct3D 9 / DXVA2 | Y | - | Y | - | - | N | Y | Y |

Y: Implemented. -: Not applicable to this API. N: Possible but not implemented. F: Not yet integrated, but work is being done in this area.
Use with the ffmpeg command-line tool
Internal hwaccel decoders are enabled via the -hwaccel option. The software decoder starts normally, but if it detects a stream which is decodable in hardware then it will attempt to delegate all significant processing to that hardware. If the stream is not decodable in hardware (for example, it is an unsupported codec or profile) then it will still be decoded in software automatically. If the hardware requires a particular device to function (or needs to distinguish between multiple devices, say if several graphics cards are available) then one can be selected using the -hwaccel_device option.
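For example, a minimal decode test using the VAAPI hwaccel might look like this (the device path and file name are illustrative):
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i input.mp4 -f null -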
External wrapper decoders are used by setting a specific decoder with the -codec:v option. Typically they are named codec_api (for example: h264_cuvid). These decoders require the codec to be known in advance, and do not support any fallback to software if the stream is not supported.
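For example, decoding an H.264 stream with the cuvid wrapper decoder (file names are illustrative):
ffmpeg -c:v h264_cuvid -i input.mp4 output.mkv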
Encoder wrappers are selected by -codec:v. Encoders generally have lots of options – look at the documentation for the particular encoder for details.
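For example, encoding with the NVENC H.264 wrapper (the bitrate and file names are illustrative):
ffmpeg -i input.mp4 -c:v h264_nvenc -b:v 5M output.mp4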
Hardware filters can be used in a filter graph like any other filter. Note, however, that they may not support any formats in common with software filters – in such cases it may be necessary to make use of hwupload and hwdownload filter instances to move frame data between hardware surfaces and normal memory.
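As a sketch of this, the following scales on the GPU with a VAAPI filter and then downloads the frames for a software encoder (the size and file names are illustrative):
ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -i input.mp4 -vf 'scale_vaapi=w=1280:h=720,hwdownload,format=nv12' -c:v libx264 output.mp4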
VDPAU
Video Decode and Presentation API for Unix. Developed by NVIDIA for Unix/Linux systems. To enable this you typically need the libvdpau development package in your distribution, and a compatible graphics card.
Note that VDPAU cannot be used to decode frames in memory; the compressed frames are sent by libavcodec to the GPU device supported by VDPAU, and then the decoded image can be accessed using the VDPAU API. This is not done automatically by FFmpeg, but must be done at the application level (see for example the ffmpeg_vdpau.c file used by ffmpeg.c). Also, note that with this API it is not possible to move the decoded frame back to RAM, for example in case you need to encode the decoded frame again (e.g. when doing transcoding on a server).
Several decoders are currently supported through VDPAU in libavcodec, in particular H.264, MPEG-1/2/4, and VC-1.
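A minimal decode test from the command-line tool might look like this (file name illustrative):
ffmpeg -hwaccel vdpau -i input.mp4 -f null -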
VAAPI
Video Acceleration API (VAAPI) is a non-proprietary and royalty-free open source software library ("libva") and API specification, initially developed by Intel but usable with devices from other vendors as well.
It can be used to access the Quick Sync hardware in Intel GPUs and the UVD/VCE hardware in AMD GPUs. See VAAPI.
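A typical full VAAPI transcode might look like the following (the device path and file names are illustrative; see the VAAPI page for details):
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4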
DXVA2
DirectX Video Acceleration API, developed by Microsoft (supported on Windows and Xbox 360).
Several decoders are currently supported, in particular H.264, MPEG-2, VC-1 and WMV 3.
DXVA2 hardware acceleration only works on Windows. In order to build FFmpeg with DXVA2 support, you need to install the dxva2api.h header.
For MinGW this can be done by downloading the header maintained by VLC and installing it in the include path. With MinGW-w64, dxva2api.h is provided by default. One way to install mingw-w64 is through a pacman repository, using one of the two following commands, depending on the architecture:
pacman -S mingw-w64-i686-gcc
pacman -S mingw-w64-x86_64-gcc
To enable DXVA2, use the --enable-dxva2 ffmpeg configure switch.
To test decoding, use the following command:
ffmpeg -hwaccel dxva2 -threads 1 -i INPUT -f null - -benchmark
VideoToolbox
VideoToolbox is only supported on macOS. H.264 decoding is available in FFmpeg/libavcodec.
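A simple decode test might look like this (file names are illustrative):
ffmpeg -hwaccel videotoolbox -i input.mp4 output.mp4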
NVENC/NVDEC
NVENC and NVDEC are NVIDIA's hardware-accelerated encoding and decoding APIs. The decoding API used to be called CUVID. They can be used for encoding and decoding on Windows and Linux. FFmpeg refers to NVENC/NVDEC as CUDA.
NVENC can be used for H.264 and HEVC encoding. FFmpeg supports NVENC through the h264_nvenc and hevc_nvenc encoders. In order to enable it in FFmpeg you need:
- A supported GPU
- Supported drivers for your operating system
- The NVIDIA Codec SDK or compiling FFmpeg with --enable-cuda-llvm
- ffmpeg configured with --enable-nvenc (default if the drivers are detected while configuring)
After compilation, you can use NVENC like any other encoder:
ffmpeg -i input -c:v h264_nvenc -profile:v high444p -pix_fmt yuv444p -preset default output.mp4
You can see available presets, other options, and encoder info with ffmpeg -h encoder=h264_nvenc or ffmpeg -h encoder=hevc_nvenc.
NVDEC offers decoders for H.264, HEVC, MJPEG, MPEG-1/2/4, VP8/VP9, VC-1. Codec support varies by hardware (see the GPU compatibility table).
Note that FFmpeg offers both NVDEC and CUVID hwaccels. They differ in how frames are decoded and forwarded in memory. The full set of codecs is available only on Pascal hardware, which adds VP9 and 10-bit support. The note about missing ffnvcodec from the NVENC section applies to NVDEC as well.
Sample decode using CUDA:
ffmpeg -hwaccel cuda -i input output
Add -hwaccel_output_format cuda to keep the decoded frames in GPU memory, e.g. when passing them straight on to NVENC.
Sample decode using CUVID:
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i input output
Full hardware transcode with CUVID and NVENC:
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i input -c:v h264_nvenc -preset slow output
If ffmpeg was compiled with support for libnpp, it can be used to insert a GPU-based scaler into the chain:
ffmpeg -hwaccel_device 0 -hwaccel cuvid -i input -vf scale_npp=-1:720 -c:v h264_nvenc -preset slow output.mkv
The -hwaccel_device option can be used to specify the GPU to be used by the hwaccel in ffmpeg.
libmfx
libmfx is a proprietary library from Intel for use of Quick Sync hardware on both Linux and Windows. On Windows it is the primary way to use the more advanced functions beyond those accessible via DXVA2/D3D11VA, particularly encoding. On Linux it has a very restricted feature set and is hard to use, but may be helpful for some use cases that need maximum throughput.
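As a sketch, a full Quick Sync transcode through the QSV wrappers might look like this (the bitrate and file names are illustrative):
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 -c:v h264_qsv -b:v 5M output.mp4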
OpenCL
OpenCL can be used for a number of filters. To build, OpenCL 1.2 or later headers are required, along with an ICD or ICD loader to link against. It is recommended (but not required) to link with the ICD loader, so that the implementation can be chosen at run-time rather than build-time. At run-time, an OpenCL 1.2 driver is required – most GPU manufacturers will provide one as part of their standard drivers. CPU implementations are also usable, but may be slower than using native filters in ffmpeg directly.
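A minimal sketch of running an OpenCL filter on otherwise software frames (the device specifier and the choice of unsharp_opencl are illustrative):
ffmpeg -init_hw_device opencl=ocl:0.0 -filter_hw_device ocl -i input.mp4 -vf 'format=yuv420p,hwupload,unsharp_opencl,hwdownload,format=yuv420p' output.mp4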
OpenCL can interoperate with other GPU APIs to avoid redundant copies between GPU and CPU memory. The supported methods are listed below; a mapping sketch follows the list.
- DXVA2: NV12 surfaces only, all platforms.
- D3D11: NV12 textures on Intel only.
- VAAPI: all surface types.
- ARM Mali: all surface types, via DRM object sharing.
- libmfx: NV12 surfaces only, via VAAPI or DXVA2.
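As a sketch of the VAAPI case (the device path and the filter used are illustrative), decoded VAAPI surfaces can be mapped into OpenCL, filtered, and mapped back without a round trip through system memory:
ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device opencl=ocl@va -hwaccel vaapi -hwaccel_output_format vaapi -filter_hw_device ocl -i input.mp4 -vf 'hwmap,unsharp_opencl,hwmap=derive_device=vaapi:reverse=1' -c:v h264_vaapi output.mp4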
AMD UVD/VCE
AMD UVD is usable for decode via VDPAU and VAAPI in Mesa on Linux. VCE also has some initial support for encode via VAAPI, but should be considered experimental.
On Windows, UVD is accessible via standard DXVA2/D3D11VA APIs, while VCE is supported via AMF.
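On Windows, a minimal AMF encode might look like this (file names are illustrative):
ffmpeg -i input.mp4 -c:v h264_amf output.mp4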