
DirectShow

FFmpeg can take input from "DirectShow" devices on your Windows computer. These are typically video or audio capture devices attached to the machine.

c:\> ffmpeg -list_devices true -f dshow -i dummy
ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
  libavutil      51. 74.100 / 51. 74.100
  libavcodec     54. 65.100 / 54. 65.100
  libavformat    54. 31.100 / 54. 31.100
  libavdevice    54.  3.100 / 54.  3.100
  libavfilter     3. 19.102 /  3. 19.102
  libswscale      2.  1.101 /  2.  1.101
  libswresample   0. 16.100 /  0. 16.100
[dshow @ 03ACF580] DirectShow video devices
[dshow @ 03ACF580]  "Integrated Camera"
[dshow @ 03ACF580]  "screen-capture-recorder"
[dshow @ 03ACF580] DirectShow audio devices
[dshow @ 03ACF580]  "Internal Microphone (Conexant 2"
[dshow @ 03ACF580]  "virtual-audio-capturer"
dummy: Immediate exit requested

Then use the listed devices like

c:\> ffmpeg -f dshow -i video="Integrated Camera" out.mp4

You can also pass the device certain parameters that it needs. For instance, a webcam might allow you to capture at 1024x768 at up to 5 fps, or at 640x480 at 30 fps. You can enumerate the available options like this:

c:\> ffmpeg -f dshow -list_options true -i video=<video device>

For example:

c:\> ffmpeg -f dshow -list_options true -i video="Integrated Camera"
ffmpeg version N-45279-g6b86dd5 Copyright (c) 2000-2012 the FFmpeg developers
  built on Oct 10 2012 17:30:47 with gcc 4.7.1 (GCC)
  configuration:...
  libavutil      51. 74.100 / 51. 74.100
  libavcodec     54. 65.100 / 54. 65.100
  libavformat    54. 31.100 / 54. 31.100
  libavdevice    54.  3.100 / 54.  3.100
  libavfilter     3. 19.102 /  3. 19.102
  libswscale      2.  1.101 /  2.  1.101
  libswresample   0. 16.100 /  0. 16.100
[dshow @ 01D4F3E0] DirectShow video device options
[dshow @ 01D4F3E0]  Pin "Capture"
[dshow @ 01D4F3E0]   pixel_format=yuyv422  min s=640x480 fps=15 max s=640x480 fps=30
[dshow @ 01D4F3E0]   pixel_format=yuyv422  min s=1280x720 fps=7.5 max s=1280x720 fps=7.5
[dshow @ 01D4F3E0]   vcodec=mjpeg  min s=640x480 fps=15 max s=640x480 fps=30
[dshow @ 01D4F3E0]   vcodec=mjpeg  min s=1280x720 fps=15 max s=1280x720 fps=30
video=Integrated Camera: Immediate exit requested

You can see in this particular instance that the device can either stream raw video to you in a pixel_format (yuyv422 in this case), or as an mjpeg stream.

You can specify the type (mjpeg), size (1280x720), and frame rate (15 fps) that you want the device to deliver, like this:

ffmpeg -f dshow -s 1280x720 -r 15 -vcodec mjpeg -i video="Integrated Camera" out.avi
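
Similarly, if you want one of the raw formats from the listing instead, you can request it with "-pixel_format"; the size, frame rate, and device name below are just taken from the example listing above:

c:\> ffmpeg -f dshow -pixel_format yuyv422 -s 640x480 -r 30 -i video="Integrated Camera" out.avi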

Sometimes it helps to specify "-vcodec copy" to save the CPU cost of re-encoding, if you can receive the data in some pre-encoded format, like mjpeg in this instance.
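
For instance, assuming the same webcam as above, something like this would request the mjpeg stream from the device and write it out without re-encoding (the output filename and container are just illustrative):

c:\> ffmpeg -f dshow -vcodec mjpeg -s 1280x720 -r 15 -i video="Integrated Camera" -vcodec copy out.mkv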

Also note that "The input string is in the format video=<video device name>:audio=<audio device name>. It is possible to have two separate inputs (like -f dshow -i audio=foo -f dshow -i video=bar) but my limited tests have shown better synchronization when both are used in the same input."
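
For example, combining the video and audio devices from the listing above into a single input (your device names will differ):

c:\> ffmpeg -f dshow -i video="Integrated Camera":audio="Internal Microphone (Conexant 2" out.mp4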

See the FFmpeg dshow input device documentation for a list of more dshow options you can specify; for instance you can decrease latency on audio devices, specify a video device by "index" if two have the same name displayed, etc.
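
A couple of illustrative sketches of such options (the numeric values here are arbitrary, and the device names are the ones from the listing above):

c:\> ffmpeg -f dshow -audio_buffer_size 50 -i audio="virtual-audio-capturer" out.wav

requests a smaller (50 ms) audio device buffer to reduce latency, and

c:\> ffmpeg -f dshow -video_device_number 1 -i video="Integrated Camera" out.mp4

picks the second video device with that name (counting from 0).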

Buffering

By default FFmpeg captures frames from the input and then does whatever you told it to do (for instance, re-encoding them and saving them to an output file). If it receives a frame "too early" (while the previous frame isn't finished being processed yet), it will discard that frame so that it can keep up with the real-time input. You can adjust this by setting the "-rtbufsize" parameter, though note that if your encoding process can't keep up, you'll eventually start losing frames just the same (and using it at all can introduce a bit of latency). It may still be helpful to specify some buffer, however, otherwise frames may be needlessly dropped.
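
For example, to capture with a larger real-time buffer (the 100M value and device name are just illustrative):

c:\> ffmpeg -f dshow -rtbufsize 100M -i video="Integrated Camera" out.mp4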

See StreamingGuide for some tips on tweaking encoding (the sections on latency and cpu usage). For instance, you could save the capture to a very fast codec, then re-encode it later.
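
As a rough sketch of that two-step approach (the codec choices here are just one possibility):

c:\> ffmpeg -f dshow -i video="Integrated Camera" -vcodec huffyuv temp.avi

captures to a fast lossless codec, and then later

c:\> ffmpeg -i temp.avi -vcodec libx264 -preset slow out.mp4

re-encodes it at your leisure.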

TroubleShooting

If you have a video capture card (ex: AverMedia, and possibly some BlackMagic cards, though those may have a separate unrelated problem; also some BlackMagic cards don't have the right inputs set up, ask on the forum), it may not (yet) work out of the box with FFmpeg, as FFmpeg presently lacks crossbar support. The current workaround is to install the AmerecTV software, which presents the capture card as DirectShow devices, and then feed those AmerecTV DirectShow devices into FFmpeg. See here.

Related

AviSynth Input

FFmpeg can also take DirectShow input indirectly, by creating an AviSynth script (.avs file) that itself gets its input from a GraphEdit file (.GRF) exposing a pin of your capture source (or any filter, really). For example, a script "yo.avs" with this content:

DirectShowSource("push2.GRF", fps=35, audio=False, framecount=1000000)
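
You would then feed that script to ffmpeg like any other input (assuming your FFmpeg build was compiled with AviSynth support):

c:\> ffmpeg -i yo.avs out.mp4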

ffdshow tryouts

ffdshow tryouts is a separate project that basically wraps FFmpeg's core libraries (libavcodec, etc.) and presents them as DirectShow filter wrappers that normal Windows applications can use for decoding video, etc. It is not directly related to "ffmpeg.exe" at all.

Support

You can ask questions/comments about DirectShow on the zeranoe forum.