
FFmpeg can basically stream in one of two ways: it either streams to some "other server", which restreams for it, or it streams via UDP directly to some destination host, or possibly to a multicast destination. Servers which can receive from FFmpeg (to restream) include ffserver (Linux only, though Cygwin might work), Wowza Media Server, and Flash Media Server. Even VLC can pick up the stream and then redistribute it, acting as a server. Since FFmpeg is sometimes more efficient than VLC at doing the raw encoding, this can be a useful option compared to doing it all in VLC.
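
For instance, a minimal sketch of both approaches (the RTMP URL and the UDP address are placeholders, not real destinations; the -re flag, explained below, paces a file input at realtime):

ffmpeg -re -i input.mp4 -c:v libx264 -c:a aac -f flv rtmp://example.com/live/mystream
ffmpeg -re -i input.mp4 -c:v libx264 -c:a aac -f mpegts udp://192.168.1.10:1234

The first pushes to an RTMP server, which then restreams it; the second sends an MPEG-TS stream straight to a host over UDP.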

How to stream with several different simultaneous bitrates is described here.
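
As a rough sketch of the idea (bitrates, sizes and stream names here are only illustrative), a single ffmpeg process can encode the same input at two bitrates by listing two outputs, each with its own options:

ffmpeg -i input.mp4 -c:v libx264 -b:v 500k -s 640x360 -c:a aac -f flv rtmp://example.com/live/low -c:v libx264 -b:v 1500k -s 1280x720 -c:a aac -f flv rtmp://example.com/live/high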

NB that when you are testing your streams, you may want to test them with both VLC and FFplay, as FFplay sometimes introduces its own artifacts when it is scaled (it has poor quality scaling). Don't use ffplay as your baseline for determining quality.

Also note that encoding to the x264 "baseline" profile is basically for older iOS devices or the like, see here.
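
With recent builds that is typically requested via the profile option, roughly like this (a sketch; the level value is just an example):

ffmpeg -i input.mp4 -c:v libx264 -profile:v baseline -level 3.0 -c:a aac output.mp4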

FFmpeg's "-re" flag means "Read input at native frame rate. Mainly used to simulate a grab device." I.e., if you want to play a video file, but at realtime speed, use this. My guess is you typically don't want this flag when streaming from a live device.
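
For example, a minimal sketch of streaming a pre-recorded file at realtime speed without re-encoding (the file name and RTMP URL are placeholders):

ffmpeg -re -i recording.flv -c copy -f flv rtmp://example.com/live/mystream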

Here's how one guy broadcast a live stream:

$ ffmpeg -y -loglevel warning -f dshow -i video="screen-capture-recorder" -vf crop=690:388:136:0 -r 30 -s 962x388 -threads 2 -vcodec libx264 -vpre baseline -vpre my_ffpreset -f flv rtmp:///live/myStream.sdp

Here is my FFmpeg preset (libx264-my_ffpreset.ffpreset):

Here is what another person did:

ffmpeg -f dshow -i video="Virtual-Camera" -vcodec libx264 -tune zerolatency -b 900k -f mpegts udp://

And here is what another person did:

ffmpeg -f dshow -i video="screen-capture-recorder":audio="Stereo Mix (IDT High Definition" -vcodec libx264 -preset ultrafast -tune zerolatency -r 10 -async 1 -acodec libmp3lame -ab 24k -ar 22050 -bsf:v h264_mp4toannexb -maxrate 750k -bufsize 3000k -f mpegts udp://

NB that they also had to adjust the rtbufsize in that example. I'm also not entirely sure which presets are "best" or what the available options are. Also note that newer versions of FFmpeg use a different syntax for specifying the preset/tune (-preset and -tune instead of -vpre).
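
On Windows the dshow input accepts an -rtbufsize option for that. A rough sketch using the newer -preset/-tune syntax (device name and UDP address are examples only):

ffmpeg -f dshow -rtbufsize 100M -i video="screen-capture-recorder" -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://192.168.1.10:1234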


You can decrease latency by specifying that I-frames come "more frequently" (or always, in the case of x264's zerolatency), though this can increase frame size/decrease quality; see here for some alternatives. To decrease the CPU usage required to stream, you could (if capturing from a live source) instruct the live source to feed a "smaller stream" (e.g. a webcam stream at 640x480 instead of 1280x1024), or you could set a lower "output quality" setting, or specify a lower output bitrate. Or try a different output codec, or specify new parameters (for instance, a different profile for libx264). Also, specifying -threads 0 (the default) instructs the encoder to use all available CPU cores, which can speed up processing. You could also resize your input first, before transcoding it, so it's not as large. Applying a smoothing filter like hqdn3d before encoding might help it compress better. A sketch combining a few of these ideas follows below.
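
For example, on Linux with a v4l2 webcam (the device path, sizes and address are only examples): capture a smaller frame, denoise it, use all cores, and force a keyframe every 30 frames with -g:

ffmpeg -f v4l2 -video_size 640x480 -i /dev/video0 -vf hqdn3d -c:v libx264 -preset veryfast -tune zerolatency -g 30 -threads 0 -f mpegts udp://192.168.1.10:1234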

You can of course, also set a lower frame rate to decrease cpu usage.

You could set a "maximum bit rate" or a lower "q" (quality) setting. If you're able to capture with a pixel format that matches your output format, that might help, since it avoids a conversion. Using 64-bit instead of 32-bit executables (for those that have a choice) can result in a slight speedup.

In general, the more CPU you use to compress, the better the output image will be, or the larger an image you can handle at the same quality.

Basically, you can decrease either the input frame rate/size or the output frame rate/size, to save CPU.

Sometimes you can change the pixel format (-pix_fmt) to save space, like using rgb16 instead of rgb24, or yuv420p instead of yuv444p (since 4:2:0 stores less information, it compresses better).
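
For example (a sketch; whether it helps depends on your source and encoder):

ffmpeg -i input.mp4 -c:v libx264 -pix_fmt yuv420p output.mp4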

Streaming a simple RTP audio stream from FFmpeg

FFmpeg can stream a single stream using the RTP protocol. In order to avoid buffering problems, the streaming should be done through the -re option, which means that the stream will be streamed in real-time (i.e. it slows it down to simulate a live streaming source).

For example, the following command will generate an audio signal and stream it to port 1234 on localhost:

ffmpeg -re -f lavfi -i aevalsrc="sin(400*2*PI*t)" -ar 8000 -f mulaw -f rtp rtp://127.0.0.1:1234

To play the stream with ffplay, run the command:

ffplay rtp://127.0.0.1:1234


The most popular streaming codec is probably libx264, though if you're streaming to a device which requires a "crippled" baseline h264 implementation, some have argued that the mpeg4 video codec is better. You can also use mpeg2video, or really any other video codec you want, as long as your receiver can decode it and it suits your needs.
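
For instance, an MPEG-2 video stream over UDP might look roughly like this (bitrates and the address are placeholders):

ffmpeg -re -i input.mp4 -c:v mpeg2video -b:v 4000k -c:a mp2 -b:a 192k -f mpegts udp://192.168.1.10:1234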

Outputting files

FFmpeg supports splitting files (using "-f segment" for the output, see segment muxer) into time based chunks, useful for HTTP live streaming style file output.
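
For example, a sketch that copies an H.264 input into 10-second MPEG-TS chunks (the bitstream filter is typically needed when the source is MP4):

ffmpeg -i input.mp4 -c copy -bsf:v h264_mp4toannexb -f segment -segment_time 10 out%03d.ts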