Changes between Version 71 and Version 72 of StreamingGuide


Timestamp: Dec 31, 2014, 5:44:40 AM
Author: c-14
Comment: replace x264EncodingGuide with Encode/H.264

  • StreamingGuide

    v71 v72  
    50  50  NB that they also (for DirectShow devices) had to adjust the rtbufsize in that example.
    51  51
    52      You can see a description of what some of these mean (for example, the bufsize and bitrate settings) in the [[x264EncodingGuide]].
        52  You can see a description of what some of these mean (for example, the bufsize and bitrate settings) in the [[Encode/H.264]].
    53  53
    54  54  Here is how you stream to twitch.tv or similar services (RTMP protocol) using ffmpeg 1.0 or ffmpeg-git (tested on 2012-11-12); this also works for pulseaudio users:
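For reference, here is a minimal sketch of such a stream with a recent ffmpeg (the capture sizes, bitrates, and STREAM_KEY are placeholders, not values from this page):

{{{
# Sketch only: X11 screen capture plus pulseaudio input, encoded with
# libx264/aac and pushed over RTMP. STREAM_KEY and all sizes/bitrates
# are placeholders to substitute.
ffmpeg -f x11grab -framerate 30 -video_size 1280x720 -i :0.0 \
       -f pulse -i default \
       -c:v libx264 -preset veryfast -b:v 2500k -maxrate 2500k -bufsize 5000k \
       -c:a aac -b:a 128k -ar 44100 \
       -f flv rtmp://live.twitch.tv/app/STREAM_KEY
}}}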
     
    68  68  == Latency ==
    69  69
    70      You may be able to decrease initial "startup" latency by specifying that I-frames come "more frequently" (or basically always, in the case of [[x264EncodingGuide|x264]]'s zerolatency setting), though this can increase frame size and decrease quality; see [http://mewiki.project357.com/wiki/X264_Encoding_Suggestions here] for more background.  For typical streams, x264 inserts an I-frame every 250 frames, which means that new clients connecting to the stream may have to wait up to 250 frames before they can start receiving it (or start with old data).  So increasing the I-frame frequency makes the stream larger but might decrease latency.  For real-time captures you can also decrease audio latency in Windows DirectShow by using the dshow audio_buffer_size [http://ffmpeg.org/ffmpeg.html#Options setting].  You can also decrease latency by tuning any broadcast server you are using to minimize latency, and finally by tuning the receiving client not to "cache" incoming data, which increases latency.
        70  You may be able to decrease initial "startup" latency by specifying that I-frames come "more frequently" (or basically always, in the case of [[Encode/H.264|x264]]'s zerolatency setting), though this can increase frame size and decrease quality; see [http://mewiki.project357.com/wiki/X264_Encoding_Suggestions here] for more background.  For typical streams, x264 inserts an I-frame every 250 frames, which means that new clients connecting to the stream may have to wait up to 250 frames before they can start receiving it (or start with old data).  So increasing the I-frame frequency makes the stream larger but might decrease latency.  For real-time captures you can also decrease audio latency in Windows DirectShow by using the dshow audio_buffer_size [http://ffmpeg.org/ffmpeg.html#Options setting].  You can also decrease latency by tuning any broadcast server you are using to minimize latency, and finally by tuning the receiving client not to "cache" incoming data, which increases latency.
    71  71
    72  72  Sometimes audio codecs also introduce some latency of their own.  You may be able to get less latency by using speex or opus, for example, in place of libmp3lame.
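As a sketch of the above (the input, GOP size, and udp:// target are placeholder assumptions, not from this page), forcing more frequent I-frames with -g plus x264's zerolatency tune might look like:

{{{
# Hypothetical example with a recent ffmpeg: an I-frame every 30 frames
# instead of x264's default 250, plus the zerolatency tune; INPUT and the
# udp:// target are placeholders.
ffmpeg -i INPUT \
       -c:v libx264 -tune zerolatency -g 30 \
       -c:a libopus \
       -f mpegts udp://127.0.0.1:1234

# On Windows, a smaller dshow audio buffer (in milliseconds) can also help:
# ffmpeg -f dshow -audio_buffer_size 50 -i audio="Microphone" ...
}}}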
     
    110 110 Basically, the easiest way to save CPU is to decrease the input frame rate/size or the output frame rate/size.
    111 111
    112     Also, if capturing from a live source, you could instruct the source to feed a "smaller stream" (ex: a 640x480 webcam stream instead of 1280x1024), set a lower output quality setting (q level), or specify a lower desired output bitrate (see [[x264EncodingGuide]] for background).  Or try a different output codec, or specify new parameters to your codec (for instance, a different profile or preset for [[x264EncodingGuide|libx264]]).  Specifying -threads 0 instructs the encoder to use all available CPU cores, which is the default.  You could also resize the input before transcoding it, so it's not as large.  Applying a smoothing filter like hqdn3d before encoding might help it compress better, yielding smaller files.
        112 Also, if capturing from a live source, you could instruct the source to feed a "smaller stream" (ex: a 640x480 webcam stream instead of 1280x1024), set a lower output quality setting (q level), or specify a lower desired output bitrate (see [[Encode/H.264]] for background).  Or try a different output codec, or specify new parameters to your codec (for instance, a different profile or preset for [[Encode/H.264|libx264]]).  Specifying -threads 0 instructs the encoder to use all available CPU cores, which is the default.  You could also resize the input before transcoding it, so it's not as large.  Applying a smoothing filter like hqdn3d before encoding might help it compress better, yielding smaller files.
    113 113
    114 114 You can also set a lower output frame rate, which of course decreases CPU usage.
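Putting several of these ideas together, a hypothetical command (INPUT, OUTPUT, and all values are placeholders) might be:

{{{
# Sketch only: downscale to 640 px wide (keeping aspect ratio), denoise with
# hqdn3d, drop the output to 15 fps, and use a fast libx264 preset.
# -threads 0 (use all cores) is shown explicitly, though it is the default.
ffmpeg -i INPUT \
       -vf "scale=640:-2,hqdn3d" \
       -r 15 \
       -c:v libx264 -preset veryfast -threads 0 \
       OUTPUT.mp4
}}}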
     
    167 167  -i 'udp://localhost:5000?fifo_size=1000000&overrun_nonfatal=1' tells ffmpeg where to pull the input stream from. The parts after the ? are probably not needed most of the time, but I did need them.
    168 168
    169      -crf 30 sets the Constant Rate Factor. That's an x264 option that tries to keep video quality reasonably consistent while varying the bitrate during more 'complicated' scenes, etc. A value of 30 allows somewhat lower quality and bitrate.  See [[x264EncodingGuide]].
        169  -crf 30 sets the Constant Rate Factor. That's an x264 option that tries to keep video quality reasonably consistent while varying the bitrate during more 'complicated' scenes, etc. A value of 30 allows somewhat lower quality and bitrate.  See [[Encode/H.264]].
    170 170
    171 171  -preset ultrafast, as the name implies, provides the fastest possible encoding; where there is a tradeoff between quality and encoding speed, it goes for speed. This might be needed if you are going to transcode multiple streams on one machine.
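Taken together, a command using these options might look like the following sketch (the output address and audio codec are assumptions, not from this page):

{{{
# Hypothetical example: re-encode an incoming UDP stream with the options
# described above; the output address and audio settings are placeholders.
ffmpeg -i 'udp://localhost:5000?fifo_size=1000000&overrun_nonfatal=1' \
       -c:v libx264 -crf 30 -preset ultrafast \
       -c:a aac \
       -f mpegts udp://localhost:5001
}}}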