Opened 6 years ago

Closed 6 years ago

Last modified 6 years ago

#6975 closed defect (invalid)

Recent change to sync video output to screen refresh conflicts with -noframedrop

Reported by: Misaki
Owned by:
Priority: normal
Component: ffplay
Version: git-master
Keywords:
Cc:
Blocked By:
Blocking:
Reproduced by developer: no
Analyzed by developer: no

Description

ffplay was recently changed so that it won't display frames faster than the screen refresh rate. With -noframedrop, this means ffplay can't catch up when it needs to display frames faster than the refresh rate, which is typically 60 Hz or slightly under (e.g. 59.8).

This can quickly be tested by creating a high-fps video, then playing it back:

$ ffmpeg -filter_complex color=black:r=120 -t 10 black.mp4
$ ffplay -noframedrop black.mp4

This will show the video falling behind (visible as a growing M-V value in ffplay's status line) if your refresh rate is less than 120 Hz.
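For reference (not part of the original report): on an X11 system the current refresh rate can be checked with xrandr, where the asterisk marks the active mode.

$ xrandr | grep '\*'   # the starred mode shows the current refresh rate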

I don't know if this is an enhancement request or a bug report; I just want an option to make it work like it did in the past, so I can play 60 fps video without dropping frames when complexity spikes.

Even if someone can't or won't fix this bug, it would be nice to confirm it. I've tested with compiz, metacity and GNOME/Ubuntu, so I don't think it's due to a non-ffmpeg setting, but I can't be sure.

Change History (5)

comment:1 by Misaki, 6 years ago

What probably happened is that ffplay now pays attention to the vblank_mode environment variable. From https://stackoverflow.com/questions/17196117/disable-vertical-sync-for-glxgears

$ time ( ffplay -autoexit -noframedrop black.mp4 )
real: 20 seconds

$ time ( vblank_mode=0 ffplay -autoexit -noframedrop black.mp4 )
real: 13.4 seconds

I thought that vblank_mode=0 time ffplay [...] reduced it to 10 seconds, but what was actually happening is that I have ffplay aliased to 'ffplay -hide_banner -noframedrop', and the combination of the temporary environment variable and 'time' keeps the alias from being used (shells only expand aliases on the first word of a command).

I think that's explanation enough, though; ffplay just doesn't speed late frames up to 2x speed, only to 1.5x.

I assume this environment variable has been set this way for a long time, since I was familiar with glxgears syncing to vertical refresh.

This may still be a bug, if users don't expect to have to deal with this non-ffmpeg setting in order to get 60 fps videos to work with -noframedrop.
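For anyone who just wants the old behaviour back, a minimal sketch of a workaround (assuming a Mesa/DRI OpenGL driver, which is what honors vblank_mode; NVIDIA's proprietary driver uses __GL_SYNC_TO_VBLANK instead) is to bake the variable into the alias:

$ alias ffplay='vblank_mode=0 ffplay -hide_banner -noframedrop'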

comment:2 in reply to: description, by Carl Eugen Hoyos, 6 years ago

Component: undetermined → ffplay
Resolution: → invalid
Status: new → closed

Replying to Misaki:

$ ffmpeg -filter_complex color=black:r=120 -t 10 black.mp4
$ ffplay -noframedrop black.mp4

This will show video falling behind as shown with M-V, if your refresh rate is less than 120.

This is the expected behaviour afaict (what else would -noframedrop mean?), and it is also reproducible with old versions of FFplay. Why do you believe there is a bug?

$ time ffplay -noframedrop black.mp4 -autoexit
ffplay version 0.9, Copyright (c) 2003-2011 the FFmpeg developers
  built on Nov 22 2012 09:29:11 with gcc 4.7.1 20120723 [gcc-4_7-branch revision 189773]
  configuration: --enable-gpl
  libavutil    51. 32. 0 / 51. 32. 0
  libavcodec   53. 42. 0 / 53. 42. 0
  libavformat  53. 24. 0 / 53. 24. 0
  libavdevice  53.  4. 0 / 53.  4. 0
  libavfilter   2. 53. 0 /  2. 53. 0
  libswscale    2.  1. 0 /  2.  1. 0
  libpostproc  51.  2. 0 / 51.  2. 0
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'black.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf55.48.100
  Duration: 00:00:10.00, start: 0.000000, bitrate: 27 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 320x240 [SAR 1:1 DAR 4:3], 15 kb/s, 120 fps, 120 tbr, 15360 tbn, 240 tbc
    Metadata:
      handler_name    :
   9.97 A-V:  0.000 fd=   0 aq=    0KB vq=    0KB sq=    0B f=0/0   0/0

real    0m20.704s
user    0m0.549s
sys     0m0.126s

comment:3 by Misaki, 6 years ago

I used to be able to play 1080p 60 fps video with ffplay -noframedrop. Then I upgraded from Ubuntu 16.10 to Ubuntu 17.10, and now ffplay plays at only about 80% speed with -noframedrop. It won't even play a 720p 60 fps file without gradually falling behind. It isn't a lack of CPU resources.

Many things changed other than ffplay. An older version of ffplay behaving the same way (I would have tested this myself, but static ffmpeg packages don't include ffplay) points to the change being somewhere else.

Maybe this always happens with OpenGL-based rendering, and my system changed so that ffplay now uses OpenGL where it previously didn't, or the way OpenGL works on my system changed.

-noframedrop can be helpful when this issue doesn't occur, because dropped frames can look worse than changes in video speed. VP9 is particularly prone to jumps in decoding complexity upon movement, leading to sections that will never display no matter how many times you try it, unless you use -noframedrop. H.264 can just drop B-frames, but VP9 falls behind and drops whole sections (1+ second) of video.

It would be possible to prevent this issue from occurring. Ignoring the environment variable, or assuming a different default value for it, would amount to the judgement call that "the user sets this variable to stop 3D rendering from drawing frames that are never seen, and does not intend for it to prevent 60 fps video from being played correctly in some cases."
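A related knob, offered only as an untested guess: ffplay renders through SDL2, and SDL2 hints can be overridden by environment variables of the same name, including SDL_RENDER_VSYNC. Whether ffplay's renderer setup actually honors it, I haven't verified.

$ SDL_RENDER_VSYNC=0 ffplay -noframedrop black.mp4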

comment:4 by Misaki, 6 years ago

Another update to this closed ticket, which, like probably every other issue I've reported that wasn't already fixed (a manual-page issue was), won't get fixed!

Setting vblank_mode=0 does not fix everything: I still can't play 1080p 60 fps video in realtime with -noframedrop.

Why this won't get fixed:
1) It only affects video playback at some border of performance. People with recent computers might be able to play 3840x2160 video at 60 fps without running into whatever issue I have.
2) It only affects -noframedrop, which is not the default.

A random, existing 1080p, 60fps h.264 video with High profile at 3200 kb/s performs slightly better than a completely black video at 28 kb/s.

(
The random high bitrate video took 11.15 seconds to play with option '-t 10', ending with A-V at 0.53. User CPU of 9.7 seconds, system CPU 2.3.

The black video took 11.50 seconds to run, ending with M-V at 1.0, user CPU 6.2, sys 2.3.

When tested again with CPUs at performance setting = locked to highest frequency, the high bitrate video was still ahead at 10.88 vs 11.08 seconds, with A-V at 0.285 vs 0.635 for the black video. Black video user CPU down to 4.95.
)

In this case, the CPU can't be the limiting factor. It doesn't even saturate one CPU core.

Tested 3840x2160 video at 15 fps: it ran without the slowdown. The pixel rate is the same as for the 1920x1080 60 fps video.
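The arithmetic, for the record:

$ echo $((3840*2160*15))   # 124416000 pixels/second
$ echo $((1920*1080*60))   # 124416000 pixels/second, identical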

3840x2160 at 30 fps does have slowdown. With the CPUs locked to high performance:

10.76 M-V:  0.813 fd=   0 aq=    0KB vq=    0KB sq=    0B f=0/0   

real	0m11.339s
user	0m10.104s
sys	0m3.948s

Completely black VP9 video at 1920x1080, 60 fps decodes slightly more easily, with user=4.4 and sys=2.1, but is still delayed, ending at M-V=0.356 in 10.8 seconds.

All the above results are with vblank_mode=0.

Tested mpeg1video, mpeg2video, ffv1, and mpeg4; all of them had slowdown. mpeg4 was the fastest, only 0.08 seconds late after 10 seconds. For the codecs tested, not setting vblank_mode=0 made the slowdown worse, though for mpeg4 M-V only went up to 0.238. (All of these codecs produced bitrates of 700~1500 kbps for completely black video.)
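For reproducibility, the per-codec test files can be generated along these lines (a sketch; the container and filenames are my choice rather than a record of the exact commands used):

$ for c in mpeg1video mpeg2video ffv1 mpeg4; do ffmpeg -f lavfi -i color=black:s=1920x1080:r=60 -t 10 -c:v $c black_$c.avi; done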

There does seem to be something significant about the framerate, given the difference between 2160p ('4K') at 15 fps and 1080p at 60 fps.

3840x4320 at 30 fps played back in 18~20 sec after a slow first trial, with user CPU around 19.5~20 and sys around 7.9. Idle CPU as seen in top was around 15% for this, compared to 27% idle for 3840x2160 30 fps and 55~60% idle for 1920x1080 60 fps.

This seems like it could be consistent with some rendering process used by ffplay being single-threaded. Correction: it takes ffmpeg 14 userspace seconds to decode the 4Kx4K video with '-f null -', so if ffplay is limited by a single-threaded step, it would have to be decoding. I'm not sure that makes sense, though, if decoding is done by the ffmpeg/libav libraries and we assume the change responsible for this issue isn't in ffmpeg/ffplay.
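One way to probe the single-thread hypothesis directly (a sketch; 'black_4kx4k.mp4' stands in for the 3840x4320 file) is to force a single decoder thread in ffmpeg and compare wall-clock time against the default multi-threaded run:

$ time ffmpeg -threads 1 -i black_4kx4k.mp4 -f null -
$ time ffmpeg -i black_4kx4k.mp4 -f null -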

Even if there is a single-thread limitation, it doesn't explain the slowdown with vblank_mode=0 when no CPU is saturated. In fact, having noticed that 'top' can show CPUs separately, load is distributed evenly between them, though that might not rule out a single-threading cause.

I tried to test how many frames were being dropped with -framedrop (the default), but the first attempt failed. Adding

-vf "drawtext='fontcolor=white:fontsize=96:x=(W-tw)/2+4:y=(H-th)/2+4:text=%{n}:alpha=0.5',boxblur,drawtext='fontcolor=white:fontsize=96:x=(W-tw)/2:y=(H-th)/2:text=%{n}'"

seemed to be going well, with no obvious skipping and no M-V difference, but the numbers were actually counting up much too slowly: for the VP9 video the counter was only around 190 when the video ended, instead of 600.

So I encoded the VP9 again with the counter burned in; even at the fastest settings ('-speed 5 -quality realtime', default bitrate), encoding only ran at about 15 fps. I expect VP9 to show more obvious frame dropping than H.264, unless maybe the H.264 has B-frames disabled.

Result: no obvious frame dropping from -framedrop. Without vblank_mode=0 but with -noframedrop, the end delay is 0.972; with vblank_mode=0 as well, it is 0.692. Times are around 4.7 user, 2 sys, like before.

-v trace doesn't list dropped frames. More precise testing of how many frames drop when the CPU isn't saturated needs a better method of visual detection. That might help diagnose the cause, if it turns out no frames are actually being dropped, and could also show the potential improvement from getting ffplay, under whatever system configuration I have (drivers, etc.), to use all available CPU to prevent slowdown with -noframedrop.


4Kx4K at 30 fps does drop plenty of frames for me. CPU usage with -framedrop (optionally with -loop 0 on a 2-second video) is close to maximum: idle% is often down to 5% and occasionally 0%. CPU with -noframedrop is 25% idle, as mentioned before. Even if maybe half the frames are being dropped, user CPU from 'time' is down only 20%; sys CPU is less than half, which matches the proportion of dropped frames. This is with no B-frames and -refs 1, so I'm surprised ffplay can skip anything at all. It's possible that ffplay is still decoding all frames and just saving CPU by not displaying them.

I don't know if something other than CPU could limit the display of high-resolution video. I'm using compiz and my display is only 1280x800, which I think means bus bandwidth shouldn't be a problem even when a 4Kx4K 30 fps video is being displayed. I have no idea how the video card is involved in this.

It would still help to do visual detection of framedropping in the 1080p 60fps video.
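A sketch of such a detector, following the same burn-in approach as the VP9 test ('input.mp4' and 'counted.mp4' are placeholders): burn the frame number into a copy of the file ahead of time, so the drawtext cost is paid during encoding rather than during playback, then look for gaps in the displayed numbers:

$ ffmpeg -i input.mp4 -vf "drawtext=fontcolor=white:fontsize=96:x=(W-tw)/2:y=(H-th)/2:text=%{n}" counted.mp4
$ ffplay -framedrop counted.mp4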


Reducing the size of the output frames reduces the slowdown. Adding -vf scale=1280:-1 to the 1080p 60 fps video increases user CPU from 4.8 to 7.8 but reduces the video lag after 10 sec from 0.6 to 0. Tellingly, it also reduces system CPU from 2.15 to 1.23.
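(The scaled test was along these lines; the input filename is a placeholder:)

$ time vblank_mode=0 ffplay -noframedrop -autoexit -vf scale=1280:-1 black_1080p60.mp4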

It appears that if ffplay routes a video source to nullsink, it doesn't bother fully decoding that source. (Edit: it does increase CPU and cause slowdown, but much less than expected for full decoding.) With a command like time vblank_mode=0 ffplay -noframedrop "$(ls -t *.*|head -6|tail -1)" -infbuf -autoexit -vf nullsink,color=black:3840x2160:30,trim=0:5 & there's no slowdown. At 60 fps, there is a slowdown: M-V=2.77 at time 7.78.

This is much less than the slowdown for decoded black frames at the same resolution. Doubling the fps of the 4K 30 fps file with -vf setpts=PTS/2, the decoded black video uses 9~10 sec of user CPU for 5 sec of video at 60 fps compared to 2.6 sec for the filter-generated stream, 3.9 sec of sys CPU compared to 2.5 sec, and ends 5.6 sec late after 5 sec of video input compared to 2.6~2.8 sec late for the newly-generated stream.
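(The frame-doubling test as a command sketch, with a placeholder filename:)

$ time vblank_mode=0 ffplay -noframedrop -autoexit -vf setpts=PTS/2 black_2160p30.mp4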

So the filter-generated stream displays significantly faster, but is still slowed at high resolution*fps even though less than one CPU core is being used.

It seems reasonable to discard single-threading as the cause.

The magnitude of the effect on 1080p 60 fps suggests it could have the same cause: some kind of bottleneck associated with high resolution*fps output from ffplay, even though there's no slowdown at 2160p ('4K') 15 fps. This bottleneck apparently exists on my system now, even though it didn't before I upgraded my (GNU/)Linux distribution. It does not seem likely that this additional problem (on top of unwanted vsync) is within ffplay/libav, even if it may be possible to work around it in some cases with inefficient filter usage.

Comparisons: vlc still plays 1080p 60 fps video without reporting any dropped frames. It also seems I might now be unable to watch 1080p 60 fps on YouTube, but that might be due to an unrelated bug in compiz or the Linux kernel that caused high CPU.


In case someone has an interest in fixing this and it's relevant, here's build information:

ffplay version 3.3.4-2 Copyright (c) 2003-2017 the FFmpeg developers

built with gcc 7 (Ubuntu 7.2.0-8ubuntu2)
configuration: --prefix=/usr --extra-version=2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil      55. 58.100 / 55. 58.100
libavcodec     57. 89.100 / 57. 89.100
libavformat    57. 71.100 / 57. 71.100
libavdevice    57.  6.100 / 57.  6.100
libavfilter     6. 82.100 /  6. 82.100
libavresample   3.  5.  0 /  3.  5.  0
libswscale      4.  6.100 /  4.  6.100
libswresample   2.  7.100 /  2.  7.100
libpostproc    54.  5.100 / 54.  5.100

A non-rigorous test found that vlc took 6.8 user, 0.5 sys seconds of CPU to play the test file; ffplay took 6 user, 2.5 sys. vlc fairly consistently reports that it displayed only 575 of 600 decoded frames, but the number dropped seems constant after the first few seconds, going from 190/165 to 600/575. I have 'Drop late frames' unchecked in vlc's options, although I'm not sure it does anything (it doesn't prevent drops after '5 seconds of late video'). I note that cehoyos's system had 'sys' time of less than 1/4 of the user time, at ~0.5 and ~0.12, while my system takes ~0.9 user and ~0.9 sys to play the same file.

It's possible there was a change to my system, or even to the build options used for Ubuntu's ffplay, that makes it output video in a way that uses more system CPU and has lower throughput. I haven't yet tested non-compiz window managers to see whether compiz is responsible for the extra system CPU usage.

Last edited 6 years ago by Misaki

comment:5 by Misaki, 6 years ago

Last edited 6 years ago by Misaki