Opened 3 years ago

Closed 22 months ago

#3079 closed defect (fixed)

libavfilter caches and drops frames with multiple desynched input streams

Reported by: richardpl
Owned by:
Priority: normal
Component: ffmpeg
Version: git-master
Keywords: bounty
Cc: michael
Blocked By:
Blocking:
Reproduced by developer: yes
Analyzed by developer: yes

Description

Problem:

A filter takes 3 input streams and returns 1 output stream.

If the first input generates frames faster than the 2nd and/or 3rd,
libavfilter pointlessly starts to queue frames from the 1st input.

Solution:
Do not queue frames, and thus do not request new frames, when frames from the other inputs are not available.

Problem:
A filter returns multiple frames from .filter_frame. libavfilter caches
all such frames in memory.

Solution:
Do not cache frames in memory; forward them to the next link immediately.

Change History (21)

comment:1 in reply to: ↑ description ; follow-up: Changed 3 years ago by Cigaes

Replying to richardpl:

Solution:
Do not queue frames, and thus do not request new frames, when frames from the other inputs are not available.

That is already how lavfi works.

Solution:
Do not cache frames in memory; forward them to the next link immediately.

Idem, that is already how lavfi works.

Do you have actual issues or are you just spreading FUD?

comment:2 Changed 3 years ago by cehoyos

How much are you willing to pay?

comment:3 Changed 3 years ago by richardpl

What is minimal you will accept?

comment:4 in reply to: ↑ 1 ; follow-up: Changed 3 years ago by richardpl

Replying to Cigaes:

Replying to richardpl:

Solution:
Do not queue frames, and thus do not request new frames, when frames from the other inputs are not available.

That is already how lavfi works.

I cannot get it to work with the displace filter: take the command from the single example in
its doxy, which has two geq filters. If I simplify the geq filters, the bufqueue overflow
does not happen.

Solution:
Do not cache frames in memory; forward them to the next link immediately.

Idem, that is already how lavfi works.

I have a zopan filter in a personal repo that takes one frame and returns many.
All the frames are kept in memory.

Do you have actual issues or are you just spreading FUD?

I have one more issue (reported in another bug): pts resetting when h/w/format changes.

And this is not FUD; these are real issues encountered in real-life usage of lavfi.

comment:5 in reply to: ↑ 4 ; follow-ups: Changed 3 years ago by Cigaes

Replying to richardpl:

I cannot get it to work with the displace filter: take the command from the single example in
its doxy, which has two geq filters. If I simplify the geq filters, the bufqueue overflow
does not happen.

Missing full command line and console output. I thought you were familiar enough with this project not to waste developers' time like that.

I have a zopan filter in a personal repo that takes one frame and returns many.
All the frames are kept in memory.

I can only assume that this "zopan" filter does not respect the design guidelines.

comment:6 Changed 3 years ago by compn

Example command lines will help developers reproduce bugs and create fixes.

comment:7 in reply to: ↑ 5 Changed 3 years ago by ubitux

Replying to Cigaes:

I have a zopan filter in a personal repo that takes one frame and returns many.
All the frames are kept in memory.

I can only assume that this "zopan" filter does not respect the design guidelines.

Speaking of this, the doc/filter_design.txt file definitely needs some updates, and hints about those design guidelines.

comment:8 Changed 3 years ago by gjdfgh

I suggest switching to vapoursynth and abandoning libavfilter.

comment:9 in reply to: ↑ 5 Changed 3 years ago by richardpl

Replying to Cigaes:

Replying to richardpl:

I cannot get it to work with the displace filter: take the command from the single example in
its doxy, which has two geq filters. If I simplify the geq filters, the bufqueue overflow
does not happen.

Missing full command line and console output. I thought you were familiar enough with this project not to waste developers' time like that.

Even if the filter is not yet in master?

I have a zopan filter in a personal repo that takes one frame and returns many.
All the frames are kept in memory.

I can only assume that this "zopan" filter does not respect the design guidelines.

Even if you never looked at actual code?

comment:10 Changed 3 years ago by Cigaes

If you cannot reproduce any issue with official lavfi, but only with third-party patched versions, then please close this bug report as invalid and ask the third party that provided the patched version for help.

comment:11 Changed 3 years ago by richardpl

  • Status changed from new to open

I will not do that.

comment:12 Changed 3 years ago by richardpl

ffmpeg -i matrixbench_mpeg2.mpg -f lavfi -i nullsrc,geq='r=128+50*sin(2*PI*X/800+T):g=128+50*sin(2*PI*X/800+T):b=128+50*sin(2*PI*X/800+T)' -lavfi overlay -y out.nut

ffmpeg version N-57391-g4803ada Copyright (c) 2000-2013 the FFmpeg developers

built on Oct 24 2013 12:03:55 with FreeBSD clang version 3.3 (tags/RELEASE_33/final 183502) 20130610
configuration: --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/lib --as=clang --cc=clang --disable-debug --disable-ffplay --disable-ffserver --disable-indevs --disable-outdevs --disable-postproc --disable-static --enable-gpl --enable-indev=lavfi --enable-indev=oss --enable-indev=x11grab --enable-nonfree --enable-openssl --enable-outdev=oss --enable-shared --enable-stripping --enable-x11grab --enable-libfreetype --enable-libx264 --enable-libxvid --enable-libmp3lame --enable-ladspa --mandir=/usr/local/man --samples=../fate-suite
libavutil 52. 47.101 / 52. 47.101
libavcodec 55. 38.101 / 55. 38.101
libavformat 55. 19.104 / 55. 19.104
libavdevice 55. 5.100 / 55. 5.100
libavfilter 3. 89.100 / 3. 89.100
libswscale 2. 5.101 / 2. 5.101
libswresample 0. 17.104 / 0. 17.104

[NULL @ 0x2a063800] start time is not set in estimate_timings_from_pts
Input #0, mpeg, from 'matrixbench_mpeg2.mpg':

Duration: 00:01:01.20, start: 0.220000, bitrate: 5944 kb/s

Stream #0:0[0x1bf]: Data: dvd_nav_packet
Stream #0:1[0x1e0]: Video: mpeg2video (Main), yuv420p(tv, bt470bg), 720x576 [SAR 16:15 DAR 4:3], max. 11421 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:2[0x1c0]: Audio: mp2, 48000 Hz, stereo, s16p, 384 kb/s

Input #1, lavfi, from 'nullsrc,geq=r=128+50*sin(2*PI*X/800+T):g=128+50*sin(2*PI*X/800+T):b=128+50*sin(2*PI*X/800+T)':

Duration: N/A, start: 0.000000, bitrate: N/A

Stream #1:0: Video: rawvideo (G3[0][8] / 0x8003347), gbrp, 320x240 [SAR 1:1 DAR 4:3], 25 tbr, 25 tbn, 25 tbc

Output #0, nut, to 'out.nut':

Metadata:

encoder : Lavf55.19.104
Stream #0:0: Video: mpeg4 (FMP4 / 0x34504D46), yuv420p, 720x576 [SAR 16:15 DAR 4:3], q=2-31, 200 kb/s, 51200 tbn, 25 tbc
Stream #0:1: Audio: mp3 (libmp3lame) (U[0][0][0] / 0x0055), 48000 Hz, stereo, s16p

Stream mapping:

Stream #0:1 (mpeg2video) -> overlay:main (graph 0)
Stream #1:0 (rawvideo) -> overlay:overlay (graph 0)
overlay (graph 0) -> Stream #0:0 (mpeg4)
Stream #0:2 -> #0:1 (mp2 -> libmp3lame)

Press [q] to stop, ? for help
[Parsed_overlay_0 @ 0x2a00d640] [framesync @ 0x2a003324] Buffer queue overflow, dropping.

Last message repeated 18 times

[Parsed_overlay_0 @ 0x2a00d640] [framesync @ 0x2a003324] Buffer queue overflow, dropping.

Last message repeated 15 times

[Parsed_overlay_0 @ 0x2a00d640] [framesync @ 0x2a003324] Buffer queue overflow, dropping.

Last message repeated 16 times

[Parsed_overlay_0 @ 0x2a00d640] [framesync @ 0x2a003324] Buffer queue overflow, dropping.

Last message repeated 14 times

frame= 20 fps=6.0 q=13.7 Lsize= 156kB time=00:00:04.60 bitrate= 278.1kbits/s
video:83kB audio:72kB subtitle:0 global headers:0kB muxing overhead 0.730014%

comment:13 Changed 3 years ago by saste

  • Summary changed from libavfilter sucks to libavfilter caches and drops frames with multiple desynched input streams

comment:14 Changed 3 years ago by michael

  • Priority changed from critical to normal

Do not abuse trac priorities; critical is about security & data loss, like a faulty rm in a Makefile.

comment:15 Changed 3 years ago by Cigaes

  • Analyzed by developer set
  • Component changed from avfilter to FFmpeg

The problem actually has nothing to do with lavfi: lavfi requests frames on the correct input. Unfortunately, ffmpeg decides to provide frames on the other one.

The reason is the multithreaded handling of inputs in ffmpeg: geq is slower than the rest of the processing, so ffmpeg frequently tries to read a frame from it when it is not ready and reverts to reading from the file. This behaviour may be useful when reading from network streams, but it is harmful in other cases.

Fixing it without losing the benefits of multithreaded inputs will be tricky.

Your bug reports suck.

comment:16 Changed 3 years ago by richardpl

Issue about caching all frames is still there.

comment:17 Changed 3 years ago by Cigaes

The command-line to reproduce it is not.

comment:18 Changed 3 years ago by michael

If I understand the issue correctly, then a possible solution might be:

  • Add a source identifier to AVFrame, so that at the filtergraph's output, as well as in every fifo filter, we know precisely from which input how many frames are stored. These source identifiers would have to be lists, as filters like overlay produce frames that combine 2 sources.
  • Give fifos some means to export these statistics so that we can easily check how many frames from input X are in memory for a filtergraph.
  • Use these statistics to guide the inputs, for example stopping a thread when many more of its frames are held in memory than another thread's.

With this, inputs can run in multiple threads and at different speeds as long as it doesn't cause frames to accumulate in buffers; when such accumulation occurs due to interconnects in a complex filtergraph, the affected input threads would get paused.

The alternative would be to disallow inputs from running at different speeds and pause threads that get too far ahead. But this could cause problems if timestamps get changed inside the filter graph, or if future filters somehow send seek commands to the source ...

comment:19 Changed 3 years ago by michael

  • Cc michael added

add myself to CC

comment:20 Changed 3 years ago by Cigaes

I believe you misunderstood part of the problem. lavfi already has the required infrastructure to select the best input to keep the data flowing. The problem is that ffmpeg does not respect that selection.

More precisely, here is what happens:

  • Try to overlay 0:v on top of 1:v, muxed with audio from 1:a.
  • ffmpeg decides it needs a video packet, so it requests a packet from the overlay filter graph.
  • overlay needs a frame on 0:v to progress; it does not have one, so it marks that input as needed.
  • ffmpeg tries to read from 0:v, XXX but 0:v is not ready.
  • ffmpeg marks the output video stream as temporarily unavailable and moves to the next stream: audio.
  • ffmpeg wants a frame for 1:a, so it reads from 1, possibly getting a frame for 1:v instead.
  • Repeat.

The problem is in XXX: if it happens occasionally due to thread-scheduling randomness, it is not a problem, because it is absorbed by the various buffers and ffmpeg catches up later.

But if XXX happens repeatedly because 0 is slower than 1, then it becomes a problem, because decoded frames from 1:v will accumulate in overlay's input.

But it really has nothing to do with lavfi and multiple inputs; the exact same thing happens if you just want to mux 0:v with 1:a, except this time it's encoded audio packets that accumulate in the muxer's queue, and that eats much less memory. But still, try this:

./ffmpeg_g -f lavfi -i testsrc=s=10240x7680,scale=320:240 -f s16le -i /dev/zero -f framecrc -
Error while decoding stream #1:0: Cannot allocate memory

IMHO, the good solution in this case is to stop being greedy: if ffmpeg decided it needs to read from input #0, then let it wait for input #0; the demuxing threads are there to take care of input #1 in the meantime if necessary.

I have sent a patch to that effect, with a decent user interface, to the devel mailing list; I suggest we continue the discussion there.

comment:21 Changed 22 months ago by cehoyos

  • Resolution set to fixed
  • Status changed from open to closed

Fixed by Nicolas in 299a5687.
The memory consumption was not reproducible after 3adb5f8d.
