Opened 7 years ago
Closed 7 years ago
#6682 closed defect (invalid)
hstack repeats input frames
| Reported by: | Damon Maria | Owned by: | |
|---|---|---|---|
| Priority: | normal | Component: | undetermined |
| Version: | git-master | Keywords: | hstack |
| Cc: | damon@mahuika.co.nz | Blocked By: | |
| Blocking: | | Reproduced by developer: | no |
| Analyzed by developer: | no | | |
Description
Using the hstack filter in code I was having a strange problem: after pushing a single frame into each input buffer I would sometimes get 2 frames out of the buffersink. Also, in the resulting video, some parts would 'pause' between frames, as though the same image was used across 2 frames for one of the hstack inputs but not the other.
I then switched to command line to try hstack and get the same result. I dumped the output to individual image files so I could make sure it wasn't an issue with my viewer.
How to reproduce:
$ ffmpeg -i 2017-08-31T220000-1.mp4 -i 2017-08-31T220000-2.mp4 -filter_complex "[0:v][1:v]hstack=inputs=2[v]" -map "[v]" -t 10 filename%03d.jpg
ffmpeg version git-2017-09-21-6f15f1c Copyright (c) 2000-2017 the FFmpeg developers
  built with Apple LLVM version 9.0.0 (clang-900.0.37)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/HEAD-6f15f1c --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-opencl --enable-videotoolbox --disable-lzma --enable-nonfree
  libavutil      55. 76.100 / 55. 76.100
  libavcodec     57.106.101 / 57.106.101
  libavformat    57. 82.101 / 57. 82.101
  libavdevice    57.  8.101 / 57.  8.101
  libavfilter     6.105.100 /  6.105.100
  libavresample   3.  6.  0 /  3.  6.  0
  libswscale      4.  7.103 /  4.  7.103
  libswresample   2.  8.100 /  2.  8.100
  libpostproc    54.  6.100 / 54.  6.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '2017-08-31T220000-1.mp4':
  Metadata:
    major_brand     : isml
    minor_version   : 512
    compatible_brands: piff
    title           : rtsp://root:root@192.168.14.104/axis-media/media.amp?camera=1
    encoder         : Lavf57.71.100
    comment         : rtsp-server
  Duration: 00:59:59.79, start: 0.000000, bitrate: 263 kb/s
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuvj420p(pc, bt709), 800x600 [SAR 1:1 DAR 4:3], 263 kb/s, 4 fps, 4 tbr, 10000k tbn, 20000k tbc (default)
    Metadata:
      handler_name    : VideoHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '2017-08-31T220000-2.mp4':
  Metadata:
    major_brand     : isml
    minor_version   : 512
    compatible_brands: piff
    title           : rtsp://root:root@192.168.14.104/axis-media/media.amp?camera=2
    encoder         : Lavf57.71.100
    comment         : rtsp-server
  Duration: 00:59:59.79, start: 0.000000, bitrate: 222 kb/s
    Stream #1:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuvj420p(pc, bt709), 800x600 [SAR 1:1 DAR 4:3], 222 kb/s, 4 fps, 4 tbr, 10000k tbn, 20000k tbc (default)
    Metadata:
      handler_name    : VideoHandler
Stream mapping:
  Stream #0:0 (h264) -> hstack:input0
  Stream #1:0 (h264) -> hstack:input1
  hstack -> Stream #0:0 (mjpeg)
Press [q] to stop, [?] for help
Output #0, image2, to '/Users/damon/Downloads/cmd-line-hstack-2/filename%03d.jpg':
  Metadata:
    major_brand     : isml
    minor_version   : 512
    compatible_brands: piff
    title           : rtsp://root:root@192.168.14.104/axis-media/media.amp?camera=1
    comment         : rtsp-server
    encoder         : Lavf57.82.101
    Stream #0:0: Video: mjpeg, yuvj420p(pc), 1600x600 [SAR 1:1 DAR 8:3], q=2-31, 200 kb/s, 4 fps, 4 tbn, 4 tbc (default)
    Metadata:
      encoder         : Lavc57.106.101 mjpeg
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
Past duration 0.999992 too large
frame=   40 fps=0.0 q=24.8 Lsize=N/A time=00:00:10.00 bitrate=N/A dup=0 drop=27 speed=  35x
video:2406kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
As I flick through the resulting images, sections of the frames (corresponding to the inputs to hstack) sometimes clearly remain the same between frames.
Other points suggesting that this is a real problem:
- I get the same result in code and on the command line
- Using the -t 10 option to only dump 10 seconds (the input videos are an hour long), the 40th image output (the videos are only 4 FPS) is from an earlier time in the video for the hstack output than when I dump one of the individual videos to files in the same way (without hstack). That is what would happen if images were being duplicated.
- If I run with the null muxer then it outputs errors that (to my uneducated eye) seem to indicate exactly this problem is happening:
$ ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=2" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=3" -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[v]" -map "[v]" -t 60 -f null -
...
[null @ 0xbe19849b80] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 155 >= 155
[null @ 0xbe19849b80] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 156 >= 156
[null @ 0xbe19849b80] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 157 >= 157
    Last message repeated 1 times
[null @ 0xbe19849b80] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 158 >= 158
    Last message repeated 1 times
[null @ 0xbe19849b80] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 159 >= 159
    Last message repeated 1 times
...
- I've tried with both files and RTSP streams as input (my intention is to use RTSP in the end)
Attachments (1)
Change History (22)
comment:1 by , 7 years ago
Cc: added
comment:2 by , 7 years ago
- The error doesn't seem to happen if the 2 input videos are identical
- I've tried other videos as input and it seems to happen less often, but it still does happen
- The "non monotonically increasing dts" error definitely seems to track where the duplicated frames appear
comment:3 by , 7 years ago
Priority: important → normal
Do you think this is a regression?
Can you only reproduce the issue with rtsp-to-mov recordings you made with FFmpeg using -codec copy (which are likely invalid), or also with other samples?
comment:4 by , 7 years ago
The video files I am having the most problems with were recorded with the copy codec from RTSP. But I am also having problems with other video files (that I did not record) and with the direct RTSP streams.
comment:6 by , 7 years ago
Looks like the input is VFR. Use the setpts filter to reset all pts for each input to hstack if you expect a one-to-one relation between input frames and output frames.
comment:7 by , 7 years ago
OK. I can confirm using setpts solves the problem:
ffmpeg -i 2017-08-31T220000-1.mp4 -i 2017-08-31T220000-2.mp4 -filter_complex "[0:v]setpts=N/(4*TB)[a];[1:v]setpts=N/(4*TB)[b];[a][b]hstack=[v]" -map "[v]" -t 10 /Users/damon/Downloads/cmd-line-hstack-2/filename%03d.jpg
This produces frames without skipping, and when using the null muxer it does not give any "non monotonically increasing dts" warnings.
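To make the fix concrete, here is a small sketch (in Python, not FFmpeg code) of what the expression setpts=N/(4*TB) evaluates to. In the setpts filter, N is the frame number and TB the stream timebase, so every frame is re-stamped onto an exact 4 fps grid and the original (jittery) timestamps are discarded; the 90000 timebase below is taken from the tbn shown in the console output above.

```python
# Model of the setpts=N/(4*TB) expression: frame N is re-stamped
# to pts = N / (fps * timebase) ticks, i.e. an exact CFR grid.
from fractions import Fraction

TB = Fraction(1, 90000)   # 90k tbn, as seen in the console output above

def setpts(n, fps=4, tb=TB):
    """pts (in timebase ticks) for frame number n under setpts=N/(fps*TB)."""
    return int(Fraction(n) / (fps * tb))

# One frame every 22500 ticks = 0.25 s in a 1/90000 timebase:
print([setpts(n) for n in range(4)])   # [0, 22500, 45000, 67500]
```

This is why the filter removes the duplication: after re-stamping, both inputs land on identical pts values, so framesync never has to repeat a frame to fill a gap.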
The problem is that this breaks my use-case. The camera these 2 videos were originally dumped from is an AXIS Q3708: https://www.axis.com/global/en/products/axis-q3708-pve The camera has 3 sensors that, when hstack'ed together, produce the 180º image. Each sensor produces its own RTSP stream, so I need to sync and combine the streams in my client. The reason I'm trying to use ffmpeg in the first place (rather than OpenCV's VideoCapture) is that I was hoping ffmpeg would do this syncing for me. If I just open the streams and combine them myself, and the network causes slightly different start times, then my images are misaligned. I have used Wireshark to sniff the RTCP packets and can see the camera is correctly sending NTP time in the Sender Reports. I presume that by using setpts that timing info is replaced, and different start times or dropped frames would then cause mismatched frames?
I understand this has moved from a bug report to a more general ffmpeg question. I'm happy to move this over to Stack Overflow if that's a better place.
comment:8 by , 7 years ago
Please provide command line and complete, uncut console output for re-encoding one rtsp stream.
follow-up: 10 comment:9 by , 7 years ago
cehoyos: I'm not certain what you're asking for. If it's the command line and console output from when I saved the RTSP streams to file (like the files I've been using above), then it's:
$ ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -c:v copy -map 0:0 -t 10 -metadata title="rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" 2017-09-22T231000-1.mp4
ffmpeg version 3.3.4-1~16.04.york0 Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
  configuration: --prefix=/usr --extra-version='1~16.04.york0' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 58.100 / 55. 58.100
  libavcodec     57. 89.100 / 57. 89.100
  libavformat    57. 71.100 / 57. 71.100
  libavdevice    57.  6.100 / 57.  6.100
  libavfilter     6. 82.100 /  6. 82.100
  libavresample   3.  5.  0 /  3.  5.  0
  libswscale      4.  6.100 /  4.  6.100
  libswresample   2.  7.100 /  2.  7.100
  libpostproc    54.  5.100 / 54.  5.100
Input #0, rtsp, from 'rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1':
  Metadata:
    title           : Session streamed with GStreamer
    comment         : rtsp-server
  Duration: N/A, start: 0.249978, bitrate: N/A
    Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 800x600 [SAR 1:1 DAR 4:3], 4 fps, 4 tbr, 90k tbn, 180k tbc
Output #0, mp4, to '2017-09-22T231000-1.mp4':
  Metadata:
    comment         : rtsp-server
    title           : rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1
    encoder         : Lavf57.71.100
    Stream #0:0: Video: h264 (Main) ([33][0][0][0] / 0x0021), yuvj420p(pc, bt709, progressive), 800x600 [SAR 1:1 DAR 4:3], q=2-31, 4 fps, 4 tbr, 90k tbn, 90k tbc
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x916602e20] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mp4 @ 0x916602e20] Non-monotonous DTS in output stream 0:0; previous: 0, current: 0; changing to 1. This may result in incorrect timestamps in the output file.
frame=   42 fps= 11 q=-1.0 Lsize=     249kB time=00:00:10.00 bitrate= 204.3kbits/s speed=2.67x
video:248kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.541773%
I've been recording videos with a script, so I hadn't noticed the warnings about "Timestamps are unset" and "Non-monotonous DTS in output stream 0:0". I don't remember them from when I was testing the script. Does that point to a problem? As I said above, I have Wireshark'ed the RTP and RTCP packets and they contain random-starting, monotonic sequence numbers, and the Sender Reports contain NTP timestamps matching the pts in the RTP stream.
Note: my testing prior to this has been on my dev machine using the git master version of ffmpeg. I've been dumping the videos from the machine the camera is connected to, which is (now, just upgraded) ffmpeg 3.3.4. Also note, the videos I used above were recorded with 3.3.3. I could compile ffmpeg from a git clone on the camera machine too if you would prefer.
comment:10 by , 7 years ago
Replying to damonmaria:
cehoyos: I'm not certain what you're asking for. If it's the command line and console when I saved the RTSP streams to file like the files I've been using above then it's:
As said, this command line produces an invalid output file, but that would not explain why stacking the rtsp frames does not work.
Duration: N/A, start: 0.249978, bitrate: N/A
The start time is the (original) reason for the issue you see; try -timestamps abs.
follow-ups: 14 15 comment:11 by , 7 years ago
Adding -timestamps abs didn't seem to affect it:
$ ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -c:v copy -map 0:0 -t 10 -metadata title="rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -timestamps abs 2017-09-22T231000-1.mp4
ffmpeg version 3.3.4-1~16.04.york0 Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
  configuration: --prefix=/usr --extra-version='1~16.04.york0' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 58.100 / 55. 58.100
  libavcodec     57. 89.100 / 57. 89.100
  libavformat    57. 71.100 / 57. 71.100
  libavdevice    57.  6.100 / 57.  6.100
  libavfilter     6. 82.100 /  6. 82.100
  libavresample   3.  5.  0 /  3.  5.  0
  libswscale      4.  6.100 /  4.  6.100
  libswresample   2.  7.100 /  2.  7.100
  libpostproc    54.  5.100 / 54.  5.100
Input #0, rtsp, from 'rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1':
  Metadata:
    title           : Session streamed with GStreamer
    comment         : rtsp-server
  Duration: N/A, start: 0.250000, bitrate: N/A
    Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 800x600 [SAR 1:1 DAR 4:3], 4 fps, 4 tbr, 90k tbn, 180k tbc
Output #0, mp4, to '2017-09-22T231000-1.mp4':
  Metadata:
    comment         : rtsp-server
    title           : rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1
    encoder         : Lavf57.71.100
    Stream #0:0: Video: h264 (Main) ([33][0][0][0] / 0x0021), yuvj420p(pc, bt709, progressive), 800x600 [SAR 1:1 DAR 4:3], q=2-31, 4 fps, 4 tbr, 90k tbn, 90k tbc
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x15bd592b80] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mp4 @ 0x15bd592b80] Non-monotonous DTS in output stream 0:0; previous: 0, current: 0; changing to 1. This may result in incorrect timestamps in the output file.
frame=   41 fps= 12 q=-1.0 Lsize=     533kB time=00:00:09.75 bitrate= 448.0kbits/s speed=2.79x
video:532kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.253399%
comment:12 by , 7 years ago
I thought I had better check whether -timestamps abs led to the start time being a nice round 0.250000, compared to the slightly-off value in the previous comment. So I ran that again with the timestamps option and got: Duration: N/A, start: 0.249978, bitrate: N/A. Looking through my console history I can see it's always hovering around that value, only occasionally being exactly a quarter second.
comment:13 by , 7 years ago
After reading some more of the docs I also tried adding the -vsync cfr option with the hstack, as it sounded like it would 'smooth' out any not-quite-exactly-0.250000 times. This seems to stop the "non monotonically increasing dts" errors when using the null muxer. However, I then get lots of RTSP and decoding errors, which is strange as I don't get them when using copy to dump to file and decoding those files later.
comment:14 by , 7 years ago
comment:15 by , 7 years ago
Replying to damonmaria:
$ ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -c:v copy -map 0:0 -t 10 -metadata title="rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -timestamps abs 2017-09-22T231000-1.mp4
And please stop using -vcodec copy with rtsp input and mov output: the resulting file is invalid, so you cannot use it to show an issue. Since you want to use a video filter, you have to re-encode anyway.
comment:16 by , 7 years ago
I was originally using -vcodec copy because images resulting from the saved videos will be used for AI training. I didn't want to re-encode and have the images the model was trained on not match the direct output of the camera's RTSP stream (which the model will be predicting on). I tried re-encoding losslessly but obviously that creates huge files.
I presume from what you've said above that to properly test this I should be doing the hstack directly on the RTSP streams, not on files I've saved using -vcodec copy. Sure, I'll do that from now on. When originally posting this issue I did test hstack directly on the RTSP streams. The problem there is that I get lots of RTSP and decode errors which don't happen when I use -vcodec copy and then hstack the files later. It's not that the machine can't decode fast enough, since running hstack on the saved files runs at 20x speed.
Using -timestamps as an input option doesn't work for me:
$ ffmpeg -timestamps abs -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -t 10 2017-09-22T231000-3.mp4
...
Option timestamps not found.
I have just tried saving a single RTSP stream (with re-encoding) and am getting "Past duration XXX too large" errors. Maybe this is indicative of the root problem:
$ ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -t 10 -vsync cfr 2017-09-22T231000-4.mp4
ffmpeg version 3.3.4-1~16.04.york0 Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
  configuration: --prefix=/usr --extra-version='1~16.04.york0' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 58.100 / 55. 58.100
  libavcodec     57. 89.100 / 57. 89.100
  libavformat    57. 71.100 / 57. 71.100
  libavdevice    57.  6.100 / 57.  6.100
  libavfilter     6. 82.100 /  6. 82.100
  libavresample   3.  5.  0 /  3.  5.  0
  libswscale      4.  6.100 /  4.  6.100
  libswresample   2.  7.100 /  2.  7.100
  libpostproc    54.  5.100 / 54.  5.100
Input #0, rtsp, from 'rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1':
  Metadata:
    title           : Session streamed with GStreamer
    comment         : rtsp-server
  Duration: N/A, start: 0.249989, bitrate: N/A
    Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 800x600 [SAR 1:1 DAR 4:3], 4 fps, 4 tbr, 90k tbn, 180k tbc
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
No pixel format specified, yuvj420p for H.264 encoding chosen. Use -pix_fmt yuv420p for compatibility with outdated media players.
[libx264 @ 0xaf0a0c90c0] using SAR=1/1
[libx264 @ 0xaf0a0c90c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
[libx264 @ 0xaf0a0c90c0] profile High, level 3.1
[libx264 @ 0xaf0a0c90c0] 264 - core 148 r2795 aaa9aa8 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=4 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to '2017-09-22T231000-4.mp4':
  Metadata:
    title           : Session streamed with GStreamer
    comment         : rtsp-server
    encoder         : Lavf57.71.100
    Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuvj420p(pc), 800x600 [SAR 1:1 DAR 4:3], q=-1--1, 4 fps, 16384 tbn, 4 tbc
    Metadata:
      encoder         : Lavc57.89.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Past duration 0.999947 too large
    Last message repeated 2 times
Past duration 0.999992 too large
Past duration 0.999947 too large
    Last message repeated 3 times
Past duration 0.999992 too large
Past duration 0.999947 too large
Past duration 0.999992 too large
Past duration 0.999947 too large
    Last message repeated 2 times
Past duration 0.999992 too large
    Last message repeated 2 times
Past duration 0.999947 too large
...
comment:17 by , 7 years ago
Sorry for the useless suggestion of timestamps, I mixed something up. If the rtsp server you use cannot provide wall-clock timestamps, there is no way you can synchronize the streams. (The same is true in case the server does provide correct timestamps but FFmpeg cannot read them.)
comment:18 by , 7 years ago
No problem, ffmpeg is pretty new to me, so any help with getting to the bottom of this is invaluable.
Where to from here with the hstack issue? Do you suspect that the timestamps the RTSP server is producing are invalid and that's why hstack is generating extra (duplicated) frames? I have a pcap file of the RTCP and RTP packets of 2 streams at once and the NTP timestamps in the Sender Reports are correct, and match between the different streams (since the one camera produces all 3 streams).
by , 7 years ago
Attachment: 192.168.13.104-2-streams.pcap added
Packet capture of RTCP and RTP for 2 simultaneous streams from one camera
comment:19 by , 7 years ago
Trying to use ffprobe to diagnose further, I've noticed that pkt_dts / pkt_pts can be off by +/- 3 ticks. For example (video is 4 FPS):
pkt_pts=1012502
pkt_pts_time=0:00:11.250022
pkt_dts=1012502
pkt_dts_time=0:00:11.250022
best_effort_timestamp=1012502
best_effort_timestamp_time=0:00:11.250022
If ffmpeg is trying to combine 2 streams and the pts is off by this tiny fraction, would that cause the issues I'm seeing?
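As a sanity check on the magnitudes involved (a back-of-the-envelope calculation, not FFmpeg behaviour), the deviation shown above is tiny in absolute terms: at the 90k tbn seen in the console logs, 3 ticks is about 33 microseconds, a small fraction of a percent of the 0.25 s frame interval. The problem is not the size of the jitter but that the two streams' timestamp grids no longer line up exactly.

```python
# Back-of-the-envelope check of the pts jitter seen in the ffprobe output.
# Assumes the 90k tbn shown in the console logs above.

TIMEBASE = 90000       # ticks per second
FRAME_INTERVAL = 0.25  # seconds, at 4 fps

def ticks_to_seconds(ticks):
    """Convert a pts deviation in timebase ticks to seconds."""
    return ticks / TIMEBASE

jitter = ticks_to_seconds(3)           # a deviation of +/-3 ticks
print(jitter)                          # ~3.33e-05 seconds
print(jitter / FRAME_INTERVAL * 100)   # ~0.013 percent of one frame interval
```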
comment:20 by , 7 years ago
OK. Looking through the code for rtpdec.c, would this be the reason my streams are not syncing:
static void finalize_packet(RTPDemuxContext *s, AVPacket *pkt, uint32_t timestamp)
{
    ...
    if (s->last_rtcp_ntp_time != AV_NOPTS_VALUE && s->ic->nb_streams > 1) {
    ...
The code inside that if is the only place the NTP timestamp from the RTP stream is used. But if the RTPDemuxContext's ic (the AVFormatContext) is not shared with the other RTSP inputs, then nb_streams will be 1 (each RTSP input only has a single video stream), and so the multiple RTSP inputs cannot be synced with each other even though they are using the same NTP timestamps.
By employing various ffmpeg options I've managed to solve all other issues above except for the actual syncing of the inputs.
comment:21 by , 7 years ago
Resolution: → invalid
Status: new → closed
hstack uses framesync, which will duplicate input frames to get the same or similar pts for the output. If the inputs are VFR, this gives confusing output.
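As a rough illustration of that closing explanation, here is a simplified model (in Python, and not FFmpeg's actual framesync implementation): suppose the output is produced at the union of both inputs' timestamps, with each input contributing its most recent frame at each output time. With jittered (VFR) timestamps on one input, you get more output frames than input frames, and duplicates appear exactly where one input lags the other. The pts values below are hypothetical, chosen to mimic the +/-3-tick jitter reported earlier in the ticket; both inputs are assumed to start at pts 0.

```python
# Simplified model of framesync-style stacking: for each output timestamp,
# each input contributes its most recent (nearest-past) frame.

def stack_schedule(pts_a, pts_b):
    """Return (output_pts, frame_index_a, frame_index_b) tuples for the
    union of both inputs' timestamps. Inputs must start at the same pts."""
    out = []
    for t in sorted(set(pts_a) | set(pts_b)):
        ia = max(i for i, p in enumerate(pts_a) if p <= t)
        ib = max(i for i, p in enumerate(pts_b) if p <= t)
        out.append((t, ia, ib))
    return out

# Input A sits on a clean 4 fps grid (22500-tick steps at 90k timebase);
# input B carries a few ticks of jitter, as seen in the ffprobe output.
a = [0, 22500, 45000, 67500]
b = [0, 22503, 44998, 67501]
for t, ia, ib in stack_schedule(a, b):
    print(t, ia, ib)
# 7 output slots are produced for only 4 input frames per stream;
# e.g. at t=22500, input B repeats its frame 0 because frame 1 (pts 22503)
# has not "arrived" yet. This is the duplication the reporter observed.
```

Re-stamping both inputs with setpts (as in comment:7) collapses the two grids onto identical timestamps, so the union has exactly 4 entries and no frame repeats.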