ffmpeg does not handle HTTP read errors (e.g. from cloud storage)
Reported by: Derek Prestegard
Owned by: (none)
Blocking: (none)
Reproduced by developer: no
Analyzed by developer: no
Summary of the bug:
When reading a large source file over HTTPS (e.g. via a signed URL on AWS S3 or similar object storage), ffmpeg does not handle HTTP errors gracefully. Occasional transient errors are expected when using services like S3, and clients are expected to handle them with a retry mechanism.
ffmpeg appears to interpret such an error as the end of the input file: when an error is encountered, it simply stops encoding and finalizes the output, so the output file is silently truncated.
Ideally ffmpeg would retry when it hits a transient error. This would enable reliable processing of large files in cloud storage without resorting to a "split and stitch" or "chunked encoding" workflow; those approaches are feasible and widely used, but they add complexity and can affect quality at segment boundaries. A partial workaround using existing options is sketched below.
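For what it's worth, builds that include the HTTP protocol's reconnect options can already paper over some network-level drops. This is only a sketch: the bucket name, object key, and signed query string are placeholders, and it assumes the server honors Range requests (S3 does), since the reconnect logic resumes from the current read offset:

    ffmpeg \
        -reconnect 1 \
        -reconnect_delay_max 30 \
        -i "https://example-bucket.s3.amazonaws.com/large-input.mov?X-Amz-Signature=..." \
        -c:v libx264 -c:a aac \
        output.mp4

-reconnect 1 asks the http protocol to reopen the connection (issuing a Range request at the current offset) after a disconnect before EOF, and -reconnect_delay_max gives up once the reconnect backoff exceeds that many seconds. Note these are input options and must appear before -i. This does not cover all of the failure modes described here (e.g. HTTP error responses on an established connection), which is why a general retry mechanism would still be valuable.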
How to reproduce:
Perform any transcode of a large (100 GB or more) file read via an S3 signed URL; it will most likely produce a truncated output.
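A minimal command of the shape that triggers this (the bucket, key, and signature are placeholders; any signed GET URL for a sufficiently large object should do):

    ffmpeg -i "https://example-bucket.s3.amazonaws.com/large-input.mov?X-Amz-Signature=..." \
        -c:v libx264 -c:a aac output.mp4

When a read error occurs partway through the transfer, the transcode ends early and output.mp4 is finalized as if the input had ended at that point.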