Opened 11 months ago

#10414 new defect

Memory leak when using derived device context of type AV_HWDEVICE_TYPE_QSV

Reported by: Karim379
Owned by:
Priority: important
Component: avutil
Version: git-master
Keywords: h264_qsv
Cc: Karim379
Blocked By:
Blocking:
Reproduced by developer: yes
Analyzed by developer: no

Description

Hi,

I can encode an ID3D11Texture2D successfully using a derived device context of type AV_HWDEVICE_TYPE_QSV together with its derived hardware frames context, but when I free those contexts the memory is not released, so the process grows on every loop iteration.

Here is a small example that reproduces the problem; it covers only creating and freeing the derived contexts:

#include <iostream>
#include <d3d11.h>
extern "C"
{
#include <libavutil/opt.h>
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext_d3d11va.h>
#include <libavutil/hwcontext_qsv.h>
#include <libavutil/hwcontext.h>
}


int main()
{
    while (true)
    {
        AVFrame* frame = av_frame_alloc();
        const AVCodec* codec = avcodec_find_encoder_by_name("h264_qsv");
        AVCodecContext* encoderContext = avcodec_alloc_context3(codec);

        encoderContext->width = 1920;
        encoderContext->height = 1080;
        encoderContext->time_base.den = 120;
        encoderContext->time_base.num = 1;
        encoderContext->framerate.den = 1;
        encoderContext->framerate.num = 120;
        encoderContext->max_b_frames = 0;
        encoderContext->thread_count = 1;
        encoderContext->gop_size = 10;
        encoderContext->sw_pix_fmt = AV_PIX_FMT_NV12;
        encoderContext->pix_fmt = AV_PIX_FMT_QSV;

        av_opt_set(encoderContext->priv_data, "profile", "baseline", 0);
        av_opt_set(encoderContext->priv_data, "preset", "veryfast", 0);
        av_opt_set(encoderContext->priv_data, "async_depth", "1", 0);
        av_opt_set(encoderContext, "b", "15M", 0);
        av_opt_set(encoderContext, "maxrate", "15M", 0);
        av_opt_set(encoderContext, "bufsize", "15M", 0);

        // Create the underlying D3D11VA device context.
        AVBufferRef* hardwareDeviceContext = nullptr;
        int err = av_hwdevice_ctx_create(&hardwareDeviceContext,
                                         AV_HWDEVICE_TYPE_D3D11VA, "hw", nullptr, 0);

        AVHWDeviceContext* deviceContext = (AVHWDeviceContext*)hardwareDeviceContext->data;
        AVD3D11VADeviceContext* ctx = (AVD3D11VADeviceContext*)deviceContext->hwctx;
        //ctx->device = getD3DDevice();

        // Allocate and initialize a D3D11 frames context on that device.
        AVBufferRef* hardwareFrameContext = av_hwframe_ctx_alloc(hardwareDeviceContext);

        AVHWFramesContext* frames_ctx = (AVHWFramesContext*)hardwareFrameContext->data;
        frames_ctx->format = AV_PIX_FMT_D3D11;
        frames_ctx->sw_format = AV_PIX_FMT_NV12;
        frames_ctx->width = encoderContext->width;
        frames_ctx->height = encoderContext->height;
        frames_ctx->initial_pool_size = 1;
        err = av_hwframe_ctx_init(hardwareFrameContext);

        // Derive a QSV device context and a QSV frames context from the
        // D3D11 ones. These are the two calls that trigger the leak.
        err = av_hwdevice_ctx_create_derived(&encoderContext->hw_device_ctx,
                                             AV_HWDEVICE_TYPE_QSV, hardwareDeviceContext, 0);
        err = av_hwframe_ctx_create_derived(&encoderContext->hw_frames_ctx,
                                            AV_PIX_FMT_QSV, encoderContext->hw_device_ctx,
                                            hardwareFrameContext, 0);

        err = avcodec_open2(encoderContext, codec, nullptr);

        av_hwframe_get_buffer(encoderContext->hw_frames_ctx, frame, 0);

        // Release everything. av_buffer_unref() already resets the pointers
        // to nullptr, so no extra nulling is needed; the memory used by the
        // derived contexts should be returned here, but it is not.
        av_buffer_unref(&hardwareDeviceContext);
        av_buffer_unref(&hardwareFrameContext);
        av_buffer_unref(&encoderContext->hw_device_ctx);
        av_buffer_unref(&encoderContext->hw_frames_ctx);

        av_frame_free(&frame);                 // also unrefs the frame's buffers

        avcodec_free_context(&encoderContext); // closes and frees the encoder
    }

    return 0;
}
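
One way to quantify the growth is to sample the working set once per iteration with the Win32 PSAPI call GetProcessMemoryInfo. The helper below is only an illustrative sketch added for this report (printWorkingSet is a hypothetical name, not part of the original repro):

#include <windows.h>
#include <psapi.h>  // older SDKs may additionally require linking Psapi.lib
#include <cstdio>

// Prints the current working set size; called at the end of each loop
// iteration, it makes the per-iteration growth visible.
static void printWorkingSet()
{
    PROCESS_MEMORY_COUNTERS pmc = {};
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
        std::printf("working set: %zu KiB\n", (size_t)pmc.WorkingSetSize / 1024);
}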

The memory leak appears only when these two lines are present:

err = av_hwdevice_ctx_create_derived(&encoderContext->hw_device_ctx, AV_HWDEVICE_TYPE_QSV, hardwareDeviceContext, 0);
err = av_hwframe_ctx_create_derived(&encoderContext->hw_frames_ctx, AV_PIX_FMT_QSV, encoderContext->hw_device_ctx, hardwareFrameContext, 0);
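
A possible mitigation sketch (an assumption, not a verified fix): create the D3D11 device and the derived QSV device once, outside the loop, and hand each encoder instance only a new reference via av_buffer_ref(), so that no new QSV context is derived per iteration:

AVBufferRef* d3d11Device = nullptr;
av_hwdevice_ctx_create(&d3d11Device, AV_HWDEVICE_TYPE_D3D11VA, "hw", nullptr, 0);

AVBufferRef* qsvDevice = nullptr;
av_hwdevice_ctx_create_derived(&qsvDevice, AV_HWDEVICE_TYPE_QSV, d3d11Device, 0);

while (true)
{
    // ... allocate and configure encoderContext as above ...
    // av_buffer_ref() only increments the refcount; the derived QSV
    // device context itself is created exactly once.
    encoderContext->hw_device_ctx = av_buffer_ref(qsvDevice);
    // ... derive/attach the frames context, open the encoder, encode ...
    // ... then avcodec_free_context(&encoderContext) ...
}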

I am using FFmpeg 6.0 built with --enable-libmfx.

System Info:
CPU: Intel(R) Core(TM) i7-9750H @ 2.60GHz
OS: Windows 10 Pro

Best regards

Change History (0)
