Opened 7 years ago

Last modified 21 months ago

#7037 open enhancement

ffmpeg destroys HDR metadata when encoding

Reported by: mario66
Owned by: Carl Eugen Hoyos
Priority: normal
Component: avcodec
Version: git-master
Keywords: libx265 hdr
Cc: delirio@getairmail.com, val.zapod.vz@gmail.com, dc, thovo
Blocked By:
Blocking:
Reproduced by developer: no
Analyzed by developer: no

Description

Summary of the bug:

Hi,

I have a new 4K Blu-ray with a 10-bit HDR movie and I now want to convert it to 1080p with x265, but keep the HDR.

I downloaded an ffmpeg binary built against 10-bit x265 and started the conversion with the '-vf "scale=1920x1080"' option. While the resulting video is indeed encoded with the Main10 profile, the video itself looks dull, as if the HDR information was lost. I noticed that the metadata changed.

In the original video, there are the following tags:

colour_range                             : Limited
colour_primaries                         : BT.2020
transfer_characteristics                 : PQ
matrix_coefficients                      : BT.2020 non-constant
MasteringDisplay_ColorPrimaries          : R: x=0.680000 y=0.320000, G: x=0.265000 y=0.690000, B: x=0.150000 y=0.060000, White point: x=0.312700 y=0.329000
MasteringDisplay_Luminance               : min: 0.0050 cd/m2, max: 4000.0000 cd/m2
MaxCLL                                   : 500 cd/m2
MaxFALL                                  : 200 cd/m2

In my new video, these are not included any more. I consider this to be a BUG because I never told ffmpeg to mess with color spaces etc.; ffmpeg should just leave everything as it is, including the metadata.

How to reproduce:

% ffmpeg -i in.mkv -vf "scale=1920x1080" -map 0:v -map 0:a? -map 0:s? -c:v libx265 [-pix_fmt yuv420p10le] -c:a copy -c:s copy out.mkv

The argument in square brackets is an alternative I tried but the result is the same.

Version:
ffmpeg-20180220-a877d22-win64-static

Change History (87)

comment:1 by James, 7 years ago

What's lost, only the MasteringDisplay, MaxCLL and MaxFALL values, or also colour_range, colour_primaries, transfer_characteristics and matrix_coefficients?

Also, can you provide us with a sample to reproduce this? What's the x265 version you're using?

comment:2 by mario66, 7 years ago

Everything is gone. Also colour_range, etc.

Sample:
https://we.tl/xpeDCFJYDH

It's only the Time Warner logo, but you can see the difference very clearly. And you can see that the tags are lost. (Note: this is not the original video, but the bug is still reproducible on it.)

It says "ffmpeg version N-90094-ga877d22d9a" and "x265 2.6+37-1949157705ce:[Windows][GCC 7.2.0][64 bit] 10bit" (for the x265 included in ffmpeg)

Last edited 7 years ago by mario66 (previous) (diff)

comment:3 by Carl Eugen Hoyos, 7 years ago

Component: undetermined → avcodec
Keywords: libx265 added
Priority: important → normal

comment:4 by mario66, 7 years ago

Could someone from the ffmpeg team please download the sample and store it somewhere internally? I just saw that WeTransfer deletes the file automatically after 7 days. Unfortunately I couldn't upload it to your website because you have a file size limit of 2.5 MB...

Last edited 7 years ago by mario66 (previous) (diff)

comment:5 by mario66, 7 years ago

  • deleted -
Last edited 7 years ago by mario66 (previous) (diff)

comment:7 by mario66, 6 years ago

I don't understand. Why is no one working on this? There is clearly a demand for this. People have to write awfully long scripts to compensate for this shortcoming. Here is one example: https://forum.doom9.org/showthread.php?s=1e8582f9d692cbc7f04d205d05964122&p=1833683#post1833683

comment:8 by mario66, 6 years ago

Priority: normal → important

comment:9 by Carl Eugen Hoyos, 6 years ago

Priority: important → normal

This is not a regression afaict.

comment:10 by mario66, 6 years ago

If you were as fast fixing this bug as you are to downplay the importance, we would have no issue here.

I created a forum thread here https://forum.doom9.org/showthread.php?t=175278 and it got lots of attention. People wrote me personal messages because they had the exact same problem and asked me if I had meanwhile found a solution. The truth is, however, that still no solution exists. I'm disappointed. ffmpeg is just another failing open source project where the developers only focus on integrating fancy cool stuff no one needs. If you read the changelog it mentions dozens of unnecessary filters you added, but it never says that something was fixed.

This is why open source always fails: the glorified people who work in their spare time for this project just do things that are fun, not things that are actually necessary.
Only if people get paid to fix bugs is there a chance that the project will become usable. That's the sad truth.

Libav was so right. It had much better developers with a much better mindset, too bad it was abandoned.

comment:11 by mario66, 6 years ago

Owner: set to Carl Eugen Hoyos
Status: new → open

comment:12 by mario66, 6 years ago

cehoyos, so you are going to fix this bug now? Great!

in reply to:  10 comment:13 by Carl Eugen Hoyos, 6 years ago

Replying to mario66:

Libav was so right. It had much better developers with a much better mindset, too bad it was abandoned.

I am curious: So why did you report the issue here?

comment:14 by mario66, 6 years ago

Libav was focused on getting things right before working on fancy new stuff. This was the absolute correct thing to focus on. Unfortunately, it got abandoned because, like I said, doing things right doesn't attract developers. It's the fancy new stuff that attracts developers, because they just want to have fun while coding. This is how "open source development" always works. Libav was the rare exception but too few people followed them. It's so sad. I would definitely use it if it was actively developed, this I can tell you.

HDR is just not usable with ffmpeg because of this. It was never developed to completion. Probably something fancier was on the way.

"ffmpeg HDR metadata" is among the top 4 (!) suggestions in Google if you type "ffmpeg HDR", because everyone who wants to encode HDR will stumble across this. HDR is already in the mass market. Even cheap TVs support it. I consider it a major drawback of ffmpeg that it cannot properly handle HDR. And I am sure some people haven't even noticed, but they will be screwed once they find out that important metadata is missing from their media backup collection.

I'm just telling the truth.

Last edited 6 years ago by mario66 (previous) (diff)

in reply to:  14 comment:15 by Carl Eugen Hoyos, 6 years ago

Replying to mario66:

Libav was focused on getting things right before working on fancy new stuff. This was the absolute correct thing to focus on. Unfortunately, it got abandoned because, like I said, doing things right doesn't attract developers. It's the fancy new stuff that attracts developers, because they just want to have fun while coding. This is how "open source development" always works. Libav was the rare exception but too few people followed them.

Again: If any of this were true why did you report your issue here?

It's so sad. I would definitely use it if it was actively developed, this I can tell you.

Why do you believe it is not actively developed?

in reply to:  10 comment:16 by James, 6 years ago

Replying to mario66:

This is why open source always fails: the glorified people who work in their spare time for this project just do things that are fun, not things that are actually necessary.
Only if people get paid to fix bugs is there a chance that the project will become usable. That's the sad truth.

You're chastising people for doing something fun and interesting in what you acknowledged is their spare time, and not what you think should be done instead. I hope you realize how absurd and entitled that sounds.

Libav was so right. It had much better developers with a much better mindset, too bad it was abandoned.

Adding color information to the filter chain during format negotiation in encoding scenarios is a huge change that no one capable of implementing has felt like doing. And that includes every developer from ffmpeg and libav for the better part of the past two decades.

This is a bug tracker, not a discussion forum. Stick to technical reports here, and criticize development elsewhere.

comment:17 by mario66, 6 years ago

I'm not chastising anyone, I just analyze the situation. Look, there is no problem with having fun. I didn't mean it in an insulting way. I also like to have fun. One just should be honest about it.

What I want to say is this: if we have a wrong model of reality, bad things can happen. Like a complete misalignment of incentives. Under the correct world model (where devs do coding for fun, not for the greater good), you will realize there is absolutely zero incentive for anyone to fix this bug. That's a serious issue. One step could be to build some gamification elements into this. Like every new feature submission has to be followed by one bugfix submission, otherwise you cannot submit new features any more. Bugs like the present one would be fixed within weeks. You are really fast. Just look how fast cehoyos was to downplay the importance of this very serious issue. It was like a few minutes' reaction time - on a Sunday! There is lots of spare time and energy, but the incentives must be misaligned if this time and energy is not converted into good code. This is what I wanted to say right from the beginning.

You can transfer this discussion and my arguments to other places if you want. But it was important for me to tell it here since this is an issue that prevents this particular bug from being fixed.

in reply to:  17 comment:18 by Carl Eugen Hoyos, 6 years ago

Replying to mario66:

I'm not chastising anyone, I just analyze the situation. Look, there is no problem with having fun. I didn't mean it in an insulting way. I also like to have fun.

One just should be honest about it.

I am sorry, could you elaborate what (or whom) you mean here?

It is one thing that you post complete BS above (some of which absolutely makes me laugh after all those years) but if this is meant as an insult, I'd like to know;-)

comment:19 by mario66, 6 years ago

No one was insulting anyone, jamrial was suggesting I was chastising people but this is obviously not true. I was reacting to his post. You are allowed to do everything, and I will also not chastise you for having fun with my analysis here. :-)

It is important, however, to talk about reality even if this collides with the self-image some developers may have that they are serving the greater good. I know lots of people don't like their self-image being challenged and they will either react enraged or they start laughing (like you did) as a mechanism of self-defense.

This self-image of serving the greater good is not what we can observe in reality. It is a fact that this bug is objectively more important than most of the fancy new features added in the latest releases. I proved this with Google statistics. Still, this bug is open. Thus, the developers must have a different motivation. q.e.d.

Last edited 6 years ago by mario66 (previous) (diff)

in reply to:  17 comment:20 by Carl Eugen Hoyos, 6 years ago

Replying to mario66:

One just should be honest about it.

I am not a native speaker: Please explain what this means and who you mean.

comment:21 by mario66, 6 years ago

I will put it in other words:
People who do coding for fun should be honest that they do coding for fun (and not for charity). I didn't target that to any specific people, there may be some people with a more noble motivation (like the ones who started libav), but as I have proved above there is a significant percentage of people contributing here who do not care what is actually important but just care for having fun. And it is important to point that out. Not to chastise those people but because if we accept this reality, we can change the system and the incentives such that people can continue to have fun but the resulting product will be much better. There is a whole research area called Mechanism design that tells us how we can change incentives toward desired objectives. Gamification is one approach that takes the "fun" element into account.

Last edited 6 years ago by mario66 (previous) (diff)

in reply to:  21 comment:22 by Carl Eugen Hoyos, 6 years ago

Replying to mario66:

People who do coding for fun should be honest that they do coding for fun (and not for charity).

Who claims to code for charity?
(I wonder what this even means...)

I didn't target that to any specific people, there may be some people with a more noble motivation (like the ones who started libav)

Do you realize how offending this sentence is?

Last edited 6 years ago by Carl Eugen Hoyos (previous) (diff)

comment:23 by mario66, 6 years ago

I'm not sure why you keep responding here since jamrial didn't want this to be used as a discussion forum. However, if I am asked directly I feel eligible to answer to that. ;-)

ffmpeg has a donate button on their website. With a red heart next to it. This appears like charity to me.

Ok, but you are right. Not everyone here claims to work for the greater good. Like for example you. You're contributing here to be hired as a freelancer. (Also shown on the website) Your motivation is to appear as competent as possible to potential employers. This may not necessarily lead to the most valuable technical contributions, just to contributions that appear valuable to outsiders. We are all just Homo economicus, and this would be the best strategy if you act according to your incentives.

However, there should be someone at the top of the organization who knows what's going on at the lower levels. Someone who knows the motivation of the people and can set the rules accordingly. I'm not sure if someone like this exists. Because what we can observe is that major functionality that is used by millions of people (like HDR) is broken and does not get fixed for years (!).

It's like the social market economy. You are from Austria, you should know this. In a social market economy, people are still allowed (and even encouraged) to act according to their incentives. However, the system is designed in such a way that if everyone follows his or her personal incentives, the prosperity of the society is maximized. This does not work in a free market. There must be boundaries and regulations. (Although some far-right Austrians - even in the government - don't think so and disastrously want to strip away as much government regulation as possible.) A free market will degenerate since the incentives are not aligned with what is best for the society.

Same for ffmpeg. ffmpeg is currently managed like a free market. Everyone can contribute anything without coordination or regulations or special programs to encourage fixing of important bugs. People just do what's best for their curriculum vitae or what is fun. But no one concentrates those forces towards the desired outcome of a stable and usable media library.

in reply to:  23 comment:24 by Carl Eugen Hoyos, 6 years ago

Replying to mario66:

ffmpeg has a donate button on their website. With a red heart next to it. This appears like charity to me.

I am not a native speaker, but I strongly believe this argumentation makes absolutely no sense (even more so after reading our donations site).

You're contributing here to be hired as a freelancer. (Also shown on the website)

(While I partly understand your argumentation)
No.

Your motivation is to appear as competent as possible to potential employers.

LOL
No.

comment:25 by Balling, 5 years ago

Funny discussion, but seriously, with HDR one should not copy the metadata by default. IMO, in so many cases conversion of HDR includes color management, transfer characteristics changes, luminance conversion, and tone mapping is done by default just to present it (differently according to device capabilities); there is global metadata and metadata in SEI, etc., etc. Just read this https://github.com/sekrit-twc/zimg/issues/69 about the new ICtCp color space, which is now part of the HEVC standards and the new BT.2100. Crazy one. In so many cases the metadata is broken, so you know.

comment:26 by mario66, 5 years ago

Priority: normal → critical

Yeah, absolutely. There are so many things going on in terms of HDR but no one in the ffmpeg community seems to care. So when I have to deal with HDR content, I just think "oh shit, no way, this is not going to work", then I will look for the same content in SDR format, so that I can convert it to my desired format. It's lower quality for sure but at least it will be played as intended by the creator. As an alternative you could just leave the HDR content untouched which may waste disk space. This is sad. ffmpeg is all about converting from one format to the other, but when HDR video is involved, the best thing is to just not convert it. Then ffmpeg will be obsolete. So I think this is even critical. It is a substantial threat to ffmpeg itself. If ffmpeg will not be able to deal with HDR, there will be no use case for ffmpeg in the future. It will be a relic of the past, when people still used SDR. This is the most critical thing I can imagine.

Also it is sad to see that the system still works as before. People just submit code based on what they think is best for them or what made the most fun to code but still no one tries to balance forces and to incentivize people to do actual needed things.

Last edited 5 years ago by mario66 (previous) (diff)

comment:27 by Carl Eugen Hoyos, 5 years ago

Priority: critical → normal

comment:28 by mario66, 5 years ago

This time it actually took you 8 hours to downplay the importance. Is this project losing traction? ;-)

Your decision. But life tells you that if you neglect future trends, you will become a relic of the past. The only way to avoid this is to make adaptation and new technologies your top priorities. Too bad this is not happening. Just keep in mind, if ffmpeg doesn't survive, there will be no project where you can code for fun or promote yourself as a freelancer. Then it will be over. Then you all have to go to different projects. Maybe even libav - but there coding just for fun is not so easy, because there people actually want to implement things the right way, not just fast and in the way that is the most fun.

comment:29 by Balling, 5 years ago

Okay, I think you are right. Though metadata insertion is not the first priority, we should add support for Atmos (at least in EAC3, as the standard is there), Dolby Vision ICtCp https://github.com/sekrit-twc/zimg/issues/117 and HDR10+ (both encoding in h.265 (just activate it in the h.265 encoder) and decoding with already present patches https://patchwork.ffmpeg.org/patch/11828/, etc). Those codecs require dynamic metadata manipulation, so we will need it, or ask this guy https://github.com/rigaya/NVEnc/issues/119. After that we can do (and finally consider) what metadata should be the default, and when.

Last edited 5 years ago by Balling (previous) (diff)

comment:30 by mario66, 5 years ago

While reading your post, I realized maybe I just used the wrong wording. No one wants to work on something boring like "bugs". But when we call this a "fancy new technology" that is currently not "supported", there will be much more incentive to work on it. This is the missing piece in my puzzle. We don't have to incentivize people to fix bugs, we just have to stop calling it a bug!

To whoever is picking this up: I wish you all the best and hope you have a good time while working on these very exciting, futuristic technologies like HDR10+ and Dolby Vision. I wish I had the talent to work on such exciting topics. I can only imagine how much fun it will be to work with these technologies. :-)

Last edited 5 years ago by mario66 (previous) (diff)

comment:31 by mario66, 5 years ago

Type: defect → enhancement

in reply to:  30 ; comment:32 by gdgsdg123, 5 years ago

Replying to mario66:

HDR is already in the mass market. Even cheap TVs support it. I consider it a major drawback of ffmpeg that it cannot properly handle HDR.

You don't seem to understand how displays work...


Replying to mario66:

...played as intended by the creator.

Ensuring the color consistency among different displays is a huge challenge. Which is technically achievable (given certain constraints) but impractical in practice.

"No man ever steps in the same river twice, for it's not the same river and he's not the same man."


Alternatively, few content creators actually knew what the heck they were doing.





Replying to mario66:

Yeah, absolutely. There are so many things going on in terms of HDR but no one in the ffmpeg community seems to care. So when I have to deal with HDR content, I just think "oh shit, no way, this is not going to work", then I will look for the same content in SDR format, so that I can convert it to my desired format. It's lower quality for sure but at least it will be played as intended by the creator. As an alternative you could just leave the HDR content untouched which may waste disk space. This is sad. ffmpeg is all about converting from one format to the other, but when HDR video is involved, the best thing is to just not convert it. Then ffmpeg will be obsolete. So I think this is even critical. It is a substantial threat to ffmpeg itself. If ffmpeg will not be able to deal with HDR, there will be no use case for ffmpeg in the future. It will be a relic of the past, when people still used SDR. This is the most critical thing I can imagine.


Replying to mario66:

...futuristic technologies like HDR10+ and Dolby Vision.

You mention HDR so much... But do you really understand what HDR means, behind all those tech jargons, shenanigans, whatsoever?..


In layman's terms, it simply makes the black more black, the white more white. (thus High Dynamic Range)

In essence, it's no more than color management. (how the values of the pixels should be translated and displayed by the output device)

There's no essential difference between HDR and SDR. (in some ways, they can be considered exactly the same thing)



Replying to jamrial:

...that no one capable of implementing has felt like doing.

Salesman's hyperbole...





Replying to Balling:

LG C9 OLED after calibration in all modes (SDR, HDR and Dolby Vision) has delta E 2000 less than 0.5. While you cannot distinguish values less than 3.0. And in delta E ITP it is even better.

Ask the manufacturers who claim to produce such high-accuracy displays about the limitations of the product; you shall then understand why it's impractical in practice.

Will you throw your TV out of the window for only a single dead pixel found on the panel or the colors looked slightly dim?..


Also note: Delta E is only a referential metric, in which subjectivity plays the definitive part of the definition.

And your expression reads like: "if you take the measurement in mm instead of cm the length will be longer..."



Replying to Balling:

Cracked Calman can be found on torrents and calibrator you can buy for 200 $$

Having a referential color tuning profile can be helpful but does not solve the problem in essence...

Do realize: even for displays of the same model, coming from the same production line at the same time, there is still a difference in their output characteristics.



Replying to Balling:

The blacks and whites, I think you do not understand HDR either.))

I think you do not understand English well...





Replying to mario66:

gdgsdg123, you do not really understand what HDR means in practice...

In SDR (8 bit)...

So you don't even understand what dynamic range means...

Higher bit depth only allows a possibly smoother transition between the extremes, while the extremes themselves remain unchanged (and so does the dynamic range).



Replying to mario66:

...change the distribution of the pixel values... ...uniform distribution... ...most pixel values centered around some values and only very few pixels (e.g. looking into sunlight, fire) occupy the tails of the distribution.

...

...on a per frame / sample basis.

What those words hint at, I call variable quantization geometry. (I haven't found a commonly accepted piece of tech jargon describing its nature.)

But which has nothing to do with HDR.



I'll use a simple example to demonstrate its purpose:


Think of content which only exhibits many different shades of red and a single green among all the pixels.

To effectively store the many shades of red, a sufficient number of bits must be allocated, else banding/discoloration will be inevitable.
To effectively store the single green, what it takes may be less than 1 bit...

And a quantization geometry designed according to the above guideline works perfectly... for the above content only.


What if the content were exactly the opposite, or just some random values (as in real life)?.. That's where the "one size fits all" solution comes in:

Uniform distribution of the representation granularity for all components (primary components of the colorspace), as a constant quantization geometry.

Which works well generically but can indeed be improved.





Replying to Balling:

(10 bit color) It is just used because 1) it was done to accelerate technology (and it worked: Nvidia had to activate 10 bit color in Photoshop on the GeForce line (actually what they did is they activated 10 bit in the WINDOWED OpenGL API))
2) the perceptual quantizer is HDR and HDR IS PQ. Unfortunately, 8 bit was really not enough for it. That is all. Also banding issues, as the HDR data should be modified according to display capabilities (so if mastering data and display data are not the same you will have conversion). And 8 bit is possible, for example Dolby Vision is just 8 bit on the wire (with metadata to 12 or 10 bits) (but 10 bit in files with metadata to 12 bits).

So. HDR is only about a new tone curve, the PQ. What is so cool about PQ? Well, with a gamma tone curve (the sRGB standard), when light intensity reaches less than 0.1 nit or greater than 100 nits, there is no more differentiation in the color value.
While PQ (ST.2084) provides detail in both the 100 to 10000 nit and the 0.001 to 0.1 nit ranges.
See https://docs.microsoft.com/en-us/windows/win32/direct3ddxgi/high-dynamic-range-and-wide-color-gamut

So blacks are more black (not the black itself, it is the same) and whites are whiter (not the white itself again).

Remember, that bulls* about a 2D color space is what it is: a bullsh. The color space is 3-dimensional. And the 3rd dimension is actually the luminosity.

Your statement is highly inaccurate but at least... not entirely wrong.

Note: how black the black is, is hardware specific (defined by the output device); the same goes for the white.

Last edited 5 years ago by gdgsdg123 (previous) (diff)

in reply to:  32 comment:33 by Balling, 5 years ago

Replying to gdgsdg123:
LG C9 OLED after calibration in all modes (SDR, HDR and Dolby Vision) has delta E 2000 less than 0.5. While you cannot distinguish values less than 3.0. And in delta E ITP it is even better.

Cracked Calman can be found on torrents and calibrator you can buy for 200 $$

The blacks and whites, I think you do not understand HDR either.))

Last edited 5 years ago by Balling (previous) (diff)

comment:34 by mario66, 5 years ago

gdgsdg123, you do not really understand what HDR means in practice. But let me use this opportunity to explain this in more detail so that you guys see why this is an absolutely critical "feature" you need to "support".

In SDR (8 bit), you basically have 256 values per color channel, so you are quite limited in how many different nuances you can display. So what you do when you record a movie is make sure that, from one pixel value to the next, the difference is low enough that a viewer cannot see any banding. That gives you a limit on how bright or dark something can be. Everything above or below that limit needs to be clipped. TV users at home then set max brightness on their TV so that it looks good for them.

Now HDR comes in, which means, first of all, you can represent pixel values with higher precision. In HDR, there are 1024 (10 bit) or 4096 (12 bit) different values per color channel. In a nutshell, that's the basic idea of HDR. So, how does this "it simply makes the black more black, the white more white" that a lot of customers believe come in? This is where things get complicated. When the creator now decides where max brightness is clipped, he can now choose much higher values, because you can represent much higher values before a viewer notices any banding. But this changes the distribution of the pixel values! Where with SDR you basically had an approximately uniform distribution, with HDR you now have most pixel values centered around some values and only very few pixels (e.g. looking into sunlight, fire) occupy the tails of the distribution. Now if this is applied naively on your TV, everything will look dull and low contrast, and bright light would not be any brighter. Why is this? Well, you have typically set the max brightness of your TV based on SDR content. What you would need to do is increase the brightness dramatically on your TV; only then would you be able to actually see the HDR as it was intended. But what is now the correct brightness value? And what if you then switch to an SDR movie, do you need to reset the brightness again? Also, a backlight that is too bright can limit the ability to display really dark scenes, which was also a goal of HDR. Problems over problems...
Then the solution the movie industry came up with was to directly control the max brightness of your TV by using metadata. If you ever watched an HDR movie, you will notice that the backlight is much more active, even higher than what you set on your TV. That's exactly because of this metadata. Carried to extremes, you could even set max brightness on a per frame / sample basis, which would further improve the experience since you can now tell the TV to dim the backlight in very dark scenes.

This is what the industry pushes for and what this metadata is all about. If that metadata is lost, your movie will look dull and low contrast, and the whites are not whiter, the blacks are not blacker... See the initial posting.

It is absolutely critical that this metadata is preserved. This is what everyone needs to understand. If you encode an HDR movie with ffmpeg at the moment and then delete the original, you are basically f*cked. The metadata is lost and there is no way to recover it.

Last edited 5 years ago by mario66 (previous) (diff)

in reply to:  26 comment:35 by Illya, 5 years ago

Replying to mario66:

Also it is sad to see that the system still works as before. People just submit code based on what they think is best for them or what made the most fun to code but still no one tries to balance forces and to incentivize people to do actual needed things.

Why don't you 'balance forces and incentivize people to do actual needed things'?

You should offer to pay a developer to implement it if you need it so urgently. But also keep in mind that there isn't full support on Linux for HDR yet (you would need to wait for this, or find a developer willing to test on windows), along with the fact that developers may not even have the hardware to display it (you would need to fund hardware for this as well).

in reply to:  34 ; comment:36 by Balling, 5 years ago

Replying to mario66:
You are 100 percent wrong. 10 bit color has nothing to do with HDR. It is just used because 1) it was done to accelerate technology (and it worked: Nvidia had to activate 10 bit color in Photoshop on the GeForce line (actually what they did is they activated 10 bit in the WINDOWED OpenGL API))
2) the perceptual quantizer is HDR and HDR IS PQ. Unfortunately, 8 bit was really not enough for it. That is all. Also banding issues, as the HDR data should be modified according to display capabilities (so if mastering data and display data are not the same you will have conversion). And 8 bit is possible, for example Dolby Vision is just 8 bit on the wire (with metadata to 12 or 10 bits) (but 10 bit in files with metadata to 12 bits).

So. HDR is only about a new tone curve, the PQ. What is so cool about PQ? Well, with a gamma tone curve (the sRGB standard), when light intensity reaches less than 0.1 nit or greater than 100 nits, there is no more differentiation in the color value.
While PQ (ST.2084) provides detail in both the 100 to 10000 nit and the 0.001 to 0.1 nit ranges.
See https://docs.microsoft.com/en-us/windows/win32/direct3ddxgi/high-dynamic-range-and-wide-color-gamut

So blacks are more black (not the black itself, it is the same) and whites are whiter (not the white itself again).

Remember, that bulls* about a 2D color space is what it is: a bullsh. The color space is 3-dimensional. And the 3rd dimension is actually the luminosity.

And again: both DCI P3 (and even the bt2020 extensions, which can be found in not so many films, for example the cartoon Multiverse Spiderman and BBC Planet Earth II) and 4k are not mandatory for HDR either, but again SMPTE wanted to accelerate all those technologies.

Last edited 5 years ago by Balling (previous) (diff)

in reply to:  36 comment:37 by mario66, 5 years ago

Replying to Balling:
I appreciate your more technical clarification, but in a nutshell my explanation is correct and gives people who have never heard of HDR an intuitive introduction to this topic. Because at its core, the following mechanism is true: if you open up the color space, i.e. you can now represent brighter values, you have to tell the TV that your distribution changed and that max brightness is now much higher. Basically, with a TV you can only control the max brightness, although obviously you are interested in controlling the average brightness. Everything will orient itself around this max brightness. And depending on whether the content is equally distributed within the color space or you have a long-tail distribution, you need to set different max brightness values to achieve the same average brightness. This effect is based on logic and cannot be wrong, although of course this is just a simplification and in reality things are more complex.

Last edited 5 years ago by mario66 (previous) (diff)

comment:38 by Balling, 5 years ago

Again. Wrong. Any modern display is HDR, just because it is brighter than 100 nits. SDR should be 100 nits max! When you calibrate an OLED TV you need to set maximum brightness for SDR, and it is 100 nits in a dark room and 200 nits in a light room. That is the point! With HDR you cannot manipulate the brightness. Okay? Just cannot!!! HDR manipulation should be blocked (it is not on Android and in many cases, but it should be, okay?).

See this https://www.youtube.com/watch?v=0XTgz5Z1bhE
https://www.youtube.com/watch?v=EN1Uk6vJqRw&t=7s
and https://www.youtube.com/watch?v=bwdDPxrmWPw

in reply to:  38 comment:39 by mario66, 5 years ago

Replying to Balling:
HDR metadata does specify brightness. This is what "MaxCLL", "MaxFALL" and others stand for. See: https://www.mysterybox.us/blog/2016/10/19/hdr-video-part-3-hdr-video-terms-explained
What you probably mean is that it should not be manipulated by the user; this is somewhat correct. This is why we need the metadata. Of course nowadays the user does not really need to change the brightness on his TV for HDR content, because it is handled by the metadata! What I wanted to provide is an explanation of how we got into the situation we are currently in, easy to follow for beginners.
However, that you keep complaining about such minor technical inaccuracies but fail to see the big picture, that is, that metadata is essential, is somewhat symptomatic of this bug tracker discussion. UHD HDR Blu-ray will be the new standard. ffmpeg needs to preserve the metadata, otherwise the content will be corrupted. If ffmpeg cannot deal with such videos, which in the future will be the new baseline, ffmpeg will be obsolete! This is the big picture you and others fail to recognize.

Last edited 5 years ago by mario66 (previous) (diff)

comment:40 by Balling, 5 years ago

It will never be obsolete, because Nvidia (nvenc) and Intel (mediasdk) are supporting it. Again, we need to first implement dynamic metadata, then we can see and look at what metadata should be preserved. Also please don't forget that ffmpeg is a library. One should preserve only such values as do not usually change (this is right for tags (like MP3)) and much other stuff, like HDR metadata inside (!) frames. But outside metadata can change, and you must understand that the part of the program which is doing that is not ffmpeg, it is the h265 encoder (which BTW already supports Dolby Vision (except dual layer) and HDR10+).
https://bitbucket.org/multicoreware/x265/wiki/Home

Last edited 5 years ago by Balling (previous) (diff)

comment:41 by mario66, 5 years ago

Yes, AFAIK, x265 (the encoder I used) already supports both HDR10+ and Dolby Vision. I wonder, though, why it is still not applied in ffmpeg. I gave it a try just recently and still all the metadata is lost. It must be something in ffmpeg that does not enable this metadata transfer properly.

Last edited 5 years ago by mario66 (previous) (diff)

comment:42 by mario66, 5 years ago

Some sources for my explanation: http://www.abacanto.net/wp-content/uploads/2018/12/TEKTRONIX-TIPSTRICKS-152-MaxFALL-MaxCLL-ES.pdf

I think before we talk about dynamic metadata, I should point out that even static metadata as present in HDR10 is not supported in ffmpeg. See the initial post. These were global tags. If I remember correctly, the example video I uploaded here was from an HDR10 (no "plus") source.

So tell me again, how is this not a massive failure caused by largely misaligned incentives if such a simple fix is not applied? What is so difficult about it? The only posts here complaining about difficulty were talking about dynamic metadata, so that even I was confused and thought this would be the reason why my video didn't work. But no, if you think about it, this particular video only needed support for HDR10, and I'm sure x265 has supported this for a long time. x265 is not to blame.

Then the next step of course would be dynamic HDR, which of course will be more challenging, I appreciate this. Nevertheless it will be an important long-term goal you should definitely aim for!

comment:43 by Balling, 5 years ago

Dolby Vision encoding is already supported; what is not supported in h.265 is dual layer (when you have two video streams (one in 4k, the other in full hd) inside the container, see the Dolby demo blu-ray disks on torrents) and decoding it (both dual layer and NAL values and EVEN THE ICTCP COLOR, goddammit, aka Bt.2100 (though Dolby has a different process called IPTPQc2)).
The codec side for HDR10+ is supported (decoding is in patches on patchwork only, and you need to activate it in h.265).

I will point out ;) that ffmpeg should insert metadata in sync with the h.265 parameters, the color management filters and the nvenc encoder. That work needs to be done and there are many crazy things about it. Again, read this https://github.com/sekrit-twc/zimg/issues/117

You must understand that mastering primaries should be applied when you are mastering the file, not transcoding it. Players should not use that data, they should use the data inside the container (again, inside metadata which is then transferred over HDMI). If they are not doing that, that is their problem. Again, this is a library! These changes will break the API! And if we do it in a hurry it will break it more than once, which is unacceptable. Again, as you said, there are scripts that can insert metadata.

Last edited 5 years ago by Balling (previous) (diff)

in reply to:  42 ; comment:44 by Tomas Härdin, 5 years ago

Replying to mario66:

So tell me again, how is this not a massive failure caused by largely misaligned incentives if such a simple fix is not applied? What is so difficult about it?

You could try not being rude. You are not entitled to any developer's time. As for aligning incentives, see Illya's comment.

Did you try explicitly telling ffmpeg to copy metadata using -map_metadata? Sometimes that may be necessary.
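Something like this, for example (untested; note that -map_metadata only copies container-level metadata, so it may well not cover the HDR side data):

 % ffmpeg -i in.mkv -map_metadata 0 -vf "scale=1920x1080" -map 0:v -map 0:a? -map 0:s? -c:v libx265 -pix_fmt yuv420p10le -c:a copy -c:s copy out.mkv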

in reply to:  44 ; comment:45 by mario66, 5 years ago

Replying to Balling:
Once again thanks for the technical insights you provide. Indeed, this makes everything more complicated. Of course this can be optional, but it should be a simple switch, not a multi-line, very complicated bash script that only those who have studied the subject for several years can understand (which is what you currently need, see some posts above). A simple switch would be perfect.
But I never said this needs to be rushed. I left this bug alone for one year before I started, for the first time, to complain about no progress. Then after another year I now complain again. Did anyone even start to look into this in the last 22 months? It doesn't seem so.
Feel free to use another year to tackle all the API difficulties. There is really no hurry. I'm just worried that if progress continues at this speed, ffmpeg will not have this implemented in 2050, by which time it will be obsolete and only remembered by the elderly: "once there was a tool free of use and very powerful, but the developers failed to do some chore once in a while that was necessary to keep the project alive. Learn your lessons", I might tell my children and grandchildren, "there is no all-fun, no-chore project. Once in a while you have to do what's necessary to continue with the fun parts".

Replying to Tjoppen:
Nope, -map_metadata did not copy the metadata. But this is actually what a solution could be. Like in this application:
https://forum.blackmagicdesign.com/download/file.php?id=20325&sid=41b96079c85c28a5dcfa982292fee85c
Make a simple switch "-export_HDR10 [yes|no]" (like in the application above). Problem solved. Make the default "no". Then also Balling will be satisfied.
You think that monetary incentive would be a solution. I disagree. People have an incentive to use their spare time for something useful, without the need to give them money. What an intelligent project leader would do is to balance those forces so that at the end everyone feels happy and necessary features and bugfixes are implemented. Ever heard of "gamification"? You can also give non monetary rewards and incentives. Look, I do not want that people feel uncomfortable while coding. It is a great thing to have fun while coding. But you can trigger "fun" with incentives and rewards if you are an intelligent project leader.

in reply to:  45 comment:46 by Carl Eugen Hoyos, 5 years ago

Replying to mario66:

I do not want that people feel uncomfortable while coding.

No, you definitely do.

in reply to:  45 ; comment:47 by j-b, 5 years ago

Replying to mario66:

but no one in the ffmpeg community seems to care.

Replying to mario66:

gdgsdg123, you do not really understand what HDR means in practice

Replying to mario66:

So tell me again, how is this not a massive failure caused by largely misaligned incentives if such a simple fix is not applied?

Replying to mario66:

Look, I do not want that people feel uncomfortable while coding. It is a great thing to have fun while coding. But you can trigger "fun" with incentives and rewards if you are an intelligent project leader.


You are quite rude and your insinuations are borderline insulting.

Please stop, and start adopting a correct tone on this bug tracker.

Last edited 5 years ago by j-b (previous) (diff)

comment:48 by mario66, 5 years ago

Well, now you are switching to "shoot the messenger"...

But I think you got my point. Since I see so many quotes of my posts from so many different people, that means my concerns about the future of ffmpeg were read by a lot of people. This is good. Now you need to decide what you do with this information. See you again in another year. I'm not so optimistic that there will be a solution soon, but well, that's up to you.

Last edited 5 years ago by mario66 (previous) (diff)

in reply to:  47 comment:49 by Balling, 5 years ago

Replying to j-b:
Remember what happened to Linus for his code of conduct. <3 he cannot even use f*ck anymore(( Not a nice thing.

@mario66
Let's do it this way: give us as many links to those mentioned bash scripts, etc., as you can. Because as of now, we do not understand what you want; you cannot (I think) just copy metadata, but as I saw on katcr and rarbg it is indeed a problem, as you have to paste a crazy amount of metadata into the new file, and there should be a script out there suitable for every pirate's ;) needs. Please spend some time on it if you want this done. Thanks))) But again, I do think that we should implement the dynamic part of that shi* first.

comment:50 by mario66, 5 years ago

The first step, IMHO, would be to support the video and metadata from my initial post. It's a simple HDR10 video, it should be no big deal. Here someone wrote a script to manually copy the metadata: https://forum.doom9.org/showthread.php?s=1e8582f9d692cbc7f04d205d05964122&p=1833683#post1833683 It looks very complicated, and if you scroll through the thread lots of people got error messages. I assume it will be a pain in the ass for an average end user to actually use this script. Something like this should be officially supported.
In the end, what should be possible is that the transcoded video behaves exactly the same in terms of HDR. My TV should use the same parameters for processing the video, i.e. all the metadata should be indistinguishable. I should not be able to tell a difference between the original and the transcoded version, unless you have very good vision and can see the compression artifacts (which I, for example, cannot).

Then it would be great if ffmpeg were at least aware of all this metadata. That means if ffmpeg detects metadata which it cannot deal with (both static and dynamic), it should at least print a warning that something is odd (in red font), so that the user knows he should probably not delete the original. On the other hand, if no such warning occurs, the user should know he is safe, in the sense that all the important information was transferred and no loss of information occurred. If there is maybe an option in ffmpeg that needs to be turned on to transfer the metadata, the existence of this option should be mentioned in the warning message.
I'm not a big fan of hiding information from the user. If you make a design decision like discarding metadata only because copying it would be "technically incorrect", please be very verbose about this in your application. Always tell the user if there is a "twist" in his input video he might not have thought of, and what he can do to resolve that twist.

P.S.: I think part of the problem is that even you don't know what exactly needs to be done for which video. There is no documentation, no how-to for HDR, nothing. As an end user, you have to google it yourself, find awfully long scripts someone wrote, people claiming HDR is broken in ffmpeg, etc., so that in the end you are just scared and don't want to use ffmpeg for any HDR-related content. Is this really the impression you want to give your end users? A clear documentation of what works and what does not work and what you have to be aware of, this would be great! This bug tracker, where ffmpeg developers just try to defeat me (instead of the actual bug), is now the No. 2 Google result for "ffmpeg hdr". Something that does not really help either.

Of course, as a long-term goal, full support for every type of HDR, both dynamic and static, would be great. ffmpeg presents itself to the end user as an executable (not a library) he can use to convert videos. Thus, I think ffmpeg should do everything it can to avoid corrupted or incomplete outputs. But I hope I could give you some hints on where to start.

Last edited 5 years ago by mario66 (previous) (diff)

comment:51 by Balling, 5 years ago

It is now the first Google result.

As I said, links; I have already seen that link...

One of the guys said "Dolby Vision has an 8 bit AVC profile with an enhancement layer"; that is a lie! Dolby Vision IS USUALLY 8 bit on the wire while 10 bit in files! The source will send a DoVi Vendor Specific Info Frame (VSIF) for the TV to understand that you are sending DoVi signaling. Also please note that nvidia NVAPI already supports (in Windows only) Dolby Vision signalling; that works in Mass Effect Andromeda https://youtu.be/PC6MyPn6kkg and Need for Speed Heat! And it is 8 bit there! So that link is bad. Use GitHub search and try to find scripts there.

"Piping from ffmpeg into x265 works." I was right, right? It is the h.265 encoder that inserts (or can insert) the metadata.

The JSON bug was fixed (https://forum.doom9.org/showthread.php?p=1886901#post1886901) in #8228 and since then there have been no updates there! It is a BAD LINK, in case you do not understand...

Last edited 5 years ago by Balling (previous) (diff)

comment:52 by mario66, 5 years ago

Yes, I know, it is a "bad link". But I do not have any more scripts; I didn't find any script that works. I think there is currently none. (If there were one, there would be no need for this bug report.)
Then there is this link: https://forum.doom9.org/showthread.php?t=175227 where people just tinkered with a set of parameters until they got the metadata transferred for a particular video. (Which needs to be changed for every video individually.)
But to be honest I did not tinker with any scripts or parameters; I started this bug report in the hope that people would understand the issue and submit a fix.
Because in the end I would not be able to tell if everything went successfully or if there was a "twist" I was not aware of. But you, I hoped, are the experts. You could tell whether an output is complete or incomplete, and you can tell what exactly one needs to do to avoid loss of HDR information. I think it is better if there is something official, documentation or functionality, coming from the developers themselves. This will be of great value for all of your users!

I outlined above what the ideal outcome of a fix would be:
1) In the end, what should be possible is that the transcoded video behaves exactly the same in terms of HDR. My TV should use the same parameters for processing the video, i.e. all the metadata should be indistinguishable. I should not be able to tell a difference between the original and the transcoded version, unless you have very good vision and can see the compression artifacts (which I, for example, cannot).
2) Clear documentation of what works, what does not work, and what you have to be aware of.

You keep saying that "nvidia NVAPI already supports (in Windows only) Dolby Vision signalling". But this is irrelevant since I use the x265 codec inside ffmpeg. Also, x265 supports HDR10, but when combined with ffmpeg all the metadata is lost. This is an ffmpeg issue, not a codec issue.

Thanks for your tenacity, I hope I could clarify things a bit more. :-)

Last edited 5 years ago by mario66 (previous) (diff)

comment:53 by Balling, 5 years ago

You understand that pirates on rarbg use ffmpeg? The metadata is correct for both the original bit perfect WEBDL or Blu-ray and downconverts. Dolby Vision content obviously could not be downconverted yet.
So there should be a perfectly working script out there that uses that json ffprobe data, for example. Github is the place for it, or github gists. I will try to search through it /sigh
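For what it's worth, the ffprobe half of such a script usually looks roughly like this (just a sketch; entry names can vary between ffprobe versions):

 % ffprobe -hide_banner -loglevel warning -select_streams v:0 \
     -print_format json -show_frames -read_intervals "%+#1" \
     -show_entries "frame=color_space,color_primaries,color_transfer,side_data_list" \
     input.mkv

That dumps, for the first video frame, the colour properties plus the "Mastering display metadata" / "Content light level metadata" side data as JSON, which the script then has to reformat into x265 or mkvmerge arguments.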

comment:54 by mbosner, 5 years ago

Well, besides the discussion: which developer could/should I sponsor to help improve color space/format handling in ffmpeg? ATMOS support would also be nice ;)

comment:55 by delirio, 5 years ago

Cc: delirio@getairmail.com added
Keywords: metadata added
Priority: normal → important
Summary: ffmpeg destroys HDR information when encoding → ffmpeg destroys HDR metadata when encoding

ffmpeg is a great product and the developers deserve congratulations for their devotion to this project. As for the current discussion, I would like to add the following updates:

  1. nvenc already added the option to copy the HDR metadata: see https://github.com/rigaya/NVEnc/releases/tag/4.60:
Add option to copy HDR related metadata. ( #185, --master-display copy, --max-cll copy) (see the sketch after this list)
  2. Given the popularity of HDR10 and the fact that it is supported by all current 4K TV manufacturers, it would be a pity not to support this HDR metadata; what is the point of encoding from a 4K HDR original source if the HDR support is lost? Also, all original sources have been coming in HDR in recent years.
  3. You can manually create some script to extract this metadata and insert it later, but I would definitely prefer that ffmpeg handle this automatically.
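For reference, with rigaya's NVEncC the copy options from that release are used roughly like this (just a sketch; apart from the two options quoted above, the flags are from memory and may differ between NVEncC versions):

 NVEncC64 --avhw -i in.mkv --codec hevc --output-depth 10 --master-display copy --max-cll copy -o out.hevc

The point being that the encoder reads the mastering display / MaxCLL values from the source and writes them into the output bitstream without the user having to retype them.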

A great thank you to the developers again!

Last edited 5 years ago by delirio (previous) (diff)

in reply to:  53 ; comment:56 by delirio, 5 years ago

Replying to Balling:

You understand that pirates on rarbg use ffmpeg? The metadata is correct for both the original bit perfect WEBDL or Blu-ray and downconverts. Dolby Vision content obviously could not be downconverted yet.
So there should be a perfectly working script out there that uses that json ffprobe data, for example. Github is the place for it, or github gists. I will try to search through it /sigh

I have searched on RARBG as you suggested for their 4K encodes, and no matter whether it was a WEB or Blu-ray encode (for every group), the mediainfo shows that they were using the x265 library on Windows for encoding, with no mention of the ffmpeg/lavf encoder. They either aren't using ffmpeg at all or just avoid it for 4K HDR releases because of missing support.

comment:57 by Carl Eugen Hoyos, 5 years ago

Keywords: metadata removed
Priority: important → normal

in reply to:  56 comment:58 by Balling, 5 years ago

Replying to delirio:

Replying to Balling:

You understand that pirates on rarbg use ffmpeg? The metadata is correct for both the original bit perfect WEBDL or Blu-ray and downconverts. Dolby Vision content obviously could not be downconverted yet.
So there should be a perfectly working script out there that uses that json ffprobe data, for example. Github is the place for it, or github gists. I will try to search through it /sigh.

I have searched on RARBG as you suggested for their 4K encodes, and no matter whether it was a WEB or Blu-ray encode (for every group), the mediainfo shows that they were using the x265 library on Windows for encoding, with no mention of the ffmpeg/lavf encoder. They either aren't using ffmpeg at all or just avoid it for 4K HDR releases because of missing support.

That is not how it works; ffmpeg IS USING the x265 library, as I said. And in MediaInfo that is called the "Writing library" for a reason. The problem with the rigaya project is that it is using C++ /sigh. Though it is MIT, so no problem there.

Last edited 5 years ago by Balling (previous) (diff)

in reply to:  56 ; comment:59 by mario66, 5 years ago

Replying to delirio:

They either aren't using ffmpeg at all or just avoid it for 4K HDR releases because of missing support.

Thanks for researching this. Exactly my point. ffmpeg is going to die out, all your hard work you "donated" to this project will have no purpose, as all your users walk away and use different applications that don't lack this important functionality.
I hope you realize I just want to help you, with provocative theories, in the hope that you will wake up and understand this dramatic situation.

cehoyos, I'm glad to see you are still with us, fresh and alert to downplay the importance of this very critical bug. This time within two hours, nice!

in reply to:  59 comment:60 by Balling, 5 years ago

Replying to mario66:

Replying to delirio:

They either aren't using ffmpeg at all or just avoid it for 4K HDR releases because of missing support.

ffmpeg is going to die out, all your hard work you "donated" to this project will have no

Nobody is using x265 directly. It is possible of course, but ffmpeg is using x265 and YOU CAN pass arguments to it (-x265-params). Listen: ffmpeg is a library; that means that you need to use something like mkvmerge, which IS USED by rarbg... Something like:

 ./src/mkvmerge \
      -o output.mkv\
      --colour-matrix 0:9 \
      --colour-range 0:1 \
      --colour-transfer-characteristics 0:16 \
      --colour-primaries 0:9 \
      --max-content-light 0:1000 \
      --max-frame-light 0:300 \
      --max-luminance 0:1000 \
      --min-luminance 0:0.01 \
      --chromaticity-coordinates 0:0.68,0.32,0.265,0.690,0.15,0.06 \
      --white-colour-coordinates 0:0.3127,0.3290 \
      input.mov 

It is really simple.
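And for the values that have to live inside the HEVC bitstream itself, the usual manual workaround is to hand them straight to the encoder via -x265-params. A sketch using the numbers from the initial post (untested; hdr-opt/repeat-headers and the exact parameter names depend on the x265 build):

 % ffmpeg -i in.mkv -map 0:v -map 0:a? -map 0:s? \
     -c:v libx265 -pix_fmt yuv420p10le \
     -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
     -x265-params "hdr-opt=1:repeat-headers=1:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(40000000,50):max-cll=500,200" \
     -c:a copy -c:s copy out.mkv

master-display uses x265's G(x,y)B(x,y)R(x,y)WP(x,y)L(max,min) notation with chromaticities in units of 0.00002 and luminance in units of 0.0001 cd/m2, so R x=0.680 becomes 34000 and max 4000 cd/m2 becomes 40000000; max-cll is "MaxCLL,MaxFALL".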

So, I will update you. Very soon Google will give us updated HDR10+ patches; they are already fully ready (they are using it in YouTube and Google Films, but it is GPL 3, ha-ha, so they will have to give it to us). And then I think we will copy rigaya's implementation of the metadata handling, or mkvmerge's...
Can you please update that post https://forum.doom9.org/showthread.php?p=1886901#post1886901 saying that the issue was fixed in https://trac.ffmpeg.org/ticket/8228? I am not registered there.

comment:61 by mario66, 5 years ago

In my opinion it is not a solution if you have to manually find out all that metadata and copy it value by value via the command line. ffmpeg is an executable, not only a library. And as an executable, ffmpeg should behave with no "surprises".

That means, if I do:
"$ ffmpeg -i input.mp4 output.avi"
as it says on your website: https://ffmpeg.org/
where you claim "Converting video and audio has never been so easy.", then metadata should be preserved, OR the user should be clearly warned that metadata is lost, AND there should be a simple* (!) command line switch to preserve the metadata "so easy". This is my "definition of done" for this ticket, and nothing else.

P.S. *And by "simple" I mean a single command line switch, something like "-copy_hdr_metadata [True|False]" and nothing else; no background knowledge about the HDR metadata structure should be required!

Deleting important metadata by default is a big surprise for most users.

Last edited 5 years ago by mario66 (previous) (diff)

in reply to:  61 comment:62 by gdgsdg123, 5 years ago

Replying to mario66:

ffmpeg -i input.mp4 output.avi

Friendly reminder: by doing so you are basically telling FFmpeg to generate a more or less arbitrary file for you (and a lot of more important things can be "broken" this way).


And a funny fact: if you mess with this color management data, the clip might actually look better...

in reply to:  61 comment:63 by Balling, 5 years ago

Replying to mario66:

And as executable ffmpeg should behave with no "surprises".

Libraries are also executable code. The difference is that a library exposes many functions (an API) that can be called, and it may not have a main() function.

Again, please write there https://forum.doom9.org/showthread.php?p=1886901#post1886901 about what I told you,
and open the same issue here: https://bitbucket.org/multicoreware/x265/issues?status=new&status=open because it is x265 that writes the metadata. If you want this, the x265 devs should implement the encoder side of writing the metadata and send patches for the ffmpeg decoder.

comment:64 by mario66, 5 years ago

Part of the success of ffmpeg is that if you leave out options, it will choose the most reasonable default value for them. Imagine if every single command line option here: https://ffmpeg.org/ffmpeg.html#Options
had unintuitive or surprising default values, and to get a simple video converted, every single one of those options needed to be set to a reasonable value.
No one would use ffmpeg. Ever.
I understand that you are an expert and do not rely on default values anyway, but that is the exception.

For example in my case (for SDR videos), I used to just call:

ffmpeg -i in.mkv -map 0:v -map 0:a? -map 0:s? -c:v libx265 -crf 18 -c:a copy -c:s copy out.mkv

and I was happy with the result all the time... until HDR came along.

Copying HDR10 metadata could be done in ffmpeg as a simple post-processing step, independent of the video codec. Only for HDR10+ or Dolby Vision do you need support from the codec. And there you are right: the x265 guys need to submit patches for that.
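Until then, a sketch of the usual manual workaround, assuming a reasonably recent ffprobe, is to first dump what the source actually signals and then feed those values back in by hand or by script:

 ffprobe -v error -select_streams v:0 -read_intervals "%+#1" \
     -show_entries "frame=color_space,color_primaries,color_transfer,side_data_list" \
     -show_frames in.mkv

The side_data_list section of the first frame contains the mastering display and content light level values (when present), which is exactly the information that is currently dropped on transcode.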

in reply to:  64 ; comment:65 by gdgsdg123, 5 years ago

Replying to mario66:

ffmpeg -i in.mkv -map 0:v -map 0:a? -map 0:s? -c:v libx265 -crf 18 -c:a copy -c:s copy out.mkv

With this command I suspect you'd already messed with the colorspace (color management data) quite a lot... (regardless, the creator of the "in.mkv" may not have handled it correctly in the first place)


Anyway, from your choice of command parameters I can see that you don't care much about quality... that's why many broken things become "worksforme".

in reply to:  65 ; comment:66 by mario66, 5 years ago

Replying to gdgsdg123:

With this command I suspect you'd already messed with the colorspace (color management data) quite a lot...

Thanks, I never thought about this.

Anyway, from your choice of command parameters I can see that you don't care much about quality... that's why many broken things become "worksforme".

I regularly do ABX tests and find that my converted videos do not suffer from any quality loss.

But what would you recommend? Should every user of ffmpeg do a "Bachelor of Color Management" before he can start using it to convert videos?

You may know that in Germany there is a right to "private copying"; we even pay taxes for this, see private copying levy. I just want to be able to exercise this right.

I think it would be much better, and in the spirit of an open, free application, if you empowered everybody, including the average Joe, to use ffmpeg and get the best possible outcome with minimal knowledge of color spaces.

in reply to:  66 ; comment:67 by Balling, 5 years ago

Replying to mario66:

Replying to gdgsdg123:

With this command I suspect you'd already messed with the colorspace (color management data) quite a lot...

Thanks, I never thought about this.

Anyway, from your choice of command parameters I can see that you don't care much about quality... that's why many broken things become "worksforme".

I regularly do ABX tests

You mean SSIM tests? https://en.wikipedia.org/wiki/Structural_similarity
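(For an objective comparison inside ffmpeg itself there is the ssim filter; a minimal sketch, assuming the transcode and the reference have the same resolution and frame count:

 ffmpeg -i transcoded.mkv -i reference.mkv -lavfi ssim -f null -

It prints an overall SSIM score when the run finishes; per-frame values can be written out with the filter's stats_file option.)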

I just want to be able to exercise this right.

All my life I have never paid for ANY digital content. Music, video, apps. That is what I call cool. I saved up for a 2080 Ti))

Should every user of ffmpeg do a "Bachelor of Color Management"

Yes. At least with HDR. Or just use scripts. Or use mkvmerge.

Okay, I will open an issue on x265 myself. https://bitbucket.org/multicoreware/x265/issues/533/hdr10-metadata-copy-by-default-if-no-icc

Last edited 5 years ago by Balling (previous) (diff)

in reply to:  66 comment:68 by gdgsdg123, 5 years ago

Replying to mario66:

But what would you recommend?

There's no "one size fits all solution" in this case...

Anyway, for reference.


Replying to mario66:

Should every user of ffmpeg do a "Bachelor of Color Management" before he can start using it to convert videos?

I fear it requires more...


Replying to mario66:

I think it would be much better, and in the spirit of an open, free application, if you empowered everybody, including the average Joe, to use ffmpeg and get the best possible outcome with minimal knowledge of color spaces.

I think so, but it doesn't change the fact that functionality and user-friendliness seem to dislike each other...

in reply to:  67 comment:69 by delirio, 5 years ago

Replying to Balling:

Replying to mario66:

Replying to gdgsdg123:

With this command I suspect you'd already messed with the colorspace (color management data) quite a lot...

Thanks, I never thought about this.

Anyway, from your choice of command parameters I can see that you don't care much about quality... that's why many broken things become "worksforme".

I regularly do ABX tests

You mean SSIM tests? https://en.wikipedia.org/wiki/Structural_similarity

I just want to be able to exercise this right.

All my life I have never paid for ANY digital content. Music, video, apps. That is what I call cool. I saved up for a 2080 Ti))

Should every user of ffmpeg do a "Bachelor of Color Management"

Yes. At least with HDR. Or just use scripts. Or use mkvmerge.

Okay, I will open an issue on x265 myself. https://bitbucket.org/multicoreware/x265/issues/533/hdr10-metadata-copy-by-default-if-no-icc

Thank you for opening a ticket there. This ticket is indeed going in the right direction, and I am pretty confident we will soon have this much-desired functionality working in ffmpeg.

comment:70 by mario66, 5 years ago

I appreciate that you are starting to care about the average end user who is not so familiar with color spaces. Indeed, as Balling suggested, even for SDR videos, tags like "colour_primaries=BT.709" are present, and they are lost after transcode!
Now I'm not sure whether this is even a problem, because TVs could assume this as the default color space, but it is still a scary thought that all my transcoded videos are damaged by missing color information. For example what happens if I buy a new TV that does not assume BT.709 as the default? Will I even be able to watch the videos without a huge shift in color space?
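(For what it's worth, the basic colour tags can at least be set explicitly on the output today; a minimal sketch for a BT.709 SDR source is below. Whether the libx265 wrapper also writes them into the bitstream VUI depends on the ffmpeg and x265 versions in use.

 ffmpeg -i in.mkv -c:v libx265 -crf 18 \
     -color_primaries bt709 -color_trc bt709 -colorspace bt709 -color_range tv \
     -c:a copy out.mkv

This only covers the basic signalling, not the mastering display or content light level metadata.)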

I think this topic has been ignored for too long. Color management experts like you always knew how to tweak those options, but 99.9% of end users did not. I did not find any reference to this missing-metadata problem in any guides on how to transcode a Blu-ray video. I think this is not known to the general public. I'm always eager to learn new things, but there are always things I do not even know that I do not know, the "unknown unknowns". There is no chance to learn them if no one writes about this.

There is a lot of room for improvements in documentation, and I would be very happy if that were also a result of our interesting discussion. I tried to trigger something with this ticket: https://trac.ffmpeg.org/ticket/8482 but it was closed by my friend "cehoyos", who has no interest in improving this project.

Last edited 5 years ago by mario66 (previous) (diff)

in reply to:  70 comment:71 by gdgsdg123, 5 years ago

Replying to mario66:

For example what happens if I buy a new TV that does not assume BT.709 as the default?

Blame the TV manufacturer...


Replying to mario66:

Will I even be able to watch the videos without a huge shift in color space?

The red is never the red... anyway, the influence of BT.601 / BT.709 mistagging is much less discernible.



Replying to mario66:

...experts like you always knew how to...
...
There is no chance to learn them if no one writes about this.

Isn't that contradictory?..

Or an interesting topic: what's the real difference between the average Joe and the extraordinary Joe?



Replying to mario66:

There is a lot of room for improvements in documentation...

Many things in the world are poorly documented... FFmpeg is no exception.

in reply to:  70 comment:72 by Balling, 5 years ago

Replying to mario66:

"colour_primaries=BT.709" are present, and they are lost after transcode!

Well, in RARBG releases this info is always present, as they are either bit-for-bit WEB-DLs (and Netflix and Amazon always set that tag) or they use good scripts to downconvert.

There is a simple rule of thumb:
BT.601 ("Standard-Definition" or SD)
BT.709 ("High-Definition" or HD)
BT.2020 ("Ultra-High-Definition" or UHD)
That is almost always the case with SD and HD.
But with HDR and UHD, well, not at all((

TV does not assume BT.709 as the default

No, that cannot happen; for SDR it is assumed.
For HDR there is other information in the 10-bit HDR bitstream that says whether it is BT.2020, DCI-P3 inside BT.2020, or ICtCp.
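(To see what a given file actually signals at the stream level, one possible check, assuming a reasonably recent ffprobe, is:

 ffprobe -v error -select_streams v:0 \
     -show_entries stream=color_range,color_space,color_primaries,color_transfer \
     -of default=noprint_wrappers=1 input.mkv

Empty or "unknown" fields mean the player has to fall back to its own defaults.)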

any guides on how to transcode a Blu-ray video

Were you searching through AVSForum? I think I saw it there. Anyway, you should not use ffmpeg, but other utilities like the MKVToolNix GUI or tsMuxeR. E.g. https://www.avsforum.com/forum/39-networking-media-servers-content-streaming/2798233-fantastic-beasts-3d-atmos-attempt-blu-ray-quality-2.html#post54859492

without a huge shift in color space?

And that happens on EVERY Galaxy device except the Galaxy S10 / Tab S6. I have not seen any clever guy complaining about it besides me. It does not happen on the LG C9, BTW.

room for improvements in documentation

What you suggested is not really an improvement, as it is just editing the marketing BS. Nobody should care about that; solving this issue would be much better. But the scripts are out there.

Last edited 5 years ago by Balling (previous) (diff)

comment:73 by mario66, 5 years ago

I have to tell you that your plan to let x265 submit patches to ffmpeg will probably not work. The x265 devs clearly said they are contracted by a commercial company that has no interest in supporting or strengthening open source, see: https://bitbucket.org/multicoreware/x265/issues/533/hdr10-metadata-copy-by-default-if-no-icc#comment-56239152
They just want to sell their commercial products, which include x265 and can of course deal with HDR metadata. So obviously they have no interest in giving ffmpeg the same functionality. Currently they have a de facto monopoly on this feature and can charge insanely high prices for their software. They know how key this feature is, and so should you. I hope you begin to realize how important all this HDR metadata support is for ffmpeg and for free and open source video codecs in general. If you cannot make it work, who can?
This will be the end of free open source video processing. I have tried everything I can to convince you how important this is; I was very bold in making my point. Now it is up to you to pick up this topic and be the savior of free open source video processing (or be its gravedigger).

Last edited 5 years ago by mario66 (previous) (diff)

comment:74 by Balling, 5 years ago

This will be the end of free open source video processing.

Another update for you. First, some complaints ;)
WTF, you still have not written that this https://forum.doom9.org/showpost.php?p=1886901&postcount=41 is fixed in #8228? You are very lazy, really. I, on the other hand... But see below.
WTF, you still have not addressed that "there is global metadata and metadata in SEI". I suppose you did not understand!!! So you need an example. A player should not only look at the values in the global metadata!!! That is just wrong! Download this: magnet:?xt=urn:btih:AB4E88D2999CEC65F1C59A55EE786E8DEFB0DC22 open it with madVR in Media Player Classic, open the Ctrl-J info, go to minute 16:15, and at 16:20 the metadata in the mkv will change from 284/145 nits to 378/277 nits as the scene changes... Okay??? That is the point: the HDR10 "static" metadata in an mkv can change on a per-scene basis! So one can say it is actually dynamic)) We need to support that properly, that is the point... And it will be after this (or at least I hope so): https://patchwork.ffmpeg.org/project/ffmpeg/patch/20200223234124.17689-18-sw@jkqxz.net/ Also look at the EDID (DisplayID) standard CTA-861.3-A: "The Dynamic Range and Mastering InfoFrame carries data such as the EOTF and the Static Metadata associated with the dynamic range of the video stream.
If the Source supports the transmission of the Dynamic Range and Mastering InfoFrame and if it determines that the Sink is capable of receiving that information, the Source shall send the Dynamic Range and Mastering InfoFrame once per Video Field while it is sending data associated with the dynamic range of the video stream. The Source shall not send a Dynamic Range and Mastering InfoFrame to a Sink that does not have at least one of the ET_n bits set to ‘1’."
Note: this actually works on the LG C9 with zero frame latency, and I think it should work everywhere...
Also look here https://www.intel.com/content/www/us/en/programmable/documentation/aky1476080261496.html#ink1506343118369

Now the news: VLC finally implemented IPTPQc2/IPT/ICtCp color (so the bad colors in Dolby Vision decoding might become good in the near future, though with some inter-frame problems remaining). See the discussion https://github.com/mpv-player/mpv/issues/7326 and the link to the code https://code.videolan.org/videolan/libplacebo/commit/f850fa2839f9b679092e721068a57b0404608bdc, but I am afraid the matrices are actually wrong and should be these (after the 213 multiplier, I mean):

/** IPT to LMS */
    public static final short[] IPTPQ_YCCtoRGB_coef = {
        8192, 799, 1681, // 1.0,
        8192, -933, 1091, // 1.0,
        8192, 267, -5545, // 1.0,
    };

    /** IPT to LMS offsets */
    public static final long[] IPTPQ_YCCtoRGB_offset = {
        0, 134217728, 134217728 // 0, 0.5, 0.5
    };

    /** LMS inverse cross-talk matrix for c == 0.02 */
    public static final short[] IPTPQc2_RGBtoLMS_coef = {
        17081, -349, -349,
        -349, 17081, -349,
        -349, -349, 17081,
    };

Also we are missing the scaling polynomials and some other things that Dolby still hides, mfers. The next step would be to support decoding FEL (the enhancement stream can actually be decoded and looks very interesting, carrying the 12-bit data) and MEL (the second video stream is either grey or green, with just the 10-bit dynamic metadata, so not very interesting) in dual layer, hahahah.

Next: I am now in contact via Gmail with the Google HDR10+ dev; he promised to open a fork of ffmpeg on GitHub with HDR10+ ready! And soon it will be in YouTube and Google Films; also 1:1 Dolby Vision and HDR10+ will be produced!!! It is perfect, is it not! Look here: #8530. I will post a link when they are ready; he promised he would post it on the 21st of February... Well, he did not...

Last edited 5 years ago by Balling (previous) (diff)

comment:75 by mario66, 5 years ago

I will not write in some forum thread I didn't start, about a bug I haven't fixed, that the bug "is fixed in blah blah". You need to do that yourself, if you think whatever the problem in that forum thread was has been fixed by whatever patch was submitted. I'm not your legman.

So, you are telling me the initial example uploaded here: http://samples.ffmpeg.org/ffmpeg-bugs/trac/ticket7037/ now works like a charm? If I do "ffmpeg -i in.mkv -c:v libx265 out.mkv", the file will be correctly transcoded in terms of color space, and all the metadata needed to reconstruct the color space is preserved? It will look the same on any device out there before and after the transcode (apart from compression artifacts)?
If YES, good, case closed.
If NO, then this is what you need to fix. And I don't care about NVENC or the Google dev or whatever else you are trying to tell me. That is irrelevant. Please fix the bug reported here.
This is a simple yes/no question. There should be no ambiguity about it.

comment:76 by Elon Musk, 5 years ago

Yes, I will fix this bug immediately, in the next 10 seconds.

in reply to:  75 comment:77 by Balling, 5 years ago

Replying to mario66:

I will not write in some forum thread I didn't start, about a bug I haven't fixed, that the bug "is fixed in blah blah". You need to do that yourself, if you think whatever the problem in that forum thread was has been fixed by whatever patch was submitted. I'm not your legman.

You need to wait 5 days after registering on that forum before you can post, so do it yourself, please. OMG.

If your player does not understand the in-bitstream metadata, then it is a bad player. Write to your favourite player's developers, etc.; they should fix it.

Anyway, as I said, it is not that simple to fix, as the metadata can change within an mkv... I don't know how difficult this will be to do.

Last edited 5 years ago by Balling (previous) (diff)

comment:78 by mario66, 5 years ago

Then the bug here is not fixed. It's as simple as that. You are telling me a lot of technical stuff about VLC, Google, Dolby Vision, FEL, MEL, etc., and of course I appreciate the technical expertise you donate to this project. But I just want to be sure that we are on the same page. The page where everyone should be is the bug I reported initially. For ffmpeg. Not for any other application. This is the first step, and it should be prioritized. This is often the problem with ffmpeg: the second step is taken before the first. HDR10+, Dolby Vision, etc. are all relevant for the future, but right now I am talking about a simple HDR10 static metadata example. That is what is deployed in the real world today. That is what you should make rock solid first. Then you can move on to HDR10+ and Dolby Vision, then 12 bit or whatever, in that order. And then again, you should keep implementing all those features until they are rock solid.
If you continue with the current attitude, we will have neither HDR10 nor any advanced dynamic-metadata HDR usable in the real world, just people who started to implement something that works under laboratory conditions but never finish the actual integration for real-world usage. This is the core problem here, in my opinion. Developers do the first 80% of the work, where coding is fun and you make a lot of progress. Then, when it comes to completing the implementation and making the software work in the real world with all the little details you have to consider, like making sure the metadata is not lost during a transcode, developers quickly lose interest...

in reply to:  78 comment:79 by Balling, 5 years ago

Cc: val.zapod.vz@gmail.com added

Replying to mario66:

developers quickly lose interest...

No! Nobody loses it!
I just posted this https://patchwork.ffmpeg.org/project/ffmpeg/patch/20200223234124.17689-18-sw@jkqxz.net/ which transfers this external metadata into internal metadata! But even after that we will still need to support HDR10 static metadata BT.2390 DTM in side data, with the Pludge constant and Hermite curve, to present HDR10 correctly! OMG, we are doing it! Everything is already here: madVR supports DTM, the Dolby Vision polynomials are also here, HDR10+ will soon be given to us by Google, dammit, THERE IS A SCRIPT TO COPY METADATA. NVENC and NVDEC both support Dolby Vision (in NVAPI) and HDR10+ (rigaya).

Last edited 5 years ago by Balling (previous) (diff)

comment:80 by five82, 5 years ago

<deleted>

Last edited 5 years ago by five82 (previous) (diff)

comment:82 by Rapper_skull, 4 years ago

Sorry guys, but the discussion is really long and for the most part quite old. I just wanted to know if there's a summary of the current status of HDR10, HDR10+, DV, etc. in ffmpeg, both for decoding and encoding. Thank you.

in reply to:  82 comment:83 by Balling, 4 years ago

Replying to Rapper_skull:

Sorry guys, but the discussion is really long and for the most part quite old. I just wanted to know if there's a summary of the current status of HDR10, HDR10+, DV, etc. in ffmpeg, both for decoding and encoding. Thank you.

Nothing has really changed so far, but we are much closer. The HDR10+ patches from Google are much more ready; the DV IPTPQc2 colorspace is also soon to be added, it just needs some polish on the reshaper, and we have a PC app from Dolby; BT.2390 DTM for static metadata is implemented in mpv (though not its black point correction, alas), and it also implemented the change from 100 nits to 203 nits for the SDR-in-HDR reference white per BT.2408. Also, we have all the patches to support HDR metadata pass-through https://patchwork.ffmpeg.org/project/ffmpeg/patch/20200823223310.233061-8-sw@jkqxz.net/, but as there are changes to the metadata inside the container, and more (you can read about it here https://www.mail-archive.com/ffmpeg-devel@ffmpeg.org/msg108319.html), we are not ready yet.

Last edited 4 years ago by Balling (previous) (diff)

comment:84 by Balling, 4 years ago

I am going to attach a sample where the HDR10 static metadata changes when the scene changes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/uploads/16c628c535865d7282a48317064345a2/out.mp4

That already crashes GStreamer's h265parse due to multiple SPS. mpv does not have that problem, and neither does ffmpeg. Nice.

You can use

ffprobe -show_frames -select_streams v out.mp4

to check it out! Maybe add it to the ffprobe FATE tests; that would be nice.
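A narrower variant that prints only the relevant fields would be something like:

 ffprobe -v error -select_streams v -show_frames \
     -show_entries frame=pkt_pos,side_data_list -of compact out.mp4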

The change is as follows, at the frame with pkt_pos=7258705:

side_data_type=Content light level metadata
max_content=284
max_average=145

to

side_data_type=Content light level metadata
max_content=378
max_average=277

According to the freely available CTA-861-H, this must be supported by displays and players.

Also, I am pretty sure it is not VFR, but alas, mkv handling in ffmpeg is very strange: it thinks every mkv is VFR (and I got this file from an mkv Blu-ray rip by ripping it D:) ). Maybe adopting a MatroskaParser the way Nevcariel did for the LAV Filters fork (and he even fixed some other ffmpeg bugs) would be the answer... But I doubt it.
See: https://git.1f0.de/gitweb/?p=ffmpeg.git;a=commit;h=629d82013a9d5471bb5323890bed6969bdbe8885;js=1

comment:86 by dc, 21 months ago

Cc: dc added

comment:87 by thovo, 21 months ago

Cc: thovo added