Problems in swscale:

  • poor API
    • does not use ffmpeg internal data structures (e.g. AVFrame) or shared conventions (e.g. AVOptions)
    • has all kinds of random cruft that shouldn't be there (vector API, context caching, sws_convertPalette8ToPacked24, ...)
  • very static in its core conversion approach: "fast" (direct) AtoB converters vs. "scaled path" (the generic case)
    • the fast AtoB converters are cute but cannot be chained, so anything not directly covered by a fast path goes through the generic path, even when a chain of two fast paths would be faster
    • the scaled path is static and limited
      • only YUV allowed as internal common format
      • almost always invoked for non-8bpp pixel format conversions
      • several functions do multiple things at once for performance reasons, which makes adding new types of work very hard

        for example, since yuv is the internal common format, all rgb input is immediately converted to yuv. but that means that to convert between colorspaces we then have to convert back to rgb, linearize (remove gamma), go through xyz, re-apply gamma, back to rgb, and back to yuv before we can move on, which is ridiculous

  • assembly optimizations are hard
    • the scaled path has some fairly modern optimizations, but they can't set constraints on buffer sizes (since that would be an ABI change)
      • sometimes we fall back to C for the end of the image
      • sometimes we don't convert the last few pixels at all
      • sometimes we overread/overwrite
    • the unscaled paths are almost universally underoptimized (only mmx/mmx2 coverage, very little sse2 and virtually no avx2)
    • fast conversion simd often uses contexts to pass around information, which isn't very portable
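
A minimal sketch of the portability point above. All names here are invented for illustration, not real swscale code: when asm reads its inputs out of a context struct, the asm is tied to that struct's layout (field offsets and ordering), whereas explicit arguments only require following the platform calling convention:

```c
/* context-style: the function receives one pointer and digs fields out;
 * an asm implementation must hard-code offsetof() for every field */
struct FakeSwsContext {
    const unsigned char *src;
    unsigned char *dst;
    int width;
};

static void convert_ctx(struct FakeSwsContext *c)
{
    for (int i = 0; i < c->width; i++)
        c->dst[i] = c->src[i];
}

/* explicit-argument style: everything the kernel needs is a plain argument,
 * so a standalone per-platform asm version is straightforward to write */
static void convert_args(const unsigned char *src, unsigned char *dst, int width)
{
    for (int i = 0; i < width; i++)
        dst[i] = src[i];
}
```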

A good fix-up of swscale would have the following elements:

  • completely rewrite the API
    • should "feel" like all the other libs. libswresample may serve as a good example here
    • use AVFrame, AVOptions, AVPixelFormat, AVColor*, hide most internal details away from the API
      • it is specifically OK if this breaks mplayer.
    • some features will disappear
      • sws_convertPalette8ToPacked24 etc.
      • context caching
      • vector API
      • SWS_CS_* (replace by AVColor*)
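
A hypothetical sketch of the API shape this implies; every name here (SwsContext2, sws_alloc2, sws_scale_frame2, AVFrameStub) is invented for illustration. The pattern mirrors libswresample: allocate a context, set options on it, then convert whole frames in one call. AVFrameStub stands in for AVFrame so the sketch compiles without ffmpeg headers:

```c
#include <stdlib.h>

/* stand-in for AVFrame, carrying only the fields the sketch touches */
typedef struct AVFrameStub {
    int width, height, format;
} AVFrameStub;

typedef struct SwsContext2 {
    int dst_format;   /* would be an AVOption on the real context */
} SwsContext2;

static SwsContext2 *sws_alloc2(void)
{
    return calloc(1, sizeof(SwsContext2));
}

static void sws_free2(SwsContext2 **ctx)
{
    free(*ctx);
    *ctx = NULL;
}

/* placeholder body: a real implementation would convert/scale pixel data */
static int sws_scale_frame2(SwsContext2 *ctx, AVFrameStub *dst, const AVFrameStub *src)
{
    dst->width  = src->width;
    dst->height = src->height;
    dst->format = ctx->dst_format;
    return 0;
}
```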
  • more dynamic scaling paths
    • "fast" direct conversions become chainable
    • the "scaled" path becomes dynamic
      • dynamic internal format
      • dynamic ordering (chaining) of filters
      • merging of operations (e.g. reading yuv input and yuv2rgb conversion to prevent memory stores) where possible and where it gains significant speed
    • it may well be possible for the "fast" and "scaled" codepaths to be mixed, although this obviously needs more thought
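
The chaining idea above can be sketched as follows, assuming an invented ConvStage interface; the toy stages are not real pixel conversions, they only illustrate the mechanics. A real implementation would additionally merge adjacent stages when that avoids intermediate memory stores:

```c
#include <stddef.h>

typedef void (*ConvStage)(const unsigned char *src, unsigned char *dst, size_t n);

static void stage_add_one(const unsigned char *src, unsigned char *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] + 1;
}

static void stage_double(const unsigned char *src, unsigned char *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * 2;
}

/* run the stages back to back, ping-ponging between a scratch buffer and the
 * destination so that the final stage always writes into dst */
static void run_chain(const ConvStage *chain, size_t nstages,
                      const unsigned char *src, unsigned char *dst,
                      unsigned char *tmp, size_t n)
{
    for (size_t i = 0; i < nstages; i++) {
        /* if an even number of stages remains after this one, writing to dst
         * now means the chain ends in dst */
        unsigned char *out = ((nstages - 1 - i) % 2 == 0) ? dst : tmp;
        chain[i](src, out, n);
        src = out;
    }
}
```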
  • simd / platform optimizations
    • these should ideally not be exposed in the common code (i.e. a grep for x86 in libswscale/*.c should be mostly empty)
    • simd should be allowed to set constraints. E.g., over-reading and over-writing of buffers should be permitted in scenarios where that makes the assembly easier to write. These constraints should be conveyed to the user, who can then choose between a fast conversion (with padded buffers) and a slower one (without)
    • a "cleaner" SIMD API, see libavcodec/libavfilter/libswresample for examples, particularly for the "fast" versions
    • bringing some sanity into the choice of which function is assigned to which pointer under which conditions, or at least documenting it in human-understandable form, would also help a lot
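
One way the constraint idea could look, sketched with assumed names (SwsKernelConstraints, constraints_ok are not an existing API): each kernel advertises its alignment and padding requirements, and the dispatcher picks the fast kernel only when the caller's buffers satisfy them, otherwise falling back to an unconstrained path:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct SwsKernelConstraints {
    size_t alignment;  /* required start-address alignment, in bytes */
    size_t padding;    /* bytes the kernel may touch past each line end */
} SwsKernelConstraints;

/* a kernel processing 32 bytes per iteration may over-read/over-write up to
 * 31 bytes past the end of a line; plain C needs nothing */
static const SwsKernelConstraints avx2_like  = { 32, 31 };
static const SwsKernelConstraints c_fallback = { 1, 0 };

/* dispatcher-side check: can the constrained (fast) kernel be used for a
 * buffer starting at addr with the given linesize and allocated size? */
static int constraints_ok(const SwsKernelConstraints *k,
                          uintptr_t addr, size_t linesize, size_t allocated)
{
    return addr % k->alignment == 0 &&
           allocated >= linesize + k->padding;
}
```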
  • documentation
    • API docs
    • user docs
      • description of filter trade-offs
      • filter recommendations for end user scenarios
      • maybe even some comparisons of different filters on real-world content

Things to figure out:

  • slicing
[8:43am] <BBB> michaelni: poke
[9:02am] <michaelni> BBB, pong ?
[9:02am] <BBB> michaelni: so who’s going to work on swscale?
[9:03am] <BBB> like, you really seem to like swscale. I think it needs tons of love to get that kind of status. who will do that work?
[9:03am] <BBB> (the loving)
[9:03am] <michaelni> i had hoped that a GSoC student would move it forward but i think noone submitted a proposal
[9:04am] <BBB> that’s because it needs love
[9:04am] <wm4> BBB: swscale can't be saved
[9:04am] <BBB> you can’t expect students to give it love; students have no background, they add some new features
[9:04am] <michaelni> pedro does work slowly on improving it it seems
[9:04am] <BBB> maintainers need to give it love, ideally the original developer(s)
[9:04am] <BBB> swscale needs massive amounts of love
[9:05am] <wm4> because it's not a code problem but a development problem
[9:05am] <kierank> wm4: ^
[9:05am] <kierank> that
[9:05am] <BBB> wm4: I think it can be salvaged - I’m not sure any of the original code will be left at the end
[9:05am] <michaelni> BBB with comments like wm4&kieranks i will not even think about touching the code, iam not enjoying this environment
[9:06am] <BBB> michaelni: but you’re not talking to them, you’re talking to me
[9:06am] <BBB> michaelni: and I didn’t say we should rm -fr swscale
[9:06am] <BBB> michaelni: I just said it needs love
[9:06am] <michaelni> yes it does
[9:06am] <BBB> michaelni: so… who will give it that kind of love?
[9:06am] <wm4> michaelni: well it's because you actively resists any radical measures which might actually save swscale
[9:06am] <BBB> we need some maintainers stepping up the love game, otherwise it will go down the drain
[9:07am] <BBB> michaelni: did you ever read ? (ignore the politics for now, focus on the technical complaints in the document)
[9:07am] <michaelni> i read it long ago
[9:07am] <michaelni> not recently if it changed
[9:08am] <BBB> I don’t think it did
[9:08am] <BBB> so … let’s say (hypothetically) that we wanted to salvage swscale
[9:08am] <BBB> can we totally redesign it? and will you actively help in coding it up?
[9:09am] <michaelni> iam happy if its totally redesigned, thats not a problem at all
[9:10am] <michaelni> i want the main practically used codepathes to stay fast though
[9:10am] <michaelni> a 5% slowdown should not happen
[9:10am] <BBB> well, there’s many dimensions to the total set of problems in swscale… api is one, lack of integration with the rest of ffmpeg is another
[9:10am] <wm4> the first thing you'd have to accept to make progress at all (instead of burning developer time on all these crazy details every time someone dares to take a look at it) is (temporary) deoptimization
[9:11am] <nevcairiel> swscale isnt very fast to begin with
[9:11am] <nevcairiel> all its "fast" optimization are super low quality
[9:11am] <kierank> not true
[9:11am] <BBB> the internals are very … unextendable, which isn’t necessarily a problem for corner case optimizations, but at this point, xyz support is a total hack, whereas if we are moving from to bt2020 support, we really should start thinking about xyz being a central component of the processing chain
[9:12am] <BBB> michaelni: obviously one that’s skipped if it’s not necessary, but swscale as it is right now would be merely one of the steps
[9:12am] <BBB> michaelni: fast paths for direct conversions are obviously ok, but it’s mostly very static and unpredictable right now
[9:12am] » michaelni  need to awnser apoe call iam bac in 5min
[9:12am] <BBB> michaelni: I still believe we go through a full yuv conversion and scaling path if we convert gbrp14 to gbrp12
[9:12am] <BBB> ok
[9:13am] <kierank> anotherother big question is whether to include the pseudo pixel formats
[9:14am] <kierank> r210, v210 etc
[9:14am] <BBB> nevcairiel: most optimizations are from 1985, so they’re mmx (or sometimes mmx2) at best
[9:14am] <BBB> there’s little bits of sse2/avx2 added by various people a few years ago
[9:14am] <BBB> and now finally some arm
[9:14am] <BBB> but it’s mostly still a mess
[9:14am] <ubitux> optimisation behaves strangely when constraints don't fit btw
[9:15am] <ubitux> like, it seems x86 yuv2rgb will just ignore the last pixels if linesizes are not enough
[9:15am] <kierank> the problem I hacked around is when doing chroma conversions swscale does a full luma multiply and shift which ends up being a NOOP
[9:15am] <ubitux> it would be great to have some kind of automatic c-fallback to fill the padding
[9:15am] <kierank> but burns an entire cpu core
[9:16am] <ubitux> and make sure all simd received aligned & padded stuff
[9:16am] <ubitux> btw, anyone for my vscale question earlier? in yuv2planeX, offset can only be 0 or 3?
[9:17am] <ubitux> (x86 simd seems to assume so, and it looks like it's the case in practice in my tests so far)
[9:18am] <BBB> ubitux: let me check
[9:20am] <ubitux> only one where it could not be 3 or 0 would be where it is use_mmx_vfilter ? (c->uv_offx2 >> 1) : 3
[9:20am] <BBB> oh god the dithering
[9:20am] <ubitux> yes, dithering should be optional btw, it's badly done currently in the options
[9:24am] <BBB> ubitux: I don’t know what uv_offx2 does ...
[9:24am] <ubitux> libswscale/utils.c:    c->uv_offx2 = dst_stride + 16;
[9:24am] <durandal_1707> pile of hacks upon hacks let it rest in peace
[9:25am] <ubitux> durandal_1707: please be more constructive
[9:25am] <ubitux> we're not going to depend on an external lib for converts
[9:26am] <BBB> where is use_mmx_vfilter set?
[9:26am] <ubitux> libswscale/x86/swscale_template.c:                c->use_mmx_vfilter= 1;
[9:27am] <ubitux> so, it can only be ≠ (0, 3) when this special yuv2yuvX function is used
[9:28am] <ubitux> so it shouldn't matter
[9:28am] <ubitux> i was surprised the other day when it was triggering this yuv2yuvX instead of the ff_yuv2planeX_{mmx,see,...}
[9:28am] <ubitux> (which also exists)
[9:29am] <ubitux> while it is triggering yuv2planeX_8_c when running on arm
[9:29am] <ubitux> (or basically with no asm)
[9:29am] <BBB> it seems we don’t set the sse2 versions if this was already initialized
[9:29am] <BBB> that looks like a bug
[9:30am] <BBB> ../libswscale/x86/swscale.c:    case 8: if ((condition_8bit) && !c->use_mmx_vfilter) vscalefn = ff_yuv2planeX_8_  ## opt; break; \
[9:30am] <BBB> we should just force c->use_mmx_vfilter to 0?
[9:30am] <BBB> anyway
[9:30am] <BBB> yes you’re right it’s only 0 and 3 then, for your use case
[9:34am] <BBB> ubitux: re simd behaving stragenyl, that’s certainly something I’d like to discuss with michaelni later on in this conversation
[9:34am] <BBB> ubitux: but let’s start by keeping things simple
[9:34am] <durandal_1707> ubitux: I'm still waiting for nlmeans
[9:34am] <ubitux> yeah, i should get done with this one
[9:36am] <ubitux> BBB: btw, i agree with michaelni about having the colorspace convert in sws
[9:36am] <ubitux> since it's already there...
[9:37am] <BBB> what is already there?
[9:37am] <michaelni> BBB all scaling goes through yuv, unscaled rgb converts might be without yuv
[9:38am] <ubitux> BBB: SWS_CS_* 
[9:38am] <michaelni> theres some avx in swscale
[9:38am] <BBB> let’s do one dimension of this problem set at a time
[9:38am] <BBB> let’s start with api: swscale api is from the 80s. it needs to die and the approach in the avscale blueprint isn’t so bad. backwards compatibility to hell, no part of the public api (except maybe the version macros) should stay
[9:39am] <BBB> do you agree?
[9:39am] <ubitux> i think wm4 had a patch for that?
[9:39am] <ubitux> unless you're not talking about the AVFrame wrapping?
[9:39am] <BBB> the api should integrate with the rest of ffmpeg (e.g. use AVOptions, like swresample does; use AVFrame; etc.), and no context caching, “sws_convertPalette8ToPacked24” or vector api
[9:39am] <michaelni> BBB the old API interface should be provided by using wrapers around the new
[9:40am] <BBB> no, the old api should just die
[9:40am] <ubitux> yeah i agree with deprecating the old api
[9:40am] <BBB> it’s useless and nobody wants it except mplayer, which is almost actual proof that it should die
[9:40am] <kierank> no kill the old api
[9:40am] <kierank> it's horrible
[9:40am] <kierank> which implies a new lib
[9:40am] <BBB> michaelni: I think you have pretty much consensus that the old api needs to die here
[9:41am] <michaelni> sure, if thats what people want
[9:41am] <BBB> michaelni: so, that’s dimension 1. now, dimension 2: scaling stages…
[9:41am] <BBB> (I’ll talk simd/platform ops later)
[9:41am] <BBB> so, right now, we have two kind of paths: “direct conversions” like yuv422p_to_yuyv
[9:41am] <BBB> and we have the scaled path, as invoked (iirc) by gbrp14 to gbrp12
[9:42am] <BBB> (fortunately it uses a filter size of 1 so it’s not that bad, but still)
[9:42am] <BBB> we need a more generic approach to “scaled path”
[9:42am] <michaelni> yes
[9:42am] <BBB> I think avscale calls this kernels
[9:42am] <BBB> you can sort of see this in the colorspace thing also, although that’s obviously fairly limited (on purpose)
[9:43am] <BBB> internal format should be dynamic, it can’t be only yuv
[9:43am] <BBB> if input and output is xyz, that should be ok
[9:43am] <BBB> also, to go from rgb to rgb should not involve a yuv conversion, and xyz to rgb shouldn’t either
[9:43am] <michaelni> it was planed since the 80ties that the internal format should be anything just wasnt ever done
[9:43am] <BBB> right, but so this is a component of “major love"
[9:44am] <BBB> we’ve been saying for years that stuff needs to be done and nobody does it
[9:44am] <BBB> it may well be that a new approach will be so fundamentally different that we need new implementations of every simd function, and that pretty much all existing code will eventually be rewritten
[9:44am] <michaelni> also you(plural) should talk with pedro arthur he imlemented a more generic filter path last year
[9:45am] <michaelni> and that code is enabled and works but i think it itself needs cleanup and documentation
[9:45am] <BBB> a kernel style approach has issues btw, as can be seen in the avscale document
[9:45am] <ubitux> BBB: small parenthesis about xyz: it appears many people use lut3d to do the convert
[9:45am] <ubitux> (vf lut3d does that)
[9:46am] <ubitux> (i had a local patch to support xyz as input, needs to upstream it, but i have some format negociation issues)
[9:46am] <BBB> I actually wrote avscale years ago, long before libav started caring about it, I tihnk I talked about it on irc back then
[9:46am] » michaelni  maybe wasnt on iRC back then 
[9:46am] <BBB> the issue with kernels is that you tend to want to design it in such a way (naturally) that the number of memory intermediates goes up a lot compared to swscale
[9:47am] <BBB> so this needs to be designed by ery knowledgable people who understand performance
[9:47am] <BBB> and will end up with a lot of functions, some near-duplicates, to do the same thing, if you want the same performance as you have with swscale
[9:47am] <BBB> (in all cases)
[9:47am] <michaelni> BBB i think you absolutely must talk with pedro arthur about this
[9:47am] <BBB> so, lots of code, or slightly slower
[9:48am] <michaelni> as pedro did alot of work on swscale filter stuff and that is i think very similar to kernels
[9:48am] <BBB> (and you can see why I eventually threw my avscale out of the window, I just didn’t care enough to write so much code)
[9:48am] <BBB> so, simd is largely a game about assumptions, right? as kierank sayd, we have tons of simd that doesn’t work right on odd image sizes or non-multiple-of-4-s etc.
[9:49am] <BBB> a nice thing of a new api is that we can document assumptions
[9:49am] <BBB> e.g. “buffers should have at least this much padding"
[9:49am] <BBB> and then just require that to be the case
[9:49am] <BBB> and then we can just overwrite in our simd instead of underconvert
[9:49am] <BBB> (or, worst of all, revert to C, which is just so ugly that I don’t know what to say)
[9:50am] <BBB> (swscale does a combination of all 3)
[9:50am] <michaelni> the lack of SIMD assumtation docs is a real problem, yes
[9:50am] <michaelni> yes
[9:50am] <BBB> and about pedro… I can look back at what he did, but you see my point about students, right? this is the job of a maintiner, not a student
[9:51am] <BBB> if we want pedro to help discussing this, he should be here, or we should get him here. is he on irc?
[9:51am] <michaelni> pedro is no student anymore in the gsoc sense 
[9:51am] <BBB> is he ready to be maintainer?
[9:51am] <michaelni> i suspect he isnt on irc but yes he should join ideally
[9:51am] <BBB> in the short term, none of this will happen, which is why I wrote the filter
[9:52am] <BBB> it fixes my immediate problem and will allow me to move forward with actual things I’d like to do
[9:52am] <michaelni> i think he is ready but maybe he lacks confidence or maybe time i dont know
[9:52am] <michaelni> pedro agreed to co/backup mentor a swscale task if we had a student (which we dont)
[9:56am] <BBB> michaelni: is pedro’s filter chaining work documented anywhere? or where do I find it?
[9:58am] <BBB> ubitux: for video, I doubt people use xyz much, it’s primarily an image thing to actually ever want the data in xyz… but xyz is an intermediate useful to convert between various rgb chromaticity primaries (“colorspaces”)
[9:59am] <michaelni> BBB iam not sure its documented all that much, pedro last summer ended up doing more than i wanted and possibly good docs where one thing that he forgot
[10:00am] <BBB> where do I look for the code?
[10:00am] <BBB> I should make a wiki page out of this discussion so we can refer to it in future continuations
[10:01am] <michaelni> git log -p --author Arthur
[10:02am] <durandal_1707> I can help with coding if blueprints are there
[10:27am] <ubitux> BBB: about assembly, the unscaled path is actually very complex to handle
[10:27am] <ubitux> first, it has way too much arguments
[10:28am] <ubitux> 2nd, to workaround this, since it's using inline asm in x86, it passes the sws context
[10:28am] <ubitux> in which the fields are duplicated and directly readable
[10:28am] <BBB> I think we should add that, I hadn’t complained that much about the unscaled optimizations yet, indeed
[10:28am] <ubitux> (by offsetting the context with some)
[10:28am] <BBB> but some of that code is hideous also
[10:29am] <ubitux> (with some +macro)
[10:29am] <BBB> yeah I remember
[10:29am] <BBB> the fast bilinear scaler is also very “"interesting""
[10:29am] <BBB> (it’s fast, I admit)
[10:30am] <ubitux> BBB: also slicing
[10:30am] <BBB> I have surprisingly few opinions on slicing
[10:30am] <BBB> some people seem to hate it, I don’t really care
[10:30am] <ubitux> do you know how it's done currently?
[10:30am] <BBB> roughly, yes
[10:31am] <BBB> I wonder if it isn’t easier if slicing was done internally and externally it just had a “-threads” parameter that users can set
[10:31am] <ubitux> it's not threadable slicing, it's just for the destination
[10:31am] <BBB> I always thought it was for threading
[10:31am] <ubitux> no
[10:32am] <BBB> :-P
[10:32am] <BBB> well good thing I had no opinion on it I guess
[10:32am] <ubitux> apparently it was about some cache locality of some sort
[10:32am] <michaelni> btw as slicing is mentioned, theres the use case where the whole image doesnt fit in memory (gimp does that) this may be worth a thought in case of a redesign
[10:33am] <ubitux> michaelni: doesn't work by squares?
[10:33am] <ubitux> only slices?
[10:34am] <ubitux> photoshop works by big blocks (call it tiles/squares/bigblocks/whatever)
[10:34am] <michaelni> to clarify we dont support that currently, gimp uses its own scaler
[10:34am] <michaelni> i just saw that gimp caching code a long time ago (which used squares)
[10:34am] <michaelni> or rectangles dont remember
[10:36am] <michaelni> also some kind of square / slice /tiling for multitreading is probably "important"
[10:37am] <ubitux> yeah
[10:37am] <ubitux> BBB: about the poor api, let's talk about the options...
[10:38am] <ubitux> dithering configuration is broken
[10:38am] <BBB> hm, dithering
[10:38am] <ubitux> like, you haven't a dither=none
[10:38am] <ubitux> it's just enabled, sometimes
[10:38am] <ubitux> (0=auto)
[10:38am] <BBB> yeah, nobody knows when it’s enabled
[10:38am] <michaelni> btw, iam not sure though how important/usefull such non memory squares are ...
[10:38am] <ubitux> also the scaler selection mixed within flags
[10:39am] <BBB> the avscale blueprint is pretty good at explaining that, the filter selection should be an enum
[10:39am] <BBB> (AVOption)
[10:39am] <BBB> and the rest of the flags can just be bools or whatevers as separate options
[10:39am] <BBB> I Guess pre-AVOption extending api was hard so this made sense, but with AVOption we realy don’y need it anymore
[10:39am] <ubitux> small nit: wth are param0 and param1
[10:40am] <BBB> hahaha right
[10:40am] <BBB> they are options for some filters
[10:40am] <BBB> wasn’t it alpha/beta for one of the larger filters?
[10:40am] <ubitux> error_diffusion belongs to dithering btw
[10:40am] <ubitux> it's currently a sws falgs
[10:40am] <ubitux> what are the implication of accurate_rnd too
[10:40am] » michaelni  doesnt remember exactly what param0 and 1 did for each filter but yes they where filter params
[10:41am] <BBB> For SWS_BICUBIC param[0] and [1] tune the shape of the basis function, param[0] tunes f(1) and param[1] f´(1) | For SWS_GAUSS param[0] tunes the exponent and thus cutoff frequency | For SWS_LANCZOS param[0] tunes the width of the window function
[10:41am] <ubitux> no other comment so far
[10:41am] <michaelni> these all should be AVOptions
[10:42am] <BBB> there’s also SWS_FULL_CHR_H_INT/INP
[10:42am] <BBB> in fact, I just noticed SWS_DIRECT_BGR, ...
[10:43am] <BBB> I believe int/inp had to do with non-420p support
[10:43am] <BBB> (I mean, practically speaking)
[10:43am] <BBB> doesn’t accurate_rnd increase precision of some internal codepath/something?
[10:43am] <ubitux> small note: there are a few dithering algorithms in vf paletteuse filter
[10:43am] <michaelni> BBB, yes accuarte_rnd uses more accurate code
[10:44am] <michaelni> IIRC no pmulhw
[10:44am] <michaelni> which would loose the lsbs
[10:44am] <ubitux> it has different meaning depending on the codepath
[10:44am] <BBB> I think it’s totally fine to basically eliminate all these flags and go back to “let’s just make it do the right thing”
[10:44am] <ubitux> it's not well defined
[10:44am] <BBB> I can see how pmulh(u)w was critically important for performance in the mplayer era
[10:45am] <BBB> I think it’s totally fair to say that with axv2, it is not all that relevant anymore
[10:45am] <ubitux> it's used between 32 vs 16 in some rgb code iirc
[10:45am] <BBB> I also wonder if half of the filters should be deleted
[10:45am] <BBB> like sinc, gauss
[10:46am] <BBB> maybe even fast-bilinear
[10:46am] <BBB> (that would clean up the code so much)
[10:46am] <michaelni> the filters like sinc gauss should have nearly no complexity as its just different numbers
[10:46am] <BBB> it’s user complexity
[10:46am] <BBB> we should expose the ideal configuration settings to our user
[10:47am] <BBB> when to use spline or lanczos: when upscaling
[10:47am] <BBB> (and caring about quality)
[10:47am] <BBB> when to use bicublin: when speed is critical and you’re downscaling
[10:47am] <BBB> that’s very helpful to end users
[10:47am] <BBB> when to use gaussian?
[10:47am] <BBB> I don’t know… I don’t think anyone knows
[10:47am] <michaelni> with scalig different people will want different options and some people just like to have the choice
[10:47am] <ubitux> i'd keep the different filters
[10:47am] <ubitux> it's useful to make various visual comparison
[10:48am] <michaelni> sinc is something that some people "know" is best until they try it
[10:48am] <ubitux> :-D 
[10:48am] <BBB> but doesn’t that mean we should remove it?
[10:48am] <BBB> why keep the option there
[10:48am] <ubitux> people will bug you to implement it because it's the perfect filter
[10:48am] <nevcairiel> many of the various filters are just different kernels over the same kind of filter, so preserving them costs you practically nothing
[10:48am] <BBB> do you know how many people thought x264 was the best encoder in the world but they were using it with default ffmpeg parameters (instead of presets)?
[10:49am] <ubitux> so you have it to show them it's shit, or just as a visual demonstration (educative purpose, experiment, ...)
[10:49am] <michaelni> its very important that the defaults are good
[10:49am] <BBB> I guess as long as defaults and documentation is good, I don’t mind
[10:49am] <BBB> but documentation is not good right now 
[10:50am] <michaelni> the docs need some love, i could probably help with that if theres a list of what needs new/better docs
[10:51am] <fritsch> michaelni: "sinc" <- one still learns that in university, that's why
[10:51am] <av500> avscale
[10:52am] <av500> /undo
[10:52am] <Shiz> BBB: alternatively, pseudo-filter names like 'upscale' and 'downscale' that are just aliases for whatever is best
[10:52am] <nevcairiel> a perfect sinc filter is perfect - a windowed sinc is just an approximation 
[10:52am] <BBB> hm, filter presets
[10:53am] <michaelni> fritsch, yes, its true what one learns but sinc results from some axioms and these dont apply that way to images
[10:53am] <ubitux> what's the filter window used in sws? is there such concept?
[10:55am] <fritsch> the raspberry pi people that implemented one from scratch decided to go for a special weighted bicubic filter
[10:55am] <fritsch> cause of implementation details / performance quality
[10:56am] <fritsch> kodi's lanczos3 filter needs quite a bit oomph and for example a hsw gpu is too slow to do 50 fps from 1080 to 4k
[10:57am] <wm4> fritsch: does kodi's scale width and height separately?
[10:58am] <fritsch> pi uses mitchell-natravali iirc, so the default most likely also ffmpeg uses
[10:59am] <fritsch> wm4: i need to look in detail, we use a pseudo separated filter
[10:59am] <fritsch> so most likely no 
[10:59am] <fritsch> but wait a mo
[10:59am] <fritsch> it's from a time where you did not have a float intermediate buffer in the gpu
[10:59am] <fritsch> or where that extension was "patented" by someone
[11:01am] <BBB> so this may just be me, but I tend to think that swscale should be software. I’m all for doing things in hardware, but I don’t know if we should make swscale more complex for that
[11:01am] <BBB> or, rather, I wouldn’t know how to do it so it makes no sense for me to design it
[11:01am] <BBB> I don’t even know if the concept makes any sense at all
[11:01am] <fritsch> wm4: it uses a 4x4 convolution shader at the end
[11:01am] <fritsch> wm4: so "no" to your question
[11:02am] <fritsch> BBB: shaders are really, really mighty especially for convolution
[11:02am] <fritsch> i don't see a point doing that on the cpu
[11:02am] <BBB> I know shaders, I love them
[11:03am] <BBB> but my point is more about “do you want to use the swscale api if you’re going to scale stuff in hardware?”
[11:03am] <nevcairiel> shaders work fine if you already have the image on the gpu
[11:03am] <nevcairiel> if you dont, its a lot of overhead and potentially not worth it
[11:03am] <BBB> I’m not saying you shouldn’t scale in hw; you should, totally!
[11:03am] <BBB> I’m just wondering if swscale is the ideal place to serve as an intermediate layer
[11:03am] <wm4> fritsch: separating them makes it quite a bit faster
[11:03am] <fritsch> jep
[11:04am] <fritsch> but you need a float intermediate buffer
[11:04am] <fritsch> to do so
[11:04am] <fritsch> which we did not have (on all gpus) at this time
[11:04am] <fritsch> iirc gwenole also implemented his lanczos3 in libva without separated kernels
[11:04am] <wm4> fritsch: no you don't
[11:05am] <fritsch> wm4: then you loose information
[11:05am] <wm4> nonsense
[11:05am] <fritsch> nonsense 
[11:05am] <fritsch> come one doing a float multiplication and storing in non float intermediate buffer
[11:05am] <fritsch> drives the separation nuts
[11:06am] <wm4> a 16 bit fixed point buffer preserves more information than a 16 bit float buffer
[11:07am] <fritsch> wm4: then you need to scale appropriately twice
[11:07am] <fritsch> e.g. scale the filter weights
[11:07am] <fritsch> and inverse at the end
Last modified on Jan 24, 2017, 5:35:32 PM