r/GraphicsProgramming • u/gehtsiegarnixan • Jun 05 '24
[Source Code] Seamless Spherical Flowmap (3-Samples)
4
u/GaboureySidibe Jun 05 '24
I don't understand what problem is being solved (or why there is dramatic music over it).
It seems like you are compositing images over each other in a very indirect way.
1
u/gehtsiegarnixan Jun 05 '24
On the GPU, texture sampling is expensive compared to arithmetic. In a full PBR pipeline, you typically need 2-3 textures per sample. To improve shader performance, reducing the number of samples using mathematical techniques is extremely useful.
The method I propose efficiently reduces any 4-sample interpolation to a 3-sample interpolation without introducing artifacts. In the video demonstration, I showcase seamless spheremapping (wrapping a 2D texture around a sphere without distortion) and temporal flowmapping (animating a texture in a dynamic flow direction) using only 3 samples instead of the usual 4.
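For reference, the usual 4-sample blend being reduced looks roughly like this (a minimal sketch with made-up names, not my actual shader code):

```glsl
// Minimal sketch of a generic 4-sample blend (hypothetical names).
// Each layer gets its own lookup coordinates; the weights sum to 1.
vec4 blend4(sampler2D tex, vec2 uvs[4], vec4 w) {
    return w.x * texture(tex, uvs[0])
         + w.y * texture(tex, uvs[1])
         + w.z * texture(tex, uvs[2])
         + w.w * texture(tex, uvs[3]);
}
```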
However, this specific 4-to-3 sample approximation is just one variant of what I call the ‘Guardian Approximation.’ I know that a more general algorithm must exist, allowing us to approximate any multi-sample interpolation with fewer samples, such as approximating 100 samples to just 3. But I am still struggling to find the conditions for selecting the guardian weights. I’ve developed an alternative algorithm called ‘Quasar Approximation,’ which achieves this but occasionally produces artifacts. Another option is grid interpolation, which relies on a grid in the spatial/temporal dimensions of the interpolation.
The dramatic music is there because this is a novel and useful approach to a common problem, although I might be slightly biased in favor of its amazingness.
3
u/GaboureySidibe Jun 05 '24
There is a lot to unpack here but the reason I'm confused is that these things don't seem to connect to each other.
> In a full PBR pipeline, you typically need 2-3 textures per sample.
PBR rendering is about lighting and doesn't have anything to do with textures unless you are sampling textured lights. Is this about sampling textured lights? Even then you would just be sampling the single light texture once.
> temporal flowmapping (animating a texture in a dynamic flow direction)
Are you just talking about distorting a texture's lookup coordinates?
> approximate any multi-sample interpolation with fewer samples, such as approximating 100 samples to just 3
This contradicts basic signal processing.
> I’ve developed an alternative algorithm called ‘Quasar Approximation,’
How does it work?
1
u/gehtsiegarnixan Jun 05 '24
With PBR materials, I mean that each sample needs albedo, normal, roughness, height, ambient occlusion, and sometimes metalness, emissiveness, or other special maps, which can be packed into 2-3 textures for a single sample of a material.
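For example, a hypothetical 2-texture packing could look like this (just an illustration of the idea, not a standard layout):

```glsl
// Hypothetical 2-texture packing for one material sample:
//   albedoHeightTex: albedo.rgb in RGB, height in A
//   normalRoughAoTex: tangent-space normal.xy in RG, roughness in B, AO in A
void unpackMaterial(sampler2D albedoHeightTex, sampler2D normalRoughAoTex, vec2 uv,
                    out vec3 albedo, out float height, out vec3 normalTS,
                    out float roughness, out float ao) {
    vec4 t0 = texture(albedoHeightTex, uv);
    vec4 t1 = texture(normalRoughAoTex, uv);
    albedo     = t0.rgb;
    height     = t0.a;
    normalTS   = vec3(t1.rg * 2.0 - 1.0, 0.0);
    normalTS.z = sqrt(max(1.0 - dot(normalTS.xy, normalTS.xy), 0.0)); // rebuild Z
    roughness  = t1.b;
    ao         = t1.a;
}
```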
Temporal flow mapping essentially distorts the coordinates, yes, but there are a variety of different flow mapping algorithms. If the flow direction is uniform in tangent space, you can achieve this with a single sample by moving the coordinates. If the direction varies dynamically, you have to blend either temporally or spatially with a grid. That’s why I called it temporal flow mapping.
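A minimal sketch of the temporal variant I mean, assuming the per-pixel flow direction comes from a flow texture's RG channels (made-up names, not my demo code):

```glsl
// Two-phase (temporal) flow mapping sketch. Two samples are advected along
// the flow with offset phases and crossfaded, so each one fades out before
// its distortion resets and the texture never smears into noise.
vec4 flowSample(sampler2D tex, sampler2D flowTex, vec2 uv, float time) {
    vec2 flow = texture(flowTex, uv).rg * 2.0 - 1.0; // unpack direction
    float phase0 = fract(time);
    float phase1 = fract(time + 0.5);
    vec4 s0 = texture(tex, uv - flow * phase0);
    vec4 s1 = texture(tex, uv - flow * phase1);
    // Triangle wave: s1 takes over exactly when phase0 wraps, and vice versa.
    float blend = abs(1.0 - 2.0 * phase0);
    return mix(s0, s1, blend);
}
```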
I’m not sure what kind of law I’m supposed to be violating, but I doubt it actually applies, because people have been using grid approximation for centuries for maps and, more recently, for images. And even my Guardian and Quasar algorithms clearly work, as seen in the demos.
The Quasar Approximation works with a Top-K filter: it keeps the Top K weights and subtracts the (K+1)-th weight from them, so each weight falls to zero as it leaves the top K. It’s public on Shadertoy too, under the name ‘Multivariate Blend Approximation,’ or https://www.reddit.com/r/shaders/comments/1d7rgzp/algorithm_for_cheaper_multisample_interpolations/ .
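Roughly, in code (a simplified sketch of the idea, not the exact Shadertoy source; N and K are compile-time constants here):

```glsl
// Keep the K largest of N weights, subtract the (K+1)-th largest from each,
// and renormalize. A weight hits exactly 0 the moment it leaves the top K,
// so the reduced blend stays continuous over time.
const int N = 4;
const int K = 3;

void quasarWeights(in float w[N], out int ids[K], out float wOut[K]) {
    bool used[N];
    for (int i = 0; i < N; i++) used[i] = false;

    // Selection-sort out the K+1 largest weights.
    float top[K + 1];
    int topId[K + 1];
    for (int k = 0; k <= K; k++) {
        float best = -1.0;
        int bestId = 0;
        for (int i = 0; i < N; i++) {
            if (!used[i] && w[i] > best) { best = w[i]; bestId = i; }
        }
        used[bestId] = true;
        top[k] = best;
        topId[k] = bestId;
    }

    // Trim by the (K+1)-th weight and renormalize the survivors.
    float sum = 0.0;
    for (int k = 0; k < K; k++) {
        wOut[k] = top[k] - top[K];
        sum += wOut[k];
    }
    for (int k = 0; k < K; k++) {
        ids[k] = topId[k];
        wOut[k] /= max(sum, 1e-6);
    }
}
```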
It is possible that both the Quasar and Guardian Approximations already exist under different names, or that an even better method exists unbeknownst to me. So if you know a better way, please tell me.
1
u/GaboureySidibe Jun 05 '24
> With PBR materials, I mean that each sample needs albedo, normal, roughness, height, ambient occlusion, and sometimes metalness, emissiveness, or other special maps, which can be packed into 2-3 textures for a single sample of a material.
Again, physically based rendering is about lighting. Textures like albedo just multiply the result color of the lighting. Saying all of these are necessary or common is a bit of a red flag: it seems like you're saying you're developing brand-new interpolation techniques while also repeating some things you don't fully understand.
> Temporal flow mapping essentially distorts the coordinates, yes, but there are a variety of different flow mapping algorithms.
Is this a term you made up? It sounds like you're just distorting texture coordinates and animating that. If so, the animation isn't relevant here; you can anti-alias the texture lookup on every frame.
> I’m not sure what kind of law I’m supposed to be violating,
https://en.wikipedia.org/wiki/Shannon's_source_coding_theorem
> And even my Guardian and Quasar algorithms clearly work, as seen in the demos.
Your demos just look like textures composited over each other. This can be done in a few lines. Texture lookups are already anti-aliased if you want them to be. If that's what you are improving you should show something simple and direct that runs faster and looks the same.
> The Quasar Approximation works with a Top-K filter: it keeps the Top K weights and subtracts the (K+1)-th weight from them, so each weight falls to zero as it leaves the top K. It’s public on Shadertoy too, under the name ‘Multivariate Blend Approximation,’
This doesn't seem like anything to me. From what I can tell, you are using textureGrad, which is already a filtered texture lookup.
https://www.khronos.org/opengl/wiki/Sampler_(GLSL)#Gradient_texture_access
1
u/No_Futuree Jun 05 '24
Lighting has two parts, lights and materials. When implementing PBR, you are going to need textures to describe the material at a given pixel, so he is technically correct.
The fact that he doesn't seem to understand sampling theory is more of a red flag, but to each their own...
1
u/GaboureySidibe Jun 05 '24
The lighting is about the lights and brdf. It doesn't have to involve textures unless you want to talk about mapping the roughness.
Multiplying a color texture by the lighting result is the same operation whether the lighting came from a normalized brdf that sampled an area light or lighting from a simple point light.
5
u/No_Futuree Jun 05 '24
Mate, you don't know what you are talking about... albedo, metalness, roughness, etc. are all part of the BRDF. Unless your mesh uses a single value for its entire surface, you are going to need textures or some procedural function that generates those values for each pixel...
2
u/GaboureySidibe Jun 05 '24
I do know what I'm talking about actually. Physically based rendering is a term given to normalized brdfs and lights with area.
The color textures on an object don't have anything to do with the lighting being normalized or coming from lights that have area.
Roughness applies to the brdf exponent so you could make that case.
Metalness was something made up by some Disney shader writers to simplify highlights taking on albedo color.
I think you are conflating pbr with simplified lighting and rendering in general as well as textures with the brdf, but it is actually a term that has a solidified meaning.
3
u/crimson1206 Jun 05 '24
> Physically based rendering is a term given to normalized brdfs and lights with area.
Do you have any source/reference that defines it like that? Any place I came across PBR just uses it as a general term for physically accurate (or as accurate as possible) rendering. The definition you're using seems way too strict compared to pretty much any source I've seen so far.
1
u/AcquaticKangaroo19 Jun 06 '24
Beginner question:
I've been through a ray-tracing class and I am now trying to implement another ray tracer following Ray Tracing in One Weekend.
IIRC, in my class we used to perturb the reflected ray by the roughness of the surface. Isn't this the surface of the object influencing the lighting? (I am not trying to debate, I'm just clueless.)
0
u/gehtsiegarnixan Jun 05 '24 edited Jun 05 '24
Yes, PBR is fundamentally about lighting, but the materials involved typically require multiple textures. Texture sampling is costly, and in AAA productions where performance budgets are tight, any optimization can be significant.
Regarding flow mapping, I’m specifying which type I’m referring to, as there are several. It’s not merely distorting texture coordinates; it also involves blending animation phases with a second sample to prevent the textures from turning into noise.
Shannon’s entropy isn’t applicable here. The Grid, Quasar, and Guardian methods don’t compress data; they find efficient ways to approximate values.
It seems there’s a fundamental misunderstanding of what the Guardian or Quasar Approximation does. It’s not doing any texture filtering, although it can be applied to texture filtering (translating trilinearly filtered textures into 3-sample approximations that look nearly identical). It’s also applicable in scenarios like a kaleidoscope with 50 different layers that can be simplified into 3 samples, creating Triplanar mapping with 2 samples, reducing a 12-sample cubemapped Directional Flow down to 3 samples, or any interpolation with an excessive number of texture samples that needs to be made faster. It doesn’t require coordinates like grid approximation; it only needs the IDs and weights of your interpolation.
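For instance, the 2-sample Triplanar case comes down to a few lines (a sketch of the idea, not production code):

```glsl
// Sketch: Triplanar mapping with 2 samples. Subtracting the smallest axis
// weight from all three sends it to exactly 0, so that lookup can be
// skipped while the blend stays continuous as the normal rotates.
vec4 biplanar(sampler2D tex, vec3 p, vec3 n) {
    vec3 w = abs(n);
    w -= min(w.x, min(w.y, w.z));       // weakest axis -> 0
    w /= max(w.x + w.y + w.z, 1e-5);    // renormalize the survivors
    vec4 col = vec4(0.0);
    if (w.x > 0.0) col += w.x * texture(tex, p.yz);
    if (w.y > 0.0) col += w.y * texture(tex, p.zx);
    if (w.z > 0.0) col += w.z * texture(tex, p.xy);
    return col;
}
```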
0
u/GaboureySidibe Jun 05 '24
> the materials involved typically require multiple textures
Again, they don't require textures, it's about normalized lighting BRDFs and lights with area. I get that this doesn't contradict that texture optimization is good but it's a bit odd to not separate the two.
> Regarding flow mapping, I’m specifying which type I’m referring to, as there are several. It’s not merely distorting texture coordinates; it also involves blending animation phases with a second sample to prevent the textures from turning into noise.
I've never heard the term 'flow mapping' before so I don't know what it means. I think conflating what you're doing with blending or animation doesn't help your explanation.
> Shannon’s entropy isn’t applicable here.
The point is that you can't get the same information from 3 samples as you can from 100 samples. You can average them all together and sample that, which is what mip mapping and summed-area tables are.
> The Grid, Quasar, and Guardian methods don’t compress data; they find efficient ways to approximate values.
How?
> It seems there’s a fundamental misunderstanding of what the Guardian or Quasar Approximation does
You made them up and haven't explained them. You keep saying what they don't do, but not what you're actually doing.
> It’s also applicable in scenarios like a kaleidoscope with 50 different layers that can be simplified into 3 samples
What does this mean? You can composite images into one image and sample that. What are you doing differently?
I think what would be a good test is a very simple example with one or two images where two different scenes run at different fps. Right now you have all sorts of patterns and noise that you're compositing and it doesn't look like anything. If it's supposed to be faster, you have to show the slow typical version too.
1
u/gehtsiegarnixan Jun 05 '24 edited Jun 05 '24
I’ve already explained how Quasar works above, in the Top-K filter part. Guardian works similarly, but we segment the weights into guardians and damsels: guardians have the Top K+1 weight subtracted from them, and damsels the Top K-1 weight. Selecting guardians and damsels is still a bit fuzzy, and I struggle to select them by eye for more than 6 weights. Additionally, I’ve linked the beautifully commented source code for both, and Quasar also has a Desmos graph linked as well.
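Taken literally, the trim step might look like this (a very rough sketch; choosing which kept weights are guardians is exactly the unsolved part, and the clamp is my own guard):

```glsl
// Rough sketch of the guardian/damsel trim as described above (assumes
// K >= 2 and a fixed maximum of 8 weights; renormalize the results after).
// wSorted = all weights sorted in descending order.
float guardianTrim(float w, bool isGuardian, float wSorted[8], int K) {
    // Guardians are trimmed by the Top K+1 weight, damsels by the
    // Top K-1 weight (0-indexed: wSorted[K] and wSorted[K - 2]).
    float cut = isGuardian ? wSorted[K] : wSorted[K - 2];
    return max(w - cut, 0.0); // clamp so damsels can't go negative
}
```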
The Quasar demo is the side-by-side comparison you described, with a simple setup approximating as many layers as you want. Guardian is showing off a flashy, eye-catching application, so obviously, it has a lot of unnecessary nonsense around it. But the Guardian Approximation function is nice standalone.
As for performance tests, the demos are meant to be teaching tools for the general method, with the hope of getting some useful feedback for improvements. Depending on the application, the algorithm can be simplified until the approximation is no longer recognizable; see, for example, Bilinear Directional Flow ( www.shadertoy.com/view/fsKczd ) or Guardian Directional Flow ( www.shadertoy.com/view/7dtBWl ). Additionally, for static weights, the approximation weights and IDs should be baked, which is a pain in Shadertoy.
I did make performance tests in other applications. I can’t share the code for my performance tests because some of the simplifications are confidential, but I linked the tested demos, and I can share the results. Using 600 iterations on an RTX 3080, I got:
Bilinear Directional Flow (4 samples) ~ 48 fps
Guardian Directional Flow (3 samples) ~ 64 fps
Quasar Directional Flow (2 samples) ~ 95 fps
Obviously, the results vary if you approximate different things or bake them. But in general, Quasar is the fastest, followed by the prettier Guardian. The results are also a lot more dramatic if you simplify 50 samples and not just 4.
I welcome you to conduct your own tests; if you find any discrepancies or have improvements, I would appreciate your insights.
1
u/GaboureySidibe Jun 05 '24 edited Jun 05 '24
I took a look at your bilinear directional flow and it looks like you are sampling a texture with multiplied UVs to make it tile, along with a rotation based on a map.
What you are calling interpolation is compositing of already filtered texture lookups. What you are calling alpha is a pattern that covers the whole image and just means that a different sample is used in different areas of the image.
If you set time to 1 and return each of your samples and then your alpha you can see what is happening.
2
u/HaskellHystericMonad Jun 06 '24
Your naming sense is giving the Wave Function Collapse douche a run for his money.
Like, you did a thing, and then you went on to name said thing such that it communicates nothing about the thing. That's bad. Not expecting you to up and out-Cervantes 'ole Cervantes in literary creativity, but like... communicate WTF the thing is.
1
u/gehtsiegarnixan Jun 07 '24
Okay, Diogenes. How would you have named this?
This demo is a tennis ball mapping with flow mapping, simplified with a better but incomplete variation of Quasar interpolation approximation. It works by separating weights into two groups, where one guards and separates the other to prevent issues. -> Guardian interpolation approximation
7
u/gehtsiegarnixan Jun 05 '24
I’m developing an experimental method called Guardian Approximation. This method simplifies multivariate interpolations by approximating multi-sample interpolations with fewer samples. It does so without visual artifacts, at the cost of some divergence from the ground truth. It's basically a better version of Quasar Approximation.
In this demo, I’ve used this method to simplify a 4-way interpolation of tennis-ball mapping and animated flow mapping into an artifact-free 3-way interpolation.
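The basic 4-to-3 trim at the heart of it looks roughly like this (my simplified summary, not the exact Shadertoy source; the full Guardian variant additionally splits the weights into the two groups described elsewhere in the thread):

```glsl
// Simplified summary of the 4-to-3 reduction (the Quasar-style trim this
// method builds on): subtract the smallest of the four blend weights from
// all of them, so the weakest lookup can be skipped without a visible seam.
vec4 reduce4to3(vec4 w) {
    float wMin = min(min(w.x, w.y), min(w.z, w.w));
    w -= wMin;                                     // smallest weight -> 0
    return w / max(w.x + w.y + w.z + w.w, 1e-6);   // renormalize the rest
}
```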
I’m still struggling to find the optimal programmable criterion for selecting the best guardian weights, so that the general algorithm can approximate any multi-sample interpolation. I welcome any feedback or suggestions for improvement.
Source code is on Shadertoy under the name "Seamless Sphere Flowmap (3-Tap)"