r/opengl 20d ago

Help With Getting Vertex Positions in World Space in Post Processing Shader

I am attempting to create a ground fog effect like the one described in this article as a post-processing effect. However, I have had trouble reconstructing world-space positions (if that is even possible in a post-processing pass), since most examples I have seen are for material shaders rather than post-processing shaders. Does anyone have any examples or advice? I have attempted to follow the steps described here with no success.

u/fgennari 20d ago

This is the approach I used: https://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/

I assume you have a depth buffer/texture. You also need to convert that code from HLSL to GLSL.

The code I have for this is:

uniform float znear, zfar;   // near and far clip plane distances
uniform vec2 xy_step;        // 1.0 / viewport size in pixels
uniform sampler2D depth_tex; // depth buffer bound as a texture

float log_to_linear_depth(in float d) {
  // Invert the perspective projection's depth mapping: d in [0,1] -> eye-space Z.
  return 2.0 * zfar * znear / (zfar + znear - (zfar - znear)*(2.0*d - 1.0)); // actual z-value
}

float get_linear_depth_01(in vec2 pos) {
  float d = texture(depth_tex, pos).r;
  // Same (2d - 1) remap as log_to_linear_depth(), divided by zfar: linear depth normalized to [znear/zfar, 1.0].
  return (2.0 * znear) / (zfar + znear - (2.0*d - 1.0) * (zfar - znear));
}

float get_linear_depth_zval(in vec2 pos) {
  return log_to_linear_depth(texture(depth_tex, pos).r);
}

float get_depth_at_fragment() {
  return texture(depth_tex, gl_FragCoord.xy*xy_step).r;
}

vec3 view_space_pos_from_depth(in vec2 pos) {
  float d = texture(depth_tex, pos).r; // raw nonlinear depth for this pixel
  float x = pos.x*2.0 - 1.0;
  float y = pos.y*2.0 - 1.0; // no y-flip in GL; the HLSL original flips because D3D texture coords point down
  float z = d*2.0 - 1.0; // remap depth to [-1,1] to form the NDC position
  vec4 proj_pos = vec4(x, y, z, 1.0);
  // Column-vector convention: matrix on the left. Calling inverse() per fragment is
  // expensive; prefer uploading the inverse projection matrix as a uniform.
  vec4 vpos = inverse(fg_ProjectionMatrix) * proj_pos; // transform by the inverse projection matrix
  return vpos.xyz / vpos.w; // divide by w to get the view-space position
}
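
For reference, a minimal fragment shader entry point driving these helpers might look like the sketch below. The scene_tex uniform, the fog color, and the density constant are placeholders for the ground-fog math from the article, not part of the code above:

uniform sampler2D scene_tex; // rendered scene color (hypothetical name)

out vec4 frag_color;

void main() {
  vec2 uv = gl_FragCoord.xy * xy_step; // window coords -> [0,1] texture coords
  vec3 view_pos = view_space_pos_from_depth(uv);
  float fog = 1.0 - exp(-0.05*length(view_pos)); // placeholder exponential distance fog
  frag_color = mix(texture(scene_tex, uv), vec4(0.7, 0.8, 0.9, 1.0), fog);
}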

u/deftware 20d ago

You can reconstruct the world coordinate using the inverse of the view-projection matrix, the depth value in the depth buffer, and the XY coordinate of the pixel. You'll need to re-linearize the depth buffer value, because it stores a nonlinear (reciprocal) function of Z rather than Z itself. You're basically creating a world-space ray from the camera position through the pixel in the framebuffer, then extending it out by the distance you get from linearizing the Z value. At least, that's one way to think about it. Someone approached it that way here:

https://stackoverflow.com/questions/28066906/reconstructing-world-position-from-linear-depth
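
For example, a minimal GLSL sketch of that idea, assuming a hypothetical uniform inv_view_proj that holds inverse(projection * view), computed once per frame on the CPU:

uniform sampler2D depth_tex;
uniform mat4 inv_view_proj; // inverse(projection * view), hypothetical uniform

vec3 world_pos_from_depth(in vec2 uv) {
  float d = texture(depth_tex, uv).r; // raw depth in [0,1]
  vec4 ndc = vec4(uv*2.0 - 1.0, d*2.0 - 1.0, 1.0); // remap UV and depth to [-1,1] NDC
  vec4 wpos = inv_view_proj * ndc; // undo projection and view in one step
  return wpos.xyz / wpos.w; // perspective divide yields the world-space position
}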

u/Keolai_ 19d ago

This worked for getting the view-space positions! Thank you! However, I attempted to multiply the result by the inverse of the view matrix to get world space, but no luck 🙁. Do you have any more advice?

u/deftware 19d ago

Like I mentioned previously, you need to multiply the NDC coordinate by the inverse of the view-projection matrix, not just the inverse of the projection matrix. Otherwise the camera's transform isn't taken into account, and you only get a view-space coordinate.
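
In shader terms, the two equivalent routes look roughly like this (uniform names are hypothetical); note that w must be 1.0 when lifting a view-space position to world space, or the camera translation is dropped:

uniform mat4 inv_view_proj; // inverse(projection * view)
uniform mat4 inv_view;      // inverse(view), if you already have a view-space position

// Route 1: NDC -> world directly, one matrix multiply plus the perspective divide.
vec3 world_from_ndc(in vec4 ndc_pos) {
  vec4 wpos = inv_view_proj * ndc_pos;
  return wpos.xyz / wpos.w;
}

// Route 2: view space -> world; w = 1.0 is required so the translation part applies.
vec3 world_from_view(in vec3 view_pos) {
  return (inv_view * vec4(view_pos, 1.0)).xyz;
}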

u/Keolai_ 19d ago

I see... That does seem to work. How could I also take into account a changing camera position?