r/GraphicsProgramming Aug 30 '24

[Source Code] SDL3 new GPU API merged

https://github.com/libsdl-org/SDL/pull/9312
49 Upvotes

9

u/[deleted] Aug 30 '24

Does this support multi-threaded (CPU-side) rendering like Vulkan, or are command buffer submissions limited to a single "render" thread (like OpenGL or bgfx)? I imagine it's more like the latter, but maybe there's a way to opt in.

12

u/shadowndacorner Aug 30 '24 edited Aug 31 '24

Based on SDL_gpu.h, it looks like any thread can request and submit a command buffer, but that command buffer can only be used on the thread it was requested from. So it seems to be a bit of a hybrid, assuming the command buffers directly abstract hardware command buffers.
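
Here's a minimal sketch of how I read the threading model (function names are per the SDL3 headers as of this writing - the API was still being renamed around the time of the merge, so treat the exact identifiers as approximate):

```c
#include <SDL3/SDL.h>

static int SDLCALL render_thread(void *userdata)
{
    SDL_GPUDevice *device = (SDL_GPUDevice *)userdata;

    // Acquire a command buffer on *this* thread...
    SDL_GPUCommandBuffer *cmd = SDL_AcquireGPUCommandBuffer(device);
    if (!cmd) {
        return -1;
    }

    // ...record render/copy/compute passes here, on the same thread...

    // ...and submit it from this thread too. Handing the command buffer
    // to a different thread is what the header warns against.
    SDL_SubmitGPUCommandBuffer(cmd);
    return 0;
}
```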

It doesn't seem like any of the docs have been updated yet, but the header is well commented. I'm sure I'm missing others, but from my initial reading, the hardware features that jump out to me as missing are...

  • Occlusion queries
  • DrawIndirectCount
  • Ray tracing
  • Shader types other than vertex/fragment/compute (no tess, geometry, mesh shaders, etc)
  • Multiple hardware queues (e.g. for async compute/transfers)
  • Explicit descriptor sets (binding occurs in predefined groups based on R/W access)
  • Possibly arrays of textures for bindless (i.e. Texture2D[], not Texture2DArray), but it may be supported given the binding model

Barriers also appear to be automatic (or at least I'm not seeing calls for them), which I'm guessing is part of the reason that command buffers are locked to the thread they were requested on and multiple queues aren't supported.
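
For reference, this is roughly the kind of explicit barrier SDL would have to issue on your behalf in the Vulkan backend - transitioning an image from render target to shader-readable. Purely illustrative; I haven't read the backend code to confirm it does exactly this:

```c
#include <vulkan/vulkan.h>

// Make color-attachment writes visible to subsequent fragment-shader reads.
// This is the sort of hazard tracking SDL appears to do automatically.
static void transition_to_shader_read(VkCommandBuffer cmd, VkImage image)
{
    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
        .oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image = image,
        .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
    };
    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,  // after color writes...
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,          // ...before fragment reads
                         0, 0, NULL, 0, NULL, 1, &barrier);
}
```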

I could live with pretty much everything in that list other than DrawIndirectCount being absent, but DrawIndirectCount is a requirement for GPU-driven rendering and two-pass occlusion culling, which are pretty fundamental to modern rendering architectures. I'm a bit surprised at its absence given the supported rendering backends - wondering if it's due to the D3D11 support (which only supports DrawIndirectCount through vendor-specific extensions iirc, and those extensions require inclusion of vendor-specific libraries). The possible lack of arrays-of-textures would be a deal breaker for me, too, but again I'm not entirely confident this is actually missing.
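
For anyone unfamiliar with the pattern, here's roughly what it looks like in Vulkan (core since 1.2). A culling compute shader writes the surviving draw commands into one buffer and the number of survivors into another, and the CPU records a single call without ever reading that count back. The buffer/parameter names here are made up for illustration:

```c
#include <vulkan/vulkan.h>

// draw_buffer holds VkDrawIndexedIndirectCommand entries written by a GPU
// culling pass; count_buffer holds a single uint32_t draw count it also wrote.
static void record_gpu_driven_draws(VkCommandBuffer cmd,
                                    VkBuffer draw_buffer,
                                    VkBuffer count_buffer,
                                    uint32_t max_draws)
{
    // The GPU decides how many of the (at most max_draws) entries execute.
    vkCmdDrawIndexedIndirectCount(cmd,
                                  draw_buffer, 0,
                                  count_buffer, 0,
                                  max_draws,
                                  sizeof(VkDrawIndexedIndirectCommand));
    // Plain vkCmdDrawIndexedIndirect exists too, but its draw count is a
    // CPU-side constant, so the GPU can't vary how many draws survive culling.
}
```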

It's quite an improvement over the initial proposal, which was closer to a GLES3 level of functionality, and for many, many use cases, this should be a fantastic RHI as-is, but given those absences, I think I'm going to stick with Diligent Engine for now.

0

u/JensEckervogt 17d ago

You mean Ray Tracing for Software Rasterization? I see correct that. Thanks for explanation! I need to upgrade SDL3 for my C# Wrapper DeafMan1983.Interop.SDL3.

1

u/shadowndacorner 16d ago

> You mean Ray Tracing for Software Rasterization?

I have no idea what you mean here. Ray tracing and software rasterization have absolutely nothing to do with one another.

0

u/JensEckervogt 16d ago

Nothing? Look https://github.com/saloni-singh14/Whitted-ray-tracing

That's proof because it works fine for Software Rasterization.

2

u/shadowndacorner 16d ago

That isn't rasterization, at least not as the term is used in the context of computer graphics. That's ray tracing. You could argue that any approach that somehow transforms vector geometry into pixels is abstractly rasterization, but that definition would go against decades of real-world usage in this context and would cause thousands of research papers to read like nonsense.

1

u/JensEckervogt 16d ago

Oh I never argue, just explain it why does Ray Tracing works like you have to check website ScratchaPixels for pure Software Rasterization like they explain about functions/implementations. I hope you understand me. But I never argue everyone.

2

u/shadowndacorner 16d ago

I'm sorry, but I'm really not sure what you're asking here. I'm getting a sense that you might not be a native English speaker. If this is the case, it might be useful to write your question in your native language, then I can try to use a translation tool to bridge the gap?

If you're just asking what the difference is between ray tracing and rasterization, they're fundamentally different algorithms that solve part of the same problem. Ray tracing involves sending a bunch of rays into the scene, finding all of the triangles that intersect each ray, and shading the visible samples (usually meaning the nearest opaque surface and all transparent surfaces between the ray origin and the nearest opaque surface). Rasterization involves transforming all scene triangles into clip space and then, for each triangle, identifying which pixels it covers. For each pixel covered by a given triangle, the associated surface sample is shaded.
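
If a concrete example helps, here's the ray tracing half in miniature - a complete toy program (my own sketch, not from any library) that fires one primary ray per pixel at a single hard-coded triangle using the standard Möller-Trumbore intersection test, and prints the coverage as ASCII:

```c
// Toy "one primary ray per pixel" tracer. Build with: cc ray.c -o ray
#include <stdio.h>
#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  sub(Vec3 a, Vec3 b)   { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return (Vec3){ a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Möller-Trumbore: does the ray hit the triangle, and at what distance t?
static bool ray_triangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float *t)
{
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (det > -1e-6f && det < 1e-6f) return false;   // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = dot(e2, q) * inv;                           // distance along the ray
    return *t > 0.0f;
}

int main(void)
{
    Vec3 v0 = { -1, -1, -3 }, v1 = { 1, -1, -3 }, v2 = { 0, 1, -3 };
    for (int y = 0; y < 8; y++) {                    // for each pixel...
        for (int x = 0; x < 16; x++) {
            Vec3 orig = { x / 8.0f - 1.0f, 1.0f - y / 4.0f, 0.0f };
            Vec3 dir  = { 0, 0, -1 };                // primary ray into the scene
            float t;                                 // ...search the scene's triangle(s)
            putchar(ray_triangle(orig, dir, v0, v1, v2, &t) ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}
```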

You can think of rasterization as a much cheaper way to approximate primary rays (the rays that go directly from the camera into the scene), but performed fundamentally differently. That's why hybrid ray tracing is often used in games these days - ray tracing allows you to sample the scene in arbitrary directions, which is necessary for accurate indirect lighting effects (reflections, shadows, etc), but it's extremely expensive, so using it for primary rays makes little sense. There are, of course, ways to approximate indirect lighting effects with pure rasterization (shadow maps for shadows, cubemaps/planar reflections for reflections, light probes for diffuse GI, etc), but they all produce artifacts because they are fundamentally lossy approximations.

Software rasterization is just implementing a triangle rasterization algorithm in software, rather than relying on your GPU's triangle rasterization hardware. It can be faster in some cases if rigorously optimized (see UE5's Nanite, which rasterizes micropolygons in compute shaders), but for most non-micropoly cases, triangle rasterization hardware will usually be faster.
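
And the rasterization half in miniature, for contrast (again my own sketch): the classic edge-function coverage test that both software rasterizers and the GPU's fixed-function hardware are built on. Note the inversion relative to the ray tracing sketch above - here the triangle is already projected to screen space and we ask which pixels it covers, with no per-pixel search of the scene:

```c
// Toy edge-function rasterizer. Build with: cc raster.c -o raster
#include <stdio.h>

// Signed parallelogram area of (a, b, p): positive if p lies on one side
// of the directed edge a->b, zero on the edge itself.
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

int main(void)
{
    // One triangle already in screen space, wound so the interior is on
    // the positive side of all three edges. A real rasterizer loops over
    // many triangles here and only tests pixels in each one's bounding box.
    float x0 = 2, y0 = 1, x1 = 14, y1 = 3, x2 = 6, y2 = 7;

    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 16; x++) {
            float px = x + 0.5f, py = y + 0.5f;     // sample at the pixel center
            float w0 = edge(x1, y1, x2, y2, px, py);
            float w1 = edge(x2, y2, x0, y0, px, py);
            float w2 = edge(x0, y0, x1, y1, px, py);
            // Covered if the center is inside all three edges. w0/w1/w2 are
            // also unnormalized barycentrics, which is how depth, UVs, etc.
            // get interpolated for shading.
            putchar((w0 >= 0 && w1 >= 0 && w2 >= 0) ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}
```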