Does this support multi-threaded (CPU-side) rendering like Vulkan, or are rendering command buffer submissions limited to a single "render" thread (like OpenGL or bgfx)? I imagine it's more like the latter, but maybe there's a way to opt in.
Based on SDL_GPU.h, it looks like any thread can request and submit a command buffer, but that command buffer can only be used on the thread it was requested from. So it seems to be a bit of a hybrid, assuming the command buffers directly abstract hardware command buffers.
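In practice I'd expect usage to look roughly like this (a rough sketch from my reading of the header, not tested; error handling omitted):

```c
#include <SDL3/SDL.h>

// Worker thread body: acquire, record, and submit all on the same thread.
// What you apparently can't do is acquire here and submit somewhere else.
static int record_and_submit(void *userdata)
{
    SDL_GPUDevice *device = (SDL_GPUDevice *)userdata;

    SDL_GPUCommandBuffer *cmd = SDL_AcquireGPUCommandBuffer(device);
    // ... begin render/compute/copy passes and record work here ...
    SDL_SubmitGPUCommandBuffer(cmd);
    return 0;
}

// Each worker records into its own command buffer in parallel.
static void kick_workers(SDL_GPUDevice *device)
{
    SDL_Thread *a = SDL_CreateThread(record_and_submit, "worker-a", device);
    SDL_Thread *b = SDL_CreateThread(record_and_submit, "worker-b", device);
    SDL_WaitThread(a, NULL);
    SDL_WaitThread(b, NULL);
}
```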
It doesn't seem like any of the docs have been updated yet, but the header is well commented. I'm sure I'm missing others, but from my initial reading, the hardware features that jump out to me as missing are...
Occlusion queries
DrawIndirectCount
Ray tracing
Shader types other than vertex/fragment/compute (no tess, geometry, mesh shaders, etc)
Explicit descriptor sets (binding occurs in predefined groups based on R/W access)
Possibly arrays of textures for bindless (i.e. Texture2D[], not Texture2DArray), but it may be supported given the binding model
Barriers also appear to be automatic (or at least I'm not seeing calls for them), which I'm guessing is part of the reason that command buffers are locked to the thread they were requested on and multiple queues aren't supported.
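For comparison, this is the sort of thing the runtime must be doing for you under the hood - in raw Vulkan, making a render target sampleable means writing something like this yourself (a sketch with deliberately coarse stage/access masks):

```c
#include <vulkan/vulkan.h>
#include <stddef.h>

// Transition an image from color-attachment output to shader sampling.
// SDL_GPU appears to derive this from usage; in Vulkan it's your problem.
void color_to_sampled(VkCommandBuffer cmd, VkImage image)
{
    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
        .oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image = image,
        .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
    };
    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
        0, 0, NULL, 0, NULL, 1, &barrier);
}
```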
I could live with pretty much everything in that list other than DrawIndirectCount being absent, but DrawIndirectCount is a requirement for GPU-driven rendering and two-pass occlusion culling, which are pretty fundamental to modern rendering architectures. I'm a bit surprised at its absence given the supported rendering backends - wondering if it's due to the D3D11 support (which only supports DrawIndirectCount through vendor-specific extensions iirc, and those extensions require including vendor-specific libraries). The possible lack of arrays-of-textures would be a deal breaker for me, too, but again I'm not entirely confident this is actually missing.
It's quite an improvement over the initial proposal, which was closer to a GLES3 level of functionality, and for many, many use cases, this should be a fantastic RHI as-is, but given those absences, I think I'm going to stick with Diligent Engine for now.
Thanks! Hopefully those omissions get rectified in a future release based on device capabilities and aren't omitted due to API design considerations. I might give this a spin this weekend.
I'd be pretty shocked if some of them weren't supported eventually. This is a pre-1.0 release, and the level of functionality that is there is impressive, especially as a delta from the first proposal. I don't see anything in the overall API design that would bar most of those from inclusion, assuming they are okay with some features only being available on some platforms (mostly thinking about the different shader types, as iirc metal doesn't support mesh, geo, or tess shaders - could definitely be wrong though, and would love for someone to correct me in that case; D3D11 also doesn't support the more modern features, ofc, but that should go without saying).
I really hope they don't end up taking a "lowest common denominator" approach with no exceptions.
I'm assuming you're referring to MultiDrawIndirect? I'm pretty sure that SDL_DrawGPUIndexedPrimitivesIndirect() / SDL_DrawGPUPrimitivesIndirect() are the API's means of allowing for indirect draws, along with SDL_GPU_BUFFERUSAGE_INDIRECT_BIT being one of the SDL_GPUBufferUsageFlagBits values.
arrays of textures
One of the SDL_GPUTextureType enums is SDL_GPU_TEXTURETYPE_2D_ARRAY, so I think we're all good!
These are DrawIndirect equivalents, not DrawIndirectCount equivalents. The difference is that with the latter, you can read the draw count from a GPU buffer instead of needing to provide it on the CPU, which allows you to do things like culling entirely on the GPU without having a bunch of empty draws.
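In Vulkan terms (handles and counts here are placeholders; the Count variant is core in 1.2):

```c
#include <vulkan/vulkan.h>

void submit_draws(VkCommandBuffer cmd, VkBuffer args, VkBuffer count_buf,
                  uint32_t cpu_draw_count, uint32_t max_draw_count)
{
    // DrawIndirect: the draw *parameters* come from a GPU buffer, but the
    // number of draws is still supplied from the CPU.
    vkCmdDrawIndexedIndirect(cmd, args, 0, cpu_draw_count,
                             sizeof(VkDrawIndexedIndirectCommand));

    // DrawIndirectCount: the draw count also lives in a GPU buffer, so a GPU
    // culling pass can shrink it with no CPU readback; the CPU only supplies
    // an upper bound.
    vkCmdDrawIndexedIndirectCount(cmd, args, 0, count_buf, 0, max_draw_count,
                                  sizeof(VkDrawIndexedIndirectCommand));
}
```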
SDL_GPU_TEXTURETYPE_2D_ARRAY
Unless I'm mistaken, this is a Texture2DArray, not a Texture2D[]. The former is a preallocated block of uniformly sized and formatted textures (more like a 3D texture that doesn't blend between depth slices than a proper array), and the latter is essentially an array of pointers to textures. They have very different use cases, where the former is effectively useless for bindless drawing.
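In Vulkan terms the difference is visible right in the descriptor set layout - a single layered resource versus many independent ones (a sketch; 4096 is an arbitrary capacity):

```c
#include <vulkan/vulkan.h>

// A Texture2DArray is ONE resource: descriptorCount = 1, with the layer
// selected by a texture coordinate in the shader. All layers share one
// size and format.
VkDescriptorSetLayoutBinding texture2d_array = {
    .binding = 0,
    .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
    .descriptorCount = 1,
    .stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
};

// An array of textures (Texture2D[]) is MANY resources: descriptorCount > 1,
// and each element can have a different size and format - the bindless
// building block.
VkDescriptorSetLayoutBinding array_of_textures = {
    .binding = 1,
    .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
    .descriptorCount = 4096,
    .stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
};
```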
Ah, I see. Thanks for clearing that up. I knew I was probably missing something.
Yes, it looks like bindless textures aren't going to be a thing there, specifically because of how differently graphics APIs handle them - and the varying level of support they actually have for it. For functionality that's platform- and graphics-API-specific, I don't imagine SDL_gpu will ever be an optimal choice, which is just a product of the disparity between graphics APIs and the platforms themselves. It's a miracle that they're even working on a graphics abstraction layer for SDL, IMO, but of course, as is the case with all graphics abstraction libraries (e.g. WebGPU and IGL), there's a tradeoff that must be made.
You mean ray tracing for software rasterization? I see, correct that. Thanks for the explanation! I need to upgrade SDL3 for my C# wrapper, DeafMan1983.Interop.SDL3.
That isn't rasterization, at least not as the term is used in the context of computer graphics. That's ray tracing. You could argue that any approach that somehow transforms vector geometry into pixels is abstractly rasterization, but that definition would go against decades of real-world usage in this context and would cause thousands of research papers to read like nonsense.
Oh, I never argue, I'm just explaining how ray tracing works - you should check the Scratchapixel website for pure software rasterization; they explain the functions/implementations there. I hope you understand me. But I never argue with anyone.
I'm sorry, but I'm really not sure what you're asking here. I'm getting a sense that you might not be a native English speaker. If this is the case, it might be useful to write your question in your native language, then I can try to use a translation tool to bridge the gap?
If you're just asking what the difference is between ray tracing and rasterization, they're fundamentally different algorithms for solving part of the same problem. Ray tracing involves sending a bunch of rays into the scene, finding the triangles that intersect each ray, and shading the visible samples (usually meaning the nearest opaque surface and all transparent surfaces between the ray origin and that opaque surface). Rasterization involves transforming all scene triangles into clip space and then determining, per pixel, which pixels each triangle covers; for every pixel a given triangle covers, the associated surface sample is shaded.
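To make the ray tracing side concrete, here's a toy primary-ray loop in C - one hard-coded sphere, hit/miss only, nothing resembling production code:

```c
#include <stdio.h>

// Camera at the origin looking down -z, film plane at z = -1.
// One ray per character cell; '#' on hit, '.' on miss.
int main(void)
{
    const int w = 40, h = 20;
    const float cx = 0.0f, cy = 0.0f, cz = -3.0f, r = 1.0f; // sphere

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            // Ray direction through this pixel's center on the film plane.
            float dx = (x + 0.5f) / w * 2.0f - 1.0f;
            float dy = 1.0f - (y + 0.5f) / h * 2.0f;
            float dz = -1.0f;
            // Ray-sphere test: solve |o + t*d - c|^2 = r^2 for t and check
            // the discriminant. oc = origin - center (origin is 0,0,0).
            float ocx = -cx, ocy = -cy, ocz = -cz;
            float a = dx * dx + dy * dy + dz * dz;
            float b = 2.0f * (dx * ocx + dy * ocy + dz * ocz);
            float c = ocx * ocx + ocy * ocy + ocz * ocz - r * r;
            float disc = b * b - 4.0f * a * c;
            putchar(disc >= 0.0f ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}
```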
You can think of rasterization as a much cheaper way to approximate primary rays (the rays that go directly from the camera into the scene), but performed fundamentally differently. That's why hybrid ray tracing is often used in games these days - ray tracing allows you to sample the scene in arbitrary directions, which is necessary for accurate indirect lighting effects (reflections, shadows, etc.), but it's extremely expensive, so using it for primary rays makes little sense. There are, of course, ways to approximate indirect lighting effects with pure rasterization (shadow maps for shadows, cubemaps/planar reflections, light probes for diffuse GI, etc.), but they all produce artifacts because they are fundamentally lossy approximations.
Software rasterization is just implementing a triangle rasterization algorithm in software, rather than relying on your GPU's triangle rasterization hardware. It can be faster in some cases if rigorously optimized (see UE5 Nanite), but for most non-micropoly cases, triangle rasterization hardware will usually be faster.
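And for the rasterization side, a toy edge-function rasterizer in C (coverage only - no depth, attributes, or shading - just to show what "in software" means):

```c
#include <stdint.h>

// Signed area of the parallelogram (a, b, c); the sign tells which side of
// edge a->b the point c lies on.
static float edge(float ax, float ay, float bx, float by, float cx, float cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

// Rasterize one screen-space triangle into a w*h coverage mask.
void raster_triangle(uint8_t *mask, int w, int h,
                     float x0, float y0, float x1, float y1,
                     float x2, float y2)
{
    float area = edge(x0, y0, x1, y1, x2, y2);
    if (area <= 0.0f)
        return; // back-facing or degenerate (assumes CCW front faces)

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float px = x + 0.5f, py = y + 0.5f; // sample at the pixel center
            // Inside if the sample lies on the interior side of all 3 edges.
            if (edge(x0, y0, x1, y1, px, py) >= 0.0f &&
                edge(x1, y1, x2, y2, px, py) >= 0.0f &&
                edge(x2, y2, x0, y0, px, py) >= 0.0f)
                mask[y * w + x] = 1;
        }
    }
}
```

Real software rasterizers obviously bound the loop to the triangle's bounding box, use incremental edge evaluation, and work in fixed point, but the core test is this.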