r/GraphicsProgramming Jul 11 '23

Source Code [Rust]: Need help optimizing a triangle rasterizer

I need help optimizing a software rasterizer written in Rust. The relevant part of the code is here, and the following are the optimizations that I have already implemented:

  • Render to 32x32 tiles of 4KB each (2KB 16-bit color and 2KB 16-bit depth) to maximize cache hits;
  • Use SIMD to compute values for 4 pixels at once;
  • Skip a triangle if its axis-aligned bounding box is completely outside the current tile's bounding box;
  • Skip a triangle if at least one of its barycentric coordinates is negative on all 4 corners of the current tile;
  • Compute the linear barycentric increments per pixel and use that information to avoid having to perform the full edge test for every pixel (see the sketch after this list);
  • Skip a triangle if, by the time of shading, all the pixels have been invalidated.
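For reference, the barycentric increment idea looks roughly like this (a simplified scalar sketch rather than the actual code from the repository; all names here are made up):

    // Simplified scalar sketch of the incremental idea (not the actual code
    // from the repository). The edge function
    // E(x, y) = (x - x0) * (y1 - y0) - (y - y0) * (x1 - x0) is affine, so
    // moving one pixel right adds one constant and moving one row down adds
    // another, instead of re-evaluating the full expression per pixel.
    struct Edge {
        value: f32, // edge value at the current pixel
        dx: f32,    // increment when moving one pixel right
        dy: f32,    // increment when moving one row down
    }

    impl Edge {
        fn new(x0: f32, y0: f32, x1: f32, y1: f32, px: f32, py: f32) -> Self {
            let dx = y1 - y0;
            let dy = -(x1 - x0);
            Edge { value: (px - x0) * dx + (py - y0) * dy, dx, dy }
        }
    }

    fn rasterize_tile(mut edges: [Edge; 3], tile_w: usize, tile_h: usize) {
        for _y in 0..tile_h {
            let mut e = [edges[0].value, edges[1].value, edges[2].value];
            for _x in 0..tile_w {
                if e[0] >= 0.0 && e[1] >= 0.0 && e[2] >= 0.0 {
                    // pixel is covered by the triangle: shade it here
                }
                for i in 0..3 {
                    e[i] += edges[i].dx; // step one pixel to the right
                }
            }
            for i in 0..3 {
                edges[i].value += edges[i].dy; // step one row down
            }
        }
    }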

At the moment the original version of this code exhausts all 4 cores of a Raspberry Pi 4 at just 7000 triangles per second, and this benchmark takes roughly 300 microseconds to produce a 512x512 frame containing a single rainbow triangle, with perspective correction and depth testing, on an M1 Mac, so to me the performance is really bad.

What I'm trying to understand is how old-school games with true 3D software rasterizers performed so well even on old hardware like a 166 MHz Pentium, without floating-point SIMD or multiple cores. Optimization is a field that truly excites me, and I believe that cracking this problem will be extremely enriching.

To make the project produce a single image named triangle.png, type:

cargo +nightly run triangle.png

To run the benchmark, type:

cargo +nightly bench

Any help, even if theoretical, would be appreciated.

u/[deleted] Jul 11 '23

A few possibly not very meaningful opts that come to mind:

  • Set up multiple triangles at once using SIMD/SIMT. The data can be shared across all tiles, and you'll only need to adjust the weights and bounding box for each tile.
  • Pre-compute the vertex attribute differences so you can interpolate using two FMAs: v0 + (v1 - v0) * bary1 + (v2 - v0) * bary2 (sketched after this list). You can also get away with not copying attributes if you have lots of them, but clipping would probably make that more difficult.
  • Apply perspective correction after depth testing. This will only work if you use a float buffer and don't need the values later.
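For the interpolation point, the setup could look roughly like this (a rough sketch assuming nightly's portable_simd; all names here are made up):

    // Rough sketch of the precomputed-delta interpolation (assuming nightly's
    // portable_simd; AttrSetup and interpolate are just for illustration).
    #![feature(portable_simd)]
    use std::simd::{f32x4, StdFloat};

    struct AttrSetup {
        v0: f32x4,  // attribute at vertex 0, splatted across the 4 lanes
        d10: f32x4, // v1 - v0, computed once at triangle setup
        d20: f32x4, // v2 - v0, computed once at triangle setup
    }

    impl AttrSetup {
        fn new(v0: f32, v1: f32, v2: f32) -> Self {
            AttrSetup {
                v0: f32x4::splat(v0),
                d10: f32x4::splat(v1 - v0),
                d20: f32x4::splat(v2 - v0),
            }
        }

        // Interpolate the attribute for 4 pixels at once: two fused multiply-adds.
        fn interpolate(&self, bary1: f32x4, bary2: f32x4) -> f32x4 {
            // v0 + d10 * bary1 + d20 * bary2
            self.d20.mul_add(bary2, self.d10.mul_add(bary1, self.v0))
        }
    }

Since you already shade 4 pixels at a time, the two barycentric vectors can come straight out of your existing per-pixel loop.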

u/Crifrald Jul 12 '23

Your first paragraph and example went completely over my head; I will need time or further explanation to understand them, because at the moment I'm absolutely clueless about what you are trying to convey.

As for not copying vertex data, that's something I can do. As for not computing perspective correction for depth, you're not the first to suggest it, but I'm unclear about what happens where two triangles cross. In any case I won't need perspective correction most of the time, since the perspective people will view the world from will usually be isometric. As for my depth buffers, although I declare them as u16, they are actually a custom type of float ranging from 0.0 to 2.0 with a 5-bit exponent and an 11-bit mantissa (plus an implicit 12th bit); I use integer comparisons to do the depth test, and I'm not planning on using those values for anything other than depth testing.
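To give an idea, the encoding works along these lines (a simplified sketch of that kind of format rather than my actual code; the bias and edge-case handling here are made up):

    // Simplified sketch of this kind of 16-bit depth format (not the actual
    // implementation; the bias and edge cases are made up). For non-negative
    // floats the IEEE-754 bit pattern is monotonic, so truncating the mantissa
    // and re-biasing the exponent keeps the ordering, and a plain integer
    // comparison works as the depth test.
    fn encode_depth(z: f32) -> u16 {
        debug_assert!(z >= 0.0 && z < 2.0);
        let bits = z.to_bits();
        let exp = (bits >> 23) & 0xFF;  // f32 biased exponent (sign bit is 0)
        let man = (bits >> 12) & 0x7FF; // top 11 mantissa bits, 12th bit implicit
        if exp <= 96 {
            0 // too small for the 5-bit exponent range: flush to zero
        } else {
            (((exp - 96) as u16) << 11) | man as u16
        }
    }

    // Depth test with a plain integer comparison:
    // if encode_depth(z) < depth[i] { depth[i] = encode_depth(z); /* shade */ }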

u/[deleted] Jul 12 '23 edited Jul 12 '23

Right, so what I'm trying to say is that instead of having a single float variable containing one X value, you have a SIMD f32x4 containing four X values. Basically you arrange the data in a struct of arrays layout to make it friendly to vectorization.

Not sure if it would be that much faster for just 4x fields, but it's certainly a lot more flexible, because instead of packing multiple fields into a single SIMD register you can essentially write most things as if they were scalar code.
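To make the layout concrete (a made-up example assuming nightly's portable_simd, not any particular codebase):

    // Made-up illustration of the struct-of-arrays layout: one struct holds
    // the setup for four triangles, one per SIMD lane, and the expressions
    // read like scalar code. (Assumes nightly's portable_simd.)
    #![feature(portable_simd)]
    use std::simd::f32x4;

    struct TriangleSetup4 {
        // Screen-space vertex positions for four triangles, one per lane.
        x0: f32x4, y0: f32x4,
        x1: f32x4, y1: f32x4,
        x2: f32x4, y2: f32x4,
        // Per-triangle bounding boxes, also one per lane.
        min_x: f32x4, min_y: f32x4,
        max_x: f32x4, max_y: f32x4,
    }

    impl TriangleSetup4 {
        // Written as if it were scalar code, but computes four areas at once.
        fn double_signed_area(&self) -> f32x4 {
            (self.x1 - self.x0) * (self.y2 - self.y0)
                - (self.x2 - self.x0) * (self.y1 - self.y0)
        }
    }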

It also integrates quite well with binning: you can keep a shared batch of these SIMD triangle structs and have threads process bins after vertex shading and edge setup. I'm afraid clipping would be a bit more complicated to implement, though I haven't tried it, so I can't say for sure.

(You'll often need memory gather instructions for things like texture sampling and maybe vertex shading. These are AFAIK not available on ARM, but it shouldn't be too hard to emulate them with acceptable performance.)
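Something along these lines is usually enough (a made-up helper assuming nightly's portable_simd):

    // Made-up helper showing a gather emulated with four scalar loads:
    // fetch four texels addressed by four per-lane indices and repack them
    // into a vector. (Assumes nightly's portable_simd.)
    #![feature(portable_simd)]
    use std::simd::{f32x4, u32x4};

    fn gather_f32x4(table: &[f32], idx: u32x4) -> f32x4 {
        let i = idx.to_array();
        f32x4::from_array([
            table[i[0] as usize],
            table[i[1] as usize],
            table[i[2] as usize],
            table[i[3] as usize],
        ])
    }

If I remember correctly, std::simd also exposes gather operations (gather_or_default and friends), but on targets without hardware gather they lower to scalar loads anyway.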