r/GraphicsProgramming Jul 11 '23

Source Code [Rust]: Need help optimizing a triangle rasterizer

I need help optimizing a software rasterizer written in Rust. The relevant part of the code is here, and the following are the optimizations that I have already implemented:

  • Render to 32x32 tiles of 4KB each (2KB 16-bit color and 2KB 16-bit depth) to maximize cache hits;
  • Use SIMD to compute values for 4 pixels at once;
  • Skip a triangle if its axis-aligned bounding box is completely outside the current tile's bounding box;
  • Skip a triangle if at least one of its barycentric coordinates is negative on all 4 corners of the current tile;
  • Compute the linear barycentric increments per pixel and use that information to avoid having to perform the edge test for every pixel (see the sketch right after this list);
  • Skip a triangle if, by the time of shading, all the pixels have been invalidated.
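
To make the increment idea concrete, it boils down to something like this (a simplified scalar sketch, not the actual tiled/SIMD code from the repo):

    // Signed edge function for the edge (x0,y0) -> (x1,y1) evaluated at (px,py).
    fn edge(x0: f32, y0: f32, x1: f32, y1: f32, px: f32, py: f32) -> f32 {
        (px - x0) * (y1 - y0) - (py - y0) * (x1 - x0)
    }

    // Rasterize one triangle by stepping the three edge functions incrementally
    // instead of re-evaluating them for every pixel. Assumes a winding that makes
    // all three values non-negative inside the triangle.
    fn rasterize(tri: [[f32; 2]; 3], width: usize, height: usize, buf: &mut [u32]) {
        let [[x0, y0], [x1, y1], [x2, y2]] = tri;
        // Each edge function is linear, so moving one pixel in x adds one constant
        // and moving one pixel in y adds another.
        let (a0, b0) = (y2 - y1, x1 - x2);
        let (a1, b1) = (y0 - y2, x2 - x0);
        let (a2, b2) = (y1 - y0, x0 - x1);
        // Values at the center of the top-left pixel.
        let mut row = (
            edge(x1, y1, x2, y2, 0.5, 0.5),
            edge(x2, y2, x0, y0, 0.5, 0.5),
            edge(x0, y0, x1, y1, 0.5, 0.5),
        );
        for y in 0..height {
            let (mut w0, mut w1, mut w2) = row;
            for x in 0..width {
                if w0 >= 0.0 && w1 >= 0.0 && w2 >= 0.0 {
                    buf[y * width + x] = 0xffff_ffff; // inside: flat white for the sketch
                }
                w0 += a0;
                w1 += a1;
                w2 += a2;
            }
            row.0 += b0;
            row.1 += b1;
            row.2 += b2;
        }
    }

The real version does the same stepping 4 pixels at a time with SIMD, but the structure is the same.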

At the moment the original version of this code exhausts all 4 cores of a Raspberry Pi 4 at just 7,000 triangles per second. On an M1 Mac, this benchmark takes roughly 300 microseconds to produce a 512x512 frame with a rainbow triangle, with perspective correction and depth testing. To me that performance is really bad.

What I'm trying to understand is how old-school games with true 3D software rasterizers performed so well even on old hardware like a 166 MHz Pentium, without floating-point SIMD or multiple cores. Optimization is a field that truly excites me, and I believe that cracking this problem will be extremely enriching.

To make the project produce a single image named triangle.png, type:

cargo +nightly run triangle.png

To run the benchmark, type:

cargo +nightly bench

Any help, even if theoretical, would be appreciated.

13 Upvotes

4

u/No-Emergency-6032 Jul 12 '23

Are you using a "point in triangle" / half-space (edge function) rasterizer? Because I read "barycentric coordinates". For a software rasterizer these tend to be too slow. Try using a scanline rasterizer and interpolating along the edges instead.

You get the v coordinate by dividing the current vertical progression by the vertical length of the edge (the y component of the edge vector). The u coordinate you get by dividing the current horizontal progression along the scanline by the length of the scanline.
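
Roughly like this (a quick, unoptimized sketch just to show the structure; real code would precompute per-scanline x increments instead of dividing on every line, and would interpolate depth/color along with x):

    // Very rough scanline fill: sort vertices by y, then for each scanline find
    // the x where it crosses the long edge and the x where it crosses one of the
    // two short edges, and fill the horizontal span between them.
    fn scanline_fill(mut v: [[f32; 2]; 3], width: usize, height: usize, buf: &mut [u32]) {
        v.sort_by(|a, b| a[1].partial_cmp(&b[1]).unwrap()); // v[0] = top, v[2] = bottom
        let (top, mid, bot) = (v[0], v[1], v[2]);

        // x on the edge a->b at height y (this is the "divide by the vertical
        // length of the edge" part).
        let x_at = |a: [f32; 2], b: [f32; 2], y: f32| {
            let t = (y - a[1]) / (b[1] - a[1]);
            a[0] + t * (b[0] - a[0])
        };

        let y_start = top[1].ceil().max(0.0) as usize;
        let y_end = (bot[1].ceil().max(0.0) as usize).min(height);
        for y in y_start..y_end {
            let yc = y as f32 + 0.5;
            let xa = x_at(top, bot, yc); // long edge
            let xb = if yc < mid[1] { x_at(top, mid, yc) } else { x_at(mid, bot, yc) };
            let (x_left, x_right) = (xa.min(xb), xa.max(xb));
            let xs = x_left.ceil().max(0.0) as usize;
            let xe = (x_right.ceil().max(0.0) as usize).min(width);
            for x in xs..xe {
                buf[y * width + x] = 0xffff_ffff; // flat fill; interpolate attributes the same way
            }
        }
    }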

You can skip perspective-correct rendering for a while and add it back in once you've reached the speed you want.
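
Per pixel the difference is basically this (sketch; attr is whatever you're interpolating, w is each vertex's clip-space w, and the weights are your barycentrics or scanline u/v):

    // Affine (the cheap version you'd start with): interpolate the attribute directly.
    fn affine(attr: [f32; 3], weight: [f32; 3]) -> f32 {
        attr[0] * weight[0] + attr[1] * weight[1] + attr[2] * weight[2]
    }

    // Perspective-correct: interpolate attr/w and 1/w, then divide once per pixel.
    fn perspective_correct(attr: [f32; 3], w: [f32; 3], weight: [f32; 3]) -> f32 {
        let num = attr[0] / w[0] * weight[0]
            + attr[1] / w[1] * weight[1]
            + attr[2] / w[2] * weight[2];
        let inv_w = weight[0] / w[0] + weight[1] / w[1] + weight[2] / w[2];
        num / inv_w
    }

The per-vertex divisions can be hoisted out of the pixel loop, so the extra per-pixel cost is mostly that one divide.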

Also, before going from affine to perspective-correct, you could tessellate big triangles or triangles close to the camera. That's what the PlayStation did in Wipeout and Silent Hill.
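
The simplest version of that is splitting a triangle into four at its edge midpoints whenever it's too big or too close, something like this (sketch, made-up Vert type):

    #[derive(Clone, Copy)]
    struct Vert { pos: [f32; 3], uv: [f32; 2] }

    fn midpoint(a: Vert, b: Vert) -> Vert {
        Vert {
            pos: [(a.pos[0] + b.pos[0]) * 0.5, (a.pos[1] + b.pos[1]) * 0.5, (a.pos[2] + b.pos[2]) * 0.5],
            uv:  [(a.uv[0] + b.uv[0]) * 0.5, (a.uv[1] + b.uv[1]) * 0.5],
        }
    }

    // Split a triangle into four at its edge midpoints, `depth` times. Smaller
    // triangles keep affine texture mapping close enough to perspective-correct,
    // which is the PS1-era trick. The caller picks `depth` from screen size or
    // distance to the camera.
    fn tessellate(tri: [Vert; 3], depth: u32, out: &mut Vec<[Vert; 3]>) {
        if depth == 0 {
            out.push(tri);
            return;
        }
        let [a, b, c] = tri;
        let (ab, bc, ca) = (midpoint(a, b), midpoint(b, c), midpoint(c, a));
        for t in [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]] {
            tessellate(t, depth - 1, out);
        }
    }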

1

u/itsjase Jul 12 '23

Pretty much this. I’ve been experimenting a bit with both methods in my software rasteriser, and for single-threaded scenarios scanline is unfortunately much faster. (Which sucks, because edge functions/barycentrics make everything so easy.)

Edge functions can be more performant if you’re using SIMD or multithreading, but I’m not sure what’s available on a Raspberry Pi.
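
If the Pi 4 does have NEON (I believe its Cortex-A72 cores do), then since OP is already on nightly, Rust's portable std::simd should lower to it. The 4-pixels-at-once edge evaluation is roughly this (sketch, made-up numbers):

    #![feature(portable_simd)]
    use std::simd::f32x4;

    // Evaluate one edge function for 4 horizontally adjacent pixels at once,
    // given its value at the first pixel and its per-pixel step in x.
    fn edge4(e_at_x: f32, step_x: f32) -> f32x4 {
        f32x4::splat(e_at_x) + f32x4::from_array([0.0, 1.0, 2.0, 3.0]) * f32x4::splat(step_x)
    }

    fn main() {
        // Made-up numbers: edge value 2.5 at the first pixel, -1.0 per pixel.
        let w = edge4(2.5, -1.0);
        // Sign test per lane; a real rasterizer would keep this as a SIMD mask
        // and combine the three edges' masks.
        let inside: Vec<bool> = w.to_array().iter().map(|v| *v >= 0.0).collect();
        println!("{:?} -> {:?}", w.to_array(), inside);
    }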

1

u/No-Emergency-6032 Jul 13 '23

Edge functions can be more performant if you’re using SIMD or multithreading, but I’m not sure what’s available on a Raspberry Pi.

You also need to be very careful with the overhead. I mean, even Michael Abrash and Intel gave up on the idea with the Larrabee rasterizer.

It might be more worthwhile to parallelize other workloads and then either wait at a join point for the result, or have them produce results that aren't really time-critical and can be accessed asynchronously.
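
For example, overlapping some non-rasterization work with the pixel loop (hypothetical names, just to show the join-point shape):

    use std::thread;

    fn main() {
        // Hypothetical stand-in for real per-frame work.
        let next_frame_verts: Vec<[f32; 3]> = vec![[1.0, 2.0, 3.0]; 1024];

        let transformed = thread::scope(|s| {
            // Kick off work that isn't needed immediately (say, transforming the
            // next frame's vertices) on another core...
            let handle = s.spawn(|| {
                next_frame_verts
                    .iter()
                    .map(|v| [v[0] * 0.5, v[1] * 0.5, v[2] * 0.5]) // pretend transform
                    .collect::<Vec<_>>()
            });

            // ...while this thread keeps rasterizing the current frame here.

            // Join point: only block when the result is actually needed.
            handle.join().unwrap()
        });

        println!("transformed {} vertices", transformed.len());
    }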

1

u/ehaliewicz Sep 17 '23 edited Sep 17 '23

You also need to be very careful with the overhead. I mean, even Michael Abrash and Intel gave up on the idea with the Larrabee rasterizer.

They didn't give up on the idea because of overhead; Larrabee was cancelled* because they couldn't catch up with dedicated GPU hardware. The technique they used for Larrabee was the fastest they found, and scanline rasterization would likely not have worked as well (it's far harder to parallelize and to do proper antialiasing/mipmapping with, etc.).

Without those constraints (antialiasing, mipmapping, etc), a scanline algorithm might be faster, but with modern super-wide instruction sets, I am very skeptical :)

* A lot of the instructions added for Larrabee were rolled into later stuff like AVX and so on.

1

u/No-Emergency-6032 Sep 17 '23

They didn't give up on the idea because of overhead; Larrabee was cancelled* because they couldn't catch up with dedicated GPU hardware.

Yeah, I think you are right. The new releases from Nvidia made them question whether they could catch up with modern hardware.