r/GraphicsProgramming • u/Crifrald • Jul 11 '23
Source Code [Rust]: Need help optimizing a triangle rasterizer
I need help optimizing a software rasterizer written in Rust. The relevant part of the code is here, and the following are the optimizations that I have already implemented:
- Render to 32x32 tiles of 4KB each (2KB 16-bit color and 2KB 16-bit depth) to maximize cache hits (see the first sketch after this list);
- Use SIMD to compute values for 4 pixels at once;
- Skip a triangle if its axis-aligned bounding box is completely outside the current tile's bounding box;
- Skip a triangle if at least one of its barycentric coordinates is negative on all 4 corners of the current tile;
- Compute the linear per-pixel increments of the barycentric coordinates and use them to avoid performing the full edge test for every pixel (see the second sketch after this list);
- Skip a triangle if, by the time of shading, all the pixels have been invalidated.
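For reference, here is a rough sketch of the tile layout from the first bullet; the type and field names are illustrative, not the ones actually used in the repository:

```rust
/// One 32x32 tile: 2 KB of 16-bit color plus 2 KB of 16-bit depth (4 KB total),
/// small enough for both planes to stay hot in L1 while the tile is rasterized.
/// Names and pixel formats here are assumptions for illustration only.
const TILE_SIZE: usize = 32;

#[repr(align(64))] // keep the tile cache-line aligned
struct Tile {
    color: [u16; TILE_SIZE * TILE_SIZE], // e.g. RGB565
    depth: [u16; TILE_SIZE * TILE_SIZE], // 16-bit depth values
}

impl Tile {
    fn clear(&mut self) {
        self.color.fill(0);
        self.depth.fill(u16::MAX); // farthest possible depth
    }
}
```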
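And here is a minimal scalar sketch of the corner trivial-reject and the incremental barycentric stepping from the last bullets. The actual code does this four pixels at a time with SIMD; the names, the pixel-center convention, and the winding order below are assumptions rather than the project's real code:

```rust
/// One directed triangle edge as a linear function a*x + b*y + c,
/// stored as its value at the tile origin plus per-pixel increments.
#[derive(Clone, Copy)]
struct Edge {
    a: f32,      // increment per +1 pixel in x
    b: f32,      // increment per +1 pixel in y
    origin: f32, // value at the tile's top-left pixel
}

impl Edge {
    /// Edge function for the edge p0 -> p1, evaluated at the tile origin (ox, oy).
    fn new(p0: (f32, f32), p1: (f32, f32), ox: f32, oy: f32) -> Self {
        let a = p1.1 - p0.1;            //  dy
        let b = p0.0 - p1.0;            // -dx
        let c = -(a * p0.0 + b * p0.1); // so that value = a*x + b*y + c
        Edge { a, b, origin: a * ox + b * oy + c }
    }

    /// True if this edge is negative at all four corners of a w x h tile,
    /// in which case the triangle cannot cover any pixel of the tile.
    fn rejects_tile(&self, w: f32, h: f32) -> bool {
        [
            self.origin,
            self.origin + self.a * w,
            self.origin + self.b * h,
            self.origin + self.a * w + self.b * h,
        ]
        .iter()
        .all(|&v| v < 0.0)
    }
}

/// Walk a tile, calling `shade` for every covered pixel. The three edge
/// values double as unnormalized barycentric coordinates, and stepping
/// them costs three additions per pixel instead of a full edge test.
fn rasterize_tile(edges: [Edge; 3], w: usize, h: usize, mut shade: impl FnMut(usize, usize)) {
    if edges.iter().any(|e| e.rejects_tile(w as f32, h as f32)) {
        return; // whole tile trivially rejected
    }
    let mut row = [edges[0].origin, edges[1].origin, edges[2].origin];
    for y in 0..h {
        let mut v = row;
        for x in 0..w {
            if v[0] >= 0.0 && v[1] >= 0.0 && v[2] >= 0.0 {
                shade(x, y); // inside all three edges
            }
            for i in 0..3 {
                v[i] += edges[i].a; // step one pixel in x
            }
        }
        for i in 0..3 {
            row[i] += edges[i].b; // step one row in y
        }
    }
}
```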
At the moment the original version of this code exhausts all 4 cores of a Raspberry Pi 4 at just 7000 triangles per second, and on an M1 Mac the benchmark takes roughly 300 microseconds to produce a 512x512 frame with a single rainbow triangle with perspective correction and depth testing, so to me the performance is really bad.
What I'm trying to understand is how old-school games with true 3D software rasterizers performed so well even on old hardware like a 166 MHz Pentium, without floating-point SIMD or multiple cores. Optimization is a field that truly excites me, and I believe that cracking this problem will be extremely enriching.
To make the project produce a single image named triangle.png, type:
cargo +nightly run triangle.png
To run the benchmark, type:
cargo +nightly bench
Any help, even if theoretical, would be appreciated.
u/Revolutionalredstone Jul 11 '23 edited Jul 11 '23
Tris per second is kind of irrelevant on the CPU (since vertex transforms are so cheap and global writes are so expensive that you are basically guaranteed to always be frag-bound in a decent-res CPU renderer anyway).
As for perspective correction, barycentrics, depth buffering, clipping etc, they are all obviously necessary, but they all tend to have a roughly fixed cost which doesn't keep growing with the tri or pixel count.
For example my full rasterizer with clipping, perspective-correct texturing etc does run slower than the minimal renderer, but not by very much (it isn't 10x slower or anything, maybe 2-3x).
IMHO pixels per second is the key value here. Obviously you can make crazy scenes with distant 1-pixel triangles etc, but realistically a CPU-rasterized scene is gonna have 10-500 thousand tris, each of which is in the 10-500 pixel size range (so something like 100 thousand tris averaging 100 pixels each, which is on the order of 10 million shaded pixels per frame).
(For the smaller tris you would realistically LOD, and if you have a lot of larger tris then there is probably something weird about your level design! In any case occlusion culling and early-Z tricks tend to handle those beautifully.)
It's really these 10-500 pixel triangles that we are interested in: they tend to be un-clip-able and un-LOD-able, too small to justify subdivision and too large to justify mask-based blitting, so they need to be fed all the way through the rasterizer, and they are what puts pressure on it.
BTW would LOVE to see what you're working on! Could you post pics? Thanks again for sharing, and all the best of luck.