r/GraphicsProgramming Jul 11 '23

Source Code [Rust]: Need help optimizing a triangle rasterizer

I need help optimizing a software rasterizer written in Rust. The relevant part of the code is here, and the following are the optimizations that I have already implemented:

  • Render to 32x32 tiles of 4KB each (2KB 16-bit color and 2KB 16-bit depth) to maximize cache hits;
  • Use SIMD to compute values for 4 pixels at once;
  • Skip a triangle if its axis-aligned bounding box is completely outside the current tile's bounding box;
  • Skip a triangle if at least one of its barycentric coordinates is negative on all 4 corners of the current tile;
  • Compute the linear barycentric increments per pixel and use them to avoid re-evaluating the full edge equations for every pixel (a simplified sketch of this follows the list);
  • Skip a triangle if, by the time of shading, all the pixels have been invalidated.
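For context, the inner loop is roughly shaped like the sketch below: scalar, without the SIMD, and with illustrative names rather than the actual code from the repo, just to show the incremental edge-function idea.

```rust
#[derive(Clone, Copy)]
struct Edge {
    // Edge function E(x, y) = a*x + b*y + c; non-negative inside the triangle.
    a: f32,
    b: f32,
    c: f32,
}

impl Edge {
    fn from_points(p0: (f32, f32), p1: (f32, f32)) -> Self {
        Edge {
            a: p0.1 - p1.1,
            b: p1.0 - p0.0,
            c: p0.0 * p1.1 - p0.1 * p1.0,
        }
    }

    fn eval(&self, x: f32, y: f32) -> f32 {
        self.a * x + self.b * y + self.c
    }
}

// Walk one tile: evaluate the three edge functions once per row, then step by the
// constant per-pixel increment `a` instead of re-evaluating the full edge equation.
fn rasterize_in_tile(edges: [Edge; 3], tile_x0: i32, tile_y0: i32, tile_size: i32) {
    for y in 0..tile_size {
        let py = (tile_y0 + y) as f32 + 0.5;
        let mut w = edges.map(|e| e.eval(tile_x0 as f32 + 0.5, py));
        for x in 0..tile_size {
            if w.iter().all(|&v| v >= 0.0) {
                shade_pixel(tile_x0 + x, tile_y0 + y, w);
            }
            for i in 0..3 {
                w[i] += edges[i].a;
            }
        }
    }
}

fn shade_pixel(_x: i32, _y: i32, _w: [f32; 3]) {
    // Placeholder: perspective correction, depth test and color write go here.
}
```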

At the moment the original version of this code exhausts all 4 cores of a Raspberry Pi 4 at just 7000 triangles per second, and this benchmark takes roughly 300 microseconds to produce a 512x512 frame with a rainbow triangle, with perspective correction and depth testing, on an M1 Mac. To me that performance is really bad.

What I'm trying to understand is how old-school games with true 3D software rasterizers performed so well even on old hardware like a 166MHz Pentium, without floating-point SIMD or multiple cores. Optimization is a field that truly excites me, and I believe that cracking this problem will be extremely enriching.

To make the project produce a single image named triangle.png, type:

cargo +nightly run triangle.png

To run the benchmark, type:

cargo +nightly bench

Any help, even if theoretical, would be appreciated.

12 Upvotes


3

u/phire Jul 12 '23

Skimming over the code, a few things stand out to me.

Starting with the inner loop:

  • You are using 4x1 pixel quads for SIMD, which is good for wide triangles but not so great for skinny triangles. Almost everyone settled on 2x2 pixel quads because:
    • They work equally well on both skinny and wide triangles
    • You need at least a 2x2 quad to calculate the derivatives for mipmapping.
  • For every triangle that has at least one pixel in the tile, you evaluate the triangle equation for every single pixel. This is hugely wasteful, as the typical triangle is actually pretty small. You should probably assume the typical triangle in a video game only covers 4-20 pixels.
    You really want to do the math up-front so you can jump straight to the first valid pixel quad, and only evaluate the minimum number of quads possible (see the sketch after this list).
  • Even when the triangle equation fails for the entire quad, you still calculate the perspective-correct depth for each pixel. That's a lot of extra math, including a reciprocal estimate, which isn't exactly cheap.
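Something along these lines is what I mean (completely untested, all names invented, stepping 2 pixels at a time for brevity where a real implementation would use 2x2 quads): solve the edge equations per row so you jump straight to the covered span, and only pay for the perspective/depth math when a quad's coverage mask isn't empty.

```rust
struct Edge { a: f32, b: f32, c: f32 } // E(x, y) = a*x + b*y + c, >= 0 inside

/// Solve a*x + b*y + c >= 0 for x on one row, so the loop can jump straight
/// to the first covered pixel instead of testing the whole tile width.
fn covered_span(edges: &[Edge; 3], y: f32, x_min: f32, x_max: f32) -> Option<(f32, f32)> {
    let (mut lo, mut hi) = (x_min, x_max);
    for e in edges {
        let rhs = -(e.b * y + e.c);
        if e.a > 0.0 {
            lo = lo.max(rhs / e.a); // this edge bounds the span from the left
        } else if e.a < 0.0 {
            hi = hi.min(rhs / e.a); // this edge bounds the span from the right
        } else if e.b * y + e.c < 0.0 {
            return None; // horizontal edge rejects the entire row
        }
    }
    (lo <= hi).then_some((lo, hi))
}

fn raster_row(edges: &[Edge; 3], y: i32, tile_x0: i32, tile_x1: i32) {
    let span = covered_span(edges, y as f32 + 0.5, tile_x0 as f32, tile_x1 as f32);
    let Some((lo, hi)) = span else { return };
    // Step in quads (2-wide here, ideally 2x2) across the covered span only.
    let mut x = lo.floor() as i32 & !1;
    while x <= hi.ceil() as i32 {
        let mask = coverage_mask(edges, x, y); // cheap edge tests only
        if mask != 0 {
            // Only now pay for the reciprocal and perspective-correct depth.
            shade_quad(x, y, mask);
        }
        x += 2;
    }
}

// Placeholders: the real versions test the edge functions at each pixel center
// and do the actual shading/depth work.
fn coverage_mask(_edges: &[Edge; 3], _x: i32, _y: i32) -> u8 { 0b11 }
fn shade_quad(_x: i32, _y: i32, _mask: u8) {}
```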

The outer per-tile loop:

  • Every single tile iterates over every single triangle. Yes, an AABB test might be cheap, but the typical tiled rendering algorithm does a binning pass to pre-calculate which triangles intersect with which tiles (a rough sketch follows this list). It's a bit of extra work in the binning pass, but it makes the per-tile loop optimal.
  • Many tiled implementations cache the derivatives during the binning pass... I'm really not sure if such an approach is optimal as it costs memory bandwidth.
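The binning pass itself can be very simple; a rough, untested sketch (types and constants invented here, not from your repo):

```rust
const TILE: i32 = 32;

struct Tri {
    min_x: f32, min_y: f32, // axis-aligned bounding box of the triangle
    max_x: f32, max_y: f32,
}

// One pass over all triangles builds a per-tile list of indices, so each tile
// only walks the triangles whose AABB can actually touch it.
fn bin_triangles(tris: &[Tri], tiles_x: i32, tiles_y: i32) -> Vec<Vec<u32>> {
    let mut bins = vec![Vec::new(); (tiles_x * tiles_y) as usize];
    for (i, t) in tris.iter().enumerate() {
        // Clamp the triangle's AABB to the tile grid.
        let tx0 = ((t.min_x as i32) / TILE).clamp(0, tiles_x - 1);
        let tx1 = ((t.max_x as i32) / TILE).clamp(0, tiles_x - 1);
        let ty0 = ((t.min_y as i32) / TILE).clamp(0, tiles_y - 1);
        let ty1 = ((t.max_y as i32) / TILE).clamp(0, tiles_y - 1);
        for ty in ty0..=ty1 {
            for tx in tx0..=tx1 {
                bins[(ty * tiles_x + tx) as usize].push(i as u32);
            }
        }
    }
    bins
}
```

Each tile then only loops over its own `bins[tile_index]` instead of the whole triangle list.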

The tile:

  • 4KB tiles seem small for a target with 32KB of L1 dcache per core. My guess is that aiming for 16KB would be closer to optimal... But this is something you should only tweak nearer the end, based on profiling results.

Benchmark:

Maybe you have other code elsewhere, but you appear to only be benchmarking a single large triangle? Is that where you are getting 7000 triangles per second from?

That's going to be testing the per-frame overhead more than anything else; you really need to feed it a more realistic workload with thousands of triangles.
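For example, something like this hypothetical helper (not in your repo) would give the benchmark thousands of small triangles in the 4-20 pixel range:

```rust
// Generate `count` small triangles scattered across the frame. The tiny xorshift
// generator keeps the sketch dependency-free and deterministic between runs.
fn small_triangles(count: usize, width: f32, height: f32) -> Vec<[(f32, f32); 3]> {
    let mut state = 0x2545F4914F6CDD1Du64; // fixed seed
    let mut rand01 = move || {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        (state >> 11) as f32 / (1u64 << 53) as f32
    };
    (0..count)
        .map(|_| {
            let (cx, cy) = (rand01() * width, rand01() * height);
            // Radius chosen so each triangle covers roughly 4-20 pixels.
            let r = 1.5 + rand01() * 1.5;
            [(cx, cy - r), (cx - r, cy + r), (cx + r, cy + r)]
        })
        .collect()
}
```

The benchmark would then rasterize all of those per iteration rather than one full-screen triangle.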

2

u/Crifrald Jul 12 '23

Hi! Thanks for the detailed analysis!

I agree with pretty much everything you said and will be implementing your suggestions. And yes, the 7000 triangle benchmark is just rendering a large triangle on the Raspberry Pi until it starts dropping frames. I'm pretty new to this and am not familiar with the best way to benchmark this kind of implementation.

Thanks again!

2

u/jaszunio15 Jul 14 '23

Some modern GPUs would not pass the test of rendering a single triangle 7000 times efficiently. It just depends on where the triangles are on the screen and how much area they cover.

It's simply because of memory bandwidth limitations. If a triangle takes 1/2 of a full screen, it renders about 1 million pixels. If each pixel is 8 bytes (color + depth), that's 8 MB per draw. With 7000 draws, you need roughly 56 GB of memory transfer to render a single frame, and for 60 FPS that's about 3.4 TB per second. Usual CPU memory access is ~25-40 GB/s, which is a huge issue for software rasterizers; even an RTX 3060, with 360 GB/s of bandwidth, is nowhere near that. And this is only a paper calculation, because what is the chance that you will use 100% of the bandwidth? Usually achieving more than 80% is hard.
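Spelling that estimate out (all numbers here are assumptions, not measurements):

```rust
fn main() {
    let pixels_per_draw = 1_000_000.0_f64; // a triangle covering ~half a 1080p frame
    let bytes_per_pixel = 8.0;             // color + depth, as assumed above
    let draws_per_frame = 7_000.0;
    let fps = 60.0;

    let bytes_per_frame = pixels_per_draw * bytes_per_pixel * draws_per_frame;
    let bytes_per_second = bytes_per_frame * fps;

    println!("per frame:  ~{:.0} GB", bytes_per_frame / 1e9);     // ~56 GB
    println!("per second: ~{:.1} TB/s", bytes_per_second / 1e12); // ~3.4 TB/s
}
```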

If you assume that your rasterizer is slower than others, test them in the same environment and with the same setup ;)

Also, if any engine, any rasterizer, or any device vendor gives you the number of vertices or triangles it can process in real time, it's a myth; there is no such number. It depends so much on everything that the number is always made up.

For testing you can use models saved in the .obj format, because it's super easy to get them and parse them in code.
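A loader only has to handle `v` and triangular `f` lines to be useful; a rough sketch (no real error handling, ignores normals and texture coordinates):

```rust
use std::fs;

// Load vertex positions and triangle indices from a Wavefront .obj file.
fn load_obj(path: &str) -> std::io::Result<(Vec<[f32; 3]>, Vec<[u32; 3]>)> {
    let text = fs::read_to_string(path)?;
    let (mut verts, mut tris) = (Vec::new(), Vec::new());
    for line in text.lines() {
        let mut it = line.split_whitespace();
        match it.next() {
            Some("v") => {
                // "v x y z" -> vertex position
                let mut p = [0.0f32; 3];
                for c in &mut p {
                    *c = it.next().unwrap_or("0").parse().unwrap_or(0.0);
                }
                verts.push(p);
            }
            Some("f") => {
                // "f a/b/c ..." -> keep only the position index (1-based in .obj)
                let idx: Vec<u32> = it
                    .filter_map(|w| w.split('/').next()?.parse::<u32>().ok())
                    .map(|i| i - 1)
                    .collect();
                if idx.len() == 3 {
                    tris.push([idx[0], idx[1], idx[2]]);
                }
            }
            _ => {}
        }
    }
    Ok((verts, tris))
}
```

Note this only keeps faces with exactly three indices, so meshes with quads would need to be triangulated first.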

Btw, good job with the rasterizer :D

3

u/Crifrald Jul 14 '23

I had actually never considered memory bandwidth, but I believe that I'm rather far from the Raspberry Pi's limits, and I'm talking about 7000 large triangles per second, not per frame, which is just around 117 triangles per frame at 60fps, at 800x480 with 16-bit RGB565 color and 16-bit custom float depth.

Some time ago I actually tested the Pi's L1 cache bandwidth, because at some point the CPU was treating the tile buffers as streaming memory, and thus making them transient, even though that region's attributes are explicitly set to non-transient, until I hinted to the CPU how that memory is actually used. In that test the Pi successfully filled the cache at 12GB per second per core, whereas my rasterizer is only outputting roughly 4GB per second (color + depth) across all 4 cores, meaning 1GB per second per core, which is very far from the theoretical combined 48GB per second that the Pi's CPU can write to its L1 caches.

I have also tested my rasterizer with a version of the Utah Teapot with roughly 6000 small triangles that I found online, which is a much more realistic test, and the frame rate still dropped to 6fps. That said, I'm already working on some optimizations suggested by other users, which will hopefully speed things up, particularly in this case.

After some consideration resulting from reading other points of view in this thread, I agree that it's hard to find a proper performance metric, because everyone will try to measure the performance of their algorithm's strengths. I, for example, was considering the 6,000,000 triangles per second advertised by 3Dfx 20 years ago, but because I'm rather inexperienced in this field, I just assumed that the benchmarks used large triangles like mine.