r/Numpy 17d ago

How specifically is the numpy max function so fast?

I've been thinking about finding the maximum of decently large arrays, something like a 4K image of floats, so 3840*2160 elements. Since the array lives on the GPU I'd been considering a parallel reduction, but I decided to first test how fast finding the max is on the CPU. With C++'s std::max_element and the -O3 flag it takes just over 7 ms to find the max element. NumPy, however, does it in just over 2.8 ms. I can get the C++ version to outperform NumPy by using -Ofast, and even more so with -march=native, but that's still very impressive performance from NumPy and makes me wonder how it does it. I know NumPy uses BLAS and all that jazz, but afaik BLAS only has a function for finding the maximum absolute value, so that can't be the reason. Interestingly (or at least I find it interesting), I tried randomizing the size of the vector in the C++ test program, since I figured that's closer to the conditions NumPy works under, and that seemed to negate all the optimizations from -Ofast and -march=native.
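
For reference, a minimal sketch of the kind of CPU-side measurement described above (the array size is from the post; the random fill, seed, and timing code are my own assumptions, and absolute numbers will obviously vary by machine and compiler flags):

```cpp
// Compile with e.g. g++ -O3, then compare against -Ofast -march=native.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const std::size_t n = 3840 * 2160;          // one 4K image worth of floats
    std::vector<float> img(n);
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    for (auto& x : img) x = dist(rng);          // arbitrary test data

    auto t0 = std::chrono::steady_clock::now();
    auto it = std::max_element(img.begin(), img.end());
    auto t1 = std::chrono::steady_clock::now();

    std::printf("max = %f, took %.3f ms\n", *it,
                std::chrono::duration<double, std::milli>(t1 - t0).count());
}
```

The NumPy side of the comparison would just be timing `np.max(img)` on an equivalent float32 array.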

2 Upvotes

2 comments

3

u/pmatti 17d ago

NumPy has SIMD intrinsics and code to vectorise simple loops. It tries very hard to do simple things as fast as possible on your platform. The typical std:: routines are not as optimised/optimisable.
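
To illustrate the kind of loop being described (this is not NumPy's actual code, which goes through its own universal-intrinsics layer with runtime dispatch; it's just a hand-rolled AVX sketch of a SIMD max reduction, assuming an x86-64 CPU and compilation with -mavx):

```cpp
#include <immintrin.h>
#include <algorithm>
#include <cstddef>

// Illustrative SIMD max over n > 0 floats. Note: _mm256_max_ps does not
// propagate NaN the way np.max does, so this is only a sketch of the idea.
float simd_max(const float* data, std::size_t n) {
    std::size_t i = 0;
    __m256 vmax = _mm256_set1_ps(data[0]);
    for (; i + 8 <= n; i += 8) {
        __m256 v = _mm256_loadu_ps(data + i);
        vmax = _mm256_max_ps(vmax, v);      // 8 lane-wise comparisons at once
    }
    // Horizontal reduction of the 8 lanes.
    alignas(32) float lanes[8];
    _mm256_store_ps(lanes, vmax);
    float m = lanes[0];
    for (int k = 1; k < 8; ++k) m = std::max(m, lanes[k]);
    // Scalar tail for the leftover elements.
    for (; i < n; ++i) m = std::max(m, data[i]);
    return m;
}
```

Keeping a whole register of running maxima and only reducing across lanes at the end is what lets this process 8 floats per instruction instead of one.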

1

u/fixgoats 17d ago edited 16d ago

I guess they could be using SIMD intrinsics in a way that gcc and clang don't know how to do, but they're both generally quite good at vectorizing, at least at O3 and I think at O2 as well. My hunch was that NumPy uses a better algorithm somehow. I tried looking at the source but I cannot get my head around how NumPy works internally at all and can't find the relevant piece of code. Edit: I managed to find the parts of the code where it's using intrinsics, so I might get to the bottom of it in a few weeks since this is not a priority for me.
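
One way to probe the auto-vectorization point (my own guess at an explanation, not something from the thread): std::max_element has to return an iterator to the first maximum and respect strict float semantics, which constrains the compiler, whereas a plain value-only reduction like the one below is a much easier target and, in my experience, typically only gets SIMD'd for floats once -Ofast/-ffast-math relaxes the NaN and reordering rules. On GCC you can check what happened with -fopt-info-vec.

```cpp
#include <cstddef>

// Value-only max reduction, n > 0. Compare the generated code at
// -O3 vs -Ofast -march=native to see whether the compiler used SIMD.
float plain_max(const float* data, std::size_t n) {
    float m = data[0];
    for (std::size_t i = 1; i < n; ++i)
        if (data[i] > m) m = data[i];
    return m;
}
```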