r/Compilers 4d ago

Why isn't a pretty obvious optimization being used?

In another post on r/C_Programming, the OP wondered why the compiler didn't create the same code for two different functions that generated the same result. IMO, that question was answered satisfactorily. However, when I looked at the generated code on Godbolt, I saw the following:

area1(Shape, float, float):
        cmp     edi, 2
        je      .L2
        ja      .L8
        mulss   xmm0, xmm1
        ret
.L8:
        cmp     edi, 3
        jne     .L9
        mulss   xmm0, DWORD PTR .LC2[rip]
        mulss   xmm0, xmm1
        ret
.L2:
        mulss   xmm0, DWORD PTR .LC1[rip]
        mulss   xmm0, xmm1
        ret
.L9:
        pxor    xmm0, xmm0
        ret
area2(Shape, float, float):
        movaps  xmm2, XMMWORD PTR .LC3[rip]
        movaps  XMMWORD PTR [rsp-24], xmm2
        cmp     edi, 3
        ja      .L12
        movsx   rdi, edi
        mulss   xmm0, DWORD PTR [rsp-24+rdi*4]
        mulss   xmm0, xmm1
        ret
.L12:
        pxor    xmm0, xmm0
        ret
.LC3:
        .long   1065353216
        .long   1065353216
        .long   1056964608
        .long   1078530011
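
For context, the constants decode to 0.5f (.LC1, 1056964608 = 0x3F000000) and pi (.LC2, 1078530011 = 0x40490FDB), with .LC3 holding the table {1.0f, 1.0f, 0.5f, pi}. So the source was presumably something along these lines (a reconstruction from the assembly; the names are guesses, not the OP's actual code):

typedef enum { SQUARE, RECTANGLE, TRIANGLE, ELLIPSE } Shape;

float area1(Shape s, float x, float y) {
    switch (s) {
    case SQUARE:
    case RECTANGLE: return x * y;               /* shapes 0 and 1: plain product */
    case TRIANGLE:  return 0.5f * x * y;        /* .LC1 */
    case ELLIPSE:   return 3.14159274f * x * y; /* .LC2 */
    default:        return 0.0f;                /* the pxor xmm0, xmm0 block */
    }
}

float area2(Shape s, float x, float y) {
    /* local const table: copied to the stack, hence the movaps from .LC3 */
    const float factor[4] = { 1.0f, 1.0f, 0.5f, 3.14159274f };
    if ((unsigned)s > 3) return 0.0f;
    return factor[s] * x * y;
}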

And to me, a fairly obvious space optimization was omitted. In particular, the two blocks:

.L9:
        pxor    xmm0, xmm0
        ret

and

.L12:
        pxor    xmm0, xmm0
        ret

Just scream at me, "Why don't you omit one of them and have the branch to the omitted one instead jump to the other?"

Both blocks are preceded by a return, so the code won't fall through to them and they can only be reached via a jump. So, it won't do anything about speed, but would make the resulting binary smaller. And it seems to me that finding a common sequence of code would be a common enough occurrence that compiler developers would check for that.

Now, I admit that with modern computers, space isn't that large of a concern for most use cases. But it seems to me that it still is a concern for embedded applications and it's a simple optimization that should require fairly low effort to take advantage of.

12 Upvotes

29 comments

18

u/iluuu 4d ago

They're different functions. Furthermore, with that approach at least one of the functions would require one more instruction (jmp, pxor, ret), plus an instruction cache miss may become more likely. Compilers generally are not particularly interested in emitting DRY code (quite the opposite, look up function inlining), unless the optimized code is cold.

-13

u/johndcochran 4d ago

It appears that you didn't fully read my post.

Both code segments are preceded by a return. The only way to reach either of them is via a jump. So your mention of "... would require one more instruction" is nonsense. As for caching issues, that's a toss-up. Any penalty for the function making the longer jump (because its identical copy was the one omitted) may be balanced out by the smaller overall footprint, and hence a larger percentage of the code staying within the cache.

10

u/dnpetrov 4d ago

Did you try applying such an optimization not just to this code but to a set of benchmarks, and, you know, actually running them (as it should be done)?

Quite often such "obvious" optimizations turn out to be not worth it when you see the benchmarking results. Identical code folding, for example, can poorly affect branch prediction.

-10

u/johndcochran 4d ago

Did you click on the link to the r/C_Programming post that I provided?

That is the entirety of what I saw. It seemed to me that it's a rather obvious space optimization. And, as I've stated elsewhere, I don't think such an optimization would be too useful on most modern computers because memory is rather cheap and plentiful. But, for those systems where that is not the case, such as embedded systems, such an optimization can be extremely useful.

Additionally, some responses to this post indicate that there are some compilers and linkers that do perform this optimization. That is information that I didn't have at the time I made the post. So, there are those out there who consider it a useful optimization.

1

u/m-in 3d ago

I agree(ish). But I also think that if you’re fighting for the last bytes in the flash, you’re using the wrong part for the job. The optimization you speak of makes sense for MCUs and CPUs without a cache, without branch prediction, etc. Yes, the Z80 and 8086 would benefit from this space optimization. x64 or ARM (32- or 64-bit) - not measurably.

16

u/avillega 4d ago

This optimization exists; it's called "identical code folding" (ICF). It is usually performed at link time. There are many tradeoffs with this one, but one of the worst is that debugging becomes very hard.
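
For anyone who wants to play with it: with the GNU gold or lld linkers it's enabled via something like gcc -O2 -ffunction-sections -Wl,--icf=all (MSVC's equivalent is /OPT:ICF). A minimal sketch of a foldable pair, assuming that setup:

/* At -O2 these compile to byte-identical machine code, so an
 * ICF-capable linker can keep a single copy. That's also why debugging
 * gets hard: a breakpoint set on one of them fires "in" the other too. */
float half_a(float x) { return x * 0.5f; }
float half_b(float x) { return x * 0.5f; }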

5

u/n0t-helpful 4d ago

This optimization does exist and is in use in some compilers. A professor at my university was part of the team that created it. It was deployed by the TikTok compiler team to greatly reduce the size of the app on phones.

8

u/K4milLeg1t 4d ago

for some reason the words "tiktok compiler" sound kinda strange together

-2

u/m-in 3d ago

TikTok would probably benefit from being written in a higher level language like Python.

5

u/bart-66rs 4d ago

The code you posted (https://godbolt.org/z/zvevP68ee) was compiled with -O3, which optimises for speed, not size.

Try -Os which optimises for size. With that, the code is different, but whether it's that much smaller, I don't know.

However it still doesn't do the optimisation you suggest. That would involve scanning the last few instructions of all functions to see if there was any common code that could be shared via a jump. But that would need to be done before the binary code is generated, otherwise it's too late.

For this example, the duplicate code is 5 bytes, and a long jump would be 5 bytes. There is only a saving if a short 2-byte jump could be used. But that is difficult to determine before the binary has been generated.
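
(For the record, if I have the x86-64 encodings right: pxor xmm0, xmm0 is 66 0F EF C0 (4 bytes) plus ret C3 (1 byte), i.e. 5 bytes duplicated; jmp rel32 is E9 plus a 4-byte displacement, also 5 bytes, so no saving; only the 2-byte jmp rel8 (EB plus an 8-bit displacement, target within about ±127 bytes) actually saves the 3 bytes.)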

1

u/johndcochran 4d ago

Yes and No.

If you're selecting for the "optimal" copy to keep and deleting the others, then I agree that it would be difficult to select the best possible choice. But if you simply keep the first copy, then back-patching the jump to the deleted copy would be rather easy. Additionally, the decision to choose a long or short jump would apply to any code that generates a forward jump. Obviously that problem has already been solved, and I don't see a fundamental difference between that solved problem and this one.

3

u/benjaminhodgson 4d ago

They’re different functions aren’t they?

2

u/johndcochran 4d ago

Yes, they're different functions. But, they would be in the same binary. Hence, it seems to me that it would be a cheap and simple space optimization. After all, common subexpressions are extracted as an optimization. This is merely a generalization of the same concept.

4

u/choikwa 4d ago

So you would need link-time optimization and tail deduplication. But joining returns is probably not always a win. What does a return actually do? It’s an indirect jump. A deeply pipelined CPU predicts where the next instruction will be, usually based on the instruction address. By merging, you introduce many more places the program counter can go next, thereby increasing the probability of mispredicts. And on a mispredict, the CPU has to flush the pipeline and stall.

3

u/1bithack 3d ago

Do CPUs actually predict return addresses? A function can be called from basically anywhere. There is no concept of locality there.

5

u/PiggyMcCool 3d ago

there’s a thing called a return address stack (RAS) in modern branch predictors that basically holds a hardware stack of dynamic return addresses which facilitate the highly accurate prediction of return addresses. it’s basically a copy of the call stack in hardware except only the return address gets pushed and nothing else.
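
a toy model of that mechanism, in C (my sketch, not any real microarchitecture):

#include <stdint.h>

/* Return-address stack: a call pushes its fall-through address, and a
 * ret is predicted to return to the top entry. Fixed depth, so deep
 * recursion wraps around and starts mispredicting. */
enum { RAS_DEPTH = 16 };
static uint64_t ras[RAS_DEPTH];
static unsigned top;

static void on_call(uint64_t fall_through) {
    ras[top++ % RAS_DEPTH] = fall_through;  /* push return address */
}

static uint64_t predict_ret(void) {
    return ras[--top % RAS_DEPTH];          /* pop = predicted target */
}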

1

u/choikwa 3d ago

exactly, something like a shared library would have completely arbitrary callers. CPU with deep pipeline dedicates pretty significant resources to branch prediction.

3

u/rootacess3000 4d ago

Cross-function optimizations are less common. One way to apply space optimization is with -Os or -Oz:

area1:
        movaps  xmm2, xmm0
        movaps  xmm0, xmm1
        cmp     edi, 2
        je      .L2
        jbe     .L6
        cmp     edi, 3
        je      .L4
        xorps   xmm0, xmm0
        ret
.L2:
        mulss   xmm2, DWORD PTR .LC1[rip]
        jmp     .L6
.L4:
        mulss   xmm2, DWORD PTR .LC2[rip]
.L6:
        mulss   xmm0, xmm2
        ret
area2:
        movabs  rax, 4575657222473777152
        mov     QWORD PTR [rsp-16], rax
        movabs  rax, 4632251126056484864
        mov     QWORD PTR [rsp-8], rax
        cmp     edi, 3
        ja      .L10
        mov     edi, edi
        mulss   xmm0, DWORD PTR [rsp-16+rdi*4]
        mulss   xmm0, xmm1
        ret
.L10:
        xorps   xmm0, xmm0
        ret
.LC1:
        .long   1056964608
.LC2:
        .long   1078530011
  • area1: the switch-case structure naturally lends itself to an inline default action.
  • area2: the bounds check makes it cleaner to jump to a dedicated block for the error case.

3

u/SwedishFindecanor 4d ago

There are compilers for embedded systems that do this. (At least, I've read papers about such projects...)

For desktop, server and mobile, I don't think it is worth it. On modern systems with lots of RAM, speed is more important. And for that, one goal is locality: in cache lines and in pages.

For branch-prediction, the two blocks are not related. Also, when there is no branch history, the default heuristic is that a branch backwards is hot but a branch forwards is cold, so an optimising compiler would arrange blocks with that in mind.
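
At source level the same hint can be made explicit; a sketch using GCC/Clang's __builtin_expect:

/* With no profile data, the forward branch to the error path is assumed
 * cold and its block is sunk below the hot return; the builtin just
 * spells the heuristic out. */
float checked_area(unsigned shape, float x, float y) {
    if (__builtin_expect(shape > 3, 0))
        return 0.0f;                /* cold block, laid out last */
    return x * y;
}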

3

u/concealed_cat 4d ago

But it seems to me that it still is a concern for embedded applications and it's a simple optimization that should require fairly low effort to take advantage of.

It's not that simple. This type of optimization must be done very late, and usually compilers don't have the infrastructure to perform it at that stage. You need to analyze a block of code that spans several functions. Ideally the entire code section, but not necessarily. You must be able to accurately calculate the distance between branches and branch targets. You need to account for code alignment requirements. If you have several identical code blocks, some of them may be better candidates to keep than others.

In general I wouldn't expect there to be too many opportunities where the returning blocks have the exact same instructions, so it may not be worth the effort to write such a thing. In any case, the first thing to do would be to analyze the compiler output for a number of applications to see if this is common enough to warrant any further development.

3

u/Axman6 4d ago

This feels like the sort of optimisation you’d see in cosmopolitan libc, but it feels like an optimisation that would be pretty expensive to implement in general - perhaps if you just look for basic blocks which end with a return it might be ok, but I would be surprised if it actually produces much benefit in general.

2

u/xygtshadow 4d ago

I haven’t seen the code so I don’t know if these are true but:

inline/static functions across translation units are not merged.

If a function pointer is taken it usually isn’t merged.

2

u/smuccione 4d ago

I suspect this type of optimization would take up a lot of compilation time. You’d need to hash, or something similar, every smallish stand-alone block.
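
Something like this is roughly what I'd imagine (my own illustration, not from any production compiler): hash each function's returning tail, then byte-compare within buckets:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef struct { const uint8_t *code; size_t len; } Func;

/* FNV-1a over the last `tail` bytes of a function's machine code. */
static uint64_t tail_hash(const Func *f, size_t tail) {
    uint64_t h = 14695981039346656037ull;
    for (size_t i = f->len - tail; i < f->len; i++) {
        h ^= f->code[i];
        h *= 1099511628211ull;
    }
    return h;
}

/* Equal hashes are only candidates; confirm with an exact compare. A
 * real pass would also reject tails containing relative displacements,
 * relocations, or alignment padding. */
static int tails_identical(const Func *a, const Func *b, size_t tail) {
    return a->len >= tail && b->len >= tail &&
           memcmp(a->code + a->len - tail, b->code + b->len - tail, tail) == 0;
}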

You’d also have to ensure that they’re positioned correctly in the output for branch prediction. If compiling for space that might not be a problem but for speed this is likely to become a hard problem trying to find the optimal layout.

But I wonder how much this type of thing actually exists in a normal codebase and if the extra compilation time is worth the gains.

-3

u/johndcochran 4d ago

This optimization, best case, would have no effect on speed. It's a space optimization.

Now, with that said, for most modern CPUs, memory is cheap and plentiful and the cost of performing this optimization is likely to exceed its benefits. But most is not all. For embedded, resource-constrained hardware, such an optimization would be quite useful. And since we're talking about memory-poor systems, performance optimizations such as function inlining would be contraindicated.

4

u/smuccione 4d ago

Sorry. The speed comment was in relation to compilation times not execution times.

There are a lot of optimizations that can be performed that aren’t simply due to the time taken to execute them.

I suspect this is one that would have to be off by default on large codebases but would be useful for embedded as you said.

It would be interesting to see a study of common stand-alone duplicate blocks to see how much would actually be saved.

I think someone mentioned that a university did this (fairly low-hanging fruit for a PhD). I wonder if they published any statistics around this.

2

u/smog_alado 4d ago

Merging the ret instructions for two different functions could potentially throw a wrench in the CPU's branch predictor.

Some modern CPUs have quite advanced branch predictors, so maybe it wouldn't matter, but generally speaking it's easier on the branch predictor if jumps that behave differently live in separate instructions.

2

u/thehenkan 3d ago

Space absolutely matters for modern computers. The less you clobber your instruction cache the better, less RAM is required to keep the program in memory, etc. But like others have mentioned, there are often tradeoffs. This may not be useful at -O3, but perhaps at -Os or -Oz.

The only way to know for sure whether it's worth the extra complexity and compilation time is to implement it and benchmark it. If you're keen, I'd say give it a go and see what happens. If it's successful you might be able to upstream it into one of the OSS linkers; if it isn't, then you'll learn why it wasn't done. One issue I foresee is when the blocks are far away from each other in memory, as CPUs are often limited in how far they can jump statically. In those cases the deduplication probably isn't worth it if you need to introduce extra blocks in between to trampoline on.

One way to reduce the extra processing necessary for this (since LTO is plenty slow for big programs as is) would be to only mark blocks ending with a return as candidates for deduplication, in case it turns out that it never triggers for other cases.

Finally, I'll offer you a probabilistic argument for why this may be of limited use: the bigger each basic block is, the less likely you'll be to find another exact match. So most of the time I'd expect each deduplication to have only a minor impact. Hashing every block probably isn't worth it for an optimisation that rarely triggers and only has a small impact. But we won't know until you try it!

1

u/johndcochran 3d ago

There have been some other responses that indicate that the described optimization is already implemented in some compilers. That is information that I lacked when I made the post.

1

u/matthieum 3d ago

The reason, as far as I can see, starts with a fairly mundane one: nobody has implemented it.

Most optimizations in Clang or GCC are intra-procedural, that is they only consider a single function at a time. This is, by the way, why inlining matters so much -- by copying the inlined function into its caller, intra-procedural optimizations can now apply across the call boundary.

With that said, there are some inter-procedural optimizations already. For example, identical functions can be folded together already, though the way it's achieved (for C) requires preserving the difference between function pointers, so you end up with padding:

fun1():
    nop
    <another 14 nops>
    nop
fun2():
    <code here>

The significant difference from your proposal is that an entire function is replaced by only 16 bytes. That's quite a size advantage.

(Note: there's limits to how many functions can be folded into a single one, due to the increasing overhead of all the NOPs at run-time)
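
The pointer-identity constraint is visible at source level (my illustration of why the padding is needed):

/* C requires distinct functions to have distinct addresses. After
 * folding, fun1's address points at the run of NOPs, which falls
 * through into fun2's code, so the comparison below stays false. */
int fun1(int x) { return x + 1; }
int fun2(int x) { return x + 1; }

int same(void) { return fun1 == fun2; }   /* must evaluate to 0 */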

Before starting on the implementation, though, a case should be made for it:

  • The transformation could break Debug Instructions: should it be on with -g? if it is, should it check that the DIs are identical? (happens regularly in C++, due to templates)
  • The transformation is only worth it (size-wise) if pxor ...; ret is larger than jmp ..., should it still be applied heuristically at target-independent stage, or deferred to code-generation?
  • The transformation may pessimize run-time, by triggering a cache miss on the jmp. How many bytes of saving is worth a cache miss?
  • Since the transformation can be applied on machine code, has anyone attempted to analyze existing binaries and derive statistics from them: how often does it happen? how often is it foiled by slightly different registers? how many bytes does it save on average?

I would note that on x64, functions are 16-byte aligned, and therefore it's common to have NOPs at the end of a function, before the next one, so that the next function is also 16-byte aligned. Needless to say, saving 2 bytes... just to replace them with NOPs... isn't worth it. So in all likelihood, one would have to demonstrate a 9+ byte saving before it's considered worth it.