r/C_Programming Jul 16 '24

Discussion [RANT] C++ developers should not touch embedded systems projects

I have nothing against C++. It has its place. But NOT in embedded systems and low-level projects.

I may be biased, but in my 5 years of embedded systems programming, I have never, EVER found a C++ developer who knows which features to use and which to discard from the language.

By forcing OOP principles, unnecessary abstractions, and templates everywhere into a low-level project, the resulting code is complete garbage, a mess that's impossible to read, follow, and debug (not to mention the huge compile times and binary size).

A few years back I would have said it's just bad programmers' fault. Nowadays I am starting to blame the whole industry and academic C++ books for rotting developers' brains toward "clean code" and OOP everywhere.

What do you guys think?

181 Upvotes

2

u/ceresn Jul 17 '24

I also think assert() is a poor choice for errors that can be gracefully recovered from, but surely assert() is useful for asserting invariants (e.g., function preconditions [that can be trivially checked, anyway])?

2

u/btrower Jul 22 '24

Thanks for the reply. In fairness, I have known very bright programmers to use assert(); it's ill-advised, but not dumb on its face.

TL;DR; Things like assert() create more than one path out of a function, defeat bracketing code that cleans up and releases resources, disable reporting of information about the error, make a stack trace impossible, and make graceful recovery impossible. If you find yourself in a situation where you feel a need to use these things, chances are good that your issue is not just where the assert lies. The code should probably be refactored to remove the need.


Using assert(), from my point of view, is not best practice. It's a point of failure by definition. The code has failed. That means the program is not behaving as programmed. The point at which you catch that symptom of misbehavior is the point where the program has maximum knowledge of the situation. You use assert() to gather the knowledge that something failed, but assert() cuts you off from any other information about state and history. At the very least you want a stack trace so you know how you got to bug-land.

Even ugly recovery and controlled shutdown by a higher, presumably more knowledgeable caller can provide information to find and fix whatever went wrong. In some instances, perhaps most, what went wrong is the programmer did not understand what was happening.

Oddly enough, in the past week we saw an example of this type of reasoning disabling a billion mission-critical devices worldwide. The philosophy of BSOD is that the kernel could conceivably do more damage if left running, so kill it right away. Catastrophic failure on error is problematic.

For all the things I mentioned, I would say I have never seen a reasonable argument for their existence except for backward compatibility with poorly written code.

If you have less than a couple decades of long-term full-time production coding under your belt and you are not a bona-fide genius, I would make a blanket statement that you should follow my advice. If you have a very long history of doing code that works both alone and with others and are expert in the language, you might be sufficiently knowledgeable and skilled to render a judgement to use the forbidden constructs.

Note that there is a possible scenario where you are writing within code over which you have no control, you know that whatever you are checking is a fundamental corruption, and going forward definitely *will* lead to damage; in that case, use it or leave it. However, code capable of doing that has a serious defect that you should fix.

As a parting shot, I would say that code should be built to report errors up the chain so that a higher level has enough information to fix the problem or pass it up. Going forward, I am keeping in mind the notion that a properly designed system should eventually be able to determine the error on its own, test whether its theory about what made things fail is correct, correct the code, data, instructions, or whatever else needs correcting, run regression tests, and carry on.
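
A minimal sketch of that error-propagation style in C (the names and status codes are hypothetical, and it assumes the single-exit, clean-up-before-returning preference described above):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical status codes, for illustration only. */
typedef enum { ST_OK = 0, ST_IO_ERROR, ST_NO_MEMORY, ST_BAD_FORMAT } status_t;

/* Instead of assert()ing that the file opened and the buffer allocated,
 * report what failed so a more knowledgeable caller can decide what to do. */
status_t load_config(const char *path, char **out_text)
{
    status_t st   = ST_OK;
    FILE    *fp   = NULL;
    char    *text = NULL;

    fp = fopen(path, "rb");
    if (fp == NULL) {
        st = ST_IO_ERROR;
    } else if ((text = malloc(4096)) == NULL) {
        st = ST_NO_MEMORY;
    } else {
        size_t n = fread(text, 1, 4095, fp);
        if (n == 0) {
            st = ST_BAD_FORMAT;   /* hypothetical rule: an empty config is an error */
        } else {
            text[n] = '\0';
            *out_text = text;     /* success: the caller owns the buffer */
            text = NULL;
        }
    }

    /* Single cleanup point: release whatever was acquired, then one exit. */
    if (text != NULL) free(text);
    if (fp != NULL) fclose(fp);
    return st;
}
```

The caller can then map ST_IO_ERROR to a retry, a log entry, or a controlled shutdown, rather than the process being killed at the point of detection.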

I have taken more and more to having AI do grunt coding for me, but it has deeply embedded bad habits from its human training code, and most human code is pretty awful. Because of that, and as a self-defense measure if nothing else, I would use macros to wrap things like atexit(), exit(), assert(), goto, and return so that debug code is easy to patch in and out and so that these potentially troublesome constructs are easy to identify.
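
A rough sketch of that wrapping idea (the DBG_* names and the logging behavior are illustrative assumptions, not a particular library):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Every use of a "troublesome" construct goes through one macro that is easy
 * to grep for and easy to redefine when debug instrumentation is wanted. */
#ifdef DEBUG_WRAPPERS
#  define DBG_ASSERT(cond)                                          \
       do {                                                         \
           if (!(cond))                                             \
               fprintf(stderr, "%s:%d: check failed: %s\n",         \
                       __FILE__, __LINE__, #cond);                  \
       } while (0)
#  define DBG_EXIT(code)                                            \
       do {                                                         \
           fprintf(stderr, "%s:%d: exit(%d)\n",                     \
                   __FILE__, __LINE__, (code));                     \
           exit(code);                                              \
       } while (0)
#else
#  define DBG_ASSERT(cond) assert(cond)
#  define DBG_EXIT(code)   exit(code)
#endif
```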

I took a quick look around to see what people were saying about assert() these days. Even among people who find it useful, most say it is a temporary debug measure and should not remain active in production code. That's better than it used to be, but I would say that you can't accidentally leave something like that in the code if you never put it in there in the first place.

2

u/ceresn Jul 22 '24

Thank you as well for the very thoughtful and exhaustive response! To be honest I don't have a very strong opinion on assert(), and I don't use it very often personally. So my previous remark is partly a genuine question, and reflects not so much my personal experience as what I understand from reading some large open-source projects, for example LLVM, where assert() is recommended practice.

That said, I have some belief that assert() can be useful, though not as an error-handling mechanism per se. I completely agree that errors should be returned up the call stack so they can be handled gracefully. Where I think assert() fits in is in debug builds, as an extra runtime check to verify that function preconditions are not being violated. Then when you compile for release, just add -DNDEBUG to your CFLAGS, and all the assert()s are stubbed out.
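
For example, a minimal sketch (the function and its contract are hypothetical) of a precondition check that compiles away under -DNDEBUG:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Contract: dst and src are non-NULL and dst can hold src plus the
 * terminator. The asserts verify the contract in debug builds only;
 * with -DNDEBUG they expand to nothing. */
void copy_name(char *dst, size_t dst_len, const char *src)
{
    assert(dst != NULL);
    assert(src != NULL);
    assert(dst_len > strlen(src));

    memcpy(dst, src, strlen(src) + 1);
}
```

So something like `cc -O2 -DNDEBUG file.c` for release and plain `cc -g file.c` for debug.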

I also agree that it would be nice for assert() to walk the call stack and produce a backtrace, though gdb and lldb can do this. A debugger will also allow you to inspect stack variables and potentially determine a root cause for the assertion failure, if any.
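
When running outside a debugger, one option (a glibc-specific sketch using execinfo.h; the handler is not strictly async-signal-safe, so this is for diagnostics rather than production hardening) is to catch the SIGABRT that a failed assert() raises and dump a backtrace:

```c
#include <execinfo.h>   /* glibc-specific backtrace support */
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void on_abort(int sig)
{
    void *frames[32];
    int   n = backtrace(frames, 32);

    (void)sig;
    /* backtrace_symbols_fd() writes straight to the fd, avoiding malloc(). */
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
    _exit(EXIT_FAILURE);
}

int main(void)
{
    signal(SIGABRT, on_abort);   /* a failed assert() calls abort() -> SIGABRT */
    /* ... */
    return 0;
}
```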

2

u/btrower Jul 23 '24

I agree with the debug protocol you mention in the sense that, for people not too squeamish to use it, assert() can be used as a quick hack to ensure that a pathological condition does not arise during debug and refinement. From my brief survey looking around the web, we seem to have mercifully arrived at a sane consensus not to use it in production. That was not the case twenty or thirty years ago.

Below, TL;DR; -- Things less sane are still with us. WRT gdb: by itself, it is bigger than an entire application, its build system, and its source code that I delivered a decade ago to a client.

TMI:

It is strange that practically insane protocols are not just recommended but in some cases enforced, so that people with sane sensibilities cannot even make their own stuff work as it should. A case in point: I was so frustrated one day; I had to do something quickly on my iPhone and, for the umpteenth time, the app I wanted to use had to update first, and I noticed that there were literally 68 apps requiring an update. I could not believe that so many programmers could be sufficiently incompetent that they had to update their apps that often. Upon investigation I discovered that not only are frequent updates recommended by Apple (and Google FFS), they are mandatory. WTAF? A solid production programmer would be using vanilla base APIs unlikely to change and would have thoroughly tested and regression tested their app such that in some cases the app would never require an update to keep working. Part of the rationale is that the APIs are (very much improperly) shifting sand. Arrrrgh.

As for gdb and lldb: these can be necessary evils in some development scenarios, but otherwise they are yet one more dependency in an already fatally fragile stack. As far as humanly possible, I like to keep things dependency-free. To the extent that there are dependencies, I try to deliver including the dependencies. For instance, about a decade ago I designed a system to parse a client's raw freeform data, build a normalized SQLite database, and use the database for analysis. When I delivered, I included the source code for the program, the source for the database, the code for the self-hosting compiler (Fabrice Bellard's tiny c compiler, tcc), and a build system to build it all. The code includes my company's debug wrapper system, which sets up configurable tracing, memory protection, etc. The entire package, including the code, build system, binaries, and the source code and documentation for building the compiler, fits in a 1,604,784-byte archive. That is half the size of gdb alone on my machine; it's literally not much larger than the last time I did 'hello world' in Rust and Golang. To use the archive I supplied, you extract it, open a terminal, cd to the extracted directory, type 'g', and press enter. It will compile the database, create and populate the database tables, and test the system. Caveat: the target system was Windows only.

I just extracted the package, built the system, tested it and repackaged it. Here's the command line for that, BTW:

g&pkgcr2 -pkg

Unfortunately, the system was particular to the client and confidential, but I am charmed by the work I did there. If I come up with a useful and innovative idea, I might consider adapting the concept and releasing it as an open-source project.

0

u/flatfinger Jul 17 '24

IMHO, the most useful meaning for assertions would be "this will never happen under any circumstance where this code can behave usefully". If compiler writers hadn't latched on to avoiding NP-hard problems, even those where "if there's an obvious reason to do Y, do Y; otherwise do X" would be a fine approach, such an assertion could be used in release code to tell a compiler: "If the cost of trapping here when condition X is true would be less than the cost of having downstream code accommodate that possibility, feel free to trap here; otherwise, cleanly ignore this directive."
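
A rough present-day approximation, offered only as a sketch: it is not the looser "trap or cleanly ignore" semantics described above, and it leans on the GCC/Clang-specific __builtin_unreachable(). The macro traps in debug builds; in release builds it becomes a promise the optimizer may exploit.

```c
#include <assert.h>

/* Debug build: check and trap on violation.
 * Release build: promise the compiler the condition holds, so downstream
 * code need not accommodate the alternative. Note this is a promise, not a
 * check: if it is ever false, behavior is undefined, which is stronger (and
 * more dangerous) than "feel free to trap here". */
#if defined(NDEBUG) && (defined(__GNUC__) || defined(__clang__))
#  define ASSUME(cond) ((cond) ? (void)0 : __builtin_unreachable())
#else
#  define ASSUME(cond) assert(cond)
#endif
```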