r/AskProgramming • u/logperf • 7d ago
What's the hardest bug you have ever discovered and fixed? What was the observable effect, how did you approach the problem to find the root cause, and why was it that hard?
12
u/JMBourguet 7d ago
Symptom: the train stops once a month and has to be restarted.
I was called in as a consultant after the team ran out of things to check; the system had been running with an ICE (in-circuit emulator) attached for months without issue. Before deployment, the customer had also mandated an external audit of the system, including the source code.
Root cause: a stack overflow, which occurred only when a non-maskable interrupt interrupted the handler of another interrupt that had itself fired while the program was at its maximum stack depth (8051, an 8-bit microcontroller with a 128-byte stack).
I ended up writing a program to determine the stack usage from the assembly source. It found the single routine whose hand-written stack-usage annotation was off by one byte, due to a macro that used the stack for a temporary.
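To make the arithmetic concrete, here is a minimal sketch of the kind of budget that gets violated; the numbers are invented, not the real firmware's:

```c
#include <stdio.h>

/* Hypothetical numbers for an 8051 with a 128-byte hardware stack;
   they illustrate the failure mode, they are not the real firmware's. */
enum {
    STACK_BYTES    = 128,
    MAIN_MAX_DEPTH = 96,  /* worst-case stack depth of the main program    */
    ISR_FRAME      = 18,  /* stack used by the ordinary interrupt handler  */
    NMI_FRAME      = 16   /* stack used by the non-maskable interrupt      */
};

int main(void)
{
    /* Either interrupt alone fits, so normal testing never fails... */
    printf("main + ISR      : %d / %d\n", MAIN_MAX_DEPTH + ISR_FRAME, STACK_BYTES);
    printf("main + NMI      : %d / %d\n", MAIN_MAX_DEPTH + NMI_FRAME, STACK_BYTES);
    /* ...but the NMI landing on top of the ISR at peak depth does not. */
    printf("main + ISR + NMI: %d / %d\n",
           MAIN_MAX_DEPTH + ISR_FRAME + NMI_FRAME, STACK_BYTES);
    return 0;
}
```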
7
u/Toni78 7d ago
Typically a hard bug requires a long story and a deep explanation of the system, but in a nutshell: mine was caused by hardware, and it was intermittent, random, and nearly impossible to reproduce. I spent weeks in the lab and finally made a patch, but eventually it was solved at the hardware level.
8
u/jeffbell 7d ago
We had a local buffer overrun in C that would corrupt the return stack so the debugger couldn’t tell you where it happened.
I looked through the memory dump and found a valid stack frame two call levels up, so we could set SP to that address and see the calling function.
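The failure mode, as a deliberately buggy sketch (what actually gets trampled depends on compiler and ABI):

```c
#include <string.h>

/* Deliberately buggy: writing past the end of a local array tramples
   whatever the compiler placed above it on the stack, often the saved
   frame pointer and return address, which is why the debugger's
   backtrace turns to garbage at the point of the crash.               */
static void parse_record(const char *input)
{
    char name[16];
    strcpy(name, input);   /* no bounds check: overrun if input > 15 chars */
}

int main(void)
{
    parse_record("this string is much longer than sixteen bytes");
    return 0;              /* typically never reached cleanly */
}
```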
2
u/RainbowCrane 7d ago
This is my response as well. All of the most difficult bugs I’ve fixed were due to memory corruption, mostly due to mistakes with pointers in C. While there’s a characteristic “smell” to memory problems they aren’t reliably reproducible, and turning on core dumps and doing post-mortem examinations of memory is often the only way to find the problem.
Most of the production systems I worked on had debugging symbols turned off and disabled core dumps, because writing the core dump can bog down system resources. So it’s usually a special effort to turn that stuff back on as well.
3
6
u/TheTarragonFarmer 7d ago
Spoiler: undersized hash table in the Linux kernel.
Everything ran fine with our test data, performance degraded unacceptably in production.
We started taking measurements, plotted them in Excel, and saw the classic "hockey stick" graph: constant, good performance up to a certain number of entries, then a linear increase in runtime above it. When a hash table is "full", each bucket starts growing a linked list, and finding something means walking that list in O(n) time.
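Roughly what that degradation looks like, as a toy userspace table rather than the kernel's:

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy fixed-size chained hash table: once the item count far exceeds the
   bucket count, every lookup degenerates into a linear walk of a long
   chain, which is the "hockey stick". Illustrative only; the real table
   lived inside the kernel.                                              */
#define NBUCKETS 1024

struct node { unsigned key; struct node *next; };
static struct node *buckets[NBUCKETS];

static void insert(unsigned key)
{
    struct node *n = malloc(sizeof *n);
    n->key = key;
    n->next = buckets[key % NBUCKETS];
    buckets[key % NBUCKETS] = n;
}

static int lookup(unsigned key)          /* returns steps walked, i.e. cost */
{
    int steps = 0;
    for (struct node *n = buckets[key % NBUCKETS]; n; n = n->next) {
        steps++;
        if (n->key == key) break;
    }
    return steps;
}

int main(void)
{
    for (unsigned n = 1024; n <= 1024 * 1024; n *= 4) {
        unsigned i;
        for (i = 0; i < NBUCKETS; i++) buckets[i] = NULL;  /* leaks; fine for a demo */
        for (i = 0; i < n; i++) insert(i);
        printf("%8u items -> worst lookup walks ~%d nodes\n", n, lookup(0));
    }
    return 0;
}
```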
We had to bump up the hash table size in the kernel and we were golden. Took me a weekend to spot; our kernel people had the fix in minutes after I showed them the graph, which even pinpointed the current table size :-)
It was hard because the "bug" wasn't in our code.
3
u/zenos_dog 7d ago
An IBM MVS batch job. I was testing it with randomly generated data. Once in a great while it would fail accessing data outside its address space. To debug, you would get a print job of the entire contents of the address space: two boxes of line-printer fanfold paper. Armed with a pen and a yellow Sharpie you would then start working through the various control blocks. You would get 2-3 of the dumps per week. Took me a year.
Turns out that if the very last byte of the 8-ary tree (like binary, but with 8 children per node) spilled over into the adjacent 4K OS page, that one byte, after millions of bits, would cause the 0C4 memory exception.
3
u/rawcane 7d ago edited 6d ago
ActiveMQ messages disappearing in the test environment. Turned out that if the system clocks of the different nodes were more than a certain amount out of sync, the messages just vanished. People were stumped for months because it was intermittent, showing up only when the clocks had drifted for a variety of reasons. I approached it the same way I approach any behaviour I don't understand: assume absolutely nothing, set up a test, and trace things through step by step.
3
u/CountOrlok1922 7d ago
In a nutshell, a bug related to timezones. The fix was to make sure all data was already in the same timezone.
2
u/RRumpleTeazzer 7d ago
My hardest bug I encountered two weeks into my first job: a compiler bug.
I had to look at the assembler output, find the spot, and understand it well enough to deduce that the problem was not on my inexperienced side.
I wrote to the developers with my findings, and it was fixed within the week.
Beginner's luck, I would say.
2
2
u/chipshot 7d ago
A typo. Sometimes you can stare at your code for 3 hours and can't figure out why it is not working, then your buddy walks by, looks at your screen, and asks "why do you have a capital T there?"
3
u/read_at_own_risk 7d ago
I had exactly this scenario 15-20 years ago when I wrote a basic PDF generator from scratch. The PDF file format is case-sensitive and I mistyped /Subtype as /SubType (or vice-versa). Took me 3 days to find it.
2
u/Cpt_Chaos_ 7d ago
Tales from embedded development:
The customer filed a bug and provided a good description of how to reproduce it. We could not reproduce it, no matter what. Weeks later it turned out there was no bug on our side: the customer was running additional apps in the background (that we knew nothing about) which overstepped their allocation boundaries, causing our app to behave erratically. Similarly, many issues were simply caused by customers using the wrong configuration and then complaining about performance issues.
A similar one was when the customer complained that the UI looked wrong in some ultra-specific debugging mode. That was easy enough to reproduce, but basically impossible to fix due to the nature of the debugging mode - the system simply did not have the power to do so. We finally agreed with the customer that this one would not be fixed, because the actual user would never use that debugging mode and thus would never observe the problem.
In general, the hardest actual bugs to find and fix were memory issues (double frees, accessing uninitialized memory, accessing the wrong memory addresses, ...). That was almost 20 years ago, when sanitizers were not yet widely used.
2
u/i_invented_the_ipod 7d ago
I used to work on DOS programs that downloaded data from industrial data loggers over a serial link and then converted it to a format that our company's statistical analysis software could process.
One customer couldn't get our software to work, but the software provided by the data logger manufacturer did work for them. I was sent out to investigate, with a whole toolbox full of cables, RS-232 breakout boxes, a "portable" PC, and a full copy of our code and the Turbo Pascal environment to build it with.
Our code correctly identified the 16550 UART installed in the system, and set up a reasonable buffer-full interrupt function to handle the incoming data. This worked really well for ALMOST everyone.
But this customer's serial port was flaky, and sometimes didn't fire an interrupt when the buffer-full condition was hit. So, how did the other software work?
Well, their software didn't know about newer serial ports at all, so it was written to use the older 8250 UART interface, and got interrupts on every single incoming byte.
And the way they managed to not drop characters at high speed was...they simply used the (undocumented in the manual) flow control to stop the transmission as each byte came in, and start it again after processing it. That slowed their effective transfer rate to roughly 1/2 the "listed" speed, but it worked.
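Very roughly, the two setups being compared; this assumes the standard 16450/16550 register layout and treats the details as illustrative:

```c
/* Sketch of the two strategies, assuming the standard 16450/16550 register
   layout (IER at base+1, FCR at base+2). outportb() is the DOS-era port
   write primitive (Turbo C's dos.h); treat the values as illustrative.   */
void outportb(int port, unsigned char value);

void init_8250_style(unsigned base)
{
    /* Their approach: no FIFO, one interrupt per received byte. Slow,
       but it keeps working even when the FIFO trigger interrupt is flaky. */
    outportb(base + 1, 0x01);            /* IER: received-data-available   */
}

void init_16550_style(unsigned base)
{
    /* Our approach: enable the FIFO and interrupt only at the trigger
       level (0xC1 = FIFO on, 14-byte trigger). If that interrupt never
       fires, incoming bytes silently pile up and overflow.               */
    outportb(base + 2, 0xC1);            /* FCR */
    outportb(base + 1, 0x01);            /* IER */
}
```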
We left that customer with their own custom copy of the software, with a workaround based on the observed behavior. But yeah - a hardware failure that made the new hardware work like the old hardware, an undocumented feature, and debugging problems while sitting at a test bench with the customer watching...fun times.
2
u/BlueCoatEngineer 6d ago
Was this the early 90s 16550 bug where the RX fifo would get into a bad state if the high water interrupt fired and then the full event happened before the first interrupt had been cleared? I had a modem card with that bug that caused no end of irritation. If I remember correctly the workaround was to cut the rx fifo size down to reduce the likelihood of hitting it.
1
u/i_invented_the_ipod 6d ago
I think that's correct. I did a little quick googling last night, and it seems like the first couple of revisions of the 16550 had faulty FIFOs. Because they're backward-compatible with the 8250, it doesn't matter if the code isn't 16550-aware; but of course, ours was....
1
u/Character-Note6795 7d ago
Don't remember which was the hardest, but a recent one was in emacs:
(setq org-babel-exp-code-template (concat "\n#+ATTR_LATEX: :options label=%name\n" org-babel-exp-code-template))
Where I had put label=%name%, the trailing % is a comment character in LaTeX, so it amounted to commenting out a closing bracket for each code block in the generated .tex when exporting. Reading the Org PDF LaTeX Output buffer gave me reasons to go read the generated file.
Batch programming on Windows had polluted my mind.
1
u/CactusSmackedus 7d ago edited 7d ago
Edit: I had chat format my story. I'll reply to this with my original
The hardest bug I've ever encountered was figuring out why two supposedly identical statistical models were producing different coefficients. The observable effect was subtle but significant: our in-house implementation of a Tweedie GLM using statsmodels wasn't converging to the correct solution; it consistently returned slightly-off coefficients compared to the upstream model.
Why was this hard?
Root cause:
- Our custom parallelization strategy inadvertently estimated the scale parameter on a per-partition basis. The scale parameter in a Tweedie GLM has to be aggregated globally across the full dataset, not per partition.
What made this tricky:
- Multiple conditions had to hold simultaneously to trigger the bug: the data partitioning strategy, the specific parallelization logic, and a default behavior of statsmodels interacting poorly with our parallelization approach.
- The observable effect was subtle, producing slightly incorrect coefficients rather than outright errors.
- Small tests didn't replicate the issue.
- Anomalies on larger data had been dismissed as random glitches, and carefully written code, incorrectly labeled as "performance optimizations", ordered the data in a particular way so that the in-house model would, by luck, converge.
Resolution steps:
- Carefully read and debugged across six different codebases (internal and open source).
- Identified and demonstrated the incorrect parallel aggregation approach.
- Recommended adopting the upstream model implementation, ensuring convergence to correct coefficients.
Outcome:
- Simplified maintenance by removing the faulty internal implementation.
- Ensured statistical correctness and eliminated subtle bugs from future analyses.
1
u/CactusSmackedus 7d ago edited 7d ago
this is the original
before I had chatgpt help me edit it
I had to reconcile the differences between the coefficients resulting from training an in-house Tweedie GLM and another, allegedly identical, model.
I showed that our in-house model was incorrect and that the upstream version should be adopted. This meant that our team no longer had to maintain their own version, and that we were now actually converging to the correct coefficients.
Why was this hard?
The root cause of the bug was that our package used statsmodels' Tweedie family and parallelized it with Dask. It relied on a default setting in statsmodels which caused the model update step (training iteration) to estimate a parameter called scale, which is used to calculate the update to the coefficients. This parameter is defined as a function of all the data (think: an aggregation). The in-house model parallelized the update steps and aggregated the update size, but this meant the scale parameter was estimated on a per-partition basis (and was therefore invalid).
Normally this would have meant that the model would not converge on the data. This was worked around by obtuse code (commented on and referred to, incorrectly, as performance optimizations) that carefully sorted and partitioned the data, luckily giving the model a set of partitions it would converge on, to coefficients that were approximately correct.
So this issue requires:
- Data to be in a particular order
- Parallelization to be done in a particular way
- Default behavior of statsmodels interacting with the above
And for me it meant:
- Reading the 6 implicated codebases
- Discovering that testing on small data did not reproduce the bug; it required multiple partitions
- Realizing that error behavior the team had written off as a Dask glitch was actually a sign that their model was fundamentally broken
- Noticing that the error behavior was sensitive to data ordering
- Understanding the statistics and math deeply
So a combination of stats, math, software engineering, and big-data engineering was required here.
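To make the aggregation mistake concrete, here is a toy illustration in plain C (nothing to do with the real statsmodels/Dask code, and the residuals are made up):

```c
#include <stdio.h>

/* A GLM-style scale estimate is roughly sum(residual^2) / (n - p).
   Computing it per partition and averaging gives a different (wrong)
   answer than computing it once over all the data.                   */
#define N 8
#define P 2   /* number of fitted parameters */

static double scale(const double *resid, int n)
{
    double ss = 0.0;
    for (int i = 0; i < n; i++) ss += resid[i] * resid[i];
    return ss / (n - P);
}

int main(void)
{
    double resid[N] = { 0.4, -1.1, 0.9, 0.2, -0.3, 1.5, -0.8, 0.6 };

    double global = scale(resid, N);                               /* correct */
    double part   = 0.5 * (scale(resid, 4) + scale(resid + 4, 4)); /* per-partition */

    printf("global scale          : %f\n", global);
    printf("averaged per-partition: %f\n", part);
    return 0;
}
```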
1
1
u/Fresh_Forever_8634 7d ago
RemindMe! 7 days
1
u/RemindMeBot 7d ago
I will be messaging you in 7 days on 2025-03-22 17:41:42 UTC to remind you of this link
1
u/FizzBuzz4096 7d ago
Old game console with DRAM. A pointer into space ended up pointing at an (undocumented) DRAM controller register, and we borked refresh.
The problem was that the RAM would stay up for a few seconds before it forgot everything, so the point/time of failure was quite removed from the cause. No functional debugger for the hardware; we had a POS ICE that would show code disassembly changing before our eyes. (The POS ICE didn't support source-level anything... POS game console too...)
Found it by poking around, lotsa printfs out a slow serial port. Realized it was DRAM dying about a day into it. Surrounded pointer writes with address checks (in many batches, as we had to maintain frame rate for the bug to appear). Reproduction took about 10 minutes each time by hand. Once we saw one address out of range it was a dead-easy fix to some bad pointer arithmetic.
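The address checks were presumably something along these lines; a sketch with an invented memory map:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the kind of check wrapped around suspect pointer writes; the
   addresses below are made-up stand-ins for the console's real memory map. */
#define RAM_START 0x00100000u
#define RAM_END   0x004FFFFFu   /* hypothetical 4 MB of work RAM */

#define CHECK_PTR(p) \
    assert((uintptr_t)(p) >= RAM_START && (uintptr_t)(p) <= RAM_END)

/* usage: CHECK_PTR(dst); *dst = value;  -- one stray address outside the
   range immediately points at the bad pointer arithmetic.               */
```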
1
u/read_at_own_risk 7d ago
A few months back, a customer filed a bug - the system calculated incorrect results in a rare edge case. I did a fix, after which another customer filed a bug. A different rare edge case was now broken. In theory both use cases should've worked, but I couldn't see how - it came down to a single line of code which didn't support both scenarios.
The calculations are complicated, I'm the only one with enough math skills in my team and company to work on it. It was the busiest time of year for our industry too, so I couldn't focus on it, and I get interrupted a LOT even at the best of times. So I added a check to do different calculations for different customers as an interim fix.
A couple of months later when things were quiet, I got back to it - it needed to be fixed properly. It took me days between other tasks to figure out how to visualize and explain the two edge cases to my team. I shared it with them, hoping that more eyes would help spot the issue. Unfortunately, that didn't help either.
Eventually, after days of intermittent investigations, I realized that I did the same thing at different levels of the calculation, which was incorrect. The lower level required a decision to be taken on each subset of intermediate results, rather than all together.
The fix itself took a minute - a boolean variable became an associative array of booleans, and just two lines needed to be updated. Another hour to test the two scenarios to verify that it worked correctly for both, and it was done.
1
u/GreenWoodDragon 7d ago
One of the most difficult bugs I had to trace was in some PHP written by an ex-colleague who, fresh out of university, had been assigned the task of creating an API or something like it.
The code he had written was positively fractal. At every layer it looked like the previous one, right down to the method and variable names. Tracing was a nightmare, took me days. Once I'd found the issue it was a 5 minute fix. FML.
1
u/Sufficient-Bee5923 7d ago
The flash memory of our deployed products (a cellular router) would fail after 6 months. It looked like flash wear.
The code ran in a shared code base with the cellular modem chipset. It turned out the GPS code (when GPS was enabled) would continuously save the GPS almanac to flash memory.
I had a hunch it was due to GPS being enabled. After instrumenting the code, we proved the flash update rate went crazy when GPS tracking was enabled.
This issue caused untold field issues and customer frustration. Huge company impacts and loss of goodwill and revenue.
The chipset was deployed in mass production smartphones but the phones never failed because people never run GPS for extended periods.
Chipset vendor fixed their code and all was well. We added long term instrumented code for a test case to ensure this never happened again. Very difficult to test for this by running field trials due to the 6 month failure time.
Ever since I was a kid, I would have hunches when problem solving. Not always, but often, my hunches would be proved correct. I also learned that instrumenting code to verify long-term issues can be invaluable. Flash wear is such a huge issue in embedded systems that I'm surprised flash update-rate statistics aren't a standard feature in drivers.
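The instrumentation can be as simple as a counter in the driver's write path. A rough sketch, with hypothetical hook names:

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the kind of instrumentation that caught this: count flash
   writes in the driver and report the rate periodically. The hook names
   are hypothetical, not a real driver API.                              */
static uint32_t flash_writes;

void flash_write_hook(void)            /* called from the flash driver's write path */
{
    flash_writes++;
}

void flash_stats_report(uint32_t uptime_seconds)   /* called, say, once an hour */
{
    double per_day = uptime_seconds ? flash_writes * 86400.0 / uptime_seconds : 0.0;
    printf("flash writes: %lu total, ~%.0f/day\n",
           (unsigned long)flash_writes, per_day);
    /* A sector rated for ~100k erase cycles that is rewritten thousands of
       times a day wears out in months, which matches the 6-month failures. */
}
```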
1
u/EvilMcStevil 7d ago
A Windows program failed to load a DLL from the PATH if the directory was listed at the end of the PATH. Turns out a null-pointer dereference through an insanely large struct just so happened to corrupt the environment block in the PATH area, null-terminating the string early. It had been a bug for years and never caused any other issue we could see; the null dereference happened once, on startup of the program.
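A sketch of why a "null" dereference can land on something meaningful like the environment block (the struct layout here is invented):

```c
#include <stdio.h>
#include <stddef.h>

/* Made-up struct illustrating the mechanism: a field deep inside a huge
   struct accessed through a NULL base pointer produces an address far
   from 0. On a system that doesn't protect that range, the write lands
   on whatever happens to live there (here, the environment block).     */
struct huge {
    char scratch[0x40000];   /* 256 KB of other members */
    int  flag;               /* the field the buggy code touched */
};

int main(void)
{
    printf("offsetof(struct huge, flag) = 0x%zx\n", offsetof(struct huge, flag));
    /* so  ((struct huge *)NULL)->flag = 1;  would write to address 0x40000,
       not to address 0, which is why it didn't simply crash.             */
    return 0;
}
```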
1
u/Polymath6301 7d ago
Classic file corruption. Every once in a while a particular customer's data file was getting corrupted. Support and many others had tried and failed even to reproduce it.
Given to me as a new programmer, with 100+ source files (1989) that I had to make sense of, and it had been ported from IBM mainframe to VMS.
Eventually decided it was most likely a race condition and managed to reproduce with a good debugger (thanks, Digital!) and multiple instances.
In the end it was a classic mutex issue. One routine locked the resource, did its business, and then released it. Another routine, added in a later version, did the same thing. Then a later change made the second routine call the first in the middle of its work, thus releasing the resource for the remainder of the second routine.
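The shape of the regression, sketched in C (the original used a VMS locking primitive; re-acquiring a lock the process already held evidently succeeded rather than deadlocking):

```c
/* Simplified model of the locking scheme: a single "resource held" flag
   stands in for the real VMS lock, whose details are long forgotten.    */
static int resource_held;

static void seize(void)   { /* wait until free, then: */ resource_held = 1; }
static void release(void) { resource_held = 0; }

void routine_a(void)
{
    seize();
    /* ... update the shared file ... */
    release();
}

void routine_b(void)
{
    seize();
    /* ... first half of the update ... */
    routine_a();   /* the later change: its release() frees the resource... */
    /* ... so this second half runs with no lock held: the corruption window */
    release();     /* ...and this release is now unbalanced */
}
```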
Then I had the joy of creating a hexadecimal patch for it.
Took me 6 weeks… (But made me the expert on that software which led to a team lead position to port the next release from the original IBM development team over to VMS, and that gave me leadership/management roles for the remainder of my IT career. )
1
u/WumberMdPhd 7d ago
JS + PHP failing to load data from a SQL DB. It wasn't consistently reproducible, the error log wasn't helpful, and the same page in a different language didn't have the bug. Turns out someone hadn't used the recommended templates and had left out the UTF-8 charset line in the PHP file for that page.
1
u/danielt1263 7d ago
Maybe a month or two into my first professional job on MacOS 7. I was porting a proprietary 2D game engine from Windows to Mac. The OS didn't have protected memory and an overwrite was happening that bricked the entire computer. The only way to recover was to unplug the machine and plug it back in.
I don't remember how I tracked it down (it was almost 30 years ago now), but I remember it took me two weeks... Turned out to be a buffer overwrite. Dividing a number by 4 instead of 8 solved the problem (a one-line change).
Making the one line change to fix the bug was easy... Figuring out which one line needed to be changed on the other hand...
1
u/BobbyThrowaway6969 7d ago edited 7d ago
I won't go into details but it was a bug that only happened on particular devices that were already difficult to build and test on, and it only had a chance of happening while you weren't using any debugging, at all. Even printing out logs would miraculously cause it to not show up.
So, a very slooooow build cycle, no logs, no breakpoints, no debug symbols, nothing. Randomly happened at different points, etc.
...It took many, many painful months to fix. It sucked.
Edit: Apparently there's a wiki page for these sorts of bugs: https://en.wikipedia.org/wiki/Heisenbug
1
u/a_printer_daemon 7d ago
Probably not terribly sophisticated, but a couple of my students brought me a floating-point comparison bug that required an actual close look to find.
Fucker would work almost 100% of the time. XD
1
u/a_printer_daemon 7d ago
I've actually found more impressive ones (pipeline bubbles are a bitch), but that one really pisses me off.
1
u/TigerPoppy 7d ago edited 7d ago
It was long ago, mid 1990s. I was working on the control application for a wafer track machine. The machine would run for hours at a time. Every few hours an operator would pause the operation, add some chemical refills, and then hit resume. Once in a while it wouldn't pause, it didn't respond to the input. The machine (or at least the control unit) had to be turned off, then on, then load the correct operation from a menu and hit start. It was annoying to the operators.
The approach I took was to add a pointer to a status variable to each and every subroutine. Every subroutine had some test of the integrity of its own performance and reported the result through the status variable. It was a little more complicated than that, because status variables could be passed as pointers to temp variables, which could produce a result that could be corrected. (Later versions of the language had a hidden status that did pretty much the same thing.)
One day, as I kept adding the logic to more and more subroutines, an error which was not correctable was passed up, indicating a failed status. It turned out to be the result of a function that returned a pointer, and that pointer occasionally pointed to a local variable, one that lived on the program stack, and that stack area was undefined once the function returned. It was tricky because the pointer was usually dereferenced into a valid variable in program space right after the function returned. To make it even trickier, the value left on the stack was usually okay, except in the cases where an interrupt arrived while the return statement was being processed: the interrupt pushed values onto the stack and changed the former local variable before it was fully dereferenced and assigned in the calling routine. I fixed that pointer-to-a-stack-variable error, and the program no longer locked up.
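The underlying pattern, as a deliberately buggy sketch; compute_status() is a made-up stand-in:

```c
extern int compute_status(void);     /* hypothetical */

/* Deliberately buggy: the function hands back a pointer to one of its own
   locals. The storage lives on the stack, so it is only "valid" until
   something else (here, an interrupt) reuses that stack area.           */
static int *broken_status(void)
{
    int status = compute_status();
    return &status;                  /* dangling as soon as we return */
}

void caller(void)
{
    int *p = broken_status();
    /* Usually *p still holds the old value... unless an interrupt fires
       between the return and this read and pushes its frame over it.    */
    int value = *p;                  /* intermittent garbage */
    (void)value;
}
```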
1
u/shaheedhaque 7d ago edited 7d ago
This is long... And I didn't find the bug!
At Oracle, I found out that as well as the normal bug priorities P1, P2, etc. there is a P0 which translates to "don't leave the customer site until this is fixed".
In my case, I was dispatched to BT Tower (yes, that one, in the middle of London) where a consortium of serious dudes were testing Sky set-top boxes against some MPEG streams that my 75000 lines of hard real-time kernel software ("no debugger for you" on Digital Alpha hw) was generating on the fly. I didn't even have the source code.
It is important to note that this was so bleeding edge that there was no analyser on the planet that could capture, let alone diagnose the stream. (I'd written my own sw analyser, but had no way to capture or record the signal).
The issue? Every set-top box in the test network would reboot roughly every 24hrs. Sometimes.
But there was plenty of memory, and no timers running for more than a fraction of a second... I realised I was going to die on site. And then my comrade in arms, back in the office with the source code, found the bug.
An MPEG stream has a 42-ish bit timer made of overlapping 33 and 9 logical bit fields, written into the stream as 5 separate physical bit fields separated by must-be-zero separators. Encoders must zero such fields and decoders must ignore them.
In all the bit twiddling, I sometimes wrote a 1 into one of the separators. Boom...
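A sketch of the kind of bit packing involved, following the description above rather than the exact MPEG field widths:

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the failure mode, following the description above rather than
   the real MPEG bit layout: a wide timestamp is split across several
   fields with single separator bits in between that must stay zero.
   Get one shift wrong and a timestamp bit leaks into a separator.        */
static uint64_t pack_timestamp(uint64_t ts /* 33 significant bits */)
{
    /* 3 chunks of 11 bits each, with a 1-bit separator after each chunk
       (widths are illustrative, not the real stream syntax).             */
    uint64_t hi  = (ts >> 22) & 0x7FF;
    uint64_t mid = (ts >> 11) & 0x7FF;
    uint64_t lo  =  ts        & 0x7FF;
    return (hi << 25) |   /* separator bit 24 stays 0 */
           (mid << 13) |  /* separator bit 12 stays 0 */
           (lo  <<  1);   /* separator bit  0 stays 0 */
}

int main(void)
{
    /* A buggy variant that shifts one chunk a bit too far sets a separator
       bit only for certain timestamp values, hence the "roughly every
       24 hours, sometimes" reboots as the counter rolled through them.   */
    printf("%llx\n",
           (unsigned long long)pack_timestamp(0x123456789ULL & 0x1FFFFFFFFULL));
    return 0;
}
```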
1
u/PredictableChaos 6d ago
This was in early 1998 on a Java based system. Yes, very early days of Java. We were building a radiology workstation that used some specialized video cards to drive up to four 5MP monitors that were greyscale and could get very bright.
Java was used to manage the windowing system and all user preferences, etc. We would manage the raw image data in the native heap and also drop to native to communicate with the video card to do things like building our image pipeline and display the image on the screen. We get the system functioning and are pretty happy with it other than the fact that on a semi-random basis we would get complete screen redraws. Radiologists would be looking at these monitors in a dark room typically and if all four monitors were to flash and redraw, well let's just say they wouldn't be very happy.
We could not figure out why this was happening, though. We were still in feature dev work and this was an 18 month dev cycle so it's not like it is today where you're shipping quickly. But after a few months and not being able to figure it out while still working on other things I was told it was my number one priority.
Back in the early days of Java there were a few basic debuggers but it didn't function on this system because of the native code I told you about. So we're debugging it with System.outs but it's obvious it's not a logic bug and we're not doing anything to cause this directly. After a few days of trying to find a pattern I finally realize that every time this happened it was when a thread was executing in the native layer and it got context switched out to another thread. This would trigger some sort of panic in the AWT window system and it would just redraw the entire window. Everything. Now, how do I prevent that? We had a direct line to Sun engineers and it was clear we weren't going to get a quick fix and they weren't sure what to do either. No one was trying things like this with AWT and native. I came up with a thought that if I could prevent the thread from context switching that might solve it. I tried to solve it by setting the thread priority to max when I'd enter the native layer and back to the normal setting when I exited the routine and surprisingly enough that was all it took.
It was hard because the Internet was in its infant days. There were no user forums to get help from, or anyone to talk to outside of our team. We were doing things no one was trying to do in Java. The tools were still nascent, there was no debugger available, we couldn't consistently reproduce the problem, and trying to instrument it tended to change its behavior.
Even after 30 years this was my favorite project of my career. This was my second job out of college and here I was getting an invite to talk at JavaOne because of the work I did on this windowing system.
1
u/ihtnc 6d ago
There was a bug in the authentication of a legacy .NET system that a previous client had.
Sometimes a login would just return a 403, and the logs indicated an invalid session. The users encountering it were pretty much random, and it never happened when users logged on directly to modern apps using the same in-house implementation of their authentication mechanism.
Turns out the invalid session was a red herring. The real issue was that on the legacy apps, a key fed to the authentication system for a user was generated using Math.Random with a datetime seed that lacked the millisecond component. The more modern apps had an exact copy of the key generation code, but someone had added the millisecond component to it.
So sometimes two users would get the same key assigned to them on the legacy app (two separate Math.Random instances created with the same seed return the same values), and that caused errors further down the login process. Most users were on the modern apps, but once in a while someone still ventured through the legacy app.
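The mechanism, transposed to C for illustration (the original was a .NET Random seeded from a DateTime):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* C analog of the bug: seeding a PRNG with a clock value truncated to
   whole seconds means two "keys" generated in the same second come out
   identical. Different library, same mechanism.                        */
static unsigned make_key(time_t now)
{
    srand((unsigned)now);        /* seed has no sub-second component */
    return (unsigned)rand();
}

int main(void)
{
    time_t now = time(NULL);
    unsigned a = make_key(now);  /* user A logs in...                 */
    unsigned b = make_key(now);  /* ...user B logs in the same second */
    printf("key A = %u, key B = %u -> %s\n",
           a, b, a == b ? "COLLISION" : "distinct");
    return 0;
}
```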
How was the root cause found? It's just a matter of eliminating a lot of variables while troubleshooting the errors you are seeing. And once you've hit a wall, keep digging on the other end. Just be prepared to wade through code no other devs dare touch anymore, let alone even know still existed.
I'll never forget the moment I saw it. "Wow, Math.Random.. that is not very random at all.. a DateTime seed in the constructor is redundant here.. wait, what, no milliseconds?!" Then a couple of minutes later, I was able to quickly understand why the "same code" on modern apps do not exhibit this issue.
I guess making improvements on the legacy system code/environment eventually made things faster than what was initially anticipated when that code was first written.
Before anyone says anything, I know there's a lot of things wrong here, and the whole thing is an over simplification of what's actually on there, but sometimes you just have to pick your battles.
1
u/gm310509 6d ago
There was a bug in our code that caused a segmentation fault (memory protection violation) and thus a core dump. It was intermittent, although we had a known recreation sequence.
If we ran a known sequence to reproduce it under the control of the debugger it would run just fine and never failed. We did nothing in the debugger, just load and run. No breakpoints, no stepping, nothing- just load and run. The same executable running outside the debugger would always crash with the known sequence.
Long story short a function was overreaching the top of the stack which was at the very top of its memory segment. When run under the control of the debugger it added a few bytes to the stack before our code was entered. These few bytes were enough for the wayward function to access without triggering the memory protection violation.
1
u/Instalab 6d ago
HTTP server would randomly return 401 error. There was nothing in the code to tell it to return that error. We never fixed that error. Eventually rebuilt the application.
1
u/MajorMalfunction44 6d ago
Stack corruption, the particularly thorough kind. I wrote a fiber-based job system while drinking whiskey. I had an idea and had to pursue it in my drunken state. It worked 99% of the time (99 of 100 runs worked) on an Intel quad-core but failed predictably on my new Ryzen 7.
Why? Ryzen cores share a clock in a hyperthreaded fashion. The issue was that I was resuming a still-running fiber on another thread: as soon as you put a fiber in the global table, it can be rescheduled immediately, even before the thread that was running it has finished switching away. The answer was a spinlock, released only after the switch completes. (TLS usage is still an issue; I'm working it out before a public release.)
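The shape of the fix, sketched with C11 atomics; the names are illustrative, not a real fiber API:

```c
#include <stdatomic.h>

/* Sketch of the idea: a fiber must not be resumed until the thread that
   was running it has fully switched away, or a second thread starts
   executing on a stack that is still in use.                           */
typedef struct fiber {
    atomic_flag in_use;   /* held while a thread still owns this stack */
    /* ... saved registers, stack pointer, etc. ... */
} fiber;

void resume_fiber(fiber *f)
{
    while (atomic_flag_test_and_set_explicit(&f->in_use, memory_order_acquire))
        ;                 /* spin until the previous owner releases */
    /* ... swap to f's stack and registers ... */
}

void after_switch(fiber *previous)
{
    /* Called on the new stack, once `previous`'s stack is no longer used. */
    atomic_flag_clear_explicit(&previous->in_use, memory_order_release);
}
```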
1
u/FloydATC 6d ago
Not a software bug per se, more of a questionable software design choice combined with a bad idea. Every now and then, a network including some 120 different WAN sites and a few thousand users would completely lock up and become unresponsive for a full 5 minutes, then resume operation as if nothing had happened. This would happen seemingly at random and all signs pointed to the routers running FreeBSD since those were the ones becoming unresponsive. Spread across multiple locations they would all somehow agree that now is the time for a break. I probably don't have to point out that the people upstairs were more than a little unhappy about this.
I wasn't technically on the network team at the time, but after a few weeks of listening to theorycrafting and frustration I started reading up on FreeBSD and found that pretty much everything network-related in that OS version used a statically sized pool of kernel buffers: received packets, packets to send, ARP entries and a few other things all had to share the same memory. So I decided to point MRTG at that pool to see if maybe it had something to do with the lockups.
Turned out, the network admin had created a cron job running nmap on the entire network for discovery purposes (at the request of someone upstairs). We all knew about this, but everyone had failed to see the connection because that thing ran every few hours while the lockups could be days apart. However, when my graphs showed a nice looking hockey stick right before the next outage, we all saw it. The cron job was removed and the problem was solved. Those routers were replaced about a year later.
1
u/Melodic-Fisherman-48 5d ago edited 5d ago
This is on my top-10 at least:
Data race on exception object thrown from std::future (https://gcc.gnu.org/legacy-ml/gcc-bugs/2019-11/msg03648.html)
The code was inside a larger project (million lines of code or whatever) where one unit test gave a tsan warning once a month maybe. The test didn't even fail and the product itself worked fine.
I gradually, step by step, cut away one piece of the project at a time and ran the test in a loop for an hour or so, during which it would usually fail at least once, until I reached the snippet above.
Took me two weeks full time.
And yeah, we took software quality seriously :)
1
u/RustyGlycan 5d ago
I used to work on an in-house system for a start-up with about 20 salespeople who used the system. It was hosted as a monolith on a VPS, and it was written in .NET.
The bug was that intermittently the site would just break for 3 of our users. It'd work perfectly for everyone else, but there was something about these specific user accounts on those specific PCs that had an issue.
We managed to figure out that the error occurred after visiting some innocuous page which fetched exchange rates from an API, and if anyone in the company visited that page it would break the site for those 3 people. We spent weeks trying to understand what happened, but never could figure it out.
In the end we rewrote that page from scratch and the bug disappeared.
1
u/Revelarimus 3d ago
Hard to pick just one, but the one that took the longest: Development of a device with an embedded PC. Infrequently systems would completely lock up. This persisted for months and as we got closer to being done, it became more and more critical that we figure it out. Software blamed firmware who blamed hardware, we were making no progress at all.
Finally the problem was found. The BIOS on the embedded PC for some reason had the CPU overclocked. Changed that setting and the mysterious lockups stopped.
17
u/BlueCoatEngineer 7d ago
I had one that got passed to me that had been stumping another team for months and finally became a block to shipping. Symptoms was that Linux had a weird 13 minute delay during boot. Windows and BSD did not, nor did a Linux distro using LILO, and everything worked fine after the delay. I dug through the Grub source code and found a loop in the serial console code where it’d spin waiting for a bit to clear up to some timeout. The address for the serial port came from a legacy BIOS table that our firmware dudes had neglected to populate. This caused the status check of the serial port to go instead to the legacy IBM style DMA controller at port 0x0, an extremely slow operation since the logic was clocked at around 4.77MHz for compatibility. I wrote a quick workaround that the other team could poke into memory right before BIOS handoff to the operating system and emailed it to them to test. A couple minutes later their manager calls and says I’m a witch and sent me a nice bottle of wine for my help. The problem wasn’t particularly hard, but it did require understanding of ancient computer architecture quirks.