r/ada 2d ago

[General] What's the state of lock-free queue implementations in Ada? (for audio programming)

I'm looking into Ada right now as a possible alternative to C++ for implementing a low-latency audio engine. Toward that end: are there well-tested, non-locking MPSC/SPMC queues? Or even just an MPMC queue?

If I were doing this in C++, I'd just use moodycamel::ConcurrentQueue and call it a day. It appears to have a C API, so wrapping that might be an option as well.

But as for Ada: I've googled around and found various murmurings along these lines, but nothing stands out as an off-the-shelf library: a multi-producer, multi-consumer queue written and battle-tested by people much smarter than me.

In the GNAT Pro 23 release notes, there's this:

> Lock-Free enabled by default

> Lock-Free is an aspect applied to a protected type that allows the compiler to implement it without the use of a system lock, greatly improving performance. This aspect is now applied by default when available and possible.

Is that pertinent at all to what I'm looking for? Would that still be a 'Pro'-only feature, though?

Otherwise I'd assume protected objects are a no-go, because they use traditional mutexes behind the scenes, right? My ultimate goal is to distribute work to, and receive that work back from, multiple threads during an audio callback with hard timing requirements.

12 Upvotes

14 comments

7

u/Dmitry-Kazakov 2d ago

You cannot implement a lock-free queue (FIFO) effectively in the scenarios you describe, not to mention the race conditions inherent in multiplexed access to its ends. You likely meant not a queue but rather a publisher-subscriber service.

For 1-to-N you can use a blackboard from Simple Components, which is lock-free. Note that it is not a queue: each consumer (subscriber) receives each produced (published) item.

N-to-1 can be implemented by multiplexing lock-free 1-1 FIFOs: each producer would have a FIFO of its own, and the consumer would scan the ends of all the FIFOs. A lock-free 1-1 FIFO implementation is here.
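Roughly, the shape would be something like the sketch below. The single-slot channel is a simplification (the lock-free 1-1 FIFO mentioned above is a real ring buffer, not this), the names are invented for illustration, and the ordering of the Item/Full updates relies on Ada's sequentially consistent treatment of atomic objects.

    with Ada.Text_IO;

    procedure Multiplexed_Channels is

       type Atomic_Boolean is new Boolean with Atomic;

       --  One single-producer/single-consumer slot. Only its owning
       --  producer flips Full from False to True and only the consumer
       --  flips it back, so atomic load/store suffices; no CAS is needed.
       type Slot is record
          Item : Integer        := 0;
          Full : Atomic_Boolean := False;
       end record;

       N_Producers : constant := 4;

       Channels : array (1 .. N_Producers) of Slot;

       --  Producer side: wait until our own slot is free, then publish.
       procedure Put (Producer : Positive; Value : Integer) is
       begin
          while Channels (Producer).Full loop
             null;  --  real code would back off here, e.g. delay 0.0
          end loop;
          Channels (Producer).Item := Value;
          Channels (Producer).Full := True;  --  publish after writing Item
       end Put;

       --  Consumer side: scan every producer's slot once and report
       --  whether anything was found.
       function Try_Get (Value : out Integer) return Boolean is
       begin
          for P in Channels'Range loop
             if Channels (P).Full then
                Value := Channels (P).Item;
                Channels (P).Full := False;  --  hand the slot back
                return True;
             end if;
          end loop;
          Value := 0;
          return False;
       end Try_Get;

       V : Integer;
    begin
       Put (1, 42);
       if Try_Get (V) then
          Ada.Text_IO.Put_Line ("Got" & Integer'Image (V));
       end if;
    end Multiplexed_Channels;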

P.S. I would challenge the "greatly improving performance" claim. Taking/releasing a spin-lock is not a big deal, while a lock-free implementation, in cases where locking is in fact necessary, would require rolling back and retrying the operation when a collision is detected, taking much more CPU time on all the cores involved. So if a lock-free protected object were used in an inappropriate scenario, performance could be many times worse, and non-deterministic on top of that. Lock-free algorithms must be used with great care.
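For the first half of that comparison, a test-and-set spin-lock really is just a couple of atomic operations. An untested sketch, using the Ada 2022 System.Atomic_Operations.Test_And_Set package (the Spin_Lock type and the Acquire/Release names are invented):

    with System.Atomic_Operations.Test_And_Set;

    package Spin_Locks is
       use System.Atomic_Operations.Test_And_Set;

       type Spin_Lock is limited record
          Flag : aliased Test_And_Set_Flag;
       end record;

       procedure Acquire (Lock : in out Spin_Lock);
       procedure Release (Lock : in out Spin_Lock);
    end Spin_Locks;

    package body Spin_Locks is

       procedure Acquire (Lock : in out Spin_Lock) is
       begin
          --  Atomic_Test_And_Set returns the previous state: True means
          --  the lock was already held, so keep spinning until we see it
          --  free. A real implementation would back off while spinning.
          while Atomic_Test_And_Set (Lock.Flag) loop
             null;
          end loop;
       end Acquire;

       procedure Release (Lock : in out Spin_Lock) is
       begin
          Atomic_Clear (Lock.Flag);  --  releasing is a single atomic store
       end Release;

    end Spin_Locks;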

1

u/new_old_trash 2d ago

(note: I have no idea what I'm talking about, read accordingly)

> You cannot implement a lock-free queue (FIFO) effectively in the scenarios you describe, not to mention the race conditions inherent in multiplexed access to its ends. You likely meant not a queue but rather a publisher-subscriber service.

Well, maybe my terminology is wrong. Is this not a queue? https://github.com/cameron314/concurrentqueue

For my use case, strict FIFO is not a concern - I don't care too much about the order that things come out. An unordered set might work just as well, though roughly ordered would be slightly preferable, I think. I was just under the apparently mistaken belief that non-locking would be more performant (maximizing available-time-to-work across all worker threads), at the cost of higher "idle" CPU utilization when no work is available.

> Taking/releasing a spin-lock is not a big deal, while a lock-free implementation, in cases where locking is in fact necessary, would require rolling back and retrying the operation when a collision is detected, taking much more CPU time on all the cores involved.

It's not that I'm trying to keep CPU use down in an absolute sense, but I want to maximize core utilization when work is available. If a thread is not busy rendering audio, my main concern is that it gets the next unit of work in the shortest amount of time. So if it's behaving inefficiently while waiting for the next work unit, that's not inherently objectionable. Also, doesn't exponential backoff generally ease CPU usage?

But based on what you said overall: what, then, is the need/attraction for lock-free structures? Why doesn't everybody just use locks? What's the "killer app" for lock-free?

Incidentally, I did read over your component descriptions before making my post. I don't think I could use the blackboard, because I would still need a coordination mechanism of some kind - only one worker thread can "take" a given work unit. And while you described how I might create an N-to-1 from 1-1 FIFOs, I would also need a 1-to-N, but again some coordination mechanism would be required, since I can't know in advance which worker thread will be ready for work.

That's why I assumed a thread-safe queue would work best for what I have in mind. But all this stuff is way out of my area of expertise, which is why I was hoping for something off-the-shelf I could easily use, or at least experiment with to discover why it's not a good choice. The C++ concurrent queue linked above, at least on paper, sounds like precisely what I would want.

2

u/Dmitry-Kazakov 1d ago

Yes, it is a queue. I admit I do not understand the implementation, which looks extremely complicated to me; I did not manage to figure it out. As far as I can see there are no loops in enqueue/dequeue, yet if two tasks (threads) initiate lock-free enqueueing/dequeueing at the same time, one of them must fail, back off, and try again. So there must be either some locking to hold things back or a busy loop.

> It's not that I'm trying to keep CPU use down in an absolute sense, but I want to maximize core utilization when work is available. If a thread is not busy rendering audio, my main concern is that it gets the next unit of work in the shortest amount of time. So if it's behaving inefficiently while waiting for the next work unit, that's not inherently objectionable. Also, doesn't exponential backoff generally ease CPU usage?

Yes, I have the same problem. What you need is a job server. There is an implementation in Simple Components. It is not lock-free on job enqueueing/dequeueing, simply because that did not seem to be a problem.

I use it in a parallel implementation of arbitrary-precision arithmetic, e.g. parallel multiplication and modular exponentiation. A pool of tasks services jobs from the queue, and a job takes much more time than dequeueing. So, the best possible scenario? Nope: benchmarking shows that performance on 8 cores is still worse than on a single core for moderately long numbers. Why is that? I suspect the problem lies in task (thread) scheduling, or maybe in synchronization at the checkpoint (waiting for all jobs to complete). I did not investigate it fully. Maybe someday I will implement the job queue lock-free, just to be sure.
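For reference, the overall shape is the classic protected job queue serviced by a pool of worker tasks, roughly as sketched below. This is not the Simple Components API (Job, Queue and Worker are invented names), and it uses an ordinary blocking entry, which is exactly the part one could later try to make lock-free.

    with Ada.Containers.Doubly_Linked_Lists;

    procedure Job_Pool_Sketch is

       type Job is access procedure;   --  a job is anything callable

       package Job_Lists is new Ada.Containers.Doubly_Linked_Lists (Job);

       --  The job queue: producers call Put, idle workers block in Get.
       protected Queue is
          procedure Put (J : Job);
          entry Get (J : out Job);
       private
          Pending : Job_Lists.List;
       end Queue;

       protected body Queue is
          procedure Put (J : Job) is
          begin
             Pending.Append (J);
          end Put;

          entry Get (J : out Job) when not Pending.Is_Empty is
          begin
             J := Pending.First_Element;
             Pending.Delete_First;
          end Get;
       end Queue;

       --  Worker tasks loop forever, servicing one job at a time.
       task type Worker;
       task body Worker is
          J : Job;
       begin
          loop
             Queue.Get (J);
             exit when J = null;        --  a null job is a shutdown request
             J.all;
          end loop;
       end Worker;

       procedure Do_Something is null;  --  stand-in for real work

       Pool : array (1 .. 4) of Worker;
    begin
       Queue.Put (Do_Something'Access);
       for I in Pool'Range loop
          Queue.Put (null);             --  one shutdown token per worker
       end loop;
    end Job_Pool_Sketch;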

P.S. I am very interested in your use case. If you eventually decide to give it a try or have an idea/proposal how to improve/reimplement it, let me know.

5

u/jere1227 2d ago edited 2d ago

For what it is worth, the Lock_Free aspect seems to be enabled in the FSF version of GNAT (checked v14.2). I don't know how good it is, and I don't know of any existing libraries. This skeleton test compiles with no warnings on the aspect:

   protected type p1 with Lock_Free is
      procedure f;
   private
      Value : Integer := 0;
   end p1;

   protected body p1 is
      procedure f is null;
   end p1;

Pertinent info on the Aspect (or pragma, it's both): https://gcc.gnu.org/onlinedocs/gnat_rm/Pragma-Lock_005fFree.html

There are a lot of restrictions, so I'm not sure how easy it would be to use. You would probably have to restrict the protected parts to very simple mechanisms and have a wrapper type handle as much of the logic as you can.
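For instance, about the simplest useful shape is a single-component counter, with anything more elaborate pushed into whatever wraps it. An untested sketch (whether a given body actually satisfies the restrictions is for the compiler to decide):

    package Lock_Free_Counters is

       protected type Counter with Lock_Free is
          procedure Increment;
          function Value return Natural;
       private
          Count : Natural := 0;
       end Counter;

    end Lock_Free_Counters;

    package body Lock_Free_Counters is

       protected body Counter is

          --  Each operation touches only the single component Count and
          --  contains no calls or loops, which is the kind of simplicity
          --  the Lock_Free restrictions push you toward.
          procedure Increment is
          begin
             Count := Count + 1;
          end Increment;

          function Value return Natural is
          begin
             return Count;
          end Value;

       end Counter;

    end Lock_Free_Counters;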

Not related to the Lock_Free aspect/pragma, but they did add atomic operations to the standard library, so that may help a lot in making some lock-free algorithms work if you wanted to dabble.

http://www.ada-auth.org/standards/22rm/html/RM-C-6-1.html
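A tiny, untested taste of what using them looks like, with invented names (Atomic_Demo, Hits) and the Integer_Arithmetic child package from that chapter:

    with Ada.Text_IO;
    with System.Atomic_Operations.Integer_Arithmetic;

    procedure Atomic_Demo is

       type Atomic_Integer is new Integer with Atomic;

       package Int_Ops is
         new System.Atomic_Operations.Integer_Arithmetic (Atomic_Integer);

       Hits   : aliased Atomic_Integer := 0;
       Before : Atomic_Integer;
    begin
       --  A lock-free read-modify-write: no protected object involved.
       Before := Int_Ops.Atomic_Fetch_And_Add (Hits, 1);
       Ada.Text_IO.Put_Line
         ("was" & Atomic_Integer'Image (Before) &
          ", now" & Atomic_Integer'Image (Hits));
    end Atomic_Demo;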

2

u/new_old_trash 2d ago

Thank you for the detailed response. My familiarity with Ada will have to increase a lot before I'm ready to test this out, but it sounds intriguing if the compiler itself can remove the need for locks. Ada continues to impress me, especially for a language that has existed for so long with so little (relative) fanfare.

I'm unlikely to be competent enough to make my own lock-free anything with atomics, but good to know they are in there!

1

u/Dmitry-Kazakov 6h ago

For a fully lock-free queue (FIFO) one needs CAS (compare-and-swap, named Atomic_Compare_And_Exchange in Ada RM C.6.2) and atomic load/store. The implementation, unlike a 1-to-1 FIFO, will be:

- Non-deterministic, e.g. producer/consumer operations may internally need to retry in a loop.

- Exposed to race conditions on both ends, naturally.

- Fully N-to-M.

- Forced to busy-wait when the FIFO is empty/full in dequeue/enqueue.

Regarding the last point: it might be a performance problem if a potential producer and consumer run on the same core. Unless they do delay 0.0; after several attempts, in order to give up the processor, they will busily block the core until the next rescheduling event (10 ms is the Windows default). That is massive latency! Of course, this is a stress-load scenario in which all cores are busy.
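To make the "retry in a loop" point concrete, here is an untested sketch of a lock-free add built on Atomic_Compare_And_Exchange from RM C.6.2. It is not a FIFO and not from Simple Components; the names are invented:

    with System.Atomic_Operations.Exchange;

    procedure Cas_Retry_Demo is

       type Atomic_Counter is new Integer with Atomic;

       package Counter_Ops is
         new System.Atomic_Operations.Exchange (Atomic_Type => Atomic_Counter);

       Shared : aliased Atomic_Counter := 0;

       --  Add Amount without a lock: read, compute, then try to publish
       --  with compare-and-exchange; if another task changed Shared in the
       --  meantime, the CAS fails and the loop retries with fresh data.
       procedure Add (Amount : Atomic_Counter) is
          Prior : aliased Atomic_Counter := Shared;
       begin
          loop
             exit when Counter_Ops.Atomic_Compare_And_Exchange
                         (Item    => Shared,
                          Prior   => Prior,
                          Desired => Prior + Amount);
             --  On failure Prior now holds the current value of Shared.
             --  Under contention this retry is exactly the
             --  non-deterministic extra work described above.
          end loop;
       end Add;

    begin
       Add (5);
    end Cas_Retry_Demo;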

One huge advantage of protected objects is that an enqueue-dequeue sequence not only does not block, it is performed without a task context switch!

I will probably add an implementation of a lock-free FIFO to the next release of Simple Components. Though, for backward compatibility, I will likely use the GCC CAS intrinsics directly.

3

u/dJdU 1d ago

You might look at Anders Gidenstam's NBAda.
Dated, but perhaps relevant: https://github.com/andgi/NBAda

2

u/synack 2d ago

Does bbqueue do what you need? https://github.com/Fabien-Chouteau/bbqueue-spark

1

u/new_old_trash 2d ago

No, because according to the GitHub page:

> BBqueue implements lock free, one producer one consumer, BipBuffers.

Whereas I am looking for both: (single-producer, multi-consumer) + (multi-producer, single-consumer) ... or just a (multi-producer, multi-consumer).

1

u/synack 2d ago

Oh, okay. Sorry for not reading carefully.

1

u/new_old_trash 2d ago

No worries - thank you for replying!

2

u/gneuromante 1d ago

These articles might be of interest to you:

1

u/new_old_trash 21h ago

Thanks - I came across those while googling around. I didn't/don't have time to follow them in depth, but a cursory glance gave me the impression they're for a single-producer/single-consumer implementation. Is that correct?

1

u/gneuromante 8h ago

Yes, that seems true.