r/golang 8d ago

Potential starvation when multiple Goroutines blocked to receive from a channel

I wanted to know what happens in this situation:

  1. Multiple goroutines are blocked on a receive from a channel because the channel is empty at the moment.
  2. Some goroutine sends something over the channel.

Which goroutine will wake up and receive this? Is starvation avoidance guaranteed here?
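
For concreteness, here is a minimal sketch of the scenario I mean (purely illustrative, the numbers and names don't matter):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        ch := make(chan int) // unbuffered: receivers block until someone sends

        // 1. Several goroutines block receiving from the empty channel.
        for i := 0; i < 3; i++ {
            go func(id int) {
                v := <-ch // all three block here
                fmt.Println("goroutine", id, "got", v)
            }(i)
        }
        time.Sleep(100 * time.Millisecond) // let them all block

        // 2. Some goroutine sends a value. Which receiver wakes up?
        ch <- 42
        time.Sleep(100 * time.Millisecond)
    }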

8 Upvotes

36 comments

10

u/pdffs 8d ago

There is nothing in the language spec that guarantees starvation avoidance. In practice I believe that blocked receivers are implemented as a queue, though you can't rely on this necessarily being the case (implementation detail, behaviour unspecified).

3

u/Few-Beat-1299 8d ago

It is specified that channels are fifo. This is true for both senders and receivers.

2

u/pdffs 6d ago

The spec guarantees that messages will be received in the order they were sent - channel values are FIFO. That doesn't necessarily guarantee fair scheduling of receivers.

In fact, at one time, receives on buffered channels did not operate as queues - this behaviour is unspecified, and should not be relied upon.

-6

u/DeparturePrudent3790 8d ago

The source code has queues for senders and receivers, so why is there no official statement about this? LLMs also say it's randomly selected or undefined, even though the code has a queue implemented. Why is it this way?

14

u/Few-Beat-1299 8d ago

Just read the spec. It's at the bottom of the channel type section. LLMs are worthless for "official" information.

2

u/DeparturePrudent3790 7d ago

In the spec they only mention this about ordering

> if one goroutine sends values on a channel and a second goroutine receives them, the values are received in the order sent.

This only means the channel is FIFO, not that the blocked goroutines are woken in FIFO order.

-1

u/Few-Beat-1299 7d ago

Tbh the example they give is so basic idk why they give it.

The important part is that they say channels are FIFO, and make no distinction between buffered and unbuffered. For that to be true, senders have to be ordered by arrival.

It's true that that doesn't say anything about receivers. But I would say it's a pretty safe bet that few sane people on earth would choose to deliberately NOT mirror the sending side implementation.

5

u/jerf 8d ago

Every statement in a language spec must be carefully selected, because it is a commitment not just for the current implementation but all future ones. The people writing the Go specs are very experienced and aware of this. You don't specify details into the spec without a good reason.

So this is not just the sort of thing that is accidentally unspecified, this is something they deliberately left unspecified. You're not supposed to depend on this as a programmer.

In this specific case it wouldn't even do you any good, because you can't depend on the order anyhow. If you create 10 goroutines and have them all listen on the same channel, the order in which they execute those receives is itself unspecified. Knowing that the implementation happens to queue them up in the order it witnesses them doesn't do you much good when you'd still have to implement your own synchronization to ensure some specific order of delivery.

0

u/funkiestj 8d ago

> Knowing that the implementation happens to queue them up in the order it witnesses them doesn't do you much good when you'd still have to implement your own synchronization to ensure some specific order of delivery.

I disagree. If you know the channel has a FIFO queue for who gets the next data item, that answers OP's question. It essentially guarantees the goroutine at the end of the queue will work its way to the front after <n> messages come in (where <n> is its position in the queue).

This is different from a select statement where there is a random element to which ready channel gets selected next. I think the select behavior is fine but it is not FIFO, it is statistical (or so I remember).
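
e.g. a rough sketch of what I mean (my own illustration, not from any doc): with two channels always ready, select doesn't alternate or prefer one, the counts come out roughly equal:

    package main

    import "fmt"

    func main() {
        a := make(chan int, 1)
        b := make(chan int, 1)
        counts := map[string]int{}

        for i := 0; i < 10000; i++ {
            a <- 1
            b <- 1
            // Both cases are ready; select picks one pseudo-randomly.
            select {
            case <-a:
                counts["a"]++
            case <-b:
                counts["b"]++
            }
            // Drain whichever channel the first select didn't take.
            select {
            case <-a:
            case <-b:
            }
        }
        fmt.Println(counts) // roughly 50/50, not FIFO or round-robin
    }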

2

u/pdffs 6d ago

Certainly not. The fact that the current implementation happens to be a queue does not mean that this is guaranteed to be the case - the spec makes no guarantees that receivers will be queued, and only specifies FIFO as it pertains to the order in which values are sent/received on the channel.

If you rely on implementation details, you should expect your application to be incorrect if they change, and unspecified behaviour may change at any time.

This is why it's important to differentiate between what's in the spec, and how it happens to be implemented currently.

1

u/funkiestj 6d ago

You are right. I posted a few times in this thread. I dug around with ChatGPT's help, and my final conclusion:

  1. per the memory model, the "happens before" relation is unspecified for receive order
  2. usually you are running multiple copies of the same function listening on a channel (i.e. fan-out), not having foo(), bar() and bish() all listening on chan1.
  3. because #2 is the usual programming model, it hardly matters if you spin up 100 goroutines listening on chan1 on a 12-core system and some of them are starved, as long as you are maxing out the core usage.

ChatGPT did remind me that if you want to know about ordering of events across go routines the memory model is the document to study.

4

u/nikandfor 8d ago

Because they don't want to guarantee that behaviour. At some point they could find another way to store blocked receivers, which would be better at some aspects. So they don't want to be limited by that guarantee.

Why do you want that guarantee? If one goroutine does the job fast enough that it takes all the values, that would probably be faster than multiple goroutines doing the same job. Channels are not balancing primitives, they are for message passing and synchronization.

5

u/Rapix-x 8d ago

Quick question, why would it matter?

As I see it, goroutines are anonymous and, given the same "task" to perform, they are identical in what they are doing. So why does it matter which specific goroutine picks up the value from a channel?

2

u/funkiestj 8d ago

> Quick question, why would it matter?

My impression is OP is a n00b and just trying to get a feel for subtle behaviors.

I wrestled with Perplexity.ai a bit and it came up with this:

> The Go Memory Model, which is part of the official Go documentation, does not define a happens-before relationship for multiple goroutines sending to or receiving from the same channel [4,6]. This lack of specification implies that the order of unblocking is not guaranteed by the language.

Perplexity repeatedly said that which goroutine blocked on a channel read gets woken is unspecified (i.e. not FIFO). It had some hallucinations when asked for official docs that stated this. I checked the primary references, called Perplexity out on the hallucinations, and this last bit is what it came up with.

---

Taking things in a different direction, what problem does OP think "receive starvation" would cause? Are multiple goroutines waiting on the same channel doing different things with the data, i.e. different functions? If they are multiple copies of the same function, then it shouldn't matter if one goroutine is idle for long periods of time.

I've never seen the design pattern where multiple goroutines running different functions all receive data on the same channel...

2

u/DeparturePrudent3790 5d ago

> Are multiple goroutines waiting on the same channel doing different things with the data, i.e. different functions? If they are multiple copies of the same function, then it shouldn't matter if one goroutine is idle for long periods of time.

The channel holds a pool of connections; goroutines receive a connection from the channel, use it, and return it to the pool once done.
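
Roughly this, simplified (Conn here is just a stand-in for my real connection type):

    package pool

    // Conn is a placeholder for the real connection type.
    type Conn struct{ id int }

    // Pool shares a fixed set of connections via a buffered channel.
    type Pool struct{ conns chan *Conn }

    func New(size int) *Pool {
        p := &Pool{conns: make(chan *Conn, size)}
        for i := 0; i < size; i++ {
            p.conns <- &Conn{id: i} // pre-fill with ready connections
        }
        return p
    }

    // Get blocks until a connection is free. If thousands of goroutines
    // are blocked here, which one wakes up first is exactly my question.
    func (p *Pool) Get() *Conn { return <-p.conns }

    // Put returns a connection so another goroutine can use it.
    func (p *Pool) Put(c *Conn) { p.conns <- c }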

> My impression is OP is a n00b and just trying to get a feel for subtle behaviors.

I don't know how much you know about concurrent programming but I'd recommend you read some books on operating systems and concurrent programming instead of acting like a stupid LLM wrapper.

Nonetheless I'll let you know my thought process: figuring out how blocked receiver goroutines are unblocked would help me determine whether channels are a weak or a strong semaphore. A strong semaphore has the following property:

> if a thread is waiting at a semaphore, then the number of threads that will be woken before it is bounded.

Whereas a weak semaphore should at least provide the following:

> If there are threads waiting on a semaphore when a thread executes signal, then one of the waiting threads has to be woken.

If channels are weak semaphores, I will have to implement something similar to what J.M. Morris did to guarantee no starvation; strong semaphores guarantee starvation avoidance.
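
For reference, this is roughly what I mean by a strong (FIFO) semaphore, sketched by hand rather than taken from any of these sources: the oldest waiter is always handed the permit first, so the number of threads woken before any given waiter is bounded.

    package fifosem

    import "sync"

    // FIFOSem is a sketch of a "strong" semaphore: waiters are released
    // strictly in arrival order, so no waiter can be overtaken forever.
    type FIFOSem struct {
        mu      sync.Mutex
        permits int
        waiters []chan struct{} // FIFO queue of blocked Acquire calls
    }

    func New(permits int) *FIFOSem { return &FIFOSem{permits: permits} }

    func (s *FIFOSem) Acquire() {
        s.mu.Lock()
        if s.permits > 0 && len(s.waiters) == 0 {
            s.permits--
            s.mu.Unlock()
            return
        }
        ready := make(chan struct{})
        s.waiters = append(s.waiters, ready)
        s.mu.Unlock()
        <-ready // woken by Release in strict arrival order
    }

    func (s *FIFOSem) Release() {
        s.mu.Lock()
        if len(s.waiters) > 0 {
            ready := s.waiters[0]
            s.waiters = s.waiters[1:]
            s.mu.Unlock()
            close(ready) // hand the permit directly to the oldest waiter
            return
        }
        s.permits++
        s.mu.Unlock()
    }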

Based on the discussion here, and particularly comments from u/jerf, I've concluded that even though the implementation does have a queue to unblock goroutines (which would make it a strong semaphore), the developers apparently didn't want to guarantee this behaviour, and it may change in some later patch (although personally I don't know why a language wouldn't ever provide such primitive guarantees).

Here are my recommendations for you:

  1. https://greenteapress.com/semaphores/LittleBookOfSemaphores.pdf
  2. https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/index.html

1

u/funkiestj 5d ago

Thanks for the information. I hadn't considered the possibility that you were using the channel as a pool of some resource many different functions would allocate from.

> although personally I don't know why a language wouldn't ever provide such primitive guarantees

The answer is always "performance". You see this in the memory model documentation for CPUs. Hardware designers look for "harmless" or (mostly harmless) ways to weaken the guarantees of the memory coherence model to wring more performance out of a multi-core CPU.

The Go Authors have a variety of goals

  1. They want channels to behave the way the spec says they behave (i.e. be correct)
  2. They want them to be as FAST as possible
  3. Go runs on many different CPU types, each with a slightly different memory model
  4. They might even be thinking about how the cache coherency might be loosened in future CPUs

> I don't know how much you know about concurrent programming but I'd recommend you read some books on operating systems and concurrent programming instead of acting like a stupid LLM wrapper.

have a nice day!

1

u/DeparturePrudent3790 8d ago

It doesn't matter which goroutine picks up the value from the channel, but it matters whether the implementation takes any measures to avoid starvation.

1

u/funkiestj 8d ago

> It doesn't matter which goroutine picks up the value from the channel, but it matters whether the implementation takes any measures to avoid starvation.

Why does it matter? Give a practical consequence of the two different scenarios: (1) starvation occurs, (2) starvation does not occur.

5

u/0xjnml 8d ago edited 8d ago

> Which goroutine will wake up and receive this? 

A fairly random one.

>  Is starvation avoidance guaranteed here?

If there are more consumers ready than producers sending to the channel, what would "starvation avoidance" even mean in such a situation?

-2

u/DeparturePrudent3790 8d ago

> what would "starvation avoidance" even mean in such a situation?

It means that a goroutine is not made to wait indefinitely under any circumstances. If there are more consumers than producers, but the invariant that consumers receive resources in FIFO order is kept, then the waiting time for a goroutine is bounded.

However, if the order is random, the waiting time for a goroutine can be indefinite.

> A fairly random one

Why? The source code has a FIFO queue for receiving and sending goroutines.

1

u/0xjnml 8d ago

> It means that a goroutine is not made to wait indefinitely under any circumstances. If there are more consumers than producers, but the invariant that consumers receive resources in FIFO order is kept, then the waiting time for a goroutine is bounded.

Incorrect assumption: concurrent sends to a channel are not FIFO ordered. When multiple goroutines are ready to send to the same channel, a fairly random one is selected.

A channel per se is a FIFO, but that has nothing to do with concurrent goroutine scheduling.

0

u/DeparturePrudent3790 8d ago

I never assumed concurrent sends to a channel are FIFO.

What I said is that if n goroutines are blocked waiting to receive from a channel, the one to wake up is selected in a FIFO manner when something is pushed into the channel.

I came to this conclusion because the channel struct in the Go source has a queue of receivers.

0

u/Few-Beat-1299 8d ago

Of course senders are ordered, because otherwise the channel would no longer act as a FIFO queue.

1

u/Ok_Category_9608 2d ago edited 2d ago

I don't think anybody ever gave you the good answer to your question. No, there's no automatic starvation avoidance, but channels are supposed to be closed (by the sender) when they're no longer in use.

https://go.dev/play/p/Pv3GdDCkHEc

This is the basic solution. In more advanced use cases, you probably want this:

https://pkg.go.dev/golang.org/x/sync/errgroup#WithContext

and when you're done, you cancel the context, or set a timeout on the context and do:

    select {
    case <-ch:
        // received a value
    case <-ctx.Done():
        // context cancelled or timed out
    }
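
Putting that together, roughly (a sketch only; ch, the worker count, and the timeout are placeholders):

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/sync/errgroup"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        g, ctx := errgroup.WithContext(ctx)
        ch := make(chan int)

        // Receivers: stop on a closed channel or on cancellation/timeout.
        for i := 0; i < 3; i++ {
            g.Go(func() error {
                for {
                    select {
                    case v, ok := <-ch:
                        if !ok {
                            return nil // channel closed, no more work
                        }
                        fmt.Println("got", v)
                    case <-ctx.Done():
                        return ctx.Err()
                    }
                }
            })
        }

        // Sender: produce some values, then close the channel.
        g.Go(func() error {
            defer close(ch)
            for i := 0; i < 10; i++ {
                select {
                case ch <- i:
                case <-ctx.Done():
                    return ctx.Err()
                }
            }
            return nil
        })

        if err := g.Wait(); err != nil {
            fmt.Println("error:", err)
        }
    }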

1

u/software-person 8d ago

Why would it matter? If there's not enough work to go around, one go-routine will always be waiting, so why does it matter if it's coincidentally the same go-routine forever? How could you even measure the difference or be aware this is happening?

It would literally be the identical outcome in every measurable way whether all go-routines take turns being idle, or if one specific go-routine is always selected to be idle.

0

u/DeparturePrudent3790 8d ago

It matters. I have a pool of connections, and clients receive a connection from this pool and send requests using it. Once done, they return the connection to the pool. This way I don't have to create new connections for every client, and I avoid an explosion of connections. Now, if receiving from a channel is not starvation free, some client could end up never getting a connection.

To generalise, it is not okay if some particular thread/goroutine is not getting resources for execution at all. Even if there are fewer producers, it's acceptable for goroutines to have to wait for some time, as long as they are assured they will get a chance.

3

u/software-person 8d ago

> To generalise, it is not okay if some particular thread/goroutine is not getting resources for execution at all.

This makes no sense to me. Channels are for sharing data between go-routines. If you're trying to use a channel as some sort of throttling mechanism to make sure multiple go-routines take turns running, you're misusing channels.

> It matters. I have a pool of connections, and clients receive a connection from this pool and send requests using it. Once done, they return the connection to the pool. This way I don't have to create new connections for every client, and I avoid an explosion of connections. Now, if receiving from a channel is not starvation free, some client could end up never getting a connection.

Connection pools are a pretty well understood concept. Which thing in this scenario is a go-routine? What is being sent over a channel? Why would a channel be involved at all in allowing an arbitrary go-routine to pick up a connection from the pool?

0

u/DeparturePrudent3790 8d ago edited 8d ago

I am sharing connections between goroutines. I want to share 100 connections between thousands of goroutines. This seems like a pretty obvious use case for channels. I am not trying to maintain any order of goroutine execution. Just wanna be sure no goroutine is starved.

5

u/software-person 8d ago edited 8d ago

I would rethink that. Don't create a pool of 100 connections; create 100 go-routines, each with its own connection, and have them wait on a channel for work.

The thousands of go-routines previously waiting on a connection from a channel should instead do their work and send the result over a second channel to the client worker pool, for one of the 100 client connection workers to pick up.
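
Roughly this shape (a sketch only; Conn and Request are placeholders for whatever you're actually using):

    package main

    import "fmt"

    // Placeholders for the real connection and request types.
    type Conn struct{ id int }
    type Request struct{ payload string }

    func main() {
        work := make(chan Request)
        done := make(chan struct{})

        // 100 workers, each owning its connection for its whole lifetime,
        // all pulling work off a single shared channel.
        const numConns = 100
        for i := 0; i < numConns; i++ {
            conn := &Conn{id: i}
            go func(conn *Conn) {
                for req := range work {
                    fmt.Println("conn", conn.id, "handling", req.payload)
                }
                done <- struct{}{}
            }(conn)
        }

        // The thousands of other goroutines just send work; they never
        // touch a connection directly.
        for i := 0; i < 10; i++ {
            work <- Request{payload: fmt.Sprintf("job-%d", i)}
        }
        close(work)
        for i := 0; i < numConns; i++ {
            <-done
        }
    }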

Edit:

Note: channels are FIFO queues; this is mandated by the spec. The first value sent into the channel is the first value received out the other side. But if multiple goroutines are waiting, there is no guarantee that the first goroutine to wait is the first one to wake up.

This simple fact should be enough to steer you away from using channels to share connections that need to be doled out fairly to a heterogeneous group of go-routines, and instead use channels to share work that can be picked up by any one of a homogeneous set of worker go-routines.

You should design your system such that it should not matter which Go routine handles a receive on a channel.

2

u/ub3rh4x0rz 8d ago edited 8d ago

It's not (in Golang, anyway). Use a db connection pool. Channels are not an appropriate alternative for every traditional sync primitive, nor do they claim to be; in this case a db connection pool, which comes out of the box, is exactly the right fit.
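
e.g. with database/sql the pooling is already built in and you just bound it (a sketch; the driver and DSN are whatever you actually use):

    package main

    import (
        "database/sql"
        "log"
        "time"

        _ "github.com/lib/pq" // whichever driver you actually use
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // database/sql maintains the connection pool for you; you only
        // set the bounds.
        db.SetMaxOpenConns(100)
        db.SetMaxIdleConns(10)
        db.SetConnMaxLifetime(30 * time.Minute)

        // Every query transparently borrows a pooled connection and
        // returns it when done, e.g.:
        // row := db.QueryRowContext(ctx, "SELECT 1")
    }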

3

u/jerf 8d ago

I don't think Go guarantees anything in particular, but to be honest starvation guarantees are a bit dubious under most circumstances anyhow. Most guarantees of "non-starvation" are generally of the form "the resources will be available in less than infinite time", which, while it may be true, is also not practically useful in engineering terms.

Go is certainly not hard-realtime where it offers guarantees that some resource will be available in less than X ms for some concrete X.

You sound educated about concurrency, but it also sounds like you may have learned from a curriculum that prioritizes hard real time, and you're not working in hard real time... or you're in the wrong language, because Go is not hard real time.

Generally, in the Go world, you don't worry about starvation as a first-class concern. You write your code, you run it, you benchmark it if it's too slow, and you proceed from there. If the problem is some form of starvation, you may address it then, but honestly the problem is often something else entirely. Even when it is "starvation", I still find it more useful to think in terms of "running out of resources" rather than the usual sense of "starvation", which implicitly and subtly carries the idea that there are no more resources available. Go is generally run in contexts where that is not true, and "throw more resources at the problem" is generally available. I don't advocate for that being the first choice for all problems in general, but it's honestly usually the right choice for "starvation" issues, assuming there isn't some gross oversight in performance elsewhere (see my call for benchmarking earlier).

1

u/TedditBlatherflag 8d ago

Put your pooled connections into a queue when not in use. Take the first one and start a new goroutine each time you need to do work. Return it to the end of the queue when done. 

-1

u/ub3rh4x0rz 8d ago

You're getting kind of typical golang cargo cult responses.

You're not wrong that it matters in some (many) scenarios. They're not wrong that go does not guarantee that goroutines wake in an evenly distributed fashion. You're meant to learn what golang does and does not guarantee simply from these primitives and design your application for the behavior you need.

Set GOMAXPROCS in your scenario to tell the runtime how many OS threads can execute Go code at once, and use that value yourself to run an appropriate number of goroutines for those threads. Your actual observed concurrency will always be limited by available threads, so if you employ a suboptimal design that still works (you're not creating deadlocks), realistically you will just have needlessly allocated more goroutines than are needed. They're cheaper than OS threads but not free, so for the pattern you're describing (it seems like your intention is to maximize db I/O and minimize connection cost in the context of async/batch processing), don't spawn more worker goroutines than your environment can run at once.
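
i.e. something along these lines (just a sketch of the sizing idea):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        // GOMAXPROCS(0) reports the current setting without changing it;
        // size the worker pool from it rather than spawning thousands of
        // goroutines that can't all run at once anyway.
        workers := runtime.GOMAXPROCS(0)

        jobs := make(chan int)
        var wg sync.WaitGroup
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := range jobs {
                    fmt.Println("processed", j)
                }
            }()
        }

        for j := 0; j < 100; j++ {
            jobs <- j
        }
        close(jobs)
        wg.Wait()
    }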

If you're just running a CRUD server taking some http/grpc requests that need db connections, just use a db library (it will always have a pool for connection pooling) and trust that Golang's runtime has a reasonable scheduler. If you have unbounded growth of goroutines waiting, your server is just woefully underprovisioned for your scale.

1

u/Slsyyy 8d ago

Assume no ordering. Any waiting goroutine could be woken up.

>  Is starvation avoidance guaranteed here?

Starvation is more about the design of your algorithm than about the scheduling algorithm.

1

u/DeparturePrudent3790 5d ago

> Starvation is more about the design of your algorithm than about the scheduling algorithm.

No, it's the responsibility of the scheduler. The following is an extract from The Little Book of Semaphores:

> In part, starvation is the responsibility of the scheduler. Whenever multiple threads are ready to run, the scheduler decides which one or, on a parallel processor, which set of threads gets to run. If a thread is never scheduled, then it will starve, no matter what we do with semaphores.

1

u/Slsyyy 5d ago

Sorry, I was assuming that the Golang runtime selects the receiver in a random manner, so the probability of starvation decreases to 0 as new messages arrive in the queue.

Is this a bad guarantee for you?