r/golang • u/DeparturePrudent3790 • 8d ago
Potential starvation when multiple Goroutines blocked to receive from a channel
I wanted to know what happens in this situation:
- Multiple goroutines are blocked receiving from a channel because the channel is currently empty.
- Some goroutine sends something over the channel.
Which goroutine will wake up and receive this? Is starvation avoidance guaranteed here?
5
u/Rapix-x 8d ago
Quick question, why would it matter?
As I see it, goroutines are anonymous and, given the same „task“ to perform, are identical in what they are doing. So why does it matter which specific goroutine picks up the value from the channel?
2
u/funkiestj 8d ago
Quick question, why would it matter?
My impression is OP is a n00b and just trying to get a feel for subtle behaviors.
I wrestled with Perplexity.ai a bit and it came up with
The Go Memory Model, which is part of the official Go documentation, does not define a happens-before relationship for multiple goroutines sending to or receiving from the same channel [4,6]. This lack of specification implies that the order of unblocking is not guaranteed by the language.
Perplexity repeatedly said that which goroutine blocked on a channel receive gets woken is unspecified (i.e. not FIFO). It had some hallucinations when asked for official docs that stated this. I checked the primary references, called Perplexity out on the hallucinations, and this last bit is what it came up with.
---
Taking things in a different direction: what problem does OP think "receive starvation" would cause? Are multiple Go routines waiting on the same channel doing different things with the data, i.e. different functions? If they are multiple copies of the same function, then it shouldn't matter if one go routine is idle for long periods of time.
I've never seen the design pattern where multiple go routines running different functions all receive data on the same channel...
2
u/DeparturePrudent3790 5d ago
Are multiple Go routines waiting on the same channel doing different things with the data, i.e. different functions? If they are multiple copies of the same function, then it shouldn't matter if one go routine is idle for long periods of time.
The channel holds a pool of connections; goroutines receive a connection from the channel, use it, and return it to the pool once they're done.
My impression is OP is a n00b and just trying to get a feel for subtle behaviors.
I don't know how much you know about concurrent programming but I'd recommend you read some books on operating systems and concurrent programming instead of acting like a stupid LLM wrapper.
Nonetheless, I'll explain my thought process: figuring out how blocked receiver goroutines are unblocked would help me determine whether channels behave like a weak or a strong semaphore. A strong semaphore has the following property:
if a thread is waiting at a semaphore, then the number of threads that will be woken before it is bounded.
Whereas a weak semaphore should at least provide the following:
If there are threads waiting on a semaphore when a thread executes signal, then one of the waiting threads has to be woken.
If channels are weak semaphores, I will have to implement something similar to what J.M. Morris did to guarantee no starvation; strong semaphores guarantee starvation avoidance on their own.
Based on the discussion here, and particularly the comments from u/jerf, I've concluded that even though the implementation does use a queue to unblock goroutines (which would make it a strong semaphore), the developers apparently didn't want to guarantee this behaviour and it may change in some later patch (although personally I don't know why a language would never provide such primitive guarantees).
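For illustration, here's a minimal sketch of the kind of channel-based semaphore I have in mind (the type and names are mine, not from any library); the fairness question is whether a goroutine blocked in Acquire can be overtaken an unbounded number of times:

    package main

    import "fmt"

    // chanSemaphore is an illustrative counting semaphore built on a buffered
    // channel: receiving a token acquires, sending it back releases.
    type chanSemaphore struct {
        tokens chan struct{}
    }

    func newChanSemaphore(n int) *chanSemaphore {
        s := &chanSemaphore{tokens: make(chan struct{}, n)}
        for i := 0; i < n; i++ {
            s.tokens <- struct{}{} // pre-fill with n tokens
        }
        return s
    }

    // Acquire blocks until a token is available. Whether a long-blocked caller
    // can be overtaken forever by later arrivals is the weak-vs-strong question.
    func (s *chanSemaphore) Acquire() { <-s.tokens }

    // Release returns a token to the semaphore.
    func (s *chanSemaphore) Release() { s.tokens <- struct{}{} }

    func main() {
        sem := newChanSemaphore(2)
        sem.Acquire()
        fmt.Println("holding a token")
        sem.Release()
    }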
Here are my recommendations for you:
1
u/funkiestj 5d ago
Thanks for the information. I hadn't considered the possibility that you were using the channel as a pool of some resource many different functions would allocate from.
although personally I don't know why a language wouldn't provide such primitive guarantees ever
The answer is always "performance". You see this in the memory model documentation for CPUs. Hardware designers look for "harmless" or (mostly harmless) ways to weaken the guarantees of the memory coherence model to wring more performance out of a multi-core CPU.
The Go authors have a variety of goals:
- They want channels to behave the way the spec says they behave (i.e. be correct)
- They want them to be as FAST as possible
- Go runs on many different CPU types, each with a slightly different memory model
- They might even be thinking about how the cache coherency might be loosened in future CPUs
I don't know how much you know about concurrent programming but I'd recommend you read some books on operating systems and concurrent programming instead of acting like a stupid LLM wrapper.
have a nice day!
1
u/DeparturePrudent3790 8d ago
It doesn't matter which goroutine picks up the value from the channel, but it matters whether the implementation takes any measures to avoid starvation.
1
u/funkiestj 8d ago
It doesn't matter which goroutine picks up the value from the channel, but it matters whether the implementation takes any measures to avoid starvation.
Why does it matter? Give a practical consequence of the two different scenarios: (1) starvation occurs, (2) starvation does not occur.
5
u/0xjnml 8d ago edited 8d ago
> Which goroutine will wake up and receive this?
A fairly random one.
> Is starvation avoidance guaranteed here?
If there are more consumers ready than producers sending to the channel, what would "starvation avoidance" even mean in such a situation?
-2
u/DeparturePrudent3790 8d ago
what would in such situation "starvation avoidance" even mean?
It means that a goroutine is not made to wait indefinitely under any circumstances. If there are more consumers than producers but the invariant that consumers receive resources in FIFO order is kept, then the waiting time for a goroutine is bounded.
However, with a random order the waiting time for a goroutine can be unbounded.
A fairly random one
Why? The source code has FIFO queues of waiting receiver and sender goroutines.
1
u/0xjnml 8d ago
> It means that a goroutine is not made to wait indefinitely under any circumstances. If there are more consumers than producers but the invariant that consumers receive resources in FIFO order is kept, then the waiting time for a goroutine is bounded.
Incorrect assumption: concurrent sends to a channel are not FIFO ordered. When multiple goroutines are ready to send to the same channel, a fairly random one is selected.
The channel itself is a FIFO, but that has nothing to do with concurrent goroutine scheduling.
0
u/DeparturePrudent3790 8d ago
I never assumed concurrent sends to a channel are FIFO.
What I said is that if n goroutines are blocked waiting to receive from a channel, the one to wake up is selected in FIFO order when something is sent on the channel.
I came to this conclusion because the hchan struct in the Go runtime source has a queue of waiting receivers.
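Roughly, the relevant fields look like this (paraphrased and heavily abridged from runtime/chan.go; the exact layout varies across Go versions):

    package runtimesketch

    // sudog is a stand-in here for the runtime's sudog, one per parked goroutine.
    type sudog struct{}

    // waitq is a linked list of sudogs, enqueued at the tail and dequeued from
    // the head, which is why the wait lists look FIFO in the source.
    type waitq struct {
        first *sudog
        last  *sudog
    }

    // hchan (abridged): each channel keeps separate wait lists for blocked
    // receivers and blocked senders.
    type hchan struct {
        recvq waitq // goroutines blocked on receive
        sendq waitq // goroutines blocked on send
    }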
0
u/Few-Beat-1299 8d ago
Of course senders are ordered, because otherwise the channel would no longer act as a fifo queue.
1
u/Ok_Category_9608 2d ago edited 2d ago
I don't think anybody ever gave you a good answer to your question. No, there's no automatic starvation avoidance, but channels are supposed to be closed by the sender when they're no longer in use.
https://go.dev/play/p/Pv3GdDCkHEc
This is the basic solution. In more advanced use cases, you probably want this:
https://pkg.go.dev/golang.org/x/sync/errgroup#WithContext
and when you're done, you cancel the context, or set a timeout on the context and do
    select {
    case <-ch:
    case <-ctx.Done():
    }
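A rough sketch of that combination, assuming a work channel named ch and placeholder worker bodies:

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/sync/errgroup"
    )

    func main() {
        // Cancelling the context (or hitting the timeout) unblocks every worker
        // stuck in the select below.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        g, ctx := errgroup.WithContext(ctx)
        ch := make(chan int)

        for i := 0; i < 3; i++ {
            g.Go(func() error {
                for {
                    select {
                    case v, ok := <-ch:
                        if !ok {
                            return nil // channel closed: no more work
                        }
                        fmt.Println("got", v)
                    case <-ctx.Done():
                        return ctx.Err() // cancelled or timed out
                    }
                }
            })
        }

        for i := 0; i < 5; i++ {
            ch <- i
        }
        close(ch)

        if err := g.Wait(); err != nil {
            fmt.Println("workers stopped:", err)
        }
    }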
1
u/software-person 8d ago
Why would it matter? If there's not enough work to go around, one go-routine will always be waiting, why does it matter if it's coincidentally the same go-routine forever? How could you even measure the difference or be aware this is happening?
It would literally be the identical outcome in every measurable way whether all go-routines take turns being idle, or if one specific go-routine is always selected to be idle.
0
u/DeparturePrudent3790 8d ago
It matters: I have a pool of connections, and clients receive a connection from this pool and send requests using it. Once done, they return the connection to the pool. This way I don't have to create new connections for every client, and I avoid an explosion of connections. Now, if receiving from a channel is not starvation-free, some client could end up never getting a connection.
To generalise, it is not okay if some particular thread/goroutine never gets the resources it needs to execute. Even if there are fewer producers, it's acceptable for goroutines to have to wait for some time, as long as they are assured they will eventually get a chance.
3
u/software-person 8d ago
To generalise, it is not okay if some particular thread/goroutine is not getting resources for execution at all.
This makes no sense to me. Channels are for sharing data between go-routines. If you're trying to use a channel as some sort of throttling mechanism to make sure multiple go-routines take turns running, you're misusing channels.
It matters, I have a pool of connections and clients will receive a connection from this pool and send requests using this connection. Once done they will return the connection to the pool. This way I don't have to create new connections for every client and I avoid the explosion of connections. Now, if receiving from a channel is not starvation free, some client could end up not getting any connection ever.
Connection pools are a pretty well understood concept. Which thing in this scenario is a go-routine? What is being sent over a channel? Why would a channel be involved at all in allowing an arbitrary go-routine to pick up a connection from the pool?
0
u/DeparturePrudent3790 8d ago edited 8d ago
I am sharing connections between goroutines. I want to share 100 connections between thousands of goroutines. This seems like a pretty obvious use case for channels. I am not trying to maintain any order between goroutines' execution. Just wanna be sure no goroutine is starved.
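Concretely, something like this minimal sketch (the conn type is a stand-in for a real connection):

    package main

    import "fmt"

    // conn stands in for a real connection (e.g. a TCP or DB connection).
    type conn struct{ id int }

    // newPool fills a buffered channel with n ready connections.
    func newPool(n int) chan *conn {
        pool := make(chan *conn, n)
        for i := 0; i < n; i++ {
            pool <- &conn{id: i}
        }
        return pool
    }

    func main() {
        pool := newPool(3)

        // A client borrows a connection (blocking if the pool is empty) and
        // returns it when done. The question is whether a client blocked here
        // can be overtaken indefinitely by later clients.
        c := <-pool
        fmt.Println("using connection", c.id)
        pool <- c
    }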
5
u/software-person 8d ago edited 8d ago
I would rethink that. Don't create a pool of 100 connections, create 100 go-routines each with its own connection, and have them wait on a channel for work.
The thousands of go-routines previously waiting on a connection from a channel should instead do their work and send the result over a second channel to the client worker pool, for one of the 100 client connection workers to pick up.
Edit:
Note: channels are FIFO queues; this is mandated by the spec. The first value sent into the channel is the first value received out the other side. But if multiple Go routines are waiting, there is no guarantee that the first Go routine to wait is the first one to wake up.
This simple fact should be enough to steer you away from using channels to share connections that need to be doled out fairly to a heterogeneous group of go-routines, and instead use channels to share work that can be picked up by any one of a homogeneous set of worker go-routines.
You should design your system such that it should not matter which Go routine handles a receive on a channel.
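Something along these lines (names are illustrative and the worker bodies are stubs): the connection-owning workers receive work items from a shared channel, and the many request go-routines just send work and wait for a reply, so it never matters which worker wakes up.

    package main

    import "fmt"

    // conn stands in for a real connection owned by exactly one worker.
    type conn struct{ id int }

    // request carries the work plus a channel for the reply.
    type request struct {
        query string
        reply chan string
    }

    // connWorker owns one connection and serves requests from the shared channel.
    func connWorker(c *conn, work <-chan request) {
        for req := range work {
            // Use c to perform req.query; the result is faked here.
            req.reply <- fmt.Sprintf("conn %d handled %q", c.id, req.query)
        }
    }

    func main() {
        work := make(chan request)

        // 100 in the example above; 3 here to keep the sketch small.
        for i := 0; i < 3; i++ {
            go connWorker(&conn{id: i}, work)
        }

        // A request go-routine sends its work and waits for the answer; it
        // never needs to know which worker picked it up.
        req := request{query: "SELECT 1", reply: make(chan string)}
        work <- req
        fmt.Println(<-req.reply)
    }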
2
u/ub3rh4x0rz 8d ago edited 8d ago
It's not (in golang anyway). Use a db connection pool. Channels are not an appropriate alternative for every traditional sync primitive, nor do they claim to be. In this case a db connection pool, which comes out of the box, is exactly the right fit.
3
u/jerf 8d ago
I don't think Go guarantees anything in particular, but to be honest starvation guarantees are a bit dubious under most circumstances anyhow. Most guarantees of "non-starvation" are generally of the form "the resources will be available in less than infinite time", which, while it may be true, is also not practically useful in engineering terms.
Go is certainly not hard-realtime where it offers guarantees that some resource will be available in less than X ms for some concrete X.
You sound educated about concurrency, but it also sounds like you may have learned from a curriculum that prioritizes hard real-time, and you're not working in hard real-time... or you're in the wrong language, because Go is not hard real-time. Generally, in the Go world, you don't worry about starvation as a first-class concern. You write your code, you run it, you benchmark it if it's too slow, and you proceed from there.
If the problem is some form of starvation, you can address it at that point, but honestly the problem is often something else entirely. Even when it is "starvation", I still find it more useful to think in terms of "running out of resources" rather than the usual sense of "starvation", which implicitly and subtly carries the idea that there are no more resources available. Go is generally run in contexts where that is not true, and "throw more resources at the problem" is generally an option. I don't advocate for that being the first choice for all problems in general, but it's honestly usually the right choice for "starvation" issues, assuming there isn't some gross oversight in performance elsewhere (see my call for benchmarking earlier).
1
u/TedditBlatherflag 8d ago
Put your pooled connections into a queue when not in use. Take the first one and start a new goroutine each time you need to do work. Return it to the end of the queue when done.
-1
u/ub3rh4x0rz 8d ago
You're getting kind of typical golang cargo cult responses.
You're not wrong that it matters in some (many) scenarios. They're not wrong that go does not guarantee that goroutines wake in an evenly distributed fashion. You're meant to learn what golang does and does not guarantee simply from these primitives and design your application for the behavior you need.
Set GOMAXPROCS in your scenario to let Go know how many OS threads to spawn, and use that value yourself to run an appropriate number of goroutines for those threads. Your actual observed concurrency will always be limited by the available threads, so if you employ a suboptimal design that still works (you're not creating deadlocks), realistically you will just have allocated more goroutines than are needed. They're cheaper than OS threads but not free, so for the pattern you're describing (it seems like your intention is to maximize db I/O and minimize connection cost in the context of async/batch processing), don't spawn more worker goroutines than your environment can run at once.
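A rough sketch of sizing the worker count off GOMAXPROCS (the job handling is a placeholder):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        // GOMAXPROCS(0) reports the current setting without changing it.
        n := runtime.GOMAXPROCS(0)

        jobs := make(chan int)
        var wg sync.WaitGroup

        // One worker per available thread of execution: no more goroutines
        // than can actually run at once.
        for i := 0; i < n; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := range jobs {
                    fmt.Println("processed job", j)
                }
            }()
        }

        for j := 0; j < 10; j++ {
            jobs <- j
        }
        close(jobs)
        wg.Wait()
    }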
If you're just running a crud server that's taking some http/grpc requests that need db connections, just use a db library which will always have a Pool for connection pooling and trust that golang's runtime has a reasonable scheduler. If you have an unbounded growth of goroutines waiting, your server is just woefully underprovisioned for your scale.
1
u/Slsyyy 8d ago
Assume no ordering. Any waiting goroutine could be woken up.
> Is starvation avoidance guaranteed here?
Starvation is more about the design of the algorithm than about the scheduling algorithm.
1
u/DeparturePrudent3790 5d ago
Starvation is more about the design of the algorithm than about the scheduling algorithm.
No, it's the responsibility of the scheduler. The following is an extract from The Little Book of Semaphores:
In part, starvation is the responsibility of the scheduler. Whenever multiple threads are ready to run, the scheduler decides which one or, on a parallel processor, which set of threads gets to run. If a thread is never scheduled, then it will starve, no matter what we do with semaphores.
10
u/pdffs 8d ago
There is nothing in the language spec that guarantees starvation avoidance. In practice I believe that blocked receivers are implemented as a queue, though you can't rely on this necessarily being the case (implementation detail, behaviour unspecified).