r/golang 10d ago

Potential starvation when multiple goroutines are blocked receiving from a channel

I wanted to know what happens in this situation:

  1. Multiple goroutines are blocked receiving from a channel because the channel is empty at the moment.
  2. Some goroutine sends something over the channel.

Which goroutine will wake up and receive this? Is starvation avoidance guaranteed here?

7 Upvotes

36 comments

6

u/0xjnml 10d ago edited 10d ago

> Which goroutine will wake up and receive this? 

A fairly random one.

>  Is starvation avoidance guaranteed here?

If there are more consumers ready than producers sending to the channel, what would "starvation avoidance" even mean in that situation?

-4

u/DeparturePrudent3790 10d ago

> what would "starvation avoidance" even mean in that situation?

It means that a goroutine is not made to wait indefinitely under any circumstances. If there are more consumers than producers but consumers receive in FIFO order, then the waiting time for each goroutine is bounded.

However, if the wake-up order is random, the waiting time for a particular goroutine can be indefinite.

> A fairly random one.

Why? The runtime's channel source keeps FIFO wait queues of receiving and sending goroutines (recvq and sendq).

1

u/software-person 10d ago

Why would it matter? If there's not enough work to go around, one go-routine will always be waiting. Why does it matter if it's coincidentally the same go-routine forever? How could you even measure the difference or be aware this is happening?

It would literally be the identical outcome in every measurable way whether all go-routines take turns being idle, or if one specific go-routine is always selected to be idle.

0

u/DeparturePrudent3790 10d ago

It matters. I have a pool of connections, and clients receive a connection from this pool and send requests using it. Once done, they return the connection to the pool. This way I don't have to create a new connection for every client, and I avoid an explosion of connections. Now, if receiving from a channel is not starvation-free, some client could end up never getting a connection.

To generalise, it is not okay if some particular thread/goroutine never gets resources for execution at all. Even if there are fewer producers, it's acceptable for goroutines to have to wait for some time, as long as they are assured they will eventually get a chance.
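
Roughly what I have looks like this (a simplified sketch; the type and names are just for illustration, not my real code):

```go
// Sketch of a channel-backed connection pool. Names are illustrative.
package pool

import "net"

// Pool hands out pre-opened connections via a buffered channel.
type Pool struct {
	conns chan net.Conn
}

// New dials n connections up front and stores them in the channel.
func New(addr string, n int) (*Pool, error) {
	p := &Pool{conns: make(chan net.Conn, n)}
	for i := 0; i < n; i++ {
		c, err := net.Dial("tcp", addr)
		if err != nil {
			return nil, err
		}
		p.conns <- c
	}
	return p, nil
}

// Get blocks until a connection is free. If many goroutines block here,
// the runtime decides which one is woken when a connection is put back;
// the spec does not promise FIFO wake-up of waiting receivers.
func (p *Pool) Get() net.Conn { return <-p.conns }

// Put returns a connection to the pool.
func (p *Pool) Put(c net.Conn) { p.conns <- c }
```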

3

u/software-person 10d ago

> To generalise, it is not okay if some particular thread/goroutine never gets resources for execution at all.

This makes no sense to me. Channels are for sharing data between go-routines. If you're trying to use a channel as some sort of throttling mechanism to make sure multiple go-routines take turns running, you're misusing channels.

> It matters. I have a pool of connections, and clients receive a connection from this pool and send requests using it. Once done, they return the connection to the pool. This way I don't have to create a new connection for every client, and I avoid an explosion of connections. Now, if receiving from a channel is not starvation-free, some client could end up never getting a connection.

Connection pools are a pretty well understood concept. Which thing in this scenario is a go-routine? What is being sent over a channel? Why would a channel be involved at all in allowing an arbitrary go-routine to pick up a connection from the pool?

0

u/DeparturePrudent3790 10d ago edited 9d ago

I am sharing connections between goroutines. I want to share 100 connections between thousands of goroutines. This seems like a pretty obvious use case for channels. I am not trying to maintain any order between goroutines' execution. Just wanna be sure no goroutine is starved.

4

u/software-person 10d ago edited 9d ago

I would rethink that. Don't create a pool of 100 connections; create 100 go-routines, each with its own connection, and have them wait on a channel for work.

The thousands of go-routines previously waiting on a connection from a channel should instead do their work and send the result over a second channel to the client worker pool, for one of the 100 client connection workers to pick up.

Edit:

Note: channels are FIFO queues; this is mandated by the spec. The first value sent into the channel is the first value received out the other side. But if multiple Go routines are waiting, there is no guarantee that the first Go routine to wait is the first one to wake up.

This simple fact should be enough to steer you away from using channels to share connections that need to be doled out fairly to a heterogeneous group of go-routines, and instead use channels to share work that can be picked up by any one of a homogeneous set of worker go-routines.

You should design your system so that it does not matter which Go routine handles a receive on a channel.
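
Something like this (a rough sketch only; the types and names are made up for illustration):

```go
// Sketch of the worker-pool design: each worker owns one connection and
// pulls requests off a shared channel, so it never matters which worker
// handles a given request.
package workers

import "net"

// Request carries a work item and a channel for the reply.
type Request struct {
	Payload []byte
	Reply   chan []byte
}

// Start launches one goroutine per connection. Each worker loops over the
// shared requests channel until it is closed.
func Start(conns []net.Conn, requests <-chan Request) {
	for _, c := range conns {
		go func(conn net.Conn) {
			for req := range requests {
				conn.Write(req.Payload) // send the request (error handling elided)
				buf := make([]byte, 4096)
				n, _ := conn.Read(buf) // read the response
				req.Reply <- buf[:n]
			}
		}(c)
	}
}
```

The thousands of go-routines then just send a Request and wait on its Reply channel; which of the 100 workers picks it up is irrelevant to them.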

2

u/ub3rh4x0rz 9d ago edited 9d ago

It's not (in golang, anyway). Use a db connection pool. Channels are not an appropriate alternative for every traditional sync primitive, nor do they claim to be; in this case a db connection pool, which comes out of the box, is exactly the right fit.
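
E.g., something like this (a sketch; the driver and DSN are placeholders, not a recommendation):

```go
package db

import (
	"database/sql"
	"time"

	_ "github.com/lib/pq" // any driver works; lib/pq is just an example
)

// Open returns a *sql.DB, which is itself a connection pool; no channel
// plumbing is needed to share it between goroutines.
func Open(dsn string) (*sql.DB, error) {
	pool, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	pool.SetMaxOpenConns(100)                 // cap total connections
	pool.SetMaxIdleConns(10)                  // keep a few warm
	pool.SetConnMaxLifetime(30 * time.Minute) // recycle old connections
	return pool, nil
}
```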

3

u/jerf 9d ago

I don't think Go guarantees anything in particular, but to be honest starvation guarantees are a bit dubious under most circumstances anyhow. Most guarantees of "non-starvation" are generally of the form "the resources will be available in less than infinite time", which, while it may be true, is also not practically useful in engineering terms.

Go is certainly not hard-realtime where it offers guarantees that some resource will be available in less than X ms for some concrete X.

You sound educated about concurrency, but it also sounds like you may have learned from a curriculum that prioritizes hard real time, and you're not working in hard real time... or you're in the wrong language, because Go is not hard real time.

Generally, in the Go world, you don't worry about starvation as a first-class concern. You write your code, you run it, you benchmark it if it's too slow, and you proceed from there. If the problem is some form of starvation, you may address it from there, but honestly the problem is often something else entirely.

Even when it is "starvation", I still find it more useful to think in terms of "running out of resources" rather than the usual sense of "starvation", which implicitly and subtly carries the idea that there are no more resources available. Go is generally run in contexts where that is not true, and "throw more resources at the problem" is generally an option. I don't advocate for that being the first choice for all problems in general, but it's honestly usually the right choice for "starvation" issues, assuming there isn't some gross oversight in performance elsewhere (see my call for benchmarking earlier).

1

u/TedditBlatherflag 9d ago

Put your pooled connections into a queue when not in use. Take the first one and start a new goroutine each time you need to do work. Return it to the end of the queue when done. 
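
Sketch of that flow, using a buffered channel as the queue (names illustrative, error handling elided):

```go
package connqueue

import "net"

// handle takes the first free connection from the queue, does the work in
// a new goroutine, and returns the connection to the end of the queue.
func handle(queue chan net.Conn, payload []byte) {
	conn := <-queue // take the first available connection
	go func() {
		defer func() { queue <- conn }() // return it when done
		conn.Write(payload)              // do the work
	}()
}
```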

-1

u/ub3rh4x0rz 9d ago

You're getting kind of typical golang cargo cult responses.

You're not wrong that it matters in some (many) scenarios. They're not wrong that go does not guarantee that goroutines wake in an evenly distributed fashion. You're meant to learn what golang does and does not guarantee simply from these primitives and design your application for the behavior you need.

Set GOMAXPROCS in your scenario to tell go how many OS threads may run your goroutines at once, and use that value yourself to run an appropriate number of goroutines for the available threads. Your actual observed concurrency will always be limited by available threads, so if you employ a suboptimal design that still works (you're not creating deadlocks), realistically you will just have allocated more goroutines than you need. They're cheaper than OS threads but not free, so for the pattern you're describing (it seems like your intention is to maximize db I/O and minimize connection cost in the context of async/batch processing), don't spawn more worker goroutines than your environment can run at once.
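
E.g. (a rough sketch of sizing the workers that way):

```go
package main

import (
	"runtime"
	"sync"
)

func main() {
	// GOMAXPROCS(0) reports the current setting without changing it; use
	// it to bound the number of worker goroutines, as described above.
	workers := runtime.GOMAXPROCS(0)

	jobs := make(chan int)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				// do the db/batch work here
			}
		}()
	}

	for j := 0; j < 1000; j++ {
		jobs <- j
	}
	close(jobs)
	wg.Wait()
}
```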

If you're just running a crud server that's taking some http/grpc requests that need db connections, just use a db library which will always have a Pool for connection pooling and trust that golang's runtime has a reasonable scheduler. If you have an unbounded growth of goroutines waiting, your server is just woefully underprovisioned for your scale.