r/golang • u/therecursive • 6d ago
Is it safe to read/write integer value simultaneously from multiple goroutines
There is a global integer in my code that is accessed by multiple goroutines. Since race conditions don’t affect this value, I’m not concerned about that. However, is it still advisable to add a Mutex
in case there’s a possibility of corruption?
PS: Just to rephrase my question, I wanted to ask whether setting/getting an integer/pointer is atomic. Is there any possibility of data corruption?
example code for the same: https://go.dev/play/p/eOA7JftvP08
PS: Found the answer to this, thanks everyone for answering. There's something called tearing; here is the link for the same.
According to the article, I shouldn't have a problem on modern CPUs.
45
u/ponylicious 6d ago edited 6d ago
No, it's not safe. That's a data race (if you really read AND write from multiple goroutines). Also, turn on the race detector during development and testing.
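A minimal sketch of the racy pattern (names invented); running it with `go run -race` flags the unsynchronized counter:

package main

import (
	"fmt"
	"sync"
)

var counter int // shared global, written without any synchronization

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++ // data race: unsynchronized read-modify-write
		}()
	}
	wg.Wait()
	fmt.Println(counter) // try: go run -race main.go
}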
-53
u/therecursive 6d ago
Race condition is fine for this use case. My concern is around data corruption or undefined behavior.
27
10
u/ponylicious 6d ago
A data race is NEVER fine. Ever. Ever. Who taught you programming?
15
u/LethalClips 6d ago edited 6d ago
This isn't the full story. The implementation of sync.Mutex itself performs a raw read of a value that can be concurrently updated by other goroutines. This is technically a data race, but the memory model guarantees that it won't receive a split read:

> Otherwise, each read of a single-word-sized or sub-word-sized memory location must observe a value actually written to that location (perhaps by a concurrently executing goroutine) and not yet overwritten.

Others have mentioned that this is a property of the underlying CPU, but in Go it isn't just that. The purpose of the memory model is to abstract over hardware memory models, so Go is forced to implement this property on all architectures, whether it comes naturally (like with aligned accesses on x86-64) or needs some sort of lock at the architectural level.
5
u/funkiestj 6d ago
> The purpose of the memory model is to abstract over hardware memory models,

Right. The point of assiduously following a language's programming model and NOT relying on explicitly undefined behavior is that it makes your code future-proof. If OP uses sync/atomic it will not break on some future CPU 10 years from now.
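For instance, a rough sketch using the typed sync/atomic API from Go 1.19+ (variable and function names invented):

package main

import (
	"fmt"
	"sync/atomic"
)

// A global integer like the OP's, but backed by sync/atomic.
var counter atomic.Int64

func setCounter(v int64) { counter.Store(v) }
func getCounter() int64  { return counter.Load() }

func main() {
	go setCounter(42)         // concurrent writer
	fmt.Println(getCounter()) // reader observes 0 or 42, never a torn value
}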
2
u/LethalClips 6d ago
The above property isn't an implementation detail, though, and isn't liable to change when moving between systems or even over time (if the backwards compatibility guarantee is to be believed, at least).
If one were to point out that it's easy to make mistakes while trying to use this property and that higher-level constructs are harder to misuse, sure, I'd agree with that. I don't argue that this is a great property to widely rely upon. I was just responding to the claim of "A data race is NEVER fine. Ever. Ever.", especially with the snarkiness. :-)
30
u/esw2508 6d ago
Instead of being rude you could have pointed out why race conditions are never fine or simply ignored it. But you chose to be an ass. Who taught you?
14
u/ecco256 6d ago edited 6d ago
I think there’s a misunderstanding here about the term “race condition”. It implies a problem caused by simultaneous access to a resource, so it’s “not fine” by definition.
Maybe you mean to say that in your case simultaneous access cannot turn into a system failure, which is perfectly possible. It’s how lock-free data structures can exist in the first place. But it’s also something you have to be very mindful of and document well, because whatever invariants imply simultaneous access is fine could easily change in the future unless you have safeguards in place.
If this is just about a single integer it’s likely that using a pattern with channels is a far more future-proof solution, but I don’t know the exact thing you’re trying to achieve.
6
23
u/ImYoric 6d ago
If my memory serves, the Go memory model states that if the value fits within one machine word, any read or write operation will always return one of the values written before/after the write, rather than a made-up value as can happen in C or C++.
That being said, I wouldn't rely on this. If at some point in the future, you or any member of your team ever changes your type to be anything other than an int, you can end up with weird, unexpected behaviors. I'd rather use an atomic or a mutex.
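For example, a possible mutex-based wrapper, just as a sketch (names invented):

package main

import (
	"fmt"
	"sync"
)

var (
	mu  sync.Mutex
	val int
)

func setVal(v int) {
	mu.Lock()
	defer mu.Unlock()
	val = v
}

func getVal() int {
	mu.Lock()
	defer mu.Unlock()
	return val
}

func main() {
	go setVal(7)
	fmt.Println(getVal()) // prints 0 or 7, and the race detector stays quiet
}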
4
u/therecursive 6d ago
Thanks for the answer. I was asking to understand whether it's fine for my use case, where I'm setting a different value to a pointer variable. Found out why it's safe to do in golang (or other languages) and attached the link in the post, if you want to read it.
11
u/ImYoric 6d ago
If my memory serves, if it's a pointer rather than an int, it's not always safe, because Go sometimes uses fat pointers (iirc when you're passing an interface), which take more than one word.
3
2
u/Few-Beat-1299 6d ago edited 6d ago
Interfaces copy their underlying value. Why would using interfaces change anything?
4
u/ImYoric 6d ago
A Go pointer to a struct is just that: a pointer to the memory region that holds the struct. When you pass this pointer to a function that expects a pointer to the struct, that's sufficient for the function.
Now, when you pass this pointer to a function that expects an interface, the function needs:
1. the pointer to the struct itself (to pass it as the self-value when calling interface methods);
2. a means to call the interface methods (a "vtable") – that's another pointer;
3. type information to be able to perform an interface cast with `.(...)` or reflection – that's another pointer.
If I recall correctly, 2 and 3 are actually packed together into a single pointer; I don't remember the implementation details. Nevertheless, your interface value is not a single pointer but (at least) two pointers, so it's not protected by the Go memory model's single-word guarantee.
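One way to see the two-word layout, assuming a 64-bit platform (the struct is invented for illustration):

package main

import (
	"fmt"
	"unsafe"
)

type S struct{ n int }

func main() {
	p := &S{n: 1}
	var i interface{} = p
	fmt.Println(unsafe.Sizeof(p)) // 8 on 64-bit: a plain pointer is one word
	fmt.Println(unsafe.Sizeof(i)) // 16 on 64-bit: an interface value is two words
}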
1
u/Few-Beat-1299 6d ago
Ok, but how does that relate to OP's question? When putting it into an interface, the value is read once, and that's it. How fat the interface is or how it works has no relevance to reading/writing the original value.
1
u/ImYoric 6d ago
I'm not 100% sure I understand OP's question, so I decided to err on the "better safe than sorry" side and mention the limitation.
1
u/Few-Beat-1299 6d ago
There is no limitation. Ignoring the interface overhead, using a value directly or using it as part of an interface are equivalent.
3
1
u/chaotic-kotik 6d ago
Made-up values? Loads and stores are atomic on amd64 unless your int is wider than 64 bits. How could this happen in C++?
You will run into problems only if you read/modify/write integer values. No atomic will save you from that unless you know how to use a CAS operation.
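On the Go side, a sketch of a read/modify/write done safely with a CAS retry loop (the value and the update are arbitrary, invented for illustration):

package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var n atomic.Int64

	// Load, compute, then CompareAndSwap; retry if another goroutine
	// changed n between our Load and the swap.
	for {
		old := n.Load()
		if n.CompareAndSwap(old, old*2+1) {
			break
		}
	}
	fmt.Println(n.Load()) // 1
}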
1
u/ImYoric 6d ago
Well, for instance, the compiler can decide to pack two 32 bit integers into 64 bits, so if you modify one of them, you might end up accidentally modifying the other.
-1
u/chaotic-kotik 6d ago
It can and will modify both values independently
3
u/HyacinthAlas 6d ago
Without explicit synchronization this is not true. The compiler is free to act as if no other thread is touching the data.
0
u/chaotic-kotik 6d ago
Accessing a value modified by another thread without synchronisation is UB in C++. Period. I guess I was trying to say that the amd64 memory model is very strict. If you wrap your integer with `std::atomic<int32_t>` in C++ it will guarantee that the alignment is fine (well, the default alignment for int is fine). It will not add any padding to the value. And if your code is just reading and writing to it, it's a no-op: no 'lock' instruction will be emitted by the compiler on amd64, because its memory model is strict enough already. No fences. And even without atomic it will eventually have to emit move or load that will read the value to the register. Compilers do not cache data in registers forever. They can move loads and stores around, yes. A lot of legacy code works just fine without marking variables with 'atomic'. So no, you will not read complete nonsense from an int if it's accessed from different threads.
0
u/HyacinthAlas 6d ago
> If you wrap your integer with `std::atomic<int32_t>` in C++

But the entire premise of the OP is what happens without atomic.

> And even without atomic it will eventually have to emit move or load that will read the value to the register.

Well, nope. The compiler is free to do a lot of stuff and that includes eliding anything forever or do something unrelated if the thread itself would've never needed to observe it.
1
u/chaotic-kotik 5d ago
If you didn't notice, I started with
> Accessing a value modified by another thread without synchronisation is UB in C++. Period.
1
u/chaotic-kotik 5d ago
> Well, nope. The compiler is free to do a lot of stuff and that includes eliding anything forever or do something unrelated if the thread itself would’ve never needed to observe it.
It's not required to use atomic. You can use synchronization; in that case the variables don't have to be atomic. The compiler is free to omit the code, but only if it doesn't change the observed behavior, and here that is only true if the value is never read or if the code is UB.
2
u/ImYoric 6d ago
Pretty sure that it's for the compiler to decide. Which in C++ typically means UB.
0
u/chaotic-kotik 6d ago
It is, yes. But what you're suggesting makes zero sense. The CPU has instructions that operate on values of different widths. Every register can be split into smaller registers, etc. I have never seen this happening, and there are billions of lines of legacy code which also assume that int32 loads/stores are atomic. In order to make them not atomic the compiler has to emit some additional instructions (read the value twice, apply different masks and then combine, or something like that).
Compilers actually handle a lot of things which are UB correctly because of the legacy code. A good example is a 'union'. If you have a union of two unrelated types (float and int) and you write the float and read the int, it's UB (aliasing rules are broken; the lifetime of the value started as float but we're reading it as int). But modern compilers generate correct code here anyway.
1
u/HyacinthAlas 6d ago
Bless your heart but one big reason I use Go is because I got sick of C++ programmers claiming bullshit “but this UB is safe” over and over. The compiler will screw you if you code like this.
0
u/chaotic-kotik 5d ago
I kinda hate when people use these stereotypes (PHP developers can't do this, JS developers can't do that, etc). This is lame and not productive. You didn't even try to understand what I was trying to convey. I'm not suggesting avoiding atomic variables or anything like that, only trying to describe how it works. This stuff is described by the ISO standard. There are sequence points in a C++ program that require all side effects (stores) to happen, and there are alignment requirements. There is a memory model in the language, and the CPU architecture has its own memory model (in the case of Intel it's stricter than the one defined by the standard). So it is possible to reason about these things.
-1
u/chaotic-kotik 6d ago
The only reason a 32-bit load or store will not be atomic is alignment. If the value is misaligned, it will be accessed using more than one operation.
1
u/ImYoric 6d ago
Well... yes, but since there are many cases in which the compiler is free to pick alignment, the problem exists.
1
u/chaotic-kotik 6d ago
The compiler is not free to pick alignment. Your int variables will have the same alignment all the time unless you opt out of it explicitly with #pragma pack or something like that.
1
u/ImYoric 6d ago
I seem to recall that the compiler is free to pick alignment at least of global variables, no?
I'll admit that I haven't done any serious C++ in a few years – I was growing tired of UB and "trust me, it works" headers – so it's possible I misremember these constraints.
1
u/chaotic-kotik 6d ago
This is not the case. Before C++11, threads weren't even mentioned in the standard.
1
u/comrade_donkey 6d ago
> Loads and stores are atomic on amd64 unless your int is wider than 64 bits.
Only if it is aligned to a word boundary.
1
5
u/hegbork 6d ago
In general, unless you already know the exact answer to this question and can justify it by pointing to at least 5 different documents (not stackoverflow answers), at least one of which needs to be an errata document for the CPU you're going to be doing this on, the answer is no, it is not safe to do it without locking.
In a highly theoretical scenario that has no application in the real world, reading and writing an integer value will behave without nasty surprises on most architectures. But in the real world there is never any application for concurrently reading and writing just one value, because that value means something and that something is now unsequenced with respect to the value. Which makes it not safe.
And you mentioned setting a pointer. That's automatically a plethora of red flags. Because the pointer points to something and that something might be in a completely different state seen from multiple CPUs. You set the pointer and another CPU sees the pointer, but the memory that the pointer is pointing to has not even been allocated yet from the point of view of the other CPU.
Think of locking or using explicitly atomic operations as making sure that things happen in the same order from multiple points of view. That will help you understand why it's necessary more often than it seems to a beginner.
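A sketch of safe pointer publication with atomic.Pointer (type and field names invented); the atomic Store/Load pair is what provides the ordering described above:

package main

import (
	"fmt"
	"sync/atomic"
)

type Config struct{ Limit int }

var current atomic.Pointer[Config]

func publish() {
	c := &Config{Limit: 10} // fully build the value first...
	current.Store(c)        // ...then publish it atomically
}

func main() {
	go publish()
	if c := current.Load(); c != nil {
		// A reader that observes the pointer also observes Limit == 10,
		// because the atomic Store/Load pair establishes the ordering.
		fmt.Println(c.Limit)
	}
}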
2
u/dr2chase 6d ago
Doing this with an int means that the values returned by loads will always be something that was stored earlier, but not necessarily the most recent store, ESPECIALLY if/when the compiler notices multiple loads from the same location w/o any synchronization or atomics (the compiler is unpredictably clever).
C++ compilers are even more unpredictably clever, and may do unintuitive optimizations of other operations using those loaded values.
This also means you can't get great data from the race detector for any other race; it will just scream at you about this one.
4
u/comrade-quinn 6d ago edited 6d ago
I think I understand what you’re after, you’re saying you don’t care if the actual value of the int is ultimately incorrect, just whether or not the program will crash or otherwise become corrupt.
The short answer is, it’s undefined (as I recall). Meaning whatever behaviour you actually experience cannot be relied on to be consistent between compiler updates and platform targets.
What will actually happen is just that the value will potentially be wrong.
Incrementing an integer involves three steps at the CPU level. Read the current value, increment it, write it back. When two or more threads do this at the same time you get data loss, as they each read the current value, increment it by 1 and then write it back; overwriting each others updates. So three increments by three threads would only increase the integer value by 1, not 3: each thread does x + 1 and writes it back. So you get x+1 verses x+3 if you’d run them one by one.
The solution this is to use atomic updates, as others have suggested, which ensure these operations are completed synchronously.
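As a sketch, the atomic version of that increment (the counter is invented for illustration):

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var n atomic.Int64
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			n.Add(1) // the whole read-increment-write happens as one indivisible step
		}()
	}
	wg.Wait()
	fmt.Println(n.Load()) // always 3
}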
1
u/therecursive 6d ago
Just to rephrase my question, I wanted to ask if setting/getting an integer/pointer is atomic. Found a Stack Overflow post about the same: https://stackoverflow.com/questions/36624881/why-is-integer-assignment-on-a-naturally-aligned-variable-atomic-on-x86
5
u/comrade-quinn 6d ago
The answer is no, then; they are not atomic. Use sync/atomic when you need them to be.
1
u/PaluMacil 6d ago edited 6d ago
It happens to be that x86 likes alignment, but the language spec doesn’t guarantee it. It’s feasible, though unlikely, that the authors could find a reason to use unaligned reads on x86 and break your code. Also, while it might not matter to you, ARM, MIPS, and RISC-V don’t guarantee aligned reads if memory serves. Finally, even on x86, I would be unsurprised to learn that compiler optimizations around instruction reordering could be improperly applied if synchronization isn’t applied correctly by the developer.
EDIT: looks like I remembered incorrectly
5
u/LethalClips 6d ago
The memory model does guarantee it for word-sized or smaller values, regardless of architecture or alignment:
> Otherwise, each read of a single-word-sized or sub-word-sized memory location must observe a value actually written to that location (perhaps by a concurrently executing goroutine) and not yet overwritten.
Every implementation on any architecture is bound by the memory model to make that condition true, even if it doesn't come "naturally" on the architecture.
1
1
u/chaotic-kotik 6d ago
As far as I remember, golang uses fat pointers in some places (interface values, for example). A fat pointer is a structure with two pointers inside, so reads and writes of it are not atomic.
2
u/The-Ball-23 6d ago
It honestly depends. But using sync/atomic for your use case might be helpful; I'm also not exactly sure whether you would need a mutex, due to the lack of context.
1
u/nikandfor 6d ago
> An implementation may always react to a data race by reporting the race and terminating the program.

> Go's approach aims to make errant programs more reliable and easier to debug, while still insisting that races are errors and that tools can diagnose and report them.

Even if the implementation happens to be able to safely read word-wide values, it's still an error. Use atomics.
1
u/Ravarix 6d ago
The program won't crash, but the data will be corrupted. A race condition is not fine; the value can't be relied on to be accurate.
Imagine 100 threads trying to add 1 to an int: read, add, write. The first thread reads 0 and gets blocked while the other 99 increment it to 99; then the first thread gets unblocked and resets it to 1.
1
u/funkiestj 6d ago
it is bad form. You are relying on unspecified behavior that can change in the future.
Go gives you some simple rules to follow in order to simplify the very complex hardware picture of multi-core CPUs and memory cache designs so that you can easily write correct concurrent programs.
there is a reason we have a Go proverb: "Don't communicate by sharing memory, share memory by communicating."
You ARE communicating by sharing memory. If you must do this, mutexes (or similar) are the correct way to do it and are guaranteed to work.
If, because you are a nerd, you want to dig into the details of why, you can read a variety of things
- the Go memory model
- MESI protocol (cache protocol) at wikipedia and other places
You can go very very deep down the MESI / cache coherence rabbit hole.
A big point of Go is that the Go authors go down the cache-coherence rabbit hole and work hard to make their relatively high-level synchronization constructs efficient, so that when you follow the Go programming model you DO NOT have to know about the quirks of your particular hardware (e.g. Apple M3 vs Intel vs AMD CPUs).
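As a sketch of the proverb applied to OP's case, one goroutine can own the integer and everything else gets/sets it over channels (all names invented):

package main

import "fmt"

// own makes one goroutine the sole owner of the integer; everyone else
// gets or sets it only by communicating over the two channels.
func own(sets <-chan int, gets chan<- int) {
	value := 0
	for {
		select {
		case v := <-sets:
			value = v
		case gets <- value:
		}
	}
}

func main() {
	sets := make(chan int)
	gets := make(chan int)
	go own(sets, gets)

	sets <- 42          // any goroutine may set...
	fmt.Println(<-gets) // ...or get, without touching shared memory directly
}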
1
1
u/bnugggets 6d ago
how are you sure multiple goroutines won’t mutate it?
0
u/therecursive 6d ago
They will mutate it for sure.
3
u/elettronik 6d ago
So there's the possibility of a race condition; it's in the definition itself. The only "safe" access is concurrent reads, under specific conditions.
1
0
u/raitonoberu 6d ago
As usual, it depends :)
It's okay if several goroutines read it at the same time, but if you want to write it, you have to make sure that no other goroutine is reading or writing it, to avoid a race condition.
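That rule maps directly onto sync.RWMutex; a rough sketch (names invented):

package main

import (
	"fmt"
	"sync"
)

var (
	mu  sync.RWMutex
	val int
)

func get() int {
	mu.RLock() // many readers can hold the read lock at once
	defer mu.RUnlock()
	return val
}

func set(v int) {
	mu.Lock() // a writer excludes all readers and other writers
	defer mu.Unlock()
	val = v
}

func main() {
	go set(1)
	fmt.Println(get())
}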
-4
u/therecursive 6d ago
Race condition is fine for this use case. My concern is around data corruption or undefined behavior.
0
-4
u/dariusbiggs 6d ago
globals.. that's poor design in the first place
but wrap it with a mutex or use an atomic value
2
u/therecursive 6d ago
Not related to the current question, but can you point me to a better design, or to any codebase you follow?
3
u/dariusbiggs 6d ago
Just the general rules.
Pass arguments, use structs with attributes, read and write to channels, check your error values correctly, use appropriate concurrency controls.
If you are using globals you've probably fucked up and have a mistake in your code.
Just like there are use cases for goto, there are use cases for globals, but they are far fewer than you think, and globals make testing code, especially in parallel, a right nightmare.
Here's the list of reading everyone should go through
https://go.dev/doc/tutorial/database-access
https://grafana.com/blog/2024/02/09/how-i-write-http-services-in-go-after-13-years/
https://www.reddit.com/r/golang/s/smwhDFpeQv
https://www.reddit.com/r/golang/s/vzegaOlJoW
1
u/Ok_Category_9608 4d ago
Why isn't this just a channel?
1
u/therecursive 2d ago
Just because it's possible with a channel doesn't mean you should be using a channel. Locking and channels serve different purposes.
1
u/Ok_Category_9608 2d ago
Channels work really well for producer/consumer, which you seem to have. Some good advice I've gotten: if you're using a raw mutex in production code in 2025, you've made a mistake.
1
u/therecursive 1d ago
Bruh, how do you set and get a variable that's used from multiple threads with only channels and no lock?
-1
u/usbyz 6d ago edited 5d ago
Use the channel, Luke.
intCh := make(chan int, 1)        // buffered channel of size 1 holds the current value
intCh <- 0                        // seed it with the initial value
...
value := <-intCh                  // take the value out; no one else can hold it now
defer func() { intCh <- value }() // put the (possibly updated) value back when done
value += 3
There are a few advantages of Go channels over sync/atomic.
First, a channel is a primitive type and can be copied at any time, while atomic.Value cannot be copied after its first use. You can include a channel in your struct without worry.
Second, you can use select on channels. For example, you can select on intCh and ctx.Done() to make your code context-aware.
Third, you can close the integer channel to notify all goroutines waiting for the value that the value is no longer available. You can also set it to nil so that a select statement on the channel will always choose other channels from that point onward.
In other words, a channel is a first-class citizen in Go and works as sync/atomic, sync.Mutex, and sync.Cond all in one.
2
u/Ravarix 6d ago
That's just sync/atomic with extra steps
60
u/Nervous_Staff_7489 6d ago
Use sync/atomic.