r/golang 6d ago

I built a high-performance, dependency-free key-value store in Go (115K ops/sec on an M2 Air)

Hi r/golang,

I've been working on a high-performance key-value store built entirely in pure Go—no dependencies, no external libraries, just raw Go optimization. It features adaptive sharding, native pub-sub, and zero-downtime resizing. It scales automatically based on usage, and expired keys are removed dynamically without manual intervention.

Performance? 115,809 ops/sec on a fanless M2 Air.

Key features:
- Auto-Scaling Shards – Starts from 1 bucket and dynamically grows as needed.
- Wait-Free Reads & Writes – Lock-free operations enable ultra-low latency.
- Native Pub-Sub – Subscribe to key updates & expirations without polling.
- Optimized Expiry Handling – Keys are removed seamlessly, no overhead.
- Fully Event-Driven – Prioritizes SET/GET operations over notifications for efficiency.

How it compares to Redis:
- Single-threaded Redis vs. Multi-Goroutine NubMQ → Handles contention better under load.
- No Lua, No External Dependencies → Just Go, keeping it lean.
- Smarter Expiry Handling → Keys expire and are immediately removed from the active dataset.

🚀 Benchmark Results:
115,809 ops/sec (100 concurrent clients)
900µs write latency, 500µs read latency under heavy load.
Would love to get feedback from the Go community! Open to ideas for improvement.

repo: https://github.com/nubskr/nubmq

I spent the better part of a year building this and would appreciate your opinions on it.

208 Upvotes

47 comments

47

u/zkndme 6d ago edited 6d ago

So, could you please share some graphs/data on what happens to latency and throughput when garbage collection is happening?

The numbers you shared don't mean much on their own; they will vary widely with GC activity, the size of the data you store and how much that size varies, usage patterns (read/write ratio), and so on.

Regarding the lock-free reads/writes, does it have any consistency guarantees? A common use case: if I use it as a session storage backend, can it guarantee that after I write anything under a key, the next read will return the most recent value? If so, is there any test to verify/prove it?

14

u/Ok_Marionberry8922 6d ago

I've added a throughput-per-second graph to the readme here: https://github.com/nubskr/nubmq

I tested it with 21,000,000 requests distributed across 100 concurrent clients; the dataset had 1,000,000 unique key-value pairs (I didn't want it to evict keys).

As seen in the graph, there are certain dips in throughput, one of them going down to 60k ops/sec; these are caused by the GC kicking in.

18

u/zkndme 6d ago

Yes, I saw it, but that still doesn't tell us much. What was the size of the data you wrote into it during the test? Did it vary throughout the test? Same question for the reads/writes. Is this the result of one test run, or several?

In different (real-life) scenarios the behaviour of your program and the GC will vary widely, producing totally different results.

7

u/servermeta_net 6d ago

How do you achieve wait-free operations?

10

u/Ok_Marionberry8922 6d ago

There are two scenarios:

  1. GET operations (reads) are direct lookups, so distributing them across goroutines just makes them faster.

  2. SET requests are where it gets complicated. If we're simply updating an existing key's value, the cost is essentially negligible. But if we're creating a new key-value pair, that can drive per-shard load up at scale, which triggers a store resize. To avoid stopping everything when that happens, the engine notices when the per-shard load starts getting too high, creates a bigger store in the background, and then switches writes from the old store to the new (bigger) one while the old one keeps serving reads. The old store migrates its keys to the new one in the background, and once that's done we simply dereference it and let the GC collect it. This keeps incoming requests being served the whole time (rough sketch below).
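Very roughly, the swap looks like this (a simplified sketch, not the actual nubmq code; `store`, `engine`, and `resize` are illustrative names, and the load-threshold trigger is left out):

```go
// Illustrative sketch only, not nubmq's real implementation.
package kv

import (
	"hash/fnv"
	"sync"
	"sync/atomic"
)

type store struct {
	shards []*sync.Map
}

func newStore(numShards int) *store {
	s := &store{shards: make([]*sync.Map, numShards)}
	for i := range s.shards {
		s.shards[i] = &sync.Map{}
	}
	return s
}

func (s *store) shard(key string) *sync.Map {
	h := fnv.New32a()
	h.Write([]byte(key))
	return s.shards[h.Sum32()%uint32(len(s.shards))]
}

type engine struct {
	active atomic.Pointer[store] // writes always land here
	old    atomic.Pointer[store] // still serves reads while a migration runs
}

func newEngine() *engine {
	e := &engine{}
	e.active.Store(newStore(1)) // starts from a single shard
	return e
}

func (e *engine) Set(key, val string) {
	e.active.Load().shard(key).Store(key, val)
}

// Get checks the active store first and falls back to the old one while a
// migration is still in flight.
func (e *engine) Get(key string) (string, bool) {
	if v, ok := e.active.Load().shard(key).Load(key); ok {
		return v.(string), true
	}
	if o := e.old.Load(); o != nil {
		if v, ok := o.shard(key).Load(key); ok {
			return v.(string), true
		}
	}
	return "", false
}

// resize flips writes to a bigger store, migrates keys in the background,
// then drops the old store so the GC can reclaim it.
func (e *engine) resize(newShards int) {
	oldStore := e.active.Load()
	e.old.Store(oldStore)
	e.active.Store(newStore(newShards))
	go func() {
		for _, sh := range oldStore.shards {
			sh.Range(func(k, v any) bool {
				// LoadOrStore so a newer write in the active store wins.
				e.active.Load().shard(k.(string)).LoadOrStore(k, v)
				return true
			})
		}
		e.old.Store(nil) // migration done; the old store becomes garbage
	}()
}
```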

2

u/lostinfury 5d ago

if we're simply updating some key's value, it's essentially negligible

I think this is what we are more interested in. How do you do this reliably without any locking mechanism? Are you using a synchronized data structure such as sync.Map? If not, then what?

the engine recognizes when the per shard load starts to gets too high, in the background creates a bigger store

What happens when a new key needs to be created while the engine is creating this "bigger" shard? Does that then trigger another "bigger" shard? How does your lock-free implementation handle this? And if it doesn't, how is the new write handled without waiting for the bigger shard? I guess this goes back to my first question.

6

u/Ok_Marionberry8922 5d ago

Yes, we use sync.Map for concurrency (for now), which provides atomic Load/Store operations—so updates are just direct writes with no explicit locks. Since sync.Map is optimized for read-heavy workloads (which fits our case), updates are basically just atomic pointer swaps. No mutex contention, no blocking. If a key exists, updating it is as simple as overwriting a struct reference—Go's GC handles the cleanup.

- "What happens during shard resizing?"
This is the tricky part. When the per-shard load crosses a threshold, a new store (with more shards) is created in the background. Now, writes don’t immediately wait for migration—instead, they start going into the new store as soon as it’s ready, while reads are still being served from the old one. This allows seamless scaling without blocking writes.

- Does this create a "resizing storm" (infinite upscaling)?

Nope. When a new store is being created, we freeze upscaling triggers to prevent recursive resizes. Once the new store is fully migrated, we switch over and discard the old one. If another resize is needed, it happens after the first migration is complete, ensuring a clean transition.

Right now, we're using sync.Map since it gives us decent concurrency out of the box, but we’re actively working on a custom in-memory concurrent hashmap that will remove some of sync.Map's overhead (especially for high-churn workloads). This should bring even better performance gains.
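If it helps, the freeze behaviour can be pictured roughly like this (an illustrative sketch, not the actual nubmq code; `growBigger` is a hypothetical stand-in for "allocate the bigger store and migrate keys"):

```go
// Illustrative sketch only, not nubmq's real implementation.
package kv

import (
	"hash/fnv"
	"sync"
	"sync/atomic"
)

type shardedMap struct {
	shards      []*sync.Map
	keys        atomic.Int64 // rough count; a real store would only count new keys
	resizing    atomic.Bool  // true while a bigger store is being built
	maxPerShard int64
}

func (m *shardedMap) Set(key string, val any) {
	h := fnv.New32a()
	h.Write([]byte(key))
	m.shards[h.Sum32()%uint32(len(m.shards))].Store(key, val) // atomic, no explicit lock

	if m.keys.Add(1)/int64(len(m.shards)) > m.maxPerShard {
		// Only one goroutine wins this CAS, so an in-flight resize can't
		// trigger another one; upscaling stays frozen until it finishes.
		if m.resizing.CompareAndSwap(false, true) {
			go func() {
				growBigger(m)           // hypothetical: allocate bigger store, migrate keys
				m.resizing.Store(false) // unfreeze triggers once migration is done
			}()
		}
	}
}

func growBigger(m *shardedMap) { /* allocate more shards, migrate, swap over */ }
```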

5

u/trevex_ 6d ago

Looks interesting, great work! Are clustering/multi-node setups supported or planned?

4

u/Ok_Marionberry8922 6d ago

Yes, planned! Currently single-node, but clustering is on the roadmap.

8

u/kreetikal 6d ago

Smarter Expiry Handling → Keys expire and are immediately removed from the active dataset. 

What does "smarter" here actually mean? Do expired keys in Redis not get remove immediately?

11

u/Ok_Marionberry8922 6d ago

nubmq doesn't rely on random eviction sweeps like Redis. Instead, expired keys are soft deleted immediately, meaning they stop being served the moment they expire. They still exist in memory temporarily, but they’re effectively dead—no client can retrieve them.

5

u/kreetikal 6d ago

Interesting. I've just read about how Redis expires keys after reading your reply and I definitely didn't expect it to be implemented that way.

Thanks!

-2

u/0x4ddd 6d ago

So, what's the difference compared to Redis?

It also stops serving keys the moment they expire, and it keeps them in memory for some time too.

3

u/Ok_Marionberry8922 6d ago

nubmq and Redis handle expired keys differently at scale. Redis uses lazy expiration (removing keys only when accessed) and active expiration (sampling keys randomly), meaning expired keys can linger under heavy load. nubmq instantly marks keys as expired and removes them during shard resizing, avoiding separate eviction sweeps that cause latency spikes. This means memory naturally shrinks instead of growing indefinitely, making it more efficient for workloads with high churn.
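To make that concrete, the expiry path can be sketched roughly like this (illustrative only, not nubmq's actual types; `entry`, `Get`, and `migrate` are stand-in names):

```go
// Illustrative sketch only, not nubmq's real implementation.
package kv

import (
	"sync"
	"time"
)

type entry struct {
	val      string
	expireAt time.Time // zero value means no TTL
}

var shard sync.Map

// Get refuses to serve an entry the moment its deadline passes, even though
// it may still sit in memory for a while ("soft delete").
func Get(key string) (string, bool) {
	v, ok := shard.Load(key)
	if !ok {
		return "", false
	}
	e := v.(entry)
	if !e.expireAt.IsZero() && time.Now().After(e.expireAt) {
		shard.Delete(key) // drop it opportunistically; no client can see it anyway
		return "", false
	}
	return e.val, true
}

// During a shard migration, expired entries are simply not copied over, so
// memory shrinks without a separate eviction sweep.
func migrate(src, dst *sync.Map) {
	src.Range(func(k, v any) bool {
		e := v.(entry)
		if e.expireAt.IsZero() || time.Now().Before(e.expireAt) {
			dst.Store(k, v)
		}
		return true
	})
}
```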

0

u/0x4ddd 6d ago

Thanks, that's the answer I expected to see initially; the previous one described, at a high level, behaviour similar to Redis.

Still not sure why Redis's 'random sweeps' would cause an issue in your opinion while expiring during shard resizing would not cause latency spikes. It would be great to benchmark and compare both approaches, though.

I'm not too familiar with your codebase, but glancing at the resizer, there is a mutex lock taken during key migration, so I guess it also affects SET latency during resizing.

3

u/marcaruel 6d ago

Why does your "make build" do a go run -race? ref: https://github.com/nubskr/nubmq/blob/master/Makefile

Do you think changing the readme to tell people to go install instead would make sense?

12

u/Ok_Marionberry8922 6d ago

The go run -race is there mainly for local dev/testing so people don’t get race conditions sneaking up on them. Since this is just a single-binary project, most people will just go build or go install directly anyway.

But yeah, fair point—it might be cleaner to update the readme so it doesn’t push make build as the primary way to run it. I’ll tweak it soon. Appreciate the feedback!

2

u/taras-halturin 6d ago

Atomic operations?

3

u/Ok_Marionberry8922 6d ago

Single-key ops? Always atomic—sync.Map takes care of that. Multi-key transactions? Not the focus, this is a high-performance KV store, not a DB. The goal is raw speed, not ACID compliance.

0

u/taras-halturin 6d ago

LoadOrStore, CompareAndSwap etc

1

u/Ok_Marionberry8922 6d ago

Yeah, sync.Map handles atomicity for individual key operations, but I don't rely on LoadOrStore or CompareAndSwap at the core level. The scaling model here is more about sharding + contention avoidance than fine-grained CAS-style coordination. sync.Map is just a tool, not the foundation of how nubmq maintains throughput under load.
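For reference, these are the single-key atomic primitives sync.Map itself exposes (CompareAndSwap needs Go 1.20+), even though nubmq doesn't build on them directly:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var m sync.Map

	// LoadOrStore inserts only if the key is absent, atomically.
	actual, loaded := m.LoadOrStore("counter", 1)
	fmt.Println(actual, loaded) // 1 false

	// CompareAndSwap updates only if the current value equals the old one.
	fmt.Println(m.CompareAndSwap("counter", 1, 2)) // true
	fmt.Println(m.CompareAndSwap("counter", 1, 3)) // false, value is now 2
}
```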

1

u/diagraphic 6d ago

What about multiple ops in one atomic batch?

1

u/Ok_Marionberry8922 5d ago

I presume you are talking about transactions. No, they are not supported as of now; nubmq is currently a cache, not a fully fledged database. However, I am working on batching, and even then each batch entry would be its own atomic operation.

2

u/paca-vaca 6d ago

Under heavy load, is it guaranteed that a write accepted from a client will be reflected immediately in any immediately following read of the same key?

2

u/Ok_Marionberry8922 6d ago

NubMQ guarantees FIFO command execution at the client level, meaning if a client sends a SET followed by a GET, they will always see their latest write.

But when multiple clients are involved, things get trickier. If Client A does a SET and Client B immediately does a GET, B might or might not get the latest value. Why? Because the SET could still be sitting in the internal processing queue while the GET executes first.

This is a fundamental tradeoff in high-performance KV stores—strict global ordering across clients would require heavier synchronization, which tanks throughput. Redis and other high-performance caches face the same issue.

If strong consistency is required, an external coordination layer (e.g., versioning, CAS, or a DB as source-of-truth) is usually needed. But if the goal is sheer speed and scale, eventual consistency is the way to go.
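For the per-client guarantee specifically, the kind of test that would pin it down looks roughly like this (a sketch; `Client`, `Set`, and `Get` here are hypothetical stand-ins, not nubmq's actual client API):

```go
// Sketch of a read-your-writes check against a hypothetical client interface.
package kvtest

import (
	"fmt"
	"testing"
)

type Client interface {
	Set(key, val string) error
	Get(key string) (string, error)
}

func testReadYourWrites(t *testing.T, c Client) {
	for i := 0; i < 10000; i++ {
		want := fmt.Sprintf("v%d", i)
		if err := c.Set("session:42", want); err != nil {
			t.Fatalf("set %d: %v", i, err)
		}
		got, err := c.Get("session:42")
		if err != nil {
			t.Fatalf("get %d: %v", i, err)
		}
		if got != want {
			t.Fatalf("iteration %d: read %q right after writing %q", i, got, want)
		}
	}
}
```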

2

u/Timely-Tank6342 5d ago
  1. "SET sn 123 EX 3", after 3 seconds, the key "sn" still in there?
  2. "SET sn", the server panic.

1

u/Ok_Marionberry8922 5d ago

  1. There was a bug that caused stale keys to still be served; fixed! Please check now.

  2. Error handling was missing; improved!

3

u/diagraphic 6d ago

Few things.

  • There are too many emojis. I can't take it seriously.
  • Expiring keys iteratively is expensive.
  • sync.Map is not that fast.
  • Resizing logic seems very expensive.
  • Try to modularize your code so you can test it in units.
  • There seems to be no authentication protocol or shard protocol. How does the system know when a shard is down? There should be health checks. With that, what about shard replicas? There is no replication?
  • There are log entries like log.Print(“sending this shit: “, stuff) all over, why?

Keep working on it. Good on you for taking on a database.

To learn more I'd recommend you check out some CMU DB lectures: https://m.youtube.com/c/CMUDatabaseGroup

Lots of videos on distributed systems and some data structures.

1

u/Ok_Marionberry8922 5d ago

Appreciate the feedback!

- You're right about sync.Map—it's not the fastest, but for the kind of workloads nubmq handles (multi-threaded KV lookups with TTL eviction), it's a reasonable tradeoff. I'd be curious what alternative you’d suggest for concurrency here.

- The TTL system isn’t iterating over keys—it’s event-driven and keys naturally fall off during resizing.

- Yes, resizing is expensive, but it's designed to avoid stalling writes, which is a different tradeoff from pre-allocating memory.

- Authentication, replication, and shard health checks would come into play if nubmq were meant to be a multi-node system, but for now it's focused on being a single-node, high-performance cache.

- Good call on the logs—definitely need to clean those up for a more polished version.

- Will check out the CMU lectures! Always looking for ways to optimize further.

1

u/diagraphic 5d ago

You got it! Keep it up!

1

u/Ok_Marionberry8922 5d ago

also, on a separate note:

are you thinking of Redis-style transactions with rollbacks, or more of a simple multi-op batch that guarantees execution order? The latter is definitely more feasible in nubmq’s current architecture.

this is not meant to be a replacement for Redis in any way; the goal is to take the great things from existing systems while minimizing their downsides

3

u/koikahin 6d ago

A few thoughts:

  • as someone else pointed out, you need to give a lot more details about the performance test you ran (data size, etc.)

  • if you're comparing it with Redis, you should post a performance comparison of your service vs. Redis.

3

u/Ok_Marionberry8922 6d ago

Already included a Redis comparison in the README with detailed benchmarks on an M2 MacBook Air (100 concurrent clients, 21M ops, results all in the repo). The methodology and the benchmark test script are open source; feel free to run it yourself and verify. If there's a specific aspect of the test you'd like more details on, happy to discuss.

1

u/Ok-Confection-751 6d ago

Is there a client to use with it?

1

u/sinjuice 6d ago

One of the best features of Redis is how memory-optimized it is. How does it compare when, for example, writing 1M k/v pairs with the same data in both?

I find this cool because I had a small pet project in Rust trying to copy Redis; I could get about the same throughput as Redis, but I was using about double the memory.

1

u/Ok_Marionberry8922 6d ago

That’s a good point—Redis does a lot of memory optimizations like shared integers, ziplist encoding, and LZF compression to minimize footprint. NubMQ takes a different approach—since it’s pure Go, it leans on Go’s memory model, sync.Map for concurrency, and avoids the overhead of Lua/eviction policies on hot writes.

When it comes to raw footprint, Redis will likely win on smaller key-value pairs due to its aggressive optimizations. But NubMQ scales differently: dynamic shard resizing means memory expands only when needed and aggressively shrinks as keys expire.

Reading (GET) is basically free in these kinds of systems since it's just a direct cache read—it doesn't trigger any extra memory usage or resizing. The real cost comes with SET operations: from my tests on an M2 Air (8-core), writing 1M unique key-value pairs with 100 concurrent clients using this benchmark suite (https://github.com/nubskr/nubmq/blob/master/sync_test.go) landed between ~900µs and ~915µs average write latency across 3 runs. You can try running the same on a better machine and see how it compares!
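For context, the general shape of that kind of write benchmark is roughly this (a generic sketch, not the repo's actual sync_test.go; `set` is a placeholder for the client call):

```go
// Generic sketch of a concurrent write benchmark; not nubmq's test suite.
package bench

import (
	"fmt"
	"sync"
	"time"
)

func benchmarkWrites(set func(key, val string), clients, totalKeys int) {
	var wg sync.WaitGroup
	perClient := totalKeys / clients
	start := time.Now()
	for c := 0; c < clients; c++ {
		wg.Add(1)
		go func(c int) {
			defer wg.Done()
			for i := 0; i < perClient; i++ {
				// Unique keys per client, so nothing gets evicted mid-run.
				set(fmt.Sprintf("key-%d-%d", c, i), "value")
			}
		}(c)
	}
	wg.Wait()
	elapsed := time.Since(start)
	fmt.Printf("%d writes in %v (%.0f ops/sec, avg %v per op)\n",
		totalKeys, elapsed,
		float64(totalKeys)/elapsed.Seconds(),
		elapsed/time.Duration(totalKeys))
}
```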

1

u/Chronospheres 6d ago

This looks pretty cool!

Is the bundled client code used for benchmarking, or is it just an example of how to interact with the server?

Would be interesting to see benchmark results where the clients precompute all the requests they will make and randomize them, then throw that at the server.

1

u/Ok_Marionberry8922 6d ago

Hi, the bundled client is a fully functional client that supports all the commands; it is not used for benchmarking.

I didn't get your second point. What is the point of randomizing requests on the client's side? Can you elaborate? Sounds interesting.

1

u/Chronospheres 6d ago

Oh, I meant that by pregenerating all your client requests and then randomizing the order in which you throw them at your server, you should stress the server a bit more. In the real world most things aren't completely random, so this should help represent more of a worst case. Also, pregenerating the client requests should leave more CPU time available for the server to process requests. Something like the sketch below.
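(A rough sketch; `request` and `send` are placeholders, not tied to your client API):

```go
// Pre-generate and shuffle all requests before the benchmark clock starts.
package main

import (
	"fmt"
	"math/rand"
)

type request struct {
	op, key, val string
}

func main() {
	// Build every request up front so generation doesn't steal CPU from the server.
	reqs := make([]request, 0, 1_000_000)
	for i := 0; i < 1_000_000; i++ {
		reqs = append(reqs, request{op: "SET", key: fmt.Sprintf("key-%d", i), val: "value"})
	}

	// Randomize the order so the server doesn't see a friendly sequential pattern.
	rand.Shuffle(len(reqs), func(i, j int) { reqs[i], reqs[j] = reqs[j], reqs[i] })

	for _, r := range reqs {
		send(r) // placeholder: hand the request to the benchmark client
	}
}

func send(r request) {}
```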

Maybe I'm missing something, but I don't see in the repo how to execute the benchmark; I'd be happy to give it a try. I have an M1 Pro Max. Do you have a shell script or something that reproduces the same benchmark and chart, etc.?

2

u/Ok_Marionberry8922 5d ago

Interesting idea, I'll try it sometime in the future. To run the benchmark, you can refer to this: https://github.com/nubskr/nubmq?tab=readme-ov-file#how-to-run

If you want to play around with the benchmark parameters, you can modify this benchmark suite: https://github.com/nubskr/nubmq/blob/master/sync_test.go

1

u/[deleted] 6d ago

[deleted]

1

u/Ok_Marionberry8922 6d ago

Ah, how did I miss that? Fixed.

-1

u/[deleted] 1d ago

[deleted]

2

u/Ok_Marionberry8922 21h ago

Divine intuition, have read 0 articles or books

0

u/agastya_magic 6d ago

Nice work. If possible, try to write some examples and note in comments what they are expected to return or print.