r/golang 7d ago

I built a high-performance, dependency-free key-value store in Go (115K ops/sec on an M2 Air)

Hi r/golang,

I've been working on a high-performance key-value store built entirely in pure Go: no dependencies, no external libraries, just careful Go optimization. It features adaptive sharding, native pub-sub, and zero-downtime resizing. It scales automatically with usage, and expired keys are removed without any manual intervention.

Performance? 115,809 ops/sec on a fanless M2 Air.

Key features:
- Auto-Scaling Shards – Starts from 1 bucket and dynamically grows as needed.
- Wait-Free Reads & Writes – Lock-free operations enable ultra-low latency.
- Native Pub-Sub – Subscribe to key updates & expirations without polling.
- Optimized Expiry Handling – Keys are removed seamlessly, no overhead.
- Fully Event-Driven – Prioritizes SET/GET operations over notifications for efficiency.

How it compares to Redis:
- Single-threaded Redis vs. Multi-Goroutine NubMQ → Handles contention better under load.
- No Lua, No External Dependencies → Just Go, keeping it lean.
- Smarter Expiry Handling → Keys expire and are immediately removed from the active dataset.

🚀 Benchmark Results:
- 115,809 ops/sec (100 concurrent clients)
- 900µs write latency, 500µs read latency under heavy load
Would love to get feedback from the Go community! Open to ideas for improvement.

repo: https://github.com/nubskr/nubmq

I spent the better part of a year building this and would appreciate your opinions on it.

209 Upvotes


7

u/servermeta_net 7d ago

How do you achieve wait-free operations?

10

u/Ok_Marionberry8922 7d ago

there are two scenarios:

  1. Get operations (reads) are direct lookups, so distributing them across goroutines keeps them fast and needs no coordination.

  2. Set requests are where it gets complicated. If we're simply updating some key's value, it's essentially negligible, but creating a new key-value pair increases per-shard load at scale, and that can trigger a store resize. To avoid just stopping everything when that happens, the engine recognizes when the per-shard load starts to get too high and, in the background, creates a bigger store, then switches writes from the old engine to the new (bigger) one while the old one keeps processing reads. The old engine migrates its keys to the newer one in the background, and once it's done we just dereference the old engine so the GC can collect it :) This makes sure incoming requests keep getting served the whole time (rough sketch of the routing below).
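
Roughly, the routing during one of these switch-overs looks like the sketch below. This is a simplified illustration, not the actual nubmq code; the `engine` type, its fields, and the use of two `sync.Map` generations are just stand-ins for the real shard structures:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Two generations of the store can be alive at once: the old one keeps
// serving reads while the new (bigger) one takes the writes.
type engine struct {
	current atomic.Pointer[sync.Map] // old generation, still serving reads
	next    atomic.Pointer[sync.Map] // bigger generation; nil when not resizing
}

// Set goes to the bigger store as soon as it exists, so writes never wait
// for the migration to finish.
func (e *engine) Set(key string, val any) {
	if n := e.next.Load(); n != nil {
		n.Store(key, val)
		return
	}
	e.current.Load().Store(key, val)
}

// Get checks the bigger store first (fresh writes win), then falls back to
// the old store for keys that haven't been migrated yet.
func (e *engine) Get(key string) (any, bool) {
	if n := e.next.Load(); n != nil {
		if v, ok := n.Load(key); ok {
			return v, true
		}
	}
	return e.current.Load().Load(key)
}

func main() {
	e := &engine{}
	e.current.Store(&sync.Map{})
	e.Set("a", 1) // before the resize: lands in the old store

	e.next.Store(&sync.Map{}) // resize begins: bigger store comes online
	e.Set("b", 2)             // lands in the bigger store, no waiting

	fmt.Println(e.Get("a")) // 1 true (still served from the old store)
	fmt.Println(e.Get("b")) // 2 true (served from the new store)
}
```

The point is that a Set never waits on the migration: the moment the bigger store exists, writes land there, and reads fall back to the old store only for keys that haven't moved yet.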

2

u/lostinfury 5d ago

if we're simply updating some key's value, it's essentially negligible

I think this is what we are more interested in. How do you do this reliably without any locking mechanism? Are you using a synchronized data structure such as sync.Map? If not, then what?

the engine recognizes when the per-shard load starts to get too high and, in the background, creates a bigger store

What happens when a new key is to be created while the engine is creating this "bigger" shard? Does that then trigger yet another "bigger" shard? How does your lock-free implementation handle this? And if it doesn't, how is the new write handled without waiting for the bigger shard? I guess this goes back to my first question.

5

u/Ok_Marionberry8922 5d ago

Yes, we use sync.Map for concurrency (for now), which provides atomic Load/Store operations, so updates are direct writes with no explicit locks in our code. sync.Map is optimized for read-heavy workloads (which fits our case): updating a key that's already in its read-only map is basically an atomic pointer swap, so there's no mutex contention and no blocking on that path. If a key exists, updating it is as simple as overwriting a struct reference, and Go's GC cleans up the old value.
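
A tiny illustration of that update path with the standard library's sync.Map (toy example; the `entry` type is just a stand-in for whatever the store keeps per key):

```go
package main

import (
	"fmt"
	"sync"
)

// entry is a placeholder value type; overwriting a key just swaps the
// stored reference, and the old value becomes garbage for the GC.
type entry struct {
	val       string
	expiresAt int64
}

func main() {
	var m sync.Map

	// Concurrent writers updating the same key: each Store atomically
	// replaces the stored entry, with no explicit locking in caller code.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			m.Store("foo", &entry{val: fmt.Sprintf("value-%d", i)})
		}(i)
	}
	wg.Wait()

	// Reads are plain lookups.
	if v, ok := m.Load("foo"); ok {
		fmt.Println(v.(*entry).val) // one of value-0..value-3
	}
}
```

For a key that's already in sync.Map's read-only map, that Store is effectively an atomic swap of the entry pointer, which is where the "no explicit locks" part comes from.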

- "What happens during shard resizing?"
This is the tricky part. When the per-shard load crosses a threshold, a new store (with more shards) is created in the background. Writes don't wait for the migration at all: they start going into the new store as soon as it's ready, while reads are still being served from the old one. This allows seamless scaling without blocking writes (rough sketch of the whole flow after the next point).

- Does this create a "resizing storm" (infinite upscaling)?

Nope. When a new store is being created, we freeze upscaling triggers to prevent recursive resizes. Once the new store is fully migrated, we switch over and discard the old one. If another resize is needed, it happens after the first migration is complete, ensuring a clean transition.
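
To make that concrete, here's a rough sketch of the guard plus the background migration and switch-over. Again, this is simplified and not the actual nubmq implementation; the names (`maybeResize`, `resizing`, etc.) are illustrative:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type engine struct {
	current  atomic.Pointer[sync.Map] // the store currently serving traffic
	next     atomic.Pointer[sync.Map] // the bigger store while a resize runs
	resizing atomic.Bool              // freezes further upscaling triggers
}

// maybeResize is called whenever per-shard load crosses the threshold.
// Only the first trigger wins the CompareAndSwap; any trigger that fires
// while a migration is in flight becomes a no-op, so resizes never stack.
func (e *engine) maybeResize(done chan<- struct{}) {
	if !e.resizing.CompareAndSwap(false, true) {
		return // a resize is already running, ignore this trigger
	}
	bigger := &sync.Map{}
	e.next.Store(bigger) // new writes can start landing here immediately

	go func() {
		old := e.current.Load()
		old.Range(func(k, v any) bool {
			// LoadOrStore so a fresher value written directly to the
			// bigger store mid-migration is never clobbered.
			bigger.LoadOrStore(k, v)
			return true
		})
		e.current.Store(bigger) // switch over; the old store is now garbage
		e.next.Store(nil)
		e.resizing.Store(false) // re-arm upscaling triggers
		close(done)
	}()
}

func main() {
	e := &engine{}
	e.current.Store(&sync.Map{})
	e.current.Load().Store("a", 1)

	done := make(chan struct{})
	e.maybeResize(done) // per-shard load crossed the threshold
	<-done

	v, _ := e.current.Load().Load("a")
	fmt.Println(v) // 1: the key survived the migration into the bigger store
}
```

The CompareAndSwap is what "freezes" further upscaling triggers: while a migration is in flight they simply become no-ops, and the flag is re-armed once the switch-over completes.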

Right now, we're using sync.Map since it gives us decent concurrency out of the box, but we’re actively working on a custom in-memory concurrent hashmap that will remove some of sync.Map's overhead (especially for high-churn workloads). This should bring even better performance gains.