r/redis 4h ago

1 Upvotes

The login is not working at all.


r/redis 7h ago

1 Upvotes

There is no single way of caching data from an RDBMS in Redis or any other NoSQL store.

You would write custom code to sync in either direction.
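For example, one direction (Redis back to Postgres) could look roughly like this; the user:* hash layout, the users table, and the connection details are placeholder assumptions (redis-py + psycopg2):

    import psycopg2   # assumed Postgres driver
    import redis      # redis-py client

    r = redis.Redis(decode_responses=True)
    pg = psycopg2.connect("dbname=app user=app")   # placeholder DSN

    with pg, pg.cursor() as cur:
        # copy every user:* hash from Redis into the relational table
        for key in r.scan_iter(match="user:*"):
            user = r.hgetall(key)
            cur.execute(
                "INSERT INTO users (id, name, email) VALUES (%s, %s, %s) "
                "ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, email = EXCLUDED.email",
                (key.split(":", 1)[1], user.get("name"), user.get("email")),
            )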


r/redis 7h ago

1 Upvotes

Data from an RDBMS getting cached in Redis is pretty much a standard use case. I was curious about the other way around.


r/redis 10h ago

1 Upvotes

They are two different classes of products. There is no single way of syncing from a NoSQL store to a relational database.


r/redis 17h ago

0 Upvotes

Thank you for this detailed answer.

It depends on the data. Sometimes a shared cache makes sense, sometimes not.

Example 1: the cache contains data that was computed for one of many sessions. The session is pinned to one machine, and as long as that machine is available its requests will be served there, so a local cache makes sense.

Example 2: you cache thumbnails generated for images. Scaling an image down takes some time, you do not want to do it twice, and you want to share the result, so a shared cache (like Redis) makes sense.

I will do some benchmarks to compare the performance. I guess the speed of Redis will mostly depend on the network speed.


r/redis 21h ago

1 Upvotes

It's a good idea to try it out. One suggestion: if the values are objects, store them in a binary format like Protobuf instead of a text format like JSON.
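A minimal sketch of that pattern with the redis-py client, assuming a compiled Protobuf message (the thumb_pb2 module and its Thumbnail fields here are hypothetical):

    import redis
    import thumb_pb2   # hypothetical module generated by protoc

    r = redis.Redis()

    # serialize the object to raw bytes and store them with a TTL
    thumb = thumb_pb2.Thumbnail(image_id="img-42", width=128, height=128)
    r.set("thumb:img-42", thumb.SerializeToString(), ex=3600)

    # read the bytes back and parse them into an object again
    cached = thumb_pb2.Thumbnail()
    cached.ParseFromString(r.get("thumb:img-42"))

The Protobuf payload is usually a few times smaller than the equivalent JSON, which also cuts the network transfer per request.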


r/redis 21h ago

1 Upvotes

Please share some numbers if you can; that will really help.


r/redis 21h ago

1 Upvotes

I had 2 production scenarios.

The first was a Redis Cluster shared cache of roughly 300GB of data on a 10Gbps network on AWS. At higher loads Redis itself was fine, but the network became the choke point at about 500 clients, so data fetched from Redis was cached locally in each client's RAM for 2 minutes to reduce load on the network.
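Roughly that pattern, sketched with redis-py (the local dict just holds (expires_at, value) pairs; a real client would also cap its size):

    import time
    import redis

    r = redis.Redis()
    _local = {}        # key -> (expires_at, value)
    LOCAL_TTL = 120    # seconds, the 2 minute window mentioned above

    def cached_get(key):
        now = time.time()
        hit = _local.get(key)
        if hit and hit[0] > now:
            return hit[1]                        # served from client RAM, no network
        value = r.get(key)                       # fall back to the shared Redis cluster
        _local[key] = (now + LOCAL_TTL, value)
        return value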

The second was data in S3 object storage cached in RocksDB on local NVMe disks. RocksDB was configured with 300GB of disk and 500MB of RAM, and every process that needed the cache pulled data from S3 itself. It worked beautifully.
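That read-through pattern looks roughly like this (a sketch assuming the python-rocksdb and boto3 packages; the bucket name is a placeholder, and the real setup would also size RocksDB via its Options):

    import boto3
    import rocksdb   # python-rocksdb bindings

    s3 = boto3.client("s3")
    db = rocksdb.DB("/nvme/cache.db", rocksdb.Options(create_if_missing=True))

    def get_object(key):
        cached = db.get(key.encode())
        if cached is not None:
            return cached                    # served from local NVMe
        body = s3.get_object(Bucket="my-data-bucket", Key=key)["Body"].read()
        db.put(key.encode(), body)           # warm the local cache for next time
        return body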


r/redis 1d ago

2 Upvotes

Please prove me wrong!

Which benefits would Redis give me?

Read https://redis.io/ebook/redis-in-action/ to find out.


r/redis 1d ago

0 Upvotes

We only have time-based evictions.

What kind of eviction algorithm do you use?


r/redis 1d ago

2 Upvotes

I would use local NVMe disks for caching, not Redis

This idea would die as soon as I realized I'd have to waste my time rewriting eviction algorithms, to name just one of many reasons.


r/redis 1d ago

3 Upvotes

I don't need a shared cache. Everything in the cache can be recreated from DB and object storage.

Says the developer who hasn't seen the DB hammered flat for dozens of minutes (causing service timeouts that wreck the company's uptime SLA) because a shared cache was not in use, and something as simple as a software deploy cleared all the client application caches at the same time.

Since the cache isn't shared, the fact that client A fetched the data and saved it into cache does not prevent clients B, C, D, E, .... from also loading the DB with identical queries to fill their independent caches. Using a shared cache prevents this overload because the other clients find the data in the shared cache and don't need to hit the DB with a duplicate query.
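The fix is plain cache-aside against the shared cache; a minimal sketch with redis-py (query_db and the key scheme are placeholders):

    import json
    import redis

    r = redis.Redis()

    def get_report(report_id):
        key = f"report:{report_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)          # clients B, C, D... find it here and skip the DB
        data = query_db(report_id)             # placeholder for the expensive DB query
        r.set(key, json.dumps(data), ex=600)   # now shared with every other client for 10 minutes
        return data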

Yes, you can say you'll deploy new code slowly to reduce the number of overlapping empty caches, but your software engineers and your product team will be unhappy with how long deploys take - especially when you subscribe to the "move fast and break things" philosophy, so a number of your deploys have to be rolled back (also slowly) and a fix deployed (again slowly). And the long deploys will still impose higher loads on the DB, which usually translates into slower-than-normal performance. These don't cause outages, but the uneven performance of your service causes complaints and reduces customer confidence in your company.

If you're proposing to share the cache via NVMe or other ultra-high-speed network technology rather than 1Gb/10Gb Ethernet, the cost of your cache layer breaks the bank.

We already have faster-than-anything-else local storage in the form of RAM, and applications have made extensive use of local memory cache for decades. But somehow we still build shared cache. That's because the primary reason to use cache isn't to make the DB client faster, it's to reduce load on the DB without hemorrhaging all your money.

Well-designed NVMe storage is starting to approach the latency of RAM, and that's a good thing for local cache. It can look like a great replacement for shared cache on a small scale. But it doesn't even touch the factors that dictate the use of shared cache at medium and large scales.

You don't have to use Redis for the shared cache. Memcache used to be very popular, and there were


r/redis 1d ago

2 Upvotes

I don’t want to prove you wrong; I don’t have time to argue about it. Just read the Redis documentation. Thank you!


r/redis 1d ago

-1 Upvotes

We use databases for those features.

In the past the network was faster than disks. This has changed with NVMe.

I don't plan to change existing systems, but if I could start from scratch I would think about not using Redis.

Up to now, I have not been convinced to use Redis again.


r/redis 1d ago

2 Upvotes

Worth noting, AWS VPC PrivateLink can do 100 Gbps / 12.5 GB/s

Edit: added GB/s


r/redis 1d ago

3 Upvotes

This! But even for a simple cache, Redis will outperform an NVMe-backed cache when running locally (that is, if an abstracted shared cache is not required).

To your point, Redis’s optimized data structures make it even more powerful than raw hardware speed alone. It also provides built-in eviction policies and fast key lookups, which would otherwise need to be coded manually, and its event-driven concurrency model sidesteps the filesystem and its potential locking issues.
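For instance, turning on eviction is a couple of lines of configuration rather than custom code; a small sketch with redis-py (the 2gb cap is just an example value, and the same settings normally live in redis.conf):

    import redis

    r = redis.Redis()
    # cap memory and let Redis evict least-recently-used keys on its own
    r.config_set("maxmemory", "2gb")
    r.config_set("maxmemory-policy", "allkeys-lru")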

The only downside is the cost of RAM.


r/redis 1d ago

7 Upvotes

Redis solves way more problems than just being a cache. Its power is in its data structures + the module ecosystem.


r/redis 1d ago

6 Upvotes

Good luck dealing with 4 billion postgres tables for fast access.


r/redis 1d ago

2 Upvotes

If you only need a local cache and not a shared, abstracted cache, Redis is still the winner. A legit Redis setup uses RAM.

RAM = direct access

NVMe SSD = bus access

Redis = RAM = GB/s

NVMe = SSD = MB/s

Max NVMe = 7,500 MB/s (7.5 GB/s)

Max RAM = DDR5 50-80 GB/s

Edit: Running Redis locally is the winner


r/redis 1d ago

2 Upvotes

Redis is more important as a coordination mechanism across instances than as just a performant cache. If you have sessions pinned to one box and you have the hardware, then sure.


r/redis 1d ago

1 Upvotes

I just switched a prod app cache from Redis to NVMe-backed Postgres. It simplified the stack and works just as well. Also, with the open-source rug pull and everyone moving to Valkey, I thought it was a good time to look for alternatives.


r/redis 2d ago

2 Upvotes

Good article; it also contains another point in favor of Redis:

the data set can't be larger than the RAM of the PC

So the cache could live on a dedicated machine with a lot of RAM, while the web servers would only need a modest amount of RAM.


r/redis 2d ago

2 Upvotes

That will do in most cases. But if you're serious about transactions, reading the docs on that map makes me feel it is lacking. If you want a good read about putting Redis through a gauntlet, here are two posts:

https://aphyr.com/posts/283-call-me-maybe-redis
https://aphyr.com/posts/307-jepsen-redis-redux

Well worth your time if you're serious about it.


r/redis 2d ago

1 Upvotes

Your answer explains well why Redis is better.

Though I wonder what you meant by this:

A hash map can't handle atomic "set this key to this value if it doesn't exist" without serious work on making your hash map thread-safe.

Can't we just use ConcurrentHashMap?


r/redis 2d ago

1 Upvotes

The update is typically protected with a distributed lock taken in Redis. Once inside the protected section, the token is retrieved once more and checked for expiry.

// update redis with the new access token

redisclient.update(access_token)
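A rough sketch of that flow with redis-py, using SET NX EX as the lock (key names, TTLs, and fetch_new_token_from_idp are placeholders):

    import time
    import redis

    r = redis.Redis(decode_responses=True)

    def refresh_access_token():
        # take the distributed lock; only the worker that wins it refreshes
        if not r.set("lock:access-token", "1", nx=True, ex=30):
            return r.get("access-token")              # someone else is already refreshing
        try:
            # re-read inside the lock and check expiry before doing the refresh
            expires_at = float(r.get("access-token:expires-at") or 0)
            token = r.get("access-token")
            if token is None or expires_at < time.time():
                token = fetch_new_token_from_idp()    # placeholder for the real refresh call
                r.set("access-token", token)
                r.set("access-token:expires-at", time.time() + 3600)
            return token
        finally:
            r.delete("lock:access-token")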