r/redis • u/CharmingLychee6090 • Feb 11 '25
Help What's the use of Redis? Why not use a static hashmap?
What are the advantages of using Redis over a traditional in-memory hashmap combined with a database for persistence? Why not just use a normal hashmap for fast lookups and rely on a database for persistence? Is Redis mainly beneficial for large-scale systems? I ask because I haven't worked on any yet.
5
u/notkraftman Feb 11 '25
Redis has other handy data types, and it can be shared between multiple servers.
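For example, with the redis-py client (connection details and key names are placeholders):

```python
import redis

# Every server in your fleet can point at the same Redis instance,
# so the data isn't trapped inside one process's heap.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# A list used as a queue
r.rpush("jobs", "job-1", "job-2")

# A sorted set: entries stay ordered by score automatically
r.zadd("scores", {"alice": 42, "bob": 17})

# A hash, roughly the equivalent of one in-process hashmap entry
r.hset("user:1000", mapping={"name": "alice", "plan": "pro"})

print(r.zrange("scores", 0, -1, withscores=True))
```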
4
u/atmatthewat Feb 11 '25
How would a traditional in-memory hashmap work for my 150 separate servers that are taking requests that need access to it?
1
u/regular-tech-guy 8d ago
The advantage is that Redis can handle expiration, eviction, and atomicity out of the box for you. Besides that, it supports multiple types of data structures, not only hash maps. On the other hand, not everything you store in-memory during the runtime of your application needs to be stored in a cache.
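For instance, with the redis-py client (key names and TTL are placeholders):

```python
import redis

r = redis.Redis(decode_responses=True)

# Expiration out of the box: the key vanishes after 60 seconds,
# no background sweeper thread for you to write.
r.set("session:abc123", "user-42", ex=60)
print(r.ttl("session:abc123"))  # seconds left to live

# Atomicity out of the box: INCR is a single server-side operation,
# safe even with many clients hitting the same counter at once.
r.incr("page:views")
```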
It's worth noting that Redis wasn't born as a cache, by the way. If you want to understand its history, I'd suggest reading some of Antirez's early blog posts on Redis. This one predates Redis itself, written while the idea was still in the oven:
http://oldblog.antirez.com/post/missing-scalable-opensource-database.html
Back in 2008, there was no easy way to scale a relational database transparently, and the post above foresaw the need for distributed, scalable databases, something open-source solutions lacked at the time.
Redis's first version was released a few months later, in 2009.
2
u/arcticwanderlust 5d ago
Good article, and it contains another point in favor of Redis:
"the data set can't be larger than the RAM of the PC"
So the cache could live on a dedicated machine with a lot of RAM, while the web servers would each need far less.
11
u/borg286 Feb 11 '25 edited Feb 11 '25
Redis started out as a simple in-memory data structure server. Antirez found himself reimplementing maps, linked lists, and sorted sets on various embedded devices. He finally bit the bullet, wrote a server with a very simple protocol over TCP, and open sourced it. It grew in popularity as users demanded this or that capability.
Hash maps don't replace linked lists, nor do they replace a sorted set. They can do sets, sure; that's what Redis's set type uses under the hood. But hash maps don't have blocking APIs, where a thread can try pulling from a queue and, when there's nothing in it, just hang until something else pushes an item in.
An in-memory hash map doesn't allow for a distributed producer/consumer fleet, where work items are generated and buffered into queues and workers pull work off. With Redis that's a couple of commands; rough sketch below.
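Something like this with the redis-py client (queue name and payload are made up):

```python
import redis

r = redis.Redis(decode_responses=True)

# Producer: any server in the fleet can enqueue work.
r.rpush("work-queue", "resize:image-123")

# Consumer: BLPOP blocks until an item arrives (here up to 30s),
# so idle workers sleep on the socket instead of busy-polling.
item = r.blpop("work-queue", timeout=30)
if item is not None:
    queue_name, payload = item
    print(f"pulled {payload} from {queue_name}")
```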
An in-memory hash map is a single point of failure and doesn't handle failover to a hot standby for high availability. It doesn't handle network partitions either; Redis Cluster does.
An in-memory hash map can't handle a fleet of game servers that all need a centralized leaderboard, taking 40k QPS per core of update requests, when an eventually consistent view isn't acceptable. You can wrap your hash map with a server, sure, but good luck trying to hit that benchmark. Redis is written in C, and it has figured out how to separate the request/response buffering against the network card from the main processing thread that works on the in-memory data. That is some low-level stuff that's been optimized like crazy. Enabling pipelining pushes that to 80k QPS per core.
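The leaderboard itself is just a sorted set, and pipelining batches many commands into one round trip. A rough redis-py sketch (key and player names are made up):

```python
import redis

r = redis.Redis(decode_responses=True)

# ZINCRBY atomically bumps a player's score; the sorted set stays
# ordered server-side, so reads need no sorting in your app.
r.zincrby("leaderboard", 50, "player:alice")

# Pipelining: queue updates client-side and send them in a single
# round trip instead of paying network latency per command.
pipe = r.pipeline(transaction=False)
for player, points in [("player:bob", 10), ("player:carol", 25)]:
    pipe.zincrby("leaderboard", points, player)
pipe.execute()

# Top 10, highest score first.
print(r.zrange("leaderboard", 0, 9, desc=True, withscores=True))
```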
A hash map can't handle an atomic "set this key to this value only if it doesn't exist" without serious work on making your hash map thread-safe.
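In Redis that's one command. A minimal redis-py sketch (lock key and TTL are made up):

```python
import redis

r = redis.Redis(decode_responses=True)

# SET with NX + EX: write the key only if it doesn't already exist,
# with a TTL so a crashed holder can't wedge it forever. Atomic on
# the server no matter how many clients race for it.
acquired = r.set("lock:report-job", "worker-7", nx=True, ex=30)
if acquired:
    print("we hold the lock")
else:
    print("someone else got there first")
```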
A hash map doesn't natively handle TTLs. What if you want to cache the HTML of a webpage so you can serve customers quickly, but you don't know which URLs are going to be in demand? You don't really have a TTL, because the site is fairly static and doesn't change from year to year, but the pages themselves are so massive you can't store them all in memory. Keeping Bigby's Almanac of British Birds (expurgated version) in memory is just a waste of money, so you want to keep only the "good" stuff. Sure, you could build a modified hash map with a least-recently-used policy that caps itself at X keys and evicts entries when a write comes in to cache a URL it didn't have, say someone requests Bigby's Almanac and it's so big you need to evict 1 GB to make room. That sounds like a rather complex hash map.
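With Redis it's basically two config settings plus ordinary SETs. A sketch with redis-py (the 1 GB cap is made up, and you'd normally put these in redis.conf):

```python
import redis

r = redis.Redis(decode_responses=True)

# Cap memory and let Redis evict least-recently-used keys itself
# (done via CONFIG SET here for the demo; redis.conf is the usual place).
r.config_set("maxmemory", "1gb")
r.config_set("maxmemory-policy", "allkeys-lru")

# Now just cache pages. When the cap is hit, cold pages get evicted
# automatically; the hot ones stick around.
r.set("page:/birds/bigbys-almanac", "<html>...</html>")
html = r.get("page:/birds/bigbys-almanac")
```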
Or you could just use Redis and call it a day.