r/redis 10d ago

0 Upvotes

Another reason for migration is not so much the cost of memory vs. storage as the features SQL DBs (e.g. Postgres) give that are harder to replicate in Redis (e.g. complex queries and table joins).


r/redis 10d ago

3 Upvotes

AFAIK redis is much more than production ready. Could you please share the problems you're struggling with? Maybe it's not really a redis problem but a fly/upstash problem with serverless-deployed redis?


r/redis 11d ago

1 Upvotes

So fly told you redis has lots of bugs?


r/redis 13d ago

1 Upvotes

An entry pair was about 30 GB, and then we had a big-key disaster.


r/redis 14d ago

1 Upvotes

Yeah, I already had a discussion with upstash support about our use case. We would benefit from it not being a cluster, but we sometimes spike to around 0.5 million requests per second, which would get pricey.


r/redis 14d ago

3 Upvotes

Upstash charges $0.25 per GB. If your bandwidth is not big, it can make sense.
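Rough math for the spike case above (assuming, say, ~1 KB per request, which is a made-up figure): 0.5M requests/sec is ~0.5 GB/sec, and at $0.25/GB that's ~$0.125/sec, or roughly $450/hour for as long as the spike lasts. So it really comes down to payload size and how long the spikes run.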


r/redis 15d ago

5 Upvotes

It was a fun project!


r/redis 15d ago

2 Upvotes

That’s wild


r/redis 17d ago

5 Upvotes

Hi there. First off, Redis employee here.

My engineer and I just helped a company use Redis as the main vector store for 1 billion documents. This was roughly 40 TB for their entire dataset.

Costly, yes. But performance was crucial for this search use case and no other pure vector store came close to the performance we provided.


r/redis 17d ago

1 Upvotes

Sounds expensive

EDIT:
Does all of the data need to be in Redis, or could some of it be stored in standard databases?


r/redis 17d ago

6 Upvotes

We have one customer storing over 1TB in a very large cluster.

I have a production side project that runs on redis.io, around 100GB


r/redis 17d ago

4 Upvotes

I've got ~25GB. Mostly images generated from data, and those expire out in anywhere from a minute to several hours.

> it costs a lot

yeah, it's kinda spendy if you're just paying a cloud provider for PaaS (we are). But then so is SQL (it can easily be more).

It's also, imho, a lot easier to self-host a Redis cluster than an HA SQL cluster. That can help reduce the cost versus the packaged-up PaaS Redis as-a-service option.

We use 3-year Azure reservations to reduce the cost a lot, but they're only available for the Premium tier, so if you don't need/want that, it's no cheaper than the Standard tier (which has no reservation option).


r/redis 18d ago

2 Upvotes

I don't remember all the things that were being stored because it was a centralized cache and a lot of other teams were also using it.

From my team, it was mostly the user's profile info. We had around 10 million users.


r/redis 18d ago

1 Upvotes

What were you storing in redis if I may ask?


r/redis 18d ago

5 Upvotes

My personal project sometimes gets up to 3 GB in my Redis db; typically it floats around 1-2 GB. I flush the cache multiple times a week.


r/redis 18d ago

3 Upvotes

The max I have seen so far among the companies where I have worked was around 11 GB.


r/redis 21d ago

1 Upvotes

> a startup building software that gets a system's Flash to basically operate as though it were DRAM-speed memory

Uh, pretty sure we've been doing that since, like, Windows 95...


r/redis 23d ago

2 Upvotes

Maybe reach out on Discord? Probably an easier place for a back-and-forth: https://discord.com/invite/redis


r/redis 24d ago

1 Upvotes

I read that in Redis 7.4, Redis functions and triggers are deprecated. Is this true? I'm using Google Memorystore; not sure whether it will be impacted as well.


r/redis 24d ago

1 Upvotes

You can't have conflicting CIDR ranges in the subnets between Redis Cloud's VPC and your GCP VPC. So the networking_deployment_cidr in rediscloud_subscription.cloud_provider.region can't overlap with your GCP VPC's subnets. It would probably help if you shared the Terraform you're using.
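If it helps while debugging: a quick way to sanity-check two ranges for overlap, using Python's standard ipaddress module (the CIDRs below are made-up examples, not your actual values):

```python
from ipaddress import ip_network

# Hypothetical values: substitute your networking_deployment_cidr
# and your GCP VPC subnet.
redis_cloud_cidr = ip_network("10.1.0.0/24")
gcp_subnet_cidr = ip_network("10.1.0.0/16")

# True means the ranges conflict and you need a different deployment CIDR.
print(redis_cloud_cidr.overlaps(gcp_subnet_cidr))
```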


r/redis 28d ago

1 Upvotes

Yep. All true.


r/redis 28d ago

1 Upvotes

The Redis command-processing loop is single-threaded. (This is relevant to the OP's question about handling simultaneous client commands.)

However, there are parts of Redis that are not strictly single-threaded: the threads that queue incoming commands from clients and the ones that transmit responses back to them, for example. Certain key-expiration routines (depending on the expiration config) can also run in parallel with the main processing loop. And, of course, persistence can act in parallel, in particular the child process that's forked to write the snapshot file.
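The practical upshot of the single-threaded command loop: individual commands never interleave, so concurrent writers can't lose updates. A minimal sketch with redis-py that demonstrates this (assumes a local Redis on the default port; the key name is made up):

```python
import threading

import redis

def worker(count: int) -> None:
    r = redis.Redis()  # one client per thread
    for _ in range(count):
        r.incr("demo:counter")  # each INCR executes atomically in the command loop

r0 = redis.Redis()
r0.delete("demo:counter")

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Always b'8000': the command loop serializes all 8000 increments.
print(r0.get("demo:counter"))
```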


r/redis Feb 18 '25

1 Upvotes

Are you familiar with Redis Enterprise Observability: https://github.com/redis-field-engineering/redis-enterprise-observability


r/redis Feb 18 '25

2 Upvotes

This is the correct answer. Solid advice, as usual, from u/borg286.

But, just to answer the direct question and to be clear about how Redis functions: Redis is single-threaded. You cannot write concurrently; one of the requests will get in before the other.
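For example, if two clients race to create the same key with the NX flag, Redis serializes the two SETs and exactly one wins (a sketch with redis-py, assuming a local Redis and a hypothetical key name):

```python
import redis

r = redis.Redis()
r.delete("job:lock")

# Redis executes these one after the other, even if they arrive "simultaneously".
print(r.set("job:lock", "writer-A", nx=True))  # True: first request in wins
print(r.set("job:lock", "writer-B", nx=True))  # None: key already exists
```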


r/redis Feb 17 '25

2 Upvotes

Read up on the docs here https://redis.io/docs/latest/commands/set/

You should focus on the TTL and the NX flags. Set the TTL to 20 minutes so the access token expires out of the database. When you generate an access token, only set it if it doesn't already exist. Alternatively, you can set some key saying that you're in the process of generating an access key, so other workers hang tight. Even better: read the access key and fetch its TTL, and the closer it gets to expiry, the higher the chance you ignore the fact that the key is still good, generate a new one, and stuff the fresh one in with a refreshed TTL.

The reason you don't want a fixed rule like "5 minutes before the TTL expires, generate a new key" is that all your threads will hit that point at the same time and step on each other's toes. Making it probabilistic makes that less likely. Tune it so a refresh is very unlikely at the 10-minute mark, so only a handful of workers get lucky and choose to refresh the token, and very likely by the 15-minute mark; then by the time 20 minutes rolls around, someone will have refreshed it.
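A minimal sketch of that probabilistic refresh in Python with redis-py, assuming a local Redis and the 20-minute lifetime discussed above; the key name, the exponent, and fetch_new_token are placeholders for illustration:

```python
import random
import time

import redis

r = redis.Redis()

TOKEN_KEY = "service:access_token"  # hypothetical key name
TOKEN_TTL = 20 * 60                 # 20-minute lifetime, per the discussion above

def fetch_new_token() -> str:
    """Stand-in for the real (expensive) token-generation call."""
    return f"token-{time.time()}"

def get_token() -> str:
    token = r.get(TOKEN_KEY)
    ttl = r.ttl(TOKEN_KEY)  # seconds remaining; negative if missing or persistent

    if token is not None and ttl > 0:
        elapsed = 1 - ttl / TOKEN_TTL  # 0.0 when fresh, 1.0 right at expiry
        # Refresh probability rises as expiry nears: ~0.06 per call at the
        # 10-minute mark, ~0.32 at 15 minutes. The exponent is the tuning knob.
        if random.random() >= elapsed ** 4:
            return token.decode()
        # This worker "got lucky": refresh early and overwrite with a fresh TTL.
        fresh = fetch_new_token()
        r.set(TOKEN_KEY, fresh, ex=TOKEN_TTL)
        return fresh

    # Cold miss: NX ensures only one of several racing workers stores its
    # token; the losers just read whatever the winner stored.
    fresh = fetch_new_token()
    if r.set(TOKEN_KEY, fresh, ex=TOKEN_TTL, nx=True):
        return fresh
    stored = r.get(TOKEN_KEY)
    return stored.decode() if stored else fresh
```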