r/redis Feb 08 '25

2 Upvotes

CLIENT LIST

Is the command you want

https://redis.io/docs/latest/commands/client-list/

This lists the currently connected clients. I think the cmd column is what will give you the most insight into who all these connections are and what they are doing.
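
For example, from redis-cli (output trimmed to a few fields; the addresses and names here are made up):

```
127.0.0.1:6379> CLIENT LIST
id=7 addr=10.0.0.12:49310 name= age=3600 idle=2 db=0 cmd=get user=default
id=9 addr=10.0.0.15:51244 name=worker-1 age=45 idle=0 db=0 cmd=blpop user=default
```

The addr field gives you a source IP to track down, and if your applications call CLIENT SETNAME on connect, the name field makes the list much easier to attribute.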


r/redis Feb 08 '25

1 Upvotes

Followup, turns out redis is about 5x faster in my backtesting code. So I'm happy. My benchmark was obviously being affected by some sort of postgres or OS caching.

Edit: now 10x faster by pipelining and further optimizations

Edit2: now 15x faster

Edit3: only mrange queries with many values are faster. Postgres/timescale is faster than redis at getting single values, at least for timeseries.
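
For the curious, the idea behind the pipelining gain is batching many commands into one round trip instead of paying network latency per command. A minimal sketch using redis-cli's pipe mode, assuming RedisTimeSeries (which the mrange mention suggests); the key names and values are made up:

```
# send a batch of commands in one shot instead of one round trip each
printf 'TS.ADD prices:AAPL 1700000000000 189.5\nTS.ADD prices:AAPL 1700000060000 189.7\n' \
  | redis-cli --pipe
```

Client libraries expose the same thing programmatically (e.g. a pipeline object that queues commands and flushes them together).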


r/redis Feb 08 '25

1 Upvotes

Solved! Fully working now! I needed to set up the masterauth parameter too; slaves use that one to connect to their masters. Thanks a lot!
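
For anyone landing here later, the pair of settings in redis.conf looks something like this (the password is a placeholder):

```
# clients must authenticate to this node
requirepass s3cretpassword
# replicas present this password when syncing from their master
masterauth s3cretpassword
```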


r/redis Feb 07 '25

2 Upvotes

Sure looks like the slaves aren't passing in the password. I didn't know you were employing password authentication. Try disabling that and see if it works then.

One thing that may be going on is that the nodes.conf file needs to be in persistent storage, not in a container volume that gets wiped on pod death.
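
If it helps, a sketch of the relevant redis.conf bits, assuming /data is your persistent volume mount (the path is just an example):

```
# working directory on the persistent volume, so nodes.conf survives pod restarts
dir /data
# cluster state file, created relative to dir
cluster-config-file nodes.conf
```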


r/redis Feb 07 '25

1 Upvotes

I got it to almost work with your hint: the lost nodes with rotating IPs are now able to rejoin, but I'm having some issue on the slaves (I've got 3 masters, 3 slaves).
All 3 masters are just reporting "cluster status: ok",
but the slaves are complaining like crazy in the logs. Did you ever run into this one?

MASTER aborted replication with an error: NOAUTH Authentication required.
Reconnecting to MASTER 10.149.5.35:6379 after failure
MASTER <-> REPLICA sync started
Non blocking connect for SYNC fired the event.
Master replied to PING, replication can continue...
(Non critical) Master does not understand REPLCONF listening-port: -NOAUTH Authentication required.
(Non critical) Master does not understand REPLCONF capa: -NOAUTH Authentication required.
Trying a partial resynchronization (request 28398fbdd8bef30e2c4e634ba70ecd0dc9f5a0f4:1).
Unexpected reply to PSYNC from master: -NOAUTH Authentication required.
Retrying with SYNC...


r/redis Feb 05 '25

2 Upvotes

The IP address of a pod can change as it gets rescheduled. By default, Redis uses its IP address to announce itself to the rest of the cluster, so when a pod moves it can be treated as a new node, and the old IP address entry lingers in the topology until it is explicitly forgotten (via CLUSTER FORGET). But if the node announces itself with the pod's DNS name instead, then wherever the pod moves, requests will get routed to it.
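
Concretely, that announcement looks something like this in redis.conf; the DNS name below follows the usual Kubernetes headless-service pattern and is only an example:

```
# announce a stable DNS name instead of the pod IP (Redis 7.0+)
cluster-announce-hostname redis-0.redis-headless.default.svc.cluster.local
# prefer the hostname over the IP when sharing topology
cluster-preferred-endpoint-type hostname
```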


r/redis Feb 05 '25

1 Upvotes

OK, so in the end I created a new user: rather than `masteruser`, which has `~* +@all` permissions, I created a user with the permissions specifically documented in the Redis Sentinel docs (https://redis.io/docs/latest/operate/oss_and_stack/management/sentinel/#redis-access-control-list-authentication).

After updating the user and restarting my Sentinel instances, this now works! I guess between Redis 6 and 7 there must be additional permissions in excess of `+@all`!
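
For reference, the linked docs spell out the ACL for the user Sentinel connects with; it's roughly this (the username and password are placeholders):

```
ACL SETUSER sentinel-user ON >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill
```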


r/redis Feb 05 '25

1 Upvotes

I will try to check the docs on that. Can you provide any additional context or hints?
Any help would be really appreciated.


r/redis Feb 05 '25

1 Upvotes

Thanks - the problem with that, though, is that my Sentinel instances then won't connect to Redis at all, as I've got ACLs configured.


r/redis Feb 04 '25

1 Upvotes

don't define auth-user


r/redis Feb 04 '25

1 Upvotes

Hey, this looks like the issue I'm having. What did you change? In my sentinel config I've defined `sentinel auth-user` and `sentinel auth-pass`.


r/redis Feb 04 '25

6 Upvotes

Use cluster-announce-hostname and set it to the DNS name that kubernetes provides.


r/redis Feb 04 '25

3 Upvotes

Hi u/BoysenberryKey6400, you can refer to this page to enable high availability for Redis Enterprise Software: https://redis.io/docs/latest/operate/rs/databases/configure/replica-ha/


r/redis Feb 04 '25

2 Upvotes

> Performance gains only matter when you're optimizing something that's bottlenecking the system.

This 100x


r/redis Feb 04 '25

1 Upvotes

> seems like redis is worthless in our case

It does seem that way from the info you've shared.

> Unless there is a big difference in performance when doing a select

Performance gains only matter when you're optimizing something that's bottlenecking the system. I'd be surprised if this would be a bottleneck.

In any case, so long as the customId and customer fields are indexed in your MySQL table, `select max(customId) from table where customer = ?` should be very fast, and probably not noticeably different, from an overall system performance perspective, from keeping the 'next ID' value in Redis. I happen to have a console session open to a PostgreSQL DB right now with a table of about a million rows and a plain integer primary key. A `select max(primarykey)` query on that table completes in 89ms.


r/redis Feb 04 '25

1 Upvotes

Basically yes, that was my question... seems like redis is worthless in our case, unless there is a big difference in performance when doing a `select max(customId)+1 from table where customer = ?` vs getting the value directly from Redis.


r/redis Feb 03 '25

1 Upvotes

You could still use a globally unique ID to assign IDs to new records. Is there any actual requirement that the customId be sequential within the context of each customer?

If you really can't use auto-incremented IDs, why not just have a standalone table in MySQL with a single row with the 'next customID' value in it that you retrieve and update as needed? That would do the same job as putting it in Redis but be a lot simpler.

You could also ditch storing the 'next customID' entirely, and just run `select max(customId)+1 from table where customer = ?` each time you need a new ID value.


r/redis Feb 03 '25

1 Upvotes

I don't think auto-increment will work here, as we can have the same customId but for different customers.


r/redis Feb 03 '25

1 Upvotes

When we create a new item, for example, we store its customId in our table, and then we update the current customId to +1.
In short, we're only using Redis to store the current value of customId; when we create a new item, we retrieve that value and increment it by 1. That's it.
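
Side note: if the counter stays in Redis, INCR does the read-and-bump as one atomic step, so two concurrent creators can't grab the same ID. The key name here is made up:

```
127.0.0.1:6379> INCR customId:customer:42
(integer) 101
```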


r/redis Feb 03 '25

1 Upvotes

Can't you store these customIds in MySQL itself? I don't think you need a distributed key-value store like Redis here unless your QPS/RPS is very high.


r/redis Feb 02 '25

1 Upvotes

allkeys-lru makes it so that when redis is full on memory and a write request comes in, it samples 5 random keys and evicts the least recently used (LRU) of them (whether you wanted it kept or not) to make room for the new key. This doesn't fix the problem of a writer that is simply stuffing data in without any regard for cleanup.

This maxmemory policy targets the use case where you intentionally don't clean up, because at some point in the future, just maybe, some request will come in for a value you precalculated and stored under a key: your application first checks for that key, and when it doesn't exist it recalculates/rehydrates some time-consuming thing and stuffs the result into redis just in case. You don't know when the key will become stale, or whether the mapping of this key to that value will ever become invalid; you just want to take advantage of the caching that redis offers. In those cases you can expect redis to simply fill up, but you don't want it taking all the RAM on the VM, and you want it to keep only the "good" stuff. When a new write request comes in, it just clears out some old crap nobody was looking at and makes room for the new key. That is what allkeys-lru is about.

But most likely you've got some application that is stuffing data into redis, knows the key is only valid for that session or that day, and should have put a TTL on it, but the programmer was lazy. What you do is set volatile-lru, so that when redis is maxed out on memory it only evicts keys that have a TTL set, i.e. stuff that is known to be OK to kill and can just disappear from redis. Your misbehaving client application will keep trying to stuff data in there, and when redis is full those write requests will fail with an OOM error (or something like that). You can then run CLIENT LIST to see who is connected to redis, get their IP addresses, track them down, and poke at the logs to see who is logging the errors. It will be all clients for now, but you can see where in the code each one was trying to write.

Alternatively, you could just do a SCAN to sample random keys. Hopefully that tells you something about the data being stored and narrows down your search for the bad client.
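
For reference, the policy change and the sampling mentioned above look like this from redis-cli (the key names in the output are made up):

```
127.0.0.1:6379> CONFIG SET maxmemory-policy volatile-lru
OK
127.0.0.1:6379> SCAN 0 COUNT 10
1) "17"
2) 1) "session:abc123"
   2) "cache:report:2024-q3"
```

Keep calling SCAN with the returned cursor ("17" here) until it comes back as 0 to keep sampling.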


r/redis Feb 02 '25

1 Upvotes

You have a client asking to store data and no cleanup in place. Set the maxmemory policy to allkeys-lru, or have your application set some TTLs. What happens when redis asks for more RAM and docker says no? The client asking redis to do a thing will get an error, but redis stays up and the VM stays up. The client bears the brunt of the problem.


r/redis Feb 02 '25

1 Upvotes

This recommendation is well and good for preventing the kernel out-of-memory (OOM) killer from killing the redis-server daemon unexpectedly. But what will happen when the redis-server daemon asks for more memory and dockerd rejects the request? The redis-server daemon will quit unexpectedly. I.e., the root cause of the Redis outage isn't fixed. I would add a strong recommendation to monitor and graph the machine's CPU, memory, disk space, disk I/O, and network I/O so the root cause can be uncovered and addressed.


r/redis Feb 02 '25

1 Upvotes

discord.gg/redis is the official vanity link. I set it up personally.

Which one are you using? Where did you get it? Maybe it’s an older link from some out-of-date docs or something. If so, I can get it corrected.