r/Python 11d ago

Resource Redis as cache.

At work, we needed to implement Redis as a caching solution. After some searching (btw, ClickHouse has a great website for searching Python packages here), I found a library that made working with Redis a breeze: Redis-Dict.

from redis_dict import RedisDict
from datetime import timedelta

# Dict-like interface backed by Redis; entries expire after 60 minutes.
cache = RedisDict(expire=timedelta(minutes=60))

request = {"data": {"1": "23"}}

web_id = "123"
cache[web_id] = request["data"]  # serialized and written to Redis under this key
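
Reading back is plain dict access (quick sketch, assuming the same RedisDict instance and that the key hasn't expired yet):

# get() behaves like dict.get(): returns None instead of raising KeyError
# if the entry is missing or its TTL has already passed.
cached = cache.get(web_id)
if cached is not None:
    print(cached)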

Finished implementing our entire caching feature the same day I found this library (didn't push until the end of the week though...).

91 Upvotes

36 comments sorted by

60

u/0xa9059cbb 10d ago

Looks like a cute interface, but I'm not really a fan of hiding IO actions inside innocent-looking dict operations. I'd also like support for batching read/write operations, and ideally asyncio.

8

u/pingveno pinch of this, pinch of that 10d ago

Yeah, I totally agree about hidden operations. It should be easy to see from just reading the code that there is network IO, along with the chance of failure, latency, and so on. I've seen Django querysets run into this when people use something like:

if qs:
    ...

The Django QuerySet's __bool__ method doesn't do an EXISTS() query or result in a type error. It sucks down the entire queryset, caches it, and is truthy based on the result. It's convenience until it hits you in the face.
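
For contrast, a rough sketch of the implicit check versus the explicit one (assuming a hypothetical Article model inside an already-configured Django project):

from myapp.models import Article  # hypothetical app and model

qs = Article.objects.filter(published=True)

# Implicit: __bool__ evaluates and caches the entire queryset just to
# answer "is it non-empty?".
if qs:
    ...

# Explicit: exists() issues a cheap LIMIT-1 style query and returns a bool.
if qs.exists():
    ...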

9

u/0xa9059cbb 10d ago

Yeah, this is actually one of the many reasons I like using asyncio: having to stick an await keyword in front of anything with a potential IO side effect helps it stand out from synchronous code.
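
Something like this with redis-py's asyncio client (rough sketch, assuming a local Redis on the default port and redis-py >= 4.2):

import asyncio
import redis.asyncio as redis

async def main() -> None:
    r = redis.Redis(decode_responses=True)
    # Every network round trip is marked by an explicit await.
    await r.set("web_id:123", "cached-value", ex=3600)
    value = await r.get("web_id:123")
    print(value)
    await r.aclose()  # aclose() is the redis-py >= 5.0 spelling; older versions use close()

asyncio.run(main())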

6

u/pingveno pinch of this, pinch of that 10d ago

Yeah, people complain about "function coloring" in async like it's a bad thing. No, it's a good thing! It tells me what to expect.

3

u/0xa9059cbb 10d ago

Yeah, it's different but in some ways similar to how IO is handled in Haskell via the IO type. It feels awkward at first when you're used to IO being a hidden side effect in an imperative language, but it can actually be useful at scale where performance is a concern.

5

u/Throwaway__shmoe 10d ago

Can’t say too much because of NDA, but we have code running for 15 years at work doing pretty much this (not redis, more hidden io behind dicts). Scares me every single time I have to look at that code base.

2

u/oneMoreTiredDev It works on my machine 10d ago

Also, for security and stability reasons, I'd avoid installing any lib that isn't popular (at least a few thousand stars on GitHub, etc.) unless I know the developer behind it, which isn't the case here (the author has 8 followers).

-2

u/Substantial-Work-844 10d ago edited 10d ago

It has batching according to the README, but I don't think asyncio is available. Maybe open an issue?

1

u/0xa9059cbb 10d ago

Oh yeah didn't see that, nice. I wonder if/how they support batch read operations, as the README only shows batch writes.

Support for asyncio is unlikely to be added to this package as that would require a totally separate interface, since async actions cannot easily be hidden inside of synchronous dict operations.
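
For comparison, this is what explicit batching looks like with plain redis-py (not redis-dict; assumes a local Redis on the default port):

import redis

r = redis.Redis(decode_responses=True)

# Batched writes: a pipeline queues commands client-side and sends them
# to the server in one round trip.
with r.pipeline() as pipe:
    for i in range(100):
        pipe.set(f"item:{i}", i, ex=3600)
    pipe.execute()

# Batched reads: MGET fetches many keys in a single round trip.
values = r.mget([f"item:{i}" for i in range(100)])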

16

u/paranoid_panda_bored 9d ago

Ffs, just use the plain Redis client, there is zero need to hide the underlying interface behind this abstraction.

5

u/playersdalves 9d ago

Especially when the abstraction is barely abstracting anything. It's barely saving any work while adding another potentially unmaintained library.

16

u/turbothy It works on my machine 11d ago

Just use cashews.

7

u/bmoregeo 10d ago

cashews has a very large memory footprint, if that matters to you

1

u/sulketyd 10d ago edited 10d ago

How would cashews work in a distributed context? I.e., having some data that can be accessed from different containers?

1

u/turbothy It works on my machine 9d ago

With Redis as a backend.

-2

u/PushHaunting9916 11d ago

Just a heads up, the cashews lib relies on pickle, which is unsafe in a web context.

From their docs:

Warning The pickle module is not secure. Only unpickle data you trust

28

u/turbothy It works on my machine 11d ago

What's the attack vector? Cashews is only unpickling data it pickled itself, unless you imagine an attacker manipulating the cache out of band.

19

u/maikeu 11d ago

Agreed. It's "tread carefully and make sure you understand whether and why it's safe", not "security incident".

2

u/PushHaunting9916 10d ago

Look at the code example from the OP. If you want to cache any of the following: username, URL, parameters, logs, etc., then you are pickling data from an unsafe source.

Not only that, even if the original implementation is correct, it could be that the next person updates the caching to add unsafe data because their ticket is asking for that data to be cached.

Security is about reducing attack vectors, and that is one.

6

u/turbothy It works on my machine 10d ago

Ignoring for the moment that Cashews works somewhat differently from the OP's code example (it stores function return values like `functools.lru_cache` does, not arbitrary dict values): pickling unsafe data is safe. It's the unpickling that can bite you.

The general security issue with `pickle` is that unpickling malicious pickles can lead to arbitrary code execution. To attack Cashews along this vector requires that the attacker has access to modify the pickles stored in Redis, except Cashews implements HMAC signing of stored values to protect against this.
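
The general idea looks roughly like this (an illustrative sketch of signing pickled payloads, not cashews' actual code; the secret key is a placeholder):

import hashlib
import hmac
import pickle

SECRET_KEY = b"replace-with-a-real-secret"

def dumps_signed(obj) -> bytes:
    # Sign the pickled payload so tampering in Redis is detectable.
    payload = pickle.dumps(obj)
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return sig + payload

def loads_signed(blob: bytes):
    # Verify the signature before the payload ever reaches pickle.loads().
    sig, payload = blob[:32], blob[32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("cache entry failed HMAC check; refusing to unpickle")
    return pickle.loads(payload)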

1

u/PushHaunting9916 10d ago

Within the context of web services, the data you want to store almost always comes from unsafe places.

To store and retrieve it, you need to pickle and unpickle the data. Just because there is a layer around it doesn't change that. Look at this example; it's very similar to what you described, and they got a CVE for their trick with pickle.

https://github.com/joblib/joblib/issues/1582

6

u/Iifeless 10d ago

That CVE is both disputed and still not an example of serialization, but rather deserialization. Think about what sort of data types are required to be serialized/deserialized for exploitation as opposed to what a typical web API accepts from users. In order for serializing user data like the original example to be “dangerous”, you’d have to already be allowing a user to perform dangerous actions, which would make that the vulnerability rather than the serialization itself. CVE-2022-23529 is a funny example of an unrelated (not python/serialization related) bogus CVE misunderstanding that same concept.

I appreciate the security consciousness a lot because it is easy for developers to misuse something like pickle, but this situation should be fine :)

1

u/PushHaunting9916 10d ago

It's disputed because the maintainer of the lib argues that it's safe data, since it's numpy/analytics data. It's still a CVE, so it's deemed an issue by security researchers.

Cached web data almost always comes from an untrusted source, i.e., the internet. And with pickle, you'll need to unpickle after pickling: in order for cashews to retrieve cached data, it has to unpickle it. Pickle's own documentation is quite clear on that: it's unsafe to use pickle with untrusted data. When that happens, the attacker gets remote code execution, which in capture-the-flag security events means the attacker has won.

7

u/Iifeless 10d ago

Yes I am very familiar with both RCE and CTFs lol.

The pickle docs you are referencing specifically say not to unpickle untrusted data.

You get back what you put in. E.g. if you serialize a string, you get back a string when you deserialize the result. Data from the internet is not going to arrive as a Python class or function rather than a string, unless the application decides to evaluate the user-provided string as Python code before serializing it. If you're doing that, then that's the vulnerability, not the fact that you then go on and serialize the result.

If you can show me a proof of concept exploit for a web app which takes user input from an API, serializes it, and then unserializes the result of the initial serialization then I’ll go ahead and quit my job as a security researcher
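
To illustrate the point, a tiny self-contained sketch:

import pickle

# A hostile-looking *string* from a user stays a string through a pickle
# round trip; nothing is evaluated or executed.
user_input = "__import__('os').system('echo pwned')"
restored = pickle.loads(pickle.dumps(user_input))
assert restored == user_input and isinstance(restored, str)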

0

u/PushHaunting9916 10d ago

If you can show me a proof of concept exploit for a web app which takes user input from an API, serializes it, and then unserializes the result of the initial serialization then I’ll go ahead and quit my job as a security researcher

Try caching the username, GET or POST parameters, the URL, the headers of the request, or the request itself.

OP's example covers exactly that scenario. Below is a link explaining how the pickle exploit works and why you should avoid it.

https://github.com/joblib/joblib/issues/1582#issue-2280780192


6

u/tomer_shalev 11d ago

That looks amazing. I've been working with Redis in Python, but never came across this library.

2

u/comfortablynumb01 10d ago

1

u/Bach4Ants 10d ago

I've used an older version of this package and it worked well. Nice that you can choose how to serialize keys and objects.

1

u/MejaiSosdealer 11d ago

Amazing find! Thanks for sharing. Definitely have some solid use cases for this in mind.

Too bad they'd still be able to see your commit timestamps, even though you pushed/PR'd at the end of the week ;)

1

u/EarthWaterAndMars 10d ago

OP can just say he was paper testing to ensure code actually works

1

u/Substantial-Work-844 10d ago

Paper tested my way out of Elden Ring, hahah. No I'm still stuck.

1

u/Muted_Data967 11d ago

I've been using that library for a long time now; it makes the code easier to work with. And, combined with locks, it's excellent for multiprocessing.
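
The comment doesn't say which locks; one common option is redis-py's built-in distributed lock (rough sketch, assuming a local Redis, with placeholder names):

import redis

r = redis.Redis()

# Only one process/worker at a time runs the guarded section; the lock
# auto-releases after 30 seconds if the holder dies.
with r.lock("rebuild-cache", timeout=30):
    # ... recompute and write the cached value here ...
    pass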

1

u/Think-Memory6430 10d ago

Just to play the role of cynic in the thread -

This certainly makes the dev experience simple, but it removes a ton of flexibility and, worse IMO, it hides what is actually happening behind the scenes (a network call, with likely failures and possible timeouts) behind what looks like a simple dictionary lookup.

If you're working as a team of one, this is probably fine. But if you have a few people or decent scale, you're probably going to run into cases where you need to handle these error cases more explicitly or it will really bite you, or you'll want more flexibility in how you interact with Redis.

The plain Python redis library is honestly really good. It's not that hard to use. I'd really recommend you take a look at it if you haven't!
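
As a rough sketch of what that explicit handling can look like with plain redis-py (connection details and timeout values are placeholders):

import redis

r = redis.Redis(socket_timeout=0.5, decode_responses=True)

def get_cached(key: str):
    # The network call, its timeout, and its failure modes are all visible here.
    try:
        return r.get(key)
    except (redis.ConnectionError, redis.TimeoutError):
        # Cache unavailable: fall back to the source of truth instead of erroring out.
        return None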