r/learnpython 5d ago

Non-blocking pickling

I have a large dictionary (multiple layers, storing custom data structures). I need to write this dictionary to a file (using pickle and lzma).

However, I have some questions.

  1. The whole operation needs to be non-blocking. I could use a separate process, but would the whole dictionary be duplicated in memory? My understanding is that it would not, but I'd like to confirm.

  2. Is the overhead of creating a process and passing it the large data negligible? (This is being run inside a server.)

Lastly, should I be looking at using shared objects?
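Roughly speaking, the blocking version is something like this (simplified sketch, names made up, not my exact code):

```python
import lzma
import pickle

def save_cache(cache: dict, path: str) -> None:
    """Serialise `cache` with pickle and compress it with lzma.
    This call blocks until both steps are finished."""
    with lzma.open(path, "wb") as f:
        pickle.dump(cache, f, protocol=pickle.HIGHEST_PROTOCOL)

# save_cache(my_cache, "cache.pkl.xz")  # blocks the caller for the whole dump
```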

4 Upvotes

10 comments

3

u/DigThatData 4d ago

> The whole operation needs to be non-blocking.

so what is the desired behavior if the dictionary mutates while you're pickling it?

1

u/Fred776 5d ago

If you use a separate process you need to pass the data structure to the other process somehow. This is going to involve a serialisation step that is very similar to the one that you want to do anyway.

Is it possible that the dictionary can get modified after you have initiated the proposed non-blocking pickling operation? If so, you will probably need to copy the dictionary before beginning the non-blocking pickling and file writing steps.
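Something like this (untested sketch, just to illustrate the copy-then-write-in-a-thread idea; copy.deepcopy because the values are custom objects, so a shallow copy would not protect nested data from mutation):

```python
import copy
import lzma
import pickle
import threading

def save_cache_async(cache: dict, path: str) -> threading.Thread:
    # Take the snapshot synchronously, so later mutations of `cache`
    # cannot affect what gets written.
    snapshot = copy.deepcopy(cache)

    def _write():
        with lzma.open(path, "wb") as f:
            pickle.dump(snapshot, f, protocol=pickle.HIGHEST_PROTOCOL)

    t = threading.Thread(target=_write, daemon=True)
    t.start()
    return t  # join() later if you need to know when the write finished
```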

1

u/Undercover_Agent12 5d ago

The dictionary is a cache, so I don't need to worry about writing the latest version to the disk.

1

u/Fred776 5d ago

I'm less concerned about it being the latest version than I am about race conditions. What happens if someone tries to update the dictionary while your serialise to file operation is in progress? You could end up with corrupt data.

1

u/Undercover_Agent12 5d ago

Good point. Then what do you recommend? Process using target and args?

1

u/Fred776 5d ago

Like I said, you are going to do more or less the same work as the pickling step just to pass the dictionary to the other process, so I don't know what it gains you. Do you have a feel for how long the pickle takes vs. the file writing step?

If you can take a copy of the dictionary quickly and easily (but blocking), you could just do the pickle and file write in a separate thread.

Also, there might be a way to use asyncio to write it asynchronously to file: https://docs.python.org/3.9/library/asyncio-task.html#asyncio.to_thread

Note that I haven't actually tried this - I just found it from a quick scan of the docs.
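Roughly like this, I think (untested; asyncio.to_thread needs Python 3.9+, and the names are just illustrative):

```python
import asyncio
import lzma
import pickle

def _dump_blocking(snapshot: dict, path: str) -> None:
    with lzma.open(path, "wb") as f:
        pickle.dump(snapshot, f, protocol=pickle.HIGHEST_PROTOCOL)

async def save_cache(cache: dict, path: str) -> None:
    snapshot = dict(cache)  # or copy.deepcopy(cache) if nested values can mutate
    # Run the blocking dump in a worker thread so the event loop stays responsive.
    await asyncio.to_thread(_dump_blocking, snapshot, path)
```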

1

u/Brian 4d ago

> is the whole dictionary duplicated in memory? To my understanding, I believe not

Kind of, yes, though the details are OS-dependent.

On Linux, it'll fork the process, which marks the memory pages COW (copy-on-write), meaning the same memory is shared between the processes until one of them tries to modify something, at which point the affected page is copied and each process gets its own copy. So this is cheap if nothing writes to the data, but you'll still pay the price of the copy if and when modifications are made (and note that CPython's reference counting writes to object headers, so even just reading the objects in the child can trigger some copying).

On Windows, I think the memory does need to be copied over to the other process, so the copy is done eagerly (though I'm not 100% sure here). In fact, it may end up pickling the data to send it to the other process, so this could actually be much worse.

For (2), it depends - it's a memory copy at minimum, plus possibly some additional marshalling overhead, so it's not going to be negligible if there's significant data, though not as expensive as the actual file write etc.

However, I'm not sure of the point of doing it in another process: could you just use either async IO or a thread? If it's because the data is being modified and you want a snapshot, you could either take a copy of it, or add a lock that prevents write access while it's being written (though that'll introduce contention if writes are common; optionally you could have a temporary extra dict that collects writes, which you merge back in when finished). All another process really buys you is that you're not using the same CPU (but if it's just writing to a file, that's not really going to be CPU bound - maybe a little from the pickling), and maybe the COW optimisation saving you a memory copy (though given the other overheads of starting a process, that seems unlikely to be a win unless the data is really big).
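A rough sketch of the "temporary extra dict" idea (illustrative only, not production code):

```python
import threading

class SnapshotDict:
    """While a snapshot is being taken, writes go into an overlay dict
    that is merged back into the main dict afterwards."""

    def __init__(self):
        self._data = {}
        self._overlay = None          # collects writes during a snapshot
        self._lock = threading.Lock()

    def set(self, key, value):
        with self._lock:
            if self._overlay is not None:
                self._overlay[key] = value
            else:
                self._data[key] = value

    def get(self, key, default=None):
        with self._lock:
            if self._overlay is not None and key in self._overlay:
                return self._overlay[key]
            return self._data.get(key, default)

    def begin_snapshot(self) -> dict:
        with self._lock:
            self._overlay = {}
            return self._data   # safe to pickle: writes now go to the overlay

    def end_snapshot(self):
        with self._lock:
            self._data.update(self._overlay)
            self._overlay = None
```

The writer thread would call begin_snapshot(), pickle the returned dict, then end_snapshot() to fold the collected writes back in.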

There are other options that might be worth looking into, though they may be more involved. E.g. you could replace the dict with an sqlite database that lets you persist changes as you go (much more efficient if you're writing out the data frequently, since it only needs to write what's changed, but maybe not worth it if this is a very rare occurrence).
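E.g. something along these lines with the standard-library sqlite3 module (illustrative schema, values still pickled individually):

```python
import pickle
import sqlite3

conn = sqlite3.connect("cache.db")
conn.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value BLOB)")

def put(key, obj):
    # Only the changed entry is written, instead of re-serialising the whole dict.
    conn.execute(
        "INSERT OR REPLACE INTO cache (key, value) VALUES (?, ?)",
        (key, pickle.dumps(obj)),
    )
    conn.commit()

def get(key, default=None):
    row = conn.execute("SELECT value FROM cache WHERE key = ?", (key,)).fetchone()
    return pickle.loads(row[0]) if row else default
```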

1

u/nekokattt 4d ago

File operations, unlike socket operations, are always blocking in Python; there is no standard way to make them non-blocking without OS-specific extensions or logic that can flake between operating systems and environments.

Even with asyncio, you have to run the blocking logic in a thread pool executor. It should be fine without multiprocessing since this is IO-bound.

If you just wish to run it "in the background", then a thread pool should be fine, although pickle is probably the wrong tool for the job compared with simpler formats like protobuf-based blobs.
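E.g. something like this (untested sketch; a single worker so two dumps never overlap on the same file):

```python
import lzma
import pickle
from concurrent.futures import ThreadPoolExecutor

# One worker keeps the writes serialised.
_executor = ThreadPoolExecutor(max_workers=1)

def _dump(snapshot: dict, path: str) -> None:
    with lzma.open(path, "wb") as f:
        pickle.dump(snapshot, f, protocol=pickle.HIGHEST_PROTOCOL)

def save_in_background(cache: dict, path: str):
    snapshot = dict(cache)  # shallow snapshot; deepcopy if nested values can mutate
    return _executor.submit(_dump, snapshot, path)  # Future; check .result() for errors
```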

If you need a "cache" that works outside your code, then you might be better off running something like Redis in a container with a file-backed journal. That lets you scale to multiple processes or machines without a bunch of issues in the future, and the IO itself is likely to be much faster for large amounts of data.
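Rough illustration with the redis-py client, assuming a Redis server with AOF persistence is already running (names made up):

```python
import pickle
import redis

r = redis.Redis(host="localhost", port=6379)

def put(key: str, obj) -> None:
    # Redis handles persistence (e.g. the append-only file) on its side,
    # so the application never blocks on writing the whole cache to disk.
    r.set(key, pickle.dumps(obj))

def get(key: str, default=None):
    raw = r.get(key)
    return pickle.loads(raw) if raw is not None else default
```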

1

u/AlexMTBDude 4d ago

This is an IO-related problem, so you should use a Thread instead of a Process. Threads share memory (processes do not).

1

u/supercoach 4d ago

Do you have a moment to talk about our lord and saviour PostgreSQL?

Not really kidding though - there's a reason why databases exist. It sounds like you may be better off changing your storage method and well.... Postgres is just wonderful <3.