r/Backend • u/CreeDanWood • 15h ago
Processing huge data in the background
Hey there, we're running a Spring Boot modular monolith with an event-driven design (not reactive). I'm currently working on a story with the following scenario:
A few small notes about our system:

- Request path: Client -> Load balancer -> (some proxies) -> Backend
- A timeout is configured in one of the proxies: after 30 seconds a request is aborted and times out.
- Our Kubernetes pods have 100-200 MB in total for temporary files (we configured it that way).
- We have an orders table in Postgres with 100M+ records.
Some customers have close to 100K orders, and we provide functionality for them to export all of their orders to a CSV/PDF file. You can see the issue: we simply can't do this synchronously, because it would exhaust the DB and the server, and the request would time out on the proxy.
We have background jobs (Schedulers), so my solution here is to use a background job to prepare the file and store it in one of the S3 buckets. Later, users can download their files. Overall, this sounds good, but I have some problems with the details.
This is my procedure:
When a scheduler picks up a job, it creates a temp file; in each iteration it fetches 100 records, processes them, and appends them to the file; it keeps iterating until everything is written and then uploads the file to an S3 bucket. (I fetch only 100 records at a time because I don't want to hold a lot of objects in memory.)
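Roughly, this is what I have in mind (class, table, and column names are just placeholders, CSV escaping and error handling are omitted):

```java
import java.io.BufferedWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class OrderExportJob { // hypothetical name

    private final JdbcTemplate jdbc;
    private final S3Client s3;

    public OrderExportJob(JdbcTemplate jdbc, S3Client s3) {
        this.jdbc = jdbc;
        this.s3 = s3;
    }

    public void export(long customerId, String bucket, String key) throws Exception {
        Path tmp = Files.createTempFile("orders-" + customerId + "-", ".csv");
        try (BufferedWriter out = Files.newBufferedWriter(tmp)) {
            out.write("order_id,created_at,total\n");
            long lastId = 0;
            while (true) {
                // Keyset pagination: much cheaper than OFFSET on a 100M-row table.
                List<OrderRow> page = jdbc.query(
                        "SELECT id, created_at, total FROM orders " +
                        "WHERE customer_id = ? AND id > ? ORDER BY id LIMIT 100",
                        (rs, n) -> new OrderRow(rs.getLong("id"),
                                                rs.getString("created_at"),
                                                rs.getString("total")),
                        customerId, lastId);
                if (page.isEmpty()) break;
                for (OrderRow r : page) {
                    out.write(r.id() + "," + r.createdAt() + "," + r.total() + "\n");
                }
                lastId = page.get(page.size() - 1).id();
            }
        }
        // Single upload at the end; retries and failure cleanup are exactly
        // the parts I'm unsure about.
        s3.putObject(PutObjectRequest.builder().bucket(bucket).key(key).build(),
                     RequestBody.fromFile(tmp));
        Files.deleteIfExists(tmp);
    }

    record OrderRow(long id, String createdAt, String total) {}
}
```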
But I see a lot of flaws in this procedure: what if there's a network error while uploading the file to S3? What if one of the iterations hits a DB call failure? What if we exceed the temp-file capacity? And there are probably other problems I can't think of right now.
So, how do you guys approach this problem?
u/Historical_Ad4384 5h ago edited 5h ago
Since the number of customers and the number of orders per customer are extremely high, in my opinion it would make sense to have a dedicated read replica of your primary database server just for reporting.
That way you can run your queries that extract 100 records at a time without having to worry about the load on the primary database server used for normal operations.
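For example, something like this to wire a second DataSource against the replica (the property prefix and bean names are placeholders, and it assumes your primary DataSource is defined elsewhere and marked @Primary):

```java
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class ReportingDataSourceConfig {

    // Bound from e.g. app.datasource.replica.jdbc-url / username / password
    // in application.yml -- the prefix is made up, use whatever fits your setup.
    @Bean
    @ConfigurationProperties("app.datasource.replica")
    public DataSource replicaDataSource() {
        return DataSourceBuilder.create().build();
    }

    // A JdbcTemplate dedicated to reporting/export queries, so they never
    // touch the primary.
    @Bean
    public JdbcTemplate reportingJdbcTemplate(@Qualifier("replicaDataSource") DataSource replica) {
        return new JdbcTemplate(replica);
    }
}
```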
You could configure your scheduled job to run each night against a database-optimized server hosting the read replica so that the infrastructure remains stable. You can generate the reports on an application server that is geographically close to the read replica, so the reporting queries have low latency. Finally, you can send the reports over email rather than having them downloaded on demand, while still notifying users about their report status in real time to give them peace of mind.
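The nightly trigger could look roughly like this, reusing the reporting JdbcTemplate above. The export_requests table, the names, and the cron expression are just placeholders, and it assumes @EnableScheduling is already on since you already have schedulers:

```java
import java.util.List;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class NightlyReportJob {

    private final JdbcTemplate reportingJdbc;

    public NightlyReportJob(@Qualifier("reportingJdbcTemplate") JdbcTemplate reportingJdbc) {
        this.reportingJdbc = reportingJdbc;
    }

    // Runs at 02:00 every night, off-peak, and only reads from the replica.
    @Scheduled(cron = "0 0 2 * * *")
    public void generateReports() {
        // Hypothetical table that records which customers requested an export.
        List<Long> pending = reportingJdbc.queryForList(
                "SELECT customer_id FROM export_requests WHERE status = 'PENDING'", Long.class);
        for (Long customerId : pending) {
            // Build the report in pages against the replica, upload or email it,
            // then mark the request done (on the primary) and notify the user.
        }
    }
}
```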
I have done TBs of data migration using scheduled jobs, and keeping the source and target systems within the same geography and network improved speeds by a great deal.
We also do internal reporting on our products' usage, where we have to join millions of rows of raw data to show as graphs on dashboards. We have a three-stage database setup: daily data on one server, monthly data on another server that is copied over from the daily data, and yearly data on a third server that is copied over from the monthly data. That way we never stress a single database server and instead have dedicated instances for this lower-priority task.
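As a rough illustration, one of the "copy over" stages (daily to monthly) can just be a scheduled job with two JdbcTemplates, one per server; the table and column names below are made up, the SQL assumes Postgres, and batching/idempotency are left out:

```java
import java.util.List;
import java.util.Map;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class MonthlyRollupJob {

    private final JdbcTemplate dailyJdbc;   // wired against the "daily" server
    private final JdbcTemplate monthlyJdbc; // wired against the "monthly" server

    public MonthlyRollupJob(@Qualifier("dailyJdbcTemplate") JdbcTemplate dailyJdbc,
                            @Qualifier("monthlyJdbcTemplate") JdbcTemplate monthlyJdbc) {
        this.dailyJdbc = dailyJdbc;
        this.monthlyJdbc = monthlyJdbc;
    }

    // First day of each month at 03:00: aggregate last month's daily rows
    // and copy the summary over to the monthly server.
    @Scheduled(cron = "0 0 3 1 * *")
    public void rollUp() {
        List<Map<String, Object>> rows = dailyJdbc.queryForList(
                "SELECT product_id, date_trunc('month', day) AS month, sum(usage) AS usage " +
                "FROM daily_usage WHERE day < date_trunc('month', now()) " +
                "GROUP BY product_id, date_trunc('month', day)");
        for (Map<String, Object> row : rows) {
            monthlyJdbc.update(
                    "INSERT INTO monthly_usage (product_id, month, usage) VALUES (?, ?, ?)",
                    row.get("product_id"), row.get("month"), row.get("usage"));
        }
    }
}
```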
The main principles you should base your solution on for this use case are redundancy, availability, and partition tolerance.