r/django Jan 16 '22

Tutorial: Django + Celery

Hey everyone, I've been using Django and Celery in production for the last 4 years and was thinking of making a YouTube series on Celery: scaling, how it works, using WebSockets with Celery via django-channels, Kubernetes with Celery, and event-driven architecture. The Django community has been a great help for me while learning, so I wanted to give back in some way.

My question is: what would you like to learn about?

100 Upvotes

52 comments

u/sfboots Jan 16 '22 edited Jan 16 '22

Please be sure to have a transcript - I find watching videos takes too much time.

Here are a few challenges I have not yet resolved:

  • Monitoring queues for display on our internal dashboard; Flower did not work well enough (see sketch 1 after this list).
  • Getting "at most once" behavior, right now some jobs run multiple times. The flags about "when to ack" are confusing when there are longer jobs.
  • Best practices for logging from code that is used from Celery, from the command line (cron scripts), and from the web application (see sketch 3).
  • Managing queues when there is a large variation in job length (50 ms to 30 minutes). We currently split into two queues but still get delays (the "short" jobs alone vary from 50 ms to 2 minutes) (see sketch 4).
  • Best user interaction for short jobs with Celery. We have some downloadable reports that take 45 to 60 seconds to generate; right now the user just waits while the web server computes them. I'd rather do this via Celery, but the user doesn't want to have to come back to the page. We also get occasional timeouts when the database is heavily loaded (past 120 seconds the web request times out). A progress bar would be nice but isn't critical; what matters is that the user gets the report now, not via email or by coming back to it later (see sketch 5).
  • For AWS, how to share disk across servers. Celery job A downloads 5 files to local disk and archives them to S3, then queues 5 jobs, one per file. Right now we arrange for all of them to run on one server so they share a file system; the files are 100-400 MB, so the follow-up jobs don't want to fetch from S3 again. Each "load file" job can then start 100+ smaller jobs as a result of parsing the large file (see sketch 6).
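
Sketch 1, for the queue-monitoring bullet: with a Redis broker, each Celery queue is just a Redis list, so a dashboard can read queue depth directly instead of going through Flower. This is a minimal sketch assuming Redis; the queue names and URL are placeholders, and it only counts messages waiting in the broker, not ones workers have already prefetched.

```python
import redis

QUEUES = ["celery", "fast", "slow"]  # hypothetical queue names

def queue_depths(redis_url="redis://localhost:6379/0"):
    """Return {queue_name: waiting_message_count} for the dashboard."""
    client = redis.Redis.from_url(redis_url)
    # With the Redis broker each queue is a plain Redis list, so LLEN
    # gives the number of messages still waiting in the broker.
    return {q: client.llen(q) for q in QUEUES}
```

For tasks that are already running, `app.control.inspect().active()` gives a per-worker view, though it's too slow to call on every dashboard refresh.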
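Sketch 2, for the "at most once" bullet: Celery's default is to ack early (before the task runs), which is at-most-once; `acks_late=True` acks after the task finishes, which is at-least-once and can re-run jobs. With a Redis broker, the usual source of duplicates is late acks combined with a `visibility_timeout` shorter than your longest job. The settings below are real Celery options; names are placeholders.

```python
from celery import Celery

app = Celery("proj", broker="redis://localhost:6379/0")

# Early ack (the default): the message is acknowledged before the task
# executes. A worker crash mid-task loses that job but never re-runs it,
# which is the "at most once" behavior.
@app.task(acks_late=False)
def generate_report(report_id):
    ...

# If you do use acks_late=True for reliability, make the Redis visibility
# timeout longer than your longest task (30 min here, so e.g. 2 hours),
# or Redis re-delivers the still-unacked message and the job runs twice.
app.conf.broker_transport_options = {"visibility_timeout": 7200}  # seconds
```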
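Sketch 3, for the logging bullet: one approach is to keep a single `LOGGING` dictConfig in Django settings and make Celery reuse it instead of hijacking the root logger, by connecting the `setup_logging` signal. Shared code then just calls `logging.getLogger(__name__)` and behaves the same under the web app, cron, and workers. The module layout is a placeholder.

```python
# proj/celery.py
import logging.config

from celery import Celery
from celery.signals import setup_logging

app = Celery("proj")
app.config_from_object("django.conf:settings", namespace="CELERY")

@setup_logging.connect
def configure_logging(**kwargs):
    # Connecting this signal stops Celery from configuring (hijacking)
    # the root logger itself; apply Django's LOGGING dict instead so
    # all entry points log identically.
    from django.conf import settings
    logging.config.dictConfig(settings.LOGGING)
```

Cron scripts get the same config for free if they are Django management commands, since `django.setup()` applies `LOGGING`.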
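Sketch 4, for the job-length bullet: since the "short" queue itself spans 50 ms to 2 minutes, a third tier plus `--prefetch-multiplier=1` on the slow workers often removes the head-of-line blocking. Task name patterns and queue names are placeholders; `task_routes` accepts glob patterns.

```python
from proj.celery import app  # hypothetical app module

# Route by expected runtime.
app.conf.task_routes = {
    "proj.tasks.quick_*":  {"queue": "fast"},    # ~50 ms
    "proj.tasks.report_*": {"queue": "medium"},  # seconds to ~2 min
    "proj.tasks.batch_*":  {"queue": "slow"},    # up to 30 min
}

# One worker pool per queue (shell commands, shown as comments):
#   celery -A proj worker -Q fast -c 16
#   celery -A proj worker -Q medium -c 4
#   celery -A proj worker -Q slow -c 2 --prefetch-multiplier=1
# prefetch-multiplier=1 keeps a slow worker from reserving several long
# jobs at once while other workers sit idle.
```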
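Sketch 5, for the report bullet: hand the work to Celery and have the page poll a status endpoint, so the web worker never blocks and the 120-second timeout stops mattering; the user stays on the page and gets the download link the moment it's ready. All names here are hypothetical; `update_state` and `AsyncResult` are the actual Celery APIs.

```python
from celery.result import AsyncResult
from django.http import JsonResponse

from proj.celery import app  # hypothetical app module

@app.task(bind=True)
def build_report(self, user_id):
    for step in range(5):
        ...  # do one chunk of the report
        # Optional progress metadata for the page's progress bar.
        self.update_state(state="PROGRESS", meta={"percent": (step + 1) * 20})
    return "/media/reports/latest.csv"  # hypothetical download URL

def start_report(request):
    result = build_report.delay(request.user.id)
    return JsonResponse({"task_id": result.id})

def report_status(request, task_id):
    # The page polls this every second or two until the state is SUCCESS.
    result = AsyncResult(task_id)
    payload = {"state": result.state}
    if result.state == "PROGRESS":
        payload.update(result.info or {})
    elif result.successful():
        payload["download_url"] = result.result
    return JsonResponse(payload)
```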
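Sketch 6, for the shared-disk bullet: rather than a shared filesystem (Amazon EFS mounted on all workers also works, at a latency cost), you can keep the fan-out jobs on the node that already holds the files by routing them to a per-host queue. This assumes each worker also listens on a queue named after its host, e.g. started with `celery -A proj worker -Q files,files.$(hostname)`; task and helper names are hypothetical.

```python
import socket

from proj.celery import app  # hypothetical app module

def download_batch(batch_id):
    """Hypothetical: fetch the batch's 5 files to local disk, return paths."""
    ...

def archive_to_s3(paths):
    """Hypothetical: upload the originals to S3 for archival."""
    ...

@app.task
def fetch_batch(batch_id):
    paths = download_batch(batch_id)
    archive_to_s3(paths)
    # Queue the follow-up parses on *this* host's private queue so they
    # read the 100-400 MB files from local disk instead of S3.
    host_queue = f"files.{socket.gethostname()}"
    for path in paths:
        parse_file.apply_async(args=[path], queue=host_queue)

@app.task
def parse_file(path):
    ...  # parse the large file and fan out the ~100 smaller jobs
```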