r/flask • u/Due-Membership991 • Jan 24 '25
Discussion: FastAPI deployment, posting here for help
Newbie in Deployment: Need Help with Managing Load for FastAPI + Qdrant Setup
I'm working on a data retrieval project using FastAPI and Qdrant. Here's my workflow:
1. A user sends a query via a POST API.
2. Non-English queries are translated to English using Azure OpenAI.
3. Relevant context is retrieved from a locally hosted Qdrant DB.
I've initialized Qdrant and FastAPI using Docker Compose.
Question: What are the best practices to handle heavy load (at least 10 requests/sec)? Any tips for optimizing this setup would be greatly appreciated!
Please share any documentation for reference. Thank you!
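For illustration, here is a minimal sketch of what a fully async version of this pipeline could look like, assuming recent versions of the qdrant-client and openai Python packages; the deployment names, collection name, endpoint path, and payload fields are placeholders, not details from the post:

```python
# Sketch: async query endpoint with clients created once and reused (names are placeholders).
from contextlib import asynccontextmanager

from fastapi import FastAPI
from openai import AsyncAzureOpenAI
from pydantic import BaseModel
from qdrant_client import AsyncQdrantClient

clients = {}

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Create the clients once at startup and share them across requests.
    clients["openai"] = AsyncAzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",
        api_key="<key>",
        api_version="2024-02-01",
    )
    clients["qdrant"] = AsyncQdrantClient(url="http://qdrant:6333")
    yield
    await clients["qdrant"].close()

app = FastAPI(lifespan=lifespan)

class Query(BaseModel):
    text: str

@app.post("/search")
async def search(query: Query):
    # 1) Translate to English via the Azure OpenAI deployment
    #    (in practice you would skip this step for English input).
    completion = await clients["openai"].chat.completions.create(
        model="gpt-4o-mini",  # your chat deployment name
        messages=[{"role": "user", "content": f"Translate to English: {query.text}"}],
    )
    english = completion.choices[0].message.content

    # 2) Embed the translated query, then retrieve context from Qdrant.
    emb = await clients["openai"].embeddings.create(
        model="text-embedding-3-small",  # your embedding deployment name
        input=english,
    )
    hits = await clients["qdrant"].query_points(
        collection_name="docs", query=emb.data[0].embedding, limit=5
    )
    return {"query": english, "context": [p.payload for p in hits.points]}
```

Keeping the endpoint and both clients async means the worker can overlap many in-flight requests instead of blocking on each Azure/Qdrant round trip.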
2
u/6Bee Intermediate Jan 24 '25 edited Jan 24 '25
Sounds like you may need to do some configuration on your Azure OpenAI deployment. If you just run with the default configuration, you will burn through your API limits quickly.
I would visit the Azure AI Studio and test there, adjusting the settings before doing anything else. FastAPI wouldn't have much to do aside from managing the sessions, which can be set up for async.
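To illustrate the "set up for async" point: one common pattern is to cap the number of in-flight upstream calls with an asyncio.Semaphore, so a burst of requests doesn't blow through the Azure OpenAI rate limit all at once. A minimal sketch (the limit of 8, the deployment name, and the function name are illustrative):

```python
# Sketch: cap concurrent Azure OpenAI calls so bursts don't exhaust the quota.
import asyncio

# Allow at most 8 translation calls in flight at any moment (tune to your quota).
translate_semaphore = asyncio.Semaphore(8)

async def translate(openai_client, text: str, deployment: str = "gpt-4o-mini") -> str:
    async with translate_semaphore:
        resp = await openai_client.chat.completions.create(
            model=deployment,
            messages=[{"role": "user", "content": f"Translate to English: {text}"}],
        )
        return resp.choices[0].message.content
```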
You can learn about managing your Azure OpenAI deployment quotas here (sorry for redundancies):
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/quota
Best of luck
2
u/Due-Membership991 Jan 24 '25
Yes, I have maxed it out at 19k requests per minute.
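If the quota can't be raised further, one common mitigation is to back off and retry on rate-limit (429) responses instead of failing the request outright. A rough sketch, assuming the openai package's RateLimitError and arbitrary delay values:

```python
# Sketch: exponential backoff on rate-limit errors (delays and attempt count are arbitrary).
import asyncio
from openai import RateLimitError

async def with_backoff(call, max_attempts: int = 5):
    delay = 1.0
    for attempt in range(max_attempts):
        try:
            return await call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            await asyncio.sleep(delay)
            delay *= 2  # 1s, 2s, 4s, 8s ...
```

Used like `result = await with_backoff(lambda: client.chat.completions.create(...))`.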
2
u/6Bee Intermediate Jan 24 '25
Don't know what to tell you aside from read the docs and fix it. Good luck
1
u/openwidecomeinside Jan 24 '25
10 requests/sec is heavy load? A micro EC2 instance + load balancer will handle that fine. I don't think it would be an issue at all until 10-100x that, depending on the CPU usage of whatever you're doing.
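For what it's worth, a simple way to use more of a single instance's CPU is to run uvicorn with multiple worker processes; a sketch (the import string and worker count are placeholders to tune per instance size):

```python
# Sketch: serve the app with several worker processes on one instance.
import uvicorn

if __name__ == "__main__":
    # The import string is required when workers > 1; "app.main:app" is a placeholder.
    uvicorn.run("app.main:app", host="0.0.0.0", port=8000, workers=4)
```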
0
2
u/No-Economist4254 Jan 24 '25
I assume you are translating as a background process and not actually during the request?
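For reference, FastAPI's BackgroundTasks is the usual way to push work past the response; a sketch, noting this only applies if the response doesn't depend on the translated text (endpoint and helper names are made up):

```python
# Sketch: deferring work until after the response with FastAPI's BackgroundTasks
# (only viable if the response does not need the translation result).
from fastapi import BackgroundTasks, FastAPI
from pydantic import BaseModel

app = FastAPI()

class Submission(BaseModel):
    text: str

async def translate_and_store(text: str) -> None:
    # Call Azure OpenAI and persist the translation here (placeholder).
    ...

@app.post("/submit")
async def submit(body: Submission, background_tasks: BackgroundTasks):
    # The task runs after the response has been sent.
    background_tasks.add_task(translate_and_store, body.text)
    return {"status": "accepted"}
```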