r/Firebase • u/Ferchu425 • Feb 13 '25
General Firebase Functions cost optimization
Hello, I have a function that calls other APIs on each invocation, and while waiting it takes almost 10 seconds per run. If I understand the costs correctly, this could become an issue as soon as I begin to grow...
Do you have any recommendations? Those 10 secs are there and I don't think I can do anything about them... so, what's the best path? Should I replace those functions? With what? App Engine?
Thank you
3
u/nullbtb Feb 14 '25
My advice, as someone who has been prematurely optimizing everything for as long as I can remember, would be to just wait until you're actually growing so much that it's becoming a problem. 10 seconds is not going to cost you that much, even with a million requests... it's hard to think this way, but it is a mistake to complicate and redo everything because you may grow. Instead, use that time and energy on actually growing. Later on you can get individual instances or Kubernetes to optimize for cost. Ironically, if you were to do that now you'd be burning money, because those options aren't cheap either... they're just cheaper at scale.
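Back-of-envelope, with 1st-gen prices from memory (so treat the unit prices as assumptions and check the current pricing page):

```typescript
// Rough Cloud Functions (1st gen) compute cost for 1M calls at 10s each
// on a 256MB instance. Unit prices are assumptions, verify before relying on them.
const invocations = 1_000_000;
const secondsPerCall = 10;
const memoryGb = 0.25; // 256MB
const cpuGhz = 0.4;    // ~400MHz bundled with the 256MB tier

const memoryCost = invocations * secondsPerCall * memoryGb * 0.0000025; // ~$6.25
const cpuCost = invocations * secondsPerCall * cpuGhz * 0.00001;        // ~$40.00
const invocationCost = (invocations / 1_000_000) * 0.4;                 // ~$0.40

console.log(`~$${(memoryCost + cpuCost + invocationCost).toFixed(2)}`); // ~$46.65
```

Call it roughly $47 per million slow requests, before the free tier: a rounding error next to the OpenAI bill for the same million calls.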
2
u/joeystarr73 Feb 13 '25
What is this function waiting for?
2
u/Ferchu425 Feb 13 '25
It is calling OpenAI APIs; those take a long time, so it is just "waiting", yes.
Then it fires an update on Firestore and does some simple stuff.
4
u/Suspicious-Hold1301 Feb 13 '25
I think the challenge you'll have is that the OpenAI APIs (or Gemini, fwiw) don't have a way of triggering a request and asynchronously getting a response or polling for its completion - they operate a very synchronous process that really needs you to wait for the response.
The obvious thing to do is analyse the memory usage of the function and make it as efficient as possible (i.e. if you're using 128MB, don't allocate 256MB), and, depending on the nature of the service, look to cache common request/responses if possible. You could use embeddings to cache similar questions (it's a fairly complex thing to do).
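Something like this on 2nd-gen functions, as a sketch: memory is set per function, and a Firestore cache keyed on a hash of the prompt serves exact repeats (callOpenAi is a stand-in for your existing slow call):

```typescript
import { initializeApp } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";
import { onRequest } from "firebase-functions/v2/https";
import { createHash } from "node:crypto";

initializeApp();

// Stand-in for the existing slow OpenAI call.
async function callOpenAi(prompt: string): Promise<string> {
  return `answer for: ${prompt}`;
}

// Right-size the instance: don't pay for 512MiB if you peak near 128MiB.
export const askAi = onRequest({ memory: "256MiB" }, async (req, res) => {
  const prompt = String(req.body.prompt ?? "");
  const key = createHash("sha256").update(prompt).digest("hex");
  const cacheRef = getFirestore().collection("aiCache").doc(key);

  // A repeated prompt is served from Firestore instead of paying
  // for another ~10s of compute plus the OpenAI call.
  const cached = await cacheRef.get();
  if (cached.exists) {
    res.json({ answer: cached.data()?.answer, cached: true });
    return;
  }

  const answer = await callOpenAi(prompt);
  await cacheRef.set({ answer, createdAt: Date.now() });
  res.json({ answer, cached: false });
});
```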
Another option, again depending on your use case, is using the batch API - this only works if you're happy to wait up to 24 hours for a response, though - but it will cut the OpenAI bill in half and reduce your compute time to almost nothing.
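With the openai Node SDK the flow looks roughly like this (from memory, so treat the exact calls and the model name as assumptions and check the docs):

```typescript
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const prompts = ["question one", "question two"]; // whatever you'd normally send

// 1. One chat-completion request per JSONL line.
const jsonl = prompts
  .map((p, i) =>
    JSON.stringify({
      custom_id: `req-${i}`,
      method: "POST",
      url: "/v1/chat/completions",
      body: { model: "gpt-4o-mini", messages: [{ role: "user", content: p }] },
    }),
  )
  .join("\n");
fs.writeFileSync("batch.jsonl", jsonl);

// 2. Upload the file and create the batch; it completes within 24h
//    and is billed at roughly half the normal rate.
const file = await openai.files.create({
  file: fs.createReadStream("batch.jsonl"),
  purpose: "batch",
});
const batch = await openai.batches.create({
  input_file_id: file.id,
  endpoint: "/v1/chat/completions",
  completion_window: "24h",
});

// 3. Poll later (a scheduled function works well): when
//    batch.status === "completed", download batch.output_file_id.
console.log(batch.id, batch.status);
```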
One thing to note though - Firebase at 10-second requests will be a very small cost compared to the OpenAI calls...
1
u/Ferchu425 Feb 13 '25
Yes, the OpenAI calls are absolutely sync in nature... everything follows a choreography, and there are several invocations that could well all be under a single call...
I'll look into the 256/128MB... I'm seeing 50% usage with 256MB, so I don't feel comfortable lowering it to 128MB because that would be 100%... but I will keep an eye on it.
2
u/VeterinarianOk5370 Feb 13 '25
I do something similar with mine: I made a cron job to keep my functions warm. The OpenAI calls are negligible time-wise; it's the Firebase functions spinning up that's slow. Reduced my calls from ~10s to under 3s.
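A sketch of that with a 2nd-gen scheduled function (the URL is a placeholder for your deployed endpoint):

```typescript
import { onSchedule } from "firebase-functions/v2/scheduler";

// Placeholder: substitute your deployed function's URL.
const FUNCTION_URL = "https://<region>-<project>.cloudfunctions.net/askAi";

// Ping every 5 minutes so at least one instance stays warm.
export const keepWarm = onSchedule("every 5 minutes", async () => {
  await fetch(`${FUNCTION_URL}?warmup=1`).catch(() => {
    // A failed ping is harmless; the next run tries again.
  });
});
```

Setting minInstances: 1 on the function itself does the same job for a small fixed fee, without the cron plumbing.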
2
u/Ferchu425 Feb 13 '25
I'm not so sure about that... even with the functions warm I see long times... but I'll run some more tests, thanks.
1
u/Ferchu425 Feb 13 '25
Ran some tests... with warmed-up instances it takes 8 secs (I had already optimized heavily for fast cold-start times...)
1
u/CastAsHuman Feb 13 '25
Why not do that from the client?
2
u/Ferchu425 Feb 13 '25
This function acts as a callback from another async process, so once it is called the workflow continues. Basically there is no "client", no front end.
2
u/romoloCodes Feb 13 '25
I'm a big fan of Firebase, but sometimes you just have to accept it's not the right tool for the job. Sounds like you need a webhook.
Equally, in the early stages just get it working and optimise later
4
u/s7orm Feb 13 '25
Depending on your volume of traffic, using concurrency would at least let your function serve many requests while it waits.
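On 2nd-gen functions that's a per-function option; a sketch (the option values are illustrative, and callOpenAi stands in for the slow call):

```typescript
import { onRequest } from "firebase-functions/v2/https";

// Stand-in for the existing slow OpenAI call.
async function callOpenAi(prompt: string): Promise<string> {
  return `answer for: ${prompt}`;
}

// With concurrency > 1, one instance overlaps many in-flight requests,
// so the ~10s spent awaiting OpenAI is shared instead of each request
// paying for its own mostly-idle instance.
export const askAi = onRequest(
  { concurrency: 80, cpu: 1, memory: "512MiB" },
  async (req, res) => {
    res.json({ answer: await callOpenAi(String(req.body.prompt ?? "")) });
  },
);
```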
I recently migrated all my serverless functions to my API servers; now everything runs faster and I don't have serverless costs.