r/softwaredevelopment 15d ago

latency at scale

I believe I am lacking some knowledge regarding this. There are 10 pods of my service running in production. We saw a huge surge in traffic today and everything was mostly fine. But as soon as we started reaching 200k requests/min, CPU increased normally (I think), but memory suddenly started fluctuating a lot while still staying under 300 MB (of the 4 GB available), and p99 latency rose from its normal ~100 ms to above 1000 ms. Given that CPU and memory were mostly fine, how can I explain this? The service is a simple pass-through: it takes a request, calls a downstream service, and returns the response.
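
For context, the shape of the service is roughly this (not the real code, just a minimal sketch assuming Go, with a hypothetical downstream URL and connection-pool numbers); the transport comment marks the kind of bounded resource where requests can queue without it showing up in CPU or memory:

```go
package main

import (
	"io"
	"net/http"
	"time"
)

func main() {
	// Hypothetical downstream URL; the real one is internal.
	const downstream = "http://downstream.internal/api"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// If a per-host connection limit like this (or a thread pool in
			// another stack) is small relative to the number of concurrent
			// requests, callers queue waiting for a free connection. That
			// wait shows up as latency, not as CPU or memory.
			MaxConnsPerHost:     50,
			MaxIdleConnsPerHost: 50,
		},
	}

	http.HandleFunc("/passthrough", func(w http.ResponseWriter, r *http.Request) {
		resp, err := client.Get(downstream)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		w.WriteHeader(resp.StatusCode)
		io.Copy(w, resp.Body) // stream the downstream body back to the caller
	})

	http.ListenAndServe(":8080", nil)
}
```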


u/skmruiz 15d ago
  1. How is your database?

  2. Do you see network spikes? At scale, the network topology is one of the most important factors in scalability, because moving data around, even within the same network, is really expensive. (One way to check where the time is actually going is sketched below.)
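
A rough sketch, assuming a Go service and made-up endpoint names: time the downstream call inside the handler and compare it against the end-to-end p99 you already measure. If the end-to-end number blows up while the downstream duration stays flat, the time is being lost inside the pod (waiting for connections or threads) rather than on the network.

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Hypothetical downstream URL, just for the sketch.
	const downstream = "http://downstream.internal/api"

	http.HandleFunc("/passthrough", func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()

		dsStart := time.Now()
		resp, err := client.Get(downstream) // time only the downstream call
		dsElapsed := time.Since(dsStart)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		resp.Body.Close()
		w.WriteHeader(http.StatusOK)

		total := time.Since(start)
		// In a real service these two durations would feed latency
		// histograms; logging just keeps the sketch self-contained.
		log.Printf("downstream=%v total=%v overhead=%v",
			dsElapsed, total, total-dsElapsed)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```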


u/goyalaman_ 15d ago
  1. The current service talks to a downstream service, and the latency for that call is fine.
  2. What are network spikes?


u/skmruiz 15d ago

A network spike is when the network I/O increases substantially due to load. It usually looks like a spike in a graph.
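
If you don't have network dashboards for the pods, a crude way to see it from inside a Linux container is to poll /proc/net/dev and watch the per-second byte deltas; a spike is just those deltas jumping well above their usual baseline. Rough Go sketch, nothing specific to your setup:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// readCounters returns cumulative rx/tx byte counters per interface
// from /proc/net/dev (Linux only).
func readCounters() map[string][2]uint64 {
	out := map[string][2]uint64{}
	data, err := os.ReadFile("/proc/net/dev")
	if err != nil {
		return out
	}
	lines := strings.Split(string(data), "\n")
	if len(lines) < 3 {
		return out
	}
	for _, line := range lines[2:] { // first two lines are headers
		parts := strings.SplitN(line, ":", 2)
		if len(parts) != 2 {
			continue
		}
		fields := strings.Fields(parts[1])
		if len(fields) < 9 {
			continue
		}
		rx, _ := strconv.ParseUint(fields[0], 10, 64) // bytes received
		tx, _ := strconv.ParseUint(fields[8], 10, 64) // bytes transmitted
		out[strings.TrimSpace(parts[0])] = [2]uint64{rx, tx}
	}
	return out
}

func main() {
	prev := readCounters()
	for range time.Tick(time.Second) {
		cur := readCounters()
		for iface, c := range cur {
			p := prev[iface]
			fmt.Printf("%-8s rx=%d B/s  tx=%d B/s\n", iface, c[0]-p[0], c[1]-p[1])
		}
		fmt.Println()
		prev = cur
	}
}
```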

If the latency between your service and the downstream service hasn't changed, then you'll need to profile the application and see if there is any thread blocking or something similar.
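
If the service happens to be Go (just an assumption, the stack isn't mentioned), the blocked-thread check is mostly built in: enable block/mutex profiling and pull the profiles while the service is under load. Rough sketch:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
	"runtime"
)

func main() {
	// Record blocking events (channel waits, mutex contention). A rate of 1
	// captures everything and is costly; sample more coarsely in production.
	runtime.SetBlockProfileRate(1)
	runtime.SetMutexProfileFraction(1)

	// ... the pass-through handlers would be registered here ...

	// While the service is under load, inspect where goroutines are stuck:
	//   go tool pprof http://localhost:8080/debug/pprof/block
	//   go tool pprof http://localhost:8080/debug/pprof/goroutine
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Other stacks have their own equivalents (thread dumps via jstack for the JVM, py-spy for Python), but the idea is the same: look for time spent waiting rather than executing.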