I'm working on a Remote Procedure Call (RPC) server that processes incoming streams of data from clients, and I'm looking for a way to set a limit on the number of simultaneous streams allowed.
There is some bottleneck in the system (such as CPU time or NIC bandwidth), but it's hard to predict up front how many streams it takes to saturate it, because each stream may place a different demand on the limiting resource.
I want to allow enough streams to saturate the bottleneck, so that I'm making good use of my hardware, but not so many that each individual stream gets poor service.
Because per-stream overheads like RAM for receive buffers eventually become a bottleneck themselves, I expect the graph of throughput versus number of live streams to look something like the following: as the number of streams grows, throughput increases more or less linearly until we hit the bottleneck, after which things eventually start to get worse.
What I'd like is a control loop that lets me sit near the top of that graph. I can vary the limit on stream count over time and measure server throughput to sample points on the graph, but I don't know how to converge to near the maximum, and stay there even as conditions change.
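To make the idea concrete, here's a rough sketch (in Go) of the naive hill-climbing search I'm imagining; `measureThroughput` and all the constants are made up, standing in for however I'd actually apply a limit and observe throughput:

```go
package main

import (
	"fmt"
	"time"
)

// measureThroughput stands in for applying the given stream limit to the
// server and observing its throughput (requests or bytes per second, say)
// over one measurement window.
func measureThroughput(limit int, window time.Duration) float64 {
	_ = limit // a real implementation would resize a semaphore or token pool
	time.Sleep(window)
	return 0 // placeholder; the real server would report its own counters
}

func main() {
	limit := 8                // current cap on simultaneous streams
	step := 1                 // +1 or -1: direction the search is moving
	window := 5 * time.Second // how long to measure before each adjustment
	prev := measureThroughput(limit, window)

	for {
		limit += step
		if limit < 1 {
			limit = 1
		}
		cur := measureThroughput(limit, window)
		if cur < prev {
			step = -step // throughput fell, so turn around and search the other way
		}
		prev = cur
		fmt.Printf("limit=%d throughput=%.1f\n", limit, cur)
	}
}
```

Something this crude clearly wanders around the peak and can be fooled by a single noisy window, which is why I'm hoping there's a better-understood technique.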
A control system like a PID loop requires a set point, but that doesn't make sense here because I don't know a priori what maximum throughput the server is capable of, and that may change over time. It's almost as if I want to run a PID loop on the derivative of the curve with a set point of zero, to find the maximum, but who knows what the dynamics of that are.
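Here is roughly what I mean by running a loop on the derivative, again as a made-up sketch: dither the limit, estimate the local slope of the throughput curve from a finite difference, and push that slope toward a set point of zero (only a proportional term here):

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// measureThroughput stands in for applying the given stream limit to the
// server and observing throughput over one measurement window.
func measureThroughput(limit float64, window time.Duration) float64 {
	_ = limit // a real implementation would round this and resize the admission cap
	time.Sleep(window)
	return 0 // placeholder; the real server would report its own counters
}

func main() {
	limit := 8.0 // current cap on simultaneous streams, kept as a float for the controller
	const (
		dither = 1.0             // size of the probe perturbation, in streams
		gain   = 0.5             // proportional gain pushing the slope toward zero
		window = 5 * time.Second // length of one measurement window
	)

	for {
		// Probe either side of the current limit to estimate the local slope
		// dThroughput/dLimit from a central finite difference.
		lo := measureThroughput(limit-dither, window)
		hi := measureThroughput(limit+dither, window)
		slope := (hi - lo) / (2 * dither)

		// Treat the slope as the process variable with a set point of zero:
		// a positive slope means more streams still help, a negative slope
		// means we're past the peak, and at the peak the step vanishes.
		limit = math.Max(1, limit+gain*slope)
		fmt.Printf("limit=%.1f slope=%.3f\n", limit, slope)
	}
}
```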
Are there any well-known control loops that attempt to maximize a process variable?
