In the convergence analysis of federated learning (FL), the loss function is usually assumed to be strongly convex. However, when the loss is non-convex, e.g., when the model is a deep neural network (DNN), I rarely find convergence analyses for this setting.
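For concreteness, the strong-convexity assumption I am referring to is the standard condition (my own notation, stated only to make the question precise; $F$ denotes the global objective and $\mu > 0$):

$$
F(y) \;\ge\; F(x) + \langle \nabla F(x),\, y - x \rangle + \frac{\mu}{2}\,\|y - x\|^2 \qquad \forall\, x, y,
$$

which, as far as I can tell, is what drives the $\mathcal{O}(1/T)$ bounds on the loss gap $F(w_T) - F(w^*)$ in the analyses I have seen. With that in mind, I have a few questions: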
- Is it reasonable to apply the strong-convexity assumption to a DNN loss function?
- Is there any way to analyze the convergence rate of the global loss gap (defined below) in FL when a DNN model is used? What assumptions can be adopted in this case?
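By the "global loss gap" I mean the quantity below, where $F(w) = \sum_{k=1}^{N} p_k F_k(w)$ is the weighted global objective over the $N$ clients and $w^*$ its minimizer (again my own notation, just to make the target of the analysis explicit):

$$
\mathbb{E}\big[F(w_T)\big] - F(w^*),
$$

as opposed to a stationarity measure such as $\min_{t \le T} \mathbb{E}\,\|\nabla F(w_t)\|^2$, which seems to be what non-convex analyses typically bound instead.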