I am currently reading papers [1] and [2].
The authors claim that their Federated Learning scheme, FFL, is similar to Model-Agnostic Meta-Learning (MAML).
They state:
Interestingly, FFL is similar to Model-Agnostic Meta-Learning (MAML) in three aspects:
(i) in FFL, we have workers who possess their own datasets (with different distributions), while in MAML, there are tasks with their corresponding datasets;
(ii) in FFL, we have local updates that improve the performance (loss/accuracy) of individual workers, whereas in MAML, there are inner-loop updates that do so for individual tasks;
(iii) in FFL, in each round, we apply a global gradient to the weights, which improves the overall performance of all workers, while in MAML, the outer-loop update does so for all tasks.
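
To pin down what this analogy would look like concretely, here is a minimal sketch of one round of each on a toy quadratic loss, assuming FedAvg-style weight averaging on the federated side and first-order MAML on the meta-learning side. The names (`fed_round`, `maml_step`) and the toy loss are my own illustrative choices, not the actual algorithms from [1] or [2]:

```python
# Toy comparison (my own sketch, not the paper's exact FFL algorithm):
# one federated round vs. one first-order MAML meta-step.
import numpy as np

rng = np.random.default_rng(0)

def grad(w, target):
    """Gradient of the quadratic loss 0.5 * ||w - target||^2 w.r.t. w."""
    return w - target

# Each worker (FL view) / task (MAML view) has its own "data
# distribution", modeled here as a distinct optimum of the toy loss.
targets = [rng.normal(size=3) for _ in range(4)]

def fed_round(w, lr=0.1, local_steps=5):
    """One federated round: local updates per worker, then a global step."""
    local_weights = []
    for t in targets:                      # (i) workers with their own data
        w_local = w.copy()
        for _ in range(local_steps):       # (ii) local updates per worker
            w_local -= lr * grad(w_local, t)
        local_weights.append(w_local)
    # (iii) global step helping all workers (FedAvg-style averaging here)
    return np.mean(local_weights, axis=0)

def maml_step(w, inner_lr=0.1, outer_lr=0.1, inner_steps=5):
    """One first-order MAML meta-step: inner loops per task, one outer update."""
    meta_grad = np.zeros_like(w)
    for t in targets:                      # (i) tasks with their own data
        w_task = w.copy()
        for _ in range(inner_steps):       # (ii) inner-loop updates per task
            w_task -= inner_lr * grad(w_task, t)
        # First-order MAML: use the post-adaptation gradient and ignore
        # the second-derivative terms of full MAML.
        meta_grad += grad(w_task, t)
    # (iii) one outer-loop update improving performance across all tasks
    return w - outer_lr * meta_grad / len(targets)

w0 = np.zeros(3)
print("after one federated round:", fed_round(w0))
print("after one FOMAML step:    ", maml_step(w0))
```

At least at this level, the loop structure does line up point-for-point with (i)-(iii); the visible difference in my sketch is how the per-worker/per-task results are combined into the global update (averaging of adapted weights vs. averaging of post-adaptation gradients).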
Are they right to make this claim?