Idea of improving FL using NAS

References:

NAS for Better Federated Learning Models in Different FL Topologies?

(Another direction might also be enticing: using FL to improve NAS.)

In Mi Zhang’s FL benchmark paper (arXiv ’20), FedML: A Research Library and Benchmark for Federated Learning, I found several important features that an FL system can have, such as the computing paradigm, topology, exchanged information, and training procedure.

It also mentions FedNAS as one category of FL algorithm (in Section 4.1). However, based on a survey paper about Fed + NAS, I think there can be many different ways to improve FL using NAS. Recalling Song Han’s Once-for-All paper, what he does is adopt NAS to find the best model under machine resource constraints. Similarly, we could probably use NAS to find the best model under FL constraints, and these constraints seem more complicated than machine resource constraints alone – FL has multiple constraints (or features?), such as the computing paradigm, topology, exchanged information, and training procedure.
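To make the analogy concrete, here is a minimal sketch (my own illustration, not from the FedML or Once-for-All papers) of what an FL-aware search objective could look like: instead of penalizing only per-device latency, the score also penalizes total communication cost, which depends on model size, the number of rounds, and the topology. The `Candidate` fields, the budget, and the weights are all hypothetical.

```python
# A minimal sketch of an FL-aware NAS scoring function. All names, budgets,
# and weights are hypothetical and only illustrate the idea of folding FL
# constraints (here, communication cost) into the search objective.
from dataclasses import dataclass

@dataclass
class Candidate:
    accuracy: float          # validation accuracy of the candidate architecture
    num_params: int          # model size, drives per-round communication cost
    rounds_to_converge: int  # estimated FL rounds under a given topology

def fl_aware_score(c: Candidate,
                   bytes_per_param: int = 4,
                   comm_budget_bytes: float = 1e9,
                   comm_weight: float = 0.3) -> float:
    """Score = accuracy minus a penalty for exceeding the communication budget.

    Communication cost ~= model size * number of rounds * 2 (upload + download),
    which depends on the FL topology and training procedure, not only on the
    device's compute resources as in classic hardware-aware NAS.
    """
    comm_bytes = c.num_params * bytes_per_param * c.rounds_to_converge * 2
    overshoot = max(0.0, comm_bytes / comm_budget_bytes - 1.0)
    return c.accuracy - comm_weight * overshoot

# Example: a smaller model that needs more rounds may still score higher
# if it keeps total communication under the budget.
big = Candidate(accuracy=0.82, num_params=25_000_000, rounds_to_converge=100)
small = Candidate(accuracy=0.79, num_params=5_000_000, rounds_to_converge=150)
print(fl_aware_score(big), fl_aware_score(small))
```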

So, tracking down this direction, let’s see whether anything has been done on improving FL models using NAS.

New ideas:

  • Where do we search? How about at the server level instead of at the client level?
    • This needs a new training method that works without data (since data is only available at the clients).
    • Can we use only the trained $w$ from the clients as the data for the NAS? (A rough sketch of this idea follows after this list.)
  • What do we search for? For the optimization goals (or search objectives) of the NAS, we might have more choices than just accuracy or device resource constraints:
    • Can we search by balancing more factors, such as the computing paradigm, topology, communication cost, or training procedure?
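To illustrate the server-level idea, here is a hypothetical sketch in which the server never sees raw data: each client trains every candidate architecture locally and returns only its trained weights $w$, and the server ranks candidates by how much the client weights diverge from their FedAvg-style average. The divergence proxy, the function names, and the toy data are all assumptions for illustration, not an established or validated method.

```python
# A hypothetical sketch of "searching at the server level": the server only
# receives trained weight vectors w_i from clients and uses their cross-client
# divergence as a data-free proxy signal for ranking candidate architectures.
import numpy as np

def cross_client_divergence(client_weights: list[np.ndarray]) -> float:
    """Mean L2 distance of each client's weights from the averaged model."""
    stacked = np.stack(client_weights)   # shape: (num_clients, num_params)
    global_w = stacked.mean(axis=0)      # FedAvg-style aggregate
    return float(np.linalg.norm(stacked - global_w, axis=1).mean())

def rank_candidates(results: dict[str, list[np.ndarray]]) -> list[str]:
    """Rank candidate architectures by lower divergence (more client agreement)."""
    return sorted(results, key=lambda arch: cross_client_divergence(results[arch]))

# Toy usage: two candidate architectures, three clients each, random weights.
rng = np.random.default_rng(0)
results = {
    "arch_a": [rng.normal(0.0, 0.1, 1000) for _ in range(3)],
    "arch_b": [rng.normal(0.0, 0.5, 1000) for _ in range(3)],
}
print(rank_candidates(results))  # arch_a's client weights agree more closely
```

Whether low cross-client divergence actually correlates with a good architecture is exactly the open question here; the point of the sketch is only that the server-side search loop can run on weights alone.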

Created Nov 24, 2020 // Last Updated Aug 31, 2021
