NeurIPS’20: Does Unsupervised Architecture Representation Learning Help Neural Architecture Search?
Overview: This paper proposes a method to avoid bias in NAS.
Problem: jointly learning architecture representations (training) and optimizing the search (search) can introduce bias.
Solution: Decouple training and search, realized via unsupervised learning:
Search spaces follow the NAS best-practices checklist and come from three prior works:
Evaluation covers two aspects:
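The decoupling idea can be sketched in code. The toy below is not the paper's arch2vec implementation; it only illustrates the two-stage structure under simplified assumptions: architectures are flattened one-hot op sequences, the encoder/decoder are single linear layers trained with a reconstruction loss (stage 1, no accuracy labels), and the search (stage 2) is hill climbing in the frozen latent space against a hypothetical `proxy_score` standing in for validation accuracy.

```python
# Stage 1: unsupervised pretraining of an architecture autoencoder.
# Stage 2: search in the frozen latent space, never touching the encoder.
import numpy as np

rng = np.random.default_rng(0)
N_ARCH, N_OPS, SEQ, LATENT = 200, 4, 6, 8

# Toy "architectures": random op sequences, flattened to one-hot vectors.
ops = rng.integers(0, N_OPS, size=(N_ARCH, SEQ))
X = np.eye(N_OPS)[ops].reshape(N_ARCH, -1)        # shape (200, 24)

# --- Stage 1: train a linear autoencoder (no accuracy labels used) ---
W_enc = rng.normal(0, 0.1, (X.shape[1], LATENT))
W_dec = rng.normal(0, 0.1, (LATENT, X.shape[1]))
lr = 0.05
for _ in range(500):
    Z = X @ W_enc
    err = Z @ W_dec - X                           # reconstruction error
    W_dec -= lr * Z.T @ err / N_ARCH
    W_enc -= lr * X.T @ (err @ W_dec.T) / N_ARCH

# --- Stage 2: search in the frozen latent space ---
def proxy_score(arch_ops):
    # Hypothetical cheap proxy for accuracy: count occurrences of op 0.
    return float((arch_ops == 0).sum())

Z = X @ W_enc                                     # frozen embeddings
best = 0
for _ in range(20):
    # Move to the best-scoring architecture among 5 latent neighbors.
    d = np.linalg.norm(Z - Z[best], axis=1)
    neighbors = np.argsort(d)[1:6]
    cand = max(neighbors, key=lambda i: proxy_score(ops[i]))
    if proxy_score(ops[cand]) > proxy_score(ops[best]):
        best = cand
print("best proxy score:", proxy_score(ops[best]))
```

The point of the structure is that stage 2 can swap in any search strategy (RL, Bayesian optimization, evolution) without re-training the representation, which is the decoupling the paper argues for.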
Lele: No. For me, this contains only pure ML concepts, most of which I do not understand, both in the novel method (the new autoencoder) and in the steps of the evaluation strategy.
Lele: It seems like another way to decouple the two NAS stages: search-space training and searching. In this respect, it is similar to Song Han's Once-for-All paper that we read. But the goal of Once-for-All differs from this paper's: Once-for-All uses NAS to search for networks that are "small" enough in size, whereas this paper aims to improve NAS in a general way, without considering the actual application scenarios of the network.
For potential new ideas, we can have two directions here:
If you could revise the fundamental principles of computer system design to improve security, what would you change?