What are MACs (multiply-accumulate operations)?
Resolution?
Kernel?
Depth?
Width? (number of channels)
Problem formalization
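The terms above all appear in the standard cost model of a convolutional layer: MACs grow with the output resolution, the kernel size, and the input/output channel counts. A minimal sketch of counting them in plain Python (function and parameter names are my own, not from any library):

```python
def conv2d_macs(h_in, w_in, c_in, c_out, kernel, stride=1, padding=0):
    """MACs (multiply-accumulate operations) of a standard 2D convolution.

    Each output element needs kernel * kernel * c_in multiply-accumulates,
    and there are h_out * w_out * c_out output elements.
    """
    h_out = (h_in + 2 * padding - kernel) // stride + 1
    w_out = (w_in + 2 * padding - kernel) // stride + 1
    return h_out * w_out * c_out * kernel * kernel * c_in

# Example: 3x3 conv, 112x112 input, 32 -> 64 channels, stride 1, 'same' padding
macs = conv2d_macs(112, 112, 32, 64, kernel=3, padding=1)
print(macs)  # 112 * 112 * 64 * 3 * 3 * 32 = 231,211,008 MACs
```

Note how each flashcard term maps to one factor: resolution gives `h_out * w_out`, kernel gives `kernel * kernel`, width/channels give `c_in` and `c_out`, and depth multiplies this per-layer cost across layers.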
References:
The future will be populated with many IoT devices that are AI-capable. AI will surround our lives at much lower cost, lower latency, and higher accuracy. There will be more powerful AI applications running on tiny edge devices, which will require extremely compact models and efficient chips. At the same time, privacy will become increasingly important, and on-device AI will be popular thanks to its privacy and latency advantages. Model compression and efficient architecture design techniques will enable on-device AI and make it more capable.
ICLR’20: Once-for-all: Train one network and specialize it for efficient deployment.
Progressive Shrinking
Q&A: `ofa_net` is always called with `pretrained=True`, which means it works without training. But how do we train the super network?

References:

```python
# file: ofa/model_zoo.py
def ofa_net(net_id, pretrained=True):
    if net_id == 'ofa_proxyless_d234_e346_k357_w1.3':
        net = OFAProxylessNASNets(
            dropout_rate=0,
            width_mult=1.3,
            ks_list=[3, 5, 7],
            expand_ratio_list=[3, 4, 6],
            depth_list=[2, 3, 4],
        )
    elif net_id == 'ofa_mbv3_d234_e346_k357_w1.0':
        net = OFAMobileNetV3(
            dropout_rate=0,
            width_mult=1.0,
            ks_list=[3, 5, 7],
            expand_ratio_list=[3, 4, 6],
            depth_list=[2, 3, 4],
        )
    elif net_id == 'ofa_mbv3_d234_e346_k357_w1.
```

(The snippet is truncated in the source.)
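Progressive shrinking answers the training question: the super network is first trained at full capacity, then smaller sub-networks are progressively sampled and fine-tuned, unlocking one elastic dimension at a time (kernel size, then depth, then width). A toy sketch of the sub-network sampling step, matching the `ks_list`/`depth_list`/`expand_ratio_list` above (this is my own illustration, not the actual OFA training code):

```python
import random

# Elastic dimensions matching ofa_mbv3_d234_e346_k357 above
KS_LIST = [3, 5, 7]       # elastic kernel size
DEPTH_LIST = [2, 3, 4]    # elastic depth (blocks per stage)
EXPAND_LIST = [3, 4, 6]   # elastic width (expand ratio)

def sample_subnet_config(num_stages=5, phase=3):
    """Sample one sub-network configuration for a progressive-shrinking step.

    `phase` controls which dimensions are elastic so far: progressive
    shrinking unlocks kernel size first (phase 1), then depth (phase 2),
    then width (phase 3). Locked dimensions stay at their maximum value.
    """
    ks = [random.choice(KS_LIST) if phase >= 1 else max(KS_LIST)
          for _ in range(num_stages)]
    d = [random.choice(DEPTH_LIST) if phase >= 2 else max(DEPTH_LIST)
         for _ in range(num_stages)]
    e = [random.choice(EXPAND_LIST) if phase >= 3 else max(EXPAND_LIST)
         for _ in range(num_stages)]
    return {'ks': ks, 'd': d, 'e': e}

# One training step: sample a subnet, then run forward/backward
# through only that subnet's weights (weights are shared with the supernet).
cfg = sample_subnet_config(phase=2)  # kernel + depth elastic, width still full
```

Because all sub-networks share weights with the super network, each sampled configuration is trained in place; no separate model is instantiated per subnet.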
References: Hands-on Tutorial of Once-for-All Network, see tutorial/ofa.ipynb.

How to Get Your Specialized Neural Networks on ImageNet in Minutes With OFA Networks

In this notebook, we will demonstrate:
- how to use pretrained specialized OFA sub-networks for efficient inference on diverse hardware platforms
- how to get new specialized neural networks on ImageNet with the OFA network within minutes

Once-for-All (OFA) is an efficient AutoML technique that decouples training from search.
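Because training and search are decoupled, "getting a specialized network in minutes" reduces to searching over sub-network configurations against cheap accuracy and latency predictors, with no retraining. A toy random-search sketch (the predictors here are stand-ins I made up for illustration, not OFA's real predictors):

```python
import random

KS_LIST, DEPTH_LIST, EXPAND_LIST = [3, 5, 7], [2, 3, 4], [3, 4, 6]

def random_config(num_stages=5):
    """Sample a random sub-network configuration from the elastic dimensions."""
    return {'ks': [random.choice(KS_LIST) for _ in range(num_stages)],
            'd': [random.choice(DEPTH_LIST) for _ in range(num_stages)],
            'e': [random.choice(EXPAND_LIST) for _ in range(num_stages)]}

def mock_latency(cfg):
    # Stand-in cost model: bigger kernels/depths/widths -> higher latency.
    return sum(k * d * e for k, d, e in zip(cfg['ks'], cfg['d'], cfg['e']))

def mock_accuracy(cfg):
    # Stand-in predictor: predicted accuracy grows with capacity (toy model).
    return 60 + 0.1 * sum(cfg['ks']) + 0.5 * sum(cfg['d']) + 0.2 * sum(cfg['e'])

def search(latency_budget, n_trials=1000):
    """Random search: best predicted accuracy under a latency budget."""
    best_cfg, best_acc = None, float('-inf')
    for _ in range(n_trials):
        cfg = random_config()
        if mock_latency(cfg) <= latency_budget:
            acc = mock_accuracy(cfg)
            if acc > best_acc:
                best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc

random.seed(0)
cfg, acc = search(latency_budget=300)
```

In the real workflow the chosen configuration would then be extracted from the super network (weights inherited, optionally briefly fine-tuned); the search itself never touches ImageNet training.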
If you could revise the fundamental principles of computer system design to improve security... what would you change?