NeurIPS’20: Tiny Transfer Learning: Towards Memory-Efficient On-Device Learning
Three benchmark datasets: Cars, Flowers, Aircraft
Devices: Raspberry Pi 1 (256 MB of memory).
ICLR’20: Once-for-All: Train One Network and Specialize It for Efficient Deployment.
NeurIPS’19: Deep Leakage from Gradients.
NeurIPS’20: Differentiable Augmentation for Data-Efficient GAN Training
Q&A

References:
ICLR’19: ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware. By Han Cai, Ligeng Zhu, and Song Han.
NeurIPS’20: MCUNet: Tiny Deep Learning on IoT Devices. By Ji Lin, Wei-Ming Chen, Yujun Lin, John Cohn, Chuang Gan, and Song Han.
Q&A
What is the search space? What is the mobile search space? (See the counting sketch after this block.)
Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V. Le. MnasNet: Platform-Aware Neural Architecture Search for Mobile. In CVPR, 2019.
What is a model? What are the system part and the model part in system-model co-design?
What is one-shot architecture search?
Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and Simplifying One-Shot Architecture Search. In ICML, 2018.
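As a concrete reading of the search-space question, here is a minimal counting sketch. The choice sets (kernel sizes {3, 5, 7}, expansion ratios {3, 4, 6}, per-stage depths {2, 3, 4}, five stages) follow the Once-for-All paper's mobile search space, but treat the exact numbers as assumptions here; the helper name `count_sub_networks` is hypothetical.

```python
from itertools import product  # not needed for counting, handy for enumeration

# Mobile search space in the spirit of Once-for-All / MnasNet:
# each stage picks a depth, and every block in a stage independently
# picks a kernel size and a width-expansion ratio.
KERNEL_SIZES = [3, 5, 7]
EXPAND_RATIOS = [3, 4, 6]
DEPTHS = [2, 3, 4]
NUM_STAGES = 5

def count_sub_networks():
    # Options for one stage: sum over possible depths d of
    # (kernel choices x expand choices) ** d, one factor per block.
    per_stage = sum(
        (len(KERNEL_SIZES) * len(EXPAND_RATIOS)) ** d for d in DEPTHS
    )
    return per_stage ** NUM_STAGES

print(f"sub-networks in the space: {count_sub_networks():.3e}")
```

With these choice sets the count comes to roughly 2 x 10^19 sub-networks, the order of magnitude the Once-for-All paper reports, which is why the sub-networks must share weights rather than be trained individually.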
Q&A
What are MACs? Resolution? Kernel size? Depth? Width? Channels? The L1 norm of a channel's weights?

Problem formalization: $\min_{W_o} \sum_{a_i} L_{\mathrm{val}}\big(C(W_o, a_i)\big)$, where $W_o$ denotes the weights of the once-for-all network, $a_i$ is a sub-network architecture, and $C(W_o, a_i)$ selects the part of $W_o$ that forms the sub-network with architecture $a_i$. (A training-loop sketch appears at the end of this section.)

References: Once-for-All: Qualcomm News.

The future will be populated with many AI-capable IoT devices. AI will surround our lives at much lower cost, lower latency, and higher accuracy. More powerful AI applications will run on tiny edge devices, which requires extremely compact models and efficient chips.
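Returning to the problem formalization above, here is a minimal sketch of the objective, assuming a toy slimmable layer where $C(W_o, a_i)$ simply slices the leading rows of one shared weight matrix. All names (`SlimmableLinear`, the width set, the regression target) are hypothetical, and the actual paper trains $W_o$ with progressive shrinking rather than this naive summed loss.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableLinear(nn.Module):
    """Toy once-for-all layer: one shared weight matrix W_o whose
    leading rows are selected to form smaller sub-networks."""
    def __init__(self, max_in, max_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, max_in) * 0.01)

    def forward(self, x, width):
        # C(W_o, a_i): select the sub-weights for architecture a_i,
        # here parameterized only by an output width.
        return F.linear(x, self.weight[:width])

layer = SlimmableLinear(max_in=8, max_out=8)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
widths = [2, 4, 8]  # the (tiny) architecture space {a_i}

for step in range(100):
    x = torch.randn(32, 8)
    target = x[:, :1]  # toy regression target
    opt.zero_grad()
    # min_{W_o} sum_{a_i} L_val(C(W_o, a_i)): sample architectures
    # and sum their losses, all computed from the shared weights.
    loss = torch.zeros(())
    for w in random.sample(widths, 2):
        pred = layer(x, w).mean(dim=1, keepdim=True)
        loss = loss + F.mse_loss(pred, target)
    loss.backward()
    opt.step()
```

Each update improves several sub-networks at once through the shared $W_o$, which is what lets a single training run be specialized for many deployment scenarios afterwards.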