Reading Notes on Dr. Mi Zhang's Publications

References:

Publications in 2020

MutualNet

ECCV’20: MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution

Distream

SenSys’20: Distream: Scaling Live Video Analytics with Workload-Adaptive Distributed Edge Intelligence

WiFi

SenSys’20: WiFi See It All: Generative Adversarial Network-augmented Versatile WiFi Imaging.

SecWIR

MobiSys’20: SecWIR: Securing Smart Home IoT Communications via WiFi Routers with Embedded Intelligence.

FlexDNN

SEC’20: FlexDNN: Input-Adaptive On-Device Deep Learning for Efficient Mobile Vision.

Adaptive

IEEE Pervasive’20: Adaptive On-Device Deep Learning: A New Frontier of Mobile Vision.

DL in IoT

Book chapter in Fog Computing’20: Deep Learning in the Era of Edge Computing: Challenges and Opportunities.

More

  • Idea of improving FL using NAS
  • References: NAS for Better Federated Learning Models in Different FL Topologies? (Another direction might also be enticing: using FL to improve NAS.) In Mi Zhang’s FL benchmark paper, ArXiv’20: FedML: A Research Library and Benchmark for Federated Learning, I found several important dimensions an FL system can have, such as computing paradigms, topology, exchanged information, and training procedures. It also mentions FedNAS as one category of FL algorithms (in Section 4). A toy sketch of this idea is given below.
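
As a toy illustration of the “NAS for FL” direction (not the FedNAS algorithm from the paper; the candidate set, the scoring function, and all names below are hypothetical placeholders), here is a minimal sketch in which the server proposes candidate architectures, each simulated client scores them on its own data, and the server keeps the best average:

```python
import random

# Hypothetical candidate architectures (e.g., depth/width choices).
CANDIDATES = [
    {"name": "small",  "depth": 4,  "width": 32},
    {"name": "medium", "depth": 8,  "width": 64},
    {"name": "large",  "depth": 16, "width": 128},
]

def client_evaluate(arch, client_id):
    """Stand-in for local training + validation on one client's private data.
    A real system would train `arch` locally and report validation accuracy;
    here we just return a noisy synthetic score."""
    random.seed(hash((client_id, arch["name"])))
    base = 0.60 + 0.02 * arch["depth"] / 4
    return min(0.95, base + random.uniform(-0.05, 0.05))

def server_search(num_clients=10):
    """Server-side loop: broadcast each candidate, aggregate client scores,
    and keep the architecture with the best average score."""
    best_arch, best_score = None, -1.0
    for arch in CANDIDATES:
        scores = [client_evaluate(arch, c) for c in range(num_clients)]
        avg = sum(scores) / len(scores)
        if avg > best_score:
            best_arch, best_score = arch, avg
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = server_search()
    print(f"selected architecture: {arch['name']} (avg client score {score:.3f})")
```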

  • 2020 FedML
  • ArXiv’20: FedML: A Research Library and Benchmark for Federated Learning. Overview. Problem: existing federated learning libraries cannot adequately support diverse algorithm development. Lack of diverse FL computing paradigms: TensorFlow-Federated, PySyft, and LEAF only support FL algorithms with a centralized topology, while FATE and PaddleFL do not support new algorithms. Lack of diverse FL configurations: FL is diverse in network topology, exchanged information, and training procedures, and this diversity is not supported in existing FL libraries. (A minimal sketch of the centralized, FedAvg-style setup that those libraries target is given below.)
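
To make “centralized topology-based FL” concrete, here is a minimal FedAvg-style sketch (my own illustration, not FedML’s API; the linear model and the synthetic client data are made up): the server broadcasts global weights, each client runs a few local gradient steps, and the server averages the results weighted by local dataset size.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on a
    linear regression model, starting from the global weights w."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(clients, rounds=20, dim=3):
    """Centralized topology: server broadcasts w_global, clients train
    locally, server averages updates weighted by number of samples."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_update(w_global, X, y))
            sizes.append(len(y))
        w_global = np.average(updates, axis=0, weights=np.array(sizes, float))
    return w_global

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    # Three synthetic clients with differently sized local datasets.
    clients = []
    for n in (50, 100, 200):
        X = rng.normal(size=(n, 3))
        clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))
    print("recovered weights:", fedavg(clients))
```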

  • 2020 SCYLLA
  • INFOCOM’20: SCYLLA: QoE-aware Continuous Mobile Vision with FPGA-based Dynamic Deep Neural Network Reconfiguration. Overview. In one sentence: use an FPGA for fast switching between different neural network models. Problem: it is hard to run multiple neural network models efficiently and concurrently on the same device. Heterogeneous multi-tenancy on GPUs (SIMT) requires several seconds to switch from one neural network model to another; ASIC AI chips are not designed for concurrency and heterogeneity. (The sketch below gives a rough way to measure that switching cost.)
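
To see where the “several seconds to switch models” cost comes from, here is a small measurement sketch (my own illustration, not SCYLLA’s code; it assumes PyTorch and torchvision >= 0.13 are installed and treats instantiation + weight transfer + one warm-up inference as a rough proxy for model-switching cost):

```python
import time
import torch
import torchvision.models as models

def switch_to(model_fn, device):
    """Simulate switching to a different model: build the network,
    move its weights to the device, and run one warm-up inference."""
    start = time.perf_counter()
    model = model_fn(weights=None).to(device).eval()
    with torch.no_grad():
        model(torch.randn(1, 3, 224, 224, device=device))
    if device.type == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for name, fn in [("resnet18", models.resnet18),
                     ("mobilenet_v2", models.mobilenet_v2)]:
        print(f"switch to {name}: {switch_to(fn, device):.2f} s")
```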

  • 2020 NeurIPS
  • NeurIPS’20: Does Unsupervised Architecture Representation Learning Help Neural Architecture Search? Overview: this paper proposes a method to avoid bias in NAS. Problem: jointly learning architecture representations (training) and optimizing the search (search) can introduce bias. Solution: decouple training and search, realized by unsupervised learning. First, the training step uses unsupervised learning to learn a set of architecture representations in a latent space; unsupervised training captures the structural information of architectures, so the architectures cluster better and are distributed more smoothly in the latent space, which facilitates the downstream architecture search (the next step). (A toy sketch of this pretrain-then-search decoupling follows.)
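
A toy illustration of the decoupling (not the paper’s actual model, which uses a graph-based encoder on real NAS benchmarks; the 32-dimensional binary encodings and the fake_accuracy oracle below are made-up placeholders): first pre-train an autoencoder on architecture encodings with no accuracy labels, then search in its latent space.

```python
import torch
import torch.nn as nn

# Toy architecture encodings: random binary vectors standing in for
# flattened adjacency/operation encodings of candidate architectures.
torch.manual_seed(0)
archs = (torch.rand(500, 32) > 0.5).float()

class AE(nn.Module):
    """Tiny autoencoder; the encoder's latent space is where the
    downstream search happens."""
    def __init__(self, dim=32, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

# Step 1 (training): unsupervised pre-training, reconstruction loss only.
model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    recon, _ = model(archs)
    loss = nn.functional.binary_cross_entropy_with_logits(recon, archs)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step 2 (search): explore the learned latent space with a fake oracle.
def fake_accuracy(a):
    return a[:16].mean().item()  # placeholder for real train/validate

with torch.no_grad():
    _, z = model(archs)

# Evaluate a few architectures, then propose unevaluated ones that are
# closest in latent space to the best one seen so far.
evaluated = list(range(10))
best = max(evaluated, key=lambda i: fake_accuracy(archs[i]))
dists = torch.cdist(z[best:best + 1], z).squeeze(0)
candidates = [i for i in dists.argsort().tolist() if i not in evaluated][:5]
print("next architectures to evaluate:", candidates)
```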

Created Nov 17, 2020 // Last Updated Aug 31, 2021
