ML

More

  • Reading Notes on Dr. Mi Zhang's Publications
  • References: Mi Zhang - Publications. Publications in 2020: MutualNet (ECCV’20: MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution); Distream (SenSys’20: Distream: Scaling Live Video Analytics with Workload-Adaptive Distributed Edge Intelligence); WiFi (SenSys’20: WiFi See It All: Generative Adversarial Network-augmented Versatile WiFi Imaging); SecWIR (MobiSys’20: SecWIR: Securing Smart Home IoT Communications via WiFi Routers with Embedded Intelligence); FlexDNN (SEC’20: FlexDNN: Input-Adaptive On-Device Deep Learning for Efficient Mobile Vision).

  • Intro ML
  • Two Ways to Categorize ML Algorithms. Reference: A Tour of Machine Learning Algorithms (https://machinelearningmastery.com/a-tour-of-machine-learning-algorithms/). Two ways: group algorithms by learning style, or by their similarity in form or function. By learning style: supervised learning: all training data comes with known labels, e.g. logistic regression, backpropagation neural networks; unsupervised learning: input data is not labeled, e.g. the Apriori algorithm, k-means; semi-supervised learning: input data is a mixture of labeled and unlabeled examples. (A minimal supervised/unsupervised/semi-supervised sketch follows this list.)

  • NAS
  • Neural Architecture Search (NAS) [49, 50, 32, 6, 42, 45] dominates efficient network design, from “NeurIPS’20: MCUNet: Tiny Deep Learning on IoT Devices” by Lin, Ji, Wei-Ming Chen, Yujun Lin, John Cohn, Chuang Gan, and Song Han. References: citations of NAS from the MCUNet paper, e.g. ICLR’17: Neural Architecture Search with Reinforcement Learning. (A schematic NAS search-loop sketch also follows this list.)

  • Lane
  • ICLR’19: An Empirical Study of Binary Neural Networks’ Optimization; ECCV’20: Journey Towards Tiny Perceptual Super-Resolution; MobiCom’20: SPINN: Synergistic Progressive Inference of Neural Networks over Device and Cloud; InterSpeech’20: Iterative Compression of End-to-End ASR Model using AutoML; DAC’20: Best of Both Worlds: AutoML Codesign of a CNN and its Hardware Accelerator. SPINN evaluation setup: server with 2x Intel Xeon Gold 6130, 128 GB memory, and a GTX 1080 Ti GPU; client: NVIDIA Jetson Xavier AGX, 16 GB memory, with a 512-core Volta GPU.

  • Song Han
  • TinyTL (NeurIPS’20: Tiny Transfer Learning: Towards Memory-Efficient On-Device Learning). Three benchmark datasets: Cars, Flowers, Aircraft, with ImageNet as the pre-training dataset. Neural network architectures: MobileNetV2 (lightweight), ResNet-50. Device: Raspberry Pi 1 with 256 MB of memory. Once-for-All (ICLR’20: Once-for-All: Train One Network and Specialize It for Efficient Deployment): ImageNet; Samsung S7 Edge, Note10, Google Pixel 1, Pixel 2, LG G8, NVIDIA 1080 Ti and V100 GPUs, Jetson TX2, Intel Xeon CPU, Xilinx ZU8EG and ZU3EG FPGAs.

  • Training Opt
  • 2018 Mobile Crowd: Anh, Tran The, Nguyen Cong Luong, Dusit Niyato, Dong In Kim, and Li-Chun Wang. “Efficient training management for mobile crowd-machine learning: A deep reinforcement learning approach.” IEEE Wireless Communications Letters 8, no. 5 (2019): 1345-1348. 2011 Realtime Train: Choi, Kwontaeg, Kar-Ann Toh, and Hyeran Byun. “Realtime training on mobile devices for face recognition applications.” Pattern Recognition 44, no.

  • Optimizing Inference
  • 2019 JointDNN: Eshratifar, Amir Erfan, Mohammad Saeed Abrishami, and Massoud Pedram. “JointDNN: An efficient training and inference engine for intelligent mobile cloud computing services.” IEEE Transactions on Mobile Computing (2019). 2019 EuroSys μLayer: Kim, Youngsok, Joonsung Kim, Dongju Chae, Daehyun Kim, and Jangwoo Kim. “μLayer: Low Latency On-Device Inference Using Cooperative Single-Layer Acceleration and Processor-Friendly Quantization.” In Proceedings of the Fourteenth EuroSys Conference 2019, pp.

  • AutoML
  • 2013 KDD Auto-WEKA: Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). “Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms.” In KDD ’13: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 847–855.

  • Deep Learning
  • Top Wondering: Besides deep learning, many other approaches exist for computers to discover informal knowledge on their own. How many do we know of, and how are they categorized? Has any genuinely new knowledge been discovered by deep learning so far? In the traditional style: have any new mathematical or physical theories been discovered yet? What about non-traditional styles? For image-processing-based AI such as object detection: is the image a good source for feature extraction?
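
To make the learning-style categories in the “Intro ML” note above concrete, here is a minimal sketch contrasting supervised, unsupervised, and semi-supervised learning. It assumes scikit-learn is available; the iris dataset, the specific estimators, and the label-hiding scheme are illustrative choices, not details taken from the cited tour.

```python
# Minimal sketch of the three learning styles listed under "Intro ML".
# Assumes scikit-learn; dataset and model choices are illustrative only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.semi_supervised import LabelPropagation

X, y = load_iris(return_X_y=True)

# Supervised learning: every training example comes with a known label y.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: labels are never shown to the algorithm;
# k-means only groups the inputs by similarity.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("unsupervised cluster assignments:", km.labels_[:10])

# Semi-supervised learning: a mixture of labeled and unlabeled examples
# (unlabeled points are conventionally marked with -1).
y_partial = np.copy(y)
y_partial[::2] = -1                      # hide half of the labels
semi = LabelPropagation().fit(X, y_partial)
print("semi-supervised accuracy on hidden labels:",
      semi.score(X[::2], y[::2]))
```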
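
For the “NAS” note above, the following is a schematic, hedged sketch of the basic NAS loop: sample a candidate architecture from a search space, score it, and keep the best. The search space and the evaluate() score are made-up placeholders; real systems such as the RL controller in “Neural Architecture Search with Reinforcement Learning” replace the random sampler and the toy score with a learned controller and actual training/validation accuracy.

```python
# Schematic sketch of the NAS idea: search over candidate architectures,
# score each one, keep the best. All specifics below are placeholders.
import random

# A hypothetical, tiny search space: depth, width, and kernel size.
SEARCH_SPACE = {
    "depth": [2, 3, 4],
    "width": [16, 32, 64],
    "kernel": [3, 5, 7],
}

def sample_architecture():
    """Randomly sample one architecture configuration from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Placeholder for 'train the candidate and measure validation accuracy'.
    A made-up score so the loop runs end to end."""
    return arch["depth"] * arch["width"] / arch["kernel"]

best_arch, best_score = None, float("-inf")
for _ in range(20):                      # search budget
    arch = sample_architecture()
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print("best architecture found:", best_arch, "score:", best_score)
```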

Created Aug 31, 2021 // Last Updated Aug 31, 2021
